Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-12 19:15
Elapsed: 53m59s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 131 lines ...
I1012 19:15:49.003752    4754 up.go:43] Cleaning up any leaked resources from previous cluster
I1012 19:15:49.003778    4754 dumplogs.go:40] /logs/artifacts/9f66e190-2b90-11ec-aee5-323daf952f06/kops toolbox dump --name e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user core
I1012 19:15:49.018021    4773 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1012 19:15:49.018096    4773 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io" not found
W1012 19:15:49.526456    4754 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1012 19:15:49.526513    4754 down.go:48] /logs/artifacts/9f66e190-2b90-11ec-aee5-323daf952f06/kops delete cluster --name e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --yes
I1012 19:15:49.544020    4783 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1012 19:15:49.544133    4783 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io" not found
I1012 19:15:50.029998    4754 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/12 19:15:50 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1012 19:15:50.039722    4754 http.go:37] curl https://ip.jsb.workers.dev
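The runner above determines its own external IP by first querying the GCE metadata server (which returns 404 here, since no external access config exists) and then falling back to a public IP-echo service. A minimal sketch of that fallback chain, with the HTTP fetcher injected so the logic can be exercised without network access (the endpoint URLs are taken from the log; everything else is illustrative):

```python
def external_ip(fetch):
    """Try a list of IP-echo endpoints in order, returning the first success.

    `fetch` is a callable (url) -> str that raises on any HTTP error, so the
    chain can be tested with stubs instead of live requests.
    """
    endpoints = [
        # GCE metadata server; 404s when the instance has no external
        # access config (as in the log above).
        "http://metadata.google.internal/computeMetadata/v1/instance/"
        "network-interfaces/0/access-configs/0/external-ip",
        # Public fallback queried next in the log.
        "https://ip.jsb.workers.dev",
    ]
    last_err = None
    for url in endpoints:
        try:
            return fetch(url).strip()
        except Exception as err:  # sketch: any failure falls through to the next endpoint
            last_err = err
    raise RuntimeError(f"all external-IP lookups failed: {last_err}")
```

With a stub that 404s on the metadata URL, the function returns the fallback service's answer, mirroring the two log lines above.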
I1012 19:15:50.130564    4754 up.go:144] /logs/artifacts/9f66e190-2b90-11ec-aee5-323daf952f06/kops create cluster --name e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.5 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=075585003325/Flatcar-stable-2905.2.5-hvm --channel=alpha --networking=kopeio --container-runtime=containerd --admin-access 34.121.253.144/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-central-1a --master-size c5.large
I1012 19:15:50.145277    4794 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1012 19:15:50.145358    4794 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1012 19:15:50.191943    4794 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I1012 19:15:50.695722    4794 new_cluster.go:1011]  Cloud Provider ID = aws
... skipping 42 lines ...

I1012 19:16:17.277512    4754 up.go:181] /logs/artifacts/9f66e190-2b90-11ec-aee5-323daf952f06/kops validate cluster --name e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --count 10 --wait 20m0s
I1012 19:16:17.295170    4813 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1012 19:16:17.295294    4813 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io

W1012 19:16:18.653507    4813 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W1012 19:16:28.705221    4813 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
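The `kops validate cluster --count 10 --wait 20m0s` invocation above keeps re-validating until the cluster passes repeatedly or the deadline expires, which is why the same "Validation Failed" block recurs roughly every ten seconds. A sketch of that poll-with-deadline loop (a hypothetical helper, assuming a failure resets the consecutive-success count; clock and sleep are injectable for testing):

```python
import time


def wait_until_valid(validate, wait_seconds, poll_seconds, required_successes,
                     clock=time.monotonic, sleep=time.sleep):
    """Poll `validate` until it succeeds `required_successes` times in a row
    or the deadline expires, returning True on success and False on timeout.
    """
    deadline = clock() + wait_seconds
    streak = 0
    while clock() < deadline:
        if validate():
            streak += 1
            if streak >= required_successes:
                return True
        else:
            streak = 0  # assumed: a failed validation resets the streak
        sleep(poll_seconds)
    return False
```

Under this sketch the run above would spend its first several minutes in the `streak = 0` branch while the dns/apiserver check keeps failing.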
... skipping 288 lines (validation retried every ~10s with the same dns/apiserver "Validation Failed" output) ...
W1012 19:19:39.837419    4813 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 8 lines ...
Machine	i-08d05e3b4af64ab5b				machine "i-08d05e3b4af64ab5b" has not yet joined cluster
Machine	i-0c44cc92a05d08911				machine "i-0c44cc92a05d08911" has not yet joined cluster
Pod	kube-system/coredns-5dc785954d-zjc2v		system-cluster-critical pod "coredns-5dc785954d-zjc2v" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-8ntd6	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-8ntd6" is pending
Pod	kube-system/kopeio-networking-agent-kxt29	system-node-critical pod "kopeio-networking-agent-kxt29" is pending

Validation Failed
W1012 19:19:52.892780    4813 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 6 lines ...
KIND	NAME					MESSAGE
Machine	i-02eb4501265093bcc			machine "i-02eb4501265093bcc" has not yet joined cluster
Machine	i-08d05e3b4af64ab5b			machine "i-08d05e3b4af64ab5b" has not yet joined cluster
Machine	i-0c44cc92a05d08911			machine "i-0c44cc92a05d08911" has not yet joined cluster
Pod	kube-system/coredns-5dc785954d-qjhdq	system-cluster-critical pod "coredns-5dc785954d-qjhdq" is pending

Validation Failed
W1012 19:20:05.005951    4813 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-61-115.eu-central-1.compute.internal	node "ip-172-20-61-115.eu-central-1.compute.internal" of role "node" is not ready
Pod	kube-system/kopeio-networking-agent-wx9z2	system-node-critical pod "kopeio-networking-agent-wx9z2" is pending

Validation Failed
W1012 19:20:16.935107    4813 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-32-55.eu-central-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-32-55.eu-central-1.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-57-193.eu-central-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-57-193.eu-central-1.compute.internal" is pending

Validation Failed
W1012 19:20:28.888374    4813 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 1167 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:22:57.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:22:58.169: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 132 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 42 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:22:59.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8356" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:00.235: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:23:01.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apf-572" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","total":-1,"completed":1,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 89 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:23:03.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-6385" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:03.739: INFO: Only supported for providers [azure] (not aws)
... skipping 89 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
... skipping 7 lines ...
W1012 19:22:59.828883    5372 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 12 19:22:59.828: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 12 19:23:00.237: INFO: Waiting up to 5m0s for pod "pod-eef62aa8-d16e-4475-834a-011a6d0c62ed" in namespace "emptydir-5904" to be "Succeeded or Failed"
Oct 12 19:23:00.364: INFO: Pod "pod-eef62aa8-d16e-4475-834a-011a6d0c62ed": Phase="Pending", Reason="", readiness=false. Elapsed: 127.693447ms
Oct 12 19:23:02.475: INFO: Pod "pod-eef62aa8-d16e-4475-834a-011a6d0c62ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238606566s
Oct 12 19:23:04.624: INFO: Pod "pod-eef62aa8-d16e-4475-834a-011a6d0c62ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.387387126s
Oct 12 19:23:06.734: INFO: Pod "pod-eef62aa8-d16e-4475-834a-011a6d0c62ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.497294548s
Oct 12 19:23:08.845: INFO: Pod "pod-eef62aa8-d16e-4475-834a-011a6d0c62ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.608538633s
STEP: Saw pod success
Oct 12 19:23:08.845: INFO: Pod "pod-eef62aa8-d16e-4475-834a-011a6d0c62ed" satisfied condition "Succeeded or Failed"
Oct 12 19:23:08.957: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod pod-eef62aa8-d16e-4475-834a-011a6d0c62ed container test-container: <nil>
STEP: delete the pod
Oct 12 19:23:10.239: INFO: Waiting for pod pod-eef62aa8-d16e-4475-834a-011a6d0c62ed to disappear
Oct 12 19:23:10.349: INFO: Pod pod-eef62aa8-d16e-4475-834a-011a6d0c62ed no longer exists
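The e2e framework's "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" sequence above is a phase poll with a timeout. A minimal sketch of that wait (a hypothetical stand-in: `get_phase` replaces the framework's client-go pod lookup, and clock/sleep are injectable so the loop is testable):

```python
import time


def wait_for_pod_terminal(get_phase, timeout, poll=2.0,
                          clock=time.monotonic, sleep=time.sleep):
    """Poll a pod's phase until it reaches a terminal state
    ("Succeeded" or "Failed") or the timeout elapses.
    """
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(poll)
```

Fed the phase sequence logged above (Pending several times, then Succeeded), the loop returns "Succeeded", after which the test deletes the pod and waits for it to disappear.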
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.850 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 128 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 12 19:22:58.957: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed3e1053-8274-4f6c-a853-98e41f5d1c4f" in namespace "projected-1824" to be "Succeeded or Failed"
Oct 12 19:22:59.068: INFO: Pod "downwardapi-volume-ed3e1053-8274-4f6c-a853-98e41f5d1c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 110.556985ms
Oct 12 19:23:01.177: INFO: Pod "downwardapi-volume-ed3e1053-8274-4f6c-a853-98e41f5d1c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220309753s
Oct 12 19:23:03.286: INFO: Pod "downwardapi-volume-ed3e1053-8274-4f6c-a853-98e41f5d1c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329426738s
Oct 12 19:23:05.398: INFO: Pod "downwardapi-volume-ed3e1053-8274-4f6c-a853-98e41f5d1c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440620516s
Oct 12 19:23:07.508: INFO: Pod "downwardapi-volume-ed3e1053-8274-4f6c-a853-98e41f5d1c4f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550511484s
Oct 12 19:23:09.618: INFO: Pod "downwardapi-volume-ed3e1053-8274-4f6c-a853-98e41f5d1c4f": Phase="Running", Reason="", readiness=true. Elapsed: 10.66126638s
Oct 12 19:23:11.728: INFO: Pod "downwardapi-volume-ed3e1053-8274-4f6c-a853-98e41f5d1c4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.770652097s
STEP: Saw pod success
Oct 12 19:23:11.728: INFO: Pod "downwardapi-volume-ed3e1053-8274-4f6c-a853-98e41f5d1c4f" satisfied condition "Succeeded or Failed"
Oct 12 19:23:11.836: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod downwardapi-volume-ed3e1053-8274-4f6c-a853-98e41f5d1c4f container client-container: <nil>
STEP: delete the pod
Oct 12 19:23:12.482: INFO: Waiting for pod downwardapi-volume-ed3e1053-8274-4f6c-a853-98e41f5d1c4f to disappear
Oct 12 19:23:12.590: INFO: Pod downwardapi-volume-ed3e1053-8274-4f6c-a853-98e41f5d1c4f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:15.228 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:12.929: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Oct 12 19:22:58.318: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 12 19:22:58.546: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-t89z
STEP: Creating a pod to test subpath
Oct 12 19:22:58.660: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-t89z" in namespace "provisioning-7475" to be "Succeeded or Failed"
Oct 12 19:22:58.770: INFO: Pod "pod-subpath-test-inlinevolume-t89z": Phase="Pending", Reason="", readiness=false. Elapsed: 110.007557ms
Oct 12 19:23:00.879: INFO: Pod "pod-subpath-test-inlinevolume-t89z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218891461s
Oct 12 19:23:02.989: INFO: Pod "pod-subpath-test-inlinevolume-t89z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328890361s
Oct 12 19:23:05.098: INFO: Pod "pod-subpath-test-inlinevolume-t89z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438557106s
Oct 12 19:23:07.208: INFO: Pod "pod-subpath-test-inlinevolume-t89z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548587561s
Oct 12 19:23:09.319: INFO: Pod "pod-subpath-test-inlinevolume-t89z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.658804013s
Oct 12 19:23:11.428: INFO: Pod "pod-subpath-test-inlinevolume-t89z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.767983654s
STEP: Saw pod success
Oct 12 19:23:11.428: INFO: Pod "pod-subpath-test-inlinevolume-t89z" satisfied condition "Succeeded or Failed"
Oct 12 19:23:11.537: INFO: Trying to get logs from node ip-172-20-32-55.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-t89z container test-container-volume-inlinevolume-t89z: <nil>
STEP: delete the pod
Oct 12 19:23:12.491: INFO: Waiting for pod pod-subpath-test-inlinevolume-t89z to disappear
Oct 12 19:23:12.599: INFO: Pod pod-subpath-test-inlinevolume-t89z no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-t89z
Oct 12 19:23:12.599: INFO: Deleting pod "pod-subpath-test-inlinevolume-t89z" in namespace "provisioning-7475"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":6,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:13.070: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 93 lines ...
Oct 12 19:22:58.324: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 12 19:22:58.656: INFO: Waiting up to 5m0s for pod "busybox-user-65534-44cf8311-661b-4a8a-9cad-ae1909182cb2" in namespace "security-context-test-5675" to be "Succeeded or Failed"
Oct 12 19:22:58.768: INFO: Pod "busybox-user-65534-44cf8311-661b-4a8a-9cad-ae1909182cb2": Phase="Pending", Reason="", readiness=false. Elapsed: 111.70851ms
Oct 12 19:23:00.878: INFO: Pod "busybox-user-65534-44cf8311-661b-4a8a-9cad-ae1909182cb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222254696s
Oct 12 19:23:02.989: INFO: Pod "busybox-user-65534-44cf8311-661b-4a8a-9cad-ae1909182cb2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333255023s
Oct 12 19:23:05.100: INFO: Pod "busybox-user-65534-44cf8311-661b-4a8a-9cad-ae1909182cb2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443747939s
Oct 12 19:23:07.210: INFO: Pod "busybox-user-65534-44cf8311-661b-4a8a-9cad-ae1909182cb2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.554003666s
Oct 12 19:23:09.320: INFO: Pod "busybox-user-65534-44cf8311-661b-4a8a-9cad-ae1909182cb2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.663768484s
Oct 12 19:23:11.430: INFO: Pod "busybox-user-65534-44cf8311-661b-4a8a-9cad-ae1909182cb2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.773919672s
Oct 12 19:23:13.544: INFO: Pod "busybox-user-65534-44cf8311-661b-4a8a-9cad-ae1909182cb2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.888311796s
Oct 12 19:23:15.656: INFO: Pod "busybox-user-65534-44cf8311-661b-4a8a-9cad-ae1909182cb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.999376788s
Oct 12 19:23:15.656: INFO: Pod "busybox-user-65534-44cf8311-661b-4a8a-9cad-ae1909182cb2" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:23:15.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5675" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:16.052: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 129 lines ...
• [SLOW TEST:18.535 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:133
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:16.207: INFO: Only supported for providers [openstack] (not aws)
... skipping 56 lines ...
• [SLOW TEST:19.812 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:23:13.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-88141694-0587-4e02-922a-4f000bf1551a
STEP: Creating a pod to test consume secrets
Oct 12 19:23:13.916: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f4ecfafe-b9e9-42bc-aeb1-d5e792d82984" in namespace "projected-6592" to be "Succeeded or Failed"
Oct 12 19:23:14.024: INFO: Pod "pod-projected-secrets-f4ecfafe-b9e9-42bc-aeb1-d5e792d82984": Phase="Pending", Reason="", readiness=false. Elapsed: 108.336172ms
Oct 12 19:23:16.133: INFO: Pod "pod-projected-secrets-f4ecfafe-b9e9-42bc-aeb1-d5e792d82984": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216931588s
Oct 12 19:23:18.243: INFO: Pod "pod-projected-secrets-f4ecfafe-b9e9-42bc-aeb1-d5e792d82984": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.326896646s
STEP: Saw pod success
Oct 12 19:23:18.243: INFO: Pod "pod-projected-secrets-f4ecfafe-b9e9-42bc-aeb1-d5e792d82984" satisfied condition "Succeeded or Failed"
Oct 12 19:23:18.352: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod pod-projected-secrets-f4ecfafe-b9e9-42bc-aeb1-d5e792d82984 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 12 19:23:18.586: INFO: Waiting for pod pod-projected-secrets-f4ecfafe-b9e9-42bc-aeb1-d5e792d82984 to disappear
Oct 12 19:23:18.695: INFO: Pod pod-projected-secrets-f4ecfafe-b9e9-42bc-aeb1-d5e792d82984 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.763 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Oct 12 19:23:11.358: INFO: Waiting up to 5m0s for pod "metadata-volume-dc3e9219-b194-4cde-9f7a-82063d905f03" in namespace "downward-api-8319" to be "Succeeded or Failed"
Oct 12 19:23:11.468: INFO: Pod "metadata-volume-dc3e9219-b194-4cde-9f7a-82063d905f03": Phase="Pending", Reason="", readiness=false. Elapsed: 109.75741ms
Oct 12 19:23:13.578: INFO: Pod "metadata-volume-dc3e9219-b194-4cde-9f7a-82063d905f03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220231711s
Oct 12 19:23:15.690: INFO: Pod "metadata-volume-dc3e9219-b194-4cde-9f7a-82063d905f03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331678835s
Oct 12 19:23:17.801: INFO: Pod "metadata-volume-dc3e9219-b194-4cde-9f7a-82063d905f03": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442908552s
Oct 12 19:23:19.911: INFO: Pod "metadata-volume-dc3e9219-b194-4cde-9f7a-82063d905f03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.552913799s
STEP: Saw pod success
Oct 12 19:23:19.911: INFO: Pod "metadata-volume-dc3e9219-b194-4cde-9f7a-82063d905f03" satisfied condition "Succeeded or Failed"
Oct 12 19:23:20.021: INFO: Trying to get logs from node ip-172-20-32-55.eu-central-1.compute.internal pod metadata-volume-dc3e9219-b194-4cde-9f7a-82063d905f03 container client-container: <nil>
STEP: delete the pod
Oct 12 19:23:20.259: INFO: Waiting for pod metadata-volume-dc3e9219-b194-4cde-9f7a-82063d905f03 to disappear
Oct 12 19:23:20.378: INFO: Pod metadata-volume-dc3e9219-b194-4cde-9f7a-82063d905f03 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.901 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":22,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:20.633: INFO: Only supported for providers [openstack] (not aws)
... skipping 133 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:24.046: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 59 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec using resource/name
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:428
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":1,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 12 19:23:00.351: INFO: Waiting up to 5m0s for pod "pod-ec51bc3a-0729-4032-8030-7c36b65ea548" in namespace "emptydir-4482" to be "Succeeded or Failed"
Oct 12 19:23:00.464: INFO: Pod "pod-ec51bc3a-0729-4032-8030-7c36b65ea548": Phase="Pending", Reason="", readiness=false. Elapsed: 112.417722ms
Oct 12 19:23:02.573: INFO: Pod "pod-ec51bc3a-0729-4032-8030-7c36b65ea548": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221466681s
Oct 12 19:23:04.684: INFO: Pod "pod-ec51bc3a-0729-4032-8030-7c36b65ea548": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332078101s
Oct 12 19:23:06.793: INFO: Pod "pod-ec51bc3a-0729-4032-8030-7c36b65ea548": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441815959s
Oct 12 19:23:08.911: INFO: Pod "pod-ec51bc3a-0729-4032-8030-7c36b65ea548": Phase="Pending", Reason="", readiness=false. Elapsed: 8.559043344s
Oct 12 19:23:11.020: INFO: Pod "pod-ec51bc3a-0729-4032-8030-7c36b65ea548": Phase="Pending", Reason="", readiness=false. Elapsed: 10.668466993s
Oct 12 19:23:13.130: INFO: Pod "pod-ec51bc3a-0729-4032-8030-7c36b65ea548": Phase="Pending", Reason="", readiness=false. Elapsed: 12.778252953s
Oct 12 19:23:15.241: INFO: Pod "pod-ec51bc3a-0729-4032-8030-7c36b65ea548": Phase="Pending", Reason="", readiness=false. Elapsed: 14.889894158s
Oct 12 19:23:17.352: INFO: Pod "pod-ec51bc3a-0729-4032-8030-7c36b65ea548": Phase="Pending", Reason="", readiness=false. Elapsed: 17.000684431s
Oct 12 19:23:19.462: INFO: Pod "pod-ec51bc3a-0729-4032-8030-7c36b65ea548": Phase="Pending", Reason="", readiness=false. Elapsed: 19.110411333s
Oct 12 19:23:21.571: INFO: Pod "pod-ec51bc3a-0729-4032-8030-7c36b65ea548": Phase="Pending", Reason="", readiness=false. Elapsed: 21.219547712s
Oct 12 19:23:23.680: INFO: Pod "pod-ec51bc3a-0729-4032-8030-7c36b65ea548": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.328419881s
STEP: Saw pod success
Oct 12 19:23:23.680: INFO: Pod "pod-ec51bc3a-0729-4032-8030-7c36b65ea548" satisfied condition "Succeeded or Failed"
Oct 12 19:23:23.789: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-ec51bc3a-0729-4032-8030-7c36b65ea548 container test-container: <nil>
STEP: delete the pod
Oct 12 19:23:24.523: INFO: Waiting for pod pod-ec51bc3a-0729-4032-8030-7c36b65ea548 to disappear
Oct 12 19:23:24.631: INFO: Pod pod-ec51bc3a-0729-4032-8030-7c36b65ea548 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    new files should be created with FSGroup ownership when container is root
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":1,"skipped":30,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:24.996: INFO: Only supported for providers [gce gke] (not aws)
... skipping 60 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:22:58.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Oct 12 19:23:00.419: INFO: Waiting up to 5m0s for pod "client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8" in namespace "containers-1988" to be "Succeeded or Failed"
Oct 12 19:23:00.533: INFO: Pod "client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8": Phase="Pending", Reason="", readiness=false. Elapsed: 114.221077ms
Oct 12 19:23:02.643: INFO: Pod "client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224563873s
Oct 12 19:23:04.755: INFO: Pod "client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335827896s
Oct 12 19:23:06.865: INFO: Pod "client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446812876s
Oct 12 19:23:08.978: INFO: Pod "client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.559599051s
Oct 12 19:23:11.091: INFO: Pod "client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.672380606s
Oct 12 19:23:13.203: INFO: Pod "client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.784369578s
Oct 12 19:23:15.313: INFO: Pod "client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.894746683s
Oct 12 19:23:17.424: INFO: Pod "client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.00563893s
Oct 12 19:23:19.535: INFO: Pod "client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.116811633s
Oct 12 19:23:21.680: INFO: Pod "client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.261765462s
Oct 12 19:23:23.790: INFO: Pod "client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.371522005s
STEP: Saw pod success
Oct 12 19:23:23.790: INFO: Pod "client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8" satisfied condition "Succeeded or Failed"
Oct 12 19:23:23.909: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8 container agnhost-container: <nil>
STEP: delete the pod
Oct 12 19:23:24.927: INFO: Waiting for pod client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8 to disappear
Oct 12 19:23:25.038: INFO: Pod client-containers-acc7c40e-9b36-4c40-b624-47dc0662afa8 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:26.429 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:25.292: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 38 lines ...
Oct 12 19:23:12.142: INFO: The status of Pod server-envvars-0ab97d07-a890-401a-a5cd-00c38ba8d42b is Pending, waiting for it to be Running (with Ready = true)
Oct 12 19:23:14.143: INFO: The status of Pod server-envvars-0ab97d07-a890-401a-a5cd-00c38ba8d42b is Pending, waiting for it to be Running (with Ready = true)
Oct 12 19:23:16.143: INFO: The status of Pod server-envvars-0ab97d07-a890-401a-a5cd-00c38ba8d42b is Pending, waiting for it to be Running (with Ready = true)
Oct 12 19:23:18.143: INFO: The status of Pod server-envvars-0ab97d07-a890-401a-a5cd-00c38ba8d42b is Pending, waiting for it to be Running (with Ready = true)
Oct 12 19:23:20.144: INFO: The status of Pod server-envvars-0ab97d07-a890-401a-a5cd-00c38ba8d42b is Pending, waiting for it to be Running (with Ready = true)
Oct 12 19:23:22.142: INFO: The status of Pod server-envvars-0ab97d07-a890-401a-a5cd-00c38ba8d42b is Running (Ready = true)
Oct 12 19:23:22.479: INFO: Waiting up to 5m0s for pod "client-envvars-b7c30331-d6c1-404f-ac43-e55e71ca64fc" in namespace "pods-9912" to be "Succeeded or Failed"
Oct 12 19:23:22.589: INFO: Pod "client-envvars-b7c30331-d6c1-404f-ac43-e55e71ca64fc": Phase="Pending", Reason="", readiness=false. Elapsed: 110.010966ms
Oct 12 19:23:24.701: INFO: Pod "client-envvars-b7c30331-d6c1-404f-ac43-e55e71ca64fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.22212265s
STEP: Saw pod success
Oct 12 19:23:24.701: INFO: Pod "client-envvars-b7c30331-d6c1-404f-ac43-e55e71ca64fc" satisfied condition "Succeeded or Failed"
Oct 12 19:23:24.812: INFO: Trying to get logs from node ip-172-20-32-55.eu-central-1.compute.internal pod client-envvars-b7c30331-d6c1-404f-ac43-e55e71ca64fc container env3cont: <nil>
STEP: delete the pod
Oct 12 19:23:25.043: INFO: Waiting for pod client-envvars-b7c30331-d6c1-404f-ac43-e55e71ca64fc to disappear
Oct 12 19:23:25.154: INFO: Pod client-envvars-b7c30331-d6c1-404f-ac43-e55e71ca64fc no longer exists
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:27.725 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:25.494: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 55 lines ...
• [SLOW TEST:28.326 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:23:26.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8232" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":2,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:26.490: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-zmnd
STEP: Creating a pod to test atomic-volume-subpath
Oct 12 19:22:58.573: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zmnd" in namespace "subpath-1513" to be "Succeeded or Failed"
Oct 12 19:22:58.687: INFO: Pod "pod-subpath-test-configmap-zmnd": Phase="Pending", Reason="", readiness=false. Elapsed: 114.324267ms
Oct 12 19:23:00.797: INFO: Pod "pod-subpath-test-configmap-zmnd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223787254s
Oct 12 19:23:02.908: INFO: Pod "pod-subpath-test-configmap-zmnd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335261752s
Oct 12 19:23:05.026: INFO: Pod "pod-subpath-test-configmap-zmnd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.452927289s
Oct 12 19:23:07.137: INFO: Pod "pod-subpath-test-configmap-zmnd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.564456504s
Oct 12 19:23:09.247: INFO: Pod "pod-subpath-test-configmap-zmnd": Phase="Running", Reason="", readiness=true. Elapsed: 10.674396116s
... skipping 3 lines ...
Oct 12 19:23:17.686: INFO: Pod "pod-subpath-test-configmap-zmnd": Phase="Running", Reason="", readiness=true. Elapsed: 19.113351604s
Oct 12 19:23:19.844: INFO: Pod "pod-subpath-test-configmap-zmnd": Phase="Running", Reason="", readiness=true. Elapsed: 21.271248037s
Oct 12 19:23:21.954: INFO: Pod "pod-subpath-test-configmap-zmnd": Phase="Running", Reason="", readiness=true. Elapsed: 23.380960723s
Oct 12 19:23:24.063: INFO: Pod "pod-subpath-test-configmap-zmnd": Phase="Running", Reason="", readiness=true. Elapsed: 25.48995965s
Oct 12 19:23:26.173: INFO: Pod "pod-subpath-test-configmap-zmnd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.599909612s
STEP: Saw pod success
Oct 12 19:23:26.173: INFO: Pod "pod-subpath-test-configmap-zmnd" satisfied condition "Succeeded or Failed"
Oct 12 19:23:26.281: INFO: Trying to get logs from node ip-172-20-32-55.eu-central-1.compute.internal pod pod-subpath-test-configmap-zmnd container test-container-subpath-configmap-zmnd: <nil>
STEP: delete the pod
Oct 12 19:23:26.507: INFO: Waiting for pod pod-subpath-test-configmap-zmnd to disappear
Oct 12 19:23:26.616: INFO: Pod pod-subpath-test-configmap-zmnd no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zmnd
Oct 12 19:23:26.616: INFO: Deleting pod "pod-subpath-test-configmap-zmnd" in namespace "subpath-1513"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:27.059: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 201 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 21 lines ...
• [SLOW TEST:30.654 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:51
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":1,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:28.443: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 49 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:23:28.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4409" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 51 lines ...
Oct 12 19:23:20.847: INFO: PersistentVolumeClaim pvc-fhnqd found but phase is Pending instead of Bound.
Oct 12 19:23:22.957: INFO: PersistentVolumeClaim pvc-fhnqd found and phase=Bound (8.548469232s)
Oct 12 19:23:22.958: INFO: Waiting up to 3m0s for PersistentVolume local-w6tkc to have phase Bound
Oct 12 19:23:23.067: INFO: PersistentVolume local-w6tkc found and phase=Bound (109.484234ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qrqr
STEP: Creating a pod to test subpath
Oct 12 19:23:23.397: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qrqr" in namespace "provisioning-5208" to be "Succeeded or Failed"
Oct 12 19:23:23.506: INFO: Pod "pod-subpath-test-preprovisionedpv-qrqr": Phase="Pending", Reason="", readiness=false. Elapsed: 109.15129ms
Oct 12 19:23:25.616: INFO: Pod "pod-subpath-test-preprovisionedpv-qrqr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218634411s
Oct 12 19:23:27.726: INFO: Pod "pod-subpath-test-preprovisionedpv-qrqr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328784797s
Oct 12 19:23:29.836: INFO: Pod "pod-subpath-test-preprovisionedpv-qrqr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439003105s
Oct 12 19:23:31.946: INFO: Pod "pod-subpath-test-preprovisionedpv-qrqr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548794833s
Oct 12 19:23:34.056: INFO: Pod "pod-subpath-test-preprovisionedpv-qrqr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.658408684s
STEP: Saw pod success
Oct 12 19:23:34.056: INFO: Pod "pod-subpath-test-preprovisionedpv-qrqr" satisfied condition "Succeeded or Failed"
Oct 12 19:23:34.165: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-qrqr container test-container-volume-preprovisionedpv-qrqr: <nil>
STEP: delete the pod
Oct 12 19:23:34.435: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qrqr to disappear
Oct 12 19:23:34.544: INFO: Pod pod-subpath-test-preprovisionedpv-qrqr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qrqr
Oct 12 19:23:34.545: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qrqr" in namespace "provisioning-5208"
... skipping 39 lines ...
Oct 12 19:22:57.990: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-3618sj2nc
STEP: creating a claim
Oct 12 19:22:58.100: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-wth5
STEP: Creating a pod to test exec-volume-test
Oct 12 19:22:58.447: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-wth5" in namespace "volume-3618" to be "Succeeded or Failed"
Oct 12 19:22:58.556: INFO: Pod "exec-volume-test-dynamicpv-wth5": Phase="Pending", Reason="", readiness=false. Elapsed: 109.35166ms
Oct 12 19:23:00.666: INFO: Pod "exec-volume-test-dynamicpv-wth5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219792509s
Oct 12 19:23:02.777: INFO: Pod "exec-volume-test-dynamicpv-wth5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330837319s
Oct 12 19:23:04.895: INFO: Pod "exec-volume-test-dynamicpv-wth5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.448361859s
Oct 12 19:23:07.005: INFO: Pod "exec-volume-test-dynamicpv-wth5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.558160449s
Oct 12 19:23:09.115: INFO: Pod "exec-volume-test-dynamicpv-wth5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.668608365s
Oct 12 19:23:11.225: INFO: Pod "exec-volume-test-dynamicpv-wth5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.777988911s
Oct 12 19:23:13.336: INFO: Pod "exec-volume-test-dynamicpv-wth5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.889140661s
Oct 12 19:23:15.446: INFO: Pod "exec-volume-test-dynamicpv-wth5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.999299052s
Oct 12 19:23:17.558: INFO: Pod "exec-volume-test-dynamicpv-wth5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.111289318s
Oct 12 19:23:19.669: INFO: Pod "exec-volume-test-dynamicpv-wth5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.222472327s
Oct 12 19:23:21.781: INFO: Pod "exec-volume-test-dynamicpv-wth5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.33422715s
STEP: Saw pod success
Oct 12 19:23:21.781: INFO: Pod "exec-volume-test-dynamicpv-wth5" satisfied condition "Succeeded or Failed"
Oct 12 19:23:21.890: INFO: Trying to get logs from node ip-172-20-32-55.eu-central-1.compute.internal pod exec-volume-test-dynamicpv-wth5 container exec-container-dynamicpv-wth5: <nil>
STEP: delete the pod
Oct 12 19:23:22.118: INFO: Waiting for pod exec-volume-test-dynamicpv-wth5 to disappear
Oct 12 19:23:22.227: INFO: Pod exec-volume-test-dynamicpv-wth5 no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-wth5
Oct 12 19:23:22.227: INFO: Deleting pod "exec-volume-test-dynamicpv-wth5" in namespace "volume-3618"
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:38.471: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 86 lines ...
Oct 12 19:23:34.212: INFO: Got stdout from 3.66.27.240:22: Hello from core@ip-172-20-61-115.eu-central-1.compute.internal
STEP: SSH'ing to 1 nodes and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
Oct 12 19:23:36.746: INFO: Got stdout from 3.67.193.7:22: stdout
Oct 12 19:23:36.746: INFO: Got stderr from 3.67.193.7:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing core@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:23:41.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-1460" for this suite.


• [SLOW TEST:15.967 seconds]
[sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should SSH to all nodes and run commands
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
------------------------------
{"msg":"PASSED [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":2,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:41.975: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:43.923: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 269 lines ...
Oct 12 19:23:12.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
Oct 12 19:23:13.485: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 12 19:23:13.705: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7436" in namespace "provisioning-7436" to be "Succeeded or Failed"
Oct 12 19:23:13.814: INFO: Pod "hostpath-symlink-prep-provisioning-7436": Phase="Pending", Reason="", readiness=false. Elapsed: 108.500664ms
Oct 12 19:23:15.925: INFO: Pod "hostpath-symlink-prep-provisioning-7436": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219386948s
Oct 12 19:23:18.034: INFO: Pod "hostpath-symlink-prep-provisioning-7436": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328567029s
Oct 12 19:23:20.144: INFO: Pod "hostpath-symlink-prep-provisioning-7436": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438392148s
Oct 12 19:23:22.253: INFO: Pod "hostpath-symlink-prep-provisioning-7436": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547508249s
Oct 12 19:23:24.363: INFO: Pod "hostpath-symlink-prep-provisioning-7436": Phase="Pending", Reason="", readiness=false. Elapsed: 10.657347693s
Oct 12 19:23:26.472: INFO: Pod "hostpath-symlink-prep-provisioning-7436": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.766605998s
STEP: Saw pod success
Oct 12 19:23:26.472: INFO: Pod "hostpath-symlink-prep-provisioning-7436" satisfied condition "Succeeded or Failed"
Oct 12 19:23:26.472: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7436" in namespace "provisioning-7436"
Oct 12 19:23:26.585: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7436" to be fully deleted
Oct 12 19:23:26.694: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-j6lg
STEP: Creating a pod to test subpath
Oct 12 19:23:26.806: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-j6lg" in namespace "provisioning-7436" to be "Succeeded or Failed"
Oct 12 19:23:26.914: INFO: Pod "pod-subpath-test-inlinevolume-j6lg": Phase="Pending", Reason="", readiness=false. Elapsed: 108.536004ms
Oct 12 19:23:29.031: INFO: Pod "pod-subpath-test-inlinevolume-j6lg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224980353s
Oct 12 19:23:31.183: INFO: Pod "pod-subpath-test-inlinevolume-j6lg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377657522s
Oct 12 19:23:33.293: INFO: Pod "pod-subpath-test-inlinevolume-j6lg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.487680608s
STEP: Saw pod success
Oct 12 19:23:33.294: INFO: Pod "pod-subpath-test-inlinevolume-j6lg" satisfied condition "Succeeded or Failed"
Oct 12 19:23:33.402: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-j6lg container test-container-subpath-inlinevolume-j6lg: <nil>
STEP: delete the pod
Oct 12 19:23:33.630: INFO: Waiting for pod pod-subpath-test-inlinevolume-j6lg to disappear
Oct 12 19:23:33.740: INFO: Pod pod-subpath-test-inlinevolume-j6lg no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-j6lg
Oct 12 19:23:33.740: INFO: Deleting pod "pod-subpath-test-inlinevolume-j6lg" in namespace "provisioning-7436"
STEP: Deleting pod
Oct 12 19:23:33.848: INFO: Deleting pod "pod-subpath-test-inlinevolume-j6lg" in namespace "provisioning-7436"
Oct 12 19:23:34.066: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7436" in namespace "provisioning-7436" to be "Succeeded or Failed"
Oct 12 19:23:34.176: INFO: Pod "hostpath-symlink-prep-provisioning-7436": Phase="Pending", Reason="", readiness=false. Elapsed: 109.539423ms
Oct 12 19:23:36.284: INFO: Pod "hostpath-symlink-prep-provisioning-7436": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218135715s
Oct 12 19:23:38.393: INFO: Pod "hostpath-symlink-prep-provisioning-7436": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327200947s
Oct 12 19:23:40.503: INFO: Pod "hostpath-symlink-prep-provisioning-7436": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436845486s
Oct 12 19:23:42.612: INFO: Pod "hostpath-symlink-prep-provisioning-7436": Phase="Pending", Reason="", readiness=false. Elapsed: 8.545775869s
Oct 12 19:23:44.722: INFO: Pod "hostpath-symlink-prep-provisioning-7436": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655961115s
Oct 12 19:23:46.831: INFO: Pod "hostpath-symlink-prep-provisioning-7436": Phase="Pending", Reason="", readiness=false. Elapsed: 12.76480707s
Oct 12 19:23:48.941: INFO: Pod "hostpath-symlink-prep-provisioning-7436": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.874706952s
STEP: Saw pod success
Oct 12 19:23:48.941: INFO: Pod "hostpath-symlink-prep-provisioning-7436" satisfied condition "Succeeded or Failed"
Oct 12 19:23:48.941: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7436" in namespace "provisioning-7436"
Oct 12 19:23:49.054: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7436" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:23:49.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7436" for this suite.
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:52.287: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 172 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:52.456: INFO: Driver local doesn't support ext4 -- skipping
... skipping 60 lines ...
Oct 12 19:23:35.870: INFO: PersistentVolumeClaim pvc-jp2rl found but phase is Pending instead of Bound.
Oct 12 19:23:37.982: INFO: PersistentVolumeClaim pvc-jp2rl found and phase=Bound (4.336541446s)
Oct 12 19:23:37.982: INFO: Waiting up to 3m0s for PersistentVolume local-gcnhc to have phase Bound
Oct 12 19:23:38.093: INFO: PersistentVolume local-gcnhc found and phase=Bound (110.8095ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-pwk4
STEP: Creating a pod to test exec-volume-test
Oct 12 19:23:38.425: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-pwk4" in namespace "volume-4085" to be "Succeeded or Failed"
Oct 12 19:23:38.536: INFO: Pod "exec-volume-test-preprovisionedpv-pwk4": Phase="Pending", Reason="", readiness=false. Elapsed: 110.080816ms
Oct 12 19:23:40.647: INFO: Pod "exec-volume-test-preprovisionedpv-pwk4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221505962s
Oct 12 19:23:42.757: INFO: Pod "exec-volume-test-preprovisionedpv-pwk4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332005157s
Oct 12 19:23:44.870: INFO: Pod "exec-volume-test-preprovisionedpv-pwk4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444307368s
Oct 12 19:23:46.981: INFO: Pod "exec-volume-test-preprovisionedpv-pwk4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.555576865s
Oct 12 19:23:49.094: INFO: Pod "exec-volume-test-preprovisionedpv-pwk4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.668459871s
Oct 12 19:23:51.205: INFO: Pod "exec-volume-test-preprovisionedpv-pwk4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.779958614s
STEP: Saw pod success
Oct 12 19:23:51.205: INFO: Pod "exec-volume-test-preprovisionedpv-pwk4" satisfied condition "Succeeded or Failed"
Oct 12 19:23:51.323: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-pwk4 container exec-container-preprovisionedpv-pwk4: <nil>
STEP: delete the pod
Oct 12 19:23:51.550: INFO: Waiting for pod exec-volume-test-preprovisionedpv-pwk4 to disappear
Oct 12 19:23:51.661: INFO: Pod exec-volume-test-preprovisionedpv-pwk4 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-pwk4
Oct 12 19:23:51.661: INFO: Deleting pod "exec-volume-test-preprovisionedpv-pwk4" in namespace "volume-4085"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:53.082: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 66 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 12 19:23:44.833: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3dbf073-f6a3-4cb4-b30c-f9574c588c26" in namespace "downward-api-9265" to be "Succeeded or Failed"
Oct 12 19:23:44.942: INFO: Pod "downwardapi-volume-c3dbf073-f6a3-4cb4-b30c-f9574c588c26": Phase="Pending", Reason="", readiness=false. Elapsed: 109.218346ms
Oct 12 19:23:47.052: INFO: Pod "downwardapi-volume-c3dbf073-f6a3-4cb4-b30c-f9574c588c26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21897103s
Oct 12 19:23:49.163: INFO: Pod "downwardapi-volume-c3dbf073-f6a3-4cb4-b30c-f9574c588c26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3299939s
Oct 12 19:23:51.273: INFO: Pod "downwardapi-volume-c3dbf073-f6a3-4cb4-b30c-f9574c588c26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439510792s
Oct 12 19:23:53.383: INFO: Pod "downwardapi-volume-c3dbf073-f6a3-4cb4-b30c-f9574c588c26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.550282382s
STEP: Saw pod success
Oct 12 19:23:53.384: INFO: Pod "downwardapi-volume-c3dbf073-f6a3-4cb4-b30c-f9574c588c26" satisfied condition "Succeeded or Failed"
Oct 12 19:23:53.493: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod downwardapi-volume-c3dbf073-f6a3-4cb4-b30c-f9574c588c26 container client-container: <nil>
STEP: delete the pod
Oct 12 19:23:53.802: INFO: Waiting for pod downwardapi-volume-c3dbf073-f6a3-4cb4-b30c-f9574c588c26 to disappear
Oct 12 19:23:53.911: INFO: Pod downwardapi-volume-c3dbf073-f6a3-4cb4-b30c-f9574c588c26 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.957 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":55,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:54.146: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 33 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:23:49.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":7,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:54.729: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:179

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:23:36.081: INFO: >>> kubeConfig: /root/.kube/config
... skipping 40 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":2,"skipped":4,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:55.459: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 235 lines ...
Oct 12 19:23:54.108: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct 12 19:23:54.108: INFO: stdout: "scheduler controller-manager etcd-0 etcd-1"
STEP: getting details of componentstatuses
STEP: getting status of scheduler
Oct 12 19:23:54.108: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4338 get componentstatuses scheduler'
Oct 12 19:23:54.525: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct 12 19:23:54.526: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of controller-manager
Oct 12 19:23:54.526: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4338 get componentstatuses controller-manager'
Oct 12 19:23:54.945: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct 12 19:23:54.945: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of etcd-0
Oct 12 19:23:54.945: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4338 get componentstatuses etcd-0'
Oct 12 19:23:55.383: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct 12 19:23:55.383: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-0   Healthy   {\"health\":\"true\"}   \n"
STEP: getting status of etcd-1
Oct 12 19:23:55.383: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-4338 get componentstatuses etcd-1'
Oct 12 19:23:55.804: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Oct 12 19:23:55.804: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-1   Healthy   {\"health\":\"true\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:23:55.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4338" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":3,"skipped":27,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:56.067: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 30 lines ...
Oct 12 19:23:16.782: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-58922g8fv
STEP: creating a claim
Oct 12 19:23:16.893: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-psm5
STEP: Creating a pod to test subpath
Oct 12 19:23:17.233: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-psm5" in namespace "provisioning-5892" to be "Succeeded or Failed"
Oct 12 19:23:17.342: INFO: Pod "pod-subpath-test-dynamicpv-psm5": Phase="Pending", Reason="", readiness=false. Elapsed: 109.480731ms
Oct 12 19:23:19.452: INFO: Pod "pod-subpath-test-dynamicpv-psm5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219530801s
Oct 12 19:23:21.563: INFO: Pod "pod-subpath-test-dynamicpv-psm5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33022886s
Oct 12 19:23:23.674: INFO: Pod "pod-subpath-test-dynamicpv-psm5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440847343s
Oct 12 19:23:25.784: INFO: Pod "pod-subpath-test-dynamicpv-psm5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.551096317s
Oct 12 19:23:27.894: INFO: Pod "pod-subpath-test-dynamicpv-psm5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.661594379s
Oct 12 19:23:30.006: INFO: Pod "pod-subpath-test-dynamicpv-psm5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.773551232s
Oct 12 19:23:32.123: INFO: Pod "pod-subpath-test-dynamicpv-psm5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.889830153s
Oct 12 19:23:34.233: INFO: Pod "pod-subpath-test-dynamicpv-psm5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.999627144s
STEP: Saw pod success
Oct 12 19:23:34.233: INFO: Pod "pod-subpath-test-dynamicpv-psm5" satisfied condition "Succeeded or Failed"
Oct 12 19:23:34.342: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-psm5 container test-container-volume-dynamicpv-psm5: <nil>
STEP: delete the pod
Oct 12 19:23:34.593: INFO: Waiting for pod pod-subpath-test-dynamicpv-psm5 to disappear
Oct 12 19:23:34.702: INFO: Pod pod-subpath-test-dynamicpv-psm5 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-psm5
Oct 12 19:23:34.702: INFO: Deleting pod "pod-subpath-test-dynamicpv-psm5" in namespace "provisioning-5892"
... skipping 34 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:23:57.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-7095" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":4,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 67 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:23:59.389: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 102 lines ...
Oct 12 19:23:15.142: INFO: PersistentVolumeClaim csi-hostpathnrbkw found but phase is Pending instead of Bound.
Oct 12 19:23:17.251: INFO: PersistentVolumeClaim csi-hostpathnrbkw found but phase is Pending instead of Bound.
Oct 12 19:23:19.361: INFO: PersistentVolumeClaim csi-hostpathnrbkw found but phase is Pending instead of Bound.
Oct 12 19:23:21.484: INFO: PersistentVolumeClaim csi-hostpathnrbkw found and phase=Bound (17.020424264s)
STEP: Creating pod pod-subpath-test-dynamicpv-pql9
STEP: Creating a pod to test subpath
Oct 12 19:23:21.820: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-pql9" in namespace "provisioning-7548" to be "Succeeded or Failed"
Oct 12 19:23:21.929: INFO: Pod "pod-subpath-test-dynamicpv-pql9": Phase="Pending", Reason="", readiness=false. Elapsed: 109.002471ms
Oct 12 19:23:24.040: INFO: Pod "pod-subpath-test-dynamicpv-pql9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219890659s
Oct 12 19:23:26.149: INFO: Pod "pod-subpath-test-dynamicpv-pql9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328919851s
Oct 12 19:23:28.276: INFO: Pod "pod-subpath-test-dynamicpv-pql9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.455194866s
STEP: Saw pod success
Oct 12 19:23:28.276: INFO: Pod "pod-subpath-test-dynamicpv-pql9" satisfied condition "Succeeded or Failed"
Oct 12 19:23:28.406: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-pql9 container test-container-volume-dynamicpv-pql9: <nil>
STEP: delete the pod
Oct 12 19:23:28.667: INFO: Waiting for pod pod-subpath-test-dynamicpv-pql9 to disappear
Oct 12 19:23:28.781: INFO: Pod pod-subpath-test-dynamicpv-pql9 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-pql9
Oct 12 19:23:28.781: INFO: Deleting pod "pod-subpath-test-dynamicpv-pql9" in namespace "provisioning-7548"
... skipping 82 lines ...
Oct 12 19:23:50.819: INFO: PersistentVolumeClaim pvc-vtf7p found but phase is Pending instead of Bound.
Oct 12 19:23:52.928: INFO: PersistentVolumeClaim pvc-vtf7p found and phase=Bound (2.216922548s)
Oct 12 19:23:52.928: INFO: Waiting up to 3m0s for PersistentVolume local-s8k5f to have phase Bound
Oct 12 19:23:53.036: INFO: PersistentVolume local-s8k5f found and phase=Bound (108.0521ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-f26g
STEP: Creating a pod to test subpath
Oct 12 19:23:53.371: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-f26g" in namespace "provisioning-3274" to be "Succeeded or Failed"
Oct 12 19:23:53.480: INFO: Pod "pod-subpath-test-preprovisionedpv-f26g": Phase="Pending", Reason="", readiness=false. Elapsed: 108.654728ms
Oct 12 19:23:55.589: INFO: Pod "pod-subpath-test-preprovisionedpv-f26g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21740016s
Oct 12 19:23:57.699: INFO: Pod "pod-subpath-test-preprovisionedpv-f26g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327491612s
STEP: Saw pod success
Oct 12 19:23:57.699: INFO: Pod "pod-subpath-test-preprovisionedpv-f26g" satisfied condition "Succeeded or Failed"
Oct 12 19:23:57.808: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-f26g container test-container-volume-preprovisionedpv-f26g: <nil>
STEP: delete the pod
Oct 12 19:23:58.037: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-f26g to disappear
Oct 12 19:23:58.147: INFO: Pod pod-subpath-test-preprovisionedpv-f26g no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-f26g
Oct 12 19:23:58.147: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-f26g" in namespace "provisioning-3274"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:24:01.228: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:24:01.320: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 76 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:23:54.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-7223/configmap-test-6000fa77-8d1a-4efa-bbc2-1df6ed61c68b
STEP: Creating a pod to test consume configMaps
Oct 12 19:23:54.934: INFO: Waiting up to 5m0s for pod "pod-configmaps-49937c2d-c367-40f9-9838-171f76735252" in namespace "configmap-7223" to be "Succeeded or Failed"
Oct 12 19:23:55.043: INFO: Pod "pod-configmaps-49937c2d-c367-40f9-9838-171f76735252": Phase="Pending", Reason="", readiness=false. Elapsed: 108.731136ms
Oct 12 19:23:57.158: INFO: Pod "pod-configmaps-49937c2d-c367-40f9-9838-171f76735252": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223955655s
Oct 12 19:23:59.269: INFO: Pod "pod-configmaps-49937c2d-c367-40f9-9838-171f76735252": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334632827s
Oct 12 19:24:01.386: INFO: Pod "pod-configmaps-49937c2d-c367-40f9-9838-171f76735252": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.451502827s
STEP: Saw pod success
Oct 12 19:24:01.386: INFO: Pod "pod-configmaps-49937c2d-c367-40f9-9838-171f76735252" satisfied condition "Succeeded or Failed"
Oct 12 19:24:01.523: INFO: Trying to get logs from node ip-172-20-32-55.eu-central-1.compute.internal pod pod-configmaps-49937c2d-c367-40f9-9838-171f76735252 container env-test: <nil>
STEP: delete the pod
Oct 12 19:24:01.767: INFO: Waiting for pod pod-configmaps-49937c2d-c367-40f9-9838-171f76735252 to disappear
Oct 12 19:24:01.881: INFO: Pod pod-configmaps-49937c2d-c367-40f9-9838-171f76735252 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.946 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [sig-autoscaling] DNS horizontal autoscaling
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 125 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 424 lines ...
• [SLOW TEST:13.052 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":4,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:48.282 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove pods when job is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:189
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":2,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull image from invalid registry [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":4,"skipped":29,"failed":0}

SS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:24:08.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6350" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:24:08.764: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 62 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:23:56.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
STEP: Creating a job
STEP: Ensuring job exceed backofflimit
STEP: Checking that 2 pod created and status is failed
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:24:12.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6565" for this suite.


• [SLOW TEST:17.144 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:24:05.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 12 19:24:06.095: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ccdd619a-ed5b-44fc-94bc-fb135b2cec00" in namespace "projected-3519" to be "Succeeded or Failed"
Oct 12 19:24:06.204: INFO: Pod "downwardapi-volume-ccdd619a-ed5b-44fc-94bc-fb135b2cec00": Phase="Pending", Reason="", readiness=false. Elapsed: 109.826903ms
Oct 12 19:24:08.320: INFO: Pod "downwardapi-volume-ccdd619a-ed5b-44fc-94bc-fb135b2cec00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225507505s
Oct 12 19:24:10.431: INFO: Pod "downwardapi-volume-ccdd619a-ed5b-44fc-94bc-fb135b2cec00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336611403s
Oct 12 19:24:12.544: INFO: Pod "downwardapi-volume-ccdd619a-ed5b-44fc-94bc-fb135b2cec00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.449103146s
STEP: Saw pod success
Oct 12 19:24:12.544: INFO: Pod "downwardapi-volume-ccdd619a-ed5b-44fc-94bc-fb135b2cec00" satisfied condition "Succeeded or Failed"
Oct 12 19:24:12.654: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod downwardapi-volume-ccdd619a-ed5b-44fc-94bc-fb135b2cec00 container client-container: <nil>
STEP: delete the pod
Oct 12 19:24:12.890: INFO: Waiting for pod downwardapi-volume-ccdd619a-ed5b-44fc-94bc-fb135b2cec00 to disappear
Oct 12 19:24:12.999: INFO: Pod downwardapi-volume-ccdd619a-ed5b-44fc-94bc-fb135b2cec00 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.792 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":4,"skipped":34,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:24:00.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
• [SLOW TEST:12.576 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:24:13.545: INFO: Only supported for providers [vsphere] (not aws)
... skipping 23 lines ...
Oct 12 19:24:07.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on node default medium
Oct 12 19:24:07.985: INFO: Waiting up to 5m0s for pod "pod-dcff5570-bd9c-4a9c-b2e7-fc5e8353a205" in namespace "emptydir-6552" to be "Succeeded or Failed"
Oct 12 19:24:08.096: INFO: Pod "pod-dcff5570-bd9c-4a9c-b2e7-fc5e8353a205": Phase="Pending", Reason="", readiness=false. Elapsed: 110.649124ms
Oct 12 19:24:10.204: INFO: Pod "pod-dcff5570-bd9c-4a9c-b2e7-fc5e8353a205": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218657675s
Oct 12 19:24:12.315: INFO: Pod "pod-dcff5570-bd9c-4a9c-b2e7-fc5e8353a205": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329597907s
Oct 12 19:24:14.423: INFO: Pod "pod-dcff5570-bd9c-4a9c-b2e7-fc5e8353a205": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.437528973s
STEP: Saw pod success
Oct 12 19:24:14.423: INFO: Pod "pod-dcff5570-bd9c-4a9c-b2e7-fc5e8353a205" satisfied condition "Succeeded or Failed"
Oct 12 19:24:14.532: INFO: Trying to get logs from node ip-172-20-32-55.eu-central-1.compute.internal pod pod-dcff5570-bd9c-4a9c-b2e7-fc5e8353a205 container test-container: <nil>
STEP: delete the pod
Oct 12 19:24:14.753: INFO: Waiting for pod pod-dcff5570-bd9c-4a9c-b2e7-fc5e8353a205 to disappear
Oct 12 19:24:14.861: INFO: Pod pod-dcff5570-bd9c-4a9c-b2e7-fc5e8353a205 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.746 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:24:15.090: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 74 lines ...
Oct 12 19:23:30.010: INFO: PersistentVolumeClaim csi-hostpathm86dn found but phase is Pending instead of Bound.
Oct 12 19:23:32.128: INFO: PersistentVolumeClaim csi-hostpathm86dn found but phase is Pending instead of Bound.
Oct 12 19:23:34.238: INFO: PersistentVolumeClaim csi-hostpathm86dn found but phase is Pending instead of Bound.
Oct 12 19:23:36.349: INFO: PersistentVolumeClaim csi-hostpathm86dn found and phase=Bound (6.449960581s)
STEP: Creating pod pod-subpath-test-dynamicpv-j6q8
STEP: Creating a pod to test subpath
Oct 12 19:23:36.681: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-j6q8" in namespace "provisioning-9005" to be "Succeeded or Failed"
Oct 12 19:23:36.791: INFO: Pod "pod-subpath-test-dynamicpv-j6q8": Phase="Pending", Reason="", readiness=false. Elapsed: 109.849317ms
Oct 12 19:23:38.901: INFO: Pod "pod-subpath-test-dynamicpv-j6q8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219769281s
Oct 12 19:23:41.012: INFO: Pod "pod-subpath-test-dynamicpv-j6q8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330249766s
Oct 12 19:23:43.123: INFO: Pod "pod-subpath-test-dynamicpv-j6q8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441429817s
Oct 12 19:23:45.233: INFO: Pod "pod-subpath-test-dynamicpv-j6q8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.551404054s
Oct 12 19:23:47.343: INFO: Pod "pod-subpath-test-dynamicpv-j6q8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.661965139s
Oct 12 19:23:49.454: INFO: Pod "pod-subpath-test-dynamicpv-j6q8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.77228688s
Oct 12 19:23:51.564: INFO: Pod "pod-subpath-test-dynamicpv-j6q8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.882643793s
Oct 12 19:23:53.675: INFO: Pod "pod-subpath-test-dynamicpv-j6q8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.99346841s
Oct 12 19:23:55.786: INFO: Pod "pod-subpath-test-dynamicpv-j6q8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.104296703s
STEP: Saw pod success
Oct 12 19:23:55.786: INFO: Pod "pod-subpath-test-dynamicpv-j6q8" satisfied condition "Succeeded or Failed"
Oct 12 19:23:55.897: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-j6q8 container test-container-subpath-dynamicpv-j6q8: <nil>
STEP: delete the pod
Oct 12 19:23:56.141: INFO: Waiting for pod pod-subpath-test-dynamicpv-j6q8 to disappear
Oct 12 19:23:56.255: INFO: Pod pod-subpath-test-dynamicpv-j6q8 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-j6q8
Oct 12 19:23:56.255: INFO: Deleting pod "pod-subpath-test-dynamicpv-j6q8" in namespace "provisioning-9005"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":4,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:24:19.309: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 191 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","total":-1,"completed":2,"skipped":12,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 61 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383
    should be able to retrieve and filter logs  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:23.281 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should test the lifecycle of a ReplicationController [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":3,"skipped":26,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:24:25.087: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-0c447aad-3821-4c44-92b6-4893099a3992
STEP: Creating a pod to test consume secrets
Oct 12 19:24:20.191: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-426bd0b0-f0ab-4933-abba-740edc90d011" in namespace "projected-5095" to be "Succeeded or Failed"
Oct 12 19:24:20.301: INFO: Pod "pod-projected-secrets-426bd0b0-f0ab-4933-abba-740edc90d011": Phase="Pending", Reason="", readiness=false. Elapsed: 109.88587ms
Oct 12 19:24:22.411: INFO: Pod "pod-projected-secrets-426bd0b0-f0ab-4933-abba-740edc90d011": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219877174s
Oct 12 19:24:24.521: INFO: Pod "pod-projected-secrets-426bd0b0-f0ab-4933-abba-740edc90d011": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329970251s
STEP: Saw pod success
Oct 12 19:24:24.521: INFO: Pod "pod-projected-secrets-426bd0b0-f0ab-4933-abba-740edc90d011" satisfied condition "Succeeded or Failed"
Oct 12 19:24:24.631: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod pod-projected-secrets-426bd0b0-f0ab-4933-abba-740edc90d011 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 12 19:24:24.857: INFO: Waiting for pod pod-projected-secrets-426bd0b0-f0ab-4933-abba-740edc90d011 to disappear
Oct 12 19:24:24.968: INFO: Pod pod-projected-secrets-426bd0b0-f0ab-4933-abba-740edc90d011 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.785 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":32,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:24:13.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct 12 19:24:13.937: INFO: Waiting up to 5m0s for pod "security-context-b48ee8ee-2ad0-411f-bb58-12ff0fb1a2df" in namespace "security-context-4473" to be "Succeeded or Failed"
Oct 12 19:24:14.049: INFO: Pod "security-context-b48ee8ee-2ad0-411f-bb58-12ff0fb1a2df": Phase="Pending", Reason="", readiness=false. Elapsed: 111.370971ms
Oct 12 19:24:16.160: INFO: Pod "security-context-b48ee8ee-2ad0-411f-bb58-12ff0fb1a2df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222837008s
Oct 12 19:24:18.272: INFO: Pod "security-context-b48ee8ee-2ad0-411f-bb58-12ff0fb1a2df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334692771s
Oct 12 19:24:20.386: INFO: Pod "security-context-b48ee8ee-2ad0-411f-bb58-12ff0fb1a2df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.448274107s
Oct 12 19:24:22.497: INFO: Pod "security-context-b48ee8ee-2ad0-411f-bb58-12ff0fb1a2df": Phase="Running", Reason="", readiness=true. Elapsed: 8.559737922s
Oct 12 19:24:24.609: INFO: Pod "security-context-b48ee8ee-2ad0-411f-bb58-12ff0fb1a2df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.671258078s
STEP: Saw pod success
Oct 12 19:24:24.609: INFO: Pod "security-context-b48ee8ee-2ad0-411f-bb58-12ff0fb1a2df" satisfied condition "Succeeded or Failed"
Oct 12 19:24:24.720: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod security-context-b48ee8ee-2ad0-411f-bb58-12ff0fb1a2df container test-container: <nil>
STEP: delete the pod
Oct 12 19:24:24.950: INFO: Waiting for pod security-context-b48ee8ee-2ad0-411f-bb58-12ff0fb1a2df to disappear
Oct 12 19:24:25.061: INFO: Pod security-context-b48ee8ee-2ad0-411f-bb58-12ff0fb1a2df no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.060 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":5,"skipped":38,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:68

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:70
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":51,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:24:13.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 239 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":2,"skipped":16,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:24:29.717: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 126 lines ...
• [SLOW TEST:6.657 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":34,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":6,"skipped":65,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:24:34.031: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 30 lines ...
Oct 12 19:23:27.043: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8517bh6v7
STEP: creating a claim
Oct 12 19:23:27.152: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-9gkj
STEP: Creating a pod to test atomic-volume-subpath
Oct 12 19:23:27.484: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-9gkj" in namespace "provisioning-8517" to be "Succeeded or Failed"
Oct 12 19:23:27.592: INFO: Pod "pod-subpath-test-dynamicpv-9gkj": Phase="Pending", Reason="", readiness=false. Elapsed: 108.668852ms
Oct 12 19:23:29.703: INFO: Pod "pod-subpath-test-dynamicpv-9gkj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219342612s
Oct 12 19:23:31.812: INFO: Pod "pod-subpath-test-dynamicpv-9gkj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328669335s
Oct 12 19:23:33.922: INFO: Pod "pod-subpath-test-dynamicpv-9gkj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438547593s
Oct 12 19:23:36.032: INFO: Pod "pod-subpath-test-dynamicpv-9gkj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548725206s
Oct 12 19:23:38.143: INFO: Pod "pod-subpath-test-dynamicpv-9gkj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.659326509s
... skipping 13 lines ...
Oct 12 19:24:07.690: INFO: Pod "pod-subpath-test-dynamicpv-9gkj": Phase="Running", Reason="", readiness=true. Elapsed: 40.206095636s
Oct 12 19:24:09.800: INFO: Pod "pod-subpath-test-dynamicpv-9gkj": Phase="Running", Reason="", readiness=true. Elapsed: 42.316250067s
Oct 12 19:24:11.914: INFO: Pod "pod-subpath-test-dynamicpv-9gkj": Phase="Running", Reason="", readiness=true. Elapsed: 44.429834424s
Oct 12 19:24:14.024: INFO: Pod "pod-subpath-test-dynamicpv-9gkj": Phase="Running", Reason="", readiness=true. Elapsed: 46.540498004s
Oct 12 19:24:16.135: INFO: Pod "pod-subpath-test-dynamicpv-9gkj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 48.651014784s
STEP: Saw pod success
Oct 12 19:24:16.135: INFO: Pod "pod-subpath-test-dynamicpv-9gkj" satisfied condition "Succeeded or Failed"
Oct 12 19:24:16.243: INFO: Trying to get logs from node ip-172-20-32-55.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-9gkj container test-container-subpath-dynamicpv-9gkj: <nil>
STEP: delete the pod
Oct 12 19:24:16.470: INFO: Waiting for pod pod-subpath-test-dynamicpv-9gkj to disappear
Oct 12 19:24:16.578: INFO: Pod pod-subpath-test-dynamicpv-9gkj no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-9gkj
Oct 12 19:24:16.578: INFO: Deleting pod "pod-subpath-test-dynamicpv-9gkj" in namespace "provisioning-8517"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":48,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:24:38.022: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-projected-q57x
STEP: Creating a pod to test atomic-volume-subpath
Oct 12 19:24:14.479: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-q57x" in namespace "subpath-5323" to be "Succeeded or Failed"
Oct 12 19:24:14.590: INFO: Pod "pod-subpath-test-projected-q57x": Phase="Pending", Reason="", readiness=false. Elapsed: 110.201177ms
Oct 12 19:24:16.699: INFO: Pod "pod-subpath-test-projected-q57x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219771565s
Oct 12 19:24:18.809: INFO: Pod "pod-subpath-test-projected-q57x": Phase="Running", Reason="", readiness=true. Elapsed: 4.329513044s
Oct 12 19:24:20.919: INFO: Pod "pod-subpath-test-projected-q57x": Phase="Running", Reason="", readiness=true. Elapsed: 6.439276606s
Oct 12 19:24:23.027: INFO: Pod "pod-subpath-test-projected-q57x": Phase="Running", Reason="", readiness=true. Elapsed: 8.548039705s
Oct 12 19:24:25.137: INFO: Pod "pod-subpath-test-projected-q57x": Phase="Running", Reason="", readiness=true. Elapsed: 10.657843779s
Oct 12 19:24:27.249: INFO: Pod "pod-subpath-test-projected-q57x": Phase="Running", Reason="", readiness=true. Elapsed: 12.769589593s
Oct 12 19:24:29.366: INFO: Pod "pod-subpath-test-projected-q57x": Phase="Running", Reason="", readiness=true. Elapsed: 14.886804296s
Oct 12 19:24:31.476: INFO: Pod "pod-subpath-test-projected-q57x": Phase="Running", Reason="", readiness=true. Elapsed: 16.996264888s
Oct 12 19:24:33.585: INFO: Pod "pod-subpath-test-projected-q57x": Phase="Running", Reason="", readiness=true. Elapsed: 19.106138317s
Oct 12 19:24:35.695: INFO: Pod "pod-subpath-test-projected-q57x": Phase="Running", Reason="", readiness=true. Elapsed: 21.216141502s
Oct 12 19:24:37.811: INFO: Pod "pod-subpath-test-projected-q57x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.331847524s
STEP: Saw pod success
Oct 12 19:24:37.811: INFO: Pod "pod-subpath-test-projected-q57x" satisfied condition "Succeeded or Failed"
Oct 12 19:24:37.920: INFO: Trying to get logs from node ip-172-20-32-55.eu-central-1.compute.internal pod pod-subpath-test-projected-q57x container test-container-subpath-projected-q57x: <nil>
STEP: delete the pod
Oct 12 19:24:38.143: INFO: Waiting for pod pod-subpath-test-projected-q57x to disappear
Oct 12 19:24:38.252: INFO: Pod pod-subpath-test-projected-q57x no longer exists
STEP: Deleting pod pod-subpath-test-projected-q57x
Oct 12 19:24:38.252: INFO: Deleting pod "pod-subpath-test-projected-q57x" in namespace "subpath-5323"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:24:38.613: INFO: Only supported for providers [azure] (not aws)
... skipping 90 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:24:38.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-2334" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s","total":-1,"completed":4,"skipped":53,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-auth] Metadata Concealment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 110 lines ...
• [SLOW TEST:9.546 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: too few pods, absolute => should not allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:267
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":4,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Oct 12 19:24:34.432: INFO: PersistentVolumeClaim pvc-qk42d found but phase is Pending instead of Bound.
Oct 12 19:24:36.540: INFO: PersistentVolumeClaim pvc-qk42d found and phase=Bound (8.549505188s)
Oct 12 19:24:36.540: INFO: Waiting up to 3m0s for PersistentVolume local-lxlm7 to have phase Bound
Oct 12 19:24:36.649: INFO: PersistentVolume local-lxlm7 found and phase=Bound (108.295218ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-ffmn
STEP: Creating a pod to test subpath
Oct 12 19:24:36.985: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ffmn" in namespace "provisioning-9895" to be "Succeeded or Failed"
Oct 12 19:24:37.093: INFO: Pod "pod-subpath-test-preprovisionedpv-ffmn": Phase="Pending", Reason="", readiness=false. Elapsed: 108.400576ms
Oct 12 19:24:39.204: INFO: Pod "pod-subpath-test-preprovisionedpv-ffmn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218900836s
Oct 12 19:24:41.312: INFO: Pod "pod-subpath-test-preprovisionedpv-ffmn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327634198s
Oct 12 19:24:43.425: INFO: Pod "pod-subpath-test-preprovisionedpv-ffmn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440109323s
Oct 12 19:24:45.534: INFO: Pod "pod-subpath-test-preprovisionedpv-ffmn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.549673618s
STEP: Saw pod success
Oct 12 19:24:45.534: INFO: Pod "pod-subpath-test-preprovisionedpv-ffmn" satisfied condition "Succeeded or Failed"
Oct 12 19:24:45.643: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-ffmn container test-container-subpath-preprovisionedpv-ffmn: <nil>
STEP: delete the pod
Oct 12 19:24:45.868: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ffmn to disappear
Oct 12 19:24:45.976: INFO: Pod pod-subpath-test-preprovisionedpv-ffmn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ffmn
Oct 12 19:24:45.976: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ffmn" in namespace "provisioning-9895"
STEP: Creating pod pod-subpath-test-preprovisionedpv-ffmn
STEP: Creating a pod to test subpath
Oct 12 19:24:46.195: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ffmn" in namespace "provisioning-9895" to be "Succeeded or Failed"
Oct 12 19:24:46.304: INFO: Pod "pod-subpath-test-preprovisionedpv-ffmn": Phase="Pending", Reason="", readiness=false. Elapsed: 109.286775ms
Oct 12 19:24:48.413: INFO: Pod "pod-subpath-test-preprovisionedpv-ffmn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218291321s
Oct 12 19:24:50.523: INFO: Pod "pod-subpath-test-preprovisionedpv-ffmn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328080338s
Oct 12 19:24:52.632: INFO: Pod "pod-subpath-test-preprovisionedpv-ffmn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.437060526s
STEP: Saw pod success
Oct 12 19:24:52.632: INFO: Pod "pod-subpath-test-preprovisionedpv-ffmn" satisfied condition "Succeeded or Failed"
Oct 12 19:24:52.743: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-ffmn container test-container-subpath-preprovisionedpv-ffmn: <nil>
STEP: delete the pod
Oct 12 19:24:52.966: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ffmn to disappear
Oct 12 19:24:53.074: INFO: Pod pod-subpath-test-preprovisionedpv-ffmn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ffmn
Oct 12 19:24:53.074: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ffmn" in namespace "provisioning-9895"
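The repeated `Waiting up to 5m0s ... Phase="Pending" ... Elapsed: ...` lines above follow one pattern: poll the pod's phase on a fixed interval until it reaches the terminal condition "Succeeded or Failed" or a timeout expires. A minimal sketch of that pattern (a hypothetical helper, not the actual e2e framework code):

```python
# Sketch of the poll-until-terminal-phase pattern visible in the log.
# `wait_for_pod_condition` and its signature are illustrative, not a real API.
import time

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_pod_condition(get_phase, timeout=300.0, interval=0.01):
    """Poll get_phase() until a terminal phase or timeout; return the phase."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod phase="{phase}", elapsed: {elapsed:.3f}s')
        if phase in TERMINAL_PHASES:
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)

if __name__ == "__main__":
    # Simulated pod: Pending for the first three polls, then Succeeded.
    phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
    result = wait_for_pod_condition(lambda: next(phases))
    print(result)  # Succeeded
```

In the real framework the interval is roughly 2s, which is why consecutive `Elapsed:` values above step by about 2.1s.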
... skipping 30 lines ...
Oct 12 19:24:34.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Oct 12 19:24:34.593: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 12 19:24:34.817: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5368" in namespace "provisioning-5368" to be "Succeeded or Failed"
Oct 12 19:24:34.926: INFO: Pod "hostpath-symlink-prep-provisioning-5368": Phase="Pending", Reason="", readiness=false. Elapsed: 109.215337ms
Oct 12 19:24:37.044: INFO: Pod "hostpath-symlink-prep-provisioning-5368": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.226681377s
STEP: Saw pod success
Oct 12 19:24:37.044: INFO: Pod "hostpath-symlink-prep-provisioning-5368" satisfied condition "Succeeded or Failed"
Oct 12 19:24:37.044: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5368" in namespace "provisioning-5368"
Oct 12 19:24:37.157: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5368" to be fully deleted
Oct 12 19:24:37.266: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-j4gx
STEP: Creating a pod to test subpath
Oct 12 19:24:37.377: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-j4gx" in namespace "provisioning-5368" to be "Succeeded or Failed"
Oct 12 19:24:37.487: INFO: Pod "pod-subpath-test-inlinevolume-j4gx": Phase="Pending", Reason="", readiness=false. Elapsed: 109.748504ms
Oct 12 19:24:39.599: INFO: Pod "pod-subpath-test-inlinevolume-j4gx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222040253s
Oct 12 19:24:41.713: INFO: Pod "pod-subpath-test-inlinevolume-j4gx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336245663s
Oct 12 19:24:43.824: INFO: Pod "pod-subpath-test-inlinevolume-j4gx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44756917s
Oct 12 19:24:45.935: INFO: Pod "pod-subpath-test-inlinevolume-j4gx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.557774093s
Oct 12 19:24:48.045: INFO: Pod "pod-subpath-test-inlinevolume-j4gx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.668134973s
Oct 12 19:24:50.155: INFO: Pod "pod-subpath-test-inlinevolume-j4gx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.77863666s
STEP: Saw pod success
Oct 12 19:24:50.156: INFO: Pod "pod-subpath-test-inlinevolume-j4gx" satisfied condition "Succeeded or Failed"
Oct 12 19:24:50.265: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-j4gx container test-container-volume-inlinevolume-j4gx: <nil>
STEP: delete the pod
Oct 12 19:24:50.501: INFO: Waiting for pod pod-subpath-test-inlinevolume-j4gx to disappear
Oct 12 19:24:50.613: INFO: Pod pod-subpath-test-inlinevolume-j4gx no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-j4gx
Oct 12 19:24:50.613: INFO: Deleting pod "pod-subpath-test-inlinevolume-j4gx" in namespace "provisioning-5368"
STEP: Deleting pod
Oct 12 19:24:50.723: INFO: Deleting pod "pod-subpath-test-inlinevolume-j4gx" in namespace "provisioning-5368"
Oct 12 19:24:50.943: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5368" in namespace "provisioning-5368" to be "Succeeded or Failed"
Oct 12 19:24:51.054: INFO: Pod "hostpath-symlink-prep-provisioning-5368": Phase="Pending", Reason="", readiness=false. Elapsed: 111.147895ms
Oct 12 19:24:53.165: INFO: Pod "hostpath-symlink-prep-provisioning-5368": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222258352s
Oct 12 19:24:55.286: INFO: Pod "hostpath-symlink-prep-provisioning-5368": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.342849325s
STEP: Saw pod success
Oct 12 19:24:55.286: INFO: Pod "hostpath-symlink-prep-provisioning-5368" satisfied condition "Succeeded or Failed"
Oct 12 19:24:55.286: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5368" in namespace "provisioning-5368"
Oct 12 19:24:55.400: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5368" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:24:55.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-5368" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":69,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:24:55.766: INFO: Driver aws doesn't support ext3 -- skipping
... skipping 103 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:24:56.019: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 115 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is non-root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 12 19:24:49.693: INFO: Waiting up to 5m0s for pod "pod-59e7c6e9-7e0c-4c74-aa26-850411bb0f93" in namespace "emptydir-6029" to be "Succeeded or Failed"
Oct 12 19:24:49.802: INFO: Pod "pod-59e7c6e9-7e0c-4c74-aa26-850411bb0f93": Phase="Pending", Reason="", readiness=false. Elapsed: 108.666354ms
Oct 12 19:24:51.911: INFO: Pod "pod-59e7c6e9-7e0c-4c74-aa26-850411bb0f93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218417731s
Oct 12 19:24:54.022: INFO: Pod "pod-59e7c6e9-7e0c-4c74-aa26-850411bb0f93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328780173s
Oct 12 19:24:56.131: INFO: Pod "pod-59e7c6e9-7e0c-4c74-aa26-850411bb0f93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.437881212s
STEP: Saw pod success
Oct 12 19:24:56.131: INFO: Pod "pod-59e7c6e9-7e0c-4c74-aa26-850411bb0f93" satisfied condition "Succeeded or Failed"
Oct 12 19:24:56.239: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-59e7c6e9-7e0c-4c74-aa26-850411bb0f93 container test-container: <nil>
STEP: delete the pod
Oct 12 19:24:56.493: INFO: Waiting for pod pod-59e7c6e9-7e0c-4c74-aa26-850411bb0f93 to disappear
Oct 12 19:24:56.601: INFO: Pod pod-59e7c6e9-7e0c-4c74-aa26-850411bb0f93 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 71 lines ...
Oct 12 19:24:47.311: INFO: Pod aws-client still exists
Oct 12 19:24:49.202: INFO: Waiting for pod aws-client to disappear
Oct 12 19:24:49.311: INFO: Pod aws-client still exists
Oct 12 19:24:51.201: INFO: Waiting for pod aws-client to disappear
Oct 12 19:24:51.311: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Oct 12 19:24:51.557: INFO: Couldn't delete PD "aws://eu-central-1a/vol-092e0f17bdfa10ac2", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-092e0f17bdfa10ac2 is currently attached to i-08d05e3b4af64ab5b
	status code: 400, request id: dd0894ac-de5b-4f00-a400-069c49beca95
Oct 12 19:24:57.167: INFO: Successfully deleted PD "aws://eu-central-1a/vol-092e0f17bdfa10ac2".
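The two lines above show the cleanup's retry behavior: the EBS delete fails with `VolumeInUse` while the volume is still attached to the instance, so the caller sleeps 5s and tries again once detach has completed. A hedged sketch of that retry loop (the `VolumeInUse` exception and `delete_volume_with_retry` helper here are stand-ins, not a real AWS SDK API):

```python
# Illustrative delete-with-retry loop matching the log's behavior; names are
# hypothetical, not taken from any real AWS client library.
import time

class VolumeInUse(Exception):
    pass

def delete_volume_with_retry(delete_fn, retries=20, backoff=5.0, sleep=time.sleep):
    """Call delete_fn(); on VolumeInUse, sleep `backoff` seconds and retry.

    Returns the number of attempts used; raises if all retries fail.
    """
    for attempt in range(retries):
        try:
            delete_fn()
            return attempt + 1
        except VolumeInUse as err:
            print(f"Couldn't delete PD, sleeping {backoff:g}s: {err}")
            sleep(backoff)
    raise VolumeInUse(f"still in use after {retries} attempts")

if __name__ == "__main__":
    state = {"attached": True}
    def fake_delete():
        if state["attached"]:
            state["attached"] = False  # pretend the volume detaches in the meantime
            raise VolumeInUse("Volume is currently attached to an instance")
        print("Successfully deleted PD.")
    attempts = delete_volume_with_retry(fake_delete, sleep=lambda s: None)
    print(attempts)  # 2
```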
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:24:57.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5279" for this suite.
... skipping 107 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:24:58.019: INFO: Only supported for providers [azure] (not aws)
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:25:01.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6537" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:24:56.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 41 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:793
    apply set/view last-applied
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:828
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":3,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 44 lines ...
Oct 12 19:23:57.407: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8313 to register on node ip-172-20-57-193.eu-central-1.compute.internal
STEP: Creating pod
Oct 12 19:24:07.467: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Oct 12 19:24:07.580: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-tsh5g] to have phase Bound
Oct 12 19:24:07.689: INFO: PersistentVolumeClaim pvc-tsh5g found and phase=Bound (109.667274ms)
STEP: checking for CSIInlineVolumes feature
Oct 12 19:24:24.470: INFO: Error getting logs for pod inline-volume-tjrmt: the server rejected our request for an unknown reason (get pods inline-volume-tjrmt)
Oct 12 19:24:24.691: INFO: Deleting pod "inline-volume-tjrmt" in namespace "csi-mock-volumes-8313"
Oct 12 19:24:24.806: INFO: Wait up to 5m0s for pod "inline-volume-tjrmt" to be fully deleted
STEP: Deleting the previously created pod
Oct 12 19:24:33.025: INFO: Deleting pod "pvc-volume-tester-6zpj5" in namespace "csi-mock-volumes-8313"
Oct 12 19:24:33.139: INFO: Wait up to 5m0s for pod "pvc-volume-tester-6zpj5" to be fully deleted
STEP: Checking CSI driver logs
Oct 12 19:24:37.472: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-6zpj5
Oct 12 19:24:37.472: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-8313
Oct 12 19:24:37.472: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: c035f2dc-f938-4ee4-b6aa-9250e624a198
Oct 12 19:24:37.472: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Oct 12 19:24:37.472: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Oct 12 19:24:37.472: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/c035f2dc-f938-4ee4-b6aa-9250e624a198/volumes/kubernetes.io~csi/pvc-055f9fb2-b4fa-42e8-affe-53e71e470d83/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-6zpj5
Oct 12 19:24:37.472: INFO: Deleting pod "pvc-volume-tester-6zpj5" in namespace "csi-mock-volumes-8313"
STEP: Deleting claim pvc-tsh5g
Oct 12 19:24:37.806: INFO: Waiting up to 2m0s for PersistentVolume pvc-055f9fb2-b4fa-42e8-affe-53e71e470d83 to get deleted
Oct 12 19:24:37.916: INFO: PersistentVolume pvc-055f9fb2-b4fa-42e8-affe-53e71e470d83 found and phase=Released (109.903811ms)
Oct 12 19:24:40.027: INFO: PersistentVolume pvc-055f9fb2-b4fa-42e8-affe-53e71e470d83 found and phase=Released (2.221634443s)
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should be passed when podInfoOnMount=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":2,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:02.607: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 64 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
Oct 12 19:24:56.614: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct 12 19:24:56.614: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-qs6g
STEP: Creating a pod to test subpath
Oct 12 19:24:56.726: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-qs6g" in namespace "provisioning-6285" to be "Succeeded or Failed"
Oct 12 19:24:56.836: INFO: Pod "pod-subpath-test-inlinevolume-qs6g": Phase="Pending", Reason="", readiness=false. Elapsed: 109.677957ms
Oct 12 19:24:58.946: INFO: Pod "pod-subpath-test-inlinevolume-qs6g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22014587s
Oct 12 19:25:01.057: INFO: Pod "pod-subpath-test-inlinevolume-qs6g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330835492s
Oct 12 19:25:03.168: INFO: Pod "pod-subpath-test-inlinevolume-qs6g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442318715s
Oct 12 19:25:05.283: INFO: Pod "pod-subpath-test-inlinevolume-qs6g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.556640462s
STEP: Saw pod success
Oct 12 19:25:05.283: INFO: Pod "pod-subpath-test-inlinevolume-qs6g" satisfied condition "Succeeded or Failed"
Oct 12 19:25:05.393: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-qs6g container test-container-subpath-inlinevolume-qs6g: <nil>
STEP: delete the pod
Oct 12 19:25:05.623: INFO: Waiting for pod pod-subpath-test-inlinevolume-qs6g to disappear
Oct 12 19:25:05.732: INFO: Pod pod-subpath-test-inlinevolume-qs6g no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-qs6g
Oct 12 19:25:05.732: INFO: Deleting pod "pod-subpath-test-inlinevolume-qs6g" in namespace "provisioning-6285"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:12.553 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:531
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class","total":-1,"completed":8,"skipped":76,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:08.360: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 99 lines ...
Oct 12 19:24:40.112: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:41.094: INFO: Exec stderr: ""
Oct 12 19:24:43.427: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-7385"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-7385"/host; echo host > "/var/lib/kubelet/mount-propagation-7385"/host/file] Namespace:mount-propagation-7385 PodName:hostexec-ip-172-20-32-55.eu-central-1.compute.internal-zjh5p ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 12 19:24:43.428: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:44.324: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7385 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:44.324: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:45.066: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Oct 12 19:24:45.175: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7385 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:45.176: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:45.908: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Oct 12 19:24:46.017: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7385 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:46.017: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:46.754: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Oct 12 19:24:46.863: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7385 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:46.863: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:47.615: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Oct 12 19:24:47.724: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7385 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:47.724: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:48.482: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Oct 12 19:24:48.592: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7385 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:48.592: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:49.387: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Oct 12 19:24:49.496: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7385 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:49.496: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:50.242: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Oct 12 19:24:50.352: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7385 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:50.352: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:51.074: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Oct 12 19:24:51.184: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7385 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:51.184: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:51.906: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Oct 12 19:24:52.015: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7385 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:52.016: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:52.937: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Oct 12 19:24:53.046: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7385 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:53.046: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:53.794: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Oct 12 19:24:53.906: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7385 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:53.906: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:54.695: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Oct 12 19:24:54.805: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7385 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:54.805: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:55.561: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Oct 12 19:24:55.670: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7385 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:55.671: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:56.402: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Oct 12 19:24:56.511: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7385 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:56.511: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:57.270: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Oct 12 19:24:57.380: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7385 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:57.380: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:58.110: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Oct 12 19:24:58.219: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7385 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:58.219: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:59.007: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Oct 12 19:24:59.116: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7385 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:59.116: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:59.857: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Oct 12 19:24:59.966: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7385 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:59.966: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:25:00.704: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Oct 12 19:25:00.815: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7385 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:25:00.815: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:25:01.599: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Oct 12 19:25:01.599: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-7385"/master/file` = master] Namespace:mount-propagation-7385 PodName:hostexec-ip-172-20-32-55.eu-central-1.compute.internal-zjh5p ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 12 19:25:01.599: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:25:02.514: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-7385"/slave/file] Namespace:mount-propagation-7385 PodName:hostexec-ip-172-20-32-55.eu-central-1.compute.internal-zjh5p ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 12 19:25:02.514: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:25:03.271: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-7385"/host] Namespace:mount-propagation-7385 PodName:hostexec-ip-172-20-32-55.eu-central-1.compute.internal-zjh5p ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 12 19:25:03.272: INFO: >>> kubeConfig: /root/.kube/config
... skipping 21 lines ...
• [SLOW TEST:65.756 seconds]
[sig-node] Mount propagation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should propagate mounts to the host
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82
------------------------------
{"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":4,"skipped":68,"failed":0}
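The mount-propagation checks above boil down to writing a marker file on one side of a mount and testing its visibility from the other — compare the logged `nsenter ... sh -c test `cat ".../master/file"` = master` command. A minimal stand-alone sketch of that visibility check, using a throwaway scratch path rather than the test's real `/var/lib/kubelet/<namespace>/` directory on the node:

```shell
# Hypothetical scratch path; the real check runs via nsenter against
# /var/lib/kubelet/<test-namespace>/ in the node's root mount namespace.
demo=/tmp/mount-propagation-demo
mkdir -p "$demo/master"
echo master > "$demo/master/file"
# The suite's visibility check, same shape as in the log: propagation
# worked iff the marker file reads back as "master" from the peer side.
test "$(cat "$demo/master/file")" = master && echo "master mount visible"
```

The slave/private/host variants earlier in the log are the negative cases: there the `cat` fails with "No such file or directory", which is the expected outcome for non-propagating mounts.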

S
------------------------------
[BeforeEach] [sig-instrumentation] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:25:09.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7088" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:23:35.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 107 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume without restarting pod if nodeExpansion=off
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":4,"skipped":23,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:10.445: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:10.644: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 188 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":29,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":5,"skipped":13,"failed":0}
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:23:58.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 37 lines ...
Oct 12 19:24:32.508: INFO: Pod pod-with-prestop-http-hook still exists
Oct 12 19:24:34.398: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct 12 19:24:34.507: INFO: Pod pod-with-prestop-http-hook still exists
Oct 12 19:24:36.399: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct 12 19:24:36.508: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
Oct 12 19:25:06.509: FAIL: Timed out after 30.001s.
Expected
    <*errors.errorString | 0xc002ce6700>: {
        s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"2021/10/12 19:24:00 Started HTTP server on port 8080\\n2021/10/12 19:24:00 Started UDP server on port  8081\\n\"",
    }
to be nil

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/node.glob..func11.1.2(0xc002506c00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:79 +0x342
... skipping 17 lines ...
Oct 12 19:25:06.619: INFO: At 2021-10-12 19:24:00 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-32-55.eu-central-1.compute.internal} Started: Started container agnhost-container
Oct 12 19:25:06.619: INFO: At 2021-10-12 19:24:03 +0000 UTC - event for pod-with-prestop-http-hook: {default-scheduler } Scheduled: Successfully assigned container-lifecycle-hook-4756/pod-with-prestop-http-hook to ip-172-20-61-115.eu-central-1.compute.internal
Oct 12 19:25:06.619: INFO: At 2021-10-12 19:24:04 +0000 UTC - event for pod-with-prestop-http-hook: {kubelet ip-172-20-61-115.eu-central-1.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.4.1" already present on machine
Oct 12 19:25:06.619: INFO: At 2021-10-12 19:24:05 +0000 UTC - event for pod-with-prestop-http-hook: {kubelet ip-172-20-61-115.eu-central-1.compute.internal} Created: Created container pod-with-prestop-http-hook
Oct 12 19:25:06.619: INFO: At 2021-10-12 19:24:05 +0000 UTC - event for pod-with-prestop-http-hook: {kubelet ip-172-20-61-115.eu-central-1.compute.internal} Started: Started container pod-with-prestop-http-hook
Oct 12 19:25:06.619: INFO: At 2021-10-12 19:24:10 +0000 UTC - event for pod-with-prestop-http-hook: {kubelet ip-172-20-61-115.eu-central-1.compute.internal} Killing: Stopping container pod-with-prestop-http-hook
Oct 12 19:25:06.619: INFO: At 2021-10-12 19:24:40 +0000 UTC - event for pod-with-prestop-http-hook: {kubelet ip-172-20-61-115.eu-central-1.compute.internal} FailedPreStopHook: HTTP lifecycle hook (/echo?msg=prestop) for Container "pod-with-prestop-http-hook" in Pod "pod-with-prestop-http-hook_container-lifecycle-hook-4756(45516be1-939a-42db-a047-4120a5b5e02b)" failed - error: Get "http://100.96.4.30:8080//echo?msg=prestop": dial tcp 100.96.4.30:8080: i/o timeout, message: ""
Oct 12 19:25:06.728: INFO: POD                      NODE                                           PHASE    GRACE  CONDITIONS
Oct 12 19:25:06.729: INFO: pod-handle-http-request  ip-172-20-32-55.eu-central-1.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-12 19:23:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-12 19:24:01 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-12 19:24:01 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-12 19:23:59 +0000 UTC  }]
Oct 12 19:25:06.729: INFO: 
Oct 12 19:25:06.839: INFO: 
Logging node info for node ip-172-20-32-55.eu-central-1.compute.internal
Oct 12 19:25:06.949: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-32-55.eu-central-1.compute.internal    d4114834-c2b7-4ba5-be09-57ef7df0cb89 7193 0 2021-10-12 19:20:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-central-1 failure-domain.beta.kubernetes.io/zone:eu-central-1a io.kubernetes.storage.mock/node:some-mock-node kops.k8s.io/instancegroup:nodes-eu-central-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-32-55.eu-central-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.kubernetes.io/region:eu-central-1 topology.kubernetes.io/zone:eu-central-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-64":"csi-mock-csi-mock-volumes-64"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kops-controller Update v1 2021-10-12 19:20:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}}} {kube-controller-manager Update v1 2021-10-12 19:23:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}}} {kubelet Update v1 2021-10-12 19:24:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-central-1a/i-02eb4501265093bcc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47455764480 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4061720576 0} {<nil>} 3966524Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42710187962 0} {<nil>} 42710187962 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3956862976 0} {<nil>} 3864124Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-12 19:24:53 
+0000 UTC,LastTransitionTime:2021-10-12 19:19:53 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-12 19:24:53 +0000 UTC,LastTransitionTime:2021-10-12 19:19:53 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-12 19:24:53 +0000 UTC,LastTransitionTime:2021-10-12 19:19:53 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-12 19:24:53 +0000 UTC,LastTransitionTime:2021-10-12 19:20:12 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.32.55,},NodeAddress{Type:ExternalIP,Address:3.67.193.7,},NodeAddress{Type:Hostname,Address:ip-172-20-32-55.eu-central-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-32-55.eu-central-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-67-193-7.eu-central-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2ea2383ed96e95048d0fa7f35e04f5,SystemUUID:ec2ea238-3ed9-6e95-048d-0fa7f35e04f5,BootID:96651c1c-97be-47be-ba65-81db1fa077ae,KernelVersion:5.10.69-flatcar,OSImage:Flatcar Container Linux by Kinvolk 2905.2.5 (Oklo),ContainerRuntimeVersion:containerd://1.5.4,KubeletVersion:v1.21.5,KubeProxyVersion:v1.21.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.21.5],SizeBytes:105352393,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/kopeio/networking-agent@sha256:2d16bdbc3257c42cdc59b05b8fad86653033f19cfafa709f263e93c8f7002932 docker.io/kopeio/networking-agent:1.0.20181028],SizeBytes:25781346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 
k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
... skipping 217 lines ...
    should execute prestop http hook properly [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Oct 12 19:25:06.509: Timed out after 30.001s.
    Expected
        <*errors.errorString | 0xc002ce6700>: {
            s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"2021/10/12 19:24:00 Started HTTP server on port 8080\\n2021/10/12 19:24:00 Started UDP server on port  8081\\n\"",
        }
    to be nil

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:79
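The failure itself is mechanical: the assertion greps the server pod's log for the prestop request, and the captured output shows the server started but never received it. Reproducing the match outside the framework (pattern and server output copied from the error message above; the surrounding script is illustrative, not the framework's code):

```shell
# Pattern and observed server log copied from the failure message above.
pattern='GET /echo\?msg=prestop'
observed='2021/10/12 19:24:00 Started HTTP server on port 8080
2021/10/12 19:24:00 Started UDP server on port  8081'

# grep finds no match: the hook's GET never reached the server pod
# (the kubelet's call timed out, per the FailedPreStopHook event).
if printf '%s\n' "$observed" | grep -qE "$pattern"; then
  echo "prestop request found"
else
  echo "no prestop request in server log"
fi
```

So the root cause is not the regexp check but the `dial tcp 100.96.4.30:8080: i/o timeout` in the kubelet event — cross-node pod networking failed, and the log assertion merely surfaced it 30s later.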
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":-1,"completed":4,"skipped":30,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:25:05.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
• [SLOW TEST:7.558 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":30,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:25:10.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct 12 19:25:11.123: INFO: Waiting up to 5m0s for pod "security-context-705555c6-54b6-442b-9b75-41c5f8291889" in namespace "security-context-8114" to be "Succeeded or Failed"
Oct 12 19:25:11.233: INFO: Pod "security-context-705555c6-54b6-442b-9b75-41c5f8291889": Phase="Pending", Reason="", readiness=false. Elapsed: 110.017009ms
Oct 12 19:25:13.342: INFO: Pod "security-context-705555c6-54b6-442b-9b75-41c5f8291889": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219480973s
STEP: Saw pod success
Oct 12 19:25:13.342: INFO: Pod "security-context-705555c6-54b6-442b-9b75-41c5f8291889" satisfied condition "Succeeded or Failed"
Oct 12 19:25:13.457: INFO: Trying to get logs from node ip-172-20-32-55.eu-central-1.compute.internal pod security-context-705555c6-54b6-442b-9b75-41c5f8291889 container test-container: <nil>
STEP: delete the pod
Oct 12 19:25:13.687: INFO: Waiting for pod security-context-705555c6-54b6-442b-9b75-41c5f8291889 to disappear
Oct 12 19:25:13.795: INFO: Pod security-context-705555c6-54b6-442b-9b75-41c5f8291889 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:25:13.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-8114" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":5,"skipped":33,"failed":0}
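The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` / `Phase="Pending" ... Elapsed: ...` lines throughout this log come from the framework's poll loop: check the pod phase, sleep roughly two seconds, repeat until a deadline. A simplified sketch of that pattern (the function name, intervals, and the kubectl usage line are illustrative, not the framework's actual API):

```shell
# wait_for <timeout_seconds> <interval_seconds> <command...>
# Polls <command...> until it succeeds or the timeout elapses; mirrors the
# e2e framework's pod-phase wait, which logs one "Elapsed: ..." line per poll.
wait_for() {
  deadline=$(( $(date +%s) + $1 ))
  interval=$2
  shift 2
  while [ "$(date +%s)" -lt "$deadline" ]; do
    "$@" && return 0
    sleep "$interval"
  done
  return 1
}

# Illustrative usage (not from the log):
#   wait_for 300 2 sh -c \
#     'kubectl get pod "$POD" -o jsonpath={.status.phase} | grep -qE "Succeeded|Failed"'
```

In the passing tests above the condition flips within a few polls; the 5m budget only matters when something is wrong, as in the prestop-hook failure earlier.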

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:14.037: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 21 lines ...
Oct 12 19:25:10.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 12 19:25:11.606: INFO: Waiting up to 5m0s for pod "pod-ba50288d-9f9f-4cda-a27b-d1bc8fcc601f" in namespace "emptydir-896" to be "Succeeded or Failed"
Oct 12 19:25:11.716: INFO: Pod "pod-ba50288d-9f9f-4cda-a27b-d1bc8fcc601f": Phase="Pending", Reason="", readiness=false. Elapsed: 109.430567ms
Oct 12 19:25:13.826: INFO: Pod "pod-ba50288d-9f9f-4cda-a27b-d1bc8fcc601f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219429759s
Oct 12 19:25:15.938: INFO: Pod "pod-ba50288d-9f9f-4cda-a27b-d1bc8fcc601f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.331194332s
STEP: Saw pod success
Oct 12 19:25:15.938: INFO: Pod "pod-ba50288d-9f9f-4cda-a27b-d1bc8fcc601f" satisfied condition "Succeeded or Failed"
Oct 12 19:25:16.054: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-ba50288d-9f9f-4cda-a27b-d1bc8fcc601f container test-container: <nil>
STEP: delete the pod
Oct 12 19:25:16.295: INFO: Waiting for pod pod-ba50288d-9f9f-4cda-a27b-d1bc8fcc601f to disappear
Oct 12 19:25:16.404: INFO: Pod pod-ba50288d-9f9f-4cda-a27b-d1bc8fcc601f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.678 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":31,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:16.663: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:238

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":5,"skipped":19,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:25:01.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":6,"skipped":19,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":4,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:24:39.833: INFO: >>> kubeConfig: /root/.kube/config
... skipping 18 lines ...
Oct 12 19:25:04.571: INFO: PersistentVolumeClaim pvc-dxw4w found but phase is Pending instead of Bound.
Oct 12 19:25:06.681: INFO: PersistentVolumeClaim pvc-dxw4w found and phase=Bound (14.890879986s)
Oct 12 19:25:06.681: INFO: Waiting up to 3m0s for PersistentVolume local-j9c2w to have phase Bound
Oct 12 19:25:06.791: INFO: PersistentVolume local-j9c2w found and phase=Bound (109.756662ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-q9zx
STEP: Creating a pod to test subpath
Oct 12 19:25:07.147: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-q9zx" in namespace "provisioning-1608" to be "Succeeded or Failed"
Oct 12 19:25:07.266: INFO: Pod "pod-subpath-test-preprovisionedpv-q9zx": Phase="Pending", Reason="", readiness=false. Elapsed: 118.012155ms
Oct 12 19:25:09.378: INFO: Pod "pod-subpath-test-preprovisionedpv-q9zx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230018023s
Oct 12 19:25:11.489: INFO: Pod "pod-subpath-test-preprovisionedpv-q9zx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.341297785s
Oct 12 19:25:13.600: INFO: Pod "pod-subpath-test-preprovisionedpv-q9zx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.452576145s
Oct 12 19:25:15.712: INFO: Pod "pod-subpath-test-preprovisionedpv-q9zx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.564310002s
STEP: Saw pod success
Oct 12 19:25:15.712: INFO: Pod "pod-subpath-test-preprovisionedpv-q9zx" satisfied condition "Succeeded or Failed"
Oct 12 19:25:15.822: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-q9zx container test-container-volume-preprovisionedpv-q9zx: <nil>
STEP: delete the pod
Oct 12 19:25:16.054: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-q9zx to disappear
Oct 12 19:25:16.164: INFO: Pod pod-subpath-test-preprovisionedpv-q9zx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-q9zx
Oct 12 19:25:16.164: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-q9zx" in namespace "provisioning-1608"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":35,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:17.722: INFO: Only supported for providers [azure] (not aws)
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":5,"skipped":33,"failed":0}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:24:56.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
• [SLOW TEST:21.421 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400
------------------------------
{"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":6,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [sig-instrumentation] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:25:18.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9185" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":7,"skipped":23,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":5,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:25:10.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
Oct 12 19:25:10.734: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 12 19:25:10.956: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1739" in namespace "provisioning-1739" to be "Succeeded or Failed"
Oct 12 19:25:11.066: INFO: Pod "hostpath-symlink-prep-provisioning-1739": Phase="Pending", Reason="", readiness=false. Elapsed: 110.22307ms
Oct 12 19:25:13.176: INFO: Pod "hostpath-symlink-prep-provisioning-1739": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220096218s
Oct 12 19:25:15.287: INFO: Pod "hostpath-symlink-prep-provisioning-1739": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330684958s
Oct 12 19:25:17.397: INFO: Pod "hostpath-symlink-prep-provisioning-1739": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.441113318s
STEP: Saw pod success
Oct 12 19:25:17.397: INFO: Pod "hostpath-symlink-prep-provisioning-1739" satisfied condition "Succeeded or Failed"
Oct 12 19:25:17.397: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1739" in namespace "provisioning-1739"
Oct 12 19:25:17.511: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1739" to be fully deleted
Oct 12 19:25:17.624: INFO: Creating resource for inline volume
Oct 12 19:25:17.625: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Oct 12 19:25:17.625: INFO: Deleting pod "pod-subpath-test-inlinevolume-cp2k" in namespace "provisioning-1739"
Oct 12 19:25:17.856: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1739" in namespace "provisioning-1739" to be "Succeeded or Failed"
Oct 12 19:25:17.966: INFO: Pod "hostpath-symlink-prep-provisioning-1739": Phase="Pending", Reason="", readiness=false. Elapsed: 109.527033ms
Oct 12 19:25:20.077: INFO: Pod "hostpath-symlink-prep-provisioning-1739": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.220926194s
STEP: Saw pod success
Oct 12 19:25:20.077: INFO: Pod "hostpath-symlink-prep-provisioning-1739" satisfied condition "Succeeded or Failed"
Oct 12 19:25:20.077: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1739" in namespace "provisioning-1739"
Oct 12 19:25:20.191: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1739" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:25:20.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-1739" for this suite.
... skipping 20 lines ...
STEP: Creating a kubernetes client
Oct 12 19:25:18.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Oct 12 19:25:18.811: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:25:23.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4244" for this suite.


• [SLOW TEST:5.162 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":7,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:23.449: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 83 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:23:56.156: INFO: >>> kubeConfig: /root/.kube/config
... skipping 121 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:179

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":3,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:23.597: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 5 lines ...
Oct 12 19:25:16.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct 12 19:25:17.335: INFO: Waiting up to 5m0s for pod "pod-c679567d-590f-49d1-971a-333d3d85f46e" in namespace "emptydir-3520" to be "Succeeded or Failed"
Oct 12 19:25:17.452: INFO: Pod "pod-c679567d-590f-49d1-971a-333d3d85f46e": Phase="Pending", Reason="", readiness=false. Elapsed: 116.238162ms
Oct 12 19:25:19.562: INFO: Pod "pod-c679567d-590f-49d1-971a-333d3d85f46e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226758717s
Oct 12 19:25:21.672: INFO: Pod "pod-c679567d-590f-49d1-971a-333d3d85f46e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336796431s
Oct 12 19:25:23.783: INFO: Pod "pod-c679567d-590f-49d1-971a-333d3d85f46e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.447373958s
STEP: Saw pod success
Oct 12 19:25:23.783: INFO: Pod "pod-c679567d-590f-49d1-971a-333d3d85f46e" satisfied condition "Succeeded or Failed"
Oct 12 19:25:23.894: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod pod-c679567d-590f-49d1-971a-333d3d85f46e container test-container: <nil>
STEP: delete the pod
Oct 12 19:25:24.120: INFO: Waiting for pod pod-c679567d-590f-49d1-971a-333d3d85f46e to disappear
Oct 12 19:25:24.230: INFO: Pod pod-c679567d-590f-49d1-971a-333d3d85f46e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.781 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":39,"failed":0}
[BeforeEach] [sig-api-machinery] Generated clientset
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:25:24.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename clientset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 115 lines ...
• [SLOW TEST:56.725 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:130
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":3,"skipped":28,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:26.530: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 64 lines ...
• [SLOW TEST:8.263 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":27,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:27.429: INFO: Only supported for providers [openstack] (not aws)
... skipping 89 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:30.250: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 171 lines ...
• [SLOW TEST:7.330 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":18,"failed":0}
[BeforeEach] [sig-storage] Ephemeralstorage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:24:54.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : projected
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":4,"skipped":18,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":6,"skipped":39,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:25:26.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":7,"skipped":39,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:31.948: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 114 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":6,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:32.153: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 66 lines ...
• [SLOW TEST:6.664 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:34.146: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
Oct 12 19:25:31.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct 12 19:25:31.683: INFO: Waiting up to 5m0s for pod "security-context-bdd9a054-2c02-490e-bab2-ebd095fd1d95" in namespace "security-context-5632" to be "Succeeded or Failed"
Oct 12 19:25:31.793: INFO: Pod "security-context-bdd9a054-2c02-490e-bab2-ebd095fd1d95": Phase="Pending", Reason="", readiness=false. Elapsed: 109.649288ms
Oct 12 19:25:33.902: INFO: Pod "security-context-bdd9a054-2c02-490e-bab2-ebd095fd1d95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21916988s
STEP: Saw pod success
Oct 12 19:25:33.903: INFO: Pod "security-context-bdd9a054-2c02-490e-bab2-ebd095fd1d95" satisfied condition "Succeeded or Failed"
Oct 12 19:25:34.012: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod security-context-bdd9a054-2c02-490e-bab2-ebd095fd1d95 container test-container: <nil>
STEP: delete the pod
Oct 12 19:25:34.244: INFO: Waiting for pod security-context-bdd9a054-2c02-490e-bab2-ebd095fd1d95 to disappear
Oct 12 19:25:34.353: INFO: Pod security-context-bdd9a054-2c02-490e-bab2-ebd095fd1d95 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:25:34.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-5632" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":5,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:34.594: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 137 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:25:34.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct 12 19:25:35.330: INFO: Waiting up to 5m0s for pod "security-context-6329482a-1ebc-4c9e-ab0e-c926b2310491" in namespace "security-context-9330" to be "Succeeded or Failed"
Oct 12 19:25:35.455: INFO: Pod "security-context-6329482a-1ebc-4c9e-ab0e-c926b2310491": Phase="Pending", Reason="", readiness=false. Elapsed: 125.168974ms
Oct 12 19:25:37.565: INFO: Pod "security-context-6329482a-1ebc-4c9e-ab0e-c926b2310491": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234808065s
Oct 12 19:25:39.676: INFO: Pod "security-context-6329482a-1ebc-4c9e-ab0e-c926b2310491": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346005316s
Oct 12 19:25:41.787: INFO: Pod "security-context-6329482a-1ebc-4c9e-ab0e-c926b2310491": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.456614141s
STEP: Saw pod success
Oct 12 19:25:41.787: INFO: Pod "security-context-6329482a-1ebc-4c9e-ab0e-c926b2310491" satisfied condition "Succeeded or Failed"
Oct 12 19:25:41.896: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod security-context-6329482a-1ebc-4c9e-ab0e-c926b2310491 container test-container: <nil>
STEP: delete the pod
Oct 12 19:25:42.120: INFO: Waiting for pod security-context-6329482a-1ebc-4c9e-ab0e-c926b2310491 to disappear
Oct 12 19:25:42.229: INFO: Pod security-context-6329482a-1ebc-4c9e-ab0e-c926b2310491 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.797 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":6,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:42.480: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 41 lines ...
Oct 12 19:25:34.583: INFO: PersistentVolumeClaim pvc-bjvks found but phase is Pending instead of Bound.
Oct 12 19:25:36.693: INFO: PersistentVolumeClaim pvc-bjvks found and phase=Bound (14.885196283s)
Oct 12 19:25:36.693: INFO: Waiting up to 3m0s for PersistentVolume local-clsrw to have phase Bound
Oct 12 19:25:36.803: INFO: PersistentVolume local-clsrw found and phase=Bound (109.81715ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-dbp8
STEP: Creating a pod to test exec-volume-test
Oct 12 19:25:37.135: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-dbp8" in namespace "volume-1581" to be "Succeeded or Failed"
Oct 12 19:25:37.250: INFO: Pod "exec-volume-test-preprovisionedpv-dbp8": Phase="Pending", Reason="", readiness=false. Elapsed: 115.752453ms
Oct 12 19:25:39.361: INFO: Pod "exec-volume-test-preprovisionedpv-dbp8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226517434s
Oct 12 19:25:41.473: INFO: Pod "exec-volume-test-preprovisionedpv-dbp8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337783989s
Oct 12 19:25:43.583: INFO: Pod "exec-volume-test-preprovisionedpv-dbp8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.448321104s
Oct 12 19:25:45.694: INFO: Pod "exec-volume-test-preprovisionedpv-dbp8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.558811882s
STEP: Saw pod success
Oct 12 19:25:45.694: INFO: Pod "exec-volume-test-preprovisionedpv-dbp8" satisfied condition "Succeeded or Failed"
Oct 12 19:25:45.804: INFO: Trying to get logs from node ip-172-20-32-55.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-dbp8 container exec-container-preprovisionedpv-dbp8: <nil>
STEP: delete the pod
Oct 12 19:25:46.035: INFO: Waiting for pod exec-volume-test-preprovisionedpv-dbp8 to disappear
Oct 12 19:25:46.147: INFO: Pod exec-volume-test-preprovisionedpv-dbp8 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-dbp8
Oct 12 19:25:46.147: INFO: Deleting pod "exec-volume-test-preprovisionedpv-dbp8" in namespace "volume-1581"
... skipping 81 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306
    should update the label on a resource  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":7,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 17 lines ...
Oct 12 19:25:20.776: INFO: PersistentVolumeClaim pvc-dz7mw found but phase is Pending instead of Bound.
Oct 12 19:25:22.889: INFO: PersistentVolumeClaim pvc-dz7mw found and phase=Bound (6.44348306s)
Oct 12 19:25:22.889: INFO: Waiting up to 3m0s for PersistentVolume local-cnggw to have phase Bound
Oct 12 19:25:22.998: INFO: PersistentVolume local-cnggw found and phase=Bound (109.623794ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-768d
STEP: Creating a pod to test atomic-volume-subpath
Oct 12 19:25:23.330: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-768d" in namespace "provisioning-5198" to be "Succeeded or Failed"
Oct 12 19:25:23.448: INFO: Pod "pod-subpath-test-preprovisionedpv-768d": Phase="Pending", Reason="", readiness=false. Elapsed: 117.834981ms
Oct 12 19:25:25.559: INFO: Pod "pod-subpath-test-preprovisionedpv-768d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229337281s
Oct 12 19:25:27.671: INFO: Pod "pod-subpath-test-preprovisionedpv-768d": Phase="Running", Reason="", readiness=true. Elapsed: 4.340861121s
Oct 12 19:25:29.782: INFO: Pod "pod-subpath-test-preprovisionedpv-768d": Phase="Running", Reason="", readiness=true. Elapsed: 6.452199901s
Oct 12 19:25:31.893: INFO: Pod "pod-subpath-test-preprovisionedpv-768d": Phase="Running", Reason="", readiness=true. Elapsed: 8.563345948s
Oct 12 19:25:34.004: INFO: Pod "pod-subpath-test-preprovisionedpv-768d": Phase="Running", Reason="", readiness=true. Elapsed: 10.674245822s
Oct 12 19:25:36.114: INFO: Pod "pod-subpath-test-preprovisionedpv-768d": Phase="Running", Reason="", readiness=true. Elapsed: 12.78466875s
Oct 12 19:25:38.228: INFO: Pod "pod-subpath-test-preprovisionedpv-768d": Phase="Running", Reason="", readiness=true. Elapsed: 14.897850812s
Oct 12 19:25:40.338: INFO: Pod "pod-subpath-test-preprovisionedpv-768d": Phase="Running", Reason="", readiness=true. Elapsed: 17.007774897s
Oct 12 19:25:42.451: INFO: Pod "pod-subpath-test-preprovisionedpv-768d": Phase="Running", Reason="", readiness=true. Elapsed: 19.121389492s
Oct 12 19:25:44.563: INFO: Pod "pod-subpath-test-preprovisionedpv-768d": Phase="Running", Reason="", readiness=true. Elapsed: 21.232849195s
Oct 12 19:25:46.673: INFO: Pod "pod-subpath-test-preprovisionedpv-768d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.343167999s
STEP: Saw pod success
Oct 12 19:25:46.673: INFO: Pod "pod-subpath-test-preprovisionedpv-768d" satisfied condition "Succeeded or Failed"
Oct 12 19:25:46.783: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-768d container test-container-subpath-preprovisionedpv-768d: <nil>
STEP: delete the pod
Oct 12 19:25:47.017: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-768d to disappear
Oct 12 19:25:47.127: INFO: Pod pod-subpath-test-preprovisionedpv-768d no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-768d
Oct 12 19:25:47.127: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-768d" in namespace "provisioning-5198"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":9,"skipped":79,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:48.668: INFO: Driver aws doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver aws doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
{"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":13,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:25:12.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
• [SLOW TEST:37.873 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":6,"skipped":13,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Oct 12 19:25:48.402: INFO: Waiting up to 5m0s for pod "pod-acf758e7-5d41-4195-9f71-9ba49c04cbd2" in namespace "emptydir-9105" to be "Succeeded or Failed"
Oct 12 19:25:48.513: INFO: Pod "pod-acf758e7-5d41-4195-9f71-9ba49c04cbd2": Phase="Pending", Reason="", readiness=false. Elapsed: 110.430889ms
Oct 12 19:25:50.623: INFO: Pod "pod-acf758e7-5d41-4195-9f71-9ba49c04cbd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.221135595s
STEP: Saw pod success
Oct 12 19:25:50.624: INFO: Pod "pod-acf758e7-5d41-4195-9f71-9ba49c04cbd2" satisfied condition "Succeeded or Failed"
Oct 12 19:25:50.733: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod pod-acf758e7-5d41-4195-9f71-9ba49c04cbd2 container test-container: <nil>
STEP: delete the pod
Oct 12 19:25:50.961: INFO: Waiting for pod pod-acf758e7-5d41-4195-9f71-9ba49c04cbd2 to disappear
Oct 12 19:25:51.071: INFO: Pod pod-acf758e7-5d41-4195-9f71-9ba49c04cbd2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:25:51.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9105" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":8,"skipped":35,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:51.333: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:25:52.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3677" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":9,"skipped":44,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:52.391: INFO: Driver local doesn't support ext4 -- skipping
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 12 19:25:51.047: INFO: Waiting up to 5m0s for pod "pod-8944b039-3533-4992-8dfd-20e3e75a7449" in namespace "emptydir-5280" to be "Succeeded or Failed"
Oct 12 19:25:51.159: INFO: Pod "pod-8944b039-3533-4992-8dfd-20e3e75a7449": Phase="Pending", Reason="", readiness=false. Elapsed: 111.171405ms
Oct 12 19:25:53.268: INFO: Pod "pod-8944b039-3533-4992-8dfd-20e3e75a7449": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.220249407s
STEP: Saw pod success
Oct 12 19:25:53.268: INFO: Pod "pod-8944b039-3533-4992-8dfd-20e3e75a7449" satisfied condition "Succeeded or Failed"
Oct 12 19:25:53.377: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod pod-8944b039-3533-4992-8dfd-20e3e75a7449 container test-container: <nil>
STEP: delete the pod
Oct 12 19:25:53.602: INFO: Waiting for pod pod-8944b039-3533-4992-8dfd-20e3e75a7449 to disappear
Oct 12 19:25:53.711: INFO: Pod pod-8944b039-3533-4992-8dfd-20e3e75a7449 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:25:53.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5280" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":7,"skipped":16,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:53.943: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 37 lines ...
• [SLOW TEST:25.444 seconds]
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":5,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:57.475: INFO: Driver "csi-hostpath" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 5 lines ...
Oct 12 19:25:53.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 12 19:25:54.614: INFO: Waiting up to 5m0s for pod "pod-69d5ed8e-fd1a-4c9c-a61d-ec2bc18e4237" in namespace "emptydir-9319" to be "Succeeded or Failed"
Oct 12 19:25:54.724: INFO: Pod "pod-69d5ed8e-fd1a-4c9c-a61d-ec2bc18e4237": Phase="Pending", Reason="", readiness=false. Elapsed: 109.797674ms
Oct 12 19:25:56.833: INFO: Pod "pod-69d5ed8e-fd1a-4c9c-a61d-ec2bc18e4237": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218699869s
STEP: Saw pod success
Oct 12 19:25:56.833: INFO: Pod "pod-69d5ed8e-fd1a-4c9c-a61d-ec2bc18e4237" satisfied condition "Succeeded or Failed"
Oct 12 19:25:56.941: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod pod-69d5ed8e-fd1a-4c9c-a61d-ec2bc18e4237 container test-container: <nil>
STEP: delete the pod
Oct 12 19:25:57.167: INFO: Waiting for pod pod-69d5ed8e-fd1a-4c9c-a61d-ec2bc18e4237 to disappear
Oct 12 19:25:57.275: INFO: Pod pod-69d5ed8e-fd1a-4c9c-a61d-ec2bc18e4237 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:25:57.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9319" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":17,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:25:57.507: INFO: Only supported for providers [openstack] (not aws)
... skipping 274 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":7,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:26:05.681: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 64 lines ...
• [SLOW TEST:35.881 seconds]
[sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should verify mounted devices can be resized
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:122
------------------------------
{"msg":"PASSED [sig-storage] Mounted volume expand Should verify mounted devices can be resized","total":-1,"completed":5,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:26:06.407: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 103 lines ...
Oct 12 19:24:30.383: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-64
Oct 12 19:24:30.499: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-64
Oct 12 19:24:30.611: INFO: creating *v1.StatefulSet: csi-mock-volumes-64-2949/csi-mockplugin
Oct 12 19:24:30.722: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-64
Oct 12 19:24:30.834: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-64"
Oct 12 19:24:30.944: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-64 to register on node ip-172-20-32-55.eu-central-1.compute.internal
I1012 19:24:35.011368    5402 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I1012 19:24:35.121675    5402 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-64","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1012 19:24:35.233380    5402 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null}
I1012 19:24:35.343974    5402 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I1012 19:24:35.600703    5402 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-64","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1012 19:24:36.653601    5402 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-64","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null}
STEP: Creating pod
Oct 12 19:24:41.110: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I1012 19:24:41.356394    5402 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-3ba42129-c9c3-4d93-8000-6162d4778e89","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I1012 19:24:44.161807    5402 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-3ba42129-c9c3-4d93-8000-6162d4778e89","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-3ba42129-c9c3-4d93-8000-6162d4778e89"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null}
I1012 19:24:45.327371    5402 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct 12 19:24:45.440: INFO: >>> kubeConfig: /root/.kube/config
I1012 19:24:46.199738    5402 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3ba42129-c9c3-4d93-8000-6162d4778e89/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-3ba42129-c9c3-4d93-8000-6162d4778e89","storage.kubernetes.io/csiProvisionerIdentity":"1634066675389-8081-csi-mock-csi-mock-volumes-64"}},"Response":{},"Error":"","FullError":null}
I1012 19:24:46.319045    5402 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct 12 19:24:46.428: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:47.162: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:47.896: INFO: >>> kubeConfig: /root/.kube/config
I1012 19:24:48.741867    5402 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3ba42129-c9c3-4d93-8000-6162d4778e89/globalmount","target_path":"/var/lib/kubelet/pods/1fa69b58-fa1e-40e0-91ae-27fef447dc58/volumes/kubernetes.io~csi/pvc-3ba42129-c9c3-4d93-8000-6162d4778e89/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-3ba42129-c9c3-4d93-8000-6162d4778e89","storage.kubernetes.io/csiProvisionerIdentity":"1634066675389-8081-csi-mock-csi-mock-volumes-64"}},"Response":{},"Error":"","FullError":null}
Oct 12 19:24:51.563: INFO: Deleting pod "pvc-volume-tester-5tvw6" in namespace "csi-mock-volumes-64"
Oct 12 19:24:51.676: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5tvw6" to be fully deleted
Oct 12 19:24:53.710: INFO: >>> kubeConfig: /root/.kube/config
I1012 19:24:54.482869    5402 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/1fa69b58-fa1e-40e0-91ae-27fef447dc58/volumes/kubernetes.io~csi/pvc-3ba42129-c9c3-4d93-8000-6162d4778e89/mount"},"Response":{},"Error":"","FullError":null}
I1012 19:24:54.625545    5402 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1012 19:24:54.733918    5402 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3ba42129-c9c3-4d93-8000-6162d4778e89/globalmount"},"Response":{},"Error":"","FullError":null}
I1012 19:25:04.047826    5402 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Oct 12 19:25:05.013: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-4lvh8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-64", SelfLink:"", UID:"3ba42129-c9c3-4d93-8000-6162d4778e89", ResourceVersion:"6785", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769663481, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001768408), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001768420)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001459d40), VolumeMode:(*v1.PersistentVolumeMode)(0xc001459d50), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 12 19:25:05.013: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-4lvh8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-64", SelfLink:"", UID:"3ba42129-c9c3-4d93-8000-6162d4778e89", ResourceVersion:"6790", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769663481, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-32-55.eu-central-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0013d6c90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0013d6ca8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0013d6cc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0013d6cd8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0013f4770), VolumeMode:(*v1.PersistentVolumeMode)(0xc0013f4780), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 12 19:25:05.014: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-4lvh8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-64", SelfLink:"", UID:"3ba42129-c9c3-4d93-8000-6162d4778e89", ResourceVersion:"6792", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769663481, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-64", "volume.kubernetes.io/selected-node":"ip-172-20-32-55.eu-central-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032e48a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032e48b8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032e48d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032e48e8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032e4900), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032e4918)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000f025c0), VolumeMode:(*v1.PersistentVolumeMode)(0xc000f025d0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 12 19:25:05.014: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-4lvh8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-64", SelfLink:"", UID:"3ba42129-c9c3-4d93-8000-6162d4778e89", ResourceVersion:"6809", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769663481, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-64"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032e4930), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032e4948)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032e4960), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032e4978)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032e4990), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032e49a8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000f02600), VolumeMode:(*v1.PersistentVolumeMode)(0xc000f02610), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 12 19:25:05.014: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-4lvh8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-64", SelfLink:"", UID:"3ba42129-c9c3-4d93-8000-6162d4778e89", ResourceVersion:"6919", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769663481, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-64", "volume.kubernetes.io/selected-node":"ip-172-20-32-55.eu-central-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032e49d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032e49f0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032e4a08), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032e4a20)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032e4a38), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032e4a50)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000f02640), VolumeMode:(*v1.PersistentVolumeMode)(0xc000f02650), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 122 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":8,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:26:08.881: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":6,"skipped":41,"failed":0}
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:26:07.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 35 lines ...
• [SLOW TEST:22.436 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":10,"skipped":80,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:26:14.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8784" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":11,"skipped":85,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:26:14.373: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
Oct 12 19:26:09.465: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct 12 19:26:09.465: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-zg45
STEP: Creating a pod to test subpath
Oct 12 19:26:09.578: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-zg45" in namespace "provisioning-4325" to be "Succeeded or Failed"
Oct 12 19:26:09.688: INFO: Pod "pod-subpath-test-inlinevolume-zg45": Phase="Pending", Reason="", readiness=false. Elapsed: 109.534577ms
Oct 12 19:26:11.798: INFO: Pod "pod-subpath-test-inlinevolume-zg45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220002002s
Oct 12 19:26:13.911: INFO: Pod "pod-subpath-test-inlinevolume-zg45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.332771092s
STEP: Saw pod success
Oct 12 19:26:13.911: INFO: Pod "pod-subpath-test-inlinevolume-zg45" satisfied condition "Succeeded or Failed"
Oct 12 19:26:14.022: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-zg45 container test-container-subpath-inlinevolume-zg45: <nil>
STEP: delete the pod
Oct 12 19:26:14.249: INFO: Waiting for pod pod-subpath-test-inlinevolume-zg45 to disappear
Oct 12 19:26:14.359: INFO: Pod pod-subpath-test-inlinevolume-zg45 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-zg45
Oct 12 19:26:14.359: INFO: Deleting pod "pod-subpath-test-inlinevolume-zg45" in namespace "provisioning-4325"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:26:14.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Oct 12 19:26:15.097: INFO: Waiting up to 5m0s for pod "client-containers-d342275a-7144-4ff0-94b1-0d0d339bf816" in namespace "containers-6304" to be "Succeeded or Failed"
Oct 12 19:26:15.207: INFO: Pod "client-containers-d342275a-7144-4ff0-94b1-0d0d339bf816": Phase="Pending", Reason="", readiness=false. Elapsed: 109.681531ms
Oct 12 19:26:17.317: INFO: Pod "client-containers-d342275a-7144-4ff0-94b1-0d0d339bf816": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219878681s
STEP: Saw pod success
Oct 12 19:26:17.317: INFO: Pod "client-containers-d342275a-7144-4ff0-94b1-0d0d339bf816" satisfied condition "Succeeded or Failed"
Oct 12 19:26:17.427: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod client-containers-d342275a-7144-4ff0-94b1-0d0d339bf816 container agnhost-container: <nil>
STEP: delete the pod
Oct 12 19:26:17.652: INFO: Waiting for pod client-containers-d342275a-7144-4ff0-94b1-0d0d339bf816 to disappear
Oct 12 19:26:17.761: INFO: Pod client-containers-d342275a-7144-4ff0-94b1-0d0d339bf816 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:26:17.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6304" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":99,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:26:17.992: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 124 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":25,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:25:57.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
Oct 12 19:26:01.858: INFO: Creating new exec pod
Oct 12 19:26:07.191: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4520 exec execpodxw9zx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct 12 19:26:08.362: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalip-test 80\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Oct 12 19:26:08.362: INFO: stdout: ""
Oct 12 19:26:09.363: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4520 exec execpodxw9zx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct 12 19:26:12.549: INFO: rc: 1
Oct 12 19:26:12.549: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4520 exec execpodxw9zx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalip-test 80
+ echo hostName
nc: connect to externalip-test port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:26:13.363: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4520 exec execpodxw9zx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct 12 19:26:14.521: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalip-test 80\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Oct 12 19:26:14.521: INFO: stdout: ""
Oct 12 19:26:15.363: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4520 exec execpodxw9zx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
... skipping 22 lines ...
• [SLOW TEST:25.576 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":5,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
... skipping 43 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":3,"skipped":18,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 43 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":41,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:26:10.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":41,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:26:25.649: INFO: Only supported for providers [azure] (not aws)
... skipping 110 lines ...
Oct 12 19:26:20.286: INFO: PersistentVolumeClaim pvc-ftnfx found but phase is Pending instead of Bound.
Oct 12 19:26:22.398: INFO: PersistentVolumeClaim pvc-ftnfx found and phase=Bound (12.762730403s)
Oct 12 19:26:22.398: INFO: Waiting up to 3m0s for PersistentVolume local-b5brg to have phase Bound
Oct 12 19:26:22.506: INFO: PersistentVolume local-b5brg found and phase=Bound (108.577791ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-m5cg
STEP: Creating a pod to test subpath
Oct 12 19:26:22.832: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-m5cg" in namespace "provisioning-1535" to be "Succeeded or Failed"
Oct 12 19:26:22.940: INFO: Pod "pod-subpath-test-preprovisionedpv-m5cg": Phase="Pending", Reason="", readiness=false. Elapsed: 107.810152ms
Oct 12 19:26:25.049: INFO: Pod "pod-subpath-test-preprovisionedpv-m5cg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216838199s
Oct 12 19:26:27.158: INFO: Pod "pod-subpath-test-preprovisionedpv-m5cg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.325286519s
STEP: Saw pod success
Oct 12 19:26:27.158: INFO: Pod "pod-subpath-test-preprovisionedpv-m5cg" satisfied condition "Succeeded or Failed"
Oct 12 19:26:27.266: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-m5cg container test-container-subpath-preprovisionedpv-m5cg: <nil>
STEP: delete the pod
Oct 12 19:26:27.492: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-m5cg to disappear
Oct 12 19:26:27.600: INFO: Pod pod-subpath-test-preprovisionedpv-m5cg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-m5cg
Oct 12 19:26:27.600: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-m5cg" in namespace "provisioning-1535"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":40,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:26:29.195: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 63 lines ...
• [SLOW TEST:35.089 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":9,"skipped":19,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:26:32.626: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct 12 19:26:29.862: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38a41a5e-2031-4c57-9ea6-095e8418d16f" in namespace "downward-api-9004" to be "Succeeded or Failed"
Oct 12 19:26:29.970: INFO: Pod "downwardapi-volume-38a41a5e-2031-4c57-9ea6-095e8418d16f": Phase="Pending", Reason="", readiness=false. Elapsed: 107.774924ms
Oct 12 19:26:32.080: INFO: Pod "downwardapi-volume-38a41a5e-2031-4c57-9ea6-095e8418d16f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.217120426s
STEP: Saw pod success
Oct 12 19:26:32.080: INFO: Pod "downwardapi-volume-38a41a5e-2031-4c57-9ea6-095e8418d16f" satisfied condition "Succeeded or Failed"
Oct 12 19:26:32.188: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod downwardapi-volume-38a41a5e-2031-4c57-9ea6-095e8418d16f container client-container: <nil>
STEP: delete the pod
Oct 12 19:26:32.411: INFO: Waiting for pod downwardapi-volume-38a41a5e-2031-4c57-9ea6-095e8418d16f to disappear
Oct 12 19:26:32.519: INFO: Pod downwardapi-volume-38a41a5e-2031-4c57-9ea6-095e8418d16f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:26:32.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9004" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
... skipping 44 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":53,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:26:36.845: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 168 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:26:37.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4870" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":10,"skipped":49,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":27,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:26:38.733: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 136 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:26:40.654: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 38 lines ...
Oct 12 19:26:34.830: INFO: PersistentVolumeClaim pvc-gw8sj found but phase is Pending instead of Bound.
Oct 12 19:26:36.940: INFO: PersistentVolumeClaim pvc-gw8sj found and phase=Bound (8.550316985s)
Oct 12 19:26:36.940: INFO: Waiting up to 3m0s for PersistentVolume local-w6rjj to have phase Bound
Oct 12 19:26:37.050: INFO: PersistentVolume local-w6rjj found and phase=Bound (109.467694ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-wd8r
STEP: Creating a pod to test exec-volume-test
Oct 12 19:26:37.378: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-wd8r" in namespace "volume-2103" to be "Succeeded or Failed"
Oct 12 19:26:37.487: INFO: Pod "exec-volume-test-preprovisionedpv-wd8r": Phase="Pending", Reason="", readiness=false. Elapsed: 109.160181ms
Oct 12 19:26:39.597: INFO: Pod "exec-volume-test-preprovisionedpv-wd8r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218901193s
Oct 12 19:26:41.707: INFO: Pod "exec-volume-test-preprovisionedpv-wd8r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.328614969s
STEP: Saw pod success
Oct 12 19:26:41.707: INFO: Pod "exec-volume-test-preprovisionedpv-wd8r" satisfied condition "Succeeded or Failed"
Oct 12 19:26:41.816: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-wd8r container exec-container-preprovisionedpv-wd8r: <nil>
STEP: delete the pod
Oct 12 19:26:42.043: INFO: Waiting for pod exec-volume-test-preprovisionedpv-wd8r to disappear
Oct 12 19:26:42.152: INFO: Pod exec-volume-test-preprovisionedpv-wd8r no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-wd8r
Oct 12 19:26:42.152: INFO: Deleting pod "exec-volume-test-preprovisionedpv-wd8r" in namespace "volume-2103"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 53 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":5,"skipped":59,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:26:44.686: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 133 lines ...
Oct 12 19:26:35.024: INFO: PersistentVolumeClaim pvc-ttsvf found but phase is Pending instead of Bound.
Oct 12 19:26:37.136: INFO: PersistentVolumeClaim pvc-ttsvf found and phase=Bound (6.44728305s)
Oct 12 19:26:37.136: INFO: Waiting up to 3m0s for PersistentVolume local-n56ws to have phase Bound
Oct 12 19:26:37.246: INFO: PersistentVolume local-n56ws found and phase=Bound (110.279875ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-lk4h
STEP: Creating a pod to test subpath
Oct 12 19:26:37.581: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lk4h" in namespace "provisioning-5936" to be "Succeeded or Failed"
Oct 12 19:26:37.692: INFO: Pod "pod-subpath-test-preprovisionedpv-lk4h": Phase="Pending", Reason="", readiness=false. Elapsed: 110.506123ms
Oct 12 19:26:39.804: INFO: Pod "pod-subpath-test-preprovisionedpv-lk4h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222260471s
Oct 12 19:26:41.915: INFO: Pod "pod-subpath-test-preprovisionedpv-lk4h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.333601802s
STEP: Saw pod success
Oct 12 19:26:41.915: INFO: Pod "pod-subpath-test-preprovisionedpv-lk4h" satisfied condition "Succeeded or Failed"
Oct 12 19:26:42.026: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-lk4h container test-container-subpath-preprovisionedpv-lk4h: <nil>
STEP: delete the pod
Oct 12 19:26:42.260: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lk4h to disappear
Oct 12 19:26:42.371: INFO: Pod pod-subpath-test-preprovisionedpv-lk4h no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-lk4h
Oct 12 19:26:42.371: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lk4h" in namespace "provisioning-5936"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":63,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:26:45.648: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:26:46.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-7314" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":10,"skipped":65,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
... skipping 168 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":7,"skipped":77,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:26:50.028: INFO: Only supported for providers [gce gke] (not aws)
... skipping 43 lines ...
• [SLOW TEST:76.162 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":45,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:26:50.358: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 118 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 132 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":4,"skipped":43,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:26:56.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-1492" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":5,"skipped":45,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:26:57.233: INFO: Only supported for providers [gce gke] (not aws)
... skipping 132 lines ...
Oct 12 19:26:49.970: INFO: PersistentVolumeClaim pvc-5zg9r found but phase is Pending instead of Bound.
Oct 12 19:26:52.085: INFO: PersistentVolumeClaim pvc-5zg9r found and phase=Bound (4.334138051s)
Oct 12 19:26:52.086: INFO: Waiting up to 3m0s for PersistentVolume local-d8k5z to have phase Bound
Oct 12 19:26:52.195: INFO: PersistentVolume local-d8k5z found and phase=Bound (109.552685ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-m9qq
STEP: Creating a pod to test exec-volume-test
Oct 12 19:26:52.525: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-m9qq" in namespace "volume-823" to be "Succeeded or Failed"
Oct 12 19:26:52.635: INFO: Pod "exec-volume-test-preprovisionedpv-m9qq": Phase="Pending", Reason="", readiness=false. Elapsed: 109.4655ms
Oct 12 19:26:54.745: INFO: Pod "exec-volume-test-preprovisionedpv-m9qq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21924132s
STEP: Saw pod success
Oct 12 19:26:54.745: INFO: Pod "exec-volume-test-preprovisionedpv-m9qq" satisfied condition "Succeeded or Failed"
Oct 12 19:26:54.854: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-m9qq container exec-container-preprovisionedpv-m9qq: <nil>
STEP: delete the pod
Oct 12 19:26:55.078: INFO: Waiting for pod exec-volume-test-preprovisionedpv-m9qq to disappear
Oct 12 19:26:55.189: INFO: Pod exec-volume-test-preprovisionedpv-m9qq no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-m9qq
Oct 12 19:26:55.189: INFO: Deleting pod "exec-volume-test-preprovisionedpv-m9qq" in namespace "volume-823"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 148 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":22,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:01.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3096" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":6,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 92 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":10,"skipped":47,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:04.745: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 94 lines ...
• [SLOW TEST:18.360 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":11,"skipped":67,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:05.058: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 44 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-d93d5b05-102e-4248-8999-edb17ddf79a2
STEP: Creating a pod to test consume configMaps
Oct 12 19:27:05.871: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2eceb121-67f0-4524-a859-c5facbae3486" in namespace "projected-5125" to be "Succeeded or Failed"
Oct 12 19:27:06.048: INFO: Pod "pod-projected-configmaps-2eceb121-67f0-4524-a859-c5facbae3486": Phase="Pending", Reason="", readiness=false. Elapsed: 176.270585ms
Oct 12 19:27:08.160: INFO: Pod "pod-projected-configmaps-2eceb121-67f0-4524-a859-c5facbae3486": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.288801914s
STEP: Saw pod success
Oct 12 19:27:08.160: INFO: Pod "pod-projected-configmaps-2eceb121-67f0-4524-a859-c5facbae3486" satisfied condition "Succeeded or Failed"
Oct 12 19:27:08.271: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod pod-projected-configmaps-2eceb121-67f0-4524-a859-c5facbae3486 container agnhost-container: <nil>
STEP: delete the pod
Oct 12 19:27:08.513: INFO: Waiting for pod pod-projected-configmaps-2eceb121-67f0-4524-a859-c5facbae3486 to disappear
Oct 12 19:27:08.624: INFO: Pod pod-projected-configmaps-2eceb121-67f0-4524-a859-c5facbae3486 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:08.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5125" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":72,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
Oct 12 19:23:57.570: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Oct 12 19:24:04.464: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Oct 12 19:24:04.464: INFO: Going to poll 100.96.4.20 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Oct 12 19:24:04.589: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:04.589: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:06.380: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:24:06.380: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:24:08.492: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:24:08.492: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:24:10.250: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:24:10.250: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
... skipping 100 lines ...
Oct 12 19:25:49.657: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:25:49.657: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:25:51.422: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:25:51.422: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:25:53.533: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:25:53.533: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:25:55.291: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:25:55.291: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:25:57.401: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:25:57.401: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:25:59.134: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:25:59.134: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:26:01.245: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:26:01.245: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:26:03.359: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:26:03.359: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:26:05.494: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:26:05.494: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:26:07.253: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:26:07.253: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:26:09.364: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:26:09.364: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:26:11.203: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:26:11.203: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:26:13.314: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:26:13.315: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:26:15.096: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:26:15.097: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:26:17.207: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:26:17.207: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:26:18.963: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:26:18.963: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:26:21.074: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:26:21.074: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:26:22.828: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:26:22.828: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:26:24.939: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:26:24.940: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:26:26.828: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:26:26.829: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:26:28.941: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:26:28.941: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:26:30.838: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:26:30.838: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:26:32.949: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:26:32.949: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:26:34.705: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:26:34.705: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:26:36.816: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:26:36.816: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:26:38.592: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:26:38.592: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:26:40.702: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:26:40.702: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:26:42.442: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:26:42.442: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:26:44.553: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:26:44.554: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:26:46.283: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:26:46.283: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:26:48.394: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:26:48.394: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:26:50.291: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:26:50.291: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:26:52.402: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:26:52.402: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:26:54.202: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:26:54.202: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:26:56.313: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:26:56.313: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:26:58.044: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:26:58.044: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:27:00.155: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7712 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:27:00.155: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:27:01.916: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.4.20 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct 12 19:27:01.916: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0], actual=[])
Oct 12 19:27:03.917: INFO: 
Output of kubectl describe pod pod-network-test-7712/netserver-0:

Oct 12 19:27:03.917: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=pod-network-test-7712 describe pod netserver-0 --namespace=pod-network-test-7712'
Oct 12 19:27:04.569: INFO: stderr: ""
... skipping 237 lines ...
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m40s  default-scheduler  Successfully assigned pod-network-test-7712/netserver-3 to ip-172-20-61-115.eu-central-1.compute.internal
  Normal  Pulled     3m39s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    3m39s  kubelet            Created container webserver
  Normal  Started    3m38s  kubelet            Started container webserver

Oct 12 19:27:06.586: FAIL: Error dialing UDP from node to pod: failed to find expected endpoints, 
tries 46
Command echo hostName | nc -w 1 -u 100.96.4.20 8081
retrieved map[]
expected map[netserver-0:{}]

Full Stack Trace
... skipping 247 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Oct 12 19:27:06.586: Error dialing UDP from node to pod: failed to find expected endpoints, 
    tries 46
    Command echo hostName | nc -w 1 -u 100.96.4.20 8081
    retrieved map[]
    expected map[netserver-0:{}]

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
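The repeated `command terminated with exit code 1, stdout: ""` entries above come from the probe pipeline `nc -w 1 -u <pod-ip> <port> | grep -v '^\s*$'`: when the UDP reply is empty, `grep` selects no lines and exits 1. A minimal local illustration of that failure mode (no cluster needed; `netserver-0` stands in for the expected hostname reply, and POSIX `[[:space:]]` is used in place of GNU `\s`):

```shell
# A non-empty reply survives the blank-line filter and exits 0.
printf 'netserver-0\n\n' | grep -v '^[[:space:]]*$'
# An empty reply gives grep nothing to match, so it exits 1 --
# the "exit code 1, stdout: \"\"" seen in the retries above.
printf '' | grep -v '^[[:space:]]*$' || echo "empty reply -> grep exits 1"
```

This is why the test reports `retrieved map[]` after 46 tries: every probe produced an empty reply, so no endpoint name was ever collected.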

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:11.623: INFO: Only supported for providers [openstack] (not aws)
... skipping 133 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":10,"skipped":51,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:14.291: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 52 lines ...
• [SLOW TEST:13.534 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":11,"skipped":72,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:18.416: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 132 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume","total":-1,"completed":7,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:19.087: INFO: Driver aws doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:22.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8962" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":12,"skipped":79,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:22.419: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 94 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-a10b1cc7-08e5-440c-981f-4183cc03c2c9
STEP: Creating a pod to test consume configMaps
Oct 12 19:27:19.877: INFO: Waiting up to 5m0s for pod "pod-configmaps-31b4e6ad-c4da-4690-b27a-ebd7f5534432" in namespace "configmap-8592" to be "Succeeded or Failed"
Oct 12 19:27:19.992: INFO: Pod "pod-configmaps-31b4e6ad-c4da-4690-b27a-ebd7f5534432": Phase="Pending", Reason="", readiness=false. Elapsed: 114.421273ms
Oct 12 19:27:22.106: INFO: Pod "pod-configmaps-31b4e6ad-c4da-4690-b27a-ebd7f5534432": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.228585076s
STEP: Saw pod success
Oct 12 19:27:22.106: INFO: Pod "pod-configmaps-31b4e6ad-c4da-4690-b27a-ebd7f5534432" satisfied condition "Succeeded or Failed"
Oct 12 19:27:22.220: INFO: Trying to get logs from node ip-172-20-32-55.eu-central-1.compute.internal pod pod-configmaps-31b4e6ad-c4da-4690-b27a-ebd7f5534432 container agnhost-container: <nil>
STEP: delete the pod
Oct 12 19:27:22.442: INFO: Waiting for pod pod-configmaps-31b4e6ad-c4da-4690-b27a-ebd7f5534432 to disappear
Oct 12 19:27:22.553: INFO: Pod pod-configmaps-31b4e6ad-c4da-4690-b27a-ebd7f5534432 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:22.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8592" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:22.806: INFO: Only supported for providers [openstack] (not aws)
... skipping 186 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity used, no capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":6,"skipped":88,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:23.474: INFO: Driver "local" does not provide raw block - skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 107 lines ...
• [SLOW TEST:265.803 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a non-local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":2,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:24.105: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:24.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4736" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":9,"skipped":54,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:24.481: INFO: Only supported for providers [azure] (not aws)
... skipping 81 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":13,"skipped":75,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:26.376: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":10,"skipped":28,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:26.728: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:26.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":40,"failed":0}
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:25:47.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 12 19:25:48.456: INFO: created pod
Oct 12 19:25:48.456: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-6825" to be "Succeeded or Failed"
Oct 12 19:25:48.568: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 112.254455ms
Oct 12 19:25:50.679: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 2.223050183s
Oct 12 19:25:52.790: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 4.334550611s
Oct 12 19:25:54.901: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 6.445325773s
Oct 12 19:25:57.013: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 8.55723984s
Oct 12 19:25:59.124: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 10.667822849s
... skipping 18 lines ...
Oct 12 19:26:39.234: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 50.77796803s
Oct 12 19:26:41.344: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 52.888473085s
Oct 12 19:26:43.456: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 55.000285379s
Oct 12 19:26:45.567: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 57.111515541s
Oct 12 19:26:47.678: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 59.221982255s
Oct 12 19:26:49.789: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 1m1.333627329s
Oct 12 19:26:51.900: INFO: Pod "oidc-discovery-validator": Phase="Failed", Reason="", readiness=false. Elapsed: 1m3.443907816s
Oct 12 19:27:21.900: INFO: polling logs
Oct 12 19:27:22.012: INFO: Pod logs: 
2021/10/12 19:25:49 OK: Got token
2021/10/12 19:25:49 validating with in-cluster discovery
2021/10/12 19:25:49 OK: got issuer https://api.internal.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io
2021/10/12 19:25:49 Full, not-validated claims: 
openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://api.internal.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io", Subject:"system:serviceaccount:svcaccounts-6825:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1634067348, NotBefore:1634066748, IssuedAt:1634066748, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-6825", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"37d02488-c982-4e63-b939-aee8e4d9fc93"}}}
2021/10/12 19:26:19 failed to validate with in-cluster discovery: Get "https://api.internal.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io/.well-known/openid-configuration": dial tcp: i/o timeout
2021/10/12 19:26:19 falling back to validating with external discovery
2021/10/12 19:26:19 OK: got issuer https://api.internal.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io
2021/10/12 19:26:19 Full, not-validated claims: 
openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://api.internal.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io", Subject:"system:serviceaccount:svcaccounts-6825:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1634067348, NotBefore:1634066748, IssuedAt:1634066748, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-6825", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"37d02488-c982-4e63-b939-aee8e4d9fc93"}}}
2021/10/12 19:26:49 Get "https://api.internal.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io/.well-known/openid-configuration": dial tcp: i/o timeout

Oct 12 19:27:22.012: FAIL: Unexpected error:
    <*errors.errorString | 0xc001dff850>: {
        s: "pod \"oidc-discovery-validator\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-12 19:25:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-12 19:26:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-12 19:26:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-12 19:25:48 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.20.47.216 PodIP:100.96.2.59 PodIPs:[{IP:100.96.2.59}] StartTime:2021-10-12 19:25:48 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2021-10-12 19:25:49 +0000 UTC,FinishedAt:2021-10-12 19:26:49 +0000 UTC,ContainerID:containerd://7e760d9cbc28d01c6401d118846290875fbc101672fac41b05c8d361c89bcc40,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:containerd://7e760d9cbc28d01c6401d118846290875fbc101672fac41b05c8d361c89bcc40 Started:0xc0020e3c50}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
    pod "oidc-discovery-validator" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-12 19:25:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-12 19:26:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-12 19:26:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-12 19:25:48 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.20.47.216 PodIP:100.96.2.59 PodIPs:[{IP:100.96.2.59}] StartTime:2021-10-12 19:25:48 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2021-10-12 19:25:49 +0000 UTC,FinishedAt:2021-10-12 19:26:49 +0000 UTC,ContainerID:containerd://7e760d9cbc28d01c6401d118846290875fbc101672fac41b05c8d361c89bcc40,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:containerd://7e760d9cbc28d01c6401d118846290875fbc101672fac41b05c8d361c89bcc40 Started:0xc0020e3c50}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/auth.glob..func6.7()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:789 +0xc35
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003246180)
... skipping 10 lines ...
STEP: Found 4 events.
Oct 12 19:27:22.255: INFO: At 2021-10-12 19:25:48 +0000 UTC - event for oidc-discovery-validator: {default-scheduler } Scheduled: Successfully assigned svcaccounts-6825/oidc-discovery-validator to ip-172-20-47-216.eu-central-1.compute.internal
Oct 12 19:27:22.255: INFO: At 2021-10-12 19:25:49 +0000 UTC - event for oidc-discovery-validator: {kubelet ip-172-20-47-216.eu-central-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Oct 12 19:27:22.255: INFO: At 2021-10-12 19:25:49 +0000 UTC - event for oidc-discovery-validator: {kubelet ip-172-20-47-216.eu-central-1.compute.internal} Created: Created container oidc-discovery-validator
Oct 12 19:27:22.255: INFO: At 2021-10-12 19:25:49 +0000 UTC - event for oidc-discovery-validator: {kubelet ip-172-20-47-216.eu-central-1.compute.internal} Started: Started container oidc-discovery-validator
Oct 12 19:27:22.365: INFO: POD                       NODE                                            PHASE   GRACE  CONDITIONS
Oct 12 19:27:22.365: INFO: oidc-discovery-validator  ip-172-20-47-216.eu-central-1.compute.internal  Failed         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-12 19:25:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-12 19:26:49 +0000 UTC ContainersNotReady containers with unready status: [oidc-discovery-validator]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-12 19:26:49 +0000 UTC ContainersNotReady containers with unready status: [oidc-discovery-validator]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-12 19:25:48 +0000 UTC  }]
Oct 12 19:27:22.365: INFO: 
Oct 12 19:27:22.476: INFO: 
Logging node info for node ip-172-20-32-55.eu-central-1.compute.internal
Oct 12 19:27:22.585: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-32-55.eu-central-1.compute.internal    d4114834-c2b7-4ba5-be09-57ef7df0cb89 11546 0 2021-10-12 19:20:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-central-1 failure-domain.beta.kubernetes.io/zone:eu-central-1a io.kubernetes.storage.mock/node:some-mock-node kops.k8s.io/instancegroup:nodes-eu-central-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-32-55.eu-central-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.hostpath.csi/node:ip-172-20-32-55.eu-central-1.compute.internal topology.kubernetes.io/region:eu-central-1 topology.kubernetes.io/zone:eu-central-1a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kops-controller Update v1 2021-10-12 19:20:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}}} {kubelet Update v1 2021-10-12 19:26:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.hostpath.csi/node":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}}} {kube-controller-manager Update v1 2021-10-12 19:26:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}},"f:status":{"f:volumesAttached":{}}}}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-central-1a/i-02eb4501265093bcc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47455764480 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4061720576 0} {<nil>} 3966524Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42710187962 0} {<nil>} 42710187962 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 
0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3956862976 0} {<nil>} 3864124Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-12 19:26:33 +0000 UTC,LastTransitionTime:2021-10-12 19:19:53 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-12 19:26:33 +0000 UTC,LastTransitionTime:2021-10-12 19:19:53 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-12 19:26:33 +0000 UTC,LastTransitionTime:2021-10-12 19:19:53 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-12 19:26:33 +0000 UTC,LastTransitionTime:2021-10-12 19:20:12 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.32.55,},NodeAddress{Type:ExternalIP,Address:3.67.193.7,},NodeAddress{Type:Hostname,Address:ip-172-20-32-55.eu-central-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-32-55.eu-central-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-67-193-7.eu-central-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2ea2383ed96e95048d0fa7f35e04f5,SystemUUID:ec2ea238-3ed9-6e95-048d-0fa7f35e04f5,BootID:96651c1c-97be-47be-ba65-81db1fa077ae,KernelVersion:5.10.69-flatcar,OSImage:Flatcar Container Linux by Kinvolk 2905.2.5 
(Oklo),ContainerRuntimeVersion:containerd://1.5.4,KubeletVersion:v1.21.5,KubeProxyVersion:v1.21.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.21.5],SizeBytes:105352393,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/kopeio/networking-agent@sha256:2d16bdbc3257c42cdc59b05b8fad86653033f19cfafa709f263e93c8f7002932 docker.io/kopeio/networking-agent:1.0.20181028],SizeBytes:25781346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1],SizeBytes:21212251,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782 k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0],SizeBytes:20194320,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09 k8s.gcr.io/sig-storage/csi-attacher:v3.1.0],SizeBytes:20103959,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a 
k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5 k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0],SizeBytes:13995876,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1],SizeBytes:8415088,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 
k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[kubernetes.io/aws-ebs/aws://eu-central-1a/vol-027f9b54ea5817ae0],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/aws-ebs/aws://eu-central-1a/vol-027f9b54ea5817ae0,DevicePath:/dev/xvdca,},},Config:nil,},}
Oct 12 19:27:22.586: INFO: 
Logging kubelet events for node ip-172-20-32-55.eu-central-1.compute.internal
... skipping 176 lines ...
• Failure [99.210 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 12 19:27:22.012: Unexpected error:
      <*errors.errorString | 0xc001dff850>: {
          s: "pod \"oidc-discovery-validator\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-12 19:25:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-12 19:26:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-12 19:26:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-12 19:25:48 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.20.47.216 PodIP:100.96.2.59 PodIPs:[{IP:100.96.2.59}] StartTime:2021-10-12 19:25:48 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2021-10-12 19:25:49 +0000 UTC,FinishedAt:2021-10-12 19:26:49 +0000 UTC,ContainerID:containerd://7e760d9cbc28d01c6401d118846290875fbc101672fac41b05c8d361c89bcc40,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:containerd://7e760d9cbc28d01c6401d118846290875fbc101672fac41b05c8d361c89bcc40 Started:0xc0020e3c50}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
      }
      pod "oidc-discovery-validator" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-12 19:25:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-12 19:26:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-12 19:26:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-12 19:25:48 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.20.47.216 PodIP:100.96.2.59 PodIPs:[{IP:100.96.2.59}] StartTime:2021-10-12 19:25:48 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2021-10-12 19:25:49 +0000 UTC,FinishedAt:2021-10-12 19:26:49 +0000 UTC,ContainerID:containerd://7e760d9cbc28d01c6401d118846290875fbc101672fac41b05c8d361c89bcc40,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:containerd://7e760d9cbc28d01c6401d118846290875fbc101672fac41b05c8d361c89bcc40 Started:0xc0020e3c50}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:789
------------------------------
{"msg":"FAILED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":6,"skipped":40,"failed":1,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:26.904: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 98 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":8,"skipped":81,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:28.194: INFO: Only supported for providers [vsphere] (not aws)
... skipping 40 lines ...
Oct 12 19:27:19.972: INFO: PersistentVolumeClaim pvc-t276k found but phase is Pending instead of Bound.
Oct 12 19:27:22.094: INFO: PersistentVolumeClaim pvc-t276k found and phase=Bound (6.46783264s)
Oct 12 19:27:22.094: INFO: Waiting up to 3m0s for PersistentVolume local-lfxx7 to have phase Bound
Oct 12 19:27:22.206: INFO: PersistentVolume local-lfxx7 found and phase=Bound (111.339253ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-c2lq
STEP: Creating a pod to test subpath
Oct 12 19:27:22.546: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-c2lq" in namespace "provisioning-7381" to be "Succeeded or Failed"
Oct 12 19:27:22.656: INFO: Pod "pod-subpath-test-preprovisionedpv-c2lq": Phase="Pending", Reason="", readiness=false. Elapsed: 109.697633ms
Oct 12 19:27:24.767: INFO: Pod "pod-subpath-test-preprovisionedpv-c2lq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221579358s
Oct 12 19:27:26.878: INFO: Pod "pod-subpath-test-preprovisionedpv-c2lq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.332177065s
STEP: Saw pod success
Oct 12 19:27:26.878: INFO: Pod "pod-subpath-test-preprovisionedpv-c2lq" satisfied condition "Succeeded or Failed"
Oct 12 19:27:26.988: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-c2lq container test-container-volume-preprovisionedpv-c2lq: <nil>
STEP: delete the pod
Oct 12 19:27:27.237: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-c2lq to disappear
Oct 12 19:27:27.347: INFO: Pod pod-subpath-test-preprovisionedpv-c2lq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-c2lq
Oct 12 19:27:27.347: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-c2lq" in namespace "provisioning-7381"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":21,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:29.038: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 37 lines ...
      Driver "local" does not provide raw block - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:113
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":14,"skipped":79,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:27:26.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-7a32cf2e-9f76-4205-a6ce-fb409ce3bf28
STEP: Creating a pod to test consume secrets
Oct 12 19:27:27.965: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ea140232-08f6-49a2-a16b-084f512259d6" in namespace "projected-9614" to be "Succeeded or Failed"
Oct 12 19:27:28.075: INFO: Pod "pod-projected-secrets-ea140232-08f6-49a2-a16b-084f512259d6": Phase="Pending", Reason="", readiness=false. Elapsed: 110.360855ms
Oct 12 19:27:30.187: INFO: Pod "pod-projected-secrets-ea140232-08f6-49a2-a16b-084f512259d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.222087879s
STEP: Saw pod success
Oct 12 19:27:30.187: INFO: Pod "pod-projected-secrets-ea140232-08f6-49a2-a16b-084f512259d6" satisfied condition "Succeeded or Failed"
Oct 12 19:27:30.298: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod pod-projected-secrets-ea140232-08f6-49a2-a16b-084f512259d6 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct 12 19:27:30.534: INFO: Waiting for pod pod-projected-secrets-ea140232-08f6-49a2-a16b-084f512259d6 to disappear
Oct 12 19:27:30.644: INFO: Pod pod-projected-secrets-ea140232-08f6-49a2-a16b-084f512259d6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:30.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9614" for this suite.
STEP: Destroying namespace "secret-namespace-88" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":15,"skipped":79,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Oct 12 19:27:24.670: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct 12 19:27:24.670: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-7tth
STEP: Creating a pod to test subpath
Oct 12 19:27:24.781: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-7tth" in namespace "provisioning-3034" to be "Succeeded or Failed"
Oct 12 19:27:24.890: INFO: Pod "pod-subpath-test-inlinevolume-7tth": Phase="Pending", Reason="", readiness=false. Elapsed: 108.582984ms
Oct 12 19:27:27.000: INFO: Pod "pod-subpath-test-inlinevolume-7tth": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218192997s
Oct 12 19:27:29.109: INFO: Pod "pod-subpath-test-inlinevolume-7tth": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327083045s
Oct 12 19:27:31.218: INFO: Pod "pod-subpath-test-inlinevolume-7tth": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.436615033s
STEP: Saw pod success
Oct 12 19:27:31.218: INFO: Pod "pod-subpath-test-inlinevolume-7tth" satisfied condition "Succeeded or Failed"
Oct 12 19:27:31.327: INFO: Trying to get logs from node ip-172-20-32-55.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-7tth container test-container-volume-inlinevolume-7tth: <nil>
STEP: delete the pod
Oct 12 19:27:31.552: INFO: Waiting for pod pod-subpath-test-inlinevolume-7tth to disappear
Oct 12 19:27:31.661: INFO: Pod pod-subpath-test-inlinevolume-7tth no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-7tth
Oct 12 19:27:31.661: INFO: Deleting pod "pod-subpath-test-inlinevolume-7tth" in namespace "provisioning-3034"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":28,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:27:29.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 12 19:27:29.752: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-f5d2ee2b-d378-4609-b23a-944cf86b3b20" in namespace "security-context-test-9726" to be "Succeeded or Failed"
Oct 12 19:27:29.862: INFO: Pod "busybox-privileged-false-f5d2ee2b-d378-4609-b23a-944cf86b3b20": Phase="Pending", Reason="", readiness=false. Elapsed: 109.837971ms
Oct 12 19:27:31.974: INFO: Pod "busybox-privileged-false-f5d2ee2b-d378-4609-b23a-944cf86b3b20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.221598484s
Oct 12 19:27:31.974: INFO: Pod "busybox-privileged-false-f5d2ee2b-d378-4609-b23a-944cf86b3b20" satisfied condition "Succeeded or Failed"
Oct 12 19:27:32.085: INFO: Got logs for pod "busybox-privileged-false-f5d2ee2b-d378-4609-b23a-944cf86b3b20": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:32.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9726" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":31,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:32.326: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-44cb314d-03a0-4584-a666-d36a43eb5349
STEP: Creating a pod to test consume configMaps
Oct 12 19:27:27.502: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0582fd7e-4d93-4b7a-9b41-fc85f0794d59" in namespace "projected-7264" to be "Succeeded or Failed"
Oct 12 19:27:27.611: INFO: Pod "pod-projected-configmaps-0582fd7e-4d93-4b7a-9b41-fc85f0794d59": Phase="Pending", Reason="", readiness=false. Elapsed: 108.641094ms
Oct 12 19:27:29.721: INFO: Pod "pod-projected-configmaps-0582fd7e-4d93-4b7a-9b41-fc85f0794d59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218546087s
Oct 12 19:27:31.830: INFO: Pod "pod-projected-configmaps-0582fd7e-4d93-4b7a-9b41-fc85f0794d59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327380219s
STEP: Saw pod success
Oct 12 19:27:31.830: INFO: Pod "pod-projected-configmaps-0582fd7e-4d93-4b7a-9b41-fc85f0794d59" satisfied condition "Succeeded or Failed"
Oct 12 19:27:31.941: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-projected-configmaps-0582fd7e-4d93-4b7a-9b41-fc85f0794d59 container agnhost-container: <nil>
STEP: delete the pod
Oct 12 19:27:32.167: INFO: Waiting for pod pod-projected-configmaps-0582fd7e-4d93-4b7a-9b41-fc85f0794d59 to disappear
Oct 12 19:27:32.277: INFO: Pod pod-projected-configmaps-0582fd7e-4d93-4b7a-9b41-fc85f0794d59 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.760 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":29,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 34 lines ...
• [SLOW TEST:10.305 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":13,"skipped":92,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:32.922: INFO: Only supported for providers [gce gke] (not aws)
... skipping 23 lines ...
Oct 12 19:27:28.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
Oct 12 19:27:28.865: INFO: Waiting up to 5m0s for pod "client-containers-e024bac3-f89b-4a8b-8a69-78eeba8c0c0a" in namespace "containers-5756" to be "Succeeded or Failed"
Oct 12 19:27:28.974: INFO: Pod "client-containers-e024bac3-f89b-4a8b-8a69-78eeba8c0c0a": Phase="Pending", Reason="", readiness=false. Elapsed: 108.737891ms
Oct 12 19:27:31.084: INFO: Pod "client-containers-e024bac3-f89b-4a8b-8a69-78eeba8c0c0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218475585s
Oct 12 19:27:33.196: INFO: Pod "client-containers-e024bac3-f89b-4a8b-8a69-78eeba8c0c0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.330453899s
STEP: Saw pod success
Oct 12 19:27:33.196: INFO: Pod "client-containers-e024bac3-f89b-4a8b-8a69-78eeba8c0c0a" satisfied condition "Succeeded or Failed"
Oct 12 19:27:33.305: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod client-containers-e024bac3-f89b-4a8b-8a69-78eeba8c0c0a container agnhost-container: <nil>
STEP: delete the pod
Oct 12 19:27:33.531: INFO: Waiting for pod client-containers-e024bac3-f89b-4a8b-8a69-78eeba8c0c0a to disappear
Oct 12 19:27:33.640: INFO: Pod client-containers-e024bac3-f89b-4a8b-8a69-78eeba8c0c0a no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.689 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":84,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:34.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-380" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":4,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:35.042: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 276 lines ...
Oct 12 19:26:39.324: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-4684fm27l
STEP: creating a claim
Oct 12 19:26:39.435: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-9wn6
STEP: Creating a pod to test subpath
Oct 12 19:26:39.769: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-9wn6" in namespace "provisioning-4684" to be "Succeeded or Failed"
Oct 12 19:26:39.880: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 110.165992ms
Oct 12 19:26:41.991: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221064044s
Oct 12 19:26:44.101: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331242718s
Oct 12 19:26:46.212: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442179649s
Oct 12 19:26:48.324: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.554460427s
Oct 12 19:26:50.435: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.665869582s
Oct 12 19:26:52.546: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.776581896s
Oct 12 19:26:54.658: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.887980318s
Oct 12 19:26:56.768: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.99869733s
STEP: Saw pod success
Oct 12 19:26:56.768: INFO: Pod "pod-subpath-test-dynamicpv-9wn6" satisfied condition "Succeeded or Failed"
Oct 12 19:26:56.878: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-9wn6 container test-container-subpath-dynamicpv-9wn6: <nil>
STEP: delete the pod
Oct 12 19:26:57.114: INFO: Waiting for pod pod-subpath-test-dynamicpv-9wn6 to disappear
Oct 12 19:26:57.223: INFO: Pod pod-subpath-test-dynamicpv-9wn6 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-9wn6
Oct 12 19:26:57.223: INFO: Deleting pod "pod-subpath-test-dynamicpv-9wn6" in namespace "provisioning-4684"
STEP: Creating pod pod-subpath-test-dynamicpv-9wn6
STEP: Creating a pod to test subpath
Oct 12 19:26:57.444: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-9wn6" in namespace "provisioning-4684" to be "Succeeded or Failed"
Oct 12 19:26:57.554: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 109.919163ms
Oct 12 19:26:59.665: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22091424s
Oct 12 19:27:01.776: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331863672s
Oct 12 19:27:03.887: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442862986s
Oct 12 19:27:06.046: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.602111385s
Oct 12 19:27:08.156: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.712112346s
Oct 12 19:27:10.268: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.824316349s
Oct 12 19:27:12.379: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.935355062s
Oct 12 19:27:14.525: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.081108979s
Oct 12 19:27:16.636: INFO: Pod "pod-subpath-test-dynamicpv-9wn6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.191859298s
STEP: Saw pod success
Oct 12 19:27:16.636: INFO: Pod "pod-subpath-test-dynamicpv-9wn6" satisfied condition "Succeeded or Failed"
Oct 12 19:27:16.746: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-9wn6 container test-container-subpath-dynamicpv-9wn6: <nil>
STEP: delete the pod
Oct 12 19:27:16.974: INFO: Waiting for pod pod-subpath-test-dynamicpv-9wn6 to disappear
Oct 12 19:27:17.083: INFO: Pod pod-subpath-test-dynamicpv-9wn6 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-9wn6
Oct 12 19:27:17.083: INFO: Deleting pod "pod-subpath-test-dynamicpv-9wn6" in namespace "provisioning-4684"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":13,"skipped":112,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":7,"skipped":36,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:38.586: INFO: Driver local doesn't support ext3 -- skipping
... skipping 109 lines ...
• [SLOW TEST:8.657 seconds]
[sig-api-machinery] ServerSideApply
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should work for CRDs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:569
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for CRDs","total":-1,"completed":12,"skipped":33,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:41.207: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 128 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":10,"skipped":89,"failed":0}
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:27:39.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:42.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-705" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":11,"skipped":89,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 63 lines ...
Oct 12 19:27:35.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Oct 12 19:27:36.588: INFO: Waiting up to 5m0s for pod "var-expansion-983c6680-e5ff-4220-b1f7-d1e96c5f6581" in namespace "var-expansion-5782" to be "Succeeded or Failed"
Oct 12 19:27:36.697: INFO: Pod "var-expansion-983c6680-e5ff-4220-b1f7-d1e96c5f6581": Phase="Pending", Reason="", readiness=false. Elapsed: 108.637893ms
Oct 12 19:27:38.806: INFO: Pod "var-expansion-983c6680-e5ff-4220-b1f7-d1e96c5f6581": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217867345s
Oct 12 19:27:40.919: INFO: Pod "var-expansion-983c6680-e5ff-4220-b1f7-d1e96c5f6581": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330709121s
Oct 12 19:27:43.028: INFO: Pod "var-expansion-983c6680-e5ff-4220-b1f7-d1e96c5f6581": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.43977087s
STEP: Saw pod success
Oct 12 19:27:43.028: INFO: Pod "var-expansion-983c6680-e5ff-4220-b1f7-d1e96c5f6581" satisfied condition "Succeeded or Failed"
Oct 12 19:27:43.137: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod var-expansion-983c6680-e5ff-4220-b1f7-d1e96c5f6581 container dapi-container: <nil>
STEP: delete the pod
Oct 12 19:27:43.386: INFO: Waiting for pod var-expansion-983c6680-e5ff-4220-b1f7-d1e96c5f6581 to disappear
Oct 12 19:27:43.497: INFO: Pod var-expansion-983c6680-e5ff-4220-b1f7-d1e96c5f6581 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.834 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:43.792: INFO: Only supported for providers [gce gke] (not aws)
... skipping 62 lines ...
Oct 12 19:27:34.901: INFO: PersistentVolumeClaim pvc-spct6 found but phase is Pending instead of Bound.
Oct 12 19:27:37.010: INFO: PersistentVolumeClaim pvc-spct6 found and phase=Bound (8.546388491s)
Oct 12 19:27:37.010: INFO: Waiting up to 3m0s for PersistentVolume local-h24sr to have phase Bound
Oct 12 19:27:37.119: INFO: PersistentVolume local-h24sr found and phase=Bound (108.529609ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zvr2
STEP: Creating a pod to test subpath
Oct 12 19:27:37.452: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zvr2" in namespace "provisioning-2176" to be "Succeeded or Failed"
Oct 12 19:27:37.562: INFO: Pod "pod-subpath-test-preprovisionedpv-zvr2": Phase="Pending", Reason="", readiness=false. Elapsed: 110.608221ms
Oct 12 19:27:39.672: INFO: Pod "pod-subpath-test-preprovisionedpv-zvr2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22011474s
Oct 12 19:27:41.780: INFO: Pod "pod-subpath-test-preprovisionedpv-zvr2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.32870444s
STEP: Saw pod success
Oct 12 19:27:41.780: INFO: Pod "pod-subpath-test-preprovisionedpv-zvr2" satisfied condition "Succeeded or Failed"
Oct 12 19:27:41.889: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-zvr2 container test-container-subpath-preprovisionedpv-zvr2: <nil>
STEP: delete the pod
Oct 12 19:27:42.114: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zvr2 to disappear
Oct 12 19:27:42.222: INFO: Pod pod-subpath-test-preprovisionedpv-zvr2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zvr2
Oct 12 19:27:42.222: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zvr2" in namespace "provisioning-2176"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":10,"skipped":60,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:43.880: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 98 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:44.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7881" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":6,"skipped":54,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:45.190: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 21 lines ...
Oct 12 19:27:42.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct 12 19:27:43.082: INFO: Waiting up to 5m0s for pod "downward-api-b3effcf8-6ef7-4e42-aa46-b901f870f637" in namespace "downward-api-8910" to be "Succeeded or Failed"
Oct 12 19:27:43.191: INFO: Pod "downward-api-b3effcf8-6ef7-4e42-aa46-b901f870f637": Phase="Pending", Reason="", readiness=false. Elapsed: 109.735175ms
Oct 12 19:27:45.301: INFO: Pod "downward-api-b3effcf8-6ef7-4e42-aa46-b901f870f637": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219253097s
STEP: Saw pod success
Oct 12 19:27:45.301: INFO: Pod "downward-api-b3effcf8-6ef7-4e42-aa46-b901f870f637" satisfied condition "Succeeded or Failed"
Oct 12 19:27:45.410: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod downward-api-b3effcf8-6ef7-4e42-aa46-b901f870f637 container dapi-container: <nil>
STEP: delete the pod
Oct 12 19:27:45.635: INFO: Waiting for pod downward-api-b3effcf8-6ef7-4e42-aa46-b901f870f637 to disappear
Oct 12 19:27:45.744: INFO: Pod downward-api-b3effcf8-6ef7-4e42-aa46-b901f870f637 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:45.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8910" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":90,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:45.989: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 137 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":11,"skipped":64,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:47.375: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 98 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec through an HTTP proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:436
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":16,"skipped":84,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:47.998: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
... skipping 63 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data","total":-1,"completed":3,"skipped":48,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:24:57.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
I1012 19:25:04.095774    5490 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-7311, replica count: 3
I1012 19:25:07.247527    5490 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1012 19:25:10.248405    5490 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 12 19:25:10.466: INFO: Creating new exec pod
Oct 12 19:25:13.795: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7311 exec execpod-affinityd49vb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
Oct 12 19:25:20.020: INFO: rc: 1
Oct 12 19:25:20.020: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7311 exec execpod-affinityd49vb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-clusterip-timeout 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
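The probe-and-retry pattern above can be reproduced outside the e2e framework with a small helper (a sketch: `retry` is a hypothetical name, and the commented invocation reuses the pod, namespace, and service names from this log, assuming kubectl is pointed at the test cluster):

```shell
# retry: run a command up to N times, echoing "Retrying..." between
# failed attempts, mirroring the loop the e2e test logs above.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    echo "Retrying..." >&2
    i=$((i + 1))
  done
  return 1
}

# Hypothetical invocation, using the names from the log:
# retry 20 kubectl --namespace=services-7311 exec execpod-affinityd49vb -- \
#   /bin/sh -c 'echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
```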
... skipping 252 lines ...
Oct 12 19:27:26.237: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7311 exec execpod-affinityd49vb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
Oct 12 19:27:32.490: INFO: rc: 1
Oct 12 19:27:32.490: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7311 exec execpod-affinityd49vb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-clusterip-timeout 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:27:32.491: FAIL: Unexpected error:
    <*errors.errorString | 0xc000608610>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-timeout:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-clusterip-timeout:80 over TCP protocol
occurred
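`nc: getaddrinfo: Try again` in the retries above is an EAI_AGAIN DNS failure: the exec pod never resolved the service name at all, which points at cluster DNS rather than at the service's endpoints or kube-proxy rules. A hedged triage sketch (`classify_nc_error` is a hypothetical helper; the match strings are the usual netcat error messages):

```shell
# classify_nc_error: map netcat stderr to the likely failing layer.
# "getaddrinfo: Try again" (EAI_AGAIN) = name resolution failed;
# "Connection refused" = the name resolved but nothing was listening.
classify_nc_error() {
  case "$1" in
    *"getaddrinfo: Try again"*) echo "dns-resolution-failure" ;;
    *"Connection refused"*)     echo "connection-refused" ;;
    *"timed out"*)              echo "connect-timeout" ;;
    *)                          echo "unknown" ;;
  esac
}
```

With the stderr from this log, `classify_nc_error "nc: getaddrinfo: Try again"` yields `dns-resolution-failure`, suggesting the next checks would be the exec pod's `/etc/resolv.conf` and the cluster DNS pods.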

... skipping 196 lines ...
Oct 12 19:27:47.507: INFO: 
Logging pods the kubelet thinks are on node ip-172-20-61-115.eu-central-1.compute.internal
Oct 12 19:27:47.620: INFO: hostexec-ip-172-20-61-115.eu-central-1.compute.internal-w72zn started at 2021-10-12 19:27:27 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:47.620: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 12 19:27:47.620: INFO: kube-proxy-ip-172-20-61-115.eu-central-1.compute.internal started at 2021-10-12 19:19:10 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:47.620: INFO: 	Container kube-proxy ready: true, restart count 0
Oct 12 19:27:47.620: INFO: fail-once-non-local-swkkj started at 2021-10-12 19:27:42 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:47.620: INFO: 	Container c ready: false, restart count 0
Oct 12 19:27:47.620: INFO: csi-mockplugin-0 started at <nil> (0+0 container statuses recorded)
Oct 12 19:27:47.620: INFO: update-demo-nautilus-m8mm2 started at 2021-10-12 19:27:25 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:47.620: INFO: 	Container update-demo ready: true, restart count 0
Oct 12 19:27:47.620: INFO: fail-once-non-local-bmfwj started at 2021-10-12 19:27:45 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:47.620: INFO: 	Container c ready: false, restart count 0
Oct 12 19:27:47.620: INFO: test-container-pod started at 2021-10-12 19:27:36 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:47.620: INFO: 	Container webserver ready: true, restart count 0
Oct 12 19:27:47.620: INFO: kopeio-networking-agent-h5d2h started at 2021-10-12 19:20:11 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:47.620: INFO: 	Container networking-agent ready: true, restart count 0
Oct 12 19:27:47.620: INFO: dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076 started at 2021-10-12 19:27:39 +0000 UTC (0+3 container statuses recorded)
Oct 12 19:27:47.620: INFO: 	Container jessie-querier ready: false, restart count 0
Oct 12 19:27:47.620: INFO: 	Container querier ready: false, restart count 0
Oct 12 19:27:47.620: INFO: 	Container webserver ready: false, restart count 0
Oct 12 19:27:47.620: INFO: fail-once-non-local-6zxkr started at 2021-10-12 19:27:42 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:47.620: INFO: 	Container c ready: false, restart count 0
Oct 12 19:27:47.620: INFO: netserver-3 started at 2021-10-12 19:27:15 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:47.621: INFO: 	Container webserver ready: true, restart count 0
Oct 12 19:27:47.621: INFO: test-rolling-update-with-lb-686dff95d9-vznlh started at 2021-10-12 19:27:21 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:47.621: INFO: 	Container agnhost ready: true, restart count 0
Oct 12 19:27:47.621: INFO: csi-mockplugin-attacher-0 started at <nil> (0+0 container statuses recorded)
Oct 12 19:27:47.621: INFO: fail-once-non-local-95t5n started at <nil> (0+0 container statuses recorded)
W1012 19:27:47.732022    5490 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 12 19:27:48.151: INFO: 
Latency metrics for node ip-172-20-61-115.eu-central-1.compute.internal
Oct 12 19:27:48.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7311" for this suite.
[AfterEach] [sig-network] Services
... skipping 3 lines ...
• Failure [170.972 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 12 19:27:32.491: Unexpected error:
      <*errors.errorString | 0xc000608610>: {
          s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-timeout:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint affinity-clusterip-timeout:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2493
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":48,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:48.408: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 68 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-4956d35a-ad73-4d37-8a82-d0116237d530
STEP: Creating a pod to test consume secrets
Oct 12 19:27:45.980: INFO: Waiting up to 5m0s for pod "pod-secrets-59509796-dfc0-4140-9023-29120e136db8" in namespace "secrets-1484" to be "Succeeded or Failed"
Oct 12 19:27:46.090: INFO: Pod "pod-secrets-59509796-dfc0-4140-9023-29120e136db8": Phase="Pending", Reason="", readiness=false. Elapsed: 109.311967ms
Oct 12 19:27:48.199: INFO: Pod "pod-secrets-59509796-dfc0-4140-9023-29120e136db8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21823496s
STEP: Saw pod success
Oct 12 19:27:48.199: INFO: Pod "pod-secrets-59509796-dfc0-4140-9023-29120e136db8" satisfied condition "Succeeded or Failed"
Oct 12 19:27:48.308: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod pod-secrets-59509796-dfc0-4140-9023-29120e136db8 container secret-volume-test: <nil>
STEP: delete the pod
Oct 12 19:27:48.532: INFO: Waiting for pod pod-secrets-59509796-dfc0-4140-9023-29120e136db8 to disappear
Oct 12 19:27:48.641: INFO: Pod pod-secrets-59509796-dfc0-4140-9023-29120e136db8 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:48.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1484" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":59,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:48.891: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 165 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:49.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-4621" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":17,"skipped":110,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:49.607: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver local doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":6,"skipped":71,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:26:47.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
STEP: Creating a mutating webhook configuration
Oct 12 19:27:02.571: INFO: Waiting for webhook configuration to be ready...
Oct 12 19:27:12.901: INFO: Waiting for webhook configuration to be ready...
Oct 12 19:27:23.195: INFO: Waiting for webhook configuration to be ready...
Oct 12 19:27:33.493: INFO: Waiting for webhook configuration to be ready...
Oct 12 19:27:43.780: INFO: Waiting for webhook configuration to be ready...
Oct 12 19:27:43.781: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000236240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
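The "Waiting for webhook configuration to be ready..." lines above appear roughly every 10 seconds until the budget is exhausted and the test fails with "timed out waiting for the condition" — the classic poll-until-timeout pattern. A sketch of that pattern in shell (`wait_for` is a hypothetical name; the commented invocation assumes a kubectl context for the test cluster and an illustrative webhook name):

```shell
# wait_for: run CMD every INTERVAL seconds until it succeeds or
# TIMEOUT seconds elapse, mirroring the polling loop logged above.
wait_for() {
  timeout=$1; interval=$2; shift 2
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if "$@"; then
      return 0
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "timed out waiting for the condition" >&2
  return 1
}

# Hypothetical invocation (webhook name is illustrative):
# wait_for 120 10 kubectl get mutatingwebhookconfigurations sample-webhook
```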

... skipping 171 lines ...
Oct 12 19:27:48.127: INFO: kopeio-networking-agent-h5d2h started at 2021-10-12 19:20:11 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:48.127: INFO: 	Container networking-agent ready: true, restart count 0
Oct 12 19:27:48.127: INFO: dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076 started at 2021-10-12 19:27:39 +0000 UTC (0+3 container statuses recorded)
Oct 12 19:27:48.127: INFO: 	Container jessie-querier ready: false, restart count 0
Oct 12 19:27:48.127: INFO: 	Container querier ready: false, restart count 0
Oct 12 19:27:48.127: INFO: 	Container webserver ready: false, restart count 0
Oct 12 19:27:48.127: INFO: fail-once-non-local-6zxkr started at 2021-10-12 19:27:42 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:48.127: INFO: 	Container c ready: false, restart count 0
Oct 12 19:27:48.127: INFO: netserver-3 started at 2021-10-12 19:27:15 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:48.127: INFO: 	Container webserver ready: true, restart count 0
Oct 12 19:27:48.127: INFO: test-rolling-update-with-lb-686dff95d9-vznlh started at 2021-10-12 19:27:21 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:48.127: INFO: 	Container agnhost ready: true, restart count 0
Oct 12 19:27:48.127: INFO: csi-mockplugin-attacher-0 started at <nil> (0+0 container statuses recorded)
Oct 12 19:27:48.127: INFO: fail-once-non-local-95t5n started at <nil> (0+0 container statuses recorded)
Oct 12 19:27:48.127: INFO: kube-proxy-ip-172-20-61-115.eu-central-1.compute.internal started at 2021-10-12 19:19:10 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:48.127: INFO: 	Container kube-proxy ready: true, restart count 0
Oct 12 19:27:48.127: INFO: fail-once-non-local-swkkj started at 2021-10-12 19:27:42 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:48.127: INFO: 	Container c ready: false, restart count 0
Oct 12 19:27:48.127: INFO: hostexec-ip-172-20-61-115.eu-central-1.compute.internal-w72zn started at 2021-10-12 19:27:27 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:48.127: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 12 19:27:48.127: INFO: update-demo-nautilus-m8mm2 started at 2021-10-12 19:27:25 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:48.127: INFO: 	Container update-demo ready: true, restart count 0
Oct 12 19:27:48.127: INFO: fail-once-non-local-bmfwj started at 2021-10-12 19:27:45 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:48.127: INFO: 	Container c ready: false, restart count 0
Oct 12 19:27:48.127: INFO: test-container-pod started at 2021-10-12 19:27:36 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:48.127: INFO: 	Container webserver ready: true, restart count 0
Oct 12 19:27:48.127: INFO: csi-mockplugin-0 started at 2021-10-12 19:27:45 +0000 UTC (0+3 container statuses recorded)
Oct 12 19:27:48.127: INFO: 	Container csi-provisioner ready: false, restart count 0
Oct 12 19:27:48.127: INFO: 	Container driver-registrar ready: false, restart count 0
... skipping 154 lines ...
Oct 12 19:27:52.767: INFO: kopeio-networking-agent-h5d2h started at 2021-10-12 19:20:11 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:52.767: INFO: 	Container networking-agent ready: true, restart count 0
Oct 12 19:27:52.767: INFO: dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076 started at 2021-10-12 19:27:39 +0000 UTC (0+3 container statuses recorded)
Oct 12 19:27:52.767: INFO: 	Container jessie-querier ready: false, restart count 0
Oct 12 19:27:52.767: INFO: 	Container querier ready: false, restart count 0
Oct 12 19:27:52.767: INFO: 	Container webserver ready: false, restart count 0
Oct 12 19:27:52.767: INFO: fail-once-non-local-6zxkr started at 2021-10-12 19:27:42 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:52.767: INFO: 	Container c ready: false, restart count 0
Oct 12 19:27:52.767: INFO: test-rolling-update-with-lb-686dff95d9-vznlh started at 2021-10-12 19:27:21 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:52.767: INFO: 	Container agnhost ready: true, restart count 0
Oct 12 19:27:52.767: INFO: csi-mockplugin-attacher-0 started at 2021-10-12 19:27:46 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:52.767: INFO: 	Container csi-attacher ready: false, restart count 0
Oct 12 19:27:52.767: INFO: fail-once-non-local-95t5n started at 2021-10-12 19:27:46 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:52.767: INFO: 	Container c ready: true, restart count 0
Oct 12 19:27:52.767: INFO: kube-proxy-ip-172-20-61-115.eu-central-1.compute.internal started at 2021-10-12 19:19:10 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:52.767: INFO: 	Container kube-proxy ready: true, restart count 0
Oct 12 19:27:52.767: INFO: fail-once-non-local-swkkj started at 2021-10-12 19:27:42 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:52.767: INFO: 	Container c ready: false, restart count 0
Oct 12 19:27:52.767: INFO: fail-once-non-local-29nfx started at <nil> (0+0 container statuses recorded)
Oct 12 19:27:52.767: INFO: csi-mockplugin-0 started at 2021-10-12 19:27:45 +0000 UTC (0+3 container statuses recorded)
Oct 12 19:27:52.767: INFO: 	Container csi-provisioner ready: false, restart count 0
Oct 12 19:27:52.767: INFO: 	Container driver-registrar ready: false, restart count 0
Oct 12 19:27:52.767: INFO: 	Container mock ready: false, restart count 0
Oct 12 19:27:52.767: INFO: update-demo-nautilus-m8mm2 started at 2021-10-12 19:27:25 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:52.767: INFO: 	Container update-demo ready: true, restart count 0
Oct 12 19:27:52.767: INFO: fail-once-non-local-bmfwj started at 2021-10-12 19:27:45 +0000 UTC (0+1 container statuses recorded)
Oct 12 19:27:52.767: INFO: 	Container c ready: false, restart count 0
W1012 19:27:52.879245    5537 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 12 19:27:53.197: INFO: 
Latency metrics for node ip-172-20-61-115.eu-central-1.compute.internal
Oct 12 19:27:53.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8727" for this suite.
... skipping 6 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 12 19:27:43.781: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000236240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:527
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":6,"skipped":71,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:54.028: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 40 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:27:41.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:231
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:56.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5764" for this suite.


• [SLOW TEST:15.102 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:231
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":11,"skipped":64,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:56.794: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:57.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-8599" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":12,"skipped":69,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:57.821: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 94 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:27:58.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":13,"skipped":86,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should not deadlock when a pod's predecessor fails
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:250
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","total":-1,"completed":7,"skipped":44,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:58.695: INFO: Only supported for providers [azure] (not aws)
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 21 lines ...
Oct 12 19:27:50.160: INFO: PersistentVolumeClaim pvc-g8r54 found but phase is Pending instead of Bound.
Oct 12 19:27:52.269: INFO: PersistentVolumeClaim pvc-g8r54 found and phase=Bound (2.2170811s)
Oct 12 19:27:52.269: INFO: Waiting up to 3m0s for PersistentVolume local-wm8ss to have phase Bound
Oct 12 19:27:52.379: INFO: PersistentVolume local-wm8ss found and phase=Bound (109.86963ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vkc5
STEP: Creating a pod to test subpath
Oct 12 19:27:52.709: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vkc5" in namespace "provisioning-329" to be "Succeeded or Failed"
Oct 12 19:27:52.817: INFO: Pod "pod-subpath-test-preprovisionedpv-vkc5": Phase="Pending", Reason="", readiness=false. Elapsed: 108.559303ms
Oct 12 19:27:54.927: INFO: Pod "pod-subpath-test-preprovisionedpv-vkc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218437055s
Oct 12 19:27:57.037: INFO: Pod "pod-subpath-test-preprovisionedpv-vkc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.328124834s
STEP: Saw pod success
Oct 12 19:27:57.037: INFO: Pod "pod-subpath-test-preprovisionedpv-vkc5" satisfied condition "Succeeded or Failed"
Oct 12 19:27:57.146: INFO: Trying to get logs from node ip-172-20-32-55.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-vkc5 container test-container-subpath-preprovisionedpv-vkc5: <nil>
STEP: delete the pod
Oct 12 19:27:57.372: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vkc5 to disappear
Oct 12 19:27:57.481: INFO: Pod pod-subpath-test-preprovisionedpv-vkc5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vkc5
Oct 12 19:27:57.481: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vkc5" in namespace "provisioning-329"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":11,"skipped":70,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:27:59.807: INFO: Only supported for providers [azure] (not aws)
... skipping 156 lines ...
• [SLOW TEST:11.333 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should be able to schedule after more than 100 missed schedule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:194
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":18,"skipped":113,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:28:00.998: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 14 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":59,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:28:00.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:28:04.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7840" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":5,"skipped":59,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:28:05.027: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 48 lines ...
• [SLOW TEST:18.334 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":12,"skipped":74,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:28:05.784: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 41 lines ...
Oct 12 19:28:04.155: INFO: PersistentVolumeClaim pvc-4g6qg found but phase is Pending instead of Bound.
Oct 12 19:28:06.265: INFO: PersistentVolumeClaim pvc-4g6qg found and phase=Bound (2.218971336s)
Oct 12 19:28:06.265: INFO: Waiting up to 3m0s for PersistentVolume local-dpr72 to have phase Bound
Oct 12 19:28:06.374: INFO: PersistentVolume local-dpr72 found and phase=Bound (109.294697ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-csqc
STEP: Creating a pod to test subpath
Oct 12 19:28:06.704: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-csqc" in namespace "provisioning-5206" to be "Succeeded or Failed"
Oct 12 19:28:06.814: INFO: Pod "pod-subpath-test-preprovisionedpv-csqc": Phase="Pending", Reason="", readiness=false. Elapsed: 109.987188ms
Oct 12 19:28:08.925: INFO: Pod "pod-subpath-test-preprovisionedpv-csqc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221139919s
Oct 12 19:28:11.038: INFO: Pod "pod-subpath-test-preprovisionedpv-csqc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.333273949s
STEP: Saw pod success
Oct 12 19:28:11.038: INFO: Pod "pod-subpath-test-preprovisionedpv-csqc" satisfied condition "Succeeded or Failed"
Oct 12 19:28:11.148: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-csqc container test-container-volume-preprovisionedpv-csqc: <nil>
STEP: delete the pod
Oct 12 19:28:11.374: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-csqc to disappear
Oct 12 19:28:11.483: INFO: Pod pod-subpath-test-preprovisionedpv-csqc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-csqc
Oct 12 19:28:11.483: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-csqc" in namespace "provisioning-5206"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":14,"skipped":95,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:28:15.538: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 105 lines ...
• [SLOW TEST:247.914 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:28:16.732: INFO: Only supported for providers [azure] (not aws)
... skipping 58 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":44,"failed":1,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]"]}
[BeforeEach] [sig-storage] Ephemeralstorage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:27:43.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":8,"skipped":44,"failed":1,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:28:24.317: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 164 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":8,"skipped":85,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":7,"skipped":81,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:28:25.117: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 80 lines ...
Oct 12 19:28:25.954: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.769 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 280 lines ...
• [SLOW TEST:150.690 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not disrupt a cloud load-balancer's connectivity during rollout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:158
------------------------------
{"msg":"PASSED [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","total":-1,"completed":6,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:28:28.303: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 21 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Oct 12 19:28:26.674: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-9e6ee543-80f1-4757-959b-767cb746e93c" in namespace "security-context-test-2168" to be "Succeeded or Failed"
Oct 12 19:28:26.783: INFO: Pod "busybox-readonly-true-9e6ee543-80f1-4757-959b-767cb746e93c": Phase="Pending", Reason="", readiness=false. Elapsed: 108.546586ms
Oct 12 19:28:28.892: INFO: Pod "busybox-readonly-true-9e6ee543-80f1-4757-959b-767cb746e93c": Phase="Failed", Reason="", readiness=false. Elapsed: 2.217949959s
Oct 12 19:28:28.892: INFO: Pod "busybox-readonly-true-9e6ee543-80f1-4757-959b-767cb746e93c" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:28:28.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2168" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":9,"skipped":103,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:28:29.138: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 150 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity disabled
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":13,"skipped":47,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:28:32.483: INFO: Only supported for providers [vsphere] (not aws)
... skipping 44 lines ...
Oct 12 19:28:29.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 12 19:28:29.819: INFO: Waiting up to 5m0s for pod "pod-49d563b6-2ea6-4c33-82c9-f98518548b9e" in namespace "emptydir-7674" to be "Succeeded or Failed"
Oct 12 19:28:29.929: INFO: Pod "pod-49d563b6-2ea6-4c33-82c9-f98518548b9e": Phase="Pending", Reason="", readiness=false. Elapsed: 109.719075ms
Oct 12 19:28:32.039: INFO: Pod "pod-49d563b6-2ea6-4c33-82c9-f98518548b9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219802239s
STEP: Saw pod success
Oct 12 19:28:32.039: INFO: Pod "pod-49d563b6-2ea6-4c33-82c9-f98518548b9e" satisfied condition "Succeeded or Failed"
Oct 12 19:28:32.148: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod pod-49d563b6-2ea6-4c33-82c9-f98518548b9e container test-container: <nil>
STEP: delete the pod
Oct 12 19:28:32.372: INFO: Waiting for pod pod-49d563b6-2ea6-4c33-82c9-f98518548b9e to disappear
Oct 12 19:28:32.481: INFO: Pod pod-49d563b6-2ea6-4c33-82c9-f98518548b9e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:28:32.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7674" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":108,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:28:32.714: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 54 lines ...
• [SLOW TEST:61.093 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":34,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:28:33.446: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 106 lines ...
Oct 12 19:28:04.723: INFO: PersistentVolumeClaim pvc-tvg9c found but phase is Pending instead of Bound.
Oct 12 19:28:06.832: INFO: PersistentVolumeClaim pvc-tvg9c found and phase=Bound (6.439976812s)
Oct 12 19:28:06.832: INFO: Waiting up to 3m0s for PersistentVolume aws-8754l to have phase Bound
Oct 12 19:28:06.941: INFO: PersistentVolume aws-8754l found and phase=Bound (109.022011ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-pstj
STEP: Creating a pod to test exec-volume-test
Oct 12 19:28:07.272: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-pstj" in namespace "volume-1129" to be "Succeeded or Failed"
Oct 12 19:28:07.382: INFO: Pod "exec-volume-test-preprovisionedpv-pstj": Phase="Pending", Reason="", readiness=false. Elapsed: 109.818322ms
Oct 12 19:28:09.493: INFO: Pod "exec-volume-test-preprovisionedpv-pstj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220141615s
Oct 12 19:28:11.604: INFO: Pod "exec-volume-test-preprovisionedpv-pstj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331694932s
Oct 12 19:28:13.717: INFO: Pod "exec-volume-test-preprovisionedpv-pstj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444520167s
Oct 12 19:28:15.827: INFO: Pod "exec-volume-test-preprovisionedpv-pstj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.554778942s
Oct 12 19:28:17.937: INFO: Pod "exec-volume-test-preprovisionedpv-pstj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.664624491s
STEP: Saw pod success
Oct 12 19:28:17.937: INFO: Pod "exec-volume-test-preprovisionedpv-pstj" satisfied condition "Succeeded or Failed"
Oct 12 19:28:18.047: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-pstj container exec-container-preprovisionedpv-pstj: <nil>
STEP: delete the pod
Oct 12 19:28:18.280: INFO: Waiting for pod exec-volume-test-preprovisionedpv-pstj to disappear
Oct 12 19:28:18.389: INFO: Pod exec-volume-test-preprovisionedpv-pstj no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-pstj
Oct 12 19:28:18.389: INFO: Deleting pod "exec-volume-test-preprovisionedpv-pstj" in namespace "volume-1129"
STEP: Deleting pv and pvc
Oct 12 19:28:18.497: INFO: Deleting PersistentVolumeClaim "pvc-tvg9c"
Oct 12 19:28:18.608: INFO: Deleting PersistentVolume "aws-8754l"
Oct 12 19:28:18.949: INFO: Couldn't delete PD "aws://eu-central-1a/vol-0e62a494476690337", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0e62a494476690337 is currently attached to i-08d05e3b4af64ab5b
	status code: 400, request id: 0962ed0a-13d5-4211-bbeb-bed068d61455
Oct 12 19:28:24.540: INFO: Couldn't delete PD "aws://eu-central-1a/vol-0e62a494476690337", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0e62a494476690337 is currently attached to i-08d05e3b4af64ab5b
	status code: 400, request id: ea9940a6-0444-4cac-9a86-84564d77b732
Oct 12 19:28:30.117: INFO: Couldn't delete PD "aws://eu-central-1a/vol-0e62a494476690337", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0e62a494476690337 is currently attached to i-08d05e3b4af64ab5b
	status code: 400, request id: a6db2e95-c13e-4734-8c9e-fdb9577707d8
Oct 12 19:28:35.718: INFO: Successfully deleted PD "aws://eu-central-1a/vol-0e62a494476690337".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:28:35.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1129" for this suite.
... skipping 19 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:28:35.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-3756" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":11,"skipped":111,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:28:32.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-1ecc6312-1279-4b9a-877f-8820fe8ede58
STEP: Creating a pod to test consume configMaps
Oct 12 19:28:33.274: INFO: Waiting up to 5m0s for pod "pod-configmaps-78a72308-9236-4735-a01b-7cce9c464dcc" in namespace "configmap-360" to be "Succeeded or Failed"
Oct 12 19:28:33.384: INFO: Pod "pod-configmaps-78a72308-9236-4735-a01b-7cce9c464dcc": Phase="Pending", Reason="", readiness=false. Elapsed: 110.164535ms
Oct 12 19:28:35.494: INFO: Pod "pod-configmaps-78a72308-9236-4735-a01b-7cce9c464dcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219978729s
STEP: Saw pod success
Oct 12 19:28:35.494: INFO: Pod "pod-configmaps-78a72308-9236-4735-a01b-7cce9c464dcc" satisfied condition "Succeeded or Failed"
Oct 12 19:28:35.603: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-configmaps-78a72308-9236-4735-a01b-7cce9c464dcc container configmap-volume-test: <nil>
STEP: delete the pod
Oct 12 19:28:35.830: INFO: Waiting for pod pod-configmaps-78a72308-9236-4735-a01b-7cce9c464dcc to disappear
Oct 12 19:28:35.978: INFO: Pod pod-configmaps-78a72308-9236-4735-a01b-7cce9c464dcc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:28:35.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-360" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":53,"failed":1,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":8,"skipped":89,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 200 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":14,"skipped":106,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":7,"skipped":54,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:28:30.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
Oct 12 19:28:34.180: INFO: Creating a PV followed by a PVC
Oct 12 19:28:34.400: INFO: Waiting for PV local-pvss57k to bind to PVC pvc-vrrfd
Oct 12 19:28:34.400: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-vrrfd] to have phase Bound
Oct 12 19:28:34.509: INFO: PersistentVolumeClaim pvc-vrrfd found and phase=Bound (108.531234ms)
Oct 12 19:28:34.509: INFO: Waiting up to 3m0s for PersistentVolume local-pvss57k to have phase Bound
Oct 12 19:28:34.627: INFO: PersistentVolume local-pvss57k found and phase=Bound (118.200495ms)
[It] should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
STEP: local-volume-type: dir
STEP: Initializing test volumes
Oct 12 19:28:34.859: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-70010a4f-89b9-4ac0-a710-e13eaacd477e] Namespace:persistent-local-volumes-test-2840 PodName:hostexec-ip-172-20-57-193.eu-central-1.compute.internal-j25qt ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 12 19:28:34.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
... skipping 22 lines ...

• [SLOW TEST:7.601 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeAffinity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:28:15.610: INFO: >>> kubeConfig: /root/.kube/config
... skipping 18 lines ...
• [SLOW TEST:22.436 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":105,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:28:38.088: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 36 lines ...
STEP: Destroying namespace "services-370" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":15,"skipped":107,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":8,"skipped":54,"failed":0}
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:28:37.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
STEP: Creating a pod to test hostPath r/w
Oct 12 19:28:38.551: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5043" to be "Succeeded or Failed"
Oct 12 19:28:38.679: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 128.053991ms
Oct 12 19:28:40.788: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.236839019s
STEP: Saw pod success
Oct 12 19:28:40.788: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Oct 12 19:28:40.896: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Oct 12 19:28:41.121: INFO: Waiting for pod pod-host-path-test to disappear
Oct 12 19:28:41.230: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:28:41.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5043" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":9,"skipped":54,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":49,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:28:35.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
• [SLOW TEST:5.818 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":9,"skipped":49,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:28:41.846: INFO: Only supported for providers [gce gke] (not aws)
... skipping 62187 lines ...
Oct 12 19:45:46.404: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-tp8md] to have phase Bound
Oct 12 19:45:46.515: INFO: PersistentVolumeClaim pvc-tp8md found and phase=Bound (110.069964ms)
STEP: Deleting the previously created pod
Oct 12 19:45:59.068: INFO: Deleting pod "pvc-volume-tester-8xbwv" in namespace "csi-mock-volumes-7270"
Oct 12 19:45:59.180: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8xbwv" to be fully deleted
STEP: Checking CSI driver logs
Oct 12 19:46:07.516: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/60508133-7036-45b7-b991-54d095b25e54/volumes/kubernetes.io~csi/pvc-cbe44a81-b0c6-4583-b735-c7330f33c78b/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-8xbwv
Oct 12 19:46:07.516: INFO: Deleting pod "pvc-volume-tester-8xbwv" in namespace "csi-mock-volumes-7270"
STEP: Deleting claim pvc-tp8md
Oct 12 19:46:07.871: INFO: Waiting up to 2m0s for PersistentVolume pvc-cbe44a81-b0c6-4583-b735-c7330f33c78b to get deleted
Oct 12 19:46:07.982: INFO: PersistentVolume pvc-cbe44a81-b0c6-4583-b735-c7330f33c78b was removed
STEP: Deleting storageclass csi-mock-volumes-7270-scmc4jx
... skipping 71 lines ...
Oct 12 19:46:20.146: INFO: PersistentVolumeClaim pvc-5qvwl found but phase is Pending instead of Bound.
Oct 12 19:46:22.255: INFO: PersistentVolumeClaim pvc-5qvwl found and phase=Bound (2.218190553s)
Oct 12 19:46:22.255: INFO: Waiting up to 3m0s for PersistentVolume local-x6k87 to have phase Bound
Oct 12 19:46:22.368: INFO: PersistentVolume local-x6k87 found and phase=Bound (112.35071ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jprp
STEP: Creating a pod to test subpath
Oct 12 19:46:22.700: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jprp" in namespace "provisioning-2225" to be "Succeeded or Failed"
Oct 12 19:46:22.808: INFO: Pod "pod-subpath-test-preprovisionedpv-jprp": Phase="Pending", Reason="", readiness=false. Elapsed: 108.589191ms
Oct 12 19:46:24.918: INFO: Pod "pod-subpath-test-preprovisionedpv-jprp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21838755s
Oct 12 19:46:27.028: INFO: Pod "pod-subpath-test-preprovisionedpv-jprp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327948832s
STEP: Saw pod success
Oct 12 19:46:27.028: INFO: Pod "pod-subpath-test-preprovisionedpv-jprp" satisfied condition "Succeeded or Failed"
Oct 12 19:46:27.137: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-jprp container test-container-subpath-preprovisionedpv-jprp: <nil>
STEP: delete the pod
Oct 12 19:46:27.368: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jprp to disappear
Oct 12 19:46:27.477: INFO: Pod pod-subpath-test-preprovisionedpv-jprp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jprp
Oct 12 19:46:27.477: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jprp" in namespace "provisioning-2225"
STEP: Creating pod pod-subpath-test-preprovisionedpv-jprp
STEP: Creating a pod to test subpath
Oct 12 19:46:27.697: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jprp" in namespace "provisioning-2225" to be "Succeeded or Failed"
Oct 12 19:46:27.815: INFO: Pod "pod-subpath-test-preprovisionedpv-jprp": Phase="Pending", Reason="", readiness=false. Elapsed: 117.692159ms
Oct 12 19:46:29.925: INFO: Pod "pod-subpath-test-preprovisionedpv-jprp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.227719595s
STEP: Saw pod success
Oct 12 19:46:29.925: INFO: Pod "pod-subpath-test-preprovisionedpv-jprp" satisfied condition "Succeeded or Failed"
Oct 12 19:46:30.033: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-jprp container test-container-subpath-preprovisionedpv-jprp: <nil>
STEP: delete the pod
Oct 12 19:46:30.259: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jprp to disappear
Oct 12 19:46:30.368: INFO: Pod pod-subpath-test-preprovisionedpv-jprp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jprp
Oct 12 19:46:30.368: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jprp" in namespace "provisioning-2225"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":29,"skipped":198,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:46:33.467: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 28 lines ...
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Oct 12 19:46:26.315: INFO: start=2021-10-12 19:46:21.19478738 +0000 UTC m=+1415.285600703, now=2021-10-12 19:46:26.315331455 +0000 UTC m=+1420.406144785, kubelet pod: {"metadata":{"name":"pod-submit-remove-2d876662-4e09-463c-a523-69f74309d06d","namespace":"pods-4545","uid":"ec2b6866-a9a3-4e4a-b732-10f9b526b4b9","resourceVersion":"40125","creationTimestamp":"2021-10-12T19:46:18Z","deletionTimestamp":"2021-10-12T19:46:51Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"426318649"},"annotations":{"kubernetes.io/config.seen":"2021-10-12T19:46:18.593886544Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-10-12T19:46:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-kzb7b","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-kzb7b","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ip-172-20-57-193.eu-central-1.compute.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-12T19:46:18Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-10-12T19:46:23Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-10-12T19:46:23Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-12T19:46:18Z"}],"hostIP":"172.20.57.193","podIP":"100.96.1.108","podIPs":[{"ip":"100.96.1.108"}],"startTime":"2021-10-12T19:46:18Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-10-12T19:46:19Z","finishedAt":"2021-10-12T19:46:22Z","containerID":"containerd://1615d190934b82ef11e99213dd274a2c62f1e1f59ce833d17d5f44724faaa85c"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"containerd://1615d190934b82ef11e99213dd274a2c62f1e1f59ce833d17d5f44724faaa85c","started":false}],"qosClass":"BestEffort"}}
Oct 12 19:46:31.316: INFO: start=2021-10-12 19:46:21.19478738 +0000 UTC m=+1415.285600703, now=2021-10-12 19:46:31.316040346 +0000 UTC m=+1425.406853699, kubelet pod: {"metadata":{"name":"pod-submit-remove-2d876662-4e09-463c-a523-69f74309d06d","namespace":"pods-4545","uid":"ec2b6866-a9a3-4e4a-b732-10f9b526b4b9","resourceVersion":"40125","creationTimestamp":"2021-10-12T19:46:18Z","deletionTimestamp":"2021-10-12T19:46:51Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"426318649"},"annotations":{"kubernetes.io/config.seen":"2021-10-12T19:46:18.593886544Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-10-12T19:46:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-kzb7b","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-kzb7b","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ip-172-20-57-193.eu-central-1.compute.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-12T19:46:18Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-10-12T19:46:23Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-10-12T19:46:23Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-12T19:46:18Z"}],"hostIP":"172.20.57.193","podIP":"100.96.1.108","podIPs":[{"ip":"100.96.1.108"}],"startTime":"2021-10-12T19:46:18Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-10-12T19:46:19Z","finishedAt":"2021-10-12T19:46:22Z","containerID":"containerd://1615d190934b82ef11e99213dd274a2c62f1e1f59ce833d17d5f44724faaa85c"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"containerd://1615d190934b82ef11e99213dd274a2c62f1e1f59ce833d17d5f44724faaa85c","started":false}],"qosClass":"BestEffort"}}
Oct 12 19:46:36.314: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:46:36.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4545" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51
    should be submitted and removed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":14,"skipped":136,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Simple pod should handle in-cluster config","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] Generated clientset
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:46:36.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-8325" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":30,"skipped":202,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access"]}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:46:37.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Oct 12 19:46:37.682: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-5888" to be "Succeeded or Failed"
Oct 12 19:46:37.791: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 109.05394ms
Oct 12 19:46:39.900: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218107384s
Oct 12 19:46:39.900: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:46:40.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5888" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":31,"skipped":202,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access"]}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:46:40.267: INFO: Only supported for providers [openstack] (not aws)
... skipping 179 lines ...
• [SLOW TEST:14.987 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":53,"skipped":342,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Oct 12 19:46:41.029: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3602" to be "Succeeded or Failed"
Oct 12 19:46:41.138: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 108.830975ms
Oct 12 19:46:43.248: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218680598s
STEP: Saw pod success
Oct 12 19:46:43.248: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Oct 12 19:46:43.357: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Oct 12 19:46:43.584: INFO: Waiting for pod pod-host-path-test to disappear
Oct 12 19:46:43.693: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:46:43.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-3602" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":32,"skipped":227,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access"]}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 47 lines ...
Oct 12 19:46:02.885: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-9jkl6] to have phase Bound
Oct 12 19:46:02.994: INFO: PersistentVolumeClaim pvc-9jkl6 found and phase=Bound (109.357734ms)
STEP: Deleting the previously created pod
Oct 12 19:46:13.545: INFO: Deleting pod "pvc-volume-tester-thqjp" in namespace "csi-mock-volumes-2999"
Oct 12 19:46:13.655: INFO: Wait up to 5m0s for pod "pvc-volume-tester-thqjp" to be fully deleted
STEP: Checking CSI driver logs
Oct 12 19:46:21.986: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/9e10f87b-be84-41d1-8295-e697fd1d6f58/volumes/kubernetes.io~csi/pvc-7389f8f0-12a9-4f23-9f7d-ca588b413eae/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-thqjp
Oct 12 19:46:21.986: INFO: Deleting pod "pvc-volume-tester-thqjp" in namespace "csi-mock-volumes-2999"
STEP: Deleting claim pvc-9jkl6
Oct 12 19:46:22.316: INFO: Waiting up to 2m0s for PersistentVolume pvc-7389f8f0-12a9-4f23-9f7d-ca588b413eae to get deleted
Oct 12 19:46:22.425: INFO: PersistentVolume pvc-7389f8f0-12a9-4f23-9f7d-ca588b413eae was removed
STEP: Deleting storageclass csi-mock-volumes-2999-scdwkzw
... skipping 81 lines ...
Oct 12 19:44:20.047: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769664659, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769664659, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769664659, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769664659, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-74b65766b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 12 19:44:22.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769664659, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769664659, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769664660, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769664659, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-74b65766b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 12 19:44:24.155: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769664659, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769664659, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63769664660, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769664659, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-74b65766b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 12 19:44:26.372: INFO: Waiting up to 2m0s to get response from 100.68.127.113:8080
Oct 12 19:44:26.373: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6753 exec pause-pod-74b65766b-cv2kp -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip'
Oct 12 19:44:57.525: INFO: rc: 28
Oct 12 19:44:57.525: INFO: got err: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6753 exec pause-pod-74b65766b-cv2kp -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Oct 12 19:44:59.526: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6753 exec pause-pod-74b65766b-cv2kp -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip'
Oct 12 19:45:30.731: INFO: rc: 28
Oct 12 19:45:30.731: INFO: got err: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6753 exec pause-pod-74b65766b-cv2kp -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Oct 12 19:45:32.732: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6753 exec pause-pod-74b65766b-cv2kp -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip'
Oct 12 19:46:03.959: INFO: rc: 28
Oct 12 19:46:03.959: INFO: got err: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6753 exec pause-pod-74b65766b-cv2kp -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Oct 12 19:46:05.960: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6753 exec pause-pod-74b65766b-cv2kp -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip'
Oct 12 19:46:37.126: INFO: rc: 28
Oct 12 19:46:37.126: INFO: got err: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6753 exec pause-pod-74b65766b-cv2kp -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Oct 12 19:46:39.126: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6753 exec pause-pod-74b65766b-cv2kp -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip:\nCommand stdout:\n\nstderr:\n+ curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip\ncommand terminated with exit code 28\n\nerror:\nexit status 28",
        },
        Code: 28,
    }
    error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6753 exec pause-pod-74b65766b-cv2kp -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip:
    Command stdout:
    
    stderr:
    + curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip
    command terminated with exit code 28
    
    error:
    exit status 28
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execSourceIPTest(0x0, 0x0, 0x0, 0x0, 0xc0033e56e0, 0x19, 0xc0044b41b0, 0x14, 0xc004511b40, 0xd, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/util.go:133 +0x4d9
... skipping 34 lines ...
Oct 12 19:46:39.579: INFO: At 2021-10-12 19:44:20 +0000 UTC - event for pause-pod-74b65766b-h65tc: {kubelet ip-172-20-57-193.eu-central-1.compute.internal} Created: Created container agnhost-pause
Oct 12 19:46:39.579: INFO: At 2021-10-12 19:44:20 +0000 UTC - event for pause-pod-74b65766b-h65tc: {kubelet ip-172-20-57-193.eu-central-1.compute.internal} Started: Started container agnhost-pause
Oct 12 19:46:39.579: INFO: At 2021-10-12 19:44:21 +0000 UTC - event for pause-pod-74b65766b-cv2kp: {kubelet ip-172-20-47-216.eu-central-1.compute.internal} Started: Started container agnhost-pause
Oct 12 19:46:39.579: INFO: At 2021-10-12 19:46:39 +0000 UTC - event for echo-sourceip: {kubelet ip-172-20-61-115.eu-central-1.compute.internal} Killing: Stopping container agnhost-container
Oct 12 19:46:39.579: INFO: At 2021-10-12 19:46:39 +0000 UTC - event for pause-pod-74b65766b-cv2kp: {kubelet ip-172-20-47-216.eu-central-1.compute.internal} Killing: Stopping container agnhost-pause
Oct 12 19:46:39.579: INFO: At 2021-10-12 19:46:39 +0000 UTC - event for pause-pod-74b65766b-h65tc: {kubelet ip-172-20-57-193.eu-central-1.compute.internal} Killing: Stopping container agnhost-pause
Oct 12 19:46:39.579: INFO: At 2021-10-12 19:46:39 +0000 UTC - event for sourceip-test: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint services-6753/sourceip-test: Operation cannot be fulfilled on endpoints "sourceip-test": the object has been modified; please apply your changes to the latest version and try again
Oct 12 19:46:39.687: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Oct 12 19:46:39.687: INFO: 
Oct 12 19:46:39.796: INFO: 
Logging node info for node ip-172-20-32-55.eu-central-1.compute.internal
Oct 12 19:46:39.904: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-32-55.eu-central-1.compute.internal    d4114834-c2b7-4ba5-be09-57ef7df0cb89 40525 0 2021-10-12 19:20:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-central-1 failure-domain.beta.kubernetes.io/zone:eu-central-1a io.kubernetes.storage.mock/node:some-mock-node kops.k8s.io/instancegroup:nodes-eu-central-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-32-55.eu-central-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.hostpath.csi/node:ip-172-20-32-55.eu-central-1.compute.internal topology.kubernetes.io/region:eu-central-1 topology.kubernetes.io/zone:eu-central-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-3657":"ip-172-20-32-55.eu-central-1.compute.internal","csi-hostpath-volume-expand-8971":"ip-172-20-32-55.eu-central-1.compute.internal"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kops-controller Update v1 2021-10-12 19:20:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}}} {kube-controller-manager Update v1 2021-10-12 19:45:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}},"f:status":{"f:volumesAttached":{}}}} {kubelet Update v1 2021-10-12 19:46:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.hostpath.csi/node":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}}}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-central-1a/i-02eb4501265093bcc,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47455764480 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4061720576 0} {<nil>} 3966524Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42710187962 0} {<nil>} 42710187962 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3956862976 0} {<nil>} 3864124Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-12 19:46:28 +0000 UTC,LastTransitionTime:2021-10-12 19:19:53 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-12 19:46:28 +0000 UTC,LastTransitionTime:2021-10-12 19:19:53 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-12 19:46:28 +0000 UTC,LastTransitionTime:2021-10-12 19:19:53 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-12 19:46:28 +0000 UTC,LastTransitionTime:2021-10-12 19:20:12 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.32.55,},NodeAddress{Type:ExternalIP,Address:3.67.193.7,},NodeAddress{Type:Hostname,Address:ip-172-20-32-55.eu-central-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-32-55.eu-central-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-67-193-7.eu-central-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2ea2383ed96e95048d0fa7f35e04f5,SystemUUID:ec2ea238-3ed9-6e95-048d-0fa7f35e04f5,BootID:96651c1c-97be-47be-ba65-81db1fa077ae,KernelVersion:5.10.69-flatcar,OSImage:Flatcar Container Linux by Kinvolk 2905.2.5 (Oklo),ContainerRuntimeVersion:containerd://1.5.4,KubeletVersion:v1.21.5,KubeProxyVersion:v1.21.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.21.5],SizeBytes:105352393,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:95843946,},ContainerImage{Names:[docker.io/library/nginx@sha256:155238fc7fdea5b7d4e5cf026f268a03f87741e511bdd225b89cea084544a8fb docker.io/library/nginx:latest],SizeBytes:53792768,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/kopeio/networking-agent@sha256:2d16bdbc3257c42cdc59b05b8fad86653033f19cfafa709f263e93c8f7002932 docker.io/kopeio/networking-agent:1.0.20181028],SizeBytes:25781346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1],SizeBytes:21212251,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 
k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782 k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0],SizeBytes:20194320,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09 k8s.gcr.io/sig-storage/csi-attacher:v3.1.0],SizeBytes:20103959,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5 k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0],SizeBytes:13995876,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1],SizeBytes:8415088,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 
k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-3657^113ee58d-2b95-11ec-8f82-664200be9e23],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-3657^113ee58d-2b95-11ec-8f82-664200be9e23,DevicePath:,},},Config:nil,},}
Oct 12 19:46:39.905: INFO: 
... skipping 248 lines ...
• Failure [160.573 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903

  Oct 12 19:46:39.127: Unexpected error:
      <exec.CodeExitError>: {
          Err: {
              s: "error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6753 exec pause-pod-74b65766b-cv2kp -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip:\nCommand stdout:\n\nstderr:\n+ curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip\ncommand terminated with exit code 28\n\nerror:\nexit status 28",
          },
          Code: 28,
      }
      error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6753 exec pause-pod-74b65766b-cv2kp -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip:
      Command stdout:
      
      stderr:
      + curl -q -s --connect-timeout 30 100.68.127.113:8080/clientip
      command terminated with exit code 28
      
      error:
      exit status 28
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/util.go:133
------------------------------
{"msg":"FAILED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":27,"skipped":254,"failed":5,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:46:45.079: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 29 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:46:44.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3765" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":33,"skipped":230,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access"]}

S
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 109 lines ...
• [SLOW TEST:9.808 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":15,"skipped":139,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Simple pod should handle in-cluster config","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 20 lines ...
Oct 12 19:46:35.537: INFO: PersistentVolumeClaim pvc-cpbsk found but phase is Pending instead of Bound.
Oct 12 19:46:37.646: INFO: PersistentVolumeClaim pvc-cpbsk found and phase=Bound (2.218339338s)
Oct 12 19:46:37.646: INFO: Waiting up to 3m0s for PersistentVolume local-fstl5 to have phase Bound
Oct 12 19:46:37.754: INFO: PersistentVolume local-fstl5 found and phase=Bound (108.687267ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-gnsx
STEP: Creating a pod to test exec-volume-test
Oct 12 19:46:38.085: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-gnsx" in namespace "volume-7127" to be "Succeeded or Failed"
Oct 12 19:46:38.194: INFO: Pod "exec-volume-test-preprovisionedpv-gnsx": Phase="Pending", Reason="", readiness=false. Elapsed: 108.878594ms
Oct 12 19:46:40.306: INFO: Pod "exec-volume-test-preprovisionedpv-gnsx": Phase="Running", Reason="", readiness=true. Elapsed: 2.221236664s
Oct 12 19:46:42.416: INFO: Pod "exec-volume-test-preprovisionedpv-gnsx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.331067674s
STEP: Saw pod success
Oct 12 19:46:42.416: INFO: Pod "exec-volume-test-preprovisionedpv-gnsx" satisfied condition "Succeeded or Failed"
Oct 12 19:46:42.525: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-gnsx container exec-container-preprovisionedpv-gnsx: <nil>
STEP: delete the pod
Oct 12 19:46:42.759: INFO: Waiting for pod exec-volume-test-preprovisionedpv-gnsx to disappear
Oct 12 19:46:42.868: INFO: Pod exec-volume-test-preprovisionedpv-gnsx no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-gnsx
Oct 12 19:46:42.868: INFO: Deleting pod "exec-volume-test-preprovisionedpv-gnsx" in namespace "volume-7127"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":49,"skipped":331,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
[BeforeEach] [sig-storage] Multi-AZ Cluster Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:46:46.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename multi-az
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 34 lines ...
STEP: Destroying namespace "apply-7509" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request","total":-1,"completed":16,"skipped":142,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Simple pod should handle in-cluster config","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:46:48.071: INFO: Driver local doesn't support ext4 -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-ad1ce104-518d-4e09-8fe1-a3873551e2e4
STEP: Creating a pod to test consume configMaps
Oct 12 19:46:45.854: INFO: Waiting up to 5m0s for pod "pod-configmaps-b76c44b8-f30d-4f2b-80b6-f69a74d59afa" in namespace "configmap-9045" to be "Succeeded or Failed"
Oct 12 19:46:45.962: INFO: Pod "pod-configmaps-b76c44b8-f30d-4f2b-80b6-f69a74d59afa": Phase="Pending", Reason="", readiness=false. Elapsed: 107.847026ms
Oct 12 19:46:48.071: INFO: Pod "pod-configmaps-b76c44b8-f30d-4f2b-80b6-f69a74d59afa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217425878s
Oct 12 19:46:50.184: INFO: Pod "pod-configmaps-b76c44b8-f30d-4f2b-80b6-f69a74d59afa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330361115s
Oct 12 19:46:52.299: INFO: Pod "pod-configmaps-b76c44b8-f30d-4f2b-80b6-f69a74d59afa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444803123s
Oct 12 19:46:54.409: INFO: Pod "pod-configmaps-b76c44b8-f30d-4f2b-80b6-f69a74d59afa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.554626024s
STEP: Saw pod success
Oct 12 19:46:54.409: INFO: Pod "pod-configmaps-b76c44b8-f30d-4f2b-80b6-f69a74d59afa" satisfied condition "Succeeded or Failed"
Oct 12 19:46:54.517: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-configmaps-b76c44b8-f30d-4f2b-80b6-f69a74d59afa container agnhost-container: <nil>
STEP: delete the pod
Oct 12 19:46:54.740: INFO: Waiting for pod pod-configmaps-b76c44b8-f30d-4f2b-80b6-f69a74d59afa to disappear
Oct 12 19:46:54.849: INFO: Pod pod-configmaps-b76c44b8-f30d-4f2b-80b6-f69a74d59afa no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.979 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":255,"failed":5,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:46:55.108: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 119 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform rolling updates and roll backs of template modifications with PVCs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:286
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs","total":-1,"completed":37,"skipped":222,"failed":2,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:46:57.937: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 129 lines ...
• [SLOW TEST:16.864 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replicaset should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":34,"skipped":231,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:02.045: INFO: Only supported for providers [azure] (not aws)
... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-gssg
STEP: Creating a pod to test atomic-volume-subpath
Oct 12 19:46:27.539: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gssg" in namespace "subpath-6526" to be "Succeeded or Failed"
Oct 12 19:46:27.648: INFO: Pod "pod-subpath-test-configmap-gssg": Phase="Pending", Reason="", readiness=false. Elapsed: 109.633813ms
Oct 12 19:46:29.759: INFO: Pod "pod-subpath-test-configmap-gssg": Phase="Running", Reason="", readiness=true. Elapsed: 2.22001971s
Oct 12 19:46:31.868: INFO: Pod "pod-subpath-test-configmap-gssg": Phase="Running", Reason="", readiness=true. Elapsed: 4.329465652s
Oct 12 19:46:33.978: INFO: Pod "pod-subpath-test-configmap-gssg": Phase="Running", Reason="", readiness=true. Elapsed: 6.439692191s
Oct 12 19:46:36.088: INFO: Pod "pod-subpath-test-configmap-gssg": Phase="Running", Reason="", readiness=true. Elapsed: 8.54958852s
Oct 12 19:46:38.198: INFO: Pod "pod-subpath-test-configmap-gssg": Phase="Running", Reason="", readiness=true. Elapsed: 10.659680644s
... skipping 6 lines ...
Oct 12 19:46:52.976: INFO: Pod "pod-subpath-test-configmap-gssg": Phase="Running", Reason="", readiness=true. Elapsed: 25.437852313s
Oct 12 19:46:55.086: INFO: Pod "pod-subpath-test-configmap-gssg": Phase="Running", Reason="", readiness=true. Elapsed: 27.547614469s
Oct 12 19:46:57.196: INFO: Pod "pod-subpath-test-configmap-gssg": Phase="Running", Reason="", readiness=true. Elapsed: 29.657836737s
Oct 12 19:46:59.306: INFO: Pod "pod-subpath-test-configmap-gssg": Phase="Running", Reason="", readiness=true. Elapsed: 31.7678503s
Oct 12 19:47:01.416: INFO: Pod "pod-subpath-test-configmap-gssg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.877009896s
STEP: Saw pod success
Oct 12 19:47:01.416: INFO: Pod "pod-subpath-test-configmap-gssg" satisfied condition "Succeeded or Failed"
Oct 12 19:47:01.525: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod pod-subpath-test-configmap-gssg container test-container-subpath-configmap-gssg: <nil>
STEP: delete the pod
Oct 12 19:47:01.757: INFO: Waiting for pod pod-subpath-test-configmap-gssg to disappear
Oct 12 19:47:01.866: INFO: Pod pod-subpath-test-configmap-gssg no longer exists
STEP: Deleting pod pod-subpath-test-configmap-gssg
Oct 12 19:47:01.866: INFO: Deleting pod "pod-subpath-test-configmap-gssg" in namespace "subpath-6526"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":36,"skipped":185,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:46:27.974: INFO: >>> kubeConfig: /root/.kube/config
... skipping 6 lines ...
Oct 12 19:46:28.525: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-1706xnqv4
STEP: creating a claim
Oct 12 19:46:28.635: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-f64x
STEP: Creating a pod to test subpath
Oct 12 19:46:28.977: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-f64x" in namespace "provisioning-1706" to be "Succeeded or Failed"
Oct 12 19:46:29.087: INFO: Pod "pod-subpath-test-dynamicpv-f64x": Phase="Pending", Reason="", readiness=false. Elapsed: 109.77955ms
Oct 12 19:46:31.197: INFO: Pod "pod-subpath-test-dynamicpv-f64x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220044117s
Oct 12 19:46:33.308: INFO: Pod "pod-subpath-test-dynamicpv-f64x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330505062s
Oct 12 19:46:35.421: INFO: Pod "pod-subpath-test-dynamicpv-f64x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443095037s
Oct 12 19:46:37.532: INFO: Pod "pod-subpath-test-dynamicpv-f64x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.554262303s
Oct 12 19:46:39.644: INFO: Pod "pod-subpath-test-dynamicpv-f64x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.667004423s
Oct 12 19:46:41.757: INFO: Pod "pod-subpath-test-dynamicpv-f64x": Phase="Pending", Reason="", readiness=false. Elapsed: 12.779120732s
Oct 12 19:46:43.926: INFO: Pod "pod-subpath-test-dynamicpv-f64x": Phase="Pending", Reason="", readiness=false. Elapsed: 14.948210417s
Oct 12 19:46:46.036: INFO: Pod "pod-subpath-test-dynamicpv-f64x": Phase="Pending", Reason="", readiness=false. Elapsed: 17.058757205s
Oct 12 19:46:48.146: INFO: Pod "pod-subpath-test-dynamicpv-f64x": Phase="Pending", Reason="", readiness=false. Elapsed: 19.168929786s
Oct 12 19:46:50.257: INFO: Pod "pod-subpath-test-dynamicpv-f64x": Phase="Pending", Reason="", readiness=false. Elapsed: 21.2797572s
Oct 12 19:46:52.370: INFO: Pod "pod-subpath-test-dynamicpv-f64x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.392754096s
STEP: Saw pod success
Oct 12 19:46:52.370: INFO: Pod "pod-subpath-test-dynamicpv-f64x" satisfied condition "Succeeded or Failed"
Oct 12 19:46:52.480: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-f64x container test-container-volume-dynamicpv-f64x: <nil>
STEP: delete the pod
Oct 12 19:46:52.721: INFO: Waiting for pod pod-subpath-test-dynamicpv-f64x to disappear
Oct 12 19:46:52.830: INFO: Pod pod-subpath-test-dynamicpv-f64x no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-f64x
Oct 12 19:46:52.830: INFO: Deleting pod "pod-subpath-test-dynamicpv-f64x" in namespace "provisioning-1706"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":37,"skipped":185,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:09.176: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 14 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":50,"skipped":275,"failed":1,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:46:30.252: INFO: >>> kubeConfig: /root/.kube/config
... skipping 12 lines ...
Oct 12 19:46:34.334: INFO: PersistentVolumeClaim pvc-h6bj4 found but phase is Pending instead of Bound.
Oct 12 19:46:36.444: INFO: PersistentVolumeClaim pvc-h6bj4 found and phase=Bound (2.221156228s)
Oct 12 19:46:36.444: INFO: Waiting up to 3m0s for PersistentVolume local-hqgp5 to have phase Bound
Oct 12 19:46:36.555: INFO: PersistentVolume local-hqgp5 found and phase=Bound (110.403284ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mzbm
STEP: Creating a pod to test atomic-volume-subpath
Oct 12 19:46:36.889: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mzbm" in namespace "provisioning-1969" to be "Succeeded or Failed"
Oct 12 19:46:37.000: INFO: Pod "pod-subpath-test-preprovisionedpv-mzbm": Phase="Pending", Reason="", readiness=false. Elapsed: 110.613297ms
Oct 12 19:46:39.110: INFO: Pod "pod-subpath-test-preprovisionedpv-mzbm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220836468s
Oct 12 19:46:41.221: INFO: Pod "pod-subpath-test-preprovisionedpv-mzbm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331347126s
Oct 12 19:46:43.333: INFO: Pod "pod-subpath-test-preprovisionedpv-mzbm": Phase="Running", Reason="", readiness=true. Elapsed: 6.443215581s
Oct 12 19:46:45.443: INFO: Pod "pod-subpath-test-preprovisionedpv-mzbm": Phase="Running", Reason="", readiness=true. Elapsed: 8.553731543s
Oct 12 19:46:47.554: INFO: Pod "pod-subpath-test-preprovisionedpv-mzbm": Phase="Running", Reason="", readiness=true. Elapsed: 10.664877502s
... skipping 5 lines ...
Oct 12 19:47:00.230: INFO: Pod "pod-subpath-test-preprovisionedpv-mzbm": Phase="Running", Reason="", readiness=true. Elapsed: 23.340786259s
Oct 12 19:47:02.342: INFO: Pod "pod-subpath-test-preprovisionedpv-mzbm": Phase="Running", Reason="", readiness=true. Elapsed: 25.452762031s
Oct 12 19:47:04.454: INFO: Pod "pod-subpath-test-preprovisionedpv-mzbm": Phase="Running", Reason="", readiness=true. Elapsed: 27.56472092s
Oct 12 19:47:06.565: INFO: Pod "pod-subpath-test-preprovisionedpv-mzbm": Phase="Running", Reason="", readiness=true. Elapsed: 29.675788655s
Oct 12 19:47:08.677: INFO: Pod "pod-subpath-test-preprovisionedpv-mzbm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.787558867s
STEP: Saw pod success
Oct 12 19:47:08.677: INFO: Pod "pod-subpath-test-preprovisionedpv-mzbm" satisfied condition "Succeeded or Failed"
Oct 12 19:47:08.788: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-mzbm container test-container-subpath-preprovisionedpv-mzbm: <nil>
STEP: delete the pod
Oct 12 19:47:09.020: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mzbm to disappear
Oct 12 19:47:09.131: INFO: Pod pod-subpath-test-preprovisionedpv-mzbm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mzbm
Oct 12 19:47:09.131: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mzbm" in namespace "provisioning-1969"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":51,"skipped":275,"failed":1,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:10.683: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 169 lines ...
[BeforeEach] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446
[It] should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Oct 12 19:47:09.862: INFO: Waiting up to 5m0s for pod "pod-always-succeedca2c5545-8e2a-45ba-9dda-00a52dbbb567" in namespace "pods-64" to be "Succeeded or Failed"
Oct 12 19:47:09.972: INFO: Pod "pod-always-succeedca2c5545-8e2a-45ba-9dda-00a52dbbb567": Phase="Pending", Reason="", readiness=false. Elapsed: 109.30811ms
Oct 12 19:47:12.082: INFO: Pod "pod-always-succeedca2c5545-8e2a-45ba-9dda-00a52dbbb567": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219379223s
STEP: Saw pod success
Oct 12 19:47:12.082: INFO: Pod "pod-always-succeedca2c5545-8e2a-45ba-9dda-00a52dbbb567" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:47:14.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":38,"skipped":192,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:14.557: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:47:15.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8034" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":39,"skipped":197,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:16.011: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 110 lines ...
Oct 12 19:46:34.368: INFO: PersistentVolumeClaim pvc-zk6m9 found and phase=Bound (108.010178ms)
STEP: Deleting the previously created pod
Oct 12 19:46:41.913: INFO: Deleting pod "pvc-volume-tester-k75mc" in namespace "csi-mock-volumes-3294"
Oct 12 19:46:42.028: INFO: Wait up to 5m0s for pod "pvc-volume-tester-k75mc" to be fully deleted
STEP: Checking CSI driver logs
Oct 12 19:46:54.361: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6Im1URHN6ZjFONEdaZXdLM2lNdnE3TkhXZndaZk9MZzFGM3lKUDE3ODNjR3cifQ.eyJhdWQiOlsia3ViZXJuZXRlcy5zdmMuZGVmYXVsdCJdLCJleHAiOjE2MzQwNjg1OTgsImlhdCI6MTYzNDA2Nzk5OCwiaXNzIjoiaHR0cHM6Ly9hcGkuaW50ZXJuYWwuZTJlLTdlMTY2NmY4ZTYtNjI2OTEudGVzdC1jbmNmLWF3cy5rOHMuaW8iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImNzaS1tb2NrLXZvbHVtZXMtMzI5NCIsInBvZCI6eyJuYW1lIjoicHZjLXZvbHVtZS10ZXN0ZXItazc1bWMiLCJ1aWQiOiIwOWZhOTM2YS04NDIwLTQ3YTQtYWNhYy1jNjgwOWZhZjYxYmQifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRlZmF1bHQiLCJ1aWQiOiJiMDRhZDNiYi04OWIyLTRlOWEtOTFkNy1kNTdiNjM4NDhhYjYifX0sIm5iZiI6MTYzNDA2Nzk5OCwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmNzaS1tb2NrLXZvbHVtZXMtMzI5NDpkZWZhdWx0In0.rAaNdjh047N4AOZFjDMWuda2f7XwZg1dOffMh1o4QolhT2tJ4ek-UVGLgeUOcRogvB5WAKvFZ3rovd07K4pditdq_WfSUebj4WIP3rVVwkHScZUIR_ldw2e92RkXQw6dGzOKXfhqmUpXR1xgmq1CFDt8aPATitszvWanHuibXJE6LQb23W1MRxIM1jE9gqYyV_HLfvQ1mndru5diPuSjMWQmWoVi0Ey41v1M8VkRlx_uQ3YJz8F8Uh-4E3JzqhN6mJQmUtczvMfR-FRUXzQ5342kvPsvInAD7npY9ecDKAVKmHmQ589rnO4tsTREbB84iJJsVRMdObv0iQWwM1l0Dw","expirationTimestamp":"2021-10-12T19:56:38Z"}}
Oct 12 19:46:54.361: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/09fa936a-8420-47a4-acac-c6809faf61bd/volumes/kubernetes.io~csi/pvc-81b46463-bbc8-4053-b107-5e17e2859c09/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-k75mc
Oct 12 19:46:54.361: INFO: Deleting pod "pvc-volume-tester-k75mc" in namespace "csi-mock-volumes-3294"
STEP: Deleting claim pvc-zk6m9
Oct 12 19:46:54.687: INFO: Waiting up to 2m0s for PersistentVolume pvc-81b46463-bbc8-4053-b107-5e17e2859c09 to get deleted
Oct 12 19:46:54.796: INFO: PersistentVolume pvc-81b46463-bbc8-4053-b107-5e17e2859c09 was removed
STEP: Deleting storageclass csi-mock-volumes-3294-scj8hrj
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374
    token should be plumbed down when csiServiceAccountTokenEnabled=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":28,"skipped":256,"failed":3,"failures":["[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","[sig-network] DNS should support configurable pod resolv.conf","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:47:02.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-f76ac390-fb47-4452-9d17-c32fec2a2252
STEP: Creating a pod to test consume secrets
Oct 12 19:47:02.985: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-13de7e71-74f0-4416-a186-95acaabe6845" in namespace "projected-8490" to be "Succeeded or Failed"
Oct 12 19:47:03.095: INFO: Pod "pod-projected-secrets-13de7e71-74f0-4416-a186-95acaabe6845": Phase="Pending", Reason="", readiness=false. Elapsed: 110.097159ms
Oct 12 19:47:05.205: INFO: Pod "pod-projected-secrets-13de7e71-74f0-4416-a186-95acaabe6845": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220011183s
Oct 12 19:47:07.316: INFO: Pod "pod-projected-secrets-13de7e71-74f0-4416-a186-95acaabe6845": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330982258s
Oct 12 19:47:09.425: INFO: Pod "pod-projected-secrets-13de7e71-74f0-4416-a186-95acaabe6845": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440492436s
Oct 12 19:47:11.535: INFO: Pod "pod-projected-secrets-13de7e71-74f0-4416-a186-95acaabe6845": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550250865s
Oct 12 19:47:13.645: INFO: Pod "pod-projected-secrets-13de7e71-74f0-4416-a186-95acaabe6845": Phase="Pending", Reason="", readiness=false. Elapsed: 10.659744122s
Oct 12 19:47:15.755: INFO: Pod "pod-projected-secrets-13de7e71-74f0-4416-a186-95acaabe6845": Phase="Pending", Reason="", readiness=false. Elapsed: 12.76998963s
Oct 12 19:47:17.865: INFO: Pod "pod-projected-secrets-13de7e71-74f0-4416-a186-95acaabe6845": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.880463972s
STEP: Saw pod success
Oct 12 19:47:17.866: INFO: Pod "pod-projected-secrets-13de7e71-74f0-4416-a186-95acaabe6845" satisfied condition "Succeeded or Failed"
Oct 12 19:47:17.974: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-projected-secrets-13de7e71-74f0-4416-a186-95acaabe6845 container secret-volume-test: <nil>
STEP: delete the pod
Oct 12 19:47:18.198: INFO: Waiting for pod pod-projected-secrets-13de7e71-74f0-4416-a186-95acaabe6845 to disappear
Oct 12 19:47:18.308: INFO: Pod pod-projected-secrets-13de7e71-74f0-4416-a186-95acaabe6845 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 99 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:47:19.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7191" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":40,"skipped":201,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:19.473: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 110 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474

    Requires at least 2 nodes (not 0)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":256,"failed":3,"failures":["[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","[sig-network] DNS should support configurable pod resolv.conf","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:47:18.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:47:19.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7461" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":30,"skipped":256,"failed":3,"failures":["[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","[sig-network] DNS should support configurable pod resolv.conf","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:20.095: INFO: Only supported for providers [vsphere] (not aws)
... skipping 221 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with pvc data source
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:238
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":-1,"completed":27,"skipped":148,"failed":1,"failures":["[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted."]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:20.640: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":54,"skipped":344,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:21.850: INFO: Only supported for providers [gce gke] (not aws)
... skipping 20 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:47:19.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:253
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:47:22.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2108" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":41,"skipped":207,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 86 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:47:23.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5407" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":31,"skipped":267,"failed":3,"failures":["[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","[sig-network] DNS should support configurable pod resolv.conf","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

S
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":21,"skipped":170,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should implement service.kubernetes.io/headless","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:47:17.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 164 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":10,"skipped":64,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:46:44.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Oct 12 19:46:45.188: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 12 19:46:45.409: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7657" in namespace "provisioning-7657" to be "Succeeded or Failed"
Oct 12 19:46:45.548: INFO: Pod "hostpath-symlink-prep-provisioning-7657": Phase="Pending", Reason="", readiness=false. Elapsed: 138.911486ms
Oct 12 19:46:47.660: INFO: Pod "hostpath-symlink-prep-provisioning-7657": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251021241s
Oct 12 19:46:49.770: INFO: Pod "hostpath-symlink-prep-provisioning-7657": Phase="Pending", Reason="", readiness=false. Elapsed: 4.361125496s
Oct 12 19:46:51.880: INFO: Pod "hostpath-symlink-prep-provisioning-7657": Phase="Pending", Reason="", readiness=false. Elapsed: 6.470895121s
Oct 12 19:46:53.990: INFO: Pod "hostpath-symlink-prep-provisioning-7657": Phase="Pending", Reason="", readiness=false. Elapsed: 8.581064151s
Oct 12 19:46:56.100: INFO: Pod "hostpath-symlink-prep-provisioning-7657": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.691166252s
STEP: Saw pod success
Oct 12 19:46:56.101: INFO: Pod "hostpath-symlink-prep-provisioning-7657" satisfied condition "Succeeded or Failed"
Oct 12 19:46:56.101: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7657" in namespace "provisioning-7657"
Oct 12 19:46:56.217: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7657" to be fully deleted
Oct 12 19:46:56.326: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-kk56
STEP: Creating a pod to test atomic-volume-subpath
Oct 12 19:46:56.436: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-kk56" in namespace "provisioning-7657" to be "Succeeded or Failed"
Oct 12 19:46:56.545: INFO: Pod "pod-subpath-test-inlinevolume-kk56": Phase="Pending", Reason="", readiness=false. Elapsed: 109.049112ms
Oct 12 19:46:58.656: INFO: Pod "pod-subpath-test-inlinevolume-kk56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21969002s
Oct 12 19:47:00.767: INFO: Pod "pod-subpath-test-inlinevolume-kk56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330185705s
Oct 12 19:47:02.876: INFO: Pod "pod-subpath-test-inlinevolume-kk56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439869484s
Oct 12 19:47:04.987: INFO: Pod "pod-subpath-test-inlinevolume-kk56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550581155s
Oct 12 19:47:07.100: INFO: Pod "pod-subpath-test-inlinevolume-kk56": Phase="Running", Reason="", readiness=true. Elapsed: 10.663196008s
... skipping 2 lines ...
Oct 12 19:47:13.442: INFO: Pod "pod-subpath-test-inlinevolume-kk56": Phase="Running", Reason="", readiness=true. Elapsed: 17.005868781s
Oct 12 19:47:15.553: INFO: Pod "pod-subpath-test-inlinevolume-kk56": Phase="Running", Reason="", readiness=true. Elapsed: 19.116657769s
Oct 12 19:47:17.663: INFO: Pod "pod-subpath-test-inlinevolume-kk56": Phase="Running", Reason="", readiness=true. Elapsed: 21.226678128s
Oct 12 19:47:19.775: INFO: Pod "pod-subpath-test-inlinevolume-kk56": Phase="Running", Reason="", readiness=true. Elapsed: 23.33824923s
Oct 12 19:47:21.884: INFO: Pod "pod-subpath-test-inlinevolume-kk56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.447733179s
STEP: Saw pod success
Oct 12 19:47:21.884: INFO: Pod "pod-subpath-test-inlinevolume-kk56" satisfied condition "Succeeded or Failed"
Oct 12 19:47:21.993: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-kk56 container test-container-subpath-inlinevolume-kk56: <nil>
STEP: delete the pod
Oct 12 19:47:22.219: INFO: Waiting for pod pod-subpath-test-inlinevolume-kk56 to disappear
Oct 12 19:47:22.327: INFO: Pod pod-subpath-test-inlinevolume-kk56 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-kk56
Oct 12 19:47:22.328: INFO: Deleting pod "pod-subpath-test-inlinevolume-kk56" in namespace "provisioning-7657"
STEP: Deleting pod
Oct 12 19:47:22.436: INFO: Deleting pod "pod-subpath-test-inlinevolume-kk56" in namespace "provisioning-7657"
Oct 12 19:47:22.659: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7657" in namespace "provisioning-7657" to be "Succeeded or Failed"
Oct 12 19:47:22.768: INFO: Pod "hostpath-symlink-prep-provisioning-7657": Phase="Pending", Reason="", readiness=false. Elapsed: 109.037611ms
Oct 12 19:47:24.881: INFO: Pod "hostpath-symlink-prep-provisioning-7657": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.222019575s
STEP: Saw pod success
Oct 12 19:47:24.881: INFO: Pod "hostpath-symlink-prep-provisioning-7657" satisfied condition "Succeeded or Failed"
Oct 12 19:47:24.881: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7657" in namespace "provisioning-7657"
Oct 12 19:47:24.996: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7657" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:47:25.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7657" for this suite.
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":11,"skipped":64,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:25.339: INFO: Only supported for providers [azure] (not aws)
... skipping 154 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-5f96b3b5-3068-407a-b96e-1656ee5552ac
STEP: Creating a pod to test consume secrets
Oct 12 19:47:23.063: INFO: Waiting up to 5m0s for pod "pod-secrets-bd79ec1e-d77a-4203-a1d6-577e0258d3e8" in namespace "secrets-8734" to be "Succeeded or Failed"
Oct 12 19:47:23.171: INFO: Pod "pod-secrets-bd79ec1e-d77a-4203-a1d6-577e0258d3e8": Phase="Pending", Reason="", readiness=false. Elapsed: 108.774776ms
Oct 12 19:47:25.284: INFO: Pod "pod-secrets-bd79ec1e-d77a-4203-a1d6-577e0258d3e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.220890736s
STEP: Saw pod success
Oct 12 19:47:25.284: INFO: Pod "pod-secrets-bd79ec1e-d77a-4203-a1d6-577e0258d3e8" satisfied condition "Succeeded or Failed"
Oct 12 19:47:25.392: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod pod-secrets-bd79ec1e-d77a-4203-a1d6-577e0258d3e8 container secret-volume-test: <nil>
STEP: delete the pod
Oct 12 19:47:25.621: INFO: Waiting for pod pod-secrets-bd79ec1e-d77a-4203-a1d6-577e0258d3e8 to disappear
Oct 12 19:47:25.730: INFO: Pod pod-secrets-bd79ec1e-d77a-4203-a1d6-577e0258d3e8 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:47:25.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8734" for this suite.
STEP: Destroying namespace "secret-namespace-3449" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":352,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
... skipping 53 lines ...
Oct 12 19:46:12.391: INFO: PersistentVolumeClaim csi-hostpathwldr9 found but phase is Pending instead of Bound.
Oct 12 19:46:14.498: INFO: PersistentVolumeClaim csi-hostpathwldr9 found but phase is Pending instead of Bound.
Oct 12 19:46:16.605: INFO: PersistentVolumeClaim csi-hostpathwldr9 found but phase is Pending instead of Bound.
Oct 12 19:46:18.712: INFO: PersistentVolumeClaim csi-hostpathwldr9 found and phase=Bound (6.428511267s)
STEP: Expanding non-expandable pvc
Oct 12 19:46:18.927: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Oct 12 19:46:19.143: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 12 19:46:21.358: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 12 19:46:23.359: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 12 19:46:25.359: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 12 19:46:27.364: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 12 19:46:29.358: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 12 19:46:31.359: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 12 19:46:33.359: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 12 19:46:35.358: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 12 19:46:37.365: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 12 19:46:39.363: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 12 19:46:41.358: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 12 19:46:43.360: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 12 19:46:45.359: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 12 19:46:47.358: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 12 19:46:49.363: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct 12 19:46:49.577: INFO: Error updating pvc csi-hostpathwldr9: persistentvolumeclaims "csi-hostpathwldr9" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Oct 12 19:46:49.577: INFO: Deleting PersistentVolumeClaim "csi-hostpathwldr9"
Oct 12 19:46:49.690: INFO: Waiting up to 5m0s for PersistentVolume pvc-ffa941d0-cd87-4509-9726-bce3f448fe54 to get deleted
Oct 12 19:46:49.798: INFO: PersistentVolume pvc-ffa941d0-cd87-4509-9726-bce3f448fe54 was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-8971
... skipping 46 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":22,"skipped":184,"failed":2,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:28.057: INFO: Only supported for providers [vsphere] (not aws)
... skipping 51 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
Oct 12 19:47:23.080: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct 12 19:47:23.192: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-vq22
STEP: Creating a pod to test subpath
Oct 12 19:47:23.304: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-vq22" in namespace "provisioning-950" to be "Succeeded or Failed"
Oct 12 19:47:23.414: INFO: Pod "pod-subpath-test-inlinevolume-vq22": Phase="Pending", Reason="", readiness=false. Elapsed: 109.492675ms
Oct 12 19:47:25.526: INFO: Pod "pod-subpath-test-inlinevolume-vq22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221861016s
Oct 12 19:47:27.637: INFO: Pod "pod-subpath-test-inlinevolume-vq22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.332916085s
STEP: Saw pod success
Oct 12 19:47:27.637: INFO: Pod "pod-subpath-test-inlinevolume-vq22" satisfied condition "Succeeded or Failed"
Oct 12 19:47:27.748: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-vq22 container test-container-subpath-inlinevolume-vq22: <nil>
STEP: delete the pod
Oct 12 19:47:27.974: INFO: Waiting for pod pod-subpath-test-inlinevolume-vq22 to disappear
Oct 12 19:47:28.083: INFO: Pod pod-subpath-test-inlinevolume-vq22 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-vq22
Oct 12 19:47:28.084: INFO: Deleting pod "pod-subpath-test-inlinevolume-vq22" in namespace "provisioning-950"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":42,"skipped":211,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:28.533: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 101 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:47:30.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-3832" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":23,"skipped":200,"failed":2,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:30.292: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 60 lines ...
      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSS
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":27,"skipped":230,"failed":3,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:46:04.740: INFO: >>> kubeConfig: /root/.kube/config
... skipping 109 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":28,"skipped":230,"failed":3,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:36.836: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 56 lines ...
• [SLOW TEST:11.403 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:318
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it","total":-1,"completed":56,"skipped":353,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:37.487: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 137 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":43,"skipped":214,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:45.000: INFO: Only supported for providers [gce gke] (not aws)
... skipping 21 lines ...
Oct 12 19:47:45.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct 12 19:47:45.670: INFO: Waiting up to 5m0s for pod "security-context-d5a091c9-af4d-4a54-add4-6e24e8987289" in namespace "security-context-3911" to be "Succeeded or Failed"
Oct 12 19:47:45.780: INFO: Pod "security-context-d5a091c9-af4d-4a54-add4-6e24e8987289": Phase="Pending", Reason="", readiness=false. Elapsed: 109.631002ms
Oct 12 19:47:47.889: INFO: Pod "security-context-d5a091c9-af4d-4a54-add4-6e24e8987289": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219464522s
STEP: Saw pod success
Oct 12 19:47:47.889: INFO: Pod "security-context-d5a091c9-af4d-4a54-add4-6e24e8987289" satisfied condition "Succeeded or Failed"
Oct 12 19:47:48.000: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod security-context-d5a091c9-af4d-4a54-add4-6e24e8987289 container test-container: <nil>
STEP: delete the pod
Oct 12 19:47:48.230: INFO: Waiting for pod security-context-d5a091c9-af4d-4a54-add4-6e24e8987289 to disappear
Oct 12 19:47:48.339: INFO: Pod security-context-d5a091c9-af4d-4a54-add4-6e24e8987289 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:47:48.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-3911" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":44,"skipped":217,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:48.577: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 84 lines ...
Oct 12 19:47:35.864: INFO: PersistentVolumeClaim pvc-zrs6f found but phase is Pending instead of Bound.
Oct 12 19:47:37.973: INFO: PersistentVolumeClaim pvc-zrs6f found and phase=Bound (2.217998913s)
Oct 12 19:47:37.973: INFO: Waiting up to 3m0s for PersistentVolume local-6wtlf to have phase Bound
Oct 12 19:47:38.084: INFO: PersistentVolume local-6wtlf found and phase=Bound (111.243213ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nwkj
STEP: Creating a pod to test subpath
Oct 12 19:47:38.413: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nwkj" in namespace "provisioning-9503" to be "Succeeded or Failed"
Oct 12 19:47:38.521: INFO: Pod "pod-subpath-test-preprovisionedpv-nwkj": Phase="Pending", Reason="", readiness=false. Elapsed: 108.676004ms
Oct 12 19:47:40.632: INFO: Pod "pod-subpath-test-preprovisionedpv-nwkj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219104541s
Oct 12 19:47:42.742: INFO: Pod "pod-subpath-test-preprovisionedpv-nwkj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329536553s
Oct 12 19:47:44.852: INFO: Pod "pod-subpath-test-preprovisionedpv-nwkj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439091467s
Oct 12 19:47:46.962: INFO: Pod "pod-subpath-test-preprovisionedpv-nwkj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.549053207s
STEP: Saw pod success
Oct 12 19:47:46.962: INFO: Pod "pod-subpath-test-preprovisionedpv-nwkj" satisfied condition "Succeeded or Failed"
Oct 12 19:47:47.071: INFO: Trying to get logs from node ip-172-20-47-216.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-nwkj container test-container-subpath-preprovisionedpv-nwkj: <nil>
STEP: delete the pod
Oct 12 19:47:47.295: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nwkj to disappear
Oct 12 19:47:47.404: INFO: Pod pod-subpath-test-preprovisionedpv-nwkj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nwkj
Oct 12 19:47:47.404: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nwkj" in namespace "provisioning-9503"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":35,"skipped":254,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:47:48.966: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 62 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":21,"skipped":135,"failed":4,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:42:50.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
Oct 12 19:44:54.144: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod dns-634/dns-test-f45016bd-04bf-4702-a8fa-f50460c10228: the server is currently unable to handle the request (get pods dns-test-f45016bd-04bf-4702-a8fa-f50460c10228)
Oct 12 19:45:24.258: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-634.svc.cluster.local from pod dns-634/dns-test-f45016bd-04bf-4702-a8fa-f50460c10228: the server is currently unable to handle the request (get pods dns-test-f45016bd-04bf-4702-a8fa-f50460c10228)
Oct 12 19:45:54.369: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-634/dns-test-f45016bd-04bf-4702-a8fa-f50460c10228: the server is currently unable to handle the request (get pods dns-test-f45016bd-04bf-4702-a8fa-f50460c10228)
Oct 12 19:46:24.484: INFO: Unable to read wheezy_udp@PodARecord from pod dns-634/dns-test-f45016bd-04bf-4702-a8fa-f50460c10228: the server is currently unable to handle the request (get pods dns-test-f45016bd-04bf-4702-a8fa-f50460c10228)
Oct 12 19:46:54.595: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-634/dns-test-f45016bd-04bf-4702-a8fa-f50460c10228: the server is currently unable to handle the request (get pods dns-test-f45016bd-04bf-4702-a8fa-f50460c10228)
Oct 12 19:47:24.706: INFO: Unable to read jessie_udp@kubernetes.default from pod dns-634/dns-test-f45016bd-04bf-4702-a8fa-f50460c10228: the server is currently unable to handle the request (get pods dns-test-f45016bd-04bf-4702-a8fa-f50460c10228)
Oct 12 19:47:53.705: FAIL: Unable to read jessie_tcp@kubernetes.default from pod dns-634/dns-test-f45016bd-04bf-4702-a8fa-f50460c10228: Get "https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io/api/v1/namespaces/dns-634/pods/dns-test-f45016bd-04bf-4702-a8fa-f50460c10228/proxy/results/jessie_tcp@kubernetes.default": context deadline exceeded

Full Stack Trace
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00249bd48, 0x299a700, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003865068, 0xc00249bd48, 0xc003865068, 0xc00249bd48)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0022e2a80, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
E1012 19:47:53.706577    5429 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Oct 12 19:47:53.706: Unable to read jessie_tcp@kubernetes.default from pod dns-634/dns-test-f45016bd-04bf-4702-a8fa-f50460c10228: Get \"https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io/api/v1/namespaces/dns-634/pods/dns-test-f45016bd-04bf-4702-a8fa-f50460c10228/proxy/results/jessie_tcp@kubernetes.default\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00249bd48, 0x299a700, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003865068, 0xc00249bd48, 0xc003865068, 0xc00249bd48)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc00249bd48, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc001d79f00, 0x10, 0x10, 0x6ed05c6, 0x7, 0xc0031e2800, 0x779f8f8, 0xc002a3cb00, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0011dda20, 0xc0031e2800, 0xc001d79f00, 0x10, 
0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.3()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:107 +0x68f\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc0022e2a80)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc0022e2a80)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc0022e2a80, 0x70e7b58)\n\t/usr/local/go/src/testing/testing.go:1193 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1238 +0x2b3"} (
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.

But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

... skipping 5 lines ...
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a6f0a0, 0xc003570340)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x6a6f0a0, 0xc003570340)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00137c160, 0x151, 0x868a4a4, 0x7d, 0xd3, 0xc001213800, 0x800)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61c84e0, 0x75c1ba0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc00137c160, 0x151, 0xc00249b788, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00137c160, 0x151, 0xc00249b870, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Failf(0x6f73783, 0x24, 0xc00249bad0, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc003865000, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00249bd48, 0x299a700, 0x0, 0x0)
... skipping 318 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90

  Oct 12 19:47:53.706: Unable to read jessie_tcp@kubernetes.default from pod dns-634/dns-test-f45016bd-04bf-4702-a8fa-f50460c10228: Get "https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io/api/v1/namespaces/dns-634/pods/dns-test-f45016bd-04bf-4702-a8fa-f50460c10228/proxy/results/jessie_tcp@kubernetes.default": context deadline exceeded

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211
------------------------------
{"msg":"FAILED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":21,"skipped":135,"failed":5,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]"]}

SS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:22.839 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":29,"skipped":238,"failed":3,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-api-machinery] API priority and fairness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 245 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":17,"skipped":151,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Simple pod should handle in-cluster config","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:48:08.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:48:09.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-424" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":18,"skipped":151,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Simple pod should handle in-cluster config","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:48:10.036: INFO: Only supported for providers [vsphere] (not aws)
... skipping 34 lines ...
Oct 12 19:48:09.131: INFO: Creating a PV followed by a PVC
Oct 12 19:48:09.349: INFO: Waiting for PV local-pvz5szr to bind to PVC pvc-5dhrl
Oct 12 19:48:09.349: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-5dhrl] to have phase Bound
Oct 12 19:48:09.457: INFO: PersistentVolumeClaim pvc-5dhrl found and phase=Bound (108.303612ms)
Oct 12 19:48:09.458: INFO: Waiting up to 3m0s for PersistentVolume local-pvz5szr to have phase Bound
Oct 12 19:48:09.566: INFO: PersistentVolume local-pvz5szr found and phase=Bound (108.2332ms)
[It] should fail scheduling due to different NodeSelector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
STEP: local-volume-type: dir
STEP: Initializing test volumes
Oct 12 19:48:09.782: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-765fa21a-28b0-43a1-971e-657e1333eeae] Namespace:persistent-local-volumes-test-4233 PodName:hostexec-ip-172-20-57-193.eu-central-1.compute.internal-tms6v ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 12 19:48:09.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
... skipping 22 lines ...

• [SLOW TEST:11.387 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeSelector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":30,"skipped":242,"failed":3,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

SSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 42 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should implement legacy replacement when the update strategy is OnDelete
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:501
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":32,"skipped":268,"failed":3,"failures":["[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","[sig-network] DNS should support configurable pod resolv.conf","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:48:17.893: INFO: Only supported for providers [gce gke] (not aws)
... skipping 62 lines ...
Oct 12 19:29:26.151: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:29:56.262: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:30:26.375: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:30:56.487: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:31:26.598: INFO: Unable to read jessie_udp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:31:56.710: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:31:56.710: INFO: Lookups using dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 12 19:32:31.821: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:33:01.932: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:33:32.042: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:34:02.153: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:34:32.263: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:35:02.374: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:35:32.485: INFO: Unable to read jessie_udp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:36:02.597: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:36:02.597: INFO: Lookups using dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 12 19:36:36.827: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:37:06.937: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:37:37.048: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:38:07.159: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:38:37.270: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:39:07.380: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:39:37.491: INFO: Unable to read jessie_udp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:40:07.602: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:40:07.602: INFO: Lookups using dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 12 19:40:41.821: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:41:11.932: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:41:42.043: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:42:12.154: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:42:42.265: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:43:12.375: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:43:42.486: INFO: Unable to read jessie_udp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:44:12.597: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:44:12.597: INFO: Lookups using dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 12 19:44:42.707: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:45:12.817: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:45:42.927: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:46:13.037: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:46:43.149: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:47:13.258: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:47:43.398: INFO: Unable to read jessie_udp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:48:13.546: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076: the server is currently unable to handle the request (get pods dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076)
Oct 12 19:48:13.547: INFO: Lookups using dns-4719/dns-test-6fc4c129-9aa4-4f9e-a11f-0172bd84e076 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-4719.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 12 19:48:13.547: FAIL: Unexpected error:
    <*errors.errorString | 0xc000336250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 290 lines ...
• Failure [1240.225 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 12 19:48:13.547: Unexpected error:
      <*errors.errorString | 0xc000336250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":133,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:48:18.961: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 162 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:48:23.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-4686" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":14,"skipped":154,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:48:23.460: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 75 lines ...
Oct 12 19:46:41.662: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-666 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://100.68.237.82:80 2>&1 || true; echo; done'
Oct 12 19:48:18.218: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 
1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - 
http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ 
echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - 
http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.237.82:80\n+ true\n+ echo\n"
Oct 12 19:48:18.218: INFO: stdout: "wget: download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nup-down-1-nj5vx\nwget: download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nwget: download timed out\n\nup-down-1-nj5vx\nwget: download timed out\n\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nwget: 
download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nwget: download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nwget: download timed out\n\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nup-down-1-nj5vx\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-nj5vx\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\n"
Oct 12 19:48:18.218: INFO: Unable to reach the following endpoints of service 100.68.237.82: map[up-down-1-wpkj5:{} up-down-1-zs8rm:{}]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-666
STEP: Deleting pod verify-service-up-exec-pod-g8s5d in namespace services-666
Oct 12 19:48:23.444: FAIL: Unexpected error:
    <*errors.errorString | 0xc0033aa090>: {
        s: "service verification failed for: 100.68.237.82\nexpected [up-down-1-nj5vx up-down-1-wpkj5 up-down-1-zs8rm]\nreceived [up-down-1-nj5vx wget: download timed out]",
    }
    service verification failed for: 100.68.237.82
    expected [up-down-1-nj5vx up-down-1-wpkj5 up-down-1-zs8rm]
    received [up-down-1-nj5vx wget: download timed out]
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.8()
... skipping 317 lines ...
• Failure [328.149 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015

  Oct 12 19:48:23.444: Unexpected error:
      <*errors.errorString | 0xc0033aa090>: {
          s: "service verification failed for: 100.68.237.82\nexpected [up-down-1-nj5vx up-down-1-wpkj5 up-down-1-zs8rm]\nreceived [up-down-1-nj5vx wget: download timed out]",
      }
      service verification failed for: 100.68.237.82
      expected [up-down-1-nj5vx up-down-1-wpkj5 up-down-1-zs8rm]
      received [up-down-1-nj5vx wget: download timed out]
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1031
------------------------------
{"msg":"FAILED [sig-network] Services should be able to up and down services","total":-1,"completed":25,"skipped":150,"failed":3,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","[sig-network] Services should be able to up and down services"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 17 lines ...
Oct 12 19:48:21.094: INFO: PersistentVolumeClaim pvc-ppt7z found but phase is Pending instead of Bound.
Oct 12 19:48:23.203: INFO: PersistentVolumeClaim pvc-ppt7z found and phase=Bound (6.437584465s)
Oct 12 19:48:23.203: INFO: Waiting up to 3m0s for PersistentVolume local-6hhlt to have phase Bound
Oct 12 19:48:23.312: INFO: PersistentVolume local-6hhlt found and phase=Bound (108.596731ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-h697
STEP: Creating a pod to test subpath
Oct 12 19:48:23.638: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-h697" in namespace "provisioning-4024" to be "Succeeded or Failed"
Oct 12 19:48:23.747: INFO: Pod "pod-subpath-test-preprovisionedpv-h697": Phase="Pending", Reason="", readiness=false. Elapsed: 108.556226ms
Oct 12 19:48:25.857: INFO: Pod "pod-subpath-test-preprovisionedpv-h697": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218529095s
Oct 12 19:48:27.968: INFO: Pod "pod-subpath-test-preprovisionedpv-h697": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329373043s
STEP: Saw pod success
Oct 12 19:48:27.968: INFO: Pod "pod-subpath-test-preprovisionedpv-h697" satisfied condition "Succeeded or Failed"
Oct 12 19:48:28.076: INFO: Trying to get logs from node ip-172-20-61-115.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-h697 container test-container-subpath-preprovisionedpv-h697: <nil>
STEP: delete the pod
Oct 12 19:48:28.300: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-h697 to disappear
Oct 12 19:48:28.408: INFO: Pod pod-subpath-test-preprovisionedpv-h697 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-h697
Oct 12 19:48:28.408: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-h697" in namespace "provisioning-4024"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
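The `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above come from the e2e framework polling the pod's phase until it reaches a terminal state or a deadline passes. As a minimal sketch (not the real framework code — `get_phase` is a caller-supplied stand-in for the actual API lookup, and the timeout/poll intervals are illustrative):

```python
import time

def wait_for_pod_condition(get_phase, timeout_s=300, poll_s=2):
    """Poll get_phase() until it returns a terminal pod phase or timeout_s elapses.

    get_phase is a hypothetical callable standing in for a real pod-status
    lookup; returns the terminal phase ("Succeeded" or "Failed"), or raises
    TimeoutError, mirroring the log lines above.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(poll_s)
    raise TimeoutError(f"pod did not reach a terminal phase within {timeout_s}s")
```

Each `Elapsed: ...` line in the log corresponds to one iteration of such a loop observing `Phase="Pending"` before the final `Phase="Succeeded"`.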
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":31,"skipped":249,"failed":3,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 12 19:48:29.930: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
Oct 12 19:48:29.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 12 19:48:30.601: INFO: Waiting up to 5m0s for pod "pod-ea3059d5-2faf-4e98-8480-2908e80f3edc" in namespace "emptydir-9015" to be "Succeeded or Failed"
Oct 12 19:48:30.710: INFO: Pod "pod-ea3059d5-2faf-4e98-8480-2908e80f3edc": Phase="Pending", Reason="", readiness=false. Elapsed: 108.241882ms
Oct 12 19:48:32.819: INFO: Pod "pod-ea3059d5-2faf-4e98-8480-2908e80f3edc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.217036669s
STEP: Saw pod success
Oct 12 19:48:32.819: INFO: Pod "pod-ea3059d5-2faf-4e98-8480-2908e80f3edc" satisfied condition "Succeeded or Failed"
Oct 12 19:48:32.927: INFO: Trying to get logs from node ip-172-20-57-193.eu-central-1.compute.internal pod pod-ea3059d5-2faf-4e98-8480-2908e80f3edc container test-container: <nil>
STEP: delete the pod
Oct 12 19:48:33.151: INFO: Waiting for pod pod-ea3059d5-2faf-4e98-8480-2908e80f3edc to disappear
Oct 12 19:48:33.259: INFO: Pod pod-ea3059d5-2faf-4e98-8480-2908e80f3edc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:48:33.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9015" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":252,"failed":3,"failures":["[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

SSS
------------------------------
Oct 12 19:48:33.502: INFO: Running AfterSuite actions on all nodes


... skipping 159 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":52,"skipped":307,"failed":1,"failures":["[sig-network] Conntrack should drop INVALID conntrack entries"]}
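The `{"msg": ...}` lines interleaved in this log are machine-readable per-spec summaries emitted alongside the human-readable output. A hedged sketch of tallying them (assuming each summary occupies a single line beginning with `{"msg":`, as in the output above):

```python
import json

def tally_results(log_lines):
    """Count PASSED/FAILED specs from the JSON summary lines in a build log.

    Lines that do not start with '{"msg":' (ordinary log output) are ignored.
    Returns a (passed, failed) tuple.
    """
    passed = failed = 0
    for line in log_lines:
        line = line.strip()
        if not line.startswith('{"msg":'):
            continue
        msg = json.loads(line)["msg"]
        if msg.startswith("PASSED"):
            passed += 1
        elif msg.startswith("FAILED"):
            failed += 1
    return passed, failed
```

The `failures` array in each summary accumulates every spec that has failed so far in that worker process, which is why the same failure names repeat across summaries.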
Oct 12 19:48:51.479: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 46 lines ...

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Oct 12 19:48:54.654: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":233,"failed":3,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:46:27.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
STEP: creating replication controller externalname-service in namespace services-377
I1012 19:46:28.297954    5578 runners.go:190] Created replication controller with name: externalname-service, namespace: services-377, replica count: 2
I1012 19:46:31.449772    5578 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 12 19:46:31.449: INFO: Creating new exec pod
Oct 12 19:46:34.893: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-377 exec execpodb86qz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct 12 19:46:41.106: INFO: rc: 1
Oct 12 19:46:41.106: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-377 exec execpodb86qz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 252 lines ...
Oct 12 19:48:47.355: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-377 exec execpodb86qz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct 12 19:48:53.545: INFO: rc: 1
Oct 12 19:48:53.545: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-377 exec execpodb86qz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:48:53.545: FAIL: Unexpected error:
    <*errors.errorString | 0xc004120180>: {
        s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
occurred

... skipping 273 lines ...
• Failure [151.537 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 12 19:48:53.545: Unexpected error:
      <*errors.errorString | 0xc004120180>: {
          s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351
------------------------------
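The failed reachability check above retries the `nc` probe roughly once a second until an overall 2m0s deadline passes, then raises the "service is not reachable within 2m0s timeout" error. A minimal sketch of that retry shape (a stand-in, not the framework's actual implementation — the real check shells out to `nc -v -t -w 2 <service> <port>` inside an exec pod, here replaced by a hypothetical `probe` callable):

```python
import time

def check_reachable(probe, timeout_s=120, retry_s=1):
    """Retry probe() until it succeeds or the overall deadline passes.

    probe returns True when the endpoint answered; False triggers the
    "Retrying..." path seen in the log. Returns False on timeout, which
    the caller turns into the "service is not reachable" failure.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(retry_s)
    return False
```

Because each `nc` attempt itself blocks for several seconds on DNS (`getaddrinfo: Try again`), the loop above fits far fewer attempts into the 2m0s window than the 1s retry interval alone would suggest, matching the ~7s spacing of the retries in the log.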
{"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":26,"skipped":233,"failed":4,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
Oct 12 19:48:58.840: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:99.651 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete successful finished jobs with limit of one successful job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:283
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete successful finished jobs with limit of one successful job","total":-1,"completed":22,"skipped":195,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Services should implement service.kubernetes.io/headless","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}
Oct 12 19:49:05.042: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":38,"skipped":236,"failed":2,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
Oct 12 19:49:14.094: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":36,"skipped":272,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access"]}
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 12 19:48:02.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 75 lines ...
Tue Oct 12 19:48:59 UTC 2021 Try: 23

Tue Oct 12 19:49:04 UTC 2021 Try: 24

Tue Oct 12 19:49:09 UTC 2021 Try: 25

Oct 12 19:49:10.951: FAIL: Failed to connect to backend 1

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002b9ac00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc002b9ac00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
... skipping 220 lines ...
• Failure [73.841 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203

  Oct 12 19:49:10.952: Failed to connect to backend 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":36,"skipped":272,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service"]}
Oct 12 19:49:16.165: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 58 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should adopt matching orphans and release non-matching pods
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:165
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":15,"skipped":157,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
Oct 12 19:49:19.968: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
Oct 12 19:47:52.871: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2620
Oct 12 19:47:52.984: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2620
Oct 12 19:47:53.095: INFO: creating *v1.StatefulSet: csi-mock-volumes-2620-1233/csi-mockplugin
Oct 12 19:47:53.214: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2620
Oct 12 19:47:53.325: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2620"
Oct 12 19:47:53.435: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2620 to register on node ip-172-20-57-193.eu-central-1.compute.internal
I1012 19:47:57.539397    5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I1012 19:47:57.649049    5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2620","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1012 19:47:57.759477    5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I1012 19:47:57.868459    5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I1012 19:47:58.111393    5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2620","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1012 19:47:58.988953    5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-2620"},"Error":"","FullError":null}
STEP: Creating pod
Oct 12 19:48:03.601: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I1012 19:48:03.843554    5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d64d101a-6444-4725-ae96-6e2f13e0c427","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I1012 19:48:03.959715    5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d64d101a-6444-4725-ae96-6e2f13e0c427","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-d64d101a-6444-4725-ae96-6e2f13e0c427"}}},"Error":"","FullError":null}
I1012 19:48:05.805745    5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct 12 19:48:05.920: INFO: >>> kubeConfig: /root/.kube/config
I1012 19:48:06.660963    5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d64d101a-6444-4725-ae96-6e2f13e0c427/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d64d101a-6444-4725-ae96-6e2f13e0c427","storage.kubernetes.io/csiProvisionerIdentity":"1634068077920-8081-csi-mock-csi-mock-volumes-2620"}},"Response":{},"Error":"","FullError":null}
I1012 19:48:07.101210    5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct 12 19:48:07.210: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:48:07.934: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:48:08.657: INFO: >>> kubeConfig: /root/.kube/config
I1012 19:48:09.395174    5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d64d101a-6444-4725-ae96-6e2f13e0c427/globalmount","target_path":"/var/lib/kubelet/pods/7cf8cd8f-5ab4-4e3c-8fc2-3696d8a2e2fc/volumes/kubernetes.io~csi/pvc-d64d101a-6444-4725-ae96-6e2f13e0c427/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d64d101a-6444-4725-ae96-6e2f13e0c427","storage.kubernetes.io/csiProvisionerIdentity":"1634068077920-8081-csi-mock-csi-mock-volumes-2620"}},"Response":{},"Error":"","FullError":null}
Oct 12 19:48:12.043: INFO: Deleting pod "pvc-volume-tester-g66pf" in namespace "csi-mock-volumes-2620"
Oct 12 19:48:12.155: INFO: Wait up to 5m0s for pod "pvc-volume-tester-g66pf" to be fully deleted
Oct 12 19:48:14.070: INFO: >>> kubeConfig: /root/.kube/config
I1012 19:48:14.809029    5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/7cf8cd8f-5ab4-4e3c-8fc2-3696d8a2e2fc/volumes/kubernetes.io~csi/pvc-d64d101a-6444-4725-ae96-6e2f13e0c427/mount"},"Response":{},"Error":"","FullError":null}
I1012 19:48:14.978663    5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1012 19:48:15.088182    5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d64d101a-6444-4725-ae96-6e2f13e0c427/globalmount"},"Response":{},"Error":"","FullError":null}
I1012 19:48:22.500242    5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
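The mock CSI driver records every call as `gRPCCall: <JSON>` with `Method`, `Request`, `Response`, `Error`, and `FullError` fields, which makes the sequence above easy to post-process. A small sketch (the `parse_grpc_calls` helper is illustrative, not part of the test framework; the sample lines are abbreviated from this log):

```python
import json

def parse_grpc_calls(lines):
    # Pull the JSON payload out of each "gRPCCall: {...}" log record.
    calls = []
    for line in lines:
        _, marker, payload = line.partition("gRPCCall: ")
        if marker:
            calls.append(json.loads(payload))
    return calls

sample = [
    'I1012 19:48:03.843554 5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}',
    'I1012 19:48:03.959715 5512 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{},"Response":{"volume":{"volume_id":"4"}},"Error":"","FullError":null}',
]
calls = parse_grpc_calls(sample)
methods = [c["Method"] for c in calls]
failed = [c["Method"] for c in calls if c["Error"]]
print(methods)  # two CreateVolume attempts
print(failed)   # only the first carries the injected ResourceExhausted error
```

This matches what the test exercises: the first CreateVolume is made to fail with a fake ResourceExhausted error, and the retry succeeds.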
STEP: Checking PVC events
Oct 12 19:48:23.486: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-bfqtp", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2620", SelfLink:"", UID:"d64d101a-6444-4725-ae96-6e2f13e0c427", ResourceVersion:"43555", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769664883, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00352cae0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00352caf8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00249c0c0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00249c0d0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 12 19:48:23.486: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-bfqtp", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2620", SelfLink:"", UID:"d64d101a-6444-4725-ae96-6e2f13e0c427", ResourceVersion:"43558", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769664883, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-57-193.eu-central-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00429bfe0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001e96000)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001e96018), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001e96030)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0038f3d00), VolumeMode:(*v1.PersistentVolumeMode)(0xc0038f3d10), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 12 19:48:23.486: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-bfqtp", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2620", SelfLink:"", UID:"d64d101a-6444-4725-ae96-6e2f13e0c427", ResourceVersion:"43559", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769664883, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2620", "volume.kubernetes.io/selected-node":"ip-172-20-57-193.eu-central-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0037449a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0037449c0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0037449d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0037449f0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003744a08), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003744a20)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002c26580), VolumeMode:(*v1.PersistentVolumeMode)(0xc002c26590), DataSource:(*v1.TypedLocalObjectReference)(nil)}, 
Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 12 19:48:23.487: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-bfqtp", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2620", SelfLink:"", UID:"d64d101a-6444-4725-ae96-6e2f13e0c427", ResourceVersion:"43571", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769664883, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2620", "volume.kubernetes.io/selected-node":"ip-172-20-57-193.eu-central-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003744a50), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003744a68)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003744a80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003744a98)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003744ab0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003744ac8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d64d101a-6444-4725-ae96-6e2f13e0c427", StorageClassName:(*string)(0xc002c265c0), 
VolumeMode:(*v1.PersistentVolumeMode)(0xc002c265d0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct 12 19:48:23.487: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-bfqtp", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2620", SelfLink:"", UID:"d64d101a-6444-4725-ae96-6e2f13e0c427", ResourceVersion:"43572", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769664883, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2620", "volume.kubernetes.io/selected-node":"ip-172-20-57-193.eu-central-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003744af8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003744b10)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003744b28), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003744b40)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003744b58), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003744b70)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d64d101a-6444-4725-ae96-6e2f13e0c427", StorageClassName:(*string)(0xc002c26600), 
VolumeMode:(*v1.PersistentVolumeMode)(0xc002c26610), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
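The event dumps above walk the claim's `Status.Phase` from Pending to Bound; collapsing the phase sequence to its transitions shows the binding at a glance. A throwaway sketch (the `transitions` helper is illustrative, not e2e framework code):

```python
def transitions(phases):
    # Collapse consecutive duplicates: ["Pending","Pending","Bound"] -> ["Pending","Bound"]
    out = [phases[0]]
    for p in phases[1:]:
        if p != out[-1]:
            out.append(p)
    return out

# Phases from the five PVC events above (ADDED + four MODIFIED).
seen = transitions(["Pending", "Pending", "Pending", "Pending", "Bound"])
print(seen)  # ['Pending', 'Bound']

# The requested quantity in the events, "1Gi" in BinarySI, is 1024^3 bytes.
assert 1 * 1024**3 == 1073741824
```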
... skipping 49 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    exhausted, late binding, no topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":45,"skipped":233,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
Oct 12 19:49:25.611: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-380 to expose endpoints map[hairpin:[8080]]
Oct 12 19:47:23.125: INFO: successfully validated that service hairpin-test in namespace services-380 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
Oct 12 19:47:24.125: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-380 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 12 19:47:30.424: INFO: rc: 1
Oct 12 19:47:30.424: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-380 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ nc -v -t -w 2 hairpin-test 8080
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
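The probe the test shells out to is `echo hostName | nc -v -t -w 2 hairpin-test 8080`: resolve the service name, connect over TCP with a 2 s timeout, send a line. A local sketch of the same check in Python (the `probe` helper is illustrative; it connects to a local listener rather than the hairpin-test Service, and the DNS failure mirrors the `getaddrinfo: Try again` above):

```python
import socket
import threading

def probe(host, port, timeout=2.0):
    """TCP reachability check in the spirit of `nc -t -w 2 host port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"hostName\n")
        return True
    except OSError:  # gaierror ("Try again") and timeouts both land here
        return False

# Local stand-in for the service endpoint.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=lambda: listener.accept()[0].recv(16), daemon=True).start()

ok_local = probe("127.0.0.1", port)
ok_bad = probe("no-such-host.invalid", 8080)  # name never resolves
print(ok_local, ok_bad)
```

In the log the probe never resolves `hairpin-test`, so every attempt exits 1 before a connection is even tried, pointing at cluster DNS rather than the hairpin path itself.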
... skipping 247 lines: 18 further identical retry attempts (Oct 12 19:47:31 through 19:49:36), each ending with rc: 1 and "nc: getaddrinfo: Try again" ...
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:49:36.633: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-380 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 12 19:49:42.826: INFO: rc: 1
Oct 12 19:49:42.826: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-380 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:49:42.826: FAIL: Unexpected error:
    <*errors.errorString | 0xc004a0a1e0>: {
        s: "service is not reachable within 2m0s timeout on endpoint hairpin-test:8080 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint hairpin-test:8080 over TCP protocol
occurred

... skipping 209 lines ...
• Failure [147.839 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986

  Oct 12 19:49:42.826: Unexpected error:
      <*errors.errorString | 0xc004a0a1e0>: {
          s: "service is not reachable within 2m0s timeout on endpoint hairpin-test:8080 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint hairpin-test:8080 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1012
------------------------------
{"msg":"FAILED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":49,"skipped":343,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should allow pods to hairpin back to themselves through services"]}
Oct 12 19:49:47.541: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
I1012 19:47:02.050751    5560 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1012 19:47:05.053117    5560 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1012 19:47:08.054380    5560 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 12 19:47:08.379: INFO: Creating new exec pod
Oct 12 19:47:11.817: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:47:17.980: INFO: rc: 1
Oct 12 19:47:17.980: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:47:18.980: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:47:25.219: INFO: rc: 1
Oct 12 19:47:25.219: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:47:25.981: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:47:32.176: INFO: rc: 1
Oct 12 19:47:32.176: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ nc -v -t -w 2 affinity-nodeport 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:47:32.981: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:47:39.136: INFO: rc: 1
Oct 12 19:47:39.136: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:47:39.980: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:47:46.205: INFO: rc: 1
Oct 12 19:47:46.205: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ nc -v -t -w 2 affinity-nodeport 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:47:46.980: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:47:53.140: INFO: rc: 1
Oct 12 19:47:53.140: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:47:53.981: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:48:00.168: INFO: rc: 1
Oct 12 19:48:00.168: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:48:00.981: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:48:07.425: INFO: rc: 1
Oct 12 19:48:07.425: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:48:07.980: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:48:14.127: INFO: rc: 1
Oct 12 19:48:14.127: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:48:14.981: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:48:21.136: INFO: rc: 1
Oct 12 19:48:21.136: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:48:21.981: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:48:28.144: INFO: rc: 1
Oct 12 19:48:28.144: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:48:28.981: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:48:35.222: INFO: rc: 1
Oct 12 19:48:35.222: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:48:35.981: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:48:42.164: INFO: rc: 1
Oct 12 19:48:42.164: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:48:42.980: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:48:49.180: INFO: rc: 1
Oct 12 19:48:49.180: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:48:49.980: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:48:56.170: INFO: rc: 1
Oct 12 19:48:56.170: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:48:56.981: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:49:03.214: INFO: rc: 1
Oct 12 19:49:03.214: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:49:03.980: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:49:10.150: INFO: rc: 1
Oct 12 19:49:10.151: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:49:10.980: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:49:17.271: INFO: rc: 1
Oct 12 19:49:17.271: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:49:17.980: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:49:24.174: INFO: rc: 1
Oct 12 19:49:24.174: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:49:24.174: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct 12 19:49:30.406: INFO: rc: 1
Oct 12 19:49:30.406: INFO: Service reachability failing with error: error running /tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2956 exec execpod-affinitys4l4m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 12 19:49:30.406: FAIL: Unexpected error:
    <*errors.errorString | 0xc003a2a1b0>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-nodeport:80 over TCP protocol
occurred
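The test that timed out here verifies NodePort session affinity: once the service is reachable, repeated requests should all land on the same backend pod. The pass condition reduces to something like the following (a sketch with hypothetical names, not the framework's actual code in test/e2e/network/service.go):

```python
def holds_session_affinity(hostnames: list[str], min_requests: int = 3) -> bool:
    """Session affinity holds when enough responses were collected and
    every response came from the same backend pod. In this run the
    check never got this far: name resolution failed first."""
    return len(hostnames) >= min_requests and len(set(hostnames)) == 1
```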

... skipping 233 lines ...
• Failure [175.569 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 12 19:49:30.406: Unexpected error:
      <*errors.errorString | 0xc003a2a1b0>: {
          s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint affinity-nodeport:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2572
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":28,"skipped":269,"failed":6,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
Oct 12 19:49:50.714: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
STEP: Delete the cronjob
W1012 19:45:00.312396    5481 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
STEP: Verify if cronjob does not leave jobs nor pods behind
W1012 19:45:00.422684    5481 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
STEP: Gathering metrics
W1012 19:45:00.755474    5481 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 12 19:50:00.977: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:50:00.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4112" for this suite.


• [SLOW TEST:336.996 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete jobs and pods created by cronjob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1160
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":-1,"completed":25,"skipped":205,"failed":3,"failures":["[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-network] Services should be rejected when no endpoints exist","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
Oct 12 19:50:01.206: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 98 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should not expand volume if resizingOnDriver=off, resizingOnSC=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":-1,"completed":28,"skipped":150,"failed":1,"failures":["[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted."]}
Oct 12 19:50:30.792: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 271 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  38s   default-scheduler  Successfully assigned pod-network-test-7425/netserver-3 to ip-172-20-61-115.eu-central-1.compute.internal
  Normal  Pulled     37s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    37s   kubelet            Created container webserver
  Normal  Started    37s   kubelet            Started container webserver

Oct 12 19:32:38.720: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.2.147:9080/dial?request=hostname&protocol=udp&host=100.96.4.163&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}])
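The intra-cluster UDP checks above go through the agnhost image's `/dial` endpoint on a proxy pod: the test curls the proxy, which dials the target pod and reports which hostnames answered. Assembling that probe URL from the parameters visible in the curl lines (a sketch; only the query-string keys are taken from the log, everything else is hypothetical):

```python
from urllib.parse import urlencode

def dial_probe_url(proxy_ip: str, proxy_port: int,
                   target_ip: str, target_port: int,
                   protocol: str = "udp", tries: int = 1) -> str:
    """Assemble the agnhost /dial URL used by the 'Breadth first check'
    probes; an empty 'retrieved map[]' response means the target pod
    never answered within the allotted tries."""
    query = urlencode({
        "request": "hostname",   # ask the target to report its hostname
        "protocol": protocol,
        "host": target_ip,
        "port": target_port,
        "tries": tries,
    })
    return f"http://{proxy_ip}:{proxy_port}/dial?{query}"
```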
Oct 12 19:32:38.720: INFO: ...failed...will try again in next pass
Oct 12 19:32:38.720: INFO: Breadth first check of 100.96.2.141 on host 172.20.47.216...
Oct 12 19:32:38.831: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.2.147:9080/dial?request=hostname&protocol=udp&host=100.96.2.141&port=8081&tries=1'] Namespace:pod-network-test-7425 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:32:38.831: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:32:39.573: INFO: Waiting for responses: map[]
Oct 12 19:32:39.573: INFO: reached 100.96.2.141 after 0/1 tries
Oct 12 19:32:39.573: INFO: Breadth first check of 100.96.1.164 on host 172.20.57.193...
... skipping 245 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  50s   default-scheduler  Successfully assigned pod-network-test-7425/netserver-3 to ip-172-20-61-115.eu-central-1.compute.internal
  Normal  Pulled     49s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    49s   kubelet            Created container webserver
  Normal  Started    49s   kubelet            Started container webserver

Oct 12 19:32:50.060: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.2.147:9080/dial?request=hostname&protocol=udp&host=100.96.1.164&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Oct 12 19:32:50.060: INFO: ...failed...will try again in next pass
Oct 12 19:32:50.060: INFO: Breadth first check of 100.96.3.187 on host 172.20.61.115...
Oct 12 19:32:50.172: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.2.147:9080/dial?request=hostname&protocol=udp&host=100.96.3.187&port=8081&tries=1'] Namespace:pod-network-test-7425 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:32:50.172: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:32:56.011: INFO: Waiting for responses: map[netserver-3:{}]
Oct 12 19:32:58.012: INFO: 
Output of kubectl describe pod pod-network-test-7425/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  60s   default-scheduler  Successfully assigned pod-network-test-7425/netserver-3 to ip-172-20-61-115.eu-central-1.compute.internal
  Normal  Pulled     59s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    59s   kubelet            Created container webserver
  Normal  Started    59s   kubelet            Started container webserver

Oct 12 19:33:00.623: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.2.147:9080/dial?request=hostname&protocol=udp&host=100.96.3.187&port=8081&tries=1'
retrieved map[]
expected map[netserver-3:{}])
Oct 12 19:33:00.623: INFO: ...failed...will try again in next pass
Oct 12 19:33:00.623: INFO: Going to retry 3 out of 4 pods....
Oct 12 19:33:00.623: INFO: Double-checking 1 pod on host 172.20.57.193 which wasn't seen the first time.
Oct 12 19:33:00.623: INFO: Now attempting to probe pod [[[ 100.96.1.164 ]]]
Oct 12 19:33:00.738: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.2.147:9080/dial?request=hostname&protocol=udp&host=100.96.1.164&port=8081&tries=1'] Namespace:pod-network-test-7425 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:33:00.738: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:33:06.497: INFO: Waiting for responses: map[netserver-2:{}]
... skipping 377 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  7m6s  default-scheduler  Successfully assigned pod-network-test-7425/netserver-3 to ip-172-20-61-115.eu-central-1.compute.internal
  Normal  Pulled     7m5s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    7m5s  kubelet            Created container webserver
  Normal  Started    7m5s  kubelet            Started container webserver

Oct 12 19:39:06.168: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.147:9080/dial?request=hostname&protocol=udp&host=100.96.1.164&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Oct 12 19:39:06.168: INFO: ... Done probing pod [[[ 100.96.1.164 ]]]
Oct 12 19:39:06.168: INFO: succeeded at polling 3 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned pod-network-test-7425/netserver-3 to ip-172-20-61-115.eu-central-1.compute.internal
  Normal  Pulled     13m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    13m   kubelet            Created container webserver
  Normal  Started    13m   kubelet            Started container webserver

Oct 12 19:45:12.293: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.147:9080/dial?request=hostname&protocol=udp&host=100.96.3.187&port=8081&tries=1'
retrieved map[]
expected map[netserver-3:{}])
Oct 12 19:45:12.293: INFO: ... Done probing pod [[[ 100.96.3.187 ]]]
Oct 12 19:45:12.293: INFO: succeeded at polling 2 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  19m   default-scheduler  Successfully assigned pod-network-test-7425/netserver-3 to ip-172-20-61-115.eu-central-1.compute.internal
  Normal  Pulled     19m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    19m   kubelet            Created container webserver
  Normal  Started    19m   kubelet            Started container webserver

Oct 12 19:51:17.582: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.147:9080/dial?request=hostname&protocol=udp&host=100.96.4.163&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}])
Oct 12 19:51:17.582: INFO: ... Done probing pod [[[ 100.96.4.163 ]]]
Oct 12 19:51:17.582: INFO: succeeded at polling 1 out of 4 connections
Oct 12 19:51:17.582: INFO: pod polling failure summary:
Oct 12 19:51:17.582: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.147:9080/dial?request=hostname&protocol=udp&host=100.96.1.164&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}]
Oct 12 19:51:17.582: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.147:9080/dial?request=hostname&protocol=udp&host=100.96.3.187&port=8081&tries=1'
retrieved map[]
expected map[netserver-3:{}]
Oct 12 19:51:17.582: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.147:9080/dial?request=hostname&protocol=udp&host=100.96.4.163&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}]
Oct 12 19:51:17.582: FAIL: failed, 3 out of 4 connections failed

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:93 +0x69
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000e6e780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 212 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Oct 12 19:51:17.582: failed, 3 out of 4 connections failed

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:93
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":69,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]"]}
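Editor's note on the UDP dial failures above: the e2e framework curls the test pod's `/dial` endpoint and compares the hostnames that answered against the expected netserver set; `retrieved map[]` means no responses came back. A minimal sketch of that comparison, assuming the agnhost dial endpoint returns JSON of the shape `{"responses": [...]}` (the function name and return shape here are illustrative, not the framework's API):

```python
import json

def check_dial(raw_output: str, expected: set) -> tuple:
    """Parse a /dial response and report which expected hostnames answered.

    Assumes agnhost-style JSON like {"responses": ["netserver-3", ...]};
    an empty or missing list corresponds to the log's `retrieved map[]`.
    """
    try:
        responses = set(json.loads(raw_output).get("responses", []))
    except json.JSONDecodeError:
        responses = set()  # unparseable output counts as no responses
    return responses, expected.issubset(responses)

# A failing probe, as in the log: nothing retrieved, netserver-3 expected.
retrieved, ok = check_dial("{}", {"netserver-3"})
```

In the run above this check never succeeded for three of the four netservers, so the test collected one "did not find expected responses" error per unreachable pod and failed.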
Oct 12 19:51:22.309: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
Oct 12 19:45:55.617: INFO: stderr: ""
Oct 12 19:45:55.617: INFO: stdout: "true"
Oct 12 19:45:55.617: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Oct 12 19:45:56.020: INFO: stderr: ""
Oct 12 19:45:56.020: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Oct 12 19:45:56.020: INFO: validating pod update-demo-nautilus-xwrfc
Oct 12 19:46:26.129: INFO: update-demo-nautilus-xwrfc is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-xwrfc)
Oct 12 19:46:31.130: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Oct 12 19:46:31.576: INFO: stderr: ""
Oct 12 19:46:31.576: INFO: stdout: "update-demo-nautilus-xwrfc update-demo-nautilus-zhbcj "
Oct 12 19:46:31.576: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Oct 12 19:46:31.997: INFO: stderr: ""
Oct 12 19:46:31.997: INFO: stdout: "true"
Oct 12 19:46:31.997: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Oct 12 19:46:32.429: INFO: stderr: ""
Oct 12 19:46:32.429: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Oct 12 19:46:32.429: INFO: validating pod update-demo-nautilus-xwrfc
Oct 12 19:47:02.539: INFO: update-demo-nautilus-xwrfc is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-xwrfc)
Oct 12 19:47:07.539: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Oct 12 19:47:08.059: INFO: stderr: ""
Oct 12 19:47:08.059: INFO: stdout: "update-demo-nautilus-xwrfc update-demo-nautilus-zhbcj "
Oct 12 19:47:08.059: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Oct 12 19:47:08.491: INFO: stderr: ""
Oct 12 19:47:08.491: INFO: stdout: "true"
Oct 12 19:47:08.491: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Oct 12 19:47:08.916: INFO: stderr: ""
Oct 12 19:47:08.916: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Oct 12 19:47:08.916: INFO: validating pod update-demo-nautilus-xwrfc
Oct 12 19:47:39.025: INFO: update-demo-nautilus-xwrfc is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-xwrfc)
Oct 12 19:47:44.025: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Oct 12 19:47:44.562: INFO: stderr: ""
Oct 12 19:47:44.562: INFO: stdout: "update-demo-nautilus-xwrfc update-demo-nautilus-zhbcj "
Oct 12 19:47:44.562: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Oct 12 19:47:44.976: INFO: stderr: ""
Oct 12 19:47:44.976: INFO: stdout: "true"
Oct 12 19:47:44.976: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Oct 12 19:47:45.396: INFO: stderr: ""
Oct 12 19:47:45.396: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Oct 12 19:47:45.396: INFO: validating pod update-demo-nautilus-xwrfc
Oct 12 19:48:15.506: INFO: update-demo-nautilus-xwrfc is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-xwrfc)
Oct 12 19:48:20.506: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Oct 12 19:48:21.049: INFO: stderr: ""
Oct 12 19:48:21.049: INFO: stdout: "update-demo-nautilus-xwrfc update-demo-nautilus-zhbcj "
Oct 12 19:48:21.049: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Oct 12 19:48:21.470: INFO: stderr: ""
Oct 12 19:48:21.470: INFO: stdout: "true"
Oct 12 19:48:21.470: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Oct 12 19:48:21.878: INFO: stderr: ""
Oct 12 19:48:21.879: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Oct 12 19:48:21.879: INFO: validating pod update-demo-nautilus-xwrfc
Oct 12 19:48:51.988: INFO: update-demo-nautilus-xwrfc is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-xwrfc)
Oct 12 19:48:56.989: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Oct 12 19:48:57.438: INFO: stderr: ""
Oct 12 19:48:57.438: INFO: stdout: "update-demo-nautilus-xwrfc update-demo-nautilus-zhbcj "
Oct 12 19:48:57.438: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Oct 12 19:48:57.864: INFO: stderr: ""
Oct 12 19:48:57.864: INFO: stdout: "true"
Oct 12 19:48:57.864: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Oct 12 19:48:58.292: INFO: stderr: ""
Oct 12 19:48:58.292: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Oct 12 19:48:58.292: INFO: validating pod update-demo-nautilus-xwrfc
Oct 12 19:49:28.401: INFO: update-demo-nautilus-xwrfc is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-xwrfc)
Oct 12 19:49:33.404: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Oct 12 19:49:33.845: INFO: stderr: ""
Oct 12 19:49:33.846: INFO: stdout: "update-demo-nautilus-xwrfc update-demo-nautilus-zhbcj "
Oct 12 19:49:33.846: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Oct 12 19:49:34.283: INFO: stderr: ""
Oct 12 19:49:34.283: INFO: stdout: "true"
Oct 12 19:49:34.283: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Oct 12 19:49:34.735: INFO: stderr: ""
Oct 12 19:49:34.735: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Oct 12 19:49:34.735: INFO: validating pod update-demo-nautilus-xwrfc
Oct 12 19:50:04.844: INFO: update-demo-nautilus-xwrfc is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-xwrfc)
Oct 12 19:50:09.848: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Oct 12 19:50:10.385: INFO: stderr: ""
Oct 12 19:50:10.385: INFO: stdout: "update-demo-nautilus-xwrfc update-demo-nautilus-zhbcj "
Oct 12 19:50:10.385: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Oct 12 19:50:10.817: INFO: stderr: ""
Oct 12 19:50:10.817: INFO: stdout: "true"
Oct 12 19:50:10.817: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Oct 12 19:50:11.297: INFO: stderr: ""
Oct 12 19:50:11.297: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Oct 12 19:50:11.297: INFO: validating pod update-demo-nautilus-xwrfc
Oct 12 19:50:41.406: INFO: update-demo-nautilus-xwrfc is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-xwrfc)
Oct 12 19:50:46.406: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Oct 12 19:50:46.953: INFO: stderr: ""
Oct 12 19:50:46.953: INFO: stdout: "update-demo-nautilus-xwrfc update-demo-nautilus-zhbcj "
Oct 12 19:50:46.953: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Oct 12 19:50:47.401: INFO: stderr: ""
Oct 12 19:50:47.401: INFO: stdout: "true"
Oct 12 19:50:47.401: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-449 get pods update-demo-nautilus-xwrfc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Oct 12 19:50:47.842: INFO: stderr: ""
Oct 12 19:50:47.842: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Oct 12 19:50:47.842: INFO: validating pod update-demo-nautilus-xwrfc
Oct 12 19:51:17.952: INFO: update-demo-nautilus-xwrfc is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-xwrfc)
Oct 12 19:51:22.952: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:311 +0x29b
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0020d6a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 216 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Oct 12 19:51:22.952: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:311
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":37,"skipped":292,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}
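Editor's note on the kubectl loop above: the go-template passed to `kubectl get pods -o template` prints `true` only when a container status named `update-demo` exists and carries a `running` state; the test then separately validates the pod via the API, which kept returning "the server is currently unable to handle the request". The template's predicate can be sketched over a pod manifest dict like so (a hedged illustration, not the kubectl implementation):

```python
def update_demo_running(pod: dict) -> bool:
    """Mirror of the go-template check in the log: True only when a
    containerStatus named 'update-demo' has a 'running' state key."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == "update-demo" and "running" in status.get("state", {}):
            return True
    return False

# Example pod status shaped like the kubectl output in the log.
pod = {"status": {"containerStatuses": [
    {"name": "update-demo",
     "state": {"running": {"startedAt": "2021-10-12T19:45:00Z"}}},
]}}
```

Note that in this run the template check itself kept printing `true`; the failure came from the follow-up validator call timing out against an unhealthy API server.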
Oct 12 19:51:29.635: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:245.860 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":354,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}
Oct 12 19:51:43.365: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
Oct 12 19:33:40.056: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:34:10.164: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:34:40.275: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:35:10.383: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:35:40.492: INFO: Unable to read jessie_udp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:36:10.601: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:36:10.601: INFO: Lookups using dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 12 19:36:45.710: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:37:15.819: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:37:45.928: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:38:16.037: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:38:46.147: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:39:16.256: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:39:46.373: INFO: Unable to read jessie_udp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:40:16.485: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:40:16.485: INFO: Lookups using dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 12 19:40:50.712: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:41:20.823: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:41:50.933: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:42:21.044: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:42:51.158: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:43:21.279: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:43:51.389: INFO: Unable to read jessie_udp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:44:21.499: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:44:21.500: INFO: Lookups using dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 12 19:44:55.712: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:45:25.822: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:45:55.934: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:46:26.044: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:46:56.155: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:47:26.265: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:47:56.377: INFO: Unable to read jessie_udp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:48:26.488: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:48:26.488: INFO: Lookups using dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 12 19:48:56.598: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:49:26.709: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:49:56.821: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:50:26.932: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:50:57.043: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:51:27.155: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:51:57.266: INFO: Unable to read jessie_udp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:52:27.378: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1: the server is currently unable to handle the request (get pods dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1)
Oct 12 19:52:27.378: INFO: Lookups using dns-6807/dns-test-d2fc2b1a-85fb-4361-906e-f66e30d179d1 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-6807.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 12 19:52:27.378: FAIL: Unexpected error:
    <*errors.errorString | 0xc00024c250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 191 lines ...
• Failure [1233.721 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct 12 19:52:27.378: Unexpected error:
      <*errors.errorString | 0xc00024c250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":75,"failed":1,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}
Oct 12 19:52:32.489: INFO: Running AfterSuite actions on all nodes
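The repeated "Unable to read ... Lookups using ... failed for" lines above come from the e2e framework polling the DNS probe pod until a fixed timeout, after which it surfaces the generic "timed out waiting for the condition" error. A minimal Python sketch of that poll-and-summarize pattern (function names and shapes are illustrative, not the framework's actual API):

```python
import time

def poll_until(condition, timeout, interval=0.01,
               clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Mirrors the shape of the wait.Poll loop behind this test: on expiry the
    caller sees the generic "timed out waiting for the condition" error.
    Returns None on success, the error string on timeout.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return None
        sleep(interval)
    return "timed out waiting for the condition"

def failed_lookups(results, expected):
    """Return the subset of `expected` record names with no successful read,
    like the "Lookups using ... failed for: [...]" summary lines above."""
    return [name for name in expected if not results.get(name)]
```

Each poll iteration in the real test re-reads every record (wheezy and jessie variants); the failure list only empties once all of them resolve within one pass.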


[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:454
STEP: create the rc
STEP: delete the rc
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1012 19:48:06.497186    5497 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 12 19:53:06.711: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering.
Oct 12 19:53:06.711: INFO: Deleting pod "simpletest.rc-ndznd" in namespace "gc-1293"
Oct 12 19:53:06.822: INFO: Deleting pod "simpletest.rc-nwzvr" in namespace "gc-1293"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 12 19:53:06.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1293" for this suite.
... skipping 2 lines ...
• [SLOW TEST:336.745 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:454
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":-1,"completed":24,"skipped":228,"failed":2,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
Oct 12 19:53:07.164: INFO: Running AfterSuite actions on all nodes
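The garbage-collector test above deletes an RC with `deleteOptions.OrphanDependents` left nil, waits 30 seconds, and asserts the pods survive. A toy Python model of the orphan-vs-cascade decision it exercises (the dict-based "cluster state" here is a stand-in, not a real client):

```python
def delete_rc(pods, owners, rc_name, orphan_dependents=None):
    """Model the GC behavior the e2e test checks: deleting an RC with
    OrphanDependents nil (or True) must leave its pods alive, while
    False cascades the delete. Returns the surviving pod names."""
    if orphan_dependents is False:
        # Cascade: GC removes every pod owned by the deleted RC.
        return [p for p in pods if owners.get(p) != rc_name]
    # nil or True: the owner reference is dropped; pods are orphaned but kept.
    return list(pods)
```

This is why the log then deletes `simpletest.rc-*` pods by hand in AfterEach: the orphaned pods are intentionally still present and need explicit cleanup.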


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 36 lines ...
Oct 12 19:51:15.499: INFO: Running '/tmp/kubectl3463948367/kubectl --server=https://api.e2e-7e1666f8e6-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2576 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://100.70.114.66:80 2>&1 || true; echo; done'
Oct 12 19:52:59.839: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q 
-T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget 
-q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 
1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ 
echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.70.114.66:80\n+ echo\n"
Oct 12 19:52:59.839: INFO: stdout: "wget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nservice-proxy-toggled-btss4\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nservice-proxy-toggled-btss4\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nservice-proxy-toggled-btss4\nservice-proxy-toggled-btss4\nwget: 
download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nservice-proxy-toggled-btss4\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nservice-proxy-toggled-btss4\nservice-proxy-toggled-btss4\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nservice-proxy-toggled-btss4\nservice-proxy-toggled-btss4\nservice-proxy-toggled-btss4\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed 
out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\nservice-proxy-toggled-btss4\nwget: download timed out\n\nservice-proxy-toggled-btss4\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-btss4\n"
Oct 12 19:52:59.839: INFO: Unable to reach the following endpoints of service 100.70.114.66: map[service-proxy-toggled-26hxz:{} service-proxy-toggled-r5cwr:{}]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2576
STEP: Deleting pod verify-service-up-exec-pod-zkfpt in namespace services-2576
Oct 12 19:53:05.073: FAIL: Unexpected error:
    <*errors.errorString | 0xc002d8a070>: {
        s: "service verification failed for: 100.70.114.66\nexpected [service-proxy-toggled-26hxz service-proxy-toggled-btss4 service-proxy-toggled-r5cwr]\nreceived [service-proxy-toggled-btss4 wget: download timed out]",
    }
    service verification failed for: 100.70.114.66
    expected [service-proxy-toggled-26hxz service-proxy-toggled-btss4 service-proxy-toggled-r5cwr]
    received [service-proxy-toggled-btss4 wget: download timed out]
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.28()
... skipping 208 lines ...
• Failure [344.203 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865

  Oct 12 19:53:05.073: Unexpected error:
      <*errors.errorString | 0xc002d8a070>: {
          s: "service verification failed for: 100.70.114.66\nexpected [service-proxy-toggled-26hxz service-proxy-toggled-btss4 service-proxy-toggled-r5cwr]\nreceived [service-proxy-toggled-btss4 wget: download timed out]",
      }
      service verification failed for: 100.70.114.66
      expected [service-proxy-toggled-26hxz service-proxy-toggled-btss4 service-proxy-toggled-r5cwr]
      received [service-proxy-toggled-btss4 wget: download timed out]
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1889
------------------------------
{"msg":"FAILED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":11,"skipped":77,"failed":4,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
Oct 12 19:53:09.623: INFO: Running AfterSuite actions on all nodes
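The service-proxy-name failure above comes from the `verify-service-up` loop: 150 `wget` attempts against the ClusterIP, whose stdout is a mix of backend pod names and "wget: download timed out" lines. The check passes only if every expected backend answered at least once. A small Python sketch of that pass/fail decision (helper name is illustrative):

```python
def unreached_endpoints(stdout_lines, expected_pods):
    """Reproduce the decision behind "service verification failed": the
    wget loop's stdout must contain every expected backend pod name at
    least once; any other line (timeouts, blanks) is ignored. Returns
    the backends that never answered, like the "Unable to reach the
    following endpoints" line in the log."""
    seen = {line for line in stdout_lines if line in expected_pods}
    return sorted(set(expected_pods) - seen)
```

Applied to this run: only `service-proxy-toggled-btss4` ever responded, so the other two replicas are reported unreachable and the test fails.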


[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 5 lines ...
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1012 19:48:40.334957    5429 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 12 19:53:40.554: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering.
Oct 12 19:53:40.554: INFO: Deleting pod "simpletest.rc-4z5dj" in namespace "gc-7835"
Oct 12 19:53:40.671: INFO: Deleting pod "simpletest.rc-5v6rb" in namespace "gc-7835"
Oct 12 19:53:40.787: INFO: Deleting pod "simpletest.rc-68zsh" in namespace "gc-7835"
Oct 12 19:53:40.901: INFO: Deleting pod "simpletest.rc-8xnjt" in namespace "gc-7835"
Oct 12 19:53:41.016: INFO: Deleting pod "simpletest.rc-dk7qs" in namespace "gc-7835"
Oct 12 19:53:41.131: INFO: Deleting pod "simpletest.rc-f5h85" in namespace "gc-7835"
... skipping 10 lines ...
• [SLOW TEST:342.923 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":22,"skipped":137,"failed":5,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]"]}
Oct 12 19:53:41.935: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 47 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":19,"skipped":154,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Simple pod should handle in-cluster config","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
Oct 12 19:57:37.559: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 276 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  37s   default-scheduler  Successfully assigned pod-network-test-9450/netserver-3 to ip-172-20-61-115.eu-central-1.compute.internal
  Normal  Pulled     36s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    36s   kubelet            Created container webserver
  Normal  Started    36s   kubelet            Started container webserver

Oct 12 19:39:17.878: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.4.231:9080/dial?request=hostname&protocol=http&host=100.96.2.220&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Oct 12 19:39:17.878: INFO: ...failed...will try again in next pass
Oct 12 19:39:17.879: INFO: Breadth first check of 100.96.1.12 on host 172.20.57.193...
Oct 12 19:39:17.988: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.4.231:9080/dial?request=hostname&protocol=http&host=100.96.1.12&port=8080&tries=1'] Namespace:pod-network-test-9450 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:39:17.988: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:39:23.828: INFO: Waiting for responses: map[netserver-2:{}]
Oct 12 19:39:25.829: INFO: 
Output of kubectl describe pod pod-network-test-9450/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  48s   default-scheduler  Successfully assigned pod-network-test-9450/netserver-3 to ip-172-20-61-115.eu-central-1.compute.internal
  Normal  Pulled     47s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    47s   kubelet            Created container webserver
  Normal  Started    47s   kubelet            Started container webserver

Oct 12 19:39:28.392: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.4.231:9080/dial?request=hostname&protocol=http&host=100.96.1.12&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Oct 12 19:39:28.392: INFO: ...failed...will try again in next pass
Oct 12 19:39:28.392: INFO: Breadth first check of 100.96.3.248 on host 172.20.61.115...
Oct 12 19:39:28.501: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.4.231:9080/dial?request=hostname&protocol=http&host=100.96.3.248&port=8080&tries=1'] Namespace:pod-network-test-9450 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:39:28.501: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:39:34.251: INFO: Waiting for responses: map[netserver-3:{}]
Oct 12 19:39:36.251: INFO: 
Output of kubectl describe pod pod-network-test-9450/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  58s   default-scheduler  Successfully assigned pod-network-test-9450/netserver-3 to ip-172-20-61-115.eu-central-1.compute.internal
  Normal  Pulled     57s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    57s   kubelet            Created container webserver
  Normal  Started    57s   kubelet            Started container webserver

Oct 12 19:39:38.882: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.4.231:9080/dial?request=hostname&protocol=http&host=100.96.3.248&port=8080&tries=1'
retrieved map[]
expected map[netserver-3:{}])
Oct 12 19:39:38.882: INFO: ...failed...will try again in next pass
Oct 12 19:39:38.882: INFO: Going to retry 3 out of 4 pods....
Oct 12 19:39:38.882: INFO: Doublechecking 1 pod on host 172.20.47.216 which wasn't seen the first time.
Oct 12 19:39:38.882: INFO: Now attempting to probe pod [[[ 100.96.2.220 ]]]
Oct 12 19:39:38.991: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.4.231:9080/dial?request=hostname&protocol=http&host=100.96.2.220&port=8080&tries=1'] Namespace:pod-network-test-9450 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 12 19:39:38.991: INFO: >>> kubeConfig: /root/.kube/config
Oct 12 19:39:44.729: INFO: Waiting for responses: map[netserver-1:{}]
... skipping 377 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  7m4s  default-scheduler  Successfully assigned pod-network-test-9450/netserver-3 to ip-172-20-61-115.eu-central-1.compute.internal
  Normal  Pulled     7m3s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    7m3s  kubelet            Created container webserver
  Normal  Started    7m3s  kubelet            Started container webserver

Oct 12 19:45:44.019: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.4.231:9080/dial?request=hostname&protocol=http&host=100.96.2.220&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Oct 12 19:45:44.019: INFO: ... Done probing pod [[[ 100.96.2.220 ]]]
Oct 12 19:45:44.019: INFO: succeeded at polling 3 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned pod-network-test-9450/netserver-3 to ip-172-20-61-115.eu-central-1.compute.internal
  Normal  Pulled     13m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    13m   kubelet            Created container webserver
  Normal  Started    13m   kubelet            Started container webserver

Oct 12 19:51:49.602: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.4.231:9080/dial?request=hostname&protocol=http&host=100.96.1.12&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Oct 12 19:51:49.602: INFO: ... Done probing pod [[[ 100.96.1.12 ]]]
Oct 12 19:51:49.602: INFO: succeeded at polling 2 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  19m   default-scheduler  Successfully assigned pod-network-test-9450/netserver-3 to ip-172-20-61-115.eu-central-1.compute.internal
  Normal  Pulled     19m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    19m   kubelet            Created container webserver
  Normal  Started    19m   kubelet            Started container webserver

Oct 12 19:57:58.382: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.4.231:9080/dial?request=hostname&protocol=http&host=100.96.3.248&port=8080&tries=1'
retrieved map[]
expected map[netserver-3:{}])
Oct 12 19:57:58.382: INFO: ... Done probing pod [[[ 100.96.3.248 ]]]
Oct 12 19:57:58.382: INFO: succeeded at polling 1 out of 4 connections
Oct 12 19:57:58.382: INFO: pod polling failure summary:
Oct 12 19:57:58.382: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.4.231:9080/dial?request=hostname&protocol=http&host=100.96.2.220&port=8080&tries=1'
retrieved map[]
expected map[netserver-1:{}]
Oct 12 19:57:58.382: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.4.231:9080/dial?request=hostname&protocol=http&host=100.96.1.12&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}]
Oct 12 19:57:58.382: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.4.231:9080/dial?request=hostname&protocol=http&host=100.96.3.248&port=8080&tries=1'
retrieved map[]
expected map[netserver-3:{}]
Oct 12 19:57:58.383: FAIL: failed,  3 out of 4 connections failed

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0032b4c00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 150 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Oct 12 19:57:58.383: failed,  3 out of 4 connections failed

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":164,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
Oct 12 19:58:02.896: INFO: Running AfterSuite actions on all nodes


[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 111 lines ...
Oct 12 19:46:59.137: INFO: Unable to read wheezy_udp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:47:29.254: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:47:59.363: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:48:29.472: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:48:59.582: INFO: Unable to read jessie_udp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:49:29.693: INFO: Unable to read jessie_tcp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:49:29.693: INFO: Lookups using dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 12 19:50:04.806: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:50:34.916: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:51:05.025: INFO: Unable to read wheezy_udp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:51:35.136: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:52:05.245: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:52:35.356: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:53:05.466: INFO: Unable to read jessie_udp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:53:35.575: INFO: Unable to read jessie_tcp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:53:35.575: INFO: Lookups using dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 12 19:54:09.804: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:54:39.914: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:55:10.024: INFO: Unable to read wheezy_udp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:55:40.133: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:56:10.242: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:56:40.352: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:57:10.462: INFO: Unable to read jessie_udp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:57:40.571: INFO: Unable to read jessie_tcp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:57:40.572: INFO: Lookups using dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 12 19:58:14.803: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:58:44.913: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:59:15.022: INFO: Unable to read wheezy_udp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 19:59:45.131: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 20:00:15.240: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 20:00:45.351: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 20:01:15.461: INFO: Unable to read jessie_udp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 20:01:45.570: INFO: Unable to read jessie_tcp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 20:01:45.570: INFO: Lookups using dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct 12 20:02:15.681: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 20:02:45.791: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e)
Oct 12 20:03:15.901: INFO: Unable to read wheezy_udp@PodARecord from pod dns-543/dns-test-ed33f234-fb68-44cc-a016-dbbcb07d2c5e: the server is currently unable to handle the request (get pods dns-tes