Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-05 19:15
Elapsed: 59m53s
Revision: master

No Test Failures!
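Note: "No Test Failures!" alongside a FAILURE result means the job failed outside of individual test cases — typically during cluster bring-up, teardown, or by hitting the job's time budget — so the error excerpt below is from the harness log rather than from a failing spec.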


Error lines from build-log.txt

... skipping 135 lines ...
I1005 19:16:12.742374    4682 up.go:43] Cleaning up any leaked resources from previous cluster
I1005 19:16:12.742403    4682 dumplogs.go:40] /logs/artifacts/80a0f8f5-2610-11ec-ad81-1a4cc1c5f775/kops toolbox dump --name e2e-8d71322f12-62691.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user core
I1005 19:16:12.757407    4702 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1005 19:16:12.757505    4702 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-8d71322f12-62691.test-cncf-aws.k8s.io" not found
W1005 19:16:13.257743    4682 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I1005 19:16:13.257815    4682 down.go:48] /logs/artifacts/80a0f8f5-2610-11ec-ad81-1a4cc1c5f775/kops delete cluster --name e2e-8d71322f12-62691.test-cncf-aws.k8s.io --yes
I1005 19:16:13.271523    4713 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1005 19:16:13.271609    4713 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-8d71322f12-62691.test-cncf-aws.k8s.io" not found
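Note: the two "not found" errors above come from the pre-test cleanup pass (the dumplogs/delete of any cluster leaked by a previous run, per up.go:43). When no prior cluster exists they are expected, and the harness proceeds to create a fresh one.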
I1005 19:16:13.748973    4682 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/10/05 19:16:13 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I1005 19:16:13.756249    4682 http.go:37] curl https://ip.jsb.workers.dev
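Note: the harness determines its own external IP so it can scope API access: it first queries the GCE instance metadata server (the job runs in a GCE-hosted Prow pod, and a 404 on the access-configs path just means the pod has no external access config) and then falls back to a public IP-echo endpoint. The result becomes the --admin-access CIDR in the create-cluster call below. A minimal sketch of the same fallback, assuming only curl:

    # Try GCE metadata first (the Metadata-Flavor header is required by GCE);
    # fall back to the IP-echo service seen in this log.
    ip=$(curl -sf -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip") \
      || ip=$(curl -sf https://ip.jsb.workers.dev)
    echo "admin-access CIDR: ${ip}/32"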
I1005 19:16:13.862510    4682 up.go:144] /logs/artifacts/80a0f8f5-2610-11ec-ad81-1a4cc1c5f775/kops create cluster --name e2e-8d71322f12-62691.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.5 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=075585003325/Flatcar-stable-2905.2.5-hvm --channel=alpha --networking=kopeio --container-runtime=containerd --admin-access 34.121.80.31/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ca-central-1a --master-size c5.large
I1005 19:16:13.878215    4724 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1005 19:16:13.878647    4724 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I1005 19:16:13.924249    4724 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I1005 19:16:14.665270    4724 new_cluster.go:1011]  Cloud Provider ID = aws
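Note: for readability, the single-line create-cluster invocation above regroups as follows (same flags, nothing added; the t3.medium node size is the kops default here, as the instance-group tables later in the log show):

    kops create cluster \
      --name e2e-8d71322f12-62691.test-cncf-aws.k8s.io \
      --cloud aws --zones ca-central-1a \
      --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.5 \
      --image=075585003325/Flatcar-stable-2905.2.5-hvm \
      --channel=alpha --networking=kopeio --container-runtime=containerd \
      --master-count 1 --master-size c5.large --master-volume-size 48 \
      --node-count 4 --node-volume-size 48 \
      --ssh-public-key /etc/aws-ssh/aws-ssh-public \
      --override cluster.spec.nodePortAccess=0.0.0.0/0 \
      --admin-access 34.121.80.31/32 \
      --yes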
... skipping 42 lines ...

I1005 19:16:37.166501    4682 up.go:181] /logs/artifacts/80a0f8f5-2610-11ec-ad81-1a4cc1c5f775/kops validate cluster --name e2e-8d71322f12-62691.test-cncf-aws.k8s.io --count 10 --wait 20m0s
I1005 19:16:37.181361    4744 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I1005 19:16:37.181449    4744 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-8d71322f12-62691.test-cncf-aws.k8s.io

W1005 19:16:38.185390    4744 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W1005 19:16:48.311411    4744 validate_cluster.go:221] (will retry): cluster not yet healthy
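Note: this is the normal kops bootstrap sequence, not yet a real failure. kops seeds api.<cluster-name> with the documentation placeholder 203.0.113.123, and the validate loop (invoked above with --count 10 --wait 20m0s, i.e. ten consecutive successful validations required within twenty minutes) retries until dns-controller on the first master rewrites the record. Hand-run checks while waiting might look like this (a sketch; the deployment name follows the standard kops add-on layout):

    # Watch the API record flip from the placeholder to the master's real IP.
    dig +short api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io   # 203.0.113.123 until updated
    # Once the API answers, inspect the component the message points at:
    kubectl -n kube-system logs deployment/dns-controller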
... skipping 289 lines: the same "dns apiserver Validation Failed" block repeated at ~10-second intervals (retry warnings from 19:16:58 through 19:19:59, plus one further "no such host" DNS lookup failure at 19:17:58) ...
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 8 lines ...
Machine	i-0248055e1ce7d3b1c				machine "i-0248055e1ce7d3b1c" has not yet joined cluster
Machine	i-08433ac31a1487a92				machine "i-08433ac31a1487a92" has not yet joined cluster
Pod	kube-system/coredns-5dc785954d-kl5fm		system-cluster-critical pod "coredns-5dc785954d-kl5fm" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-82d4k	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-82d4k" is pending
Pod	kube-system/kopeio-networking-agent-tfn8s	system-node-critical pod "kopeio-networking-agent-tfn8s" is pending

Validation Failed
W1005 19:20:10.310299    4744 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 6 lines ...
VALIDATION ERRORS
KIND	NAME						MESSAGE
Machine	i-02468cd98e1e52b62				machine "i-02468cd98e1e52b62" has not yet joined cluster
Machine	i-0248055e1ce7d3b1c				machine "i-0248055e1ce7d3b1c" has not yet joined cluster
Pod	kube-system/kopeio-networking-agent-pln7h	system-node-critical pod "kopeio-networking-agent-pln7h" is pending

Validation Failed
W1005 19:20:21.420509    4744 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME									MESSAGE
Node	ip-172-20-41-232.ca-central-1.compute.internal				node "ip-172-20-41-232.ca-central-1.compute.internal" of role "node" is not ready
Pod	kube-system/kube-proxy-ip-172-20-41-232.ca-central-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-41-232.ca-central-1.compute.internal" is pending

Validation Failed
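Note: by this point the failure mode has moved past DNS — the API is reachable and machines are registering; one node is still NotReady with its kube-proxy pod pending, which normally resolves on its own as the node finishes bootstrapping. Typical first checks if it did not (a hypothetical hand-run, not from this job):

    kubectl get nodes -o wide
    kubectl describe node ip-172-20-41-232.ca-central-1.compute.internal   # Conditions and Events
    kubectl -n kube-system get pods \
      --field-selector spec.nodeName=ip-172-20-41-232.ca-central-1.compute.internal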
W1005 19:20:32.314062    4744 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 551 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
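Note: blocks ending in messages like "Only supported for providers [gce gke] (not aws)" or "Driver local doesn't support DynamicPV -- skipping" are Ginkgo skips, not failures. The storage suite generates every driver × test-pattern combination, and each combination's BeforeEach bails out when the running provider, node OS distro, or driver capability doesn't match — which is why in-tree gcepd, gluster, and local dynamic-PV cases appear in an AWS job only as skips.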
... skipping 123 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 154 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:22:49.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-9924" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":1,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:22:49.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-6466" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:22:49.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8936" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 145 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:22:50.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-6239" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:22:50.895: INFO: Only supported for providers [openstack] (not aws)
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:22:52.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-993" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":2,"skipped":1,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:22:52.573: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:22:54.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-5337" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":2,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:22:54.088: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 38 lines ...
• [SLOW TEST:8.707 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, absolute => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:267
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
W1005 19:22:49.434158    5332 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct  5 19:22:49.434: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct  5 19:22:49.529: INFO: Waiting up to 5m0s for pod "security-context-787d36fc-27a0-4e44-a0a8-ff14da1087a0" in namespace "security-context-7837" to be "Succeeded or Failed"
Oct  5 19:22:49.565: INFO: Pod "security-context-787d36fc-27a0-4e44-a0a8-ff14da1087a0": Phase="Pending", Reason="", readiness=false. Elapsed: 35.137411ms
Oct  5 19:22:51.595: INFO: Pod "security-context-787d36fc-27a0-4e44-a0a8-ff14da1087a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065781255s
Oct  5 19:22:53.664: INFO: Pod "security-context-787d36fc-27a0-4e44-a0a8-ff14da1087a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13431813s
Oct  5 19:22:55.697: INFO: Pod "security-context-787d36fc-27a0-4e44-a0a8-ff14da1087a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167520181s
Oct  5 19:22:57.728: INFO: Pod "security-context-787d36fc-27a0-4e44-a0a8-ff14da1087a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.198993745s
STEP: Saw pod success
Oct  5 19:22:57.728: INFO: Pod "security-context-787d36fc-27a0-4e44-a0a8-ff14da1087a0" satisfied condition "Succeeded or Failed"
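Note: the "Elapsed" lines above are the e2e framework polling the pod's phase roughly every two seconds until it reaches Succeeded or Failed. Roughly the same wait expressed as a shell loop, using this test's namespace and pod name (a sketch, not the framework's actual code):

    until phase=$(kubectl -n security-context-7837 get pod \
            security-context-787d36fc-27a0-4e44-a0a8-ff14da1087a0 \
            -o jsonpath='{.status.phase}'); \
          [ "$phase" = "Succeeded" ] || [ "$phase" = "Failed" ]; do
      sleep 2
    done
    echo "pod finished with phase: $phase"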
Oct  5 19:22:57.760: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod security-context-787d36fc-27a0-4e44-a0a8-ff14da1087a0 container test-container: <nil>
STEP: delete the pod
Oct  5 19:22:58.175: INFO: Waiting for pod security-context-787d36fc-27a0-4e44-a0a8-ff14da1087a0 to disappear
Oct  5 19:22:58.205: INFO: Pod security-context-787d36fc-27a0-4e44-a0a8-ff14da1087a0 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 15 lines ...
W1005 19:22:50.014838    5305 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct  5 19:22:50.014: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct  5 19:22:50.110: INFO: Waiting up to 5m0s for pod "pod-fed3b56b-e8d2-4aa7-9e98-456f0b43c58e" in namespace "emptydir-8009" to be "Succeeded or Failed"
Oct  5 19:22:50.142: INFO: Pod "pod-fed3b56b-e8d2-4aa7-9e98-456f0b43c58e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.041574ms
Oct  5 19:22:52.173: INFO: Pod "pod-fed3b56b-e8d2-4aa7-9e98-456f0b43c58e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062398266s
Oct  5 19:22:54.204: INFO: Pod "pod-fed3b56b-e8d2-4aa7-9e98-456f0b43c58e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093395118s
Oct  5 19:22:56.236: INFO: Pod "pod-fed3b56b-e8d2-4aa7-9e98-456f0b43c58e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125664243s
Oct  5 19:22:58.269: INFO: Pod "pod-fed3b56b-e8d2-4aa7-9e98-456f0b43c58e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.158131809s
STEP: Saw pod success
Oct  5 19:22:58.269: INFO: Pod "pod-fed3b56b-e8d2-4aa7-9e98-456f0b43c58e" satisfied condition "Succeeded or Failed"
Oct  5 19:22:58.299: INFO: Trying to get logs from node ip-172-20-41-232.ca-central-1.compute.internal pod pod-fed3b56b-e8d2-4aa7-9e98-456f0b43c58e container test-container: <nil>
STEP: delete the pod
Oct  5 19:22:58.367: INFO: Waiting for pod pod-fed3b56b-e8d2-4aa7-9e98-456f0b43c58e to disappear
Oct  5 19:22:58.398: INFO: Pod pod-fed3b56b-e8d2-4aa7-9e98-456f0b43c58e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.169 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:9.333 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should ensure a single API token exists
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:22:58.666: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 53 lines ...
• [SLOW TEST:10.818 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:00.135: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
• [SLOW TEST:11.381 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:00.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8181" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":2,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:00.915: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
Oct  5 19:22:51.517: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-06c97219-2925-4970-917a-d08fe7022998
STEP: Creating a pod to test consume configMaps
Oct  5 19:22:51.642: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3f414449-8994-4ec1-a6db-8e88f9cf6b6d" in namespace "projected-3546" to be "Succeeded or Failed"
Oct  5 19:22:51.673: INFO: Pod "pod-projected-configmaps-3f414449-8994-4ec1-a6db-8e88f9cf6b6d": Phase="Pending", Reason="", readiness=false. Elapsed: 31.874265ms
Oct  5 19:22:53.720: INFO: Pod "pod-projected-configmaps-3f414449-8994-4ec1-a6db-8e88f9cf6b6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07876659s
Oct  5 19:22:55.761: INFO: Pod "pod-projected-configmaps-3f414449-8994-4ec1-a6db-8e88f9cf6b6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119466598s
Oct  5 19:22:57.799: INFO: Pod "pod-projected-configmaps-3f414449-8994-4ec1-a6db-8e88f9cf6b6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157555091s
Oct  5 19:22:59.831: INFO: Pod "pod-projected-configmaps-3f414449-8994-4ec1-a6db-8e88f9cf6b6d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.189120856s
Oct  5 19:23:01.864: INFO: Pod "pod-projected-configmaps-3f414449-8994-4ec1-a6db-8e88f9cf6b6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.221907616s
STEP: Saw pod success
Oct  5 19:23:01.864: INFO: Pod "pod-projected-configmaps-3f414449-8994-4ec1-a6db-8e88f9cf6b6d" satisfied condition "Succeeded or Failed"
Oct  5 19:23:01.897: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod pod-projected-configmaps-3f414449-8994-4ec1-a6db-8e88f9cf6b6d container agnhost-container: <nil>
STEP: delete the pod
Oct  5 19:23:02.226: INFO: Waiting for pod pod-projected-configmaps-3f414449-8994-4ec1-a6db-8e88f9cf6b6d to disappear
Oct  5 19:23:02.257: INFO: Pod pod-projected-configmaps-3f414449-8994-4ec1-a6db-8e88f9cf6b6d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.939 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":15,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Oct  5 19:22:51.665: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Oct  5 19:22:51.757: INFO: Waiting up to 5m0s for pod "busybox-user-0-44ae4870-c9be-489f-af9b-c42f92d52cf2" in namespace "security-context-test-319" to be "Succeeded or Failed"
Oct  5 19:22:51.788: INFO: Pod "busybox-user-0-44ae4870-c9be-489f-af9b-c42f92d52cf2": Phase="Pending", Reason="", readiness=false. Elapsed: 30.256853ms
Oct  5 19:22:53.819: INFO: Pod "busybox-user-0-44ae4870-c9be-489f-af9b-c42f92d52cf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061284215s
Oct  5 19:22:55.849: INFO: Pod "busybox-user-0-44ae4870-c9be-489f-af9b-c42f92d52cf2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0920225s
Oct  5 19:22:57.907: INFO: Pod "busybox-user-0-44ae4870-c9be-489f-af9b-c42f92d52cf2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150070001s
Oct  5 19:22:59.939: INFO: Pod "busybox-user-0-44ae4870-c9be-489f-af9b-c42f92d52cf2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181395174s
Oct  5 19:23:01.971: INFO: Pod "busybox-user-0-44ae4870-c9be-489f-af9b-c42f92d52cf2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.213331388s
Oct  5 19:23:04.004: INFO: Pod "busybox-user-0-44ae4870-c9be-489f-af9b-c42f92d52cf2": Phase="Running", Reason="", readiness=true. Elapsed: 12.246918653s
Oct  5 19:23:06.036: INFO: Pod "busybox-user-0-44ae4870-c9be-489f-af9b-c42f92d52cf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.278347537s
Oct  5 19:23:06.036: INFO: Pod "busybox-user-0-44ae4870-c9be-489f-af9b-c42f92d52cf2" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:06.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-319" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
    should run the container with uid 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":19,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:06.163: INFO: Only supported for providers [gce gke] (not aws)
... skipping 107 lines ...
• [SLOW TEST:19.606 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:08.954: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 263 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-5fbfe2ce-a2f8-4385-bebc-459d65d6088d
STEP: Creating a pod to test consume secrets
Oct  5 19:23:01.141: INFO: Waiting up to 5m0s for pod "pod-secrets-9c836e35-9637-419b-83ff-8a698a1c718f" in namespace "secrets-2402" to be "Succeeded or Failed"
Oct  5 19:23:01.172: INFO: Pod "pod-secrets-9c836e35-9637-419b-83ff-8a698a1c718f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.519096ms
Oct  5 19:23:03.206: INFO: Pod "pod-secrets-9c836e35-9637-419b-83ff-8a698a1c718f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065109055s
Oct  5 19:23:05.237: INFO: Pod "pod-secrets-9c836e35-9637-419b-83ff-8a698a1c718f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09613002s
Oct  5 19:23:07.269: INFO: Pod "pod-secrets-9c836e35-9637-419b-83ff-8a698a1c718f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128095869s
Oct  5 19:23:09.305: INFO: Pod "pod-secrets-9c836e35-9637-419b-83ff-8a698a1c718f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.16384767s
Oct  5 19:23:11.336: INFO: Pod "pod-secrets-9c836e35-9637-419b-83ff-8a698a1c718f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.195174334s
STEP: Saw pod success
Oct  5 19:23:11.336: INFO: Pod "pod-secrets-9c836e35-9637-419b-83ff-8a698a1c718f" satisfied condition "Succeeded or Failed"
Oct  5 19:23:11.368: INFO: Trying to get logs from node ip-172-20-46-201.ca-central-1.compute.internal pod pod-secrets-9c836e35-9637-419b-83ff-8a698a1c718f container secret-volume-test: <nil>
STEP: delete the pod
Oct  5 19:23:11.703: INFO: Waiting for pod pod-secrets-9c836e35-9637-419b-83ff-8a698a1c718f to disappear
Oct  5 19:23:11.735: INFO: Pod pod-secrets-9c836e35-9637-419b-83ff-8a698a1c718f no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.877 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:23:11.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
STEP: Destroying namespace "services-5021" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":4,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:12.071: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
      Driver csi-hostpath doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:22:50.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 23 lines ...
• [SLOW TEST:12.565 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:13.212: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 91 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-bdd81422-cb41-4de0-a419-8cf3ae9c1484
STEP: Creating a pod to test consume configMaps
Oct  5 19:23:09.202: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6073a77a-4486-4647-a853-5fed73c4cbc0" in namespace "projected-8174" to be "Succeeded or Failed"
Oct  5 19:23:09.233: INFO: Pod "pod-projected-configmaps-6073a77a-4486-4647-a853-5fed73c4cbc0": Phase="Pending", Reason="", readiness=false. Elapsed: 30.55981ms
Oct  5 19:23:11.264: INFO: Pod "pod-projected-configmaps-6073a77a-4486-4647-a853-5fed73c4cbc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061864515s
Oct  5 19:23:13.337: INFO: Pod "pod-projected-configmaps-6073a77a-4486-4647-a853-5fed73c4cbc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.134223012s
STEP: Saw pod success
Oct  5 19:23:13.337: INFO: Pod "pod-projected-configmaps-6073a77a-4486-4647-a853-5fed73c4cbc0" satisfied condition "Succeeded or Failed"
Oct  5 19:23:13.387: INFO: Trying to get logs from node ip-172-20-41-232.ca-central-1.compute.internal pod pod-projected-configmaps-6073a77a-4486-4647-a853-5fed73c4cbc0 container agnhost-container: <nil>
STEP: delete the pod
Oct  5 19:23:13.515: INFO: Waiting for pod pod-projected-configmaps-6073a77a-4486-4647-a853-5fed73c4cbc0 to disappear
Oct  5 19:23:13.545: INFO: Pod pod-projected-configmaps-6073a77a-4486-4647-a853-5fed73c4cbc0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:13.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8174" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Oct  5 19:23:03.374: INFO: PersistentVolumeClaim pvc-k8r76 found but phase is Pending instead of Bound.
Oct  5 19:23:05.406: INFO: PersistentVolumeClaim pvc-k8r76 found and phase=Bound (8.164517125s)
Oct  5 19:23:05.406: INFO: Waiting up to 3m0s for PersistentVolume local-x9rwj to have phase Bound
Oct  5 19:23:05.436: INFO: PersistentVolume local-x9rwj found and phase=Bound (30.124897ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-24zk
STEP: Creating a pod to test subpath
Oct  5 19:23:05.529: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-24zk" in namespace "provisioning-9320" to be "Succeeded or Failed"
Oct  5 19:23:05.559: INFO: Pod "pod-subpath-test-preprovisionedpv-24zk": Phase="Pending", Reason="", readiness=false. Elapsed: 30.154175ms
Oct  5 19:23:07.590: INFO: Pod "pod-subpath-test-preprovisionedpv-24zk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061717807s
Oct  5 19:23:09.622: INFO: Pod "pod-subpath-test-preprovisionedpv-24zk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093216376s
Oct  5 19:23:11.653: INFO: Pod "pod-subpath-test-preprovisionedpv-24zk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123995777s
Oct  5 19:23:13.684: INFO: Pod "pod-subpath-test-preprovisionedpv-24zk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.155050405s
STEP: Saw pod success
Oct  5 19:23:13.684: INFO: Pod "pod-subpath-test-preprovisionedpv-24zk" satisfied condition "Succeeded or Failed"
Oct  5 19:23:13.714: INFO: Trying to get logs from node ip-172-20-46-201.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-24zk container test-container-volume-preprovisionedpv-24zk: <nil>
STEP: delete the pod
Oct  5 19:23:13.799: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-24zk to disappear
Oct  5 19:23:13.833: INFO: Pod pod-subpath-test-preprovisionedpv-24zk no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-24zk
Oct  5 19:23:13.833: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-24zk" in namespace "provisioning-9320"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:14.358: INFO: Only supported for providers [gce gke] (not aws)
... skipping 101 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:14.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9566" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:14.872: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 69 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec through kubectl proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:470
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":2,"skipped":4,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  5 19:23:13.608: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a1f30ef-50a7-4b6d-a596-6d02a020d1ed" in namespace "downward-api-1321" to be "Succeeded or Failed"
Oct  5 19:23:13.639: INFO: Pod "downwardapi-volume-9a1f30ef-50a7-4b6d-a596-6d02a020d1ed": Phase="Pending", Reason="", readiness=false. Elapsed: 30.641112ms
Oct  5 19:23:15.670: INFO: Pod "downwardapi-volume-9a1f30ef-50a7-4b6d-a596-6d02a020d1ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06157373s
Oct  5 19:23:17.702: INFO: Pod "downwardapi-volume-9a1f30ef-50a7-4b6d-a596-6d02a020d1ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093631126s
STEP: Saw pod success
Oct  5 19:23:17.702: INFO: Pod "downwardapi-volume-9a1f30ef-50a7-4b6d-a596-6d02a020d1ed" satisfied condition "Succeeded or Failed"
Oct  5 19:23:17.732: INFO: Trying to get logs from node ip-172-20-41-232.ca-central-1.compute.internal pod downwardapi-volume-9a1f30ef-50a7-4b6d-a596-6d02a020d1ed container client-container: <nil>
STEP: delete the pod
Oct  5 19:23:17.803: INFO: Waiting for pod downwardapi-volume-9a1f30ef-50a7-4b6d-a596-6d02a020d1ed to disappear
Oct  5 19:23:17.833: INFO: Pod downwardapi-volume-9a1f30ef-50a7-4b6d-a596-6d02a020d1ed no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 24 lines ...
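The Downward API test above projects the container's own memory limit into a file via a resourceFieldRef. A minimal sketch of such a pod, assuming an agnhost-style image and a 64Mi limit (both placeholders):

package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIMemoryLimitPod exposes limits.memory of client-container
// at /etc/podinfo/memory_limit inside the same container.
func downwardAPIMemoryLimitPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					DownwardAPI: &v1.DownwardAPIVolumeSource{
						Items: []v1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &v1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "client-container",
				Image:   "registry.k8s.io/e2e-test-images/agnhost:2.32", // assumed image
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{v1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}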
Oct  5 19:23:01.566: INFO: PersistentVolumeClaim pvc-dfbmb found but phase is Pending instead of Bound.
Oct  5 19:23:03.597: INFO: PersistentVolumeClaim pvc-dfbmb found and phase=Bound (4.092927534s)
Oct  5 19:23:03.597: INFO: Waiting up to 3m0s for PersistentVolume local-fkv2g to have phase Bound
Oct  5 19:23:03.629: INFO: PersistentVolume local-fkv2g found and phase=Bound (31.829076ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9cbt
STEP: Creating a pod to test subpath
Oct  5 19:23:03.724: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9cbt" in namespace "provisioning-9140" to be "Succeeded or Failed"
Oct  5 19:23:03.754: INFO: Pod "pod-subpath-test-preprovisionedpv-9cbt": Phase="Pending", Reason="", readiness=false. Elapsed: 30.717877ms
Oct  5 19:23:05.786: INFO: Pod "pod-subpath-test-preprovisionedpv-9cbt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062270952s
Oct  5 19:23:07.819: INFO: Pod "pod-subpath-test-preprovisionedpv-9cbt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095262693s
Oct  5 19:23:09.851: INFO: Pod "pod-subpath-test-preprovisionedpv-9cbt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127193558s
Oct  5 19:23:11.883: INFO: Pod "pod-subpath-test-preprovisionedpv-9cbt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159209693s
Oct  5 19:23:13.914: INFO: Pod "pod-subpath-test-preprovisionedpv-9cbt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.190216075s
STEP: Saw pod success
Oct  5 19:23:13.914: INFO: Pod "pod-subpath-test-preprovisionedpv-9cbt" satisfied condition "Succeeded or Failed"
Oct  5 19:23:13.956: INFO: Trying to get logs from node ip-172-20-41-232.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-9cbt container test-container-subpath-preprovisionedpv-9cbt: <nil>
STEP: delete the pod
Oct  5 19:23:14.034: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9cbt to disappear
Oct  5 19:23:14.065: INFO: Pod pod-subpath-test-preprovisionedpv-9cbt no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9cbt
Oct  5 19:23:14.066: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9cbt" in namespace "provisioning-9140"
STEP: Creating pod pod-subpath-test-preprovisionedpv-9cbt
STEP: Creating a pod to test subpath
Oct  5 19:23:14.130: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9cbt" in namespace "provisioning-9140" to be "Succeeded or Failed"
Oct  5 19:23:14.161: INFO: Pod "pod-subpath-test-preprovisionedpv-9cbt": Phase="Pending", Reason="", readiness=false. Elapsed: 30.699701ms
Oct  5 19:23:16.193: INFO: Pod "pod-subpath-test-preprovisionedpv-9cbt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062568798s
Oct  5 19:23:18.224: INFO: Pod "pod-subpath-test-preprovisionedpv-9cbt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093789972s
STEP: Saw pod success
Oct  5 19:23:18.224: INFO: Pod "pod-subpath-test-preprovisionedpv-9cbt" satisfied condition "Succeeded or Failed"
Oct  5 19:23:18.255: INFO: Trying to get logs from node ip-172-20-41-232.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-9cbt container test-container-subpath-preprovisionedpv-9cbt: <nil>
STEP: delete the pod
Oct  5 19:23:18.322: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9cbt to disappear
Oct  5 19:23:18.352: INFO: Pod pod-subpath-test-preprovisionedpv-9cbt no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9cbt
Oct  5 19:23:18.352: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9cbt" in namespace "provisioning-9140"
... skipping 37 lines ...
Oct  5 19:22:54.395: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-590qr7xl
STEP: creating a claim
Oct  5 19:22:54.426: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-rtr5
STEP: Creating a pod to test subpath
Oct  5 19:22:54.524: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rtr5" in namespace "provisioning-590" to be "Succeeded or Failed"
Oct  5 19:22:54.555: INFO: Pod "pod-subpath-test-dynamicpv-rtr5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.56812ms
Oct  5 19:22:56.586: INFO: Pod "pod-subpath-test-dynamicpv-rtr5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062133165s
Oct  5 19:22:58.620: INFO: Pod "pod-subpath-test-dynamicpv-rtr5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095650642s
Oct  5 19:23:00.652: INFO: Pod "pod-subpath-test-dynamicpv-rtr5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127235105s
Oct  5 19:23:02.683: INFO: Pod "pod-subpath-test-dynamicpv-rtr5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158856651s
Oct  5 19:23:04.714: INFO: Pod "pod-subpath-test-dynamicpv-rtr5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.189279989s
Oct  5 19:23:06.746: INFO: Pod "pod-subpath-test-dynamicpv-rtr5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.221249547s
Oct  5 19:23:08.777: INFO: Pod "pod-subpath-test-dynamicpv-rtr5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.252957152s
Oct  5 19:23:10.817: INFO: Pod "pod-subpath-test-dynamicpv-rtr5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.292526943s
Oct  5 19:23:12.849: INFO: Pod "pod-subpath-test-dynamicpv-rtr5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.324530408s
Oct  5 19:23:14.880: INFO: Pod "pod-subpath-test-dynamicpv-rtr5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.356109016s
STEP: Saw pod success
Oct  5 19:23:14.880: INFO: Pod "pod-subpath-test-dynamicpv-rtr5" satisfied condition "Succeeded or Failed"
Oct  5 19:23:14.911: INFO: Trying to get logs from node ip-172-20-46-201.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-rtr5 container test-container-subpath-dynamicpv-rtr5: <nil>
STEP: delete the pod
Oct  5 19:23:14.990: INFO: Waiting for pod pod-subpath-test-dynamicpv-rtr5 to disappear
Oct  5 19:23:15.020: INFO: Pod pod-subpath-test-dynamicpv-rtr5 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-rtr5
Oct  5 19:23:15.020: INFO: Deleting pod "pod-subpath-test-dynamicpv-rtr5" in namespace "provisioning-590"
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:20.411: INFO: Only supported for providers [vsphere] (not aws)
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-7a20b3a7-ff62-4062-aad2-3f4d57ede3c1
STEP: Creating a pod to test consume configMaps
Oct  5 19:23:13.245: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d651d109-5445-48cf-b8a8-8132a1b301a4" in namespace "projected-2425" to be "Succeeded or Failed"
Oct  5 19:23:13.284: INFO: Pod "pod-projected-configmaps-d651d109-5445-48cf-b8a8-8132a1b301a4": Phase="Pending", Reason="", readiness=false. Elapsed: 38.881003ms
Oct  5 19:23:15.315: INFO: Pod "pod-projected-configmaps-d651d109-5445-48cf-b8a8-8132a1b301a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069446461s
Oct  5 19:23:17.347: INFO: Pod "pod-projected-configmaps-d651d109-5445-48cf-b8a8-8132a1b301a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101152296s
Oct  5 19:23:19.378: INFO: Pod "pod-projected-configmaps-d651d109-5445-48cf-b8a8-8132a1b301a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132629362s
Oct  5 19:23:21.409: INFO: Pod "pod-projected-configmaps-d651d109-5445-48cf-b8a8-8132a1b301a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.163540409s
STEP: Saw pod success
Oct  5 19:23:21.409: INFO: Pod "pod-projected-configmaps-d651d109-5445-48cf-b8a8-8132a1b301a4" satisfied condition "Succeeded or Failed"
Oct  5 19:23:21.440: INFO: Trying to get logs from node ip-172-20-46-201.ca-central-1.compute.internal pod pod-projected-configmaps-d651d109-5445-48cf-b8a8-8132a1b301a4 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Oct  5 19:23:21.507: INFO: Waiting for pod pod-projected-configmaps-d651d109-5445-48cf-b8a8-8132a1b301a4 to disappear
Oct  5 19:23:21.538: INFO: Pod pod-projected-configmaps-d651d109-5445-48cf-b8a8-8132a1b301a4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.571 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:21.611: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 21 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59
STEP: Creating configMap with name projected-configmap-test-volume-b298c9a9-8477-490b-99c9-fae0a9e613b9
STEP: Creating a pod to test consume configMaps
Oct  5 19:23:15.095: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-622927ee-faed-496b-9342-c9d8198923f3" in namespace "projected-4085" to be "Succeeded or Failed"
Oct  5 19:23:15.125: INFO: Pod "pod-projected-configmaps-622927ee-faed-496b-9342-c9d8198923f3": Phase="Pending", Reason="", readiness=false. Elapsed: 30.235226ms
Oct  5 19:23:17.164: INFO: Pod "pod-projected-configmaps-622927ee-faed-496b-9342-c9d8198923f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068530133s
Oct  5 19:23:19.196: INFO: Pod "pod-projected-configmaps-622927ee-faed-496b-9342-c9d8198923f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100492179s
Oct  5 19:23:21.226: INFO: Pod "pod-projected-configmaps-622927ee-faed-496b-9342-c9d8198923f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13097107s
Oct  5 19:23:23.271: INFO: Pod "pod-projected-configmaps-622927ee-faed-496b-9342-c9d8198923f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.176174998s
STEP: Saw pod success
Oct  5 19:23:23.271: INFO: Pod "pod-projected-configmaps-622927ee-faed-496b-9342-c9d8198923f3" satisfied condition "Succeeded or Failed"
Oct  5 19:23:23.302: INFO: Trying to get logs from node ip-172-20-46-201.ca-central-1.compute.internal pod pod-projected-configmaps-622927ee-faed-496b-9342-c9d8198923f3 container agnhost-container: <nil>
STEP: delete the pod
Oct  5 19:23:23.396: INFO: Waiting for pod pod-projected-configmaps-622927ee-faed-496b-9342-c9d8198923f3 to disappear
Oct  5 19:23:23.429: INFO: Pod pod-projected-configmaps-622927ee-faed-496b-9342-c9d8198923f3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 18 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
Oct  5 19:23:13.779: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct  5 19:23:13.779: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-shfl
STEP: Creating a pod to test subpath
Oct  5 19:23:13.812: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-shfl" in namespace "provisioning-8267" to be "Succeeded or Failed"
Oct  5 19:23:13.844: INFO: Pod "pod-subpath-test-inlinevolume-shfl": Phase="Pending", Reason="", readiness=false. Elapsed: 32.195504ms
Oct  5 19:23:15.875: INFO: Pod "pod-subpath-test-inlinevolume-shfl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063010616s
Oct  5 19:23:17.906: INFO: Pod "pod-subpath-test-inlinevolume-shfl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094091849s
Oct  5 19:23:19.938: INFO: Pod "pod-subpath-test-inlinevolume-shfl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126382825s
Oct  5 19:23:21.969: INFO: Pod "pod-subpath-test-inlinevolume-shfl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156946479s
Oct  5 19:23:24.002: INFO: Pod "pod-subpath-test-inlinevolume-shfl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.190231156s
STEP: Saw pod success
Oct  5 19:23:24.002: INFO: Pod "pod-subpath-test-inlinevolume-shfl" satisfied condition "Succeeded or Failed"
Oct  5 19:23:24.033: INFO: Trying to get logs from node ip-172-20-46-201.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-shfl container test-container-subpath-inlinevolume-shfl: <nil>
STEP: delete the pod
Oct  5 19:23:24.105: INFO: Waiting for pod pod-subpath-test-inlinevolume-shfl to disappear
Oct  5 19:23:24.139: INFO: Pod pod-subpath-test-inlinevolume-shfl no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-shfl
Oct  5 19:23:24.139: INFO: Deleting pod "pod-subpath-test-inlinevolume-shfl" in namespace "provisioning-8267"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:24.277: INFO: Only supported for providers [gce gke] (not aws)
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:24.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9589" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":4,"skipped":19,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:24.626: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 130 lines ...
Oct  5 19:23:17.899: INFO: PersistentVolumeClaim pvc-v25mp found but phase is Pending instead of Bound.
Oct  5 19:23:19.930: INFO: PersistentVolumeClaim pvc-v25mp found and phase=Bound (16.278129841s)
Oct  5 19:23:19.930: INFO: Waiting up to 3m0s for PersistentVolume local-dcb24 to have phase Bound
Oct  5 19:23:19.961: INFO: PersistentVolume local-dcb24 found and phase=Bound (31.315246ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-c5p4
STEP: Creating a pod to test subpath
Oct  5 19:23:20.055: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-c5p4" in namespace "provisioning-1701" to be "Succeeded or Failed"
Oct  5 19:23:20.085: INFO: Pod "pod-subpath-test-preprovisionedpv-c5p4": Phase="Pending", Reason="", readiness=false. Elapsed: 30.018433ms
Oct  5 19:23:22.116: INFO: Pod "pod-subpath-test-preprovisionedpv-c5p4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060851634s
Oct  5 19:23:24.147: INFO: Pod "pod-subpath-test-preprovisionedpv-c5p4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091518707s
Oct  5 19:23:26.179: INFO: Pod "pod-subpath-test-preprovisionedpv-c5p4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.124313819s
STEP: Saw pod success
Oct  5 19:23:26.180: INFO: Pod "pod-subpath-test-preprovisionedpv-c5p4" satisfied condition "Succeeded or Failed"
Oct  5 19:23:26.210: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-c5p4 container test-container-subpath-preprovisionedpv-c5p4: <nil>
STEP: delete the pod
Oct  5 19:23:26.281: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-c5p4 to disappear
Oct  5 19:23:26.311: INFO: Pod pod-subpath-test-preprovisionedpv-c5p4 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-c5p4
Oct  5 19:23:26.311: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-c5p4" in namespace "provisioning-1701"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:27.387: INFO: Only supported for providers [gce gke] (not aws)
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:28.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3064" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":1,"skipped":12,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:28.111: INFO: Only supported for providers [gce gke] (not aws)
... skipping 209 lines ...
Oct  5 19:23:18.494: INFO: PersistentVolumeClaim pvc-8wksx found but phase is Pending instead of Bound.
Oct  5 19:23:20.526: INFO: PersistentVolumeClaim pvc-8wksx found and phase=Bound (16.291562264s)
Oct  5 19:23:20.526: INFO: Waiting up to 3m0s for PersistentVolume local-hqmln to have phase Bound
Oct  5 19:23:20.557: INFO: PersistentVolume local-hqmln found and phase=Bound (30.882354ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-2xnb
STEP: Creating a pod to test exec-volume-test
Oct  5 19:23:20.654: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-2xnb" in namespace "volume-5114" to be "Succeeded or Failed"
Oct  5 19:23:20.684: INFO: Pod "exec-volume-test-preprovisionedpv-2xnb": Phase="Pending", Reason="", readiness=false. Elapsed: 30.712854ms
Oct  5 19:23:22.717: INFO: Pod "exec-volume-test-preprovisionedpv-2xnb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062808499s
Oct  5 19:23:24.748: INFO: Pod "exec-volume-test-preprovisionedpv-2xnb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093995272s
Oct  5 19:23:26.781: INFO: Pod "exec-volume-test-preprovisionedpv-2xnb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126748807s
STEP: Saw pod success
Oct  5 19:23:26.781: INFO: Pod "exec-volume-test-preprovisionedpv-2xnb" satisfied condition "Succeeded or Failed"
Oct  5 19:23:26.812: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod exec-volume-test-preprovisionedpv-2xnb container exec-container-preprovisionedpv-2xnb: <nil>
STEP: delete the pod
Oct  5 19:23:26.889: INFO: Waiting for pod exec-volume-test-preprovisionedpv-2xnb to disappear
Oct  5 19:23:26.920: INFO: Pod exec-volume-test-preprovisionedpv-2xnb no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-2xnb
Oct  5 19:23:26.920: INFO: Deleting pod "exec-volume-test-preprovisionedpv-2xnb" in namespace "volume-5114"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:28.236: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 238 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:28.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2535" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":3,"skipped":10,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:28.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":2,"skipped":15,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
... skipping 9 lines ...
Oct  5 19:22:49.693: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Oct  5 19:22:50.106: INFO: Successfully created a new PD: "aws://ca-central-1a/vol-04b96b7238d203c47".
Oct  5 19:22:50.106: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-755n
STEP: Creating a pod to test exec-volume-test
Oct  5 19:22:50.141: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-755n" in namespace "volume-5594" to be "Succeeded or Failed"
Oct  5 19:22:50.172: INFO: Pod "exec-volume-test-inlinevolume-755n": Phase="Pending", Reason="", readiness=false. Elapsed: 30.183388ms
Oct  5 19:22:52.203: INFO: Pod "exec-volume-test-inlinevolume-755n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061631181s
Oct  5 19:22:54.236: INFO: Pod "exec-volume-test-inlinevolume-755n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094486126s
Oct  5 19:22:56.269: INFO: Pod "exec-volume-test-inlinevolume-755n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127360746s
Oct  5 19:22:58.299: INFO: Pod "exec-volume-test-inlinevolume-755n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158088888s
Oct  5 19:23:00.332: INFO: Pod "exec-volume-test-inlinevolume-755n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.19095671s
Oct  5 19:23:02.364: INFO: Pod "exec-volume-test-inlinevolume-755n": Phase="Pending", Reason="", readiness=false. Elapsed: 12.222460375s
Oct  5 19:23:04.395: INFO: Pod "exec-volume-test-inlinevolume-755n": Phase="Pending", Reason="", readiness=false. Elapsed: 14.253443455s
Oct  5 19:23:06.426: INFO: Pod "exec-volume-test-inlinevolume-755n": Phase="Pending", Reason="", readiness=false. Elapsed: 16.285102266s
Oct  5 19:23:08.457: INFO: Pod "exec-volume-test-inlinevolume-755n": Phase="Pending", Reason="", readiness=false. Elapsed: 18.316107675s
Oct  5 19:23:10.488: INFO: Pod "exec-volume-test-inlinevolume-755n": Phase="Pending", Reason="", readiness=false. Elapsed: 20.347000126s
Oct  5 19:23:12.520: INFO: Pod "exec-volume-test-inlinevolume-755n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.379066507s
STEP: Saw pod success
Oct  5 19:23:12.520: INFO: Pod "exec-volume-test-inlinevolume-755n" satisfied condition "Succeeded or Failed"
Oct  5 19:23:12.551: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod exec-volume-test-inlinevolume-755n container exec-container-inlinevolume-755n: <nil>
STEP: delete the pod
Oct  5 19:23:12.642: INFO: Waiting for pod exec-volume-test-inlinevolume-755n to disappear
Oct  5 19:23:12.672: INFO: Pod exec-volume-test-inlinevolume-755n no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-755n
Oct  5 19:23:12.672: INFO: Deleting pod "exec-volume-test-inlinevolume-755n" in namespace "volume-5594"
Oct  5 19:23:12.821: INFO: Couldn't delete PD "aws://ca-central-1a/vol-04b96b7238d203c47", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-04b96b7238d203c47 is currently attached to i-02468cd98e1e52b62
	status code: 400, request id: bbacd33c-f4c2-49e7-a6f7-b655209bac26
Oct  5 19:23:18.057: INFO: Couldn't delete PD "aws://ca-central-1a/vol-04b96b7238d203c47", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-04b96b7238d203c47 is currently attached to i-02468cd98e1e52b62
	status code: 400, request id: 873042c8-c1ce-413e-aee2-a3f2f64ccd0c
Oct  5 19:23:23.308: INFO: Couldn't delete PD "aws://ca-central-1a/vol-04b96b7238d203c47", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-04b96b7238d203c47 is currently attached to i-02468cd98e1e52b62
	status code: 400, request id: 1c204124-1308-4cfc-8842-acf622a08ed3
Oct  5 19:23:28.614: INFO: Successfully deleted PD "aws://ca-central-1a/vol-04b96b7238d203c47".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:28.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5594" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:29.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-7052" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":2,"skipped":32,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":17,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:23:23.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct  5 19:23:23.710: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-bcc8072e-f804-490b-ba42-b76395107a93" in namespace "security-context-test-1295" to be "Succeeded or Failed"
Oct  5 19:23:23.740: INFO: Pod "busybox-privileged-false-bcc8072e-f804-490b-ba42-b76395107a93": Phase="Pending", Reason="", readiness=false. Elapsed: 30.529355ms
Oct  5 19:23:25.771: INFO: Pod "busybox-privileged-false-bcc8072e-f804-490b-ba42-b76395107a93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061417147s
Oct  5 19:23:27.802: INFO: Pod "busybox-privileged-false-bcc8072e-f804-490b-ba42-b76395107a93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092067874s
Oct  5 19:23:29.832: INFO: Pod "busybox-privileged-false-bcc8072e-f804-490b-ba42-b76395107a93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.122751725s
Oct  5 19:23:29.833: INFO: Pod "busybox-privileged-false-bcc8072e-f804-490b-ba42-b76395107a93" satisfied condition "Succeeded or Failed"
Oct  5 19:23:29.885: INFO: Got logs for pod "busybox-privileged-false-bcc8072e-f804-490b-ba42-b76395107a93": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:29.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1295" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:30.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5884" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:30.165: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:30.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-8754" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":4,"skipped":18,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:30.822: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 44 lines ...
Oct  5 19:23:17.543: INFO: PersistentVolumeClaim pvc-8g86s found but phase is Pending instead of Bound.
Oct  5 19:23:19.574: INFO: PersistentVolumeClaim pvc-8g86s found and phase=Bound (14.252716408s)
Oct  5 19:23:19.574: INFO: Waiting up to 3m0s for PersistentVolume local-jvjqh to have phase Bound
Oct  5 19:23:19.605: INFO: PersistentVolume local-jvjqh found and phase=Bound (30.227638ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zrqg
STEP: Creating a pod to test subpath
Oct  5 19:23:19.697: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zrqg" in namespace "provisioning-4821" to be "Succeeded or Failed"
Oct  5 19:23:19.727: INFO: Pod "pod-subpath-test-preprovisionedpv-zrqg": Phase="Pending", Reason="", readiness=false. Elapsed: 30.193409ms
Oct  5 19:23:21.766: INFO: Pod "pod-subpath-test-preprovisionedpv-zrqg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069377067s
Oct  5 19:23:23.797: INFO: Pod "pod-subpath-test-preprovisionedpv-zrqg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100405162s
Oct  5 19:23:25.828: INFO: Pod "pod-subpath-test-preprovisionedpv-zrqg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131173733s
Oct  5 19:23:27.860: INFO: Pod "pod-subpath-test-preprovisionedpv-zrqg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.162946932s
STEP: Saw pod success
Oct  5 19:23:27.860: INFO: Pod "pod-subpath-test-preprovisionedpv-zrqg" satisfied condition "Succeeded or Failed"
Oct  5 19:23:27.892: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-zrqg container test-container-subpath-preprovisionedpv-zrqg: <nil>
STEP: delete the pod
Oct  5 19:23:27.997: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zrqg to disappear
Oct  5 19:23:28.032: INFO: Pod pod-subpath-test-preprovisionedpv-zrqg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zrqg
Oct  5 19:23:28.032: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zrqg" in namespace "provisioning-4821"
STEP: Creating pod pod-subpath-test-preprovisionedpv-zrqg
STEP: Creating a pod to test subpath
Oct  5 19:23:28.094: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zrqg" in namespace "provisioning-4821" to be "Succeeded or Failed"
Oct  5 19:23:28.124: INFO: Pod "pod-subpath-test-preprovisionedpv-zrqg": Phase="Pending", Reason="", readiness=false. Elapsed: 30.199267ms
Oct  5 19:23:30.156: INFO: Pod "pod-subpath-test-preprovisionedpv-zrqg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062035962s
STEP: Saw pod success
Oct  5 19:23:30.156: INFO: Pod "pod-subpath-test-preprovisionedpv-zrqg" satisfied condition "Succeeded or Failed"
Oct  5 19:23:30.186: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-zrqg container test-container-subpath-preprovisionedpv-zrqg: <nil>
STEP: delete the pod
Oct  5 19:23:30.262: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zrqg to disappear
Oct  5 19:23:30.297: INFO: Pod pod-subpath-test-preprovisionedpv-zrqg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zrqg
Oct  5 19:23:30.297: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zrqg" in namespace "provisioning-4821"
... skipping 21 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:30.842: INFO: Only supported for providers [gce gke] (not aws)
... skipping 123 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:32.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1075" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":3,"skipped":7,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Oct  5 19:23:06.377: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Oct  5 19:23:06.377: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-chj6
STEP: Creating a pod to test atomic-volume-subpath
Oct  5 19:23:06.410: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-chj6" in namespace "provisioning-152" to be "Succeeded or Failed"
Oct  5 19:23:06.443: INFO: Pod "pod-subpath-test-inlinevolume-chj6": Phase="Pending", Reason="", readiness=false. Elapsed: 32.64513ms
Oct  5 19:23:08.474: INFO: Pod "pod-subpath-test-inlinevolume-chj6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064375035s
Oct  5 19:23:10.506: INFO: Pod "pod-subpath-test-inlinevolume-chj6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095785566s
Oct  5 19:23:12.538: INFO: Pod "pod-subpath-test-inlinevolume-chj6": Phase="Running", Reason="", readiness=true. Elapsed: 6.127588907s
Oct  5 19:23:14.569: INFO: Pod "pod-subpath-test-inlinevolume-chj6": Phase="Running", Reason="", readiness=true. Elapsed: 8.158875734s
Oct  5 19:23:16.600: INFO: Pod "pod-subpath-test-inlinevolume-chj6": Phase="Running", Reason="", readiness=true. Elapsed: 10.19026462s
... skipping 3 lines ...
Oct  5 19:23:24.726: INFO: Pod "pod-subpath-test-inlinevolume-chj6": Phase="Running", Reason="", readiness=true. Elapsed: 18.315628816s
Oct  5 19:23:26.757: INFO: Pod "pod-subpath-test-inlinevolume-chj6": Phase="Running", Reason="", readiness=true. Elapsed: 20.3466846s
Oct  5 19:23:28.789: INFO: Pod "pod-subpath-test-inlinevolume-chj6": Phase="Running", Reason="", readiness=true. Elapsed: 22.379331824s
Oct  5 19:23:30.827: INFO: Pod "pod-subpath-test-inlinevolume-chj6": Phase="Running", Reason="", readiness=true. Elapsed: 24.417013051s
Oct  5 19:23:32.860: INFO: Pod "pod-subpath-test-inlinevolume-chj6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.449433733s
STEP: Saw pod success
Oct  5 19:23:32.860: INFO: Pod "pod-subpath-test-inlinevolume-chj6" satisfied condition "Succeeded or Failed"
Oct  5 19:23:32.893: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-chj6 container test-container-subpath-inlinevolume-chj6: <nil>
STEP: delete the pod
Oct  5 19:23:32.975: INFO: Waiting for pod pod-subpath-test-inlinevolume-chj6 to disappear
Oct  5 19:23:33.006: INFO: Pod pod-subpath-test-inlinevolume-chj6 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-chj6
Oct  5 19:23:33.006: INFO: Deleting pod "pod-subpath-test-inlinevolume-chj6" in namespace "provisioning-152"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":35,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
Oct  5 19:23:28.977: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7296 explain e2e-test-crd-publish-openapi-8898-crds.spec'
Oct  5 19:23:29.323: INFO: stderr: ""
Oct  5 19:23:29.323: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8898-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Oct  5 19:23:29.323: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7296 explain e2e-test-crd-publish-openapi-8898-crds.spec.bars'
Oct  5 19:23:29.653: INFO: stderr: ""
Oct  5 19:23:29.653: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8898-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Oct  5 19:23:29.653: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7296 explain e2e-test-crd-publish-openapi-8898-crds.spec.bars2'
Oct  5 19:23:29.982: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:33.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7296" for this suite.
... skipping 2 lines ...
• [SLOW TEST:12.138 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":4,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:33.789: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct  5 19:23:28.613: INFO: Waiting up to 5m0s for pod "pod-50230d09-832a-43a5-9ec2-1c89f020a150" in namespace "emptydir-6275" to be "Succeeded or Failed"
Oct  5 19:23:28.645: INFO: Pod "pod-50230d09-832a-43a5-9ec2-1c89f020a150": Phase="Pending", Reason="", readiness=false. Elapsed: 31.668781ms
Oct  5 19:23:30.677: INFO: Pod "pod-50230d09-832a-43a5-9ec2-1c89f020a150": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063255219s
Oct  5 19:23:32.709: INFO: Pod "pod-50230d09-832a-43a5-9ec2-1c89f020a150": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095629963s
Oct  5 19:23:34.741: INFO: Pod "pod-50230d09-832a-43a5-9ec2-1c89f020a150": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.127418769s
STEP: Saw pod success
Oct  5 19:23:34.741: INFO: Pod "pod-50230d09-832a-43a5-9ec2-1c89f020a150" satisfied condition "Succeeded or Failed"
Oct  5 19:23:34.772: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod pod-50230d09-832a-43a5-9ec2-1c89f020a150 container test-container: <nil>
STEP: delete the pod
Oct  5 19:23:34.840: INFO: Waiting for pod pod-50230d09-832a-43a5-9ec2-1c89f020a150 to disappear
Oct  5 19:23:34.873: INFO: Pod pod-50230d09-832a-43a5-9ec2-1c89f020a150 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    new files should be created with FSGroup ownership when container is root
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":3,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:34.952: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 85 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:36.137: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 105 lines ...
• [SLOW TEST:28.504 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":2,"skipped":39,"failed":0}

SS
------------------------------
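The ResourceQuota conformance test above creates a quota, then a ConfigMap, and watches the quota's status.used count rise and fall with the object's lifecycle. A sketch of the quota object involved (name is hypothetical):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    quota := &corev1.ResourceQuota{
        ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
        Spec: corev1.ResourceQuotaSpec{
            Hard: corev1.ResourceList{
                // Caps the number of ConfigMaps in the namespace; the quota
                // controller tracks actual usage in status.used.
                corev1.ResourceConfigMaps: resource.MustParse("2"),
            },
        },
    }
    b, _ := json.MarshalIndent(quota, "", "  ")
    fmt.Println(string(b))
}
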
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 43 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  5 19:23:34.044: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbd03c97-a259-4a64-b182-95adb9c74c1d" in namespace "projected-7417" to be "Succeeded or Failed"
Oct  5 19:23:34.075: INFO: Pod "downwardapi-volume-bbd03c97-a259-4a64-b182-95adb9c74c1d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.787692ms
Oct  5 19:23:36.106: INFO: Pod "downwardapi-volume-bbd03c97-a259-4a64-b182-95adb9c74c1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062077197s
Oct  5 19:23:38.137: INFO: Pod "downwardapi-volume-bbd03c97-a259-4a64-b182-95adb9c74c1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092954984s
Oct  5 19:23:40.171: INFO: Pod "downwardapi-volume-bbd03c97-a259-4a64-b182-95adb9c74c1d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126708505s
Oct  5 19:23:42.201: INFO: Pod "downwardapi-volume-bbd03c97-a259-4a64-b182-95adb9c74c1d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157445643s
Oct  5 19:23:44.232: INFO: Pod "downwardapi-volume-bbd03c97-a259-4a64-b182-95adb9c74c1d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.188302592s
Oct  5 19:23:46.263: INFO: Pod "downwardapi-volume-bbd03c97-a259-4a64-b182-95adb9c74c1d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.219129426s
Oct  5 19:23:48.296: INFO: Pod "downwardapi-volume-bbd03c97-a259-4a64-b182-95adb9c74c1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.251986816s
STEP: Saw pod success
Oct  5 19:23:48.296: INFO: Pod "downwardapi-volume-bbd03c97-a259-4a64-b182-95adb9c74c1d" satisfied condition "Succeeded or Failed"
Oct  5 19:23:48.326: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod downwardapi-volume-bbd03c97-a259-4a64-b182-95adb9c74c1d container client-container: <nil>
STEP: delete the pod
Oct  5 19:23:48.400: INFO: Waiting for pod downwardapi-volume-bbd03c97-a259-4a64-b182-95adb9c74c1d to disappear
Oct  5 19:23:48.430: INFO: Pod downwardapi-volume-bbd03c97-a259-4a64-b182-95adb9c74c1d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.711 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":11,"failed":0}

S
------------------------------
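The projected downwardAPI test above relies on a documented default: when a container declares no memory limit, a limits.memory resourceFieldRef resolves to the node's allocatable memory. A sketch of the projected volume that exposes it as a file (pod and volume names are hypothetical; the container name is taken from the log):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "memory_limit",
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                // With no limit set on the container, this file
                                // holds the node-allocatable memory instead.
                                Resource: "limits.memory",
                            },
                        }},
                    },
                }},
            },
        },
    }
    b, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(b))
}
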
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Oct  5 19:23:24.797: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Oct  5 19:23:24.828: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-s85r
STEP: Creating a pod to test atomic-volume-subpath
Oct  5 19:23:24.861: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-s85r" in namespace "provisioning-3663" to be "Succeeded or Failed"
Oct  5 19:23:24.891: INFO: Pod "pod-subpath-test-inlinevolume-s85r": Phase="Pending", Reason="", readiness=false. Elapsed: 30.348033ms
Oct  5 19:23:26.922: INFO: Pod "pod-subpath-test-inlinevolume-s85r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061606157s
Oct  5 19:23:28.955: INFO: Pod "pod-subpath-test-inlinevolume-s85r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094268239s
Oct  5 19:23:30.987: INFO: Pod "pod-subpath-test-inlinevolume-s85r": Phase="Running", Reason="", readiness=true. Elapsed: 6.126247591s
Oct  5 19:23:33.018: INFO: Pod "pod-subpath-test-inlinevolume-s85r": Phase="Running", Reason="", readiness=true. Elapsed: 8.157701323s
Oct  5 19:23:35.051: INFO: Pod "pod-subpath-test-inlinevolume-s85r": Phase="Running", Reason="", readiness=true. Elapsed: 10.19002649s
... skipping 2 lines ...
Oct  5 19:23:41.145: INFO: Pod "pod-subpath-test-inlinevolume-s85r": Phase="Running", Reason="", readiness=true. Elapsed: 16.283990968s
Oct  5 19:23:43.176: INFO: Pod "pod-subpath-test-inlinevolume-s85r": Phase="Running", Reason="", readiness=true. Elapsed: 18.315487342s
Oct  5 19:23:45.209: INFO: Pod "pod-subpath-test-inlinevolume-s85r": Phase="Running", Reason="", readiness=true. Elapsed: 20.347922901s
Oct  5 19:23:47.241: INFO: Pod "pod-subpath-test-inlinevolume-s85r": Phase="Running", Reason="", readiness=true. Elapsed: 22.379829732s
Oct  5 19:23:49.271: INFO: Pod "pod-subpath-test-inlinevolume-s85r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.410515458s
STEP: Saw pod success
Oct  5 19:23:49.271: INFO: Pod "pod-subpath-test-inlinevolume-s85r" satisfied condition "Succeeded or Failed"
Oct  5 19:23:49.301: INFO: Trying to get logs from node ip-172-20-41-232.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-s85r container test-container-subpath-inlinevolume-s85r: <nil>
STEP: delete the pod
Oct  5 19:23:49.371: INFO: Waiting for pod pod-subpath-test-inlinevolume-s85r to disappear
Oct  5 19:23:49.401: INFO: Pod pod-subpath-test-inlinevolume-s85r no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-s85r
Oct  5 19:23:49.401: INFO: Deleting pod "pod-subpath-test-inlinevolume-s85r" in namespace "provisioning-3663"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":27,"failed":0}

S
------------------------------
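The subPath tests in this log (inline hostPath here, a pre-provisioned local PV further down) all exercise the same mount shape: a single file inside a volume, mounted at a file path in the container. A sketch of the relevant mount, with hypothetical names:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:  "test-container-subpath",
        Image: "busybox", // hypothetical
        VolumeMounts: []corev1.VolumeMount{{
            Name:      "test-volume",
            MountPath: "/test/sub-file", // the container sees only this one file
            SubPath:   "sub-file",       // path of the file inside the volume
        }},
    }
    b, _ := json.MarshalIndent(c, "", "  ")
    fmt.Println(string(b))
}
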
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:49.540: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 122 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:49.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8017" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":6,"skipped":35,"failed":0}
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:23:49.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:50.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-7814" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":7,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 16 lines ...
Oct  5 19:23:17.986: INFO: PersistentVolumeClaim pvc-ls4vl found but phase is Pending instead of Bound.
Oct  5 19:23:20.018: INFO: PersistentVolumeClaim pvc-ls4vl found and phase=Bound (2.062026965s)
Oct  5 19:23:20.018: INFO: Waiting up to 3m0s for PersistentVolume local-7l67h to have phase Bound
Oct  5 19:23:20.048: INFO: PersistentVolume local-7l67h found and phase=Bound (30.558492ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-f5c5
STEP: Creating a pod to test atomic-volume-subpath
Oct  5 19:23:20.142: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-f5c5" in namespace "provisioning-6944" to be "Succeeded or Failed"
Oct  5 19:23:20.174: INFO: Pod "pod-subpath-test-preprovisionedpv-f5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 31.697803ms
Oct  5 19:23:22.205: INFO: Pod "pod-subpath-test-preprovisionedpv-f5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062848413s
Oct  5 19:23:24.236: INFO: Pod "pod-subpath-test-preprovisionedpv-f5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09360226s
Oct  5 19:23:26.269: INFO: Pod "pod-subpath-test-preprovisionedpv-f5c5": Phase="Running", Reason="", readiness=true. Elapsed: 6.126657963s
Oct  5 19:23:28.300: INFO: Pod "pod-subpath-test-preprovisionedpv-f5c5": Phase="Running", Reason="", readiness=true. Elapsed: 8.15787211s
Oct  5 19:23:30.341: INFO: Pod "pod-subpath-test-preprovisionedpv-f5c5": Phase="Running", Reason="", readiness=true. Elapsed: 10.198927375s
... skipping 5 lines ...
Oct  5 19:23:42.534: INFO: Pod "pod-subpath-test-preprovisionedpv-f5c5": Phase="Running", Reason="", readiness=true. Elapsed: 22.392014513s
Oct  5 19:23:44.565: INFO: Pod "pod-subpath-test-preprovisionedpv-f5c5": Phase="Running", Reason="", readiness=true. Elapsed: 24.422937048s
Oct  5 19:23:46.597: INFO: Pod "pod-subpath-test-preprovisionedpv-f5c5": Phase="Running", Reason="", readiness=true. Elapsed: 26.454426452s
Oct  5 19:23:48.629: INFO: Pod "pod-subpath-test-preprovisionedpv-f5c5": Phase="Running", Reason="", readiness=true. Elapsed: 28.486370411s
Oct  5 19:23:50.661: INFO: Pod "pod-subpath-test-preprovisionedpv-f5c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.518752091s
STEP: Saw pod success
Oct  5 19:23:50.661: INFO: Pod "pod-subpath-test-preprovisionedpv-f5c5" satisfied condition "Succeeded or Failed"
Oct  5 19:23:50.692: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-f5c5 container test-container-subpath-preprovisionedpv-f5c5: <nil>
STEP: delete the pod
Oct  5 19:23:50.764: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-f5c5 to disappear
Oct  5 19:23:50.795: INFO: Pod pod-subpath-test-preprovisionedpv-f5c5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-f5c5
Oct  5 19:23:50.795: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-f5c5" in namespace "provisioning-6944"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:23:51.570: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 109 lines ...
• [SLOW TEST:24.739 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":3,"skipped":11,"failed":0}

S
------------------------------
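The probe test above configures an HTTP liveness probe against an endpoint that answers with a local redirect; the kubelet follows same-host redirects, so the container is restarted once the redirect target starts failing. A sketch of such a probe (port, path, and thresholds are hypothetical):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    probe := &corev1.Probe{
        InitialDelaySeconds: 5,
        PeriodSeconds:       5,
        FailureThreshold:    1,
    }
    // Assigned via the embedded handler so the literal compiles across
    // client-go versions that renamed Handler to ProbeHandler.
    probe.HTTPGet = &corev1.HTTPGetAction{
        Path: "/redirect?loc=/healthz", // endpoint that issues a local redirect
        Port: intstr.FromInt(8080),
    }
    b, _ := json.MarshalIndent(probe, "", "  ")
    fmt.Println(string(b))
}
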
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:23:54.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Oct  5 19:23:55.126: INFO: Waiting up to 5m0s for pod "test-pod-a63351d8-91da-477f-96e3-867be0bfe576" in namespace "svcaccounts-6374" to be "Succeeded or Failed"
Oct  5 19:23:55.157: INFO: Pod "test-pod-a63351d8-91da-477f-96e3-867be0bfe576": Phase="Pending", Reason="", readiness=false. Elapsed: 31.221279ms
Oct  5 19:23:57.198: INFO: Pod "test-pod-a63351d8-91da-477f-96e3-867be0bfe576": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072191072s
Oct  5 19:23:59.229: INFO: Pod "test-pod-a63351d8-91da-477f-96e3-867be0bfe576": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102774203s
STEP: Saw pod success
Oct  5 19:23:59.229: INFO: Pod "test-pod-a63351d8-91da-477f-96e3-867be0bfe576" satisfied condition "Succeeded or Failed"
Oct  5 19:23:59.260: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod test-pod-a63351d8-91da-477f-96e3-867be0bfe576 container agnhost-container: <nil>
STEP: delete the pod
Oct  5 19:23:59.328: INFO: Waiting for pod test-pod-a63351d8-91da-477f-96e3-867be0bfe576 to disappear
Oct  5 19:23:59.358: INFO: Pod test-pod-a63351d8-91da-477f-96e3-867be0bfe576 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:23:59.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6374" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":4,"skipped":12,"failed":0}

SSS
------------------------------
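The ServiceAccounts conformance test above mounts a token through a projected volume rather than the legacy secret-based mount, which is what makes the token time-bound and rotatable. A sketch of the projection (names and expiry are hypothetical):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    expiry := int64(3600) // the kubelet rotates the token before it expires
    vol := corev1.Volume{
        Name: "sa-token",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                        Path:              "token",
                        ExpirationSeconds: &expiry,
                    },
                }},
            },
        },
    }
    b, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(b))
}
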
[BeforeEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
Oct  5 19:23:29.597: INFO: Creating resource for dynamic PV
Oct  5 19:23:29.597: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-29752vc6h
STEP: creating a claim
STEP: Expanding non-expandable pvc
Oct  5 19:23:29.697: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Oct  5 19:23:29.763: INFO: Error updating pvc awswnwh8: PersistentVolumeClaim "awswnwh8" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-29752vc6h",
  	... // 2 identical fields
  }

... skipping 210 lines: 15 further identical update attempts (19:23:31.825 through 19:23:59.826, roughly every 2s), each rejected with the same Forbidden error and diff shown above ...

Oct  5 19:23:59.890: INFO: Error updating pvc awswnwh8: PersistentVolumeClaim "awswnwh8" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":3,"skipped":33,"failed":0}

SSS
------------------------------
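The long retry loop above is the test passing, not failing: every attempt to grow spec.resources.requests on a PVC whose StorageClass lacks allowVolumeExpansion is rejected with the Forbidden error, which is exactly what "should not allow expansion" asserts. For contrast, a sketch of a class that would permit expansion (the name is hypothetical; the provisioner matches the in-tree AWS driver under test):

package main

import (
    "encoding/json"
    "fmt"

    storagev1 "k8s.io/api/storage/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    allow := true
    sc := &storagev1.StorageClass{
        ObjectMeta:           metav1.ObjectMeta{Name: "expandable-aws"},
        Provisioner:          "kubernetes.io/aws-ebs",
        AllowVolumeExpansion: &allow, // without this, PVC resize requests are Forbidden
    }
    b, _ := json.MarshalIndent(sc, "", "  ")
    fmt.Println(string(b))
}
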
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:00.073: INFO: Only supported for providers [openstack] (not aws)
... skipping 27 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:24:01.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-27" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":5,"skipped":16,"failed":0}

SS
------------------------------
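Both sysctl tests in this log set kernel.shm_rmid_forced through the pod security context and then read the value back inside the container. A sketch of the pod-level setting:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    sec := &corev1.PodSecurityContext{
        Sysctls: []corev1.Sysctl{{
            Name:  "kernel.shm_rmid_forced", // the sysctl used by the tests above
            Value: "1",
        }},
    }
    b, _ := json.MarshalIndent(sec, "", "  ")
    fmt.Println(string(b))
}
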
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:02.077: INFO: Only supported for providers [gce gke] (not aws)
... skipping 37 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:24:02.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8163" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":6,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:02.572: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:03.635: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 73 lines ...
• [SLOW TEST:25.066 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":3,"skipped":41,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:04.248: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 163 lines ...
• [SLOW TEST:15.221 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":8,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:06.074: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
• [SLOW TEST:7.205 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":4,"skipped":39,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:24:07.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:24:07.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6163" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":5,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
STEP: Destroying namespace "services-7104" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":9,"skipped":39,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:09.184: INFO: Only supported for providers [gce gke] (not aws)
... skipping 148 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
    should require VolumeAttach for drivers with attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":3,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:09.579: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 53 lines ...
• [SLOW TEST:38.687 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PVC that is not in active use by a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":-1,"completed":5,"skipped":44,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:24:04.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct  5 19:24:04.448: INFO: Waiting up to 5m0s for pod "downward-api-534092ab-18bd-4e6b-9b5e-bbb2b0016180" in namespace "downward-api-2352" to be "Succeeded or Failed"
Oct  5 19:24:04.482: INFO: Pod "downward-api-534092ab-18bd-4e6b-9b5e-bbb2b0016180": Phase="Pending", Reason="", readiness=false. Elapsed: 33.063202ms
Oct  5 19:24:06.514: INFO: Pod "downward-api-534092ab-18bd-4e6b-9b5e-bbb2b0016180": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065251547s
Oct  5 19:24:08.545: INFO: Pod "downward-api-534092ab-18bd-4e6b-9b5e-bbb2b0016180": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096919464s
Oct  5 19:24:10.577: INFO: Pod "downward-api-534092ab-18bd-4e6b-9b5e-bbb2b0016180": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.128273267s
STEP: Saw pod success
Oct  5 19:24:10.577: INFO: Pod "downward-api-534092ab-18bd-4e6b-9b5e-bbb2b0016180" satisfied condition "Succeeded or Failed"
Oct  5 19:24:10.608: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod downward-api-534092ab-18bd-4e6b-9b5e-bbb2b0016180 container dapi-container: <nil>
STEP: delete the pod
Oct  5 19:24:10.702: INFO: Waiting for pod downward-api-534092ab-18bd-4e6b-9b5e-bbb2b0016180 to disappear
Oct  5 19:24:10.735: INFO: Pod downward-api-534092ab-18bd-4e6b-9b5e-bbb2b0016180 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.544 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":47,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:24:05.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 23 lines ...
• [SLOW TEST:6.696 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:12.444: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:24:13.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8611" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":3,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:13.568: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 34 lines ...
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  5 19:23:35.941: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Listing all of the created validation webhooks
Oct  5 19:24:10.369: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.StatusError | 0xc000f4a280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
... skipping 586 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct  5 19:24:10.370: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.StatusError | 0xc000f4a280>: {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {
                  SelfLink: "",
                  ResourceVersion: "",
... skipping 9 lines ...
      }
      Timeout: request did not complete within requested timeout context deadline exceeded
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:680
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":4,"skipped":18,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

SS
------------------------------
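The only failure in this section: the List call for mutating webhook configurations did not complete before the client's context deadline ("Timeout: request did not complete within requested timeout context deadline exceeded"). A sketch of the equivalent client-go call with an explicit timeout, assuming a kubeconfig at the path the suite logs:

package main

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // The e2e framework uses a similarly bounded context; exceeding it yields
    // the "context deadline exceeded" seen above.
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    list, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().List(ctx, metav1.ListOptions{})
    if err != nil {
        fmt.Println("list failed:", err)
        return
    }
    fmt.Println("webhook configurations:", len(list.Items))
}
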
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:15.466: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1027
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":4,"skipped":39,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:16.189: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 136 lines ...
Oct  5 19:24:12.195: INFO: Waiting for pod aws-client to disappear
Oct  5 19:24:12.226: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Oct  5 19:24:12.226: INFO: Deleting PersistentVolumeClaim "pvc-mmx7v"
Oct  5 19:24:12.259: INFO: Deleting PersistentVolume "aws-rkwzb"
Oct  5 19:24:12.512: INFO: Couldn't delete PD "aws://ca-central-1a/vol-000ae07dd484a774a", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-000ae07dd484a774a is currently attached to i-02468cd98e1e52b62
	status code: 400, request id: 534fac35-7dde-4685-9f34-24a3d8243f29
Oct  5 19:24:17.775: INFO: Successfully deleted PD "aws://ca-central-1a/vol-000ae07dd484a774a".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:24:17.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8999" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSS
------------------------------
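The "Couldn't delete PD ... sleeping 5s" line above shows the harness retrying DeleteVolume while AWS still reports the EBS volume attached. A sketch of that retry pattern with the aws-sdk-go v1 EC2 client (the volume ID is hypothetical):

package main

import (
    "fmt"
    "time"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/awserr"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
    sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("ca-central-1")))
    svc := ec2.New(sess)
    volID := "vol-0123456789abcdef0" // hypothetical
    for i := 0; i < 10; i++ {
        _, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: aws.String(volID)})
        if err == nil {
            fmt.Println("deleted", volID)
            return
        }
        if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "VolumeInUse" {
            // Volume still attached to an instance; wait and retry,
            // as the test harness does above.
            time.Sleep(5 * time.Second)
            continue
        }
        fmt.Println("delete failed:", err)
        return
    }
}
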
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.405 seconds]
[sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":10,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  5 19:24:11.010: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91cf6648-27f9-4173-864e-11b6858d502f" in namespace "downward-api-4701" to be "Succeeded or Failed"
Oct  5 19:24:11.043: INFO: Pod "downwardapi-volume-91cf6648-27f9-4173-864e-11b6858d502f": Phase="Pending", Reason="", readiness=false. Elapsed: 32.806138ms
Oct  5 19:24:13.074: INFO: Pod "downwardapi-volume-91cf6648-27f9-4173-864e-11b6858d502f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064251693s
Oct  5 19:24:15.106: INFO: Pod "downwardapi-volume-91cf6648-27f9-4173-864e-11b6858d502f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096426143s
Oct  5 19:24:17.140: INFO: Pod "downwardapi-volume-91cf6648-27f9-4173-864e-11b6858d502f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130760884s
Oct  5 19:24:19.172: INFO: Pod "downwardapi-volume-91cf6648-27f9-4173-864e-11b6858d502f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162614838s
Oct  5 19:24:21.204: INFO: Pod "downwardapi-volume-91cf6648-27f9-4173-864e-11b6858d502f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.194264174s
Oct  5 19:24:23.237: INFO: Pod "downwardapi-volume-91cf6648-27f9-4173-864e-11b6858d502f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.226870285s
STEP: Saw pod success
Oct  5 19:24:23.237: INFO: Pod "downwardapi-volume-91cf6648-27f9-4173-864e-11b6858d502f" satisfied condition "Succeeded or Failed"
Oct  5 19:24:23.268: INFO: Trying to get logs from node ip-172-20-41-232.ca-central-1.compute.internal pod downwardapi-volume-91cf6648-27f9-4173-864e-11b6858d502f container client-container: <nil>
STEP: delete the pod
Oct  5 19:24:23.334: INFO: Waiting for pod downwardapi-volume-91cf6648-27f9-4173-864e-11b6858d502f to disappear
Oct  5 19:24:23.365: INFO: Pod downwardapi-volume-91cf6648-27f9-4173-864e-11b6858d502f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.612 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:23.437: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 23 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 2 lines ...
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:24:23.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name secret-emptykey-test-f9f0b9ff-7302-469d-a27c-18fc4039a86a
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:24:23.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4544" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":6,"skipped":51,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:23.721: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 66 lines ...
• [SLOW TEST:21.547 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777
------------------------------
{"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":7,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:24.167: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 172 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":6,"skipped":30,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:23:17.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
• [SLOW TEST:67.394 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted by liveness probe after startup probe enables it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":4,"skipped":23,"failed":0}
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:24:25.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apply
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  5 19:24:23.931: INFO: Waiting up to 5m0s for pod "downwardapi-volume-792aa438-3834-4e4a-af2c-aceaf4a80d92" in namespace "projected-8873" to be "Succeeded or Failed"
Oct  5 19:24:23.961: INFO: Pod "downwardapi-volume-792aa438-3834-4e4a-af2c-aceaf4a80d92": Phase="Pending", Reason="", readiness=false. Elapsed: 30.708136ms
Oct  5 19:24:25.993: INFO: Pod "downwardapi-volume-792aa438-3834-4e4a-af2c-aceaf4a80d92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062194614s
Oct  5 19:24:28.025: INFO: Pod "downwardapi-volume-792aa438-3834-4e4a-af2c-aceaf4a80d92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093934368s
Oct  5 19:24:30.057: INFO: Pod "downwardapi-volume-792aa438-3834-4e4a-af2c-aceaf4a80d92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125771359s
Oct  5 19:24:32.089: INFO: Pod "downwardapi-volume-792aa438-3834-4e4a-af2c-aceaf4a80d92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.158228399s
STEP: Saw pod success
Oct  5 19:24:32.089: INFO: Pod "downwardapi-volume-792aa438-3834-4e4a-af2c-aceaf4a80d92" satisfied condition "Succeeded or Failed"
Oct  5 19:24:32.123: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod downwardapi-volume-792aa438-3834-4e4a-af2c-aceaf4a80d92 container client-container: <nil>
STEP: delete the pod
Oct  5 19:24:32.197: INFO: Waiting for pod downwardapi-volume-792aa438-3834-4e4a-af2c-aceaf4a80d92 to disappear
Oct  5 19:24:32.228: INFO: Pod downwardapi-volume-792aa438-3834-4e4a-af2c-aceaf4a80d92 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.555 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":55,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:32.331: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 111 lines ...
Oct  5 19:23:46.093: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-698t8] to have phase Bound
Oct  5 19:23:46.123: INFO: PersistentVolumeClaim pvc-698t8 found and phase=Bound (30.806199ms)
STEP: Deleting the previously created pod
Oct  5 19:24:04.281: INFO: Deleting pod "pvc-volume-tester-jkrm8" in namespace "csi-mock-volumes-7425"
Oct  5 19:24:04.313: INFO: Wait up to 5m0s for pod "pvc-volume-tester-jkrm8" to be fully deleted
STEP: Checking CSI driver logs
Oct  5 19:24:12.411: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/f6ed586a-ff0b-4198-915e-ad2cdc42cce0/volumes/kubernetes.io~csi/pvc-809598f1-0a2b-4be8-b658-1164e7bfe69a/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-jkrm8
Oct  5 19:24:12.411: INFO: Deleting pod "pvc-volume-tester-jkrm8" in namespace "csi-mock-volumes-7425"
STEP: Deleting claim pvc-698t8
Oct  5 19:24:12.505: INFO: Waiting up to 2m0s for PersistentVolume pvc-809598f1-0a2b-4be8-b658-1164e7bfe69a to get deleted
Oct  5 19:24:12.543: INFO: PersistentVolume pvc-809598f1-0a2b-4be8-b658-1164e7bfe69a found and phase=Bound (37.795298ms)
Oct  5 19:24:14.575: INFO: PersistentVolume pvc-809598f1-0a2b-4be8-b658-1164e7bfe69a was removed
... skipping 45 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when podInfoOnMount=nil
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":4,"skipped":24,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":4,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:33.873: INFO: Only supported for providers [openstack] (not aws)
... skipping 171 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field","total":-1,"completed":5,"skipped":23,"failed":0}
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:24:25.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:34.113: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-7a670248-a93f-4cdc-9142-defc6e3442e4
STEP: Creating a pod to test consume configMaps
Oct  5 19:24:32.605: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b913490f-6dd3-46d1-8835-56da18d0083d" in namespace "projected-2628" to be "Succeeded or Failed"
Oct  5 19:24:32.636: INFO: Pod "pod-projected-configmaps-b913490f-6dd3-46d1-8835-56da18d0083d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.720551ms
Oct  5 19:24:34.671: INFO: Pod "pod-projected-configmaps-b913490f-6dd3-46d1-8835-56da18d0083d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066069452s
Oct  5 19:24:36.703: INFO: Pod "pod-projected-configmaps-b913490f-6dd3-46d1-8835-56da18d0083d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098037604s
Oct  5 19:24:38.735: INFO: Pod "pod-projected-configmaps-b913490f-6dd3-46d1-8835-56da18d0083d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.129813003s
STEP: Saw pod success
Oct  5 19:24:38.735: INFO: Pod "pod-projected-configmaps-b913490f-6dd3-46d1-8835-56da18d0083d" satisfied condition "Succeeded or Failed"
Oct  5 19:24:38.766: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod pod-projected-configmaps-b913490f-6dd3-46d1-8835-56da18d0083d container agnhost-container: <nil>
STEP: delete the pod
Oct  5 19:24:38.833: INFO: Waiting for pod pod-projected-configmaps-b913490f-6dd3-46d1-8835-56da18d0083d to disappear
Oct  5 19:24:38.864: INFO: Pod pod-projected-configmaps-b913490f-6dd3-46d1-8835-56da18d0083d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 22 lines ...
Oct  5 19:24:09.760: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-835rgk7j
STEP: creating a claim
Oct  5 19:24:09.791: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Oct  5 19:24:09.855: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Oct  5 19:24:09.920: INFO: Error updating pvc aws5pcmd: PersistentVolumeClaim "aws5pcmd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-835rgk7j",
  	... // 2 identical fields
  }

... skipping 15 near-identical retries: the same "Error updating pvc aws5pcmd" immutable-spec diff repeated roughly every 2s from 19:24:11.981 through 19:24:39.983 ...

Oct  5 19:24:40.044: INFO: Error updating pvc aws5pcmd: PersistentVolumeClaim "aws5pcmd" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":6,"skipped":46,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:40.235: INFO: Only supported for providers [azure] (not aws)
... skipping 65 lines ...
Oct  5 19:24:33.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct  5 19:24:34.040: INFO: Waiting up to 5m0s for pod "pod-091cd0ff-7a31-44e1-a2f3-71e6f9c65e83" in namespace "emptydir-8865" to be "Succeeded or Failed"
Oct  5 19:24:34.071: INFO: Pod "pod-091cd0ff-7a31-44e1-a2f3-71e6f9c65e83": Phase="Pending", Reason="", readiness=false. Elapsed: 30.996637ms
Oct  5 19:24:36.103: INFO: Pod "pod-091cd0ff-7a31-44e1-a2f3-71e6f9c65e83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062740675s
Oct  5 19:24:38.137: INFO: Pod "pod-091cd0ff-7a31-44e1-a2f3-71e6f9c65e83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097544363s
Oct  5 19:24:40.170: INFO: Pod "pod-091cd0ff-7a31-44e1-a2f3-71e6f9c65e83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.130025078s
STEP: Saw pod success
Oct  5 19:24:40.170: INFO: Pod "pod-091cd0ff-7a31-44e1-a2f3-71e6f9c65e83" satisfied condition "Succeeded or Failed"
Oct  5 19:24:40.201: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod pod-091cd0ff-7a31-44e1-a2f3-71e6f9c65e83 container test-container: <nil>
STEP: delete the pod
Oct  5 19:24:40.268: INFO: Waiting for pod pod-091cd0ff-7a31-44e1-a2f3-71e6f9c65e83 to disappear
Oct  5 19:24:40.301: INFO: Pod pod-091cd0ff-7a31-44e1-a2f3-71e6f9c65e83 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.522 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:40.380: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 87 lines ...
Oct  5 19:24:32.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct  5 19:24:32.878: INFO: Waiting up to 5m0s for pod "security-context-57ecc706-da5c-49c2-b49b-f031b73250e2" in namespace "security-context-1705" to be "Succeeded or Failed"
Oct  5 19:24:32.910: INFO: Pod "security-context-57ecc706-da5c-49c2-b49b-f031b73250e2": Phase="Pending", Reason="", readiness=false. Elapsed: 31.489398ms
Oct  5 19:24:34.941: INFO: Pod "security-context-57ecc706-da5c-49c2-b49b-f031b73250e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062633515s
Oct  5 19:24:36.973: INFO: Pod "security-context-57ecc706-da5c-49c2-b49b-f031b73250e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094766526s
Oct  5 19:24:39.005: INFO: Pod "security-context-57ecc706-da5c-49c2-b49b-f031b73250e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126176267s
Oct  5 19:24:41.036: INFO: Pod "security-context-57ecc706-da5c-49c2-b49b-f031b73250e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.157074605s
STEP: Saw pod success
Oct  5 19:24:41.036: INFO: Pod "security-context-57ecc706-da5c-49c2-b49b-f031b73250e2" satisfied condition "Succeeded or Failed"
Oct  5 19:24:41.066: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod security-context-57ecc706-da5c-49c2-b49b-f031b73250e2 container test-container: <nil>
STEP: delete the pod
Oct  5 19:24:41.146: INFO: Waiting for pod security-context-57ecc706-da5c-49c2-b49b-f031b73250e2 to disappear
Oct  5 19:24:41.177: INFO: Pod security-context-57ecc706-da5c-49c2-b49b-f031b73250e2 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.550 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":4,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:41.257: INFO: Only supported for providers [vsphere] (not aws)
... skipping 92 lines ...
Oct  5 19:24:17.328: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7707 exec execpod-affinitym44cs -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.68.17.0:80/'
Oct  5 19:24:20.133: INFO: rc: 28
Oct  5 19:24:20.133: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7707 exec execpod-affinitym44cs -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.68.17.0:80/'
Oct  5 19:24:22.657: INFO: rc: 28
Oct  5 19:24:22.657: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7707 exec execpod-affinitym44cs -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.68.17.0:80/'
Oct  5 19:24:25.099: INFO: rc: 28
Oct  5 19:24:25.099: FAIL: Session is sticky after reaching the timeout

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc0010ea000, 0x779f8f8, 0xc00079f1e0, 0xc000ee6000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2523 +0xc96
k8s.io/kubernetes/test/e2e/network.glob..func24.23()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1798 +0x9c
... skipping 272 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct  5 19:24:25.099: Session is sticky after reaching the timeout

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2523
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":0,"skipped":6,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:42.396: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 61 lines ...
STEP: Creating a validating webhook configuration
Oct  5 19:24:04.489: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:24:14.658: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:24:24.756: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:24:34.853: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:24:44.917: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:24:44.917: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002be240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 479 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct  5 19:24:44.917: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002be240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:432
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":5,"skipped":12,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:50.184: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 52 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":1,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:50.394: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 65 lines ...
• [SLOW TEST:13.021 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:54.299: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 107 lines ...
Oct  5 19:23:21.639: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7066
Oct  5 19:23:21.672: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7066
Oct  5 19:23:21.705: INFO: creating *v1.StatefulSet: csi-mock-volumes-7066-9871/csi-mockplugin
Oct  5 19:23:21.737: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7066
Oct  5 19:23:21.782: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7066"
Oct  5 19:23:21.812: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7066 to register on node ip-172-20-32-132.ca-central-1.compute.internal
I1005 19:23:30.439345    5300 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7066","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1005 19:23:30.615480    5300 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I1005 19:23:30.646178    5300 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7066","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1005 19:23:30.676790    5300 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I1005 19:23:30.709370    5300 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I1005 19:23:30.784822    5300 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-7066"},"Error":"","FullError":null}
STEP: Creating pod
Oct  5 19:23:38.281: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Oct  5 19:23:38.324: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-h58sm] to have phase Bound
I1005 19:23:38.333796    5300 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-669b1565-c7c2-4e5b-ab0c-1cc70b993eb3","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
Oct  5 19:23:38.355: INFO: PersistentVolumeClaim pvc-h58sm found but phase is Pending instead of Bound.
I1005 19:23:38.365183    5300 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-669b1565-c7c2-4e5b-ab0c-1cc70b993eb3","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-669b1565-c7c2-4e5b-ab0c-1cc70b993eb3"}}},"Error":"","FullError":null}
Oct  5 19:23:40.386: INFO: PersistentVolumeClaim pvc-h58sm found and phase=Bound (2.062081007s)
I1005 19:23:41.638298    5300 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct  5 19:23:41.671: INFO: >>> kubeConfig: /root/.kube/config
I1005 19:23:42.053297    5300 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-669b1565-c7c2-4e5b-ab0c-1cc70b993eb3/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-669b1565-c7c2-4e5b-ab0c-1cc70b993eb3","storage.kubernetes.io/csiProvisionerIdentity":"1633461810725-8081-csi-mock-csi-mock-volumes-7066"}},"Response":{},"Error":"","FullError":null}
I1005 19:23:42.442561    5300 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct  5 19:23:42.484: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:23:42.891: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:23:43.143: INFO: >>> kubeConfig: /root/.kube/config
I1005 19:23:43.408053    5300 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-669b1565-c7c2-4e5b-ab0c-1cc70b993eb3/globalmount","target_path":"/var/lib/kubelet/pods/8612a60e-fb78-4c5d-9eb3-cfbc4b986fb0/volumes/kubernetes.io~csi/pvc-669b1565-c7c2-4e5b-ab0c-1cc70b993eb3/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-669b1565-c7c2-4e5b-ab0c-1cc70b993eb3","storage.kubernetes.io/csiProvisionerIdentity":"1633461810725-8081-csi-mock-csi-mock-volumes-7066"}},"Response":{},"Error":"","FullError":null}
Oct  5 19:23:50.540: INFO: Deleting pod "pvc-volume-tester-xsf5z" in namespace "csi-mock-volumes-7066"
Oct  5 19:23:50.572: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xsf5z" to be fully deleted
Oct  5 19:23:53.648: INFO: >>> kubeConfig: /root/.kube/config
I1005 19:23:53.940734    5300 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/8612a60e-fb78-4c5d-9eb3-cfbc4b986fb0/volumes/kubernetes.io~csi/pvc-669b1565-c7c2-4e5b-ab0c-1cc70b993eb3/mount"},"Response":{},"Error":"","FullError":null}
I1005 19:23:54.051814    5300 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1005 19:23:54.083135    5300 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-669b1565-c7c2-4e5b-ab0c-1cc70b993eb3/globalmount"},"Response":{},"Error":"","FullError":null}
I1005 19:23:56.692908    5300 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Oct  5 19:23:57.670: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h58sm", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7066", SelfLink:"", UID:"669b1565-c7c2-4e5b-ab0c-1cc70b993eb3", ResourceVersion:"3793", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769058618, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c3c048), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c3c060)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003230020), VolumeMode:(*v1.PersistentVolumeMode)(0xc003230030), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  5 19:23:57.671: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h58sm", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7066", SelfLink:"", UID:"669b1565-c7c2-4e5b-ab0c-1cc70b993eb3", ResourceVersion:"3794", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769058618, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7066"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c3c0d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c3c0f0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c3c108), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c3c120)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003230060), VolumeMode:(*v1.PersistentVolumeMode)(0xc003230070), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  5 19:23:57.671: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h58sm", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7066", SelfLink:"", UID:"669b1565-c7c2-4e5b-ab0c-1cc70b993eb3", ResourceVersion:"3797", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769058618, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7066"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003cd6a20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003cd6a38)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003cd6a50), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003cd6a68)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-669b1565-c7c2-4e5b-ab0c-1cc70b993eb3", StorageClassName:(*string)(0xc00088a1a0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00088a1e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  5 19:23:57.671: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h58sm", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7066", SelfLink:"", UID:"669b1565-c7c2-4e5b-ab0c-1cc70b993eb3", ResourceVersion:"3798", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769058618, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7066"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030c9488), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030c94e8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030c9518), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030c9590)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-669b1565-c7c2-4e5b-ab0c-1cc70b993eb3", StorageClassName:(*string)(0xc0004fbd20), VolumeMode:(*v1.PersistentVolumeMode)(0xc0004fbd30), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  5 19:23:57.671: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h58sm", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7066", SelfLink:"", UID:"669b1565-c7c2-4e5b-ab0c-1cc70b993eb3", ResourceVersion:"4326", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769058618, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc003d2f638), DeletionGracePeriodSeconds:(*int64)(0xc003ff6f38), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7066"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003d2f650), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003d2f668)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003d2f680), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003d2f698)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-669b1565-c7c2-4e5b-ab0c-1cc70b993eb3", StorageClassName:(*string)(0xc000875e20), VolumeMode:(*v1.PersistentVolumeMode)(0xc000875e40), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    exhausted, immediate binding
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":4,"skipped":10,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:56.217: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 92 lines ...
• [SLOW TEST:16.722 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":6,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:57.172: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 22 lines ...
STEP: Creating a kubernetes client
Oct  5 19:24:15.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Oct  5 19:24:15.666: INFO: PodSpec: initContainers in spec.initContainers
Oct  5 19:24:57.339: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4dfea7a2-ed44-4271-8264-7986752c8dbe", GenerateName:"", Namespace:"init-container-3998", SelfLink:"", UID:"d83d5ea9-a34a-4517-b61d-a80f02396e70", ResourceVersion:"6452", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769058655, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"666206009"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0023b31e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0023b3200)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0023b3218), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0023b3230)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-v8phm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00202a180), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-v8phm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, 
Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-v8phm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-v8phm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002868970), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-172-20-46-201.ca-central-1.compute.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00135e850), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0028689f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002868a10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002868a18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002868a1c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0028603e0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769058655, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769058655, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769058655, loc:(*time.Location)(0x9e12f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769058655, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.46.201", PodIP:"100.96.1.52", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.1.52"}}, StartTime:(*v1.Time)(0xc0023b3260), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00135e930)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00135e9a0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://10c8dceb0a72f8f94015f46bde51ef420a6c35c82d8a990209ec4d6075cf944a", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00202a2a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00202a280), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002868a9f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:24:57.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3998" for this suite.


• [SLOW TEST:41.888 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
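
The "init container has failed twice" dump above is produced by watching the pod and inspecting Status.InitContainerStatuses until the always-failing init1 has restarted at least twice. A trimmed client-go sketch of that check (pod name and namespace copied from the dump; the watch/poll wrapper is omitted and the threshold is an assumption):

package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// reportInitFailures fetches the pod and reports once the failing init
// container has restarted at least twice, mirroring the assertion above.
func reportInitFailures(cs kubernetes.Interface) {
	pod, err := cs.CoreV1().Pods("init-container-3998").Get(context.TODO(),
		"pod-init-4dfea7a2-ed44-4271-8264-7986752c8dbe", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.InitContainerStatuses {
		if st.Name == "init1" && st.RestartCount >= 2 {
			fmt.Printf("init container has failed twice: %s (restarts=%d)\n",
				st.Name, st.RestartCount)
		}
	}
}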
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:24:50.413: INFO: >>> kubeConfig: /root/.kube/config
... skipping 16 lines ...
• [SLOW TEST:7.115 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:24:57.536: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 221 lines ...
STEP: Deleting pod hostexec-ip-172-20-41-186.ca-central-1.compute.internal-5lnrl in namespace volumemode-709
Oct  5 19:24:56.275: INFO: Deleting pod "pod-4b0acc62-270e-42a0-a34d-fcdc36ffa079" in namespace "volumemode-709"
Oct  5 19:24:56.307: INFO: Wait up to 5m0s for pod "pod-4b0acc62-270e-42a0-a34d-fcdc36ffa079" to be fully deleted
STEP: Deleting pv and pvc
Oct  5 19:25:00.369: INFO: Deleting PersistentVolumeClaim "pvc-ggjtb"
Oct  5 19:25:00.401: INFO: Deleting PersistentVolume "aws-krdn8"
Oct  5 19:25:00.580: INFO: Couldn't delete PD "aws://ca-central-1a/vol-0c1955eefc7ff386f", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0c1955eefc7ff386f is currently attached to i-08433ac31a1487a92
	status code: 400, request id: 1c579b5e-bc5b-4632-b07d-abd2d28b02a3
Oct  5 19:25:05.842: INFO: Successfully deleted PD "aws://ca-central-1a/vol-0c1955eefc7ff386f".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:05.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-709" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":8,"skipped":43,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:05.928: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 33 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":5,"skipped":33,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:24:57.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:07.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9898" for this suite.


• [SLOW TEST:10.293 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":6,"skipped":33,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 16 lines ...
Oct  5 19:25:01.875: INFO: PersistentVolumeClaim pvc-lc2lz found but phase is Pending instead of Bound.
Oct  5 19:25:03.907: INFO: PersistentVolumeClaim pvc-lc2lz found and phase=Bound (4.094774773s)
Oct  5 19:25:03.907: INFO: Waiting up to 3m0s for PersistentVolume local-dfbxr to have phase Bound
Oct  5 19:25:03.938: INFO: PersistentVolume local-dfbxr found and phase=Bound (31.341398ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-62jv
STEP: Creating a pod to test subpath
Oct  5 19:25:04.035: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-62jv" in namespace "provisioning-5835" to be "Succeeded or Failed"
Oct  5 19:25:04.066: INFO: Pod "pod-subpath-test-preprovisionedpv-62jv": Phase="Pending", Reason="", readiness=false. Elapsed: 31.021766ms
Oct  5 19:25:06.098: INFO: Pod "pod-subpath-test-preprovisionedpv-62jv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062670036s
Oct  5 19:25:08.131: INFO: Pod "pod-subpath-test-preprovisionedpv-62jv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095735466s
STEP: Saw pod success
Oct  5 19:25:08.131: INFO: Pod "pod-subpath-test-preprovisionedpv-62jv" satisfied condition "Succeeded or Failed"
Oct  5 19:25:08.166: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-62jv container test-container-volume-preprovisionedpv-62jv: <nil>
STEP: delete the pod
Oct  5 19:25:08.247: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-62jv to disappear
Oct  5 19:25:08.287: INFO: Pod pod-subpath-test-preprovisionedpv-62jv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-62jv
Oct  5 19:25:08.287: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-62jv" in namespace "provisioning-5835"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":46,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
Oct  5 19:25:09.227: INFO: AfterEach: Cleaning up test resources.
Oct  5 19:25:09.227: INFO: pvc is nil
Oct  5 19:25:09.227: INFO: Deleting PersistentVolume "hostpath-pznpg"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":8,"skipped":51,"failed":0}

S
------------------------------
[BeforeEach] [sig-windows] Device Plugin
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Oct  5 19:25:09.275: INFO: Only supported for node OS distro [windows] (not debian)
... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:09.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4461" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":9,"skipped":54,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Oct  5 19:25:03.406: INFO: PersistentVolumeClaim pvc-vp857 found but phase is Pending instead of Bound.
Oct  5 19:25:05.437: INFO: PersistentVolumeClaim pvc-vp857 found and phase=Bound (8.162571029s)
Oct  5 19:25:05.437: INFO: Waiting up to 3m0s for PersistentVolume local-lmxvn to have phase Bound
Oct  5 19:25:05.468: INFO: PersistentVolume local-lmxvn found and phase=Bound (30.578116ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bzqx
STEP: Creating a pod to test subpath
Oct  5 19:25:05.562: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bzqx" in namespace "provisioning-3471" to be "Succeeded or Failed"
Oct  5 19:25:05.593: INFO: Pod "pod-subpath-test-preprovisionedpv-bzqx": Phase="Pending", Reason="", readiness=false. Elapsed: 31.176307ms
Oct  5 19:25:07.625: INFO: Pod "pod-subpath-test-preprovisionedpv-bzqx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06329794s
Oct  5 19:25:09.657: INFO: Pod "pod-subpath-test-preprovisionedpv-bzqx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09471762s
STEP: Saw pod success
Oct  5 19:25:09.657: INFO: Pod "pod-subpath-test-preprovisionedpv-bzqx" satisfied condition "Succeeded or Failed"
Oct  5 19:25:09.687: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-bzqx container test-container-subpath-preprovisionedpv-bzqx: <nil>
STEP: delete the pod
Oct  5 19:25:09.756: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bzqx to disappear
Oct  5 19:25:09.787: INFO: Pod pod-subpath-test-preprovisionedpv-bzqx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bzqx
Oct  5 19:25:09.787: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bzqx" in namespace "provisioning-3471"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":53,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:10.853: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 70 lines ...
• [SLOW TEST:6.508 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":7,"skipped":34,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 99 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
    should preserve attachment policy when no CSIDriver present
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":6,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:25:14.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct  5 19:25:14.442: INFO: Waiting up to 5m0s for pod "pod-9c100868-c90e-4585-b722-1abac43d5198" in namespace "emptydir-36" to be "Succeeded or Failed"
Oct  5 19:25:14.472: INFO: Pod "pod-9c100868-c90e-4585-b722-1abac43d5198": Phase="Pending", Reason="", readiness=false. Elapsed: 30.264707ms
Oct  5 19:25:16.503: INFO: Pod "pod-9c100868-c90e-4585-b722-1abac43d5198": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060884458s
Oct  5 19:25:18.535: INFO: Pod "pod-9c100868-c90e-4585-b722-1abac43d5198": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092503849s
STEP: Saw pod success
Oct  5 19:25:18.535: INFO: Pod "pod-9c100868-c90e-4585-b722-1abac43d5198" satisfied condition "Succeeded or Failed"
Oct  5 19:25:18.565: INFO: Trying to get logs from node ip-172-20-41-232.ca-central-1.compute.internal pod pod-9c100868-c90e-4585-b722-1abac43d5198 container test-container: <nil>
STEP: delete the pod
Oct  5 19:25:18.631: INFO: Waiting for pod pod-9c100868-c90e-4585-b722-1abac43d5198 to disappear
Oct  5 19:25:18.662: INFO: Pod pod-9c100868-c90e-4585-b722-1abac43d5198 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:18.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-36" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":39,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:18.749: INFO: Only supported for providers [gce gke] (not aws)
... skipping 86 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull from private registry with secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":7,"skipped":41,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":9,"skipped":48,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:25:12.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":48,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:23.061: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":-1,"completed":4,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:23.083: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:23.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7745" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":5,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:25:21.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-600e91cb-5e46-44ac-a5f8-0ca9b7cc6022
STEP: Creating a pod to test consume secrets
Oct  5 19:25:21.341: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2caaaa1f-8b6b-45d3-bb38-ae6d2ea48374" in namespace "projected-3965" to be "Succeeded or Failed"
Oct  5 19:25:21.372: INFO: Pod "pod-projected-secrets-2caaaa1f-8b6b-45d3-bb38-ae6d2ea48374": Phase="Pending", Reason="", readiness=false. Elapsed: 30.655493ms
Oct  5 19:25:23.403: INFO: Pod "pod-projected-secrets-2caaaa1f-8b6b-45d3-bb38-ae6d2ea48374": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062095628s
STEP: Saw pod success
Oct  5 19:25:23.403: INFO: Pod "pod-projected-secrets-2caaaa1f-8b6b-45d3-bb38-ae6d2ea48374" satisfied condition "Succeeded or Failed"
Oct  5 19:25:23.434: INFO: Trying to get logs from node ip-172-20-41-232.ca-central-1.compute.internal pod pod-projected-secrets-2caaaa1f-8b6b-45d3-bb38-ae6d2ea48374 container projected-secret-volume-test: <nil>
STEP: delete the pod
Oct  5 19:25:23.503: INFO: Waiting for pod pod-projected-secrets-2caaaa1f-8b6b-45d3-bb38-ae6d2ea48374 to disappear
Oct  5 19:25:23.533: INFO: Pod pod-projected-secrets-2caaaa1f-8b6b-45d3-bb38-ae6d2ea48374 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:23.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3965" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":47,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:25:23.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-3410/configmap-test-bf2cbbd3-c5dd-492c-b282-cf1508ded8ea
STEP: Creating a pod to test consume configMaps
Oct  5 19:25:23.777: INFO: Waiting up to 5m0s for pod "pod-configmaps-2f090bb0-bf7a-402b-8cd7-47137d0fbbd1" in namespace "configmap-3410" to be "Succeeded or Failed"
Oct  5 19:25:23.811: INFO: Pod "pod-configmaps-2f090bb0-bf7a-402b-8cd7-47137d0fbbd1": Phase="Pending", Reason="", readiness=false. Elapsed: 33.663972ms
Oct  5 19:25:25.842: INFO: Pod "pod-configmaps-2f090bb0-bf7a-402b-8cd7-47137d0fbbd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064633768s
STEP: Saw pod success
Oct  5 19:25:25.842: INFO: Pod "pod-configmaps-2f090bb0-bf7a-402b-8cd7-47137d0fbbd1" satisfied condition "Succeeded or Failed"
Oct  5 19:25:25.872: INFO: Trying to get logs from node ip-172-20-41-232.ca-central-1.compute.internal pod pod-configmaps-2f090bb0-bf7a-402b-8cd7-47137d0fbbd1 container env-test: <nil>
STEP: delete the pod
Oct  5 19:25:25.939: INFO: Waiting for pod pod-configmaps-2f090bb0-bf7a-402b-8cd7-47137d0fbbd1 to disappear
Oct  5 19:25:25.970: INFO: Pod pod-configmaps-2f090bb0-bf7a-402b-8cd7-47137d0fbbd1 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:25.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3410" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:26.058: INFO: Only supported for providers [azure] (not aws)
... skipping 46 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:26.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":7,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:26.184: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 71 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:27.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-261" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":11,"skipped":62,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:27.108: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 123 lines ...
I1005 19:22:54.091297    5465 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1005 19:22:57.091629    5465 runners.go:190] externalip-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1005 19:23:00.092254    5465 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct  5 19:23:00.092: INFO: Creating new exec pod
Oct  5 19:23:13.187: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:23:18.712: INFO: rc: 1
Oct  5 19:23:18.713: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
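
Each block above and below is one iteration of the same probe: exec into execpod5n2pp and attempt a 2-second TCP connect to the service by name; nc's "getaddrinfo: Try again" is a DNS lookup failure inside the pod. A Go analogue of a single probe (service name from the log; this must run in-cluster for the name to resolve):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// TCP connect with a 2s timeout, like "nc -v -t -w 2 externalip-test 80".
	conn, err := net.DialTimeout("tcp", "externalip-test:80", 2*time.Second)
	if err != nil {
		// A resolution failure here corresponds to nc's "getaddrinfo: Try again".
		fmt.Println("unreachable, retrying:", err)
		return
	}
	conn.Close()
	fmt.Println("reachable")
}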
Oct  5 19:23:19.713: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:23:25.292: INFO: rc: 1
Oct  5 19:23:25.293: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalip-test 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:23:25.713: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:23:31.259: INFO: rc: 1
Oct  5 19:23:31.259: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:23:31.714: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:23:37.157: INFO: rc: 1
Oct  5 19:23:37.157: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:23:37.713: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:23:43.527: INFO: rc: 1
Oct  5 19:23:43.527: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalip-test 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:23:43.714: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:23:49.157: INFO: rc: 1
Oct  5 19:23:49.157: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:23:49.713: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:23:55.162: INFO: rc: 1
Oct  5 19:23:55.162: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:23:55.713: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:24:01.198: INFO: rc: 1
Oct  5 19:24:01.198: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:24:01.713: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:24:07.330: INFO: rc: 1
Oct  5 19:24:07.330: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalip-test 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:24:07.714: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:24:13.285: INFO: rc: 1
Oct  5 19:24:13.285: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:24:13.713: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:24:19.169: INFO: rc: 1
Oct  5 19:24:19.169: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:24:19.714: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:24:25.175: INFO: rc: 1
Oct  5 19:24:25.176: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:24:25.714: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:24:31.282: INFO: rc: 1
Oct  5 19:24:31.282: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:24:31.713: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:24:37.183: INFO: rc: 1
Oct  5 19:24:37.183: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:24:37.713: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:24:43.220: INFO: rc: 1
Oct  5 19:24:43.220: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:24:43.714: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:24:49.172: INFO: rc: 1
Oct  5 19:24:49.172: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:24:49.713: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:24:55.197: INFO: rc: 1
Oct  5 19:24:55.197: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:24:55.713: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:25:01.153: INFO: rc: 1
Oct  5 19:25:01.153: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:01.713: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:25:07.211: INFO: rc: 1
Oct  5 19:25:07.211: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:07.714: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:25:13.164: INFO: rc: 1
Oct  5 19:25:13.164: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:13.713: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:25:19.222: INFO: rc: 1
Oct  5 19:25:19.222: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:19.222: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct  5 19:25:24.786: INFO: rc: 1
Oct  5 19:25:24.786: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8803 exec execpod5n2pp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalip-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:24.787: FAIL: Unexpected error:
    <*errors.errorString | 0xc002b08130>: {
        s: "service is not reachable within 2m0s timeout on endpoint externalip-test:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint externalip-test:80 over TCP protocol
occurred

... skipping 14 lines ...
STEP: Found 16 events.
Oct  5 19:25:24.917: INFO: At 2021-10-05 19:22:51 +0000 UTC - event for externalip-test: {replication-controller } SuccessfulCreate: Created pod: externalip-test-sxrjl
Oct  5 19:25:24.917: INFO: At 2021-10-05 19:22:51 +0000 UTC - event for externalip-test: {replication-controller } SuccessfulCreate: Created pod: externalip-test-qklpk
Oct  5 19:25:24.917: INFO: At 2021-10-05 19:22:51 +0000 UTC - event for externalip-test-qklpk: {default-scheduler } Scheduled: Successfully assigned services-8803/externalip-test-qklpk to ip-172-20-41-232.ca-central-1.compute.internal
Oct  5 19:25:24.917: INFO: At 2021-10-05 19:22:51 +0000 UTC - event for externalip-test-sxrjl: {kubelet ip-172-20-46-201.ca-central-1.compute.internal} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct  5 19:25:24.917: INFO: At 2021-10-05 19:22:51 +0000 UTC - event for externalip-test-sxrjl: {default-scheduler } Scheduled: Successfully assigned services-8803/externalip-test-sxrjl to ip-172-20-46-201.ca-central-1.compute.internal
Oct  5 19:25:24.917: INFO: At 2021-10-05 19:22:52 +0000 UTC - event for externalip-test-qklpk: {kubelet ip-172-20-41-232.ca-central-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-79lsr" : failed to sync configmap cache: timed out waiting for the condition
Oct  5 19:25:24.917: INFO: At 2021-10-05 19:22:53 +0000 UTC - event for externalip-test-qklpk: {kubelet ip-172-20-41-232.ca-central-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Oct  5 19:25:24.917: INFO: At 2021-10-05 19:22:53 +0000 UTC - event for externalip-test-qklpk: {kubelet ip-172-20-41-232.ca-central-1.compute.internal} Created: Created container externalip-test
Oct  5 19:25:24.917: INFO: At 2021-10-05 19:22:53 +0000 UTC - event for externalip-test-qklpk: {kubelet ip-172-20-41-232.ca-central-1.compute.internal} Started: Started container externalip-test
Oct  5 19:25:24.917: INFO: At 2021-10-05 19:22:54 +0000 UTC - event for externalip-test-sxrjl: {kubelet ip-172-20-46-201.ca-central-1.compute.internal} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 2.43105048s
Oct  5 19:25:24.917: INFO: At 2021-10-05 19:22:54 +0000 UTC - event for externalip-test-sxrjl: {kubelet ip-172-20-46-201.ca-central-1.compute.internal} Created: Created container externalip-test
Oct  5 19:25:24.917: INFO: At 2021-10-05 19:22:54 +0000 UTC - event for externalip-test-sxrjl: {kubelet ip-172-20-46-201.ca-central-1.compute.internal} Started: Started container externalip-test
... skipping 258 lines ...
• Failure [157.898 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177

  Oct  5 19:25:24.787: Unexpected error:
      <*errors.errorString | 0xc002b08130>: {
          s: "service is not reachable within 2m0s timeout on endpoint externalip-test:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint externalip-test:80 over TCP protocol
  occurred

... skipping 23 lines ...
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7632-crds.webhook.example.com via the AdmissionRegistration API
Oct  5 19:24:42.259: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:24:52.423: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:25:02.527: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:25:12.623: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:25:22.694: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:25:22.694: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000244250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 522 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct  5 19:25:22.694: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000244250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":6,"skipped":33,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:27.959: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 111 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":49,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:30.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-267" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":10,"skipped":56,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:30.864: INFO: Only supported for providers [gce gke] (not aws)
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:32.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4834" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":43,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:32.433: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 143 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":1,"skipped":2,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:35.124: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 22 lines ...
STEP: Creating a kubernetes client
Oct  5 19:25:32.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Oct  5 19:25:32.607: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:35.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3191" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":8,"skipped":46,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:35.726: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 69 lines ...
Oct  5 19:25:35.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
Oct  5 19:25:35.953: INFO: Waiting up to 5m0s for pod "client-containers-36d012ea-19c8-49e0-ada4-fcdc85f3b0dc" in namespace "containers-802" to be "Succeeded or Failed"
Oct  5 19:25:35.984: INFO: Pod "client-containers-36d012ea-19c8-49e0-ada4-fcdc85f3b0dc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.464231ms
Oct  5 19:25:38.015: INFO: Pod "client-containers-36d012ea-19c8-49e0-ada4-fcdc85f3b0dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061568696s
STEP: Saw pod success
Oct  5 19:25:38.015: INFO: Pod "client-containers-36d012ea-19c8-49e0-ada4-fcdc85f3b0dc" satisfied condition "Succeeded or Failed"
Oct  5 19:25:38.046: INFO: Trying to get logs from node ip-172-20-46-201.ca-central-1.compute.internal pod client-containers-36d012ea-19c8-49e0-ada4-fcdc85f3b0dc container agnhost-container: <nil>
STEP: delete the pod
Oct  5 19:25:38.113: INFO: Waiting for pod client-containers-36d012ea-19c8-49e0-ada4-fcdc85f3b0dc to disappear
Oct  5 19:25:38.143: INFO: Pod client-containers-36d012ea-19c8-49e0-ada4-fcdc85f3b0dc no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:38.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-802" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":55,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:38.228: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 54 lines ...
Oct  5 19:24:57.721: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-2094kb9lz
STEP: creating a claim
Oct  5 19:24:57.755: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-9vvz
STEP: Creating a pod to test exec-volume-test
Oct  5 19:24:57.849: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-9vvz" in namespace "volume-2094" to be "Succeeded or Failed"
Oct  5 19:24:57.880: INFO: Pod "exec-volume-test-dynamicpv-9vvz": Phase="Pending", Reason="", readiness=false. Elapsed: 30.359694ms
Oct  5 19:24:59.911: INFO: Pod "exec-volume-test-dynamicpv-9vvz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06181377s
Oct  5 19:25:01.942: INFO: Pod "exec-volume-test-dynamicpv-9vvz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092855177s
Oct  5 19:25:03.973: INFO: Pod "exec-volume-test-dynamicpv-9vvz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12352673s
Oct  5 19:25:06.006: INFO: Pod "exec-volume-test-dynamicpv-9vvz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157225736s
Oct  5 19:25:08.038: INFO: Pod "exec-volume-test-dynamicpv-9vvz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.189027033s
Oct  5 19:25:10.069: INFO: Pod "exec-volume-test-dynamicpv-9vvz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.219635689s
Oct  5 19:25:12.100: INFO: Pod "exec-volume-test-dynamicpv-9vvz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.250334238s
Oct  5 19:25:14.131: INFO: Pod "exec-volume-test-dynamicpv-9vvz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.281758101s
Oct  5 19:25:16.163: INFO: Pod "exec-volume-test-dynamicpv-9vvz": Phase="Pending", Reason="", readiness=false. Elapsed: 18.313386505s
Oct  5 19:25:18.193: INFO: Pod "exec-volume-test-dynamicpv-9vvz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.344065426s
STEP: Saw pod success
Oct  5 19:25:18.193: INFO: Pod "exec-volume-test-dynamicpv-9vvz" satisfied condition "Succeeded or Failed"
Oct  5 19:25:18.223: INFO: Trying to get logs from node ip-172-20-41-232.ca-central-1.compute.internal pod exec-volume-test-dynamicpv-9vvz container exec-container-dynamicpv-9vvz: <nil>
STEP: delete the pod
Oct  5 19:25:18.330: INFO: Waiting for pod exec-volume-test-dynamicpv-9vvz to disappear
Oct  5 19:25:18.360: INFO: Pod exec-volume-test-dynamicpv-9vvz no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-9vvz
Oct  5 19:25:18.360: INFO: Deleting pod "exec-volume-test-dynamicpv-9vvz" in namespace "volume-2094"
... skipping 49 lines ...
Oct  5 19:25:31.766: INFO: PersistentVolumeClaim pvc-sktzm found but phase is Pending instead of Bound.
Oct  5 19:25:33.797: INFO: PersistentVolumeClaim pvc-sktzm found and phase=Bound (14.28311997s)
Oct  5 19:25:33.797: INFO: Waiting up to 3m0s for PersistentVolume local-nnpsk to have phase Bound
Oct  5 19:25:33.828: INFO: PersistentVolume local-nnpsk found and phase=Bound (30.774927ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-z254
STEP: Creating a pod to test subpath
Oct  5 19:25:33.922: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-z254" in namespace "provisioning-2681" to be "Succeeded or Failed"
Oct  5 19:25:33.952: INFO: Pod "pod-subpath-test-preprovisionedpv-z254": Phase="Pending", Reason="", readiness=false. Elapsed: 30.762297ms
Oct  5 19:25:35.984: INFO: Pod "pod-subpath-test-preprovisionedpv-z254": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062663263s
Oct  5 19:25:38.015: INFO: Pod "pod-subpath-test-preprovisionedpv-z254": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093852199s
STEP: Saw pod success
Oct  5 19:25:38.016: INFO: Pod "pod-subpath-test-preprovisionedpv-z254" satisfied condition "Succeeded or Failed"
Oct  5 19:25:38.046: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-z254 container test-container-volume-preprovisionedpv-z254: <nil>
STEP: delete the pod
Oct  5 19:25:38.122: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-z254 to disappear
Oct  5 19:25:38.153: INFO: Pod pod-subpath-test-preprovisionedpv-z254 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-z254
Oct  5 19:25:38.153: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-z254" in namespace "provisioning-2681"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":70,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [sig-api-machinery] Server request timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:25:38.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename request-timeout
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:38.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-1018" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":4,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:25:38.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Oct  5 19:25:39.019: INFO: Waiting up to 5m0s for pod "downward-api-bcede5cb-070b-44ab-bb42-698afee57d31" in namespace "downward-api-171" to be "Succeeded or Failed"
Oct  5 19:25:39.050: INFO: Pod "downward-api-bcede5cb-070b-44ab-bb42-698afee57d31": Phase="Pending", Reason="", readiness=false. Elapsed: 30.747205ms
Oct  5 19:25:41.082: INFO: Pod "downward-api-bcede5cb-070b-44ab-bb42-698afee57d31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062264622s
STEP: Saw pod success
Oct  5 19:25:41.082: INFO: Pod "downward-api-bcede5cb-070b-44ab-bb42-698afee57d31" satisfied condition "Succeeded or Failed"
Oct  5 19:25:41.113: INFO: Trying to get logs from node ip-172-20-46-201.ca-central-1.compute.internal pod downward-api-bcede5cb-070b-44ab-bb42-698afee57d31 container dapi-container: <nil>
STEP: delete the pod
Oct  5 19:25:41.181: INFO: Waiting for pod downward-api-bcede5cb-070b-44ab-bb42-698afee57d31 to disappear
Oct  5 19:25:41.212: INFO: Pod downward-api-bcede5cb-070b-44ab-bb42-698afee57d31 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:41.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-171" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":72,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:25:39.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-0496b50c-44be-4a04-b49e-729868f6aef3
STEP: Creating a pod to test consume configMaps
Oct  5 19:25:39.233: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4664d5de-6707-4c3b-829d-4b986cb7876b" in namespace "projected-9846" to be "Succeeded or Failed"
Oct  5 19:25:39.263: INFO: Pod "pod-projected-configmaps-4664d5de-6707-4c3b-829d-4b986cb7876b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.069139ms
Oct  5 19:25:41.294: INFO: Pod "pod-projected-configmaps-4664d5de-6707-4c3b-829d-4b986cb7876b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060705068s
STEP: Saw pod success
Oct  5 19:25:41.294: INFO: Pod "pod-projected-configmaps-4664d5de-6707-4c3b-829d-4b986cb7876b" satisfied condition "Succeeded or Failed"
Oct  5 19:25:41.324: INFO: Trying to get logs from node ip-172-20-41-232.ca-central-1.compute.internal pod pod-projected-configmaps-4664d5de-6707-4c3b-829d-4b986cb7876b container agnhost-container: <nil>
STEP: delete the pod
Oct  5 19:25:41.399: INFO: Waiting for pod pod-projected-configmaps-4664d5de-6707-4c3b-829d-4b986cb7876b to disappear
Oct  5 19:25:41.429: INFO: Pod pod-projected-configmaps-4664d5de-6707-4c3b-829d-4b986cb7876b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:41.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9846" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":12,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:41.510: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 64 lines ...
• [SLOW TEST:11.594 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":11,"skipped":65,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:42.511: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 47 lines ...
Oct  5 19:25:46.622: INFO: PersistentVolumeClaim pvc-dn56l found but phase is Pending instead of Bound.
Oct  5 19:25:48.654: INFO: PersistentVolumeClaim pvc-dn56l found and phase=Bound (10.18935576s)
Oct  5 19:25:48.654: INFO: Waiting up to 3m0s for PersistentVolume local-ps66x to have phase Bound
Oct  5 19:25:48.685: INFO: PersistentVolume local-ps66x found and phase=Bound (30.801495ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gj9s
STEP: Creating a pod to test subpath
Oct  5 19:25:48.780: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gj9s" in namespace "provisioning-8517" to be "Succeeded or Failed"
Oct  5 19:25:48.811: INFO: Pod "pod-subpath-test-preprovisionedpv-gj9s": Phase="Pending", Reason="", readiness=false. Elapsed: 30.896104ms
Oct  5 19:25:50.844: INFO: Pod "pod-subpath-test-preprovisionedpv-gj9s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0634293s
Oct  5 19:25:52.877: INFO: Pod "pod-subpath-test-preprovisionedpv-gj9s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096707588s
STEP: Saw pod success
Oct  5 19:25:52.877: INFO: Pod "pod-subpath-test-preprovisionedpv-gj9s" satisfied condition "Succeeded or Failed"
Oct  5 19:25:52.908: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-gj9s container test-container-volume-preprovisionedpv-gj9s: <nil>
STEP: delete the pod
Oct  5 19:25:52.984: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gj9s to disappear
Oct  5 19:25:53.015: INFO: Pod pod-subpath-test-preprovisionedpv-gj9s no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gj9s
Oct  5 19:25:53.015: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gj9s" in namespace "provisioning-8517"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":10,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:54.336: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 2 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:25:42.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:231
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:54.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-723" for this suite.


• [SLOW TEST:12.313 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:231
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":12,"skipped":72,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:54.853: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 84 lines ...
Oct  5 19:25:48.233: INFO: PersistentVolumeClaim pvc-b7bdm found but phase is Pending instead of Bound.
Oct  5 19:25:50.265: INFO: PersistentVolumeClaim pvc-b7bdm found and phase=Bound (6.123540594s)
Oct  5 19:25:50.265: INFO: Waiting up to 3m0s for PersistentVolume local-r44js to have phase Bound
Oct  5 19:25:50.296: INFO: PersistentVolume local-r44js found and phase=Bound (30.161482ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-h4kt
STEP: Creating a pod to test subpath
Oct  5 19:25:50.389: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-h4kt" in namespace "provisioning-9651" to be "Succeeded or Failed"
Oct  5 19:25:50.419: INFO: Pod "pod-subpath-test-preprovisionedpv-h4kt": Phase="Pending", Reason="", readiness=false. Elapsed: 30.514889ms
Oct  5 19:25:52.451: INFO: Pod "pod-subpath-test-preprovisionedpv-h4kt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062357725s
Oct  5 19:25:54.482: INFO: Pod "pod-subpath-test-preprovisionedpv-h4kt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092992448s
STEP: Saw pod success
Oct  5 19:25:54.482: INFO: Pod "pod-subpath-test-preprovisionedpv-h4kt" satisfied condition "Succeeded or Failed"
Oct  5 19:25:54.512: INFO: Trying to get logs from node ip-172-20-41-232.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-h4kt container test-container-subpath-preprovisionedpv-h4kt: <nil>
STEP: delete the pod
Oct  5 19:25:54.581: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-h4kt to disappear
Oct  5 19:25:54.611: INFO: Pod pod-subpath-test-preprovisionedpv-h4kt no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-h4kt
Oct  5 19:25:54.611: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-h4kt" in namespace "provisioning-9651"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":18,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:55.184: INFO: Only supported for providers [gce gke] (not aws)
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
... skipping 118 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:56.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-5626" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":7,"skipped":47,"failed":0}
[BeforeEach] [sig-storage] Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:25:56.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
STEP: Creating a pod to test hostPath r/w
Oct  5 19:25:56.513: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4045" to be "Succeeded or Failed"
Oct  5 19:25:56.544: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 30.299547ms
Oct  5 19:25:58.575: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061451439s
STEP: Saw pod success
Oct  5 19:25:58.575: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Oct  5 19:25:58.605: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Oct  5 19:25:58.671: INFO: Waiting for pod pod-host-path-test to disappear
Oct  5 19:25:58.702: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:58.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8241" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":13,"skipped":79,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:25:58.877: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 19 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":8,"skipped":48,"failed":0}
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:25:58.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:25:59.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-3352" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":9,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
... skipping 114 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":5,"skipped":54,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is non-root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct  5 19:25:59.105: INFO: Waiting up to 5m0s for pod "pod-b2eb1800-25e2-40bf-89d6-27d69ef29154" in namespace "emptydir-6057" to be "Succeeded or Failed"
Oct  5 19:25:59.135: INFO: Pod "pod-b2eb1800-25e2-40bf-89d6-27d69ef29154": Phase="Pending", Reason="", readiness=false. Elapsed: 30.272035ms
Oct  5 19:26:01.166: INFO: Pod "pod-b2eb1800-25e2-40bf-89d6-27d69ef29154": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061362074s
STEP: Saw pod success
Oct  5 19:26:01.166: INFO: Pod "pod-b2eb1800-25e2-40bf-89d6-27d69ef29154" satisfied condition "Succeeded or Failed"
Oct  5 19:26:01.198: INFO: Trying to get logs from node ip-172-20-46-201.ca-central-1.compute.internal pod pod-b2eb1800-25e2-40bf-89d6-27d69ef29154 container test-container: <nil>
STEP: delete the pod
Oct  5 19:26:01.273: INFO: Waiting for pod pod-b2eb1800-25e2-40bf-89d6-27d69ef29154 to disappear
Oct  5 19:26:01.303: INFO: Pod pod-b2eb1800-25e2-40bf-89d6-27d69ef29154 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:26:01.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6057" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":14,"skipped":93,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:26:01.383: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 37 lines ...
I1005 19:23:00.949652    5446 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1005 19:23:03.949798    5446 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Oct  5 19:23:04.054: INFO: Creating new exec pod
Oct  5 19:23:12.148: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1292 exec execpodcgtxf -- /bin/sh -x -c nslookup clusterip-service.services-1292.svc.cluster.local'
Oct  5 19:23:27.748: INFO: rc: 1
Oct  5 19:23:27.748: INFO: ExternalName service "services-1292/execpodcgtxf" failed to resolve to IP
Oct  5 19:23:29.748: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1292 exec execpodcgtxf -- /bin/sh -x -c nslookup clusterip-service.services-1292.svc.cluster.local'
Oct  5 19:23:45.255: INFO: rc: 1
Oct  5 19:23:45.255: INFO: ExternalName service "services-1292/execpodcgtxf" failed to resolve to IP
Oct  5 19:23:45.748: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1292 exec execpodcgtxf -- /bin/sh -x -c nslookup clusterip-service.services-1292.svc.cluster.local'
Oct  5 19:24:01.199: INFO: rc: 1
Oct  5 19:24:01.199: INFO: ExternalName service "services-1292/execpodcgtxf" failed to resolve to IP
Oct  5 19:24:01.749: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1292 exec execpodcgtxf -- /bin/sh -x -c nslookup clusterip-service.services-1292.svc.cluster.local'
Oct  5 19:24:17.354: INFO: rc: 1
Oct  5 19:24:17.354: INFO: ExternalName service "services-1292/execpodcgtxf" failed to resolve to IP
Oct  5 19:24:17.748: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1292 exec execpodcgtxf -- /bin/sh -x -c nslookup clusterip-service.services-1292.svc.cluster.local'
Oct  5 19:24:33.214: INFO: rc: 1
Oct  5 19:24:33.214: INFO: ExternalName service "services-1292/execpodcgtxf" failed to resolve to IP
Oct  5 19:24:33.749: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1292 exec execpodcgtxf -- /bin/sh -x -c nslookup clusterip-service.services-1292.svc.cluster.local'
Oct  5 19:24:49.224: INFO: rc: 1
Oct  5 19:24:49.224: INFO: ExternalName service "services-1292/execpodcgtxf" failed to resolve to IP
Oct  5 19:24:49.749: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1292 exec execpodcgtxf -- /bin/sh -x -c nslookup clusterip-service.services-1292.svc.cluster.local'
Oct  5 19:25:05.321: INFO: rc: 1
Oct  5 19:25:05.321: INFO: ExternalName service "services-1292/execpodcgtxf" failed to resolve to IP
Oct  5 19:25:05.748: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1292 exec execpodcgtxf -- /bin/sh -x -c nslookup clusterip-service.services-1292.svc.cluster.local'
Oct  5 19:25:21.204: INFO: rc: 1
Oct  5 19:25:21.204: INFO: ExternalName service "services-1292/execpodcgtxf" failed to resolve to IP
Oct  5 19:25:21.749: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1292 exec execpodcgtxf -- /bin/sh -x -c nslookup clusterip-service.services-1292.svc.cluster.local'
Oct  5 19:25:37.192: INFO: rc: 1
Oct  5 19:25:37.193: INFO: ExternalName service "services-1292/execpodcgtxf" failed to resolve to IP
Oct  5 19:25:37.193: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1292 exec execpodcgtxf -- /bin/sh -x -c nslookup clusterip-service.services-1292.svc.cluster.local'
Oct  5 19:25:52.641: INFO: rc: 1
Oct  5 19:25:52.641: INFO: ExternalName service "services-1292/execpodcgtxf" failed to resolve to IP
Oct  5 19:25:52.642: FAIL: Unexpected error:
    <*errors.errorString | 0xc00023e240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 294 lines ...
• Failure [200.482 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct  5 19:25:52.642: Unexpected error:
      <*errors.errorString | 0xc00023e240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1393
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":0,"skipped":24,"failed":1,"failures":["[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 48 lines ...
Oct  5 19:25:24.390: INFO: PersistentVolumeClaim pvc-czhxd found but phase is Pending instead of Bound.
Oct  5 19:25:26.424: INFO: PersistentVolumeClaim pvc-czhxd found and phase=Bound (2.078712169s)
STEP: Deleting the previously created pod
Oct  5 19:25:48.580: INFO: Deleting pod "pvc-volume-tester-lngc9" in namespace "csi-mock-volumes-4415"
Oct  5 19:25:48.612: INFO: Wait up to 5m0s for pod "pvc-volume-tester-lngc9" to be fully deleted
STEP: Checking CSI driver logs
Oct  5 19:25:54.708: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/c811ba33-71f5-44fd-b3e2-4ce0c40f6c12/volumes/kubernetes.io~csi/pvc-174c4a84-f99e-41db-aa89-2199e849bae0/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-lngc9
Oct  5 19:25:54.709: INFO: Deleting pod "pvc-volume-tester-lngc9" in namespace "csi-mock-volumes-4415"
STEP: Deleting claim pvc-czhxd
Oct  5 19:25:54.801: INFO: Waiting up to 2m0s for PersistentVolume pvc-174c4a84-f99e-41db-aa89-2199e849bae0 to get deleted
Oct  5 19:25:54.832: INFO: PersistentVolume pvc-174c4a84-f99e-41db-aa89-2199e849bae0 found and phase=Released (30.290787ms)
Oct  5 19:25:56.863: INFO: PersistentVolume pvc-174c4a84-f99e-41db-aa89-2199e849bae0 was removed
... skipping 45 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374
    token should not be plumbed down when csiServiceAccountTokenEnabled=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":5,"skipped":20,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:26:16.117: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 125 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":25,"failed":1,"failures":["[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
... skipping 129 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":8,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:26:23.808: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 21 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-7228e518-1ea2-45d2-ae66-29d6be07448f
STEP: Creating a pod to test consume secrets
Oct  5 19:26:24.171: INFO: Waiting up to 5m0s for pod "pod-secrets-4fc6c65c-f33f-4a38-b210-fc68a34024a6" in namespace "secrets-1257" to be "Succeeded or Failed"
Oct  5 19:26:24.202: INFO: Pod "pod-secrets-4fc6c65c-f33f-4a38-b210-fc68a34024a6": Phase="Pending", Reason="", readiness=false. Elapsed: 30.488469ms
Oct  5 19:26:26.234: INFO: Pod "pod-secrets-4fc6c65c-f33f-4a38-b210-fc68a34024a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062365971s
STEP: Saw pod success
Oct  5 19:26:26.234: INFO: Pod "pod-secrets-4fc6c65c-f33f-4a38-b210-fc68a34024a6" satisfied condition "Succeeded or Failed"
Oct  5 19:26:26.265: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod pod-secrets-4fc6c65c-f33f-4a38-b210-fc68a34024a6 container secret-volume-test: <nil>
STEP: delete the pod
Oct  5 19:26:26.334: INFO: Waiting for pod pod-secrets-4fc6c65c-f33f-4a38-b210-fc68a34024a6 to disappear
Oct  5 19:26:26.365: INFO: Pod pod-secrets-4fc6c65c-f33f-4a38-b210-fc68a34024a6 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:26:26.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1257" for this suite.
STEP: Destroying namespace "secret-namespace-3579" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:26:26.476: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 81 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 134 lines ...
• [SLOW TEST:37.678 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support orphan deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1055
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":10,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:26:36.912: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 131 lines ...
• [SLOW TEST:78.030 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":52,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:26:41.706: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
... skipping 169 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:26:42.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4371" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":10,"skipped":80,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:26:42.161: INFO: Only supported for providers [openstack] (not aws)
... skipping 156 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":11,"skipped":51,"failed":0}
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:24:57.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 2 lines ...
[It] should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating server pod server in namespace prestop-2570
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-2570
STEP: Deleting pre-stop pod
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
Oct  5 19:26:46.346: FAIL: validating pre-stop.
Unexpected error:
    <*errors.errorString | 0xc000244250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 265 lines ...
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should call prestop when killing a pod  [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct  5 19:26:46.346: validating pre-stop.
  Unexpected error:
      <*errors.errorString | 0xc000244250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151
------------------------------
{"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":11,"skipped":51,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:26:48.550: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 37 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":10,"skipped":56,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:26:47.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  5 19:26:47.762: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e6f26642-a213-4842-9b10-97de59ee86c2" in namespace "projected-3079" to be "Succeeded or Failed"
Oct  5 19:26:47.793: INFO: Pod "downwardapi-volume-e6f26642-a213-4842-9b10-97de59ee86c2": Phase="Pending", Reason="", readiness=false. Elapsed: 30.876686ms
Oct  5 19:26:49.826: INFO: Pod "downwardapi-volume-e6f26642-a213-4842-9b10-97de59ee86c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063056541s
STEP: Saw pod success
Oct  5 19:26:49.826: INFO: Pod "downwardapi-volume-e6f26642-a213-4842-9b10-97de59ee86c2" satisfied condition "Succeeded or Failed"
Oct  5 19:26:49.857: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod downwardapi-volume-e6f26642-a213-4842-9b10-97de59ee86c2 container client-container: <nil>
STEP: delete the pod
Oct  5 19:26:49.925: INFO: Waiting for pod downwardapi-volume-e6f26642-a213-4842-9b10-97de59ee86c2 to disappear
Oct  5 19:26:49.956: INFO: Pod downwardapi-volume-e6f26642-a213-4842-9b10-97de59ee86c2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:26:49.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3079" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":56,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:26:50.050: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 53 lines ...
Oct  5 19:25:54.508: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-2817vd5qm
STEP: creating a claim
Oct  5 19:25:54.539: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-8djq
STEP: Creating a pod to test exec-volume-test
Oct  5 19:25:54.634: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-8djq" in namespace "volume-2817" to be "Succeeded or Failed"
Oct  5 19:25:54.664: INFO: Pod "exec-volume-test-dynamicpv-8djq": Phase="Pending", Reason="", readiness=false. Elapsed: 30.4788ms
Oct  5 19:25:56.697: INFO: Pod "exec-volume-test-dynamicpv-8djq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062761331s
Oct  5 19:25:58.728: INFO: Pod "exec-volume-test-dynamicpv-8djq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094192414s
Oct  5 19:26:00.759: INFO: Pod "exec-volume-test-dynamicpv-8djq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12561109s
Oct  5 19:26:02.803: INFO: Pod "exec-volume-test-dynamicpv-8djq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.169561156s
Oct  5 19:26:04.835: INFO: Pod "exec-volume-test-dynamicpv-8djq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.200899807s
... skipping 3 lines ...
Oct  5 19:26:12.963: INFO: Pod "exec-volume-test-dynamicpv-8djq": Phase="Pending", Reason="", readiness=false. Elapsed: 18.328710937s
Oct  5 19:26:14.995: INFO: Pod "exec-volume-test-dynamicpv-8djq": Phase="Pending", Reason="", readiness=false. Elapsed: 20.361237748s
Oct  5 19:26:17.027: INFO: Pod "exec-volume-test-dynamicpv-8djq": Phase="Pending", Reason="", readiness=false. Elapsed: 22.393534935s
Oct  5 19:26:19.058: INFO: Pod "exec-volume-test-dynamicpv-8djq": Phase="Pending", Reason="", readiness=false. Elapsed: 24.424678778s
Oct  5 19:26:21.090: INFO: Pod "exec-volume-test-dynamicpv-8djq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.456274149s
STEP: Saw pod success
Oct  5 19:26:21.090: INFO: Pod "exec-volume-test-dynamicpv-8djq" satisfied condition "Succeeded or Failed"
Oct  5 19:26:21.121: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod exec-volume-test-dynamicpv-8djq container exec-container-dynamicpv-8djq: <nil>
STEP: delete the pod
Oct  5 19:26:21.198: INFO: Waiting for pod exec-volume-test-dynamicpv-8djq to disappear
Oct  5 19:26:21.231: INFO: Pod exec-volume-test-dynamicpv-8djq no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-8djq
Oct  5 19:26:21.231: INFO: Deleting pod "exec-volume-test-dynamicpv-8djq" in namespace "volume-2817"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":17,"failed":0}
[BeforeEach] [sig-storage] Volume limits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:26:51.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-limits-on-node
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:26:51.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-limits-on-node-3376" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Volume limits should verify that all nodes have volume limits","total":-1,"completed":4,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:26:53.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-6993" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:26:53.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 69 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-fd1773db-4441-49ea-90de-6b3560cd33f8
STEP: Creating a pod to test consume configMaps
Oct  5 19:26:50.291: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0797fcbe-ac6f-47cd-83dd-2eaf48a5d35d" in namespace "projected-80" to be "Succeeded or Failed"
Oct  5 19:26:50.322: INFO: Pod "pod-projected-configmaps-0797fcbe-ac6f-47cd-83dd-2eaf48a5d35d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.776946ms
Oct  5 19:26:52.353: INFO: Pod "pod-projected-configmaps-0797fcbe-ac6f-47cd-83dd-2eaf48a5d35d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062317461s
Oct  5 19:26:54.386: INFO: Pod "pod-projected-configmaps-0797fcbe-ac6f-47cd-83dd-2eaf48a5d35d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094634566s
STEP: Saw pod success
Oct  5 19:26:54.386: INFO: Pod "pod-projected-configmaps-0797fcbe-ac6f-47cd-83dd-2eaf48a5d35d" satisfied condition "Succeeded or Failed"
Oct  5 19:26:54.417: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod pod-projected-configmaps-0797fcbe-ac6f-47cd-83dd-2eaf48a5d35d container agnhost-container: <nil>
STEP: delete the pod
Oct  5 19:26:54.485: INFO: Waiting for pod pod-projected-configmaps-0797fcbe-ac6f-47cd-83dd-2eaf48a5d35d to disappear
Oct  5 19:26:54.515: INFO: Pod pod-projected-configmaps-0797fcbe-ac6f-47cd-83dd-2eaf48a5d35d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:26:54.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-80" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":64,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:26:54.606: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 178 lines ...
STEP: Listing all of the created validation webhooks
Oct  5 19:26:14.211: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:26:24.385: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:26:34.494: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:26:44.588: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:26:54.665: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:26:54.666: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000244250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 15 lines ...
Oct  5 19:26:54.698: INFO: At 2021-10-05 19:26:00 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-78988fc6cd to 1
Oct  5 19:26:54.698: INFO: At 2021-10-05 19:26:00 +0000 UTC - event for sample-webhook-deployment-78988fc6cd: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-78988fc6cd-696fg
Oct  5 19:26:54.698: INFO: At 2021-10-05 19:26:00 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-696fg: {default-scheduler } Scheduled: Successfully assigned webhook-6514/sample-webhook-deployment-78988fc6cd-696fg to ip-172-20-46-201.ca-central-1.compute.internal
Oct  5 19:26:54.698: INFO: At 2021-10-05 19:26:01 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-696fg: {kubelet ip-172-20-46-201.ca-central-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Oct  5 19:26:54.698: INFO: At 2021-10-05 19:26:01 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-696fg: {kubelet ip-172-20-46-201.ca-central-1.compute.internal} Created: Created container sample-webhook
Oct  5 19:26:54.698: INFO: At 2021-10-05 19:26:01 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-696fg: {kubelet ip-172-20-46-201.ca-central-1.compute.internal} Started: Started container sample-webhook
Oct  5 19:26:54.698: INFO: At 2021-10-05 19:26:01 +0000 UTC - event for sample-webhook-deployment-78988fc6cd-696fg: {kubelet ip-172-20-46-201.ca-central-1.compute.internal} Unhealthy: Readiness probe failed: Get "https://100.96.1.86:8444/readyz": dial tcp 100.96.1.86:8444: connect: connection refused
Oct  5 19:26:54.729: INFO: POD                                         NODE                                            PHASE    GRACE  CONDITIONS
Oct  5 19:26:54.729: INFO: sample-webhook-deployment-78988fc6cd-696fg  ip-172-20-46-201.ca-central-1.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-05 19:26:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-05 19:26:01 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-05 19:26:01 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-05 19:26:00 +0000 UTC  }]
Oct  5 19:26:54.729: INFO: 
Oct  5 19:26:54.760: INFO: 
Logging node info for node ip-172-20-32-132.ca-central-1.compute.internal
Oct  5 19:26:54.790: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-32-132.ca-central-1.compute.internal    6e372300-3e30-443f-a6e1-d56e9d91996a 9556 0 2021-10-05 19:20:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ca-central-1 failure-domain.beta.kubernetes.io/zone:ca-central-1a kops.k8s.io/instancegroup:nodes-ca-central-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-32-132.ca-central-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.hostpath.csi/node:ip-172-20-32-132.ca-central-1.compute.internal topology.kubernetes.io/region:ca-central-1 topology.kubernetes.io/zone:ca-central-1a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kops-controller Update v1 2021-10-05 19:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}}} {kube-controller-manager Update v1 2021-10-05 19:26:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}},"f:status":{"f:volumesAttached":{}}}} {kubelet Update v1 2021-10-05 19:26:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.hostpath.csi/node":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}}}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ca-central-1a/i-02468cd98e1e52b62,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47455764480 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4061720576 0} {<nil>} 3966524Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42710187962 0} {<nil>} 42710187962 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3956862976 0} {<nil>} 3864124Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-05 19:26:52 +0000 UTC,LastTransitionTime:2021-10-05 19:20:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-05 19:26:52 +0000 UTC,LastTransitionTime:2021-10-05 19:20:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-05 19:26:52 +0000 UTC,LastTransitionTime:2021-10-05 19:20:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-05 19:26:52 +0000 UTC,LastTransitionTime:2021-10-05 19:20:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.32.132,},NodeAddress{Type:ExternalIP,Address:3.96.195.58,},NodeAddress{Type:Hostname,Address:ip-172-20-32-132.ca-central-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-32-132.ca-central-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-96-195-58.ca-central-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2ed6a81eb3494309396370c0043173,SystemUUID:ec2ed6a8-1eb3-4943-0939-6370c0043173,BootID:b3826fd6-2065-47e3-8b57-4ea7e4a65bf3,KernelVersion:5.10.69-flatcar,OSImage:Flatcar Container Linux by Kinvolk 2905.2.5 (Oklo),ContainerRuntimeVersion:containerd://1.5.4,KubeletVersion:v1.21.5,KubeProxyVersion:v1.21.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.21.5],SizeBytes:105352393,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/kopeio/networking-agent@sha256:2d16bdbc3257c42cdc59b05b8fad86653033f19cfafa709f263e93c8f7002932 docker.io/kopeio/networking-agent:1.0.20181028],SizeBytes:25781346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1],SizeBytes:21212251,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782 k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0],SizeBytes:20194320,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09 k8s.gcr.io/sig-storage/csi-attacher:v3.1.0],SizeBytes:20103959,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5 k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0],SizeBytes:13995876,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1],SizeBytes:8415088,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0051f4c868739c872],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0051f4c868739c872,DevicePath:/dev/xvdbu,},},Config:nil,},}
... skipping 473 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct  5 19:26:54.666: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000244250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:606
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":5,"skipped":59,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-downwardapi-dhtt
STEP: Creating a pod to test atomic-volume-subpath
Oct  5 19:26:37.266: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-dhtt" in namespace "subpath-1005" to be "Succeeded or Failed"
Oct  5 19:26:37.299: INFO: Pod "pod-subpath-test-downwardapi-dhtt": Phase="Pending", Reason="", readiness=false. Elapsed: 32.804883ms
Oct  5 19:26:39.330: INFO: Pod "pod-subpath-test-downwardapi-dhtt": Phase="Running", Reason="", readiness=true. Elapsed: 2.064462055s
Oct  5 19:26:41.362: INFO: Pod "pod-subpath-test-downwardapi-dhtt": Phase="Running", Reason="", readiness=true. Elapsed: 4.095549045s
Oct  5 19:26:43.392: INFO: Pod "pod-subpath-test-downwardapi-dhtt": Phase="Running", Reason="", readiness=true. Elapsed: 6.126353794s
Oct  5 19:26:45.424: INFO: Pod "pod-subpath-test-downwardapi-dhtt": Phase="Running", Reason="", readiness=true. Elapsed: 8.158402901s
Oct  5 19:26:47.458: INFO: Pod "pod-subpath-test-downwardapi-dhtt": Phase="Running", Reason="", readiness=true. Elapsed: 10.191601964s
Oct  5 19:26:49.489: INFO: Pod "pod-subpath-test-downwardapi-dhtt": Phase="Running", Reason="", readiness=true. Elapsed: 12.222783998s
Oct  5 19:26:51.520: INFO: Pod "pod-subpath-test-downwardapi-dhtt": Phase="Running", Reason="", readiness=true. Elapsed: 14.254355995s
Oct  5 19:26:53.554: INFO: Pod "pod-subpath-test-downwardapi-dhtt": Phase="Running", Reason="", readiness=true. Elapsed: 16.28798877s
Oct  5 19:26:55.585: INFO: Pod "pod-subpath-test-downwardapi-dhtt": Phase="Running", Reason="", readiness=true. Elapsed: 18.319421695s
Oct  5 19:26:57.618: INFO: Pod "pod-subpath-test-downwardapi-dhtt": Phase="Running", Reason="", readiness=true. Elapsed: 20.352142157s
Oct  5 19:26:59.649: INFO: Pod "pod-subpath-test-downwardapi-dhtt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.383050841s
STEP: Saw pod success
Oct  5 19:26:59.649: INFO: Pod "pod-subpath-test-downwardapi-dhtt" satisfied condition "Succeeded or Failed"
Oct  5 19:26:59.679: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod pod-subpath-test-downwardapi-dhtt container test-container-subpath-downwardapi-dhtt: <nil>
STEP: delete the pod
Oct  5 19:26:59.764: INFO: Waiting for pod pod-subpath-test-downwardapi-dhtt to disappear
Oct  5 19:26:59.802: INFO: Pod pod-subpath-test-downwardapi-dhtt no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-dhtt
Oct  5 19:26:59.802: INFO: Deleting pod "pod-subpath-test-downwardapi-dhtt" in namespace "subpath-1005"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":75,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:26:59.920: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 77 lines ...
Oct  5 19:25:46.121: INFO: PersistentVolumeClaim csi-hostpath5thtm found but phase is Pending instead of Bound.
Oct  5 19:25:48.153: INFO: PersistentVolumeClaim csi-hostpath5thtm found but phase is Pending instead of Bound.
Oct  5 19:25:50.183: INFO: PersistentVolumeClaim csi-hostpath5thtm found but phase is Pending instead of Bound.
Oct  5 19:25:52.215: INFO: PersistentVolumeClaim csi-hostpath5thtm found and phase=Bound (12.219846202s)
STEP: Expanding non-expandable pvc
Oct  5 19:25:52.276: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Oct  5 19:25:52.338: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  5 19:25:54.400: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  5 19:25:56.403: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  5 19:25:58.400: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  5 19:26:00.403: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  5 19:26:02.400: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  5 19:26:04.399: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  5 19:26:06.401: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  5 19:26:08.400: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  5 19:26:10.403: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  5 19:26:12.400: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  5 19:26:14.400: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  5 19:26:16.400: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  5 19:26:18.400: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  5 19:26:20.401: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  5 19:26:22.400: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Oct  5 19:26:22.462: INFO: Error updating pvc csi-hostpath5thtm: persistentvolumeclaims "csi-hostpath5thtm" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Oct  5 19:26:22.463: INFO: Deleting PersistentVolumeClaim "csi-hostpath5thtm"
Oct  5 19:26:22.494: INFO: Waiting up to 5m0s for PersistentVolume pvc-f58de780-d71d-448a-86fd-14baf3fd8cda to get deleted
Oct  5 19:26:22.525: INFO: PersistentVolume pvc-f58de780-d71d-448a-86fd-14baf3fd8cda found and phase=Released (31.00264ms)
Oct  5 19:26:27.555: INFO: PersistentVolume pvc-f58de780-d71d-448a-86fd-14baf3fd8cda was removed
STEP: Deleting sc
... skipping 47 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":10,"skipped":60,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:27:02.853: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 41 lines ...
• [SLOW TEST:246.285 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:27:04.312: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 73 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-2d54da58-64d0-43ac-8823-ed7c70793b75
STEP: Creating a pod to test consume configMaps
Oct  5 19:27:03.083: INFO: Waiting up to 5m0s for pod "pod-configmaps-060d93af-efc8-4167-a37a-10d81a404762" in namespace "configmap-3132" to be "Succeeded or Failed"
Oct  5 19:27:03.114: INFO: Pod "pod-configmaps-060d93af-efc8-4167-a37a-10d81a404762": Phase="Pending", Reason="", readiness=false. Elapsed: 30.384439ms
Oct  5 19:27:05.169: INFO: Pod "pod-configmaps-060d93af-efc8-4167-a37a-10d81a404762": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.085602281s
STEP: Saw pod success
Oct  5 19:27:05.169: INFO: Pod "pod-configmaps-060d93af-efc8-4167-a37a-10d81a404762" satisfied condition "Succeeded or Failed"
Oct  5 19:27:05.255: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod pod-configmaps-060d93af-efc8-4167-a37a-10d81a404762 container configmap-volume-test: <nil>
STEP: delete the pod
Oct  5 19:27:05.360: INFO: Waiting for pod pod-configmaps-060d93af-efc8-4167-a37a-10d81a404762 to disappear
Oct  5 19:27:05.390: INFO: Pod pod-configmaps-060d93af-efc8-4167-a37a-10d81a404762 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:27:05.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3132" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":64,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:27:05.488: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 57 lines ...
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct  5 19:25:31.170: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct  5 19:25:31.201: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:26:03.370: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-1840-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-374.svc:9443/crdconvert?timeout=30s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Oct  5 19:26:33.504: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-1840-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-374.svc:9443/crdconvert?timeout=30s": context deadline exceeded
Oct  5 19:27:03.536: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-1840-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-374.svc:9443/crdconvert?timeout=30s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Oct  5 19:27:03.536: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002b6240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 275 lines ...
• Failure [99.447 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct  5 19:27:03.536: Unexpected error:
      <*errors.errorString | 0xc0002b6240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:499
------------------------------
{"msg":"FAILED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":11,"skipped":86,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 111 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":6,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:27:10.095: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 50 lines ...

Oct  5 19:27:10.693: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88  deployment-3978  cbe0ad1f-ecdf-4177-b59a-a6044f99f322 10193 3 2021-10-05 19:27:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 94ae62b4-4f99-4172-ba17-8ae8b0035a4d 0xc0027723d7 0xc0027723d8}] []  [{kube-controller-manager Update apps/v1 2021-10-05 19:27:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"94ae62b4-4f99-4172-ba17-8ae8b0035a4d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002772458 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Oct  5 19:27:10.693: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Oct  5 19:27:10.693: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb  deployment-3978  44e54955-cb2e-457f-99d1-073095e99eab 10213 3 2021-10-05 19:26:59 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 94ae62b4-4f99-4172-ba17-8ae8b0035a4d 0xc0027724b7 0xc0027724b8}] []  [{kube-controller-manager Update apps/v1 2021-10-05 19:27:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"94ae62b4-4f99-4172-ba17-8ae8b0035a4d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [] []  []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002772528 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:9,AvailableReplicas:9,Conditions:[]ReplicaSetCondition{},},}
Oct  5 19:27:10.728: INFO: Pod "webserver-deployment-795d758f88-2kxq6" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-2kxq6 webserver-deployment-795d758f88- deployment-3978  daf3ac64-0756-473a-b79b-3be981391b7b 10116 0 2021-10-05 19:27:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 cbe0ad1f-ecdf-4177-b59a-a6044f99f322 0xc003753067 0xc003753068}] []  [{kube-controller-manager Update v1 2021-10-05 19:27:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbe0ad1f-ecdf-4177-b59a-a6044f99f322\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-05 19:27:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.4.93\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mksb8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mksb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SEL
inuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-41-232.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.41.232,PodIP:100.96.4.93,StartTime:2021-10-05 19:27:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.4.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  5 19:27:10.729: INFO: Pod "webserver-deployment-795d758f88-fpztc" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-fpztc webserver-deployment-795d758f88- deployment-3978  9c8f531d-9777-47ce-ac3d-42ccbee291a7 10125 0 2021-10-05 19:27:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 cbe0ad1f-ecdf-4177-b59a-a6044f99f322 0xc003753260 0xc003753261}] []  [{kube-controller-manager Update v1 2021-10-05 19:27:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbe0ad1f-ecdf-4177-b59a-a6044f99f322\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-05 19:27:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k4ngr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4ngr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPr
ivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-32-132.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.32.132,PodIP:,StartTime:2021-10-05 19:27:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  5 19:27:10.729: INFO: Pod "webserver-deployment-795d758f88-fzbrf" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-fzbrf webserver-deployment-795d758f88- deployment-3978  b403a5eb-026d-49f5-a777-c86417a8113a 10207 0 2021-10-05 19:27:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 cbe0ad1f-ecdf-4177-b59a-a6044f99f322 0xc003753437 0xc003753438}] []  [{kube-controller-manager Update v1 2021-10-05 19:27:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbe0ad1f-ecdf-4177-b59a-a6044f99f322\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-05 19:27:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.2.98\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j9krk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j9krk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SEL
inuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-41-186.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.41.186,PodIP:100.96.2.98,StartTime:2021-10-05 19:27:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.2.98,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  5 19:27:10.729: INFO: Pod "webserver-deployment-795d758f88-hh6gj" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-hh6gj webserver-deployment-795d758f88- deployment-3978  64f35c80-78f5-448f-8ee5-c749c82a7117 10167 0 2021-10-05 19:27:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 cbe0ad1f-ecdf-4177-b59a-a6044f99f322 0xc003753630 0xc003753631}] []  [{kube-controller-manager Update v1 2021-10-05 19:27:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbe0ad1f-ecdf-4177-b59a-a6044f99f322\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dnl66,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dnl66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-32-132.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,Securit
yContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  5 19:27:10.729: INFO: Pod "webserver-deployment-795d758f88-n7c5l" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-n7c5l webserver-deployment-795d758f88- deployment-3978  e3375154-d900-4ba5-b262-9b92429d9970 10192 0 2021-10-05 19:27:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 cbe0ad1f-ecdf-4177-b59a-a6044f99f322 0xc003753790 0xc003753791}] []  [{kube-controller-manager Update v1 2021-10-05 19:27:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbe0ad1f-ecdf-4177-b59a-a6044f99f322\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-05 19:27:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jmrk5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jmrk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPr
ivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-46-201.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.46.201,PodIP:,StartTime:2021-10-05 19:27:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  5 19:27:10.729: INFO: Pod "webserver-deployment-795d758f88-nwnlg" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-nwnlg webserver-deployment-795d758f88- deployment-3978  0d7ee59f-d5e8-4c91-9a1a-7cb8d6428b33 10172 0 2021-10-05 19:27:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 cbe0ad1f-ecdf-4177-b59a-a6044f99f322 0xc003753977 0xc003753978}] []  [{kube-controller-manager Update v1 2021-10-05 19:27:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbe0ad1f-ecdf-4177-b59a-a6044f99f322\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-872qt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-872qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-41-186.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,Securit
yContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  5 19:27:10.729: INFO: Pod "webserver-deployment-795d758f88-pnqmd" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-pnqmd webserver-deployment-795d758f88- deployment-3978  fd7e8632-f141-4425-9519-702f5dd2be7a 10165 0 2021-10-05 19:27:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 cbe0ad1f-ecdf-4177-b59a-a6044f99f322 0xc003753af0 0xc003753af1}] []  [{kube-controller-manager Update v1 2021-10-05 19:27:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbe0ad1f-ecdf-4177-b59a-a6044f99f322\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-05 19:27:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mj9wj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mj9wj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPr
ivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-41-232.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.41.232,PodIP:,StartTime:2021-10-05 19:27:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  5 19:27:10.730: INFO: Pod "webserver-deployment-795d758f88-qvx65" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-qvx65 webserver-deployment-795d758f88- deployment-3978  74cba447-9707-48b5-8bd1-7555bb3d99e9 10152 0 2021-10-05 19:27:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 cbe0ad1f-ecdf-4177-b59a-a6044f99f322 0xc003753cd7 0xc003753cd8}] []  [{kube-controller-manager Update v1 2021-10-05 19:27:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbe0ad1f-ecdf-4177-b59a-a6044f99f322\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7z55m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7z55m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-41-186.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,Securit
yContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  5 19:27:10.730: INFO: Pod "webserver-deployment-795d758f88-s9v6c" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-s9v6c webserver-deployment-795d758f88- deployment-3978  cc13f9ea-3c4e-48c1-bdd1-2142474cfa41 10148 0 2021-10-05 19:27:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 cbe0ad1f-ecdf-4177-b59a-a6044f99f322 0xc003753e70 0xc003753e71}] []  [{kube-controller-manager Update v1 2021-10-05 19:27:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbe0ad1f-ecdf-4177-b59a-a6044f99f322\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-f2dvk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f2dvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-32-132.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  5 19:27:10.730: INFO: Pod "webserver-deployment-795d758f88-v2kqd" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-v2kqd webserver-deployment-795d758f88- deployment-3978  8197a584-02e1-4967-8e68-725f8f63bbf2 10124 0 2021-10-05 19:27:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 cbe0ad1f-ecdf-4177-b59a-a6044f99f322 0xc003753fd0 0xc003753fd1}] []  [{kube-controller-manager Update v1 2021-10-05 19:27:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbe0ad1f-ecdf-4177-b59a-a6044f99f322\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-05 19:27:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.93\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6cf47,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6cf47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-46-201.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.46.201,PodIP:100.96.1.93,StartTime:2021-10-05 19:27:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  5 19:27:10.730: INFO: Pod "webserver-deployment-795d758f88-ws9gv" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-ws9gv webserver-deployment-795d758f88- deployment-3978  4f996906-2c5d-47f9-85e9-492a292e830f 10191 0 2021-10-05 19:27:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 cbe0ad1f-ecdf-4177-b59a-a6044f99f322 0xc0030881d0 0xc0030881d1}] []  [{kube-controller-manager Update v1 2021-10-05 19:27:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbe0ad1f-ecdf-4177-b59a-a6044f99f322\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-05 19:27:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xmxsq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmxsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-41-232.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.41.232,PodIP:,StartTime:2021-10-05 19:27:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  5 19:27:10.730: INFO: Pod "webserver-deployment-795d758f88-zf9vx" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-zf9vx webserver-deployment-795d758f88- deployment-3978  6ccbebbe-5802-41e0-a6c0-45b7a2b9d11b 10190 0 2021-10-05 19:27:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 cbe0ad1f-ecdf-4177-b59a-a6044f99f322 0xc0030883a7 0xc0030883a8}] []  [{kube-controller-manager Update v1 2021-10-05 19:27:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbe0ad1f-ecdf-4177-b59a-a6044f99f322\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-05 19:27:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6c2qx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6c2qx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-46-201.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.46.201,PodIP:,StartTime:2021-10-05 19:27:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  5 19:27:10.730: INFO: Pod "webserver-deployment-795d758f88-zt6l9" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-zt6l9 webserver-deployment-795d758f88- deployment-3978  18297cb3-55a7-41ec-9ef6-b668ab4f0f0e 10123 0 2021-10-05 19:27:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 cbe0ad1f-ecdf-4177-b59a-a6044f99f322 0xc003088577 0xc003088578}] []  [{kube-controller-manager Update v1 2021-10-05 19:27:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbe0ad1f-ecdf-4177-b59a-a6044f99f322\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-05 19:27:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.94\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-r2vgt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r2vgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-46-201.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.46.201,PodIP:100.96.1.94,StartTime:2021-10-05 19:27:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  5 19:27:10.730: INFO: Pod "webserver-deployment-847dcfb7fb-2f9m5" is not available:
&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2f9m5 webserver-deployment-847dcfb7fb- deployment-3978  d66313e6-b1ba-48ba-bab1-124a2aad776d 10149 0 2021-10-05 19:27:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 44e54955-cb2e-457f-99d1-073095e99eab 0xc003088770 0xc003088771}] []  [{kube-controller-manager Update v1 2021-10-05 19:27:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"44e54955-cb2e-457f-99d1-073095e99eab\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-10-05 19:27:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jf8zz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jf8zz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-41-232.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.41.232,PodIP:,StartTime:2021-10-05 19:27:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  5 19:27:10.731: INFO: Pod "webserver-deployment-847dcfb7fb-2q998" is not available:
&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2q998 webserver-deployment-847dcfb7fb- deployment-3978  4aa11a4a-9167-40fc-9439-079734912d80 10175 0 2021-10-05 19:27:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 44e54955-cb2e-457f-99d1-073095e99eab 0xc003088927 0xc003088928}] []  [{kube-controller-manager Update v1 2021-10-05 19:27:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"44e54955-cb2e-457f-99d1-073095e99eab\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v5jrl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5jrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-32-132.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct  5 19:27:10.731: INFO: Pod "webserver-deployment-847dcfb7fb-9cw68" is not available:
&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-9cw68 webserver-deployment-847dcfb7fb- deployment-3978  df738d6c-d132-4ef2-a271-21405f636f9e 10154 0 2021-10-05 19:27:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 44e54955-cb2e-457f-99d1-073095e99eab 0xc003088a80 0xc003088a81}] []  [{kube-controller-manager Update v1 2021-10-05 19:27:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"44e54955-cb2e-457f-99d1-073095e99eab\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x95w8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x95w8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-32-132.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-10-05 19:27:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
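[editor's note] The dumps above show the expected mid-rollout state for this test: the new ReplicaSet (pod-template-hash 795d758f88) deliberately references webserver:404, a tag that does not exist on docker.io, so its pods stay Pending with ErrImagePull or ContainerCreating, while the old ReplicaSet (847dcfb7fb) still uses the pullable k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 image. A minimal sketch to summarize the waiting reasons (hypothetical invocation, not run by the harness; namespace and hash taken from this run):

  # List each new-RS pod with its container waiting reason (ErrImagePull / ContainerCreating).
  kubectl -n deployment-3978 get pods -l pod-template-hash=795d758f88 \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}'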
... skipping 40 lines ...
• [SLOW TEST:11.221 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":6,"skipped":60,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Oct  5 19:27:01.721: INFO: PersistentVolumeClaim pvc-ddnpw found but phase is Pending instead of Bound.
Oct  5 19:27:03.759: INFO: PersistentVolumeClaim pvc-ddnpw found and phase=Bound (6.132532827s)
Oct  5 19:27:03.759: INFO: Waiting up to 3m0s for PersistentVolume local-tlnbq to have phase Bound
Oct  5 19:27:03.790: INFO: PersistentVolume local-tlnbq found and phase=Bound (30.63533ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-8x4n
STEP: Creating a pod to test subpath
Oct  5 19:27:03.885: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8x4n" in namespace "provisioning-646" to be "Succeeded or Failed"
Oct  5 19:27:03.916: INFO: Pod "pod-subpath-test-preprovisionedpv-8x4n": Phase="Pending", Reason="", readiness=false. Elapsed: 30.525104ms
Oct  5 19:27:05.955: INFO: Pod "pod-subpath-test-preprovisionedpv-8x4n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069834764s
Oct  5 19:27:07.986: INFO: Pod "pod-subpath-test-preprovisionedpv-8x4n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100771205s
Oct  5 19:27:10.022: INFO: Pod "pod-subpath-test-preprovisionedpv-8x4n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.136223207s
STEP: Saw pod success
Oct  5 19:27:10.022: INFO: Pod "pod-subpath-test-preprovisionedpv-8x4n" satisfied condition "Succeeded or Failed"
Oct  5 19:27:10.052: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-8x4n container test-container-volume-preprovisionedpv-8x4n: <nil>
STEP: delete the pod
Oct  5 19:27:10.122: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8x4n to disappear
Oct  5 19:27:10.152: INFO: Pod pod-subpath-test-preprovisionedpv-8x4n no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8x4n
Oct  5 19:27:10.152: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8x4n" in namespace "provisioning-646"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":10,"skipped":72,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:27:11.045: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 33 lines ...
I1005 19:24:40.264170    5505 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1005 19:24:43.264706    5505 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1005 19:24:46.265006    5505 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct  5 19:24:46.359: INFO: Creating new exec pod
Oct  5 19:24:49.483: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:24:51.946: INFO: rc: 1
Oct  5 19:24:51.946: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
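[editor's note] Each "Retrying..." iteration below re-runs the same probe: exec into the client pod and attempt a TCP connect to the service's DNS name on port 80 with a 2-second timeout; nc exits 1 until kube-proxy has programmed the endpoints, which is what rc: 1 reflects. The standalone equivalent, taken verbatim from the invocation above, is:

  kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config \
    --namespace=services-7767 exec execpod-affinityvfgqx -- \
    /bin/sh -x -c 'echo hostName | nc -v -t -w 2 affinity-nodeport 80'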
Oct  5 19:24:52.946: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:24:55.464: INFO: rc: 1
Oct  5 19:24:55.464: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:24:55.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:24:58.435: INFO: rc: 1
Oct  5 19:24:58.435: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:24:58.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:01.657: INFO: rc: 1
Oct  5 19:25:01.657: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:01.946: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:05.251: INFO: rc: 1
Oct  5 19:25:05.251: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:05.946: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:08.436: INFO: rc: 1
Oct  5 19:25:08.436: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ nc -v -t -w 2 affinity-nodeport 80
+ echo hostName
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:08.946: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:11.512: INFO: rc: 1
Oct  5 19:25:11.512: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:11.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:14.555: INFO: rc: 1
Oct  5 19:25:14.555: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ nc -v -t -w 2 affinity-nodeport 80
+ echo hostName
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:14.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:17.474: INFO: rc: 1
Oct  5 19:25:17.474: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:17.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:20.410: INFO: rc: 1
Oct  5 19:25:20.410: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:20.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:23.405: INFO: rc: 1
Oct  5 19:25:23.406: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:23.946: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:26.473: INFO: rc: 1
Oct  5 19:25:26.473: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ nc -v -t -w 2 affinity-nodeport 80
+ echo hostName
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:26.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:29.423: INFO: rc: 1
Oct  5 19:25:29.423: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:29.946: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:32.494: INFO: rc: 1
Oct  5 19:25:32.494: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:32.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:35.398: INFO: rc: 1
Oct  5 19:25:35.398: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:35.946: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:38.698: INFO: rc: 1
Oct  5 19:25:38.698: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ nc -v -t -w 2 affinity-nodeport 80
+ echo hostName
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:38.946: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:41.542: INFO: rc: 1
Oct  5 19:25:41.542: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:41.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:44.390: INFO: rc: 1
Oct  5 19:25:44.390: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ nc -v -t -w 2 affinity-nodeport 80
+ echo hostName
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:44.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:47.496: INFO: rc: 1
Oct  5 19:25:47.496: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:47.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:50.424: INFO: rc: 1
Oct  5 19:25:50.424: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:50.946: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:53.429: INFO: rc: 1
Oct  5 19:25:53.429: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:53.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:56.399: INFO: rc: 1
Oct  5 19:25:56.399: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:56.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:25:59.420: INFO: rc: 1
Oct  5 19:25:59.420: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:25:59.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:26:02.395: INFO: rc: 1
Oct  5 19:26:02.395: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ nc -v -t -w 2 affinity-nodeport 80
+ echo hostName
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:26:02.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:26:05.454: INFO: rc: 1
Oct  5 19:26:05.454: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:26:05.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:26:08.390: INFO: rc: 1
Oct  5 19:26:08.390: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:26:08.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:26:11.490: INFO: rc: 1
Oct  5 19:26:11.490: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:26:11.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:26:14.395: INFO: rc: 1
Oct  5 19:26:14.395: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ nc -v -t -w 2 affinity-nodeport 80
+ echo hostName
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:26:14.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:26:17.418: INFO: rc: 1
Oct  5 19:26:17.418: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:26:17.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:26:20.387: INFO: rc: 1
Oct  5 19:26:20.387: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:26:20.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:26:23.458: INFO: rc: 1
Oct  5 19:26:23.458: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:26:23.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:26:26.394: INFO: rc: 1
Oct  5 19:26:26.394: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:26:26.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:26:29.401: INFO: rc: 1
Oct  5 19:26:29.401: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:26:29.946: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:26:32.490: INFO: rc: 1
Oct  5 19:26:32.490: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:26:32.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:26:35.508: INFO: rc: 1
Oct  5 19:26:35.508: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:26:35.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:26:38.449: INFO: rc: 1
Oct  5 19:26:38.449: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:26:38.947: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:26:41.399: INFO: rc: 1
Oct  5 19:26:41.399: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ nc -v -t -w 2 affinity-nodeport 80
+ echo hostName
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:26:41.946: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Oct  5 19:26:44.437: INFO: rc: 1
Oct  5 19:26:44.437: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7767 exec execpod-affinityvfgqx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: 
... skipping 68969 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":19,"skipped":152,"failed":3,"failures":["[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-cli] Kubectl client Simple pod should handle in-cluster config"]}

SS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:44:25.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6380" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":53,"skipped":319,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:44:26.072: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 213 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Oct  5 19:44:26.552: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9cc2fee-b605-444f-b14b-b2d2d6493b38" in namespace "projected-8985" to be "Succeeded or Failed"
Oct  5 19:44:26.582: INFO: Pod "downwardapi-volume-d9cc2fee-b605-444f-b14b-b2d2d6493b38": Phase="Pending", Reason="", readiness=false. Elapsed: 30.378576ms
Oct  5 19:44:28.614: INFO: Pod "downwardapi-volume-d9cc2fee-b605-444f-b14b-b2d2d6493b38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062031895s
STEP: Saw pod success
Oct  5 19:44:28.614: INFO: Pod "downwardapi-volume-d9cc2fee-b605-444f-b14b-b2d2d6493b38" satisfied condition "Succeeded or Failed"
Oct  5 19:44:28.644: INFO: Trying to get logs from node ip-172-20-41-232.ca-central-1.compute.internal pod downwardapi-volume-d9cc2fee-b605-444f-b14b-b2d2d6493b38 container client-container: <nil>
STEP: delete the pod
Oct  5 19:44:28.712: INFO: Waiting for pod downwardapi-volume-d9cc2fee-b605-444f-b14b-b2d2d6493b38 to disappear
Oct  5 19:44:28.742: INFO: Pod downwardapi-volume-d9cc2fee-b605-444f-b14b-b2d2d6493b38 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:44:28.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8985" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":332,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":21,"skipped":163,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:44:28.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Oct  5 19:44:28.348: INFO: Waiting up to 5m0s for pod "security-context-a2ca7037-70ac-4fcf-8779-c7be9b27588f" in namespace "security-context-3032" to be "Succeeded or Failed"
Oct  5 19:44:28.378: INFO: Pod "security-context-a2ca7037-70ac-4fcf-8779-c7be9b27588f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.365629ms
Oct  5 19:44:30.410: INFO: Pod "security-context-a2ca7037-70ac-4fcf-8779-c7be9b27588f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062601979s
STEP: Saw pod success
Oct  5 19:44:30.410: INFO: Pod "security-context-a2ca7037-70ac-4fcf-8779-c7be9b27588f" satisfied condition "Succeeded or Failed"
Oct  5 19:44:30.441: INFO: Trying to get logs from node ip-172-20-41-232.ca-central-1.compute.internal pod security-context-a2ca7037-70ac-4fcf-8779-c7be9b27588f container test-container: <nil>
STEP: delete the pod
Oct  5 19:44:30.507: INFO: Waiting for pod security-context-a2ca7037-70ac-4fcf-8779-c7be9b27588f to disappear
Oct  5 19:44:30.538: INFO: Pod security-context-a2ca7037-70ac-4fcf-8779-c7be9b27588f no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 131 lines ...
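The pod.Spec.SecurityContext.RunAsUser check elided above amounts to running the container as a fixed UID and reading it back; a minimal reproduction (pod name and UID 1000 are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "id -u"]
EOF
# Once the pod completes, its log should print 1000.
kubectl logs security-context-demo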
• [SLOW TEST:137.419 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not disrupt a cloud load-balancer's connectivity during rollout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:158
------------------------------
{"msg":"PASSED [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","total":-1,"completed":41,"skipped":296,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:44:30.726: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 40 lines ...
Oct  5 19:44:18.853: INFO: PersistentVolumeClaim pvc-t7xst found and phase=Bound (2.061673509s)
Oct  5 19:44:18.854: INFO: Waiting up to 3m0s for PersistentVolume nfs-sp58k to have phase Bound
Oct  5 19:44:18.884: INFO: PersistentVolume nfs-sp58k found and phase=Bound (30.835263ms)
STEP: Checking pod has write access to PersistentVolume
Oct  5 19:44:18.946: INFO: Creating nfs test pod
Oct  5 19:44:18.979: INFO: Pod should terminate with exitcode 0 (success)
Oct  5 19:44:18.979: INFO: Waiting up to 5m0s for pod "pvc-tester-ndbqj" in namespace "pv-4034" to be "Succeeded or Failed"
Oct  5 19:44:19.010: INFO: Pod "pvc-tester-ndbqj": Phase="Pending", Reason="", readiness=false. Elapsed: 30.911352ms
Oct  5 19:44:21.043: INFO: Pod "pvc-tester-ndbqj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06330168s
Oct  5 19:44:23.075: INFO: Pod "pvc-tester-ndbqj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095619096s
STEP: Saw pod success
Oct  5 19:44:23.075: INFO: Pod "pvc-tester-ndbqj" satisfied condition "Succeeded or Failed"
Oct  5 19:44:23.075: INFO: Pod pvc-tester-ndbqj succeeded 
Oct  5 19:44:23.075: INFO: Deleting pod "pvc-tester-ndbqj" in namespace "pv-4034"
Oct  5 19:44:23.109: INFO: Wait up to 5m0s for pod "pvc-tester-ndbqj" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Oct  5 19:44:23.140: INFO: Deleting PVC pvc-t7xst to trigger reclamation of PV 
Oct  5 19:44:23.140: INFO: Deleting PersistentVolumeClaim "pvc-t7xst"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and non-pre-bound PV: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:178
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","total":-1,"completed":40,"skipped":317,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:44:39.471: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 36 lines ...
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-9444 to expose endpoints map[hairpin:[8080]]
Oct  5 19:42:28.345: INFO: successfully validated that service hairpin-test in namespace services-9444 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
Oct  5 19:42:29.346: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:42:34.828: INFO: rc: 1
Oct  5 19:42:34.828: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:42:35.829: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:42:41.301: INFO: rc: 1
Oct  5 19:42:41.301: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:42:41.829: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:42:47.282: INFO: rc: 1
Oct  5 19:42:47.282: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:42:47.828: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:42:53.311: INFO: rc: 1
Oct  5 19:42:53.311: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:42:53.829: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:42:59.306: INFO: rc: 1
Oct  5 19:42:59.306: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:42:59.829: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:43:05.291: INFO: rc: 1
Oct  5 19:43:05.291: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:05.829: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:43:11.318: INFO: rc: 1
Oct  5 19:43:11.318: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:11.828: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:43:17.279: INFO: rc: 1
Oct  5 19:43:17.279: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:17.828: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:43:23.317: INFO: rc: 1
Oct  5 19:43:23.317: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:23.829: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:43:29.283: INFO: rc: 1
Oct  5 19:43:29.284: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:29.829: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:43:35.280: INFO: rc: 1
Oct  5 19:43:35.280: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:35.829: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:43:41.278: INFO: rc: 1
Oct  5 19:43:41.278: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ nc -v -t -w 2 hairpin-test 8080
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:41.829: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:43:47.308: INFO: rc: 1
Oct  5 19:43:47.308: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:47.828: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:43:53.281: INFO: rc: 1
Oct  5 19:43:53.281: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:53.829: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:43:59.277: INFO: rc: 1
Oct  5 19:43:59.277: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:59.829: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:44:05.357: INFO: rc: 1
Oct  5 19:44:05.357: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:05.829: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:44:11.363: INFO: rc: 1
Oct  5 19:44:11.363: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:11.829: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:44:17.285: INFO: rc: 1
Oct  5 19:44:17.286: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:17.829: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:44:23.406: INFO: rc: 1
Oct  5 19:44:23.406: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ nc -v -t -w 2 hairpin-test 8080
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:23.829: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:44:29.341: INFO: rc: 1
Oct  5 19:44:29.341: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:29.829: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:44:35.290: INFO: rc: 1
Oct  5 19:44:35.290: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:35.290: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct  5 19:44:40.770: INFO: rc: 1
Oct  5 19:44:40.770: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9444 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 hairpin-test 8080
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:40.771: FAIL: Unexpected error:
    <*errors.errorString | 0xc004d9a5a0>: {
        s: "service is not reachable within 2m0s timeout on endpoint hairpin-test:8080 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint hairpin-test:8080 over TCP protocol
occurred

... skipping 220 lines ...
• Failure [139.007 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986

  Oct  5 19:44:40.771: Unexpected error:
      <*errors.errorString | 0xc004d9a5a0>: {
          s: "service is not reachable within 2m0s timeout on endpoint hairpin-test:8080 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint hairpin-test:8080 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1012
------------------------------
{"msg":"FAILED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":8,"skipped":79,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should allow pods to hairpin back to themselves through services"]}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:44:42.925: INFO: Driver emptydir doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":37,"skipped":258,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:44:10.004: INFO: >>> kubeConfig: /root/.kube/config
... skipping 90 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":38,"skipped":258,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 5 lines ...
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1005 19:39:45.052486    5441 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct  5 19:44:45.113: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:44:45.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7826" for this suite.


• [SLOW TEST:300.606 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":26,"skipped":206,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:44:42.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-5b23252c-c8cd-4783-ae19-f48e4464e364
STEP: Creating a pod to test consume secrets
Oct  5 19:44:43.162: INFO: Waiting up to 5m0s for pod "pod-secrets-544d31e5-f0b9-4758-a6d7-e4535e900b48" in namespace "secrets-764" to be "Succeeded or Failed"
Oct  5 19:44:43.192: INFO: Pod "pod-secrets-544d31e5-f0b9-4758-a6d7-e4535e900b48": Phase="Pending", Reason="", readiness=false. Elapsed: 30.688989ms
Oct  5 19:44:45.224: INFO: Pod "pod-secrets-544d31e5-f0b9-4758-a6d7-e4535e900b48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062050787s
STEP: Saw pod success
Oct  5 19:44:45.224: INFO: Pod "pod-secrets-544d31e5-f0b9-4758-a6d7-e4535e900b48" satisfied condition "Succeeded or Failed"
Oct  5 19:44:45.255: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod pod-secrets-544d31e5-f0b9-4758-a6d7-e4535e900b48 container secret-volume-test: <nil>
STEP: delete the pod
Oct  5 19:44:45.322: INFO: Waiting for pod pod-secrets-544d31e5-f0b9-4758-a6d7-e4535e900b48 to disappear
Oct  5 19:44:45.353: INFO: Pod pod-secrets-544d31e5-f0b9-4758-a6d7-e4535e900b48 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-projected-ztkm
STEP: Creating a pod to test atomic-volume-subpath
Oct  5 19:44:25.323: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-ztkm" in namespace "subpath-8584" to be "Succeeded or Failed"
Oct  5 19:44:25.353: INFO: Pod "pod-subpath-test-projected-ztkm": Phase="Pending", Reason="", readiness=false. Elapsed: 30.395474ms
Oct  5 19:44:27.385: INFO: Pod "pod-subpath-test-projected-ztkm": Phase="Running", Reason="", readiness=true. Elapsed: 2.062080937s
Oct  5 19:44:29.417: INFO: Pod "pod-subpath-test-projected-ztkm": Phase="Running", Reason="", readiness=true. Elapsed: 4.093741656s
Oct  5 19:44:31.454: INFO: Pod "pod-subpath-test-projected-ztkm": Phase="Running", Reason="", readiness=true. Elapsed: 6.130535069s
Oct  5 19:44:33.485: INFO: Pod "pod-subpath-test-projected-ztkm": Phase="Running", Reason="", readiness=true. Elapsed: 8.161855579s
Oct  5 19:44:35.517: INFO: Pod "pod-subpath-test-projected-ztkm": Phase="Running", Reason="", readiness=true. Elapsed: 10.193654728s
Oct  5 19:44:37.549: INFO: Pod "pod-subpath-test-projected-ztkm": Phase="Running", Reason="", readiness=true. Elapsed: 12.226014247s
Oct  5 19:44:39.580: INFO: Pod "pod-subpath-test-projected-ztkm": Phase="Running", Reason="", readiness=true. Elapsed: 14.256977299s
Oct  5 19:44:41.611: INFO: Pod "pod-subpath-test-projected-ztkm": Phase="Running", Reason="", readiness=true. Elapsed: 16.288353767s
Oct  5 19:44:43.643: INFO: Pod "pod-subpath-test-projected-ztkm": Phase="Running", Reason="", readiness=true. Elapsed: 18.319817872s
Oct  5 19:44:45.674: INFO: Pod "pod-subpath-test-projected-ztkm": Phase="Running", Reason="", readiness=true. Elapsed: 20.351440699s
Oct  5 19:44:47.706: INFO: Pod "pod-subpath-test-projected-ztkm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.38256799s
STEP: Saw pod success
Oct  5 19:44:47.706: INFO: Pod "pod-subpath-test-projected-ztkm" satisfied condition "Succeeded or Failed"
Oct  5 19:44:47.736: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod pod-subpath-test-projected-ztkm container test-container-subpath-projected-ztkm: <nil>
STEP: delete the pod
Oct  5 19:44:47.806: INFO: Waiting for pod pod-subpath-test-projected-ztkm to disappear
Oct  5 19:44:47.836: INFO: Pod pod-subpath-test-projected-ztkm no longer exists
STEP: Deleting pod pod-subpath-test-projected-ztkm
Oct  5 19:44:47.836: INFO: Deleting pod "pod-subpath-test-projected-ztkm" in namespace "subpath-8584"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":20,"skipped":154,"failed":3,"failures":["[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-cli] Kubectl client Simple pod should handle in-cluster config"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:44:47.948: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 128 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Oct  5 19:44:54.068: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-266" to be "Succeeded or Failed"
Oct  5 19:44:54.099: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 30.181597ms
Oct  5 19:44:56.130: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061116728s
STEP: Saw pod success
Oct  5 19:44:56.130: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Oct  5 19:44:56.160: INFO: Trying to get logs from node ip-172-20-41-232.ca-central-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Oct  5 19:44:56.228: INFO: Waiting for pod pod-host-path-test to disappear
Oct  5 19:44:56.258: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:44:56.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-266" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":42,"skipped":327,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:44:56.331: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 128 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:44:56.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8757" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":351,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:17.483 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":41,"skipped":320,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:44:56.986: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 41 lines ...
Oct  5 19:44:48.280: INFO: PersistentVolumeClaim pvc-xh9fp found but phase is Pending instead of Bound.
Oct  5 19:44:50.315: INFO: PersistentVolumeClaim pvc-xh9fp found and phase=Bound (2.067861865s)
Oct  5 19:44:50.315: INFO: Waiting up to 3m0s for PersistentVolume local-g6jft to have phase Bound
Oct  5 19:44:50.346: INFO: PersistentVolume local-g6jft found and phase=Bound (30.683062ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dp2m
STEP: Creating a pod to test subpath
Oct  5 19:44:50.440: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dp2m" in namespace "provisioning-5081" to be "Succeeded or Failed"
Oct  5 19:44:50.471: INFO: Pod "pod-subpath-test-preprovisionedpv-dp2m": Phase="Pending", Reason="", readiness=false. Elapsed: 30.879369ms
Oct  5 19:44:52.502: INFO: Pod "pod-subpath-test-preprovisionedpv-dp2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062437992s
Oct  5 19:44:54.533: INFO: Pod "pod-subpath-test-preprovisionedpv-dp2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093398879s
Oct  5 19:44:56.565: INFO: Pod "pod-subpath-test-preprovisionedpv-dp2m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.125351811s
STEP: Saw pod success
Oct  5 19:44:56.565: INFO: Pod "pod-subpath-test-preprovisionedpv-dp2m" satisfied condition "Succeeded or Failed"
Oct  5 19:44:56.596: INFO: Trying to get logs from node ip-172-20-41-186.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-dp2m container test-container-subpath-preprovisionedpv-dp2m: <nil>
STEP: delete the pod
Oct  5 19:44:56.669: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dp2m to disappear
Oct  5 19:44:56.699: INFO: Pod pod-subpath-test-preprovisionedpv-dp2m no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dp2m
Oct  5 19:44:56.699: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dp2m" in namespace "provisioning-5081"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":39,"skipped":260,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:44:59.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3249" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":44,"skipped":352,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:44:59.518: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 71 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 109 lines ...
STEP: Creating a mutating webhook configuration
Oct  5 19:44:14.901: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:44:25.066: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:44:35.166: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:44:45.265: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:44:55.329: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:44:55.329: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000244250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 456 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct  5 19:44:55.329: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000244250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:527
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":46,"skipped":250,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:45:06.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8550" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":47,"skipped":252,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:45:06.125: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 24 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct  5 19:45:06.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":48,"skipped":258,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:45:06.290: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
Oct  5 19:45:06.553: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.233 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 18 lines ...
Oct  5 19:45:03.140: INFO: PersistentVolumeClaim pvc-6sftt found but phase is Pending instead of Bound.
Oct  5 19:45:05.172: INFO: PersistentVolumeClaim pvc-6sftt found and phase=Bound (2.062568602s)
Oct  5 19:45:05.172: INFO: Waiting up to 3m0s for PersistentVolume local-z4xj6 to have phase Bound
Oct  5 19:45:05.203: INFO: PersistentVolume local-z4xj6 found and phase=Bound (30.720252ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-hgks
STEP: Creating a pod to test subpath
Oct  5 19:45:05.298: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hgks" in namespace "provisioning-4223" to be "Succeeded or Failed"
Oct  5 19:45:05.329: INFO: Pod "pod-subpath-test-preprovisionedpv-hgks": Phase="Pending", Reason="", readiness=false. Elapsed: 30.781683ms
Oct  5 19:45:07.362: INFO: Pod "pod-subpath-test-preprovisionedpv-hgks": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062970625s
Oct  5 19:45:09.393: INFO: Pod "pod-subpath-test-preprovisionedpv-hgks": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094099146s
STEP: Saw pod success
Oct  5 19:45:09.393: INFO: Pod "pod-subpath-test-preprovisionedpv-hgks" satisfied condition "Succeeded or Failed"
Oct  5 19:45:09.423: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-hgks container test-container-subpath-preprovisionedpv-hgks: <nil>
STEP: delete the pod
Oct  5 19:45:09.492: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hgks to disappear
Oct  5 19:45:09.522: INFO: Pod pod-subpath-test-preprovisionedpv-hgks no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hgks
Oct  5 19:45:09.522: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hgks" in namespace "provisioning-4223"
STEP: Creating pod pod-subpath-test-preprovisionedpv-hgks
STEP: Creating a pod to test subpath
Oct  5 19:45:09.585: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hgks" in namespace "provisioning-4223" to be "Succeeded or Failed"
Oct  5 19:45:09.615: INFO: Pod "pod-subpath-test-preprovisionedpv-hgks": Phase="Pending", Reason="", readiness=false. Elapsed: 30.432653ms
Oct  5 19:45:11.647: INFO: Pod "pod-subpath-test-preprovisionedpv-hgks": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062207914s
STEP: Saw pod success
Oct  5 19:45:11.647: INFO: Pod "pod-subpath-test-preprovisionedpv-hgks" satisfied condition "Succeeded or Failed"
Oct  5 19:45:11.678: INFO: Trying to get logs from node ip-172-20-32-132.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-hgks container test-container-subpath-preprovisionedpv-hgks: <nil>
STEP: delete the pod
Oct  5 19:45:11.761: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hgks to disappear
Oct  5 19:45:11.796: INFO: Pod pod-subpath-test-preprovisionedpv-hgks no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hgks
Oct  5 19:45:11.796: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hgks" in namespace "provisioning-4223"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":40,"skipped":263,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct  5 19:45:12.339: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 82 lines ...
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6003-crds.webhook.example.com via the AdmissionRegistration API
Oct  5 19:44:27.082: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:44:37.248: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:44:47.346: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:44:57.454: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:45:07.520: INFO: Waiting for webhook configuration to be ready...
Oct  5 19:45:07.520: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002b6240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 448 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct  5 19:45:07.520: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002b6240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":19,"skipped":98,"failed":2,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Oct  5 19:44:48.183: INFO: PersistentVolumeClaim pvc-bgv2v found but phase is Pending instead of Bound.
Oct  5 19:44:50.216: INFO: PersistentVolumeClaim pvc-bgv2v found and phase=Bound (2.06362768s)
Oct  5 19:44:50.216: INFO: Waiting up to 3m0s for PersistentVolume local-67krp to have phase Bound
Oct  5 19:44:50.248: INFO: PersistentVolume local-67krp found and phase=Bound (32.376014ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-t9qn
STEP: Creating a pod to test atomic-volume-subpath
Oct  5 19:44:50.347: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-t9qn" in namespace "provisioning-1587" to be "Succeeded or Failed"
Oct  5 19:44:50.378: INFO: Pod "pod-subpath-test-preprovisionedpv-t9qn": Phase="Pending", Reason="", readiness=false. Elapsed: 31.02101ms
Oct  5 19:44:52.411: INFO: Pod "pod-subpath-test-preprovisionedpv-t9qn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063448569s
Oct  5 19:44:54.442: INFO: Pod "pod-subpath-test-preprovisionedpv-t9qn": Phase="Running", Reason="", readiness=true. Elapsed: 4.094688335s
Oct  5 19:44:56.474: INFO: Pod "pod-subpath-test-preprovisionedpv-t9qn": Phase="Running", Reason="", readiness=true. Elapsed: 6.126506522s
Oct  5 19:44:58.588: INFO: Pod "pod-subpath-test-preprovisionedpv-t9qn": Phase="Running", Reason="", readiness=true. Elapsed: 8.240849972s
Oct  5 19:45:00.628: INFO: Pod "pod-subpath-test-preprovisionedpv-t9qn": Phase="Running", Reason="", readiness=true. Elapsed: 10.28053518s
... skipping 2 lines ...
Oct  5 19:45:06.723: INFO: Pod "pod-subpath-test-preprovisionedpv-t9qn": Phase="Running", Reason="", readiness=true. Elapsed: 16.375822961s
Oct  5 19:45:08.754: INFO: Pod "pod-subpath-test-preprovisionedpv-t9qn": Phase="Running", Reason="", readiness=true. Elapsed: 18.407079924s
Oct  5 19:45:10.787: INFO: Pod "pod-subpath-test-preprovisionedpv-t9qn": Phase="Running", Reason="", readiness=true. Elapsed: 20.439555574s
Oct  5 19:45:12.819: INFO: Pod "pod-subpath-test-preprovisionedpv-t9qn": Phase="Running", Reason="", readiness=true. Elapsed: 22.471947369s
Oct  5 19:45:14.851: INFO: Pod "pod-subpath-test-preprovisionedpv-t9qn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.503431469s
STEP: Saw pod success
Oct  5 19:45:14.851: INFO: Pod "pod-subpath-test-preprovisionedpv-t9qn" satisfied condition "Succeeded or Failed"
Oct  5 19:45:14.882: INFO: Trying to get logs from node ip-172-20-46-201.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-t9qn container test-container-subpath-preprovisionedpv-t9qn: <nil>
STEP: delete the pod
Oct  5 19:45:14.951: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-t9qn to disappear
Oct  5 19:45:14.982: INFO: Pod pod-subpath-test-preprovisionedpv-t9qn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-t9qn
Oct  5 19:45:14.982: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-t9qn" in namespace "provisioning-1587"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":27,"skipped":211,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}
[BeforeEach] [sig-windows] Hybrid cluster network
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Oct  5 19:45:16.102: INFO: Only supported for node OS distro [windows] (not debian)
[AfterEach] [sig-windows] Hybrid cluster network
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 56 lines ...
• [SLOW TEST:17.491 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":45,"skipped":375,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
Oct  5 19:45:17.148: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
Oct  5 19:45:19.342: INFO: PersistentVolumeClaim pvc-2zl5d found and phase=Bound (10.192070653s)
Oct  5 19:45:19.342: INFO: Waiting up to 3m0s for PersistentVolume nfs-989ch to have phase Bound
Oct  5 19:45:19.372: INFO: PersistentVolume nfs-989ch found and phase=Bound (30.741392ms)
STEP: Checking pod has write access to PersistentVolume
Oct  5 19:45:19.433: INFO: Creating nfs test pod
Oct  5 19:45:19.465: INFO: Pod should terminate with exitcode 0 (success)
Oct  5 19:45:19.465: INFO: Waiting up to 5m0s for pod "pvc-tester-zxtn4" in namespace "pv-3671" to be "Succeeded or Failed"
Oct  5 19:45:19.496: INFO: Pod "pvc-tester-zxtn4": Phase="Pending", Reason="", readiness=false. Elapsed: 30.477953ms
Oct  5 19:45:21.527: INFO: Pod "pvc-tester-zxtn4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061957445s
STEP: Saw pod success
Oct  5 19:45:21.527: INFO: Pod "pvc-tester-zxtn4" satisfied condition "Succeeded or Failed"
Oct  5 19:45:21.527: INFO: Pod pvc-tester-zxtn4 succeeded 
Oct  5 19:45:21.527: INFO: Deleting pod "pvc-tester-zxtn4" in namespace "pv-3671"
Oct  5 19:45:21.562: INFO: Wait up to 5m0s for pod "pvc-tester-zxtn4" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Oct  5 19:45:21.592: INFO: Deleting PVC pvc-2zl5d to trigger reclamation of PV 
Oct  5 19:45:21.592: INFO: Deleting PersistentVolumeClaim "pvc-2zl5d"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and a pre-bound PV: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:187
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","total":-1,"completed":49,"skipped":272,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
Oct  5 19:45:27.909: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
I1005 19:43:05.904570    5465 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7053, replica count: 2
I1005 19:43:08.955676    5465 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1005 19:43:11.955963    5465 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct  5 19:43:11.956: INFO: Creating new exec pod
Oct  5 19:43:15.082: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:43:20.540: INFO: rc: 1
Oct  5 19:43:20.540: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:21.540: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:43:26.989: INFO: rc: 1
Oct  5 19:43:26.989: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:27.541: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:43:32.990: INFO: rc: 1
Oct  5 19:43:32.990: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:33.541: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:43:39.010: INFO: rc: 1
Oct  5 19:43:39.010: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:39.541: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:43:44.992: INFO: rc: 1
Oct  5 19:43:44.992: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:45.541: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:43:50.997: INFO: rc: 1
Oct  5 19:43:50.997: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:51.541: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:43:57.050: INFO: rc: 1
Oct  5 19:43:57.051: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:43:57.540: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:44:02.990: INFO: rc: 1
Oct  5 19:44:02.990: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:03.540: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:44:09.076: INFO: rc: 1
Oct  5 19:44:09.076: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:09.540: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:44:15.028: INFO: rc: 1
Oct  5 19:44:15.028: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:15.541: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:44:21.052: INFO: rc: 1
Oct  5 19:44:21.053: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:21.540: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:44:27.038: INFO: rc: 1
Oct  5 19:44:27.038: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:27.541: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:44:33.000: INFO: rc: 1
Oct  5 19:44:33.000: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:33.540: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:44:39.010: INFO: rc: 1
Oct  5 19:44:39.010: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:39.540: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:44:45.054: INFO: rc: 1
Oct  5 19:44:45.054: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:45.541: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:44:51.003: INFO: rc: 1
Oct  5 19:44:51.003: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:51.540: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:44:57.105: INFO: rc: 1
Oct  5 19:44:57.105: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:44:57.541: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:45:03.214: INFO: rc: 1
Oct  5 19:45:03.214: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:45:03.540: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:45:09.037: INFO: rc: 1
Oct  5 19:45:09.037: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:45:09.541: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:45:15.113: INFO: rc: 1
Oct  5 19:45:15.113: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:45:15.540: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:45:21.004: INFO: rc: 1
Oct  5 19:45:21.004: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:45:21.004: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Oct  5 19:45:26.462: INFO: rc: 1
Oct  5 19:45:26.462: INFO: Service reachability failing with error: error running /tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7053 exec execpodq74cl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct  5 19:45:26.463: FAIL: Unexpected error:
    <*errors.errorString | 0xc0028640e0>: {
        s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
occurred

... skipping 223 lines ...
• Failure [143.087 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct  5 19:45:26.463: Unexpected error:
      <*errors.errorString | 0xc0028640e0>: {
          s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":19,"skipped":129,"failed":4,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
Oct  5 19:45:28.714: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":27,"skipped":201,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:44:21.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
Oct  5 19:45:28.602: INFO: boom-server pod logs: 2021/10/05 19:44:24 external ip: 100.96.4.142
2021/10/05 19:44:24 listen on 0.0.0.0:9000
2021/10/05 19:44:24 probing 100.96.4.142

Oct  5 19:45:28.602: FAIL: Boom server pod did not send any bad packet to the client

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c37b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc002c37b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
... skipping 211 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282

  Oct  5 19:45:28.602: Boom server pod did not send any bad packet to the client

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":27,"skipped":201,"failed":2,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Conntrack should drop INVALID conntrack entries"]}
Oct  5 19:45:30.802: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
Oct  5 19:26:45.593: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:27:15.626: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:27:45.658: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:28:15.691: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:28:45.725: INFO: Unable to read jessie_udp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:29:15.777: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:29:15.777: INFO: Lookups using dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct  5 19:29:50.811: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:30:20.843: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:30:50.874: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:31:20.906: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:31:50.946: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:32:20.978: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:32:51.009: INFO: Unable to read jessie_udp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:33:21.040: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:33:21.040: INFO: Lookups using dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct  5 19:33:55.821: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:34:25.859: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:34:55.894: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:35:25.928: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:35:55.960: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:36:25.994: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:36:56.029: INFO: Unable to read jessie_udp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:37:26.062: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:37:26.062: INFO: Lookups using dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct  5 19:38:00.809: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:38:30.842: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:39:00.873: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:39:30.905: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:40:00.936: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:40:30.970: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:41:01.001: INFO: Unable to read jessie_udp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:41:31.032: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:41:31.032: INFO: Lookups using dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct  5 19:42:01.064: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:42:31.095: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:43:01.126: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:43:31.157: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:44:01.189: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:44:31.221: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:45:01.254: INFO: Unable to read jessie_udp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:45:31.285: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025: the server is currently unable to handle the request (get pods dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025)
Oct  5 19:45:31.285: INFO: Lookups using dns-9737/dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025 failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-9737.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Oct  5 19:45:31.285: FAIL: Unexpected error:
    <*errors.errorString | 0xc000244250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 218 lines ...
• Failure [1230.169 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct  5 19:45:31.285: Unexpected error:
      <*errors.errorString | 0xc000244250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

... skipping 43 lines ...
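
Every probe read in the DNS test above failed with "the server is currently unable to handle the request (get pods ...)", which is an apiserver proxy error while fetching probe results from the test pod, not a direct DNS lookup error; the lookups themselves were never readable. A hedged manual check using the names from the log (the container name follows the upstream dns test fixture and is an assumption here, as is getent being present in the image):

# Confirm the probe pod is actually running, and on which node:
kubectl --namespace=dns-9737 get pod dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025 -o wide

# Re-run one of the failing lookups directly inside the jessie container:
kubectl --namespace=dns-9737 exec dns-test-42204a7d-c8cc-4fd6-9240-d95c2a52b025 \
  -c jessie-querier -- getent hosts dns-querier-1.dns-test-service.dns-9737.svc.cluster.local
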
Oct  5 19:44:14.903: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2033
Oct  5 19:44:14.936: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2033
Oct  5 19:44:14.967: INFO: creating *v1.StatefulSet: csi-mock-volumes-2033-5434/csi-mockplugin
Oct  5 19:44:15.009: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2033
Oct  5 19:44:15.047: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2033"
Oct  5 19:44:15.079: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2033 to register on node ip-172-20-46-201.ca-central-1.compute.internal
I1005 19:44:18.472747    5346 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I1005 19:44:18.504176    5346 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2033","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1005 19:44:18.538592    5346 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I1005 19:44:18.574825    5346 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I1005 19:44:18.676912    5346 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2033","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I1005 19:44:18.885041    5346 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-2033"},"Error":"","FullError":null}
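
The gRPC trace above is the mock driver's registration handshake: Probe, GetPluginInfo, GetPluginCapabilities, ControllerGetCapabilities, and finally NodeGetInfo, after which the driver counts as registered on the node. A quick way to confirm registration from outside the test, a sketch using the object and node names copied from the log:

# The CSIDriver object created by the test:
kubectl get csidriver csi-mock-csi-mock-volumes-2033 -o yaml

# The per-node registration recorded in the CSINode object:
kubectl get csinode ip-172-20-46-201.ca-central-1.compute.internal -o yaml
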
STEP: Creating pod
Oct  5 19:44:20.242: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I1005 19:44:20.342689    5346 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-c96acf32-db58-4046-8700-41af9e4f695d","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I1005 19:44:20.382433    5346 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-c96acf32-db58-4046-8700-41af9e4f695d","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-c96acf32-db58-4046-8700-41af9e4f695d"}}},"Error":"","FullError":null}
I1005 19:44:21.535334    5346 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct  5 19:44:21.568: INFO: >>> kubeConfig: /root/.kube/config
I1005 19:44:21.830239    5346 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c96acf32-db58-4046-8700-41af9e4f695d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c96acf32-db58-4046-8700-41af9e4f695d","storage.kubernetes.io/csiProvisionerIdentity":"1633463058593-8081-csi-mock-csi-mock-volumes-2033"}},"Response":{},"Error":"","FullError":null}
I1005 19:44:21.864126    5346 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Oct  5 19:44:21.895: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:44:22.191: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:44:22.473: INFO: >>> kubeConfig: /root/.kube/config
I1005 19:44:22.738203    5346 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c96acf32-db58-4046-8700-41af9e4f695d/globalmount","target_path":"/var/lib/kubelet/pods/8fe01aad-7cc3-4d7d-be18-67d9ad6aa04f/volumes/kubernetes.io~csi/pvc-c96acf32-db58-4046-8700-41af9e4f695d/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c96acf32-db58-4046-8700-41af9e4f695d","storage.kubernetes.io/csiProvisionerIdentity":"1633463058593-8081-csi-mock-csi-mock-volumes-2033"}},"Response":{},"Error":"","FullError":null}
Oct  5 19:44:24.377: INFO: Deleting pod "pvc-volume-tester-zzm9v" in namespace "csi-mock-volumes-2033"
Oct  5 19:44:24.409: INFO: Wait up to 5m0s for pod "pvc-volume-tester-zzm9v" to be fully deleted
Oct  5 19:44:28.118: INFO: >>> kubeConfig: /root/.kube/config
I1005 19:44:28.417474    5346 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/8fe01aad-7cc3-4d7d-be18-67d9ad6aa04f/volumes/kubernetes.io~csi/pvc-c96acf32-db58-4046-8700-41af9e4f695d/mount"},"Response":{},"Error":"","FullError":null}
I1005 19:44:28.521669    5346 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I1005 19:44:28.553132    5346 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c96acf32-db58-4046-8700-41af9e4f695d/globalmount"},"Response":{},"Error":"","FullError":null}
I1005 19:44:34.514680    5346 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Oct  5 19:44:35.503: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-l9prt", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2033", SelfLink:"", UID:"c96acf32-db58-4046-8700-41af9e4f695d", ResourceVersion:"42552", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769059860, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00373aea0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00373aeb8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000a8b190), VolumeMode:(*v1.PersistentVolumeMode)(0xc000a8b1a0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  5 19:44:35.503: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-l9prt", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2033", SelfLink:"", UID:"c96acf32-db58-4046-8700-41af9e4f695d", ResourceVersion:"42554", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769059860, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-46-201.ca-central-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c343f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c34408)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c34420), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c34438)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0050a76c0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0050a76d0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  5 19:44:35.503: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-l9prt", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2033", SelfLink:"", UID:"c96acf32-db58-4046-8700-41af9e4f695d", ResourceVersion:"42555", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769059860, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2033", "volume.kubernetes.io/selected-node":"ip-172-20-46-201.ca-central-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c34f48), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c34f60)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c34f78), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c34f90)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c34fa8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c34fc0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002b00730), VolumeMode:(*v1.PersistentVolumeMode)(0xc002b00740), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  5 19:44:35.503: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-l9prt", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2033", SelfLink:"", UID:"c96acf32-db58-4046-8700-41af9e4f695d", ResourceVersion:"42559", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769059860, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2033", "volume.kubernetes.io/selected-node":"ip-172-20-46-201.ca-central-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c34ff0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c35008)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c35020), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c35038)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c35050), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c35068)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-c96acf32-db58-4046-8700-41af9e4f695d", StorageClassName:(*string)(0xc002b00770), VolumeMode:(*v1.PersistentVolumeMode)(0xc002b00780), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Oct  5 19:44:35.503: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-l9prt", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2033", SelfLink:"", UID:"c96acf32-db58-4046-8700-41af9e4f695d", ResourceVersion:"42560", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63769059860, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2033", "volume.kubernetes.io/selected-node":"ip-172-20-46-201.ca-central-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c35098), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c350b0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c350c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c350e0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c350f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c35110)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-c96acf32-db58-4046-8700-41af9e4f695d", StorageClassName:(*string)(0xc002b007b0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002b007c0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 49 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    exhausted, late binding, no topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":36,"skipped":254,"failed":2,"failures":["[sig-network] DNS should provide DNS for services  [Conformance]","[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted."]}
Oct  5 19:45:33.991: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 63 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should adopt matching orphans and release non-matching pods
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:165
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":55,"skipped":335,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
Oct  5 19:45:41.948: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
Oct  5 19:43:06.310: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.3.79 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:06.310: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:07.588: INFO: Found all 1 expected endpoints: [netserver-0]
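
Each poll iteration below execs into the host-test pod and sends one UDP datagram with nc, expecting the target netserver to echo a hostname back; grep -v '^\s*$' strips blank output, so an empty reply makes the pipeline exit 1 and the iteration count as a failure. One iteration, runnable by hand with the pod, container, address, and port taken verbatim from the log (netserver-0 at 100.96.3.79 answered; netserver-1 at 100.96.2.73 does not):

kubectl --namespace=pod-network-test-1029 exec host-test-container-pod \
  -c agnhost-container -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'"
# Exit status 1 with empty stdout means no UDP reply arrived within 1 second.
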
Oct  5 19:43:07.588: INFO: Going to poll 100.96.2.73 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Oct  5 19:43:07.621: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:07.621: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:08.880: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:43:08.880: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:43:10.913: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:10.913: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:12.181: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:43:12.181: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:43:14.213: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:14.213: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:15.506: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:43:15.507: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:43:17.538: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:17.538: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:18.843: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:43:18.843: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:43:20.882: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:20.882: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:22.152: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:43:22.152: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:43:24.185: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:24.185: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:25.440: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:43:25.440: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:43:27.472: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:27.472: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:28.752: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:43:28.753: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:43:30.784: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:30.784: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:32.048: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:43:32.048: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:43:34.080: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:34.080: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:35.409: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:43:35.409: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:43:37.440: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:37.440: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:38.714: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:43:38.714: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:43:40.747: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:40.747: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:42.068: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:43:42.068: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:43:44.100: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:44.100: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:45.365: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:43:45.365: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:43:47.398: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:47.398: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:48.663: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:43:48.663: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:43:50.696: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:50.696: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:51.957: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:43:51.957: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:43:53.989: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:53.989: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:55.269: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:43:55.269: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:43:57.302: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:43:57.302: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:43:58.613: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:43:58.613: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:44:00.645: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:44:00.645: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:44:02.325: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:44:02.325: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:44:04.356: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:44:04.357: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:44:05.670: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:44:05.670: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:44:07.703: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:44:07.703: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:44:09.001: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:44:09.001: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:44:11.034: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:44:11.034: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:44:12.646: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:44:12.646: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:44:14.678: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:44:14.678: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:44:15.961: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:44:15.961: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
... skipping 96 lines ...
Oct  5 19:45:38.332: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1029 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct  5 19:45:38.332: INFO: >>> kubeConfig: /root/.kube/config
Oct  5 19:45:39.610: INFO: Failed to execute "echo hostName | nc -w 1 -u 100.96.2.73 8081 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Oct  5 19:45:39.610: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-1], actual=[])
Oct  5 19:45:41.611: INFO: 
Output of kubectl describe pod pod-network-test-1029/netserver-0:

Oct  5 19:45:41.611: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=pod-network-test-1029 describe pod netserver-0 --namespace=pod-network-test-1029'
Oct  5 19:45:41.865: INFO: stderr: ""
... skipping 237 lines ...
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m59s  default-scheduler  Successfully assigned pod-network-test-1029/netserver-3 to ip-172-20-46-201.ca-central-1.compute.internal
  Normal  Pulled     2m58s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    2m58s  kubelet            Created container webserver
  Normal  Started    2m58s  kubelet            Started container webserver

Oct  5 19:45:42.669: FAIL: Error dialing UDP from node to pod: failed to find expected endpoints, 
tries 46
Command echo hostName | nc -w 1 -u 100.96.2.73 8081
retrieved map[]
expected map[netserver-1:{}]

Full Stack Trace
... skipping 219 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Oct  5 19:45:42.669: Error dialing UDP from node to pod: failed to find expected endpoints, 
    tries 46
    Command echo hostName | nc -w 1 -u 100.96.2.73 8081
    retrieved map[]
    expected map[netserver-1:{}]

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":322,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
Oct  5 19:45:44.620: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] Ephemeralstorage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : projected
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":20,"skipped":102,"failed":2,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
Oct  5 19:45:50.785: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":22,"skipped":163,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:44:30.611: INFO: >>> kubeConfig: /root/.kube/config
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":23,"skipped":163,"failed":3,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
Oct  5 19:45:53.743: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
... skipping 81 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":21,"skipped":163,"failed":3,"failures":["[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-cli] Kubectl client Simple pod should handle in-cluster config"]}
Oct  5 19:46:04.399: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":41,"skipped":270,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
Oct  5 19:46:15.123: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":81,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should allow pods to hairpin back to themselves through services"]}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct  5 19:44:45.426: INFO: >>> kubeConfig: /root/.kube/config
... skipping 109 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":10,"skipped":81,"failed":3,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should allow pods to hairpin back to themselves through services"]}
Oct  5 19:46:25.074: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
Oct  5 19:43:44.801: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod dns-1220/dns-test-489d460a-f118-4af4-a589-64b654470178: the server is currently unable to handle the request (get pods dns-test-489d460a-f118-4af4-a589-64b654470178)
Oct  5 19:44:14.832: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-1220.svc.cluster.local from pod dns-1220/dns-test-489d460a-f118-4af4-a589-64b654470178: the server is currently unable to handle the request (get pods dns-test-489d460a-f118-4af4-a589-64b654470178)
Oct  5 19:44:44.873: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-1220/dns-test-489d460a-f118-4af4-a589-64b654470178: the server is currently unable to handle the request (get pods dns-test-489d460a-f118-4af4-a589-64b654470178)
Oct  5 19:45:14.904: INFO: Unable to read wheezy_udp@PodARecord from pod dns-1220/dns-test-489d460a-f118-4af4-a589-64b654470178: the server is currently unable to handle the request (get pods dns-test-489d460a-f118-4af4-a589-64b654470178)
Oct  5 19:45:44.935: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1220/dns-test-489d460a-f118-4af4-a589-64b654470178: the server is currently unable to handle the request (get pods dns-test-489d460a-f118-4af4-a589-64b654470178)
Oct  5 19:46:14.966: INFO: Unable to read jessie_udp@kubernetes.default from pod dns-1220/dns-test-489d460a-f118-4af4-a589-64b654470178: the server is currently unable to handle the request (get pods dns-test-489d460a-f118-4af4-a589-64b654470178)
Oct  5 19:46:44.671: FAIL: Unable to read jessie_tcp@kubernetes.default from pod dns-1220/dns-test-489d460a-f118-4af4-a589-64b654470178: Get "https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io/api/v1/namespaces/dns-1220/pods/dns-test-489d460a-f118-4af4-a589-64b654470178/proxy/results/jessie_tcp@kubernetes.default": context deadline exceeded

Full Stack Trace
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001645d48, 0x299a700, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0009c3b48, 0xc001645d48, 0xc0009c3b48, 0xc001645d48)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000ddef00, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
E1005 19:46:44.672472    5414 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Oct  5 19:46:44.671: Unable to read jessie_tcp@kubernetes.default from pod dns-1220/dns-test-489d460a-f118-4af4-a589-64b654470178: Get \"https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io/api/v1/namespaces/dns-1220/pods/dns-test-489d460a-f118-4af4-a589-64b654470178/proxy/results/jessie_tcp@kubernetes.default\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001645d48, 0x299a700, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0009c3b48, 0xc001645d48, 0xc0009c3b48, 0xc001645d48)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc001645d48, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc0040c0e00, 0x10, 0x10, 0x6ed05c6, 0x7, 0xc003bcfc00, 0x779f8f8, 0xc0036a7b80, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000ee9ce0, 0xc003bcfc00, 0xc0040c0e00, 0x10, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.3()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:107 +0x68f\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000ddef00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000ddef00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc000ddef00, 0x70e7b58)\n\t/usr/local/go/src/testing/testing.go:1193 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1238 +0x2b3"} (
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.

But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

... skipping 5 lines ...
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a6f0a0, 0xc0023022c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x6a6f0a0, 0xc0023022c0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc002c5e2c0, 0x153, 0x868a4a4, 0x7d, 0xd3, 0xc000e94000, 0x800)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61c84e0, 0x75c1ba0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc002c5e2c0, 0x153, 0xc001645788, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc002c5e2c0, 0x153, 0xc001645870, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Failf(0x6f73783, 0x24, 0xc001645ad0, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc0009c3b00, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001645d48, 0x299a700, 0x0, 0x0)
... skipping 203 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90

  Oct  5 19:46:44.671: Unable to read jessie_tcp@kubernetes.default from pod dns-1220/dns-test-489d460a-f118-4af4-a589-64b654470178: Get "https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io/api/v1/namespaces/dns-1220/pods/dns-test-489d460a-f118-4af4-a589-64b654470178/proxy/results/jessie_tcp@kubernetes.default": context deadline exceeded

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211
------------------------------
{"msg":"FAILED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":23,"skipped":158,"failed":1,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]"]}
Oct  5 19:46:46.403: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
Oct  5 19:43:00.920: INFO: stderr: ""
Oct  5 19:43:00.921: INFO: stdout: "true"
Oct  5 19:43:00.921: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-781 get pods update-demo-nautilus-r69mt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Oct  5 19:43:01.101: INFO: stderr: ""
Oct  5 19:43:01.101: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Oct  5 19:43:01.101: INFO: validating pod update-demo-nautilus-r69mt
Oct  5 19:43:31.132: INFO: update-demo-nautilus-r69mt is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-r69mt)
Oct  5 19:43:36.133: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-781 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Oct  5 19:43:36.322: INFO: stderr: ""
Oct  5 19:43:36.322: INFO: stdout: "update-demo-nautilus-r69mt update-demo-nautilus-vlwjn "
Oct  5 19:43:36.322: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-781 get pods update-demo-nautilus-r69mt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Oct  5 19:43:36.504: INFO: stderr: ""
Oct  5 19:43:36.504: INFO: stdout: "true"
Oct  5 19:43:36.504: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-781 get pods update-demo-nautilus-r69mt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Oct  5 19:43:36.681: INFO: stderr: ""
Oct  5 19:43:36.682: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Oct  5 19:43:36.682: INFO: validating pod update-demo-nautilus-r69mt
Oct  5 19:44:06.713: INFO: update-demo-nautilus-r69mt is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-r69mt)
... skipping 66 lines ...
Oct  5 19:47:45.454: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-781 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Oct  5 19:47:45.672: INFO: stderr: ""
Oct  5 19:47:45.673: INFO: stdout: "update-demo-nautilus-r69mt update-demo-nautilus-vlwjn "
Oct  5 19:47:45.673: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-781 get pods update-demo-nautilus-r69mt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Oct  5 19:47:45.858: INFO: stderr: ""
Oct  5 19:47:45.859: INFO: stdout: "true"
Oct  5 19:47:45.859: INFO: Running '/tmp/kubectl3639816471/kubectl --server=https://api.e2e-8d71322f12-62691.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-781 get pods update-demo-nautilus-r69mt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Oct  5 19:47:46.055: INFO: stderr: ""
Oct  5 19:47:46.056: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Oct  5 19:47:46.056: INFO: validating pod update-demo-nautilus-r69mt
Oct  5 19:48:16.087: INFO: update-demo-nautilus-r69mt is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-r69mt)
Oct  5 19:48:21.089: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:311 +0x29b
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002572a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 176 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Oct  5 19:48:21.089: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:311
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":19,"skipped":181,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-network] Services should be able to up and down services","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}
Oct  5 19:48:23.574: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:244.302 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":324,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
Oct  5 19:49:01.308: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-88698565-d11d-4423-a1d2-e25a969f2ad3]
STEP: Verifying pods for RC slow-terminating-unready-pod
Oct  5 19:32:39.114: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Oct  5 19:33:11.278: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:33:43.372: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
... skipping 18 lines ...
Oct  5 19:39:03.371: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:39:35.372: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:40:07.371: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:40:39.370: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:41:11.377: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:41:43.370: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:42:15.373: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:42:47.370: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:43:19.371: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:43:51.372: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:44:23.371: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:44:55.370: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:45:27.375: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:45:59.370: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:46:31.371: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:47:03.371: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:47:35.371: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:48:07.370: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:48:39.372: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:49:09.465: INFO: Controller slow-terminating-unready-pod: Failed to GET from replica 1 [slow-terminating-unready-pod-5fcvq]: the server is currently unable to handle the request (get pods slow-terminating-unready-pod-5fcvq)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63769059159, loc:(*time.Location)(0x9e12f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Oct  5 19:49:09.465: FAIL: Unexpected error:
    <*errors.errorString | 0xc002b4a380>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.21()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1688 +0xb99
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003f42a80)
... skipping 12 lines ...
STEP: Found 7 events.
Oct  5 19:49:09.629: INFO: At 2021-10-05 19:32:39 +0000 UTC - event for slow-terminating-unready-pod: {replication-controller } SuccessfulCreate: Created pod: slow-terminating-unready-pod-5fcvq
Oct  5 19:49:09.629: INFO: At 2021-10-05 19:32:39 +0000 UTC - event for slow-terminating-unready-pod-5fcvq: {default-scheduler } Scheduled: Successfully assigned services-7721/slow-terminating-unready-pod-5fcvq to ip-172-20-41-186.ca-central-1.compute.internal
Oct  5 19:49:09.629: INFO: At 2021-10-05 19:32:39 +0000 UTC - event for slow-terminating-unready-pod-5fcvq: {kubelet ip-172-20-41-186.ca-central-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Oct  5 19:49:09.629: INFO: At 2021-10-05 19:32:39 +0000 UTC - event for slow-terminating-unready-pod-5fcvq: {kubelet ip-172-20-41-186.ca-central-1.compute.internal} Created: Created container slow-terminating-unready-pod
Oct  5 19:49:09.629: INFO: At 2021-10-05 19:32:40 +0000 UTC - event for slow-terminating-unready-pod-5fcvq: {kubelet ip-172-20-41-186.ca-central-1.compute.internal} Started: Started container slow-terminating-unready-pod
Oct  5 19:49:09.629: INFO: At 2021-10-05 19:32:40 +0000 UTC - event for slow-terminating-unready-pod-5fcvq: {kubelet ip-172-20-41-186.ca-central-1.compute.internal} Unhealthy: Readiness probe failed: 
Oct  5 19:49:09.629: INFO: At 2021-10-05 19:49:09 +0000 UTC - event for slow-terminating-unready-pod: {replication-controller } SuccessfulDelete: Deleted pod: slow-terminating-unready-pod-5fcvq
Oct  5 19:49:09.659: INFO: POD                                 NODE                                            PHASE    GRACE  CONDITIONS
Oct  5 19:49:09.660: INFO: slow-terminating-unready-pod-5fcvq  ip-172-20-41-186.ca-central-1.compute.internal  Running  600s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-05 19:32:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-05 19:32:39 +0000 UTC ContainersNotReady containers with unready status: [slow-terminating-unready-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-05 19:32:39 +0000 UTC ContainersNotReady containers with unready status: [slow-terminating-unready-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-05 19:32:39 +0000 UTC  }]
Oct  5 19:49:09.660: INFO: 
Oct  5 19:49:09.691: INFO: 
Logging node info for node ip-172-20-32-132.ca-central-1.compute.internal
... skipping 127 lines ...
• Failure [992.468 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624

  Oct  5 19:49:09.465: Unexpected error:
      <*errors.errorString | 0xc002b4a380>: {
          s: "failed to wait for pods responding: timed out waiting for the condition",
      }
      failed to wait for pods responding: timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1688
------------------------------
{"msg":"FAILED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":13,"skipped":146,"failed":2,"failures":["[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-network] Services should create endpoints for unready pods"]}
Oct  5 19:49:11.336: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
Oct  5 19:44:06.299: INFO: PersistentVolume nfs-qgf2g found and phase=Bound (30.530202ms)
Oct  5 19:44:06.329: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-54684] to have phase Bound
Oct  5 19:44:06.361: INFO: PersistentVolumeClaim pvc-54684 found and phase=Bound (31.659472ms)
STEP: Checking pod has write access to PersistentVolumes
Oct  5 19:44:06.391: INFO: Creating nfs test pod
Oct  5 19:44:06.423: INFO: Pod should terminate with exitcode 0 (success)
Oct  5 19:44:06.423: INFO: Waiting up to 5m0s for pod "pvc-tester-v9rfq" in namespace "pv-4826" to be "Succeeded or Failed"
Oct  5 19:44:06.454: INFO: Pod "pvc-tester-v9rfq": Phase="Pending", Reason="", readiness=false. Elapsed: 30.636436ms
Oct  5 19:44:08.486: INFO: Pod "pvc-tester-v9rfq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062809229s
Oct  5 19:44:10.517: INFO: Pod "pvc-tester-v9rfq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094083817s
Oct  5 19:44:12.557: INFO: Pod "pvc-tester-v9rfq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.133497422s
STEP: Saw pod success
Oct  5 19:44:12.557: INFO: Pod "pvc-tester-v9rfq" satisfied condition "Succeeded or Failed"
Oct  5 19:44:12.557: INFO: Pod pvc-tester-v9rfq succeeded 
Oct  5 19:44:12.557: INFO: Deleting pod "pvc-tester-v9rfq" in namespace "pv-4826"
Oct  5 19:44:12.596: INFO: Wait up to 5m0s for pod "pvc-tester-v9rfq" to be fully deleted
Oct  5 19:44:12.661: INFO: Creating nfs test pod
Oct  5 19:44:12.693: INFO: Pod should terminate with exitcode 0 (success)
Oct  5 19:44:12.693: INFO: Waiting up to 5m0s for pod "pvc-tester-vfvw6" in namespace "pv-4826" to be "Succeeded or Failed"
Oct  5 19:44:12.731: INFO: Pod "pvc-tester-vfvw6": Phase="Pending", Reason="", readiness=false. Elapsed: 38.277226ms
Oct  5 19:44:14.762: INFO: Pod "pvc-tester-vfvw6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069466808s
Oct  5 19:44:16.797: INFO: Pod "pvc-tester-vfvw6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103937005s
Oct  5 19:44:18.828: INFO: Pod "pvc-tester-vfvw6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135451203s
Oct  5 19:44:20.860: INFO: Pod "pvc-tester-vfvw6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167356599s
Oct  5 19:44:22.892: INFO: Pod "pvc-tester-vfvw6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.198774327s
... skipping 138 lines ...
Oct  5 19:49:05.343: INFO: Pod "pvc-tester-vfvw6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.649697319s
Oct  5 19:49:07.375: INFO: Pod "pvc-tester-vfvw6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.682453628s
Oct  5 19:49:09.408: INFO: Pod "pvc-tester-vfvw6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.71466252s
Oct  5 19:49:11.439: INFO: Pod "pvc-tester-vfvw6": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.745907421s
Oct  5 19:49:13.439: INFO: Deleting pod "pvc-tester-vfvw6" in namespace "pv-4826"
Oct  5 19:49:13.472: INFO: Wait up to 5m0s for pod "pvc-tester-vfvw6" to be fully deleted
Oct  5 19:49:21.534: FAIL: Unexpected error:
    <*errors.errorString | 0xc000a19d00>: {
        s: "pod \"pvc-tester-vfvw6\" did not exit with Success: pod \"pvc-tester-vfvw6\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-vfvw6\" to be \"Succeeded or Failed\"",
    }
    pod "pvc-tester-vfvw6" did not exit with Success: pod "pvc-tester-vfvw6" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-vfvw6" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func22.2.4.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:248 +0x371
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00398d380)
... skipping 28 lines ...
Oct  5 19:49:37.851: INFO: At 2021-10-05 19:44:06 +0000 UTC - event for pvc-tester-v9rfq: {default-scheduler } Scheduled: Successfully assigned pv-4826/pvc-tester-v9rfq to ip-172-20-41-232.ca-central-1.compute.internal
Oct  5 19:49:37.851: INFO: At 2021-10-05 19:44:07 +0000 UTC - event for pvc-tester-v9rfq: {kubelet ip-172-20-41-232.ca-central-1.compute.internal} Started: Started container write-pod
Oct  5 19:49:37.852: INFO: At 2021-10-05 19:44:07 +0000 UTC - event for pvc-tester-v9rfq: {kubelet ip-172-20-41-232.ca-central-1.compute.internal} Created: Created container write-pod
Oct  5 19:49:37.852: INFO: At 2021-10-05 19:44:07 +0000 UTC - event for pvc-tester-v9rfq: {kubelet ip-172-20-41-232.ca-central-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine
Oct  5 19:49:37.852: INFO: At 2021-10-05 19:44:12 +0000 UTC - event for pvc-tester-vfvw6: {default-scheduler } Scheduled: Successfully assigned pv-4826/pvc-tester-vfvw6 to ip-172-20-32-132.ca-central-1.compute.internal
Oct  5 19:49:37.852: INFO: At 2021-10-05 19:46:15 +0000 UTC - event for pvc-tester-vfvw6: {kubelet ip-172-20-32-132.ca-central-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[kube-api-access-m62mq volume1]: timed out waiting for the condition
Oct  5 19:49:37.852: INFO: At 2021-10-05 19:47:13 +0000 UTC - event for pvc-tester-vfvw6: {kubelet ip-172-20-32-132.ca-central-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "nfs-65rv6" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 100.96.4.132:/exports /var/lib/kubelet/pods/1311c9c5-7155-4684-bf9a-82c284e89e69/volumes/kubernetes.io~nfs/nfs-65rv6
Output: mount.nfs: Connection timed out

Oct  5 19:49:37.852: INFO: At 2021-10-05 19:48:33 +0000 UTC - event for pvc-tester-vfvw6: {kubelet ip-172-20-32-132.ca-central-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[volume1 kube-api-access-m62mq]: timed out waiting for the condition
Oct  5 19:49:37.852: INFO: At 2021-10-05 19:49:21 +0000 UTC - event for nfs-server: {kubelet ip-172-20-41-232.ca-central-1.compute.internal} Killing: Stopping container nfs-server
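The FailedMount events above include the exact mount invocation and show a plain TCP-level timeout to the NFS server pod (100.96.4.132). A quick triage step for this symptom is to test raw reachability of the NFS port from inside the cluster network before suspecting the export itself; here is a minimal debugging sketch (not part of the test suite), with the address taken from the event and 2049 being the standard nfsd port.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address from the FailedMount event; 2049 is the standard nfsd port.
	addr := "100.96.4.132:2049"
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		// Matches the "mount.nfs: Connection timed out" symptom: the
		// network path to the server, rather than the export, is the
		// likely culprit.
		fmt.Printf("cannot reach %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s is reachable; check the export and permissions next\n", addr)
}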
... skipping 129 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 3 PVs and 3 PVCs: test write access [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:243

      Oct  5 19:49:21.534: Unexpected error:
          <*errors.errorString | 0xc000a19d00>: {
              s: "pod \"pvc-tester-vfvw6\" did not exit with Success: pod \"pvc-tester-vfvw6\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-vfvw6\" to be \"Succeeded or Failed\"",
          }
          pod "pvc-tester-vfvw6" did not exit with Success: pod "pvc-tester-vfvw6" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-vfvw6" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:248
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","total":-1,"completed":27,"skipped":136,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access"]}
Oct  5 19:49:39.613: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct  5 19:37:19.870: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-2279.svc.cluster.local from pod dns-2279/dns-test-105de018-a4f4-4120-b0e0-eebe6812339e: the server is currently unable to handle the request (get pods dns-test-105de018-a4f4-4120-b0e0-eebe6812339e)
Oct  5 19:37:49.900: INFO: Unable to read jessie_udp@dns-test-service-3.dns-2279.svc.cluster.local from pod dns-2279/dns-test-105de018-a4f4-4120-b0e0-eebe6812339e: the server is currently unable to handle the request (get pods dns-test-105de018-a4f4-4120-b0e0-eebe6812339e)
Oct  5 19:37:49.901: INFO: Lookups using dns-2279/dns-test-105de018-a4f4-4120-b0e0-eebe6812339e failed for: [wheezy_udp@dns-test-service-3.dns-2279.svc.cluster.local jessie_udp@dns-test-service-3.dns-2279.svc.cluster.local]

... skipping 40 lines (the same wheezy_udp/jessie_udp lookup failures repeated every ~65s through 19:48:39) ...

Oct  5 19:49:09.995: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-2279.svc.cluster.local from pod dns-2279/dns-test-105de018-a4f4-4120-b0e0-eebe6812339e: the server is currently unable to handle the request (get pods dns-test-105de018-a4f4-4120-b0e0-eebe6812339e)
Oct  5 19:49:40.026: INFO: Unable to read jessie_udp@dns-test-service-3.dns-2279.svc.cluster.local from pod dns-2279/dns-test-105de018-a4f4-4120-b0e0-eebe6812339e: the server is currently unable to handle the request (get pods dns-test-105de018-a4f4-4120-b0e0-eebe6812339e)
Oct  5 19:49:40.026: INFO: Lookups using dns-2279/dns-test-105de018-a4f4-4120-b0e0-eebe6812339e failed for: [wheezy_udp@dns-test-service-3.dns-2279.svc.cluster.local jessie_udp@dns-test-service-3.dns-2279.svc.cluster.local]

Oct  5 19:49:40.027: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002be240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 25 lines ...
Oct  5 19:49:40.127: INFO: At 2021-10-05 19:36:47 +0000 UTC - event for dns-test-105de018-a4f4-4120-b0e0-eebe6812339e: {kubelet ip-172-20-46-201.ca-central-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Oct  5 19:49:40.127: INFO: At 2021-10-05 19:36:47 +0000 UTC - event for dns-test-105de018-a4f4-4120-b0e0-eebe6812339e: {kubelet ip-172-20-46-201.ca-central-1.compute.internal} Created: Created container querier
Oct  5 19:49:40.127: INFO: At 2021-10-05 19:36:47 +0000 UTC - event for dns-test-105de018-a4f4-4120-b0e0-eebe6812339e: {kubelet ip-172-20-46-201.ca-central-1.compute.internal} Started: Started container querier
Oct  5 19:49:40.127: INFO: At 2021-10-05 19:36:47 +0000 UTC - event for dns-test-105de018-a4f4-4120-b0e0-eebe6812339e: {kubelet ip-172-20-46-201.ca-central-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4" already present on machine
Oct  5 19:49:40.127: INFO: At 2021-10-05 19:36:47 +0000 UTC - event for dns-test-105de018-a4f4-4120-b0e0-eebe6812339e: {kubelet ip-172-20-46-201.ca-central-1.compute.internal} Created: Created container jessie-querier
Oct  5 19:49:40.127: INFO: At 2021-10-05 19:36:47 +0000 UTC - event for dns-test-105de018-a4f4-4120-b0e0-eebe6812339e: {kubelet ip-172-20-46-201.ca-central-1.compute.internal} Started: Started container jessie-querier
Oct  5 19:49:40.127: INFO: At 2021-10-05 19:37:49 +0000 UTC - event for dns-test-105de018-a4f4-4120-b0e0-eebe6812339e: {kubelet ip-172-20-46-201.ca-central-1.compute.internal} BackOff: Back-off restarting failed container
Oct  5 19:49:40.127: INFO: At 2021-10-05 19:37:49 +0000 UTC - event for dns-test-105de018-a4f4-4120-b0e0-eebe6812339e: {kubelet ip-172-20-46-201.ca-central-1.compute.internal} BackOff: Back-off restarting failed container
Oct  5 19:49:40.157: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Oct  5 19:49:40.158: INFO: 
Oct  5 19:49:40.189: INFO: 
Logging node info for node ip-172-20-32-132.ca-central-1.compute.internal
Oct  5 19:49:40.219: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-32-132.ca-central-1.compute.internal    6e372300-3e30-443f-a6e1-d56e9d91996a 44623 0 2021-10-05 19:20:21 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ca-central-1 failure-domain.beta.kubernetes.io/zone:ca-central-1a io.kubernetes.storage.mock/node:some-mock-node kops.k8s.io/instancegroup:nodes-ca-central-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-32-132.ca-central-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node mounted_volume_expand:mounted-volume-expand-6916 node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.hostpath.csi/node:ip-172-20-32-132.ca-central-1.compute.internal topology.kubernetes.io/region:ca-central-1 topology.kubernetes.io/zone:ca-central-1a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kops-controller Update v1 2021-10-05 19:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}}} {e2e.test Update v1 2021-10-05 19:33:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:mounted_volume_expand":{}}}}} {kube-controller-manager Update v1 2021-10-05 19:42:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}}} {kubelet Update v1 2021-10-05 19:43:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.hostpath.csi/node":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}}}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ca-central-1a/i-02468cd98e1e52b62,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47455764480 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4061720576 0} {<nil>} 3966524Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42710187962 0} {<nil>} 42710187962 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3956862976 0} {<nil>} 3864124Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-05 19:48:27 +0000 UTC,LastTransitionTime:2021-10-05 19:20:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-05 19:48:27 +0000 UTC,LastTransitionTime:2021-10-05 19:20:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-05 19:48:27 +0000 UTC,LastTransitionTime:2021-10-05 19:20:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-05 19:48:27 +0000 UTC,LastTransitionTime:2021-10-05 19:20:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.32.132,},NodeAddress{Type:ExternalIP,Address:3.96.195.58,},NodeAddress{Type:Hostname,Address:ip-172-20-32-132.ca-central-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-32-132.ca-central-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-96-195-58.ca-central-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2ed6a81eb3494309396370c0043173,SystemUUID:ec2ed6a8-1eb3-4943-0939-6370c0043173,BootID:b3826fd6-2065-47e3-8b57-4ea7e4a65bf3,KernelVersion:5.10.69-flatcar,OSImage:Flatcar Container Linux by Kinvolk 2905.2.5 (Oklo),ContainerRuntimeVersion:containerd://1.5.4,KubeletVersion:v1.21.5,KubeProxyVersion:v1.21.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.21.5],SizeBytes:105352393,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:95843946,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/nginx@sha256:06e4235e95299b1d6d595c5ef4c41a9b12641f6683136c18394b858967cd1506 docker.io/library/nginx:latest],SizeBytes:53799606,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/kopeio/networking-agent@sha256:2d16bdbc3257c42cdc59b05b8fad86653033f19cfafa709f263e93c8f7002932 
docker.io/kopeio/networking-agent:1.0.20181028],SizeBytes:25781346,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1],SizeBytes:21212251,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:51f2dfde5bccac7854b3704689506aeecfb793328427b91115ba253a93e60782 k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0],SizeBytes:20194320,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09 k8s.gcr.io/sig-storage/csi-attacher:v3.1.0],SizeBytes:20103959,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:d2b357bb02430fee9eaa43b16083981463d260419fe3acb2f560ede5c129f6f5 k8s.gcr.io/sig-storage/hostpathplugin:v1.4.0],SizeBytes:13995876,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:e07f914c32f0505e4c470a62a40ee43f84cbf8dc46ff861f31b14457ccbad108 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1],SizeBytes:8415088,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[docker.io/library/busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0 
docker.io/library/busybox:1.27],SizeBytes:720019,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-5023^03c7469c-2613-11ec-8859-f2a9c091e11c],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct  5 19:49:40.220: INFO: 
... skipping 115 lines ...
• Failure [776.195 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Oct  5 19:49:40.027: Unexpected error:
      <*errors.errorString | 0xc0002be240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":16,"skipped":148,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
Oct  5 19:49:41.736: INFO: Running AfterSuite actions on all nodes


[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 1611 lines ...
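(Editor's note: the proxy errors that follow come from GETs against apiserver proxy subresource paths of the form /api/v1/namespaces/<ns>/pods/<pod>:<port>/proxy/. As a hedged illustration, not the e2e framework's code, the same request can be issued with client-go; the namespace, pod name, and port are copied from the log, everything else is boilerplate:)

    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent to GET /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:1080/proxy/
    	body, err := cs.CoreV1().RESTClient().Get().
    		Namespace("proxy-9164").
    		Resource("pods").
    		SubResource("proxy").
    		Name("proxy-service-b9w7t-76gcd:1080").
    		DoRaw(context.TODO())
    	if err != nil {
    		// A 503 ServiceUnavailable here, as throughout the log below, means
    		// the apiserver accepted the request but could not reach the pod.
    		fmt.Println("proxy request failed:", err)
    		return
    	}
    	fmt.Println(string(body))
    }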
Failure: error trying to reach service: dial tcp 100.96.3.50:1080: ... (503; 30.062523501s)
Oct  5 19:50:31.706: INFO: (19) /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname1/proxy/: k8s
Failure: error trying to reach service: dial tcp 100.96.3.50:160: i... (503; 30.062772893s)
Oct  5 19:50:31.738: INFO: Pod proxy-service-b9w7t-76gcd has the following error logs: 
Oct  5 19:50:31.739: FAIL: 0 (503; 30.035011007s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.035628563s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.035754154s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.035459894s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.036808986s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.036882452s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.036997219s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.036770127s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.036925242s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.036691219s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.064301882s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.064429669s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.064293662s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.064336754s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.064696454s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.064983558s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.038667115s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.03872757s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.038996393s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.038691027s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.038824993s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.038960838s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.038807661s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.038986137s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.039364788s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.03939929s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.060626147s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.063418172s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.063603441s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.065871425s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.066159071s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.06650715s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.033359386s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.034407726s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.038690859s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.038694356s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.038479906s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.038877374s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.038597963s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.041819994s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.041700256s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.041758615s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.062496503s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.062504975s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.062441951s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.062497496s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.062422872s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.062600444s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.040431809s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.040151283s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.04011486s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.040609718s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.040784759s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.040730131s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.040687155s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.040854479s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.04324311s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.043452931s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.063702756s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.063566458s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.063561983s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.063933631s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.064019956s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.06489685s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.032194182s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.032430354s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.03581745s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.035767211s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.035959634s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.03588386s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.035928962s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.036282721s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.036322591s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.03631183s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.064803042s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.064912875s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.064885231s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.064781549s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.065066964s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.065098568s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.031705139s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.03564891s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.035398805s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.035730424s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.035813702s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.03562007s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.035588523s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.035733104s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.035808154s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.03611171s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.062934898s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.063020793s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.0629203s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.062937982s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.063338501s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.063175198s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.030647067s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.033012865s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.035168264s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.035322226s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.035311581s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.038844355s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.038950626s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.038957503s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.039799457s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.039727206s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.0604026s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.061503835s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.062526879s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.062937199s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.064001896s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.064504536s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.031732573s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.032033777s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.033675885s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.034721746s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.035670195s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.037980184s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.038143559s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.038228938s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.038278121s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.038348857s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.061312193s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.061388638s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.061362473s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.061453947s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.061820619s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.062909982s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
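
The braced dump on each line is a Kubernetes metav1.Status printed verbatim: Status=Failure, Reason=ServiceUnavailable, Code=503, with a single UnexpectedServerResponse cause. Under that reading, the same error can be decoded programmatically; the helper below is an illustrative sketch with an assumed name, not the test's code:

package main

import (
	"errors"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// classify reports the Reason/Code/Message carried by a Kubernetes API error,
// matching the fields visible in the status dumps above. The helper name is
// an assumption for illustration.
func classify(err error) {
	var statusErr *apierrors.StatusError
	if errors.As(err, &statusErr) {
		st := statusErr.Status()
		fmt.Printf("%s (%d): %s\n", st.Reason, st.Code, st.Message)
	}
	if apierrors.IsServiceUnavailable(err) {
		fmt.Println("server reported 503 ServiceUnavailable; the backend behind the proxy is not answering")
	}
}

func main() {
	classify(fmt.Errorf("wrapped: %w",
		apierrors.NewServiceUnavailable("the server is currently unable to handle the request")))
}
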
8 (503; 30.033304805s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.033265386s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.033773765s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.033785169s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.034316769s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.034780154s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.035222815s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.035083476s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.035157375s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.035202976s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.063276686s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.06351789s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.063333827s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.063277315s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.06758576s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.067406862s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.036981368s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.041425569s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.041386506s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.041539316s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.041610876s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.041465374s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.047179118s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.047175901s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.047305696s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.047288776s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.063334843s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.063419496s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.063516865s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.063413518s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.064818855s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.064902145s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
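
Each numbered attempt fails after just over 30 seconds, consistent with a fixed ~30s per-request budget and a loop that retries the full set of proxy paths on every attempt. A rough sketch of that pattern, with assumed function and parameter names throughout:

package proxyprobe

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/rest"
)

// tryProxyPath gives one GET against an apiserver proxy path a 30-second
// budget and reports how long the failure took, mirroring the
// "attempt (status; elapsed): path ... gave status error" records above.
func tryProxyPath(client rest.Interface, path string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	start := time.Now()
	if _, err := client.Get().AbsPath(path).DoRaw(ctx); err != nil {
		return fmt.Errorf("path %s gave status error after %v: %w", path, time.Since(start), err)
	}
	return nil
}

// retryPaths walks the same set of paths on every numbered attempt, as the
// interleaved pods/... and services/... lines in the log suggest.
func retryPaths(client rest.Interface, paths []string, attempts int) {
	for attempt := 1; attempt <= attempts; attempt++ {
		for _, p := range paths {
			if err := tryProxyPath(client, p); err != nil {
				fmt.Printf("%d %v\n", attempt, err)
			}
		}
	}
}
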
10 (503; 30.034493773s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.0347692s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.034967383s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.034672043s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.034910099s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.035074612s): path /api/v1/namespaces/proxy-9164/pods/https:proxy-service-b9w7t-76gcd:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.036630313s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.036618119s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.036664706s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.036727439s): path /api/v1/namespaces/proxy-9164/services/http:proxy-service-b9w7t:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.060968938s): path /api/v1/namespaces/proxy-9164/pods/proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.060955176s): path /api/v1/namespaces/proxy-9164/pods/http:proxy-service-b9w7t-76gcd:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.0621626s): path /api/v1/namespaces/proxy-9164/services/proxy-service-b9w7t:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.062055904s): path /api/v1/namespaces/proxy-9164/services/https:proxy-service-b9w7t:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
10 (503; 30.061996438s): path /api/v1/namespaces/proxy-9164/services/http:proxy-se