Result: FAILURE
Tests: 1 failed / 68 succeeded
Started: 2021-10-14 12:23
Elapsed: 30m18s
Revision: master

Test Failures


EBS CSI Migration Suite [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data 6m27s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=EBS\sCSI\sMigration\sSuite\s\[ebs\-csi\-migration\]\sEBS\sCSI\sMigration\s\[Driver\:\saws\]\s\[Testpattern\:\sDynamic\sPV\s\(xfs\)\]\[Slow\]\svolumes\sshould\sstore\sdata$'
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
Oct 14 12:38:32.242: Failed to create client pod: timed out waiting for the condition
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:505
(stdout/stderr from junit_02.xml)



68 passed tests (collapsed)

416 skipped tests (collapsed)

Error lines from build-log.txt

... skipping 420 lines ...
#6 13.19   Installing : 7:device-mapper-libs-1.02.170-6.amzn2.5.x86_64             23/32 
#6 13.26   Installing : cryptsetup-libs-1.7.4-4.amzn2.x86_64                       24/32 
#6 13.35   Installing : elfutils-libs-0.176-2.amzn2.x86_64                         25/32 
#6 13.45   Installing : systemd-libs-219-78.amzn2.0.15.x86_64                      26/32 
#6 13.51   Installing : 1:dbus-libs-1.10.24-7.amzn2.x86_64                         27/32 
#6 14.57   Installing : systemd-219-78.amzn2.0.15.x86_64                           28/32 
#6 14.94 Failed to get D-Bus connection: Operation not permitted
#6 14.97   Installing : elfutils-default-yama-scope-0.176-2.amzn2.noarch           29/32 
#6 15.09   Installing : 1:dbus-1.10.24-7.amzn2.x86_64                              30/32 
#6 15.25   Installing : e2fsprogs-1.42.9-19.amzn2.x86_64                           31/32
#6 ...

#7 [builder 1/4] FROM docker.io/library/golang:1.16@sha256:26965ce4a2993a4eeecc6d496de9a74c69b0f03badc1de616106888b956c5bdc
... skipping 222 lines ...
## Validating cluster test-cluster-9359.k8s.local
#
Using cluster from kubectl context: test-cluster-9359.k8s.local

Validating cluster test-cluster-9359.k8s.local

W1014 12:25:47.073817    6083 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-9359-k8s-9dhabo-1202239508.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-9359-k8s-9dhabo-1202239508.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W1014 12:25:57.108340    6083 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-9359-k8s-9dhabo-1202239508.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-9359-k8s-9dhabo-1202239508.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W1014 12:26:07.139046    6083 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-9359-k8s-9dhabo-1202239508.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-9359-k8s-9dhabo-1202239508.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W1014 12:26:17.173285    6083 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-9359-k8s-9dhabo-1202239508.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-9359-k8s-9dhabo-1202239508.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W1014 12:26:27.203737    6083 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-9359-k8s-9dhabo-1202239508.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-9359-k8s-9dhabo-1202239508.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W1014 12:26:38.393408    6083 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-9359-k8s-9dhabo-1202239508.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-9359-k8s-9dhabo-1202239508.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W1014 12:27:01.055770    6083 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W1014 12:27:22.717671    6083 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W1014 12:27:44.356622    6083 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W1014 12:28:05.997171    6083 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
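The retry lines above follow the standard klog format (`W<MMDD> <HH:MM:SS.ffffff> <pid> <file:line>] <message>`). As a minimal sketch of how one might triage such a log, the snippet below parses a few of these lines and computes the retry cadence; the helper names and the hard-coded sample lines are illustrative, not part of any tool in this repository.

```python
import re
from datetime import datetime

# Sample klog warning lines copied from the validation output above
# (messages truncated for brevity).
LOG_LINES = [
    "W1014 12:25:47.073817    6083 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes",
    "W1014 12:25:57.108340    6083 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes",
    "W1014 12:26:07.139046    6083 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes",
]

# klog format: <severity><MMDD> <HH:MM:SS.ffffff> <pid> <file:line>] <message>
KLOG_RE = re.compile(r"^W(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+(\S+)\] (.*)$")

def parse(lines):
    """Return (timestamp, source location, message) for each klog warning."""
    events = []
    for line in lines:
        m = KLOG_RE.match(line)
        if m:
            _mmdd, ts, loc, msg = m.groups()
            events.append((datetime.strptime(ts, "%H:%M:%S.%f"), loc, msg))
    return events

events = parse(LOG_LINES)
# Seconds between consecutive retries (roughly 10s apart in this run).
gaps = [(b[0] - a[0]).total_seconds() for a, b in zip(events, events[1:])]
```

In this run the DNS-failure retries fire about every 10 seconds, then the interval widens once the ELB name resolves and the error changes to the empty-response "get nodes" failure.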
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
nodes-us-west-2c	Node	c5.large	1	1	us-west-2c
... skipping 6 lines ...
KIND	NAME						MESSAGE
Machine	i-01cf783622f628b24				machine "i-01cf783622f628b24" has not yet joined cluster
Machine	i-039c80b565763ca75				machine "i-039c80b565763ca75" has not yet joined cluster
Machine	i-08c519f61bd3c5eb3				machine "i-08c519f61bd3c5eb3" has not yet joined cluster
Node	ip-172-20-58-205.us-west-2.compute.internal	node "ip-172-20-58-205.us-west-2.compute.internal" of role "master" is not ready

Validation Failed
W1014 12:28:20.984289    6083 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 9 lines ...
Machine	i-039c80b565763ca75				machine "i-039c80b565763ca75" has not yet joined cluster
Machine	i-08c519f61bd3c5eb3				machine "i-08c519f61bd3c5eb3" has not yet joined cluster
Pod	kube-system/coredns-8f5559c9b-g2hw6		system-cluster-critical pod "coredns-8f5559c9b-g2hw6" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-6t5l2	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-6t5l2" is pending
Pod	kube-system/dns-controller-5d59c585d8-s9rh7	system-cluster-critical pod "dns-controller-5d59c585d8-s9rh7" is pending

Validation Failed
W1014 12:28:33.309697    6083 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 9 lines ...
Machine	i-039c80b565763ca75				machine "i-039c80b565763ca75" has not yet joined cluster
Machine	i-08c519f61bd3c5eb3				machine "i-08c519f61bd3c5eb3" has not yet joined cluster
Pod	kube-system/coredns-8f5559c9b-g2hw6		system-cluster-critical pod "coredns-8f5559c9b-g2hw6" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-6t5l2	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-6t5l2" is pending
Pod	kube-system/dns-controller-5d59c585d8-s9rh7	system-cluster-critical pod "dns-controller-5d59c585d8-s9rh7" is pending

Validation Failed
W1014 12:28:45.369447    6083 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 10 lines ...
Machine	i-08c519f61bd3c5eb3								machine "i-08c519f61bd3c5eb3" has not yet joined cluster
Pod	kube-system/coredns-8f5559c9b-g2hw6						system-cluster-critical pod "coredns-8f5559c9b-g2hw6" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-6t5l2					system-cluster-critical pod "coredns-autoscaler-6f594f4c58-6t5l2" is pending
Pod	kube-system/dns-controller-5d59c585d8-s9rh7					system-cluster-critical pod "dns-controller-5d59c585d8-s9rh7" is pending
Pod	kube-system/kube-controller-manager-ip-172-20-58-205.us-west-2.compute.internal	system-cluster-critical pod "kube-controller-manager-ip-172-20-58-205.us-west-2.compute.internal" is pending

Validation Failed
W1014 12:28:57.773892    6083 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 11 lines ...
Pod	kube-system/coredns-8f5559c9b-g2hw6					system-cluster-critical pod "coredns-8f5559c9b-g2hw6" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-6t5l2				system-cluster-critical pod "coredns-autoscaler-6f594f4c58-6t5l2" is pending
Pod	kube-system/dns-controller-5d59c585d8-s9rh7				system-cluster-critical pod "dns-controller-5d59c585d8-s9rh7" is pending
Pod	kube-system/kube-proxy-ip-172-20-58-205.us-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-58-205.us-west-2.compute.internal" is pending
Pod	kube-system/kube-scheduler-ip-172-20-58-205.us-west-2.compute.internal	system-cluster-critical pod "kube-scheduler-ip-172-20-58-205.us-west-2.compute.internal" is pending

Validation Failed
W1014 12:29:10.878285    6083 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 9 lines ...
Machine	i-039c80b565763ca75				machine "i-039c80b565763ca75" has not yet joined cluster
Machine	i-08c519f61bd3c5eb3				machine "i-08c519f61bd3c5eb3" has not yet joined cluster
Node	ip-172-20-58-205.us-west-2.compute.internal	master "ip-172-20-58-205.us-west-2.compute.internal" is missing kube-apiserver pod
Pod	kube-system/coredns-8f5559c9b-g2hw6		system-cluster-critical pod "coredns-8f5559c9b-g2hw6" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-6t5l2	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-6t5l2" is pending

Validation Failed
W1014 12:29:23.056020    6083 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 8 lines ...
Machine	i-01cf783622f628b24				machine "i-01cf783622f628b24" has not yet joined cluster
Machine	i-039c80b565763ca75				machine "i-039c80b565763ca75" has not yet joined cluster
Machine	i-08c519f61bd3c5eb3				machine "i-08c519f61bd3c5eb3" has not yet joined cluster
Pod	kube-system/coredns-8f5559c9b-g2hw6		system-cluster-critical pod "coredns-8f5559c9b-g2hw6" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-6t5l2	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-6t5l2" is pending

Validation Failed
W1014 12:29:35.217034    6083 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 10 lines ...
KIND	NAME						MESSAGE
Node	ip-172-20-100-132.us-west-2.compute.internal	node "ip-172-20-100-132.us-west-2.compute.internal" of role "node" is not ready
Node	ip-172-20-93-142.us-west-2.compute.internal	node "ip-172-20-93-142.us-west-2.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-8f5559c9b-g2hw6		system-cluster-critical pod "coredns-8f5559c9b-g2hw6" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-6t5l2	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-6t5l2" is pending

Validation Failed
W1014 12:29:47.409533    6083 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 7 lines ...
ip-172-20-93-142.us-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/coredns-8f5559c9b-zpt2z	system-cluster-critical pod "coredns-8f5559c9b-zpt2z" is not ready (coredns)

Validation Failed
W1014 12:29:59.585398    6083 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-2a	Master	t3.medium	1	1	us-west-2a
nodes-us-west-2a	Node	c5.large	1	1	us-west-2a
nodes-us-west-2b	Node	c5.large	1	1	us-west-2b
... skipping 86 lines ...

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:184
------------------------------
SSSSS
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode 
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 14 12:32:42.817: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-9359.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename volumemode
Oct 14 12:32:43.276: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
Oct 14 12:32:43.408: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Oct 14 12:32:43.960: INFO: Successfully created a new PD: "aws://us-west-2a/vol-018165b29bc3daec8".
Oct 14 12:32:43.960: INFO: Creating resource for pre-provisioned PV
Oct 14 12:32:43.960: INFO: Creating PVC and PV
... skipping 8 lines ...
Oct 14 12:32:54.720: INFO: PersistentVolumeClaim pvc-tcqb7 found but phase is Pending instead of Bound.
Oct 14 12:32:56.783: INFO: PersistentVolumeClaim pvc-tcqb7 found but phase is Pending instead of Bound.
Oct 14 12:32:58.849: INFO: PersistentVolumeClaim pvc-tcqb7 found and phase=Bound (14.524341429s)
Oct 14 12:32:58.849: INFO: Waiting up to 3m0s for PersistentVolume aws-n9rnx to have phase Bound
Oct 14 12:32:58.913: INFO: PersistentVolume aws-n9rnx found and phase=Bound (64.287022ms)
STEP: Creating pod
STEP: Waiting for the pod to fail
Oct 14 12:33:01.296: INFO: Deleting pod "pod-235f0388-d154-4f65-90cd-9e2a17f1e7c7" in namespace "volumemode-7015"
Oct 14 12:33:01.363: INFO: Wait up to 5m0s for pod "pod-235f0388-d154-4f65-90cd-9e2a17f1e7c7" to be fully deleted
STEP: Deleting pv and pvc
Oct 14 12:33:13.491: INFO: Deleting PersistentVolumeClaim "pvc-tcqb7"
Oct 14 12:33:13.556: INFO: Deleting PersistentVolume "aws-n9rnx"
Oct 14 12:33:13.871: INFO: Successfully deleted PD "aws://us-west-2a/vol-018165b29bc3daec8".
... skipping 7 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to use a volume in a pod with mismatched mode [Slow]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
------------------------------
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 14 12:33:14.065: INFO: Driver aws doesn't support CSIInlineVolume -- skipping
[AfterEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
... skipping 47 lines ...

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:108
------------------------------
SSS
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath directory is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 14 12:32:42.908: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-9359.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
Oct 14 12:32:43.360: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath directory is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240
Oct 14 12:32:43.494: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Oct 14 12:32:43.494: INFO: Creating resource for dynamic PV
Oct 14 12:32:43.495: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-5150xb8ck
STEP: creating a claim
Oct 14 12:32:43.560: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-5dth
STEP: Checking for subpath error in container status
Oct 14 12:33:13.903: INFO: Deleting pod "pod-subpath-test-dynamicpv-5dth" in namespace "provisioning-5150"
Oct 14 12:33:13.970: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-5dth" to be fully deleted
STEP: Deleting pod
Oct 14 12:33:20.101: INFO: Deleting pod "pod-subpath-test-dynamicpv-5dth" in namespace "provisioning-5150"
STEP: Deleting pvc
Oct 14 12:33:20.296: INFO: Deleting PersistentVolumeClaim "awsch728"
... skipping 13 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath directory is outside the volume [Slow][LinuxOnly]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode 
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 14 12:33:14.587: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-9359.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
Oct 14 12:33:14.903: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Oct 14 12:33:14.903: INFO: Creating resource for dynamic PV
Oct 14 12:33:14.903: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volumemode-8607qh2pp
STEP: creating a claim
STEP: Creating pod
STEP: Waiting for the pod to fail
Oct 14 12:33:21.356: INFO: Deleting pod "pod-210cac9f-e68b-480b-8758-c905b09d4359" in namespace "volumemode-8607"
Oct 14 12:33:21.421: INFO: Wait up to 5m0s for pod "pod-210cac9f-e68b-480b-8758-c905b09d4359" to be fully deleted
STEP: Deleting pvc
Oct 14 12:33:31.674: INFO: Deleting PersistentVolumeClaim "awshdbtx"
Oct 14 12:33:31.739: INFO: Waiting up to 5m0s for PersistentVolume pvc-6bea38b2-20b7-40ce-8dd6-62477afae679 to get deleted
Oct 14 12:33:31.802: INFO: PersistentVolume pvc-6bea38b2-20b7-40ce-8dd6-62477afae679 found and phase=Released (63.235164ms)
... skipping 11 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to use a volume in a pod with mismatched mode [Slow]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
------------------------------
SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
... skipping 6 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

      Distro debian doesn't support ntfs -- skipping

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 165 lines ...
Oct 14 12:34:10.728: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-7277thvkp
STEP: creating a claim
Oct 14 12:34:10.798: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-hcb6
STEP: Creating a pod to test atomic-volume-subpath
Oct 14 12:34:11.012: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-hcb6" in namespace "provisioning-7277" to be "Succeeded or Failed"
Oct 14 12:34:11.080: INFO: Pod "pod-subpath-test-dynamicpv-hcb6": Phase="Pending", Reason="", readiness=false. Elapsed: 68.033194ms
Oct 14 12:34:13.149: INFO: Pod "pod-subpath-test-dynamicpv-hcb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13684664s
Oct 14 12:34:15.217: INFO: Pod "pod-subpath-test-dynamicpv-hcb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204924221s
Oct 14 12:34:17.295: INFO: Pod "pod-subpath-test-dynamicpv-hcb6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.282211127s
Oct 14 12:34:19.363: INFO: Pod "pod-subpath-test-dynamicpv-hcb6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.350996334s
Oct 14 12:34:21.432: INFO: Pod "pod-subpath-test-dynamicpv-hcb6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.420043999s
... skipping 7 lines ...
Oct 14 12:34:37.988: INFO: Pod "pod-subpath-test-dynamicpv-hcb6": Phase="Running", Reason="", readiness=true. Elapsed: 26.975892361s
Oct 14 12:34:40.057: INFO: Pod "pod-subpath-test-dynamicpv-hcb6": Phase="Running", Reason="", readiness=true. Elapsed: 29.044231875s
Oct 14 12:34:42.125: INFO: Pod "pod-subpath-test-dynamicpv-hcb6": Phase="Running", Reason="", readiness=true. Elapsed: 31.113091393s
Oct 14 12:34:44.194: INFO: Pod "pod-subpath-test-dynamicpv-hcb6": Phase="Running", Reason="", readiness=true. Elapsed: 33.18176425s
Oct 14 12:34:46.263: INFO: Pod "pod-subpath-test-dynamicpv-hcb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.250264412s
STEP: Saw pod success
Oct 14 12:34:46.263: INFO: Pod "pod-subpath-test-dynamicpv-hcb6" satisfied condition "Succeeded or Failed"
Oct 14 12:34:46.330: INFO: Trying to get logs from node ip-172-20-93-142.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-hcb6 container test-container-subpath-dynamicpv-hcb6: <nil>
STEP: delete the pod
Oct 14 12:34:46.494: INFO: Waiting for pod pod-subpath-test-dynamicpv-hcb6 to disappear
Oct 14 12:34:46.562: INFO: Pod pod-subpath-test-dynamicpv-hcb6 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-hcb6
Oct 14 12:34:46.562: INFO: Deleting pod "pod-subpath-test-dynamicpv-hcb6" in namespace "provisioning-7277"
... skipping 38 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Inline-volume (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:233
------------------------------
... skipping 184 lines ...

      Distro debian doesn't support ntfs -- skipping

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:127
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology 
  should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 14 12:35:22.711: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-9359.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Oct 14 12:35:23.100: INFO: found topology map[topology.kubernetes.io/zone:us-west-2c]
Oct 14 12:35:23.100: INFO: found topology map[topology.kubernetes.io/zone:us-west-2a]
Oct 14 12:35:23.100: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Oct 14 12:35:23.101: INFO: Creating storage class object and pvc object for driver - sc: &StorageClass{ObjectMeta:{topology-8236pjrj6      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{{[{topology.kubernetes.io/zone [us-west-2a]}]},},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- topology-8236    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*topology-8236pjrj6,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating sc
... skipping 269 lines ...

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:233
------------------------------
S
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology 
  should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 14 12:35:27.096: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-9359.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Oct 14 12:35:27.486: INFO: found topology map[topology.kubernetes.io/zone:us-west-2c]
Oct 14 12:35:27.486: INFO: found topology map[topology.kubernetes.io/zone:us-west-2a]
Oct 14 12:35:27.486: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Oct 14 12:35:27.486: INFO: Creating storage class object and pvc object for driver - sc: &StorageClass{ObjectMeta:{topology-1627tbrh7      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{{[{topology.kubernetes.io/zone [us-west-2a]}]},},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- topology-1627    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*topology-1627tbrh7,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating sc
... skipping 208 lines ...
STEP: Deleting pod hostexec-ip-172-20-42-109.us-west-2.compute.internal-fdk78 in namespace volumemode-888
Oct 14 12:35:37.817: INFO: Deleting pod "pod-3f488139-4724-4f12-ab7c-64c3efa7883d" in namespace "volumemode-888"
Oct 14 12:35:37.882: INFO: Wait up to 5m0s for pod "pod-3f488139-4724-4f12-ab7c-64c3efa7883d" to be fully deleted
STEP: Deleting pv and pvc
Oct 14 12:35:40.008: INFO: Deleting PersistentVolumeClaim "pvc-mbs85"
Oct 14 12:35:40.074: INFO: Deleting PersistentVolume "aws-lx4kd"
Oct 14 12:35:40.352: INFO: Couldn't delete PD "aws://us-west-2a/vol-0cf0ac3e54a07d689", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0cf0ac3e54a07d689 is currently attached to i-01cf783622f628b24
	status code: 400, request id: 077a606b-a888-499f-bdde-4e621ed4c4c8
Oct 14 12:35:45.756: INFO: Successfully deleted PD "aws://us-west-2a/vol-0cf0ac3e54a07d689".
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 14 12:35:45.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-888" for this suite.
... skipping 59 lines ...
Oct 14 12:35:23.047: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Oct 14 12:35:23.621: INFO: Successfully created a new PD: "aws://us-west-2a/vol-020cb6078076fdc60".
Oct 14 12:35:23.621: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-bq8b
STEP: Creating a pod to test exec-volume-test
Oct 14 12:35:23.692: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-bq8b" in namespace "volume-8345" to be "Succeeded or Failed"
Oct 14 12:35:23.760: INFO: Pod "exec-volume-test-inlinevolume-bq8b": Phase="Pending", Reason="", readiness=false. Elapsed: 67.739345ms
Oct 14 12:35:25.829: INFO: Pod "exec-volume-test-inlinevolume-bq8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136371547s
Oct 14 12:35:27.900: INFO: Pod "exec-volume-test-inlinevolume-bq8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208097692s
Oct 14 12:35:29.969: INFO: Pod "exec-volume-test-inlinevolume-bq8b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276845358s
Oct 14 12:35:32.038: INFO: Pod "exec-volume-test-inlinevolume-bq8b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.346123841s
Oct 14 12:35:34.108: INFO: Pod "exec-volume-test-inlinevolume-bq8b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.415530201s
Oct 14 12:35:36.177: INFO: Pod "exec-volume-test-inlinevolume-bq8b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.484522652s
Oct 14 12:35:38.245: INFO: Pod "exec-volume-test-inlinevolume-bq8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.552741006s
STEP: Saw pod success
Oct 14 12:35:38.245: INFO: Pod "exec-volume-test-inlinevolume-bq8b" satisfied condition "Succeeded or Failed"
Oct 14 12:35:38.313: INFO: Trying to get logs from node ip-172-20-42-109.us-west-2.compute.internal pod exec-volume-test-inlinevolume-bq8b container exec-container-inlinevolume-bq8b: <nil>
STEP: delete the pod
Oct 14 12:35:38.475: INFO: Waiting for pod exec-volume-test-inlinevolume-bq8b to disappear
Oct 14 12:35:38.542: INFO: Pod exec-volume-test-inlinevolume-bq8b no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-bq8b
Oct 14 12:35:38.543: INFO: Deleting pod "exec-volume-test-inlinevolume-bq8b" in namespace "volume-8345"
Oct 14 12:35:38.810: INFO: Couldn't delete PD "aws://us-west-2a/vol-020cb6078076fdc60", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-020cb6078076fdc60 is currently attached to i-01cf783622f628b24
	status code: 400, request id: 995ae606-1732-4d0d-8b49-607c0641738b
Oct 14 12:35:44.244: INFO: Couldn't delete PD "aws://us-west-2a/vol-020cb6078076fdc60", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-020cb6078076fdc60 is currently attached to i-01cf783622f628b24
	status code: 400, request id: 88a6ec92-0af3-49ca-ad8d-5ea0f905cfd5
Oct 14 12:35:49.659: INFO: Couldn't delete PD "aws://us-west-2a/vol-020cb6078076fdc60", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-020cb6078076fdc60 is currently attached to i-01cf783622f628b24
	status code: 400, request id: d90d71ed-3789-4d97-9c5a-10d37544d325
Oct 14 12:35:55.130: INFO: Successfully deleted PD "aws://us-west-2a/vol-020cb6078076fdc60".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 14 12:35:55.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8345" for this suite.
... skipping 30 lines ...
Oct 14 12:35:55.610: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-3476lth5c
STEP: creating a claim
Oct 14 12:35:55.683: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-xnnx
STEP: Creating a pod to test subpath
Oct 14 12:35:55.895: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-xnnx" in namespace "provisioning-3476" to be "Succeeded or Failed"
Oct 14 12:35:55.972: INFO: Pod "pod-subpath-test-dynamicpv-xnnx": Phase="Pending", Reason="", readiness=false. Elapsed: 76.865728ms
Oct 14 12:35:58.041: INFO: Pod "pod-subpath-test-dynamicpv-xnnx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145630294s
Oct 14 12:36:00.109: INFO: Pod "pod-subpath-test-dynamicpv-xnnx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214470565s
Oct 14 12:36:02.182: INFO: Pod "pod-subpath-test-dynamicpv-xnnx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.287280093s
Oct 14 12:36:04.251: INFO: Pod "pod-subpath-test-dynamicpv-xnnx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.356405127s
Oct 14 12:36:06.320: INFO: Pod "pod-subpath-test-dynamicpv-xnnx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.424589358s
Oct 14 12:36:08.389: INFO: Pod "pod-subpath-test-dynamicpv-xnnx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.493537107s
STEP: Saw pod success
Oct 14 12:36:08.389: INFO: Pod "pod-subpath-test-dynamicpv-xnnx" satisfied condition "Succeeded or Failed"
Oct 14 12:36:08.457: INFO: Trying to get logs from node ip-172-20-93-142.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-xnnx container test-container-subpath-dynamicpv-xnnx: <nil>
STEP: delete the pod
Oct 14 12:36:08.602: INFO: Waiting for pod pod-subpath-test-dynamicpv-xnnx to disappear
Oct 14 12:36:08.670: INFO: Pod pod-subpath-test-dynamicpv-xnnx no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-xnnx
Oct 14 12:36:08.670: INFO: Deleting pod "pod-subpath-test-dynamicpv-xnnx" in namespace "provisioning-3476"
... skipping 765 lines ...
Oct 14 12:38:28.175: INFO: Waiting for pod aws-client to disappear
Oct 14 12:38:28.243: INFO: Pod aws-client still exists
Oct 14 12:38:30.174: INFO: Waiting for pod aws-client to disappear
Oct 14 12:38:30.242: INFO: Pod aws-client still exists
Oct 14 12:38:32.174: INFO: Waiting for pod aws-client to disappear
Oct 14 12:38:32.242: INFO: Pod aws-client no longer exists
Oct 14 12:38:32.242: FAIL: Failed to create client pod: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/volume.TestVolumeClient(...)
	/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:505
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3()
	/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:188 +0x4df
... skipping 29 lines ...
Oct 14 12:39:08.127: INFO: At 2021-10-14 12:33:07 +0000 UTC - event for aws-injector: {kubelet ip-172-20-93-142.us-west-2.compute.internal} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1"
Oct 14 12:39:08.127: INFO: At 2021-10-14 12:33:08 +0000 UTC - event for aws-injector: {kubelet ip-172-20-93-142.us-west-2.compute.internal} Started: Started container aws-injector
Oct 14 12:39:08.127: INFO: At 2021-10-14 12:33:08 +0000 UTC - event for aws-injector: {kubelet ip-172-20-93-142.us-west-2.compute.internal} Created: Created container aws-injector
Oct 14 12:39:08.127: INFO: At 2021-10-14 12:33:08 +0000 UTC - event for aws-injector: {kubelet ip-172-20-93-142.us-west-2.compute.internal} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" in 1.103720379s
Oct 14 12:39:08.127: INFO: At 2021-10-14 12:33:13 +0000 UTC - event for aws-injector: {kubelet ip-172-20-93-142.us-west-2.compute.internal} Killing: Stopping container aws-injector
Oct 14 12:39:08.127: INFO: At 2021-10-14 12:33:21 +0000 UTC - event for aws-client: {default-scheduler } Scheduled: Successfully assigned volume-9611/aws-client to ip-172-20-93-142.us-west-2.compute.internal
Oct 14 12:39:08.127: INFO: At 2021-10-14 12:33:42 +0000 UTC - event for aws-client: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "pvc-8187cd8a-b1d7-4197-8306-e44b0a42bd30" : rpc error: code = Internal desc = Could not attach volume "vol-04eee30a6592749c1" to node "i-08c519f61bd3c5eb3": attachment of disk "vol-04eee30a6592749c1" failed, expected device to be attached but was attaching
Oct 14 12:39:08.127: INFO: At 2021-10-14 12:33:42 +0000 UTC - event for aws-client: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "pvc-8187cd8a-b1d7-4197-8306-e44b0a42bd30" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
Oct 14 12:39:08.127: INFO: At 2021-10-14 12:35:24 +0000 UTC - event for aws-client: {kubelet ip-172-20-93-142.us-west-2.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[aws-volume-0], unattached volumes=[aws-volume-0 default-token-trkcl]: timed out waiting for the condition
Oct 14 12:39:08.193: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Oct 14 12:39:08.193: INFO: 
Oct 14 12:39:08.325: INFO: 
Logging node info for node ip-172-20-100-132.us-west-2.compute.internal
Oct 14 12:39:08.392: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-100-132.us-west-2.compute.internal    ca0984d6-0821-43f0-92c3-2c151afbd0bf 2682 0 2021-10-14 12:29:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-west-2 failure-domain.beta.kubernetes.io/zone:us-west-2c kops.k8s.io/instancegroup:nodes-us-west-2c kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-100-132.us-west-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:us-west-2c topology.kubernetes.io/region:us-west-2 topology.kubernetes.io/zone:us-west-2c] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-039c80b565763ca75"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kops-controller Update v1 2021-10-14 12:29:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}}} {kube-controller-manager Update v1 2021-10-14 12:29:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-14 12:30:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:archi
tecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///us-west-2c/i-039c80b565763ca75,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133167038464 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3900354560 0} {<nil>} 3808940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119850334420 0} {<nil>} 119850334420 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3795496960 0} {<nil>} 3706540Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-14 12:29:47 +0000 UTC,LastTransitionTime:2021-10-14 12:29:47 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-14 12:36:09 +0000 UTC,LastTransitionTime:2021-10-14 12:29:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-14 12:36:09 +0000 UTC,LastTransitionTime:2021-10-14 12:29:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-14 12:36:09 +0000 UTC,LastTransitionTime:2021-10-14 12:29:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has 
sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-14 12:36:09 +0000 UTC,LastTransitionTime:2021-10-14 12:29:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.100.132,},NodeAddress{Type:ExternalIP,Address:54.200.76.84,},NodeAddress{Type:Hostname,Address:ip-172-20-100-132.us-west-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-100-132.us-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-200-76-84.us-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2958a457303cfa68962f6ebca07d37,SystemUUID:ec2958a4-5730-3cfa-6896-2f6ebca07d37,BootID:ecb12b39-b68b-4c1c-91e2-2b9fc9684dc9,KernelVersion:5.11.0-1019-aws,OSImage:Ubuntu 20.04.3 LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.20.8,KubeProxyVersion:v1.20.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[607362164682.dkr.ecr.us-west-2.amazonaws.com/aws-ebs-csi-driver@sha256:3552cb1e2696e202886b5823b93b3d2f417f6190f28292223b4f7ecf394a6c06 607362164682.dkr.ecr.us-west-2.amazonaws.com/aws-ebs-csi-driver:9359],SizeBytes:119397790,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:0c867c82a0a8ce6d093595f9d2e4b10517d6c9c26323075de9d82d9f7d056bfc k8s.gcr.io/kube-proxy:v1.20.8],SizeBytes:52056682,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:695505fcfcc69f1cf35665dce487aad447adbb9af69b796d6437f869015d1157 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1],SizeBytes:21212251,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:50c3cfd458fc8e0bf3c8c521eac39172009382fc66dc5044a330d137c6ed0b09 
k8s.gcr.io/sig-storage/csi-attacher:v3.1.0],SizeBytes:20103959,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:9af9bf28430b00a0cedeb2ec29acadce45e6afcecd8bdf31c793c624cfa75fa7 k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3],SizeBytes:19500777,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:5a8d85cdd1c80f43fb8fe6dcde1fae707a3177aaf0a786ff4b9f6f20247ec3ff k8s.gcr.io/sig-storage/csi-resizer:v1.0.0],SizeBytes:19466174,},ContainerImage{Names:[k8s.gcr.io/sig-storage/snapshot-controller@sha256:00fcc441ea9f72899c25eed61d602272a2a58c5f0014332bdcb5ac24acef08e4 k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0],SizeBytes:18952261,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
... skipping 108 lines ...
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (xfs)][Slow] volumes
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data [It]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Oct 14 12:38:32.242: Failed to create client pod: timed out waiting for the condition

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:505
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Oct 14 12:39:10.807: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 37 lines ...
Oct 14 12:38:07.535: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-7533zwqtw      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-7533    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-7533zwqtw,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-7533    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-7533zwqtw,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-7533    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-7533zwqtw,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-7533zwqtw    ce3a20ff-14c5-4ebb-a686-cf8c3c540a22 3373 0 2021-10-14 12:38:07 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e-kubernetes.test Update storage.k8s.io/v1 2021-10-14 12:38:07 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-g2k2v pvc- provisioning-7533  0321a4c2-ae97-47ad-824f-2b52bb0c2cbf 3379 0 2021-10-14 12:38:07 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e-kubernetes.test Update v1 2021-10-14 12:38:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}},"f:status":{"f:phase":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-7533zwqtw,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Deleting pod pod-0e81cbdd-4801-457a-97b0-56aff887d837 in namespace provisioning-7533
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Oct 14 12:38:30.293: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-vbtgj" in namespace "provisioning-7533" to be "Succeeded or Failed"
Oct 14 12:38:30.361: INFO: Pod "pvc-volume-tester-writer-vbtgj": Phase="Pending", Reason="", readiness=false. Elapsed: 67.934068ms
Oct 14 12:38:32.429: INFO: Pod "pvc-volume-tester-writer-vbtgj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136256426s
Oct 14 12:38:34.498: INFO: Pod "pvc-volume-tester-writer-vbtgj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204818818s
Oct 14 12:38:36.567: INFO: Pod "pvc-volume-tester-writer-vbtgj": Phase="Running", Reason="", readiness=true. Elapsed: 6.273992588s
Oct 14 12:38:38.636: INFO: Pod "pvc-volume-tester-writer-vbtgj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.343272033s
STEP: Saw pod success
Oct 14 12:38:38.636: INFO: Pod "pvc-volume-tester-writer-vbtgj" satisfied condition "Succeeded or Failed"
Oct 14 12:38:38.780: INFO: Pod pvc-volume-tester-writer-vbtgj has the following logs: 
Oct 14 12:38:38.780: INFO: Deleting pod "pvc-volume-tester-writer-vbtgj" in namespace "provisioning-7533"
Oct 14 12:38:38.856: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-vbtgj" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-93-142.us-west-2.compute.internal"
Oct 14 12:38:39.145: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-5rxss" in namespace "provisioning-7533" to be "Succeeded or Failed"
Oct 14 12:38:39.213: INFO: Pod "pvc-volume-tester-reader-5rxss": Phase="Pending", Reason="", readiness=false. Elapsed: 67.819022ms
Oct 14 12:38:41.281: INFO: Pod "pvc-volume-tester-reader-5rxss": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136470219s
Oct 14 12:38:43.351: INFO: Pod "pvc-volume-tester-reader-5rxss": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.206099654s
STEP: Saw pod success
Oct 14 12:38:43.351: INFO: Pod "pvc-volume-tester-reader-5rxss" satisfied condition "Succeeded or Failed"
Oct 14 12:38:43.489: INFO: Pod pvc-volume-tester-reader-5rxss has the following logs: hello world

Oct 14 12:38:43.489: INFO: Deleting pod "pvc-volume-tester-reader-5rxss" in namespace "provisioning-7533"
Oct 14 12:38:43.566: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-5rxss" to be fully deleted
Oct 14 12:38:43.634: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-g2k2v] to have phase Bound
Oct 14 12:38:43.702: INFO: PersistentVolumeClaim pvc-g2k2v found and phase=Bound (68.093937ms)
... skipping 41 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:233
------------------------------
... skipping 10 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Inline-volume (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:233
------------------------------
... skipping 49 lines ...
Oct 14 12:39:25.023: INFO: PersistentVolumeClaim pvc-ff2ss found but phase is Pending instead of Bound.
Oct 14 12:39:27.092: INFO: PersistentVolumeClaim pvc-ff2ss found and phase=Bound (6.275592476s)
Oct 14 12:39:27.092: INFO: Waiting up to 3m0s for PersistentVolume aws-9lhnt to have phase Bound
Oct 14 12:39:27.160: INFO: PersistentVolume aws-9lhnt found and phase=Bound (67.698626ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-8b9r
STEP: Creating a pod to test exec-volume-test
Oct 14 12:39:27.365: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-8b9r" in namespace "volume-9157" to be "Succeeded or Failed"
Oct 14 12:39:27.432: INFO: Pod "exec-volume-test-preprovisionedpv-8b9r": Phase="Pending", Reason="", readiness=false. Elapsed: 67.819161ms
Oct 14 12:39:29.501: INFO: Pod "exec-volume-test-preprovisionedpv-8b9r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136686077s
Oct 14 12:39:31.571: INFO: Pod "exec-volume-test-preprovisionedpv-8b9r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206280937s
Oct 14 12:39:33.642: INFO: Pod "exec-volume-test-preprovisionedpv-8b9r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276872258s
Oct 14 12:39:35.711: INFO: Pod "exec-volume-test-preprovisionedpv-8b9r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.346640586s
Oct 14 12:39:37.780: INFO: Pod "exec-volume-test-preprovisionedpv-8b9r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.415350129s
Oct 14 12:39:39.848: INFO: Pod "exec-volume-test-preprovisionedpv-8b9r": Phase="Pending", Reason="", readiness=false. Elapsed: 12.483799546s
Oct 14 12:39:41.918: INFO: Pod "exec-volume-test-preprovisionedpv-8b9r": Phase="Pending", Reason="", readiness=false. Elapsed: 14.553339769s
Oct 14 12:39:43.987: INFO: Pod "exec-volume-test-preprovisionedpv-8b9r": Phase="Pending", Reason="", readiness=false. Elapsed: 16.622494882s
Oct 14 12:39:46.057: INFO: Pod "exec-volume-test-preprovisionedpv-8b9r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.691920683s
STEP: Saw pod success
Oct 14 12:39:46.057: INFO: Pod "exec-volume-test-preprovisionedpv-8b9r" satisfied condition "Succeeded or Failed"
Oct 14 12:39:46.125: INFO: Trying to get logs from node ip-172-20-42-109.us-west-2.compute.internal pod exec-volume-test-preprovisionedpv-8b9r container exec-container-preprovisionedpv-8b9r: <nil>
STEP: delete the pod
Oct 14 12:39:46.273: INFO: Waiting for pod exec-volume-test-preprovisionedpv-8b9r to disappear
Oct 14 12:39:46.341: INFO: Pod exec-volume-test-preprovisionedpv-8b9r no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-8b9r
Oct 14 12:39:46.341: INFO: Deleting pod "exec-volume-test-preprovisionedpv-8b9r" in namespace "volume-9157"
STEP: Deleting pv and pvc
Oct 14 12:39:46.409: INFO: Deleting PersistentVolumeClaim "pvc-ff2ss"
Oct 14 12:39:46.479: INFO: Deleting PersistentVolume "aws-9lhnt"
Oct 14 12:39:46.738: INFO: Couldn't delete PD "aws://us-west-2a/vol-026849b4908c4a3f7", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-026849b4908c4a3f7 is currently attached to i-01cf783622f628b24
	status code: 400, request id: 196d7868-1ab6-4833-83fd-51a15bb55315
Oct 14 12:39:52.199: INFO: Successfully deleted PD "aws://us-west-2a/vol-026849b4908c4a3f7".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 14 12:39:52.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9157" for this suite.
... skipping 964 lines ...
Oct 14 12:40:48.077: INFO: Waiting for pod aws-client to disappear
Oct 14 12:40:48.142: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Oct 14 12:40:48.142: INFO: Deleting PersistentVolumeClaim "pvc-dngqh"
Oct 14 12:40:48.208: INFO: Deleting PersistentVolume "aws-ctlrp"
Oct 14 12:40:48.455: INFO: Couldn't delete PD "aws://us-west-2a/vol-08d0fd63735f25a76", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-08d0fd63735f25a76 is currently attached to i-01cf783622f628b24
	status code: 400, request id: 2848d888-112a-406b-aa03-cdddec678c9b
Oct 14 12:40:53.927: INFO: Couldn't delete PD "aws://us-west-2a/vol-08d0fd63735f25a76", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-08d0fd63735f25a76 is currently attached to i-01cf783622f628b24
	status code: 400, request id: 667b4de2-1cbe-407a-899e-66f27c9a064f
Oct 14 12:40:59.303: INFO: Couldn't delete PD "aws://us-west-2a/vol-08d0fd63735f25a76", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-08d0fd63735f25a76 is currently attached to i-01cf783622f628b24
	status code: 400, request id: ff017a04-0f24-44b2-94fc-810409dfa5d6
Oct 14 12:41:04.720: INFO: Successfully deleted PD "aws://us-west-2a/vol-08d0fd63735f25a76".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 14 12:41:04.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9231" for this suite.
... skipping 32 lines ...

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:233
------------------------------
SSSS
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode 
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 14 12:40:56.059: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-9359.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
Oct 14 12:40:56.398: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Oct 14 12:40:56.694: INFO: Successfully created a new PD: "aws://us-west-2a/vol-029458a279c24a012".
Oct 14 12:40:56.694: INFO: Creating resource for pre-provisioned PV
Oct 14 12:40:56.694: INFO: Creating PVC and PV
... skipping 9 lines ...
Oct 14 12:41:09.417: INFO: PersistentVolumeClaim pvc-pwrfh found but phase is Pending instead of Bound.
Oct 14 12:41:11.485: INFO: PersistentVolumeClaim pvc-pwrfh found but phase is Pending instead of Bound.
Oct 14 12:41:13.556: INFO: PersistentVolumeClaim pvc-pwrfh found and phase=Bound (16.620524942s)
Oct 14 12:41:13.556: INFO: Waiting up to 3m0s for PersistentVolume aws-kkjjp to have phase Bound
Oct 14 12:41:13.623: INFO: PersistentVolume aws-kkjjp found and phase=Bound (67.585799ms)
STEP: Creating pod
STEP: Waiting for the pod to fail
Oct 14 12:41:13.976: INFO: Deleting pod "pod-82207a14-6b03-44f4-92f2-a6023d107528" in namespace "volumemode-6783"
Oct 14 12:41:14.045: INFO: Wait up to 5m0s for pod "pod-82207a14-6b03-44f4-92f2-a6023d107528" to be fully deleted
STEP: Deleting pv and pvc
Oct 14 12:41:24.181: INFO: Deleting PersistentVolumeClaim "pvc-pwrfh"
Oct 14 12:41:24.251: INFO: Deleting PersistentVolume "aws-kkjjp"
Oct 14 12:41:24.532: INFO: Successfully deleted PD "aws://us-west-2a/vol-029458a279c24a012".
... skipping 7 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to use a volume in a pod with mismatched mode [Slow]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
... skipping 157 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

      Distro debian doesn't support ntfs -- skipping

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 57 lines ...
Oct 14 12:41:40.489: INFO: Pod aws-client still exists
Oct 14 12:41:42.425: INFO: Waiting for pod aws-client to disappear
Oct 14 12:41:42.490: INFO: Pod aws-client still exists
Oct 14 12:41:44.425: INFO: Waiting for pod aws-client to disappear
Oct 14 12:41:44.490: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Oct 14 12:41:44.675: INFO: Couldn't delete PD "aws://us-west-2a/vol-05bd9661b223e9e1b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05bd9661b223e9e1b is currently attached to i-01cf783622f628b24
	status code: 400, request id: 2a8fee6a-0393-4b7f-ba03-222332a24bd5
Oct 14 12:41:50.148: INFO: Couldn't delete PD "aws://us-west-2a/vol-05bd9661b223e9e1b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05bd9661b223e9e1b is currently attached to i-01cf783622f628b24
	status code: 400, request id: 9ddbf92c-ba13-40dc-adcb-7d4ef2317f76
Oct 14 12:41:55.575: INFO: Successfully deleted PD "aws://us-west-2a/vol-05bd9661b223e9e1b".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 14 12:41:55.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4211" for this suite.
... skipping 30 lines ...
Oct 14 12:41:25.031: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-5349wxnjb
STEP: creating a claim
Oct 14 12:41:25.101: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-bj27
STEP: Creating a pod to test subpath
Oct 14 12:41:25.309: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-bj27" in namespace "provisioning-5349" to be "Succeeded or Failed"
Oct 14 12:41:25.378: INFO: Pod "pod-subpath-test-dynamicpv-bj27": Phase="Pending", Reason="", readiness=false. Elapsed: 68.944058ms
Oct 14 12:41:27.447: INFO: Pod "pod-subpath-test-dynamicpv-bj27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13737075s
Oct 14 12:41:29.515: INFO: Pod "pod-subpath-test-dynamicpv-bj27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.20583933s
Oct 14 12:41:31.584: INFO: Pod "pod-subpath-test-dynamicpv-bj27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.274392661s
Oct 14 12:41:33.652: INFO: Pod "pod-subpath-test-dynamicpv-bj27": Phase="Pending", Reason="", readiness=false. Elapsed: 8.342944951s
Oct 14 12:41:35.721: INFO: Pod "pod-subpath-test-dynamicpv-bj27": Phase="Pending", Reason="", readiness=false. Elapsed: 10.411728102s
Oct 14 12:41:37.791: INFO: Pod "pod-subpath-test-dynamicpv-bj27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.481603261s
STEP: Saw pod success
Oct 14 12:41:37.791: INFO: Pod "pod-subpath-test-dynamicpv-bj27" satisfied condition "Succeeded or Failed"
Oct 14 12:41:37.859: INFO: Trying to get logs from node ip-172-20-93-142.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-bj27 container test-container-subpath-dynamicpv-bj27: <nil>
STEP: delete the pod
Oct 14 12:41:38.025: INFO: Waiting for pod pod-subpath-test-dynamicpv-bj27 to disappear
Oct 14 12:41:38.092: INFO: Pod pod-subpath-test-dynamicpv-bj27 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-bj27
Oct 14 12:41:38.093: INFO: Deleting pod "pod-subpath-test-dynamicpv-bj27" in namespace "provisioning-5349"
STEP: Creating pod pod-subpath-test-dynamicpv-bj27
STEP: Creating a pod to test subpath
Oct 14 12:41:38.229: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-bj27" in namespace "provisioning-5349" to be "Succeeded or Failed"
Oct 14 12:41:38.297: INFO: Pod "pod-subpath-test-dynamicpv-bj27": Phase="Pending", Reason="", readiness=false. Elapsed: 68.179678ms
Oct 14 12:41:40.366: INFO: Pod "pod-subpath-test-dynamicpv-bj27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136456088s
Oct 14 12:41:42.434: INFO: Pod "pod-subpath-test-dynamicpv-bj27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204724408s
Oct 14 12:41:44.503: INFO: Pod "pod-subpath-test-dynamicpv-bj27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.273507409s
STEP: Saw pod success
Oct 14 12:41:44.503: INFO: Pod "pod-subpath-test-dynamicpv-bj27" satisfied condition "Succeeded or Failed"
Oct 14 12:41:44.571: INFO: Trying to get logs from node ip-172-20-93-142.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-bj27 container test-container-subpath-dynamicpv-bj27: <nil>
STEP: delete the pod
Oct 14 12:41:44.720: INFO: Waiting for pod pod-subpath-test-dynamicpv-bj27 to disappear
Oct 14 12:41:44.787: INFO: Pod pod-subpath-test-dynamicpv-bj27 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-bj27
Oct 14 12:41:44.787: INFO: Deleting pod "pod-subpath-test-dynamicpv-bj27" in namespace "provisioning-5349"
... skipping 157 lines ...
      should access to two volumes with different volume mode and retain data across pod recreation on the same node [LinuxOnly]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:214
------------------------------
SSS
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 14 12:41:36.181: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-9359.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278
Oct 14 12:41:36.501: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Oct 14 12:41:36.501: INFO: Creating resource for dynamic PV
Oct 14 12:41:36.501: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-44604z4w6
STEP: creating a claim
Oct 14 12:41:36.566: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-7f9r
STEP: Checking for subpath error in container status
Oct 14 12:42:00.894: INFO: Deleting pod "pod-subpath-test-dynamicpv-7f9r" in namespace "provisioning-4460"
Oct 14 12:42:00.962: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-7f9r" to be fully deleted
STEP: Deleting pod
Oct 14 12:42:11.091: INFO: Deleting pod "pod-subpath-test-dynamicpv-7f9r" in namespace "provisioning-4460"
STEP: Deleting pvc
Oct 14 12:42:11.282: INFO: Deleting PersistentVolumeClaim "aws9bbvp"
... skipping 13 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278
------------------------------
SS
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes 
  should allow exec of files on the volume
... skipping 12 lines ...
Oct 14 12:42:26.559: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Oct 14 12:42:27.032: INFO: Successfully created a new PD: "aws://us-west-2a/vol-01cd951c99d744fe1".
Oct 14 12:42:27.032: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-x9cw
STEP: Creating a pod to test exec-volume-test
Oct 14 12:42:27.101: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-x9cw" in namespace "volume-3738" to be "Succeeded or Failed"
Oct 14 12:42:27.168: INFO: Pod "exec-volume-test-inlinevolume-x9cw": Phase="Pending", Reason="", readiness=false. Elapsed: 66.557651ms
Oct 14 12:42:29.237: INFO: Pod "exec-volume-test-inlinevolume-x9cw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135454824s
Oct 14 12:42:31.306: INFO: Pod "exec-volume-test-inlinevolume-x9cw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204976503s
Oct 14 12:42:33.375: INFO: Pod "exec-volume-test-inlinevolume-x9cw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.274084881s
Oct 14 12:42:35.443: INFO: Pod "exec-volume-test-inlinevolume-x9cw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.342049679s
Oct 14 12:42:37.511: INFO: Pod "exec-volume-test-inlinevolume-x9cw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.409910618s
Oct 14 12:42:39.580: INFO: Pod "exec-volume-test-inlinevolume-x9cw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.478196815s
Oct 14 12:42:41.653: INFO: Pod "exec-volume-test-inlinevolume-x9cw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.551976994s
Oct 14 12:42:43.722: INFO: Pod "exec-volume-test-inlinevolume-x9cw": Phase="Pending", Reason="", readiness=false. Elapsed: 16.620244905s
Oct 14 12:42:45.790: INFO: Pod "exec-volume-test-inlinevolume-x9cw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.689033772s
STEP: Saw pod success
Oct 14 12:42:45.790: INFO: Pod "exec-volume-test-inlinevolume-x9cw" satisfied condition "Succeeded or Failed"
Oct 14 12:42:45.857: INFO: Trying to get logs from node ip-172-20-42-109.us-west-2.compute.internal pod exec-volume-test-inlinevolume-x9cw container exec-container-inlinevolume-x9cw: <nil>
STEP: delete the pod
Oct 14 12:42:46.023: INFO: Waiting for pod exec-volume-test-inlinevolume-x9cw to disappear
Oct 14 12:42:46.090: INFO: Pod exec-volume-test-inlinevolume-x9cw no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-x9cw
Oct 14 12:42:46.090: INFO: Deleting pod "exec-volume-test-inlinevolume-x9cw" in namespace "volume-3738"
Oct 14 12:42:46.343: INFO: Couldn't delete PD "aws://us-west-2a/vol-01cd951c99d744fe1", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01cd951c99d744fe1 is currently attached to i-01cf783622f628b24
	status code: 400, request id: a7bc5ec0-d26c-4a79-85f7-d0e8759f8ef4
Oct 14 12:42:51.764: INFO: Successfully deleted PD "aws://us-west-2a/vol-01cd951c99d744fe1".
[AfterEach] [Testpattern: Inline-volume (xfs)][Slow] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 14 12:42:51.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-3738" for this suite.
... skipping 341 lines ...
Oct 14 12:43:11.707: INFO: PersistentVolumeClaim pvc-5c28m found but phase is Pending instead of Bound.
Oct 14 12:43:13.772: INFO: PersistentVolumeClaim pvc-5c28m found and phase=Bound (2.130747381s)
Oct 14 12:43:13.772: INFO: Waiting up to 3m0s for PersistentVolume aws-zg257 to have phase Bound
Oct 14 12:43:13.837: INFO: PersistentVolume aws-zg257 found and phase=Bound (64.518704ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-p2gq
STEP: Creating a pod to test exec-volume-test
Oct 14 12:43:14.033: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-p2gq" in namespace "volume-1213" to be "Succeeded or Failed"
Oct 14 12:43:14.102: INFO: Pod "exec-volume-test-preprovisionedpv-p2gq": Phase="Pending", Reason="", readiness=false. Elapsed: 68.719974ms
Oct 14 12:43:16.167: INFO: Pod "exec-volume-test-preprovisionedpv-p2gq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133747469s
Oct 14 12:43:18.234: INFO: Pod "exec-volume-test-preprovisionedpv-p2gq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200715288s
Oct 14 12:43:20.299: INFO: Pod "exec-volume-test-preprovisionedpv-p2gq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.265477935s
STEP: Saw pod success
Oct 14 12:43:20.299: INFO: Pod "exec-volume-test-preprovisionedpv-p2gq" satisfied condition "Succeeded or Failed"
Oct 14 12:43:20.363: INFO: Trying to get logs from node ip-172-20-42-109.us-west-2.compute.internal pod exec-volume-test-preprovisionedpv-p2gq container exec-container-preprovisionedpv-p2gq: <nil>
STEP: delete the pod
Oct 14 12:43:20.503: INFO: Waiting for pod exec-volume-test-preprovisionedpv-p2gq to disappear
Oct 14 12:43:20.567: INFO: Pod exec-volume-test-preprovisionedpv-p2gq no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-p2gq
Oct 14 12:43:20.567: INFO: Deleting pod "exec-volume-test-preprovisionedpv-p2gq" in namespace "volume-1213"
STEP: Deleting pv and pvc
Oct 14 12:43:20.631: INFO: Deleting PersistentVolumeClaim "pvc-5c28m"
Oct 14 12:43:20.697: INFO: Deleting PersistentVolume "aws-zg257"
Oct 14 12:43:20.923: INFO: Couldn't delete PD "aws://us-west-2a/vol-0fd50e05757c24422", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0fd50e05757c24422 is currently attached to i-01cf783622f628b24
	status code: 400, request id: 438429a6-8447-49f2-b94c-628e2be53611
Oct 14 12:43:26.317: INFO: Couldn't delete PD "aws://us-west-2a/vol-0fd50e05757c24422", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0fd50e05757c24422 is currently attached to i-01cf783622f628b24
	status code: 400, request id: 75b41df3-d83d-45d4-ada5-1aa2d2a80555
Oct 14 12:43:31.776: INFO: Successfully deleted PD "aws://us-west-2a/vol-0fd50e05757c24422".
[AfterEach] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 14 12:43:31.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1213" for this suite.
... skipping 168 lines ...
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
SS
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode 
  should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296

[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 14 12:43:14.553: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-9359.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename volumemode
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to use a volume in a pod with mismatched mode [Slow]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
Oct 14 12:43:14.873: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Oct 14 12:43:14.873: INFO: Creating resource for dynamic PV
Oct 14 12:43:14.873: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volumemode-616025qhl
STEP: creating a claim
STEP: Creating pod
STEP: Waiting for the pod to fail
Oct 14 12:43:19.335: INFO: Deleting pod "pod-4bfde3c6-7d77-486c-b768-fe9eba15f5b1" in namespace "volumemode-6160"
Oct 14 12:43:19.401: INFO: Wait up to 5m0s for pod "pod-4bfde3c6-7d77-486c-b768-fe9eba15f5b1" to be fully deleted
STEP: Deleting pvc
Oct 14 12:43:31.657: INFO: Deleting PersistentVolumeClaim "awsl7r8l"
Oct 14 12:43:31.722: INFO: Waiting up to 5m0s for PersistentVolume pvc-66563ea5-39d7-4b62-966d-eb6e7ec2a7dd to get deleted
Oct 14 12:43:31.786: INFO: PersistentVolume pvc-66563ea5-39d7-4b62-966d-eb6e7ec2a7dd found and phase=Released (64.430798ms)
... skipping 11 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to use a volume in a pod with mismatched mode [Slow]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:296
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes 
  should store data
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

... skipping 110 lines ...
Oct 14 12:43:47.496: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-4733fc7kq
STEP: creating a claim
Oct 14 12:43:47.560: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-8ltc
STEP: Creating a pod to test subpath
Oct 14 12:43:47.762: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-8ltc" in namespace "provisioning-4733" to be "Succeeded or Failed"
Oct 14 12:43:47.826: INFO: Pod "pod-subpath-test-dynamicpv-8ltc": Phase="Pending", Reason="", readiness=false. Elapsed: 64.125258ms
Oct 14 12:43:49.890: INFO: Pod "pod-subpath-test-dynamicpv-8ltc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128408771s
Oct 14 12:43:51.955: INFO: Pod "pod-subpath-test-dynamicpv-8ltc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193025474s
Oct 14 12:43:54.019: INFO: Pod "pod-subpath-test-dynamicpv-8ltc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.25754866s
Oct 14 12:43:56.085: INFO: Pod "pod-subpath-test-dynamicpv-8ltc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.323057565s
Oct 14 12:43:58.151: INFO: Pod "pod-subpath-test-dynamicpv-8ltc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.389160897s
Oct 14 12:44:00.218: INFO: Pod "pod-subpath-test-dynamicpv-8ltc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.456235522s
Oct 14 12:44:02.283: INFO: Pod "pod-subpath-test-dynamicpv-8ltc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.521596059s
Oct 14 12:44:04.348: INFO: Pod "pod-subpath-test-dynamicpv-8ltc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.586097241s
Oct 14 12:44:06.412: INFO: Pod "pod-subpath-test-dynamicpv-8ltc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.650495317s
Oct 14 12:44:08.482: INFO: Pod "pod-subpath-test-dynamicpv-8ltc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.72058753s
Oct 14 12:44:10.548: INFO: Pod "pod-subpath-test-dynamicpv-8ltc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.785841515s
STEP: Saw pod success
Oct 14 12:44:10.548: INFO: Pod "pod-subpath-test-dynamicpv-8ltc" satisfied condition "Succeeded or Failed"
Oct 14 12:44:10.611: INFO: Trying to get logs from node ip-172-20-93-142.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-8ltc container test-container-volume-dynamicpv-8ltc: <nil>
STEP: delete the pod
Oct 14 12:44:10.751: INFO: Waiting for pod pod-subpath-test-dynamicpv-8ltc to disappear
Oct 14 12:44:10.814: INFO: Pod pod-subpath-test-dynamicpv-8ltc no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-8ltc
Oct 14 12:44:10.814: INFO: Deleting pod "pod-subpath-test-dynamicpv-8ltc" in namespace "provisioning-4733"
... skipping 44 lines ...
Oct 14 12:43:37.954: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-6007qk8jz
STEP: creating a claim
Oct 14 12:43:38.023: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-rjv2
STEP: Creating a pod to test subpath
Oct 14 12:43:38.232: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rjv2" in namespace "provisioning-6007" to be "Succeeded or Failed"
Oct 14 12:43:38.300: INFO: Pod "pod-subpath-test-dynamicpv-rjv2": Phase="Pending", Reason="", readiness=false. Elapsed: 67.892583ms
Oct 14 12:43:40.369: INFO: Pod "pod-subpath-test-dynamicpv-rjv2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136748312s
Oct 14 12:43:42.444: INFO: Pod "pod-subpath-test-dynamicpv-rjv2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21182399s
Oct 14 12:43:44.515: INFO: Pod "pod-subpath-test-dynamicpv-rjv2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.282126046s
Oct 14 12:43:46.583: INFO: Pod "pod-subpath-test-dynamicpv-rjv2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35055134s
Oct 14 12:43:48.651: INFO: Pod "pod-subpath-test-dynamicpv-rjv2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.419061272s
... skipping 2 lines ...
Oct 14 12:43:54.859: INFO: Pod "pod-subpath-test-dynamicpv-rjv2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.626215728s
Oct 14 12:43:56.928: INFO: Pod "pod-subpath-test-dynamicpv-rjv2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.695673174s
Oct 14 12:43:58.997: INFO: Pod "pod-subpath-test-dynamicpv-rjv2": Phase="Pending", Reason="", readiness=false. Elapsed: 20.764588406s
Oct 14 12:44:01.066: INFO: Pod "pod-subpath-test-dynamicpv-rjv2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.833177251s
Oct 14 12:44:03.135: INFO: Pod "pod-subpath-test-dynamicpv-rjv2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.902269091s
STEP: Saw pod success
Oct 14 12:44:03.135: INFO: Pod "pod-subpath-test-dynamicpv-rjv2" satisfied condition "Succeeded or Failed"
Oct 14 12:44:03.203: INFO: Trying to get logs from node ip-172-20-93-142.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-rjv2 container test-container-subpath-dynamicpv-rjv2: <nil>
STEP: delete the pod
Oct 14 12:44:03.355: INFO: Waiting for pod pod-subpath-test-dynamicpv-rjv2 to disappear
Oct 14 12:44:03.423: INFO: Pod pod-subpath-test-dynamicpv-rjv2 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-rjv2
Oct 14 12:44:03.423: INFO: Deleting pod "pod-subpath-test-dynamicpv-rjv2" in namespace "provisioning-6007"
... skipping 112 lines ...
Oct 14 12:44:26.918: INFO: Creating resource for dynamic PV
Oct 14 12:44:26.918: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8755kdr8q
STEP: creating a claim
Oct 14 12:44:26.983: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod to format volume volume-prep-provisioning-8755
Oct 14 12:44:27.179: INFO: Waiting up to 5m0s for pod "volume-prep-provisioning-8755" in namespace "provisioning-8755" to be "Succeeded or Failed"
Oct 14 12:44:27.243: INFO: Pod "volume-prep-provisioning-8755": Phase="Pending", Reason="", readiness=false. Elapsed: 63.857263ms
Oct 14 12:44:29.308: INFO: Pod "volume-prep-provisioning-8755": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128386278s
Oct 14 12:44:31.373: INFO: Pod "volume-prep-provisioning-8755": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193793153s
Oct 14 12:44:33.438: INFO: Pod "volume-prep-provisioning-8755": Phase="Pending", Reason="", readiness=false. Elapsed: 6.258219708s
Oct 14 12:44:35.503: INFO: Pod "volume-prep-provisioning-8755": Phase="Pending", Reason="", readiness=false. Elapsed: 8.324049873s
Oct 14 12:44:37.568: INFO: Pod "volume-prep-provisioning-8755": Phase="Pending", Reason="", readiness=false. Elapsed: 10.388374825s
Oct 14 12:44:39.637: INFO: Pod "volume-prep-provisioning-8755": Phase="Pending", Reason="", readiness=false. Elapsed: 12.457408709s
Oct 14 12:44:41.701: INFO: Pod "volume-prep-provisioning-8755": Phase="Pending", Reason="", readiness=false. Elapsed: 14.52174912s
Oct 14 12:44:43.769: INFO: Pod "volume-prep-provisioning-8755": Phase="Pending", Reason="", readiness=false. Elapsed: 16.589922592s
Oct 14 12:44:45.834: INFO: Pod "volume-prep-provisioning-8755": Phase="Pending", Reason="", readiness=false. Elapsed: 18.654295769s
Oct 14 12:44:47.898: INFO: Pod "volume-prep-provisioning-8755": Phase="Pending", Reason="", readiness=false. Elapsed: 20.718721578s
Oct 14 12:44:49.962: INFO: Pod "volume-prep-provisioning-8755": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.782931412s
STEP: Saw pod success
Oct 14 12:44:49.962: INFO: Pod "volume-prep-provisioning-8755" satisfied condition "Succeeded or Failed"
Oct 14 12:44:49.962: INFO: Deleting pod "volume-prep-provisioning-8755" in namespace "provisioning-8755"
Oct 14 12:44:50.032: INFO: Wait up to 5m0s for pod "volume-prep-provisioning-8755" to be fully deleted
STEP: Creating pod pod-subpath-test-dynamicpv-nkjl
STEP: Checking for subpath error in container status
Oct 14 12:44:54.291: INFO: Deleting pod "pod-subpath-test-dynamicpv-nkjl" in namespace "provisioning-8755"
Oct 14 12:44:54.361: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-nkjl" to be fully deleted
STEP: Deleting pod
Oct 14 12:44:54.426: INFO: Deleting pod "pod-subpath-test-dynamicpv-nkjl" in namespace "provisioning-8755"
STEP: Deleting pvc
Oct 14 12:44:54.618: INFO: Deleting PersistentVolumeClaim "awshfshz"
... skipping 283 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:233
------------------------------
... skipping 18 lines ...

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:233
------------------------------
SS
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 14 12:45:30.416: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-9359.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
Oct 14 12:45:30.737: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Oct 14 12:45:30.737: INFO: Creating resource for dynamic PV
Oct 14 12:45:30.737: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-6158bw48n
STEP: creating a claim
Oct 14 12:45:30.801: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-vr4t
STEP: Checking for subpath error in container status
Oct 14 12:45:45.139: INFO: Deleting pod "pod-subpath-test-dynamicpv-vr4t" in namespace "provisioning-6158"
Oct 14 12:45:45.206: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-vr4t" to be fully deleted
STEP: Deleting pod
Oct 14 12:45:51.353: INFO: Deleting pod "pod-subpath-test-dynamicpv-vr4t" in namespace "provisioning-6158"
STEP: Deleting pvc
Oct 14 12:45:51.545: INFO: Deleting PersistentVolumeClaim "awsqgb4l"
... skipping 13 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267
------------------------------
SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
... skipping 202 lines ...
Oct 14 12:46:14.641: INFO: Creating resource for dynamic PV
Oct 14 12:46:14.641: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-6585jqrrb
STEP: creating a claim
STEP: Expanding non-expandable pvc
Oct 14 12:46:14.846: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Oct 14 12:46:14.981: INFO: Error updating pvc awscvtcv: PersistentVolumeClaim "awscvtcv" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6585jqrrb",
  	... // 2 identical fields
  }

... skipping 210 lines ...
Oct 14 12:46:45.253: INFO: Error updating pvc awscvtcv: PersistentVolumeClaim "awscvtcv" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 367 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:233
------------------------------
... skipping 101 lines ...
Oct 14 12:46:35.136: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-4479t55f
STEP: creating a claim
Oct 14 12:46:35.204: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-6rn7
STEP: Creating a pod to test exec-volume-test
Oct 14 12:46:35.414: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-6rn7" in namespace "volume-447" to be "Succeeded or Failed"
Oct 14 12:46:35.482: INFO: Pod "exec-volume-test-dynamicpv-6rn7": Phase="Pending", Reason="", readiness=false. Elapsed: 67.460773ms
Oct 14 12:46:37.549: INFO: Pod "exec-volume-test-dynamicpv-6rn7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13485873s
Oct 14 12:46:39.617: INFO: Pod "exec-volume-test-dynamicpv-6rn7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202869798s
Oct 14 12:46:41.685: INFO: Pod "exec-volume-test-dynamicpv-6rn7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.270296036s
Oct 14 12:46:43.753: INFO: Pod "exec-volume-test-dynamicpv-6rn7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.338913151s
Oct 14 12:46:45.822: INFO: Pod "exec-volume-test-dynamicpv-6rn7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.407177943s
Oct 14 12:46:47.889: INFO: Pod "exec-volume-test-dynamicpv-6rn7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.474840241s
STEP: Saw pod success
Oct 14 12:46:47.889: INFO: Pod "exec-volume-test-dynamicpv-6rn7" satisfied condition "Succeeded or Failed"
Oct 14 12:46:47.956: INFO: Trying to get logs from node ip-172-20-93-142.us-west-2.compute.internal pod exec-volume-test-dynamicpv-6rn7 container exec-container-dynamicpv-6rn7: <nil>
STEP: delete the pod
Oct 14 12:46:48.124: INFO: Waiting for pod exec-volume-test-dynamicpv-6rn7 to disappear
Oct 14 12:46:48.191: INFO: Pod exec-volume-test-dynamicpv-6rn7 no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-6rn7
Oct 14 12:46:48.191: INFO: Deleting pod "exec-volume-test-dynamicpv-6rn7" in namespace "volume-447"
... skipping 172 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:233
------------------------------
... skipping 48 lines ...
Oct 14 12:47:11.590: INFO: PersistentVolumeClaim pvc-4f86n found but phase is Pending instead of Bound.
Oct 14 12:47:13.653: INFO: PersistentVolumeClaim pvc-4f86n found and phase=Bound (4.188684s)
Oct 14 12:47:13.653: INFO: Waiting up to 3m0s for PersistentVolume aws-hqq7m to have phase Bound
Oct 14 12:47:13.716: INFO: PersistentVolume aws-hqq7m found and phase=Bound (63.084778ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-pwh2
STEP: Creating a pod to test exec-volume-test
Oct 14 12:47:13.906: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-pwh2" in namespace "volume-3121" to be "Succeeded or Failed"
Oct 14 12:47:13.969: INFO: Pod "exec-volume-test-preprovisionedpv-pwh2": Phase="Pending", Reason="", readiness=false. Elapsed: 62.681559ms
Oct 14 12:47:16.034: INFO: Pod "exec-volume-test-preprovisionedpv-pwh2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128043075s
Oct 14 12:47:18.099: INFO: Pod "exec-volume-test-preprovisionedpv-pwh2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192283135s
Oct 14 12:47:20.163: INFO: Pod "exec-volume-test-preprovisionedpv-pwh2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.256137667s
STEP: Saw pod success
Oct 14 12:47:20.163: INFO: Pod "exec-volume-test-preprovisionedpv-pwh2" satisfied condition "Succeeded or Failed"
Oct 14 12:47:20.226: INFO: Trying to get logs from node ip-172-20-42-109.us-west-2.compute.internal pod exec-volume-test-preprovisionedpv-pwh2 container exec-container-preprovisionedpv-pwh2: <nil>
STEP: delete the pod
Oct 14 12:47:20.367: INFO: Waiting for pod exec-volume-test-preprovisionedpv-pwh2 to disappear
Oct 14 12:47:20.430: INFO: Pod exec-volume-test-preprovisionedpv-pwh2 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-pwh2
Oct 14 12:47:20.430: INFO: Deleting pod "exec-volume-test-preprovisionedpv-pwh2" in namespace "volume-3121"
STEP: Deleting pv and pvc
Oct 14 12:47:20.493: INFO: Deleting PersistentVolumeClaim "pvc-4f86n"
Oct 14 12:47:20.557: INFO: Deleting PersistentVolume "aws-hqq7m"
Oct 14 12:47:20.783: INFO: Couldn't delete PD "aws://us-west-2a/vol-02753b92afb1bb525", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-02753b92afb1bb525 is currently attached to i-01cf783622f628b24
	status code: 400, request id: 4e8de972-0296-476e-a26c-6f59eef3beb4
Oct 14 12:47:26.182: INFO: Couldn't delete PD "aws://us-west-2a/vol-02753b92afb1bb525", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-02753b92afb1bb525 is currently attached to i-01cf783622f628b24
	status code: 400, request id: 206eef4a-241d-4374-9b92-530c63a9d9da
Oct 14 12:47:31.633: INFO: Successfully deleted PD "aws://us-west-2a/vol-02753b92afb1bb525".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 14 12:47:31.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-3121" for this suite.
... skipping 112 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Inline-volume (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:233
------------------------------
... skipping 77 lines ...
Oct 14 12:47:30.369: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Oct 14 12:47:30.880: INFO: Successfully created a new PD: "aws://us-west-2a/vol-032c7b5a329a8a8f4".
Oct 14 12:47:30.880: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-9m4p
STEP: Creating a pod to test exec-volume-test
Oct 14 12:47:30.952: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-9m4p" in namespace "volume-5861" to be "Succeeded or Failed"
Oct 14 12:47:31.020: INFO: Pod "exec-volume-test-inlinevolume-9m4p": Phase="Pending", Reason="", readiness=false. Elapsed: 67.861438ms
Oct 14 12:47:33.090: INFO: Pod "exec-volume-test-inlinevolume-9m4p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137163573s
Oct 14 12:47:35.158: INFO: Pod "exec-volume-test-inlinevolume-9m4p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205598157s
Oct 14 12:47:37.226: INFO: Pod "exec-volume-test-inlinevolume-9m4p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.273643123s
Oct 14 12:47:39.295: INFO: Pod "exec-volume-test-inlinevolume-9m4p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.343113069s
Oct 14 12:47:41.364: INFO: Pod "exec-volume-test-inlinevolume-9m4p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.411342856s
STEP: Saw pod success
Oct 14 12:47:41.364: INFO: Pod "exec-volume-test-inlinevolume-9m4p" satisfied condition "Succeeded or Failed"
Oct 14 12:47:41.432: INFO: Trying to get logs from node ip-172-20-42-109.us-west-2.compute.internal pod exec-volume-test-inlinevolume-9m4p container exec-container-inlinevolume-9m4p: <nil>
STEP: delete the pod
Oct 14 12:47:41.575: INFO: Waiting for pod exec-volume-test-inlinevolume-9m4p to disappear
Oct 14 12:47:41.643: INFO: Pod exec-volume-test-inlinevolume-9m4p no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-9m4p
Oct 14 12:47:41.643: INFO: Deleting pod "exec-volume-test-inlinevolume-9m4p" in namespace "volume-5861"
Oct 14 12:47:41.910: INFO: Couldn't delete PD "aws://us-west-2a/vol-032c7b5a329a8a8f4", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-032c7b5a329a8a8f4 is currently attached to i-01cf783622f628b24
	status code: 400, request id: 0b355b97-461d-4972-8799-4fb4a4a914ab
Oct 14 12:47:47.296: INFO: Couldn't delete PD "aws://us-west-2a/vol-032c7b5a329a8a8f4", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-032c7b5a329a8a8f4 is currently attached to i-01cf783622f628b24
	status code: 400, request id: ab136800-e764-479a-be0d-4f436ae4fb08
Oct 14 12:47:52.765: INFO: Couldn't delete PD "aws://us-west-2a/vol-032c7b5a329a8a8f4", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-032c7b5a329a8a8f4 is currently attached to i-01cf783622f628b24
	status code: 400, request id: 8ed274d1-8f88-4478-b9a0-62a4321928f4
Oct 14 12:47:58.498: INFO: Successfully deleted PD "aws://us-west-2a/vol-032c7b5a329a8a8f4".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 14 12:47:58.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5861" for this suite.
... skipping 74 lines ...
Oct 14 12:47:32.080: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-5365jwbv2
STEP: creating a claim
Oct 14 12:47:32.143: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Oct 14 12:47:32.273: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Oct 14 12:47:32.410: INFO: Error updating pvc awspnw4f: PersistentVolumeClaim "awspnw4f" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5365jwbv2",
  	... // 2 identical fields
  }

... skipping 112 lines ...
Oct 14 12:47:50.538: INFO: Error updating pvc awspnw4f: PersistentVolumeClaim "awspnw4f" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5365jwbv2",
  	... // 2 identical fields
  }

Oct 14 12:47:52.539: INFO: Error updating pvc awspnw4f: PersistentVolumeClaim "awspnw4f" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5365jwbv2",
  	... // 2 identical fields
  }

Oct 14 12:47:54.539: INFO: Error updating pvc awspnw4f: PersistentVolumeClaim "awspnw4f" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5365jwbv2",
  	... // 2 identical fields
  }

Oct 14 12:47:56.539: INFO: Error updating pvc awspnw4f: PersistentVolumeClaim "awspnw4f" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5365jwbv2",
  	... // 2 identical fields
  }

Oct 14 12:47:58.538: INFO: Error updating pvc awspnw4f: PersistentVolumeClaim "awspnw4f" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5365jwbv2",
  	... // 2 identical fields
  }

Oct 14 12:48:00.540: INFO: Error updating pvc awspnw4f: PersistentVolumeClaim "awspnw4f" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5365jwbv2",
  	... // 2 identical fields
  }

Oct 14 12:48:02.553: INFO: Error updating pvc awspnw4f: PersistentVolumeClaim "awspnw4f" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5365jwbv2",
  	... // 2 identical fields
  }

Oct 14 12:48:02.693: INFO: Error updating pvc awspnw4f: PersistentVolumeClaim "awspnw4f" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 263 lines ...
Oct 14 12:47:57.627: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-4596xphbg
STEP: creating a claim
Oct 14 12:47:57.695: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-flk9
STEP: Creating a pod to test multi_subpath
Oct 14 12:47:57.899: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-flk9" in namespace "provisioning-4596" to be "Succeeded or Failed"
Oct 14 12:47:57.966: INFO: Pod "pod-subpath-test-dynamicpv-flk9": Phase="Pending", Reason="", readiness=false. Elapsed: 66.931782ms
Oct 14 12:48:00.044: INFO: Pod "pod-subpath-test-dynamicpv-flk9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145097978s
Oct 14 12:48:02.114: INFO: Pod "pod-subpath-test-dynamicpv-flk9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215429354s
Oct 14 12:48:04.181: INFO: Pod "pod-subpath-test-dynamicpv-flk9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.282560032s
Oct 14 12:48:06.250: INFO: Pod "pod-subpath-test-dynamicpv-flk9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.351003733s
Oct 14 12:48:08.317: INFO: Pod "pod-subpath-test-dynamicpv-flk9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.418636913s
Oct 14 12:48:10.386: INFO: Pod "pod-subpath-test-dynamicpv-flk9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.486980516s
Oct 14 12:48:12.454: INFO: Pod "pod-subpath-test-dynamicpv-flk9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.554999902s
Oct 14 12:48:14.522: INFO: Pod "pod-subpath-test-dynamicpv-flk9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.623233124s
Oct 14 12:48:16.590: INFO: Pod "pod-subpath-test-dynamicpv-flk9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.691041104s
Oct 14 12:48:18.658: INFO: Pod "pod-subpath-test-dynamicpv-flk9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.758973962s
Oct 14 12:48:20.726: INFO: Pod "pod-subpath-test-dynamicpv-flk9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.827354409s
STEP: Saw pod success
Oct 14 12:48:20.726: INFO: Pod "pod-subpath-test-dynamicpv-flk9" satisfied condition "Succeeded or Failed"
Oct 14 12:48:20.793: INFO: Trying to get logs from node ip-172-20-93-142.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-flk9 container test-container-subpath-dynamicpv-flk9: <nil>
STEP: delete the pod
Oct 14 12:48:20.960: INFO: Waiting for pod pod-subpath-test-dynamicpv-flk9 to disappear
Oct 14 12:48:21.029: INFO: Pod pod-subpath-test-dynamicpv-flk9 no longer exists
STEP: Deleting pod
Oct 14 12:48:21.029: INFO: Deleting pod "pod-subpath-test-dynamicpv-flk9" in namespace "provisioning-4596"
... skipping 32 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

      Distro debian doesn't support ntfs -- skipping

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:127
------------------------------
S
------------------------------
[ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath 
  should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 14 12:47:58.653: INFO: >>> kubeConfig: /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/hack/e2e/csi-test-artifacts/test-cluster-9359.k8s.local.kops.kubeconfig
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256
Oct 14 12:47:58.992: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Oct 14 12:47:58.992: INFO: Creating resource for dynamic PV
Oct 14 12:47:58.993: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8481c67tf
STEP: creating a claim
Oct 14 12:47:59.062: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-75hh
STEP: Checking for subpath error in container status
Oct 14 12:48:23.405: INFO: Deleting pod "pod-subpath-test-dynamicpv-75hh" in namespace "provisioning-8481"
Oct 14 12:48:23.476: INFO: Wait up to 5m0s for pod "pod-subpath-test-dynamicpv-75hh" to be fully deleted
STEP: Deleting pod
Oct 14 12:48:31.613: INFO: Deleting pod "pod-subpath-test-dynamicpv-75hh" in namespace "provisioning-8481"
STEP: Deleting pvc
Oct 14 12:48:31.817: INFO: Deleting PersistentVolumeClaim "awsbcbhr"
... skipping 13 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath file is outside the volume [Slow][LinuxOnly]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256
------------------------------
SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
... skipping 344 lines ...
Oct 14 12:48:37.136: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-811lxp58
STEP: creating a claim
Oct 14 12:48:37.207: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-zrhf
STEP: Creating a pod to test subpath
Oct 14 12:48:37.418: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-zrhf" in namespace "provisioning-811" to be "Succeeded or Failed"
Oct 14 12:48:37.485: INFO: Pod "pod-subpath-test-dynamicpv-zrhf": Phase="Pending", Reason="", readiness=false. Elapsed: 67.000247ms
Oct 14 12:48:39.557: INFO: Pod "pod-subpath-test-dynamicpv-zrhf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138906304s
Oct 14 12:48:41.627: INFO: Pod "pod-subpath-test-dynamicpv-zrhf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208935366s
Oct 14 12:48:43.696: INFO: Pod "pod-subpath-test-dynamicpv-zrhf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.278183451s
Oct 14 12:48:45.764: INFO: Pod "pod-subpath-test-dynamicpv-zrhf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.345813511s
Oct 14 12:48:47.832: INFO: Pod "pod-subpath-test-dynamicpv-zrhf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.413649869s
... skipping 2 lines ...
Oct 14 12:48:54.037: INFO: Pod "pod-subpath-test-dynamicpv-zrhf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.619256631s
Oct 14 12:48:56.105: INFO: Pod "pod-subpath-test-dynamicpv-zrhf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.687426586s
Oct 14 12:48:58.173: INFO: Pod "pod-subpath-test-dynamicpv-zrhf": Phase="Pending", Reason="", readiness=false. Elapsed: 20.755253491s
Oct 14 12:49:00.242: INFO: Pod "pod-subpath-test-dynamicpv-zrhf": Phase="Pending", Reason="", readiness=false. Elapsed: 22.823885955s
Oct 14 12:49:02.309: INFO: Pod "pod-subpath-test-dynamicpv-zrhf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.891549351s
STEP: Saw pod success
Oct 14 12:49:02.309: INFO: Pod "pod-subpath-test-dynamicpv-zrhf" satisfied condition "Succeeded or Failed"
Oct 14 12:49:02.377: INFO: Trying to get logs from node ip-172-20-93-142.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-zrhf container test-container-subpath-dynamicpv-zrhf: <nil>
STEP: delete the pod
Oct 14 12:49:02.541: INFO: Waiting for pod pod-subpath-test-dynamicpv-zrhf to disappear
Oct 14 12:49:02.608: INFO: Pod pod-subpath-test-dynamicpv-zrhf no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-zrhf
Oct 14 12:49:02.608: INFO: Deleting pod "pod-subpath-test-dynamicpv-zrhf" in namespace "provisioning-811"
... skipping 44 lines ...
Oct 14 12:49:06.592: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-7922dn77q
STEP: creating a claim
Oct 14 12:49:06.658: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-cq92
STEP: Creating a pod to test subpath
Oct 14 12:49:06.857: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-cq92" in namespace "provisioning-7922" to be "Succeeded or Failed"
Oct 14 12:49:06.922: INFO: Pod "pod-subpath-test-dynamicpv-cq92": Phase="Pending", Reason="", readiness=false. Elapsed: 64.928592ms
Oct 14 12:49:08.988: INFO: Pod "pod-subpath-test-dynamicpv-cq92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130776077s
Oct 14 12:49:11.054: INFO: Pod "pod-subpath-test-dynamicpv-cq92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196425817s
Oct 14 12:49:13.120: INFO: Pod "pod-subpath-test-dynamicpv-cq92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.263028009s
Oct 14 12:49:15.186: INFO: Pod "pod-subpath-test-dynamicpv-cq92": Phase="Pending", Reason="", readiness=false. Elapsed: 8.329140547s
Oct 14 12:49:17.252: INFO: Pod "pod-subpath-test-dynamicpv-cq92": Phase="Pending", Reason="", readiness=false. Elapsed: 10.394548899s
Oct 14 12:49:19.319: INFO: Pod "pod-subpath-test-dynamicpv-cq92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.461679563s
STEP: Saw pod success
Oct 14 12:49:19.319: INFO: Pod "pod-subpath-test-dynamicpv-cq92" satisfied condition "Succeeded or Failed"
Oct 14 12:49:19.384: INFO: Trying to get logs from node ip-172-20-93-142.us-west-2.compute.internal pod pod-subpath-test-dynamicpv-cq92 container test-container-volume-dynamicpv-cq92: <nil>
STEP: delete the pod
Oct 14 12:49:19.531: INFO: Waiting for pod pod-subpath-test-dynamicpv-cq92 to disappear
Oct 14 12:49:19.596: INFO: Pod pod-subpath-test-dynamicpv-cq92 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-cq92
Oct 14 12:49:19.596: INFO: Deleting pod "pod-subpath-test-dynamicpv-cq92" in namespace "provisioning-7922"
... skipping 261 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Inline-volume (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

      Driver supports dynamic provisioning, skipping InlineVolume pattern

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:233
------------------------------
... skipping 10 lines ...
[ebs-csi-migration] EBS CSI Migration
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:85
  [Driver: aws]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e-kubernetes/e2e_test.go:91
    [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath directory is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:240

      Distro debian doesn't support ntfs -- skipping

      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:127
------------------------------
... skipping 18 lines ...
Oct 14 12:49:17.051: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-55205vf9m
STEP: creating a claim
Oct 14 12:49:17.115: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-8frj
STEP: Creating a pod to test exec-volume-test
Oct 14 12:49:17.307: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-8frj" in namespace "volume-5520" to be "Succeeded or Failed"
Oct 14 12:49:17.370: INFO: Pod "exec-volume-test-dynamicpv-8frj": Phase="Pending", Reason="", readiness=false. Elapsed: 62.901346ms
Oct 14 12:49:19.434: INFO: Pod "exec-volume-test-dynamicpv-8frj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126501702s
Oct 14 12:49:21.497: INFO: Pod "exec-volume-test-dynamicpv-8frj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189878045s
Oct 14 12:49:23.561: INFO: Pod "exec-volume-test-dynamicpv-8frj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253798057s
Oct 14 12:49:25.624: INFO: Pod "exec-volume-test-dynamicpv-8frj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.316759589s
Oct 14 12:49:27.688: INFO: Pod "exec-volume-test-dynamicpv-8frj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.380689764s
Oct 14 12:49:29.752: INFO: Pod "exec-volume-test-dynamicpv-8frj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.444684773s
Oct 14 12:49:31.816: INFO: Pod "exec-volume-test-dynamicpv-8frj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.508603266s
Oct 14 12:49:33.880: INFO: Pod "exec-volume-test-dynamicpv-8frj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.572903007s
Oct 14 12:49:35.944: INFO: Pod "exec-volume-test-dynamicpv-8frj": Phase="Pending", Reason="", readiness=false. Elapsed: 18.637287381s
Oct 14 12:49:38.009: INFO: Pod "exec-volume-test-dynamicpv-8frj": Phase="Running", Reason="", readiness=true. Elapsed: 20.701828118s
Oct 14 12:49:40.072: INFO: Pod "exec-volume-test-dynamicpv-8frj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.765101098s
STEP: Saw pod success
Oct 14 12:49:40.072: INFO: Pod "exec-volume-test-dynamicpv-8frj" satisfied condition "Succeeded or Failed"
Oct 14 12:49:40.136: INFO: Trying to get logs from node ip-172-20-93-142.us-west-2.compute.internal pod exec-volume-test-dynamicpv-8frj container exec-container-dynamicpv-8frj: <nil>
STEP: delete the pod
Oct 14 12:49:40.270: INFO: Waiting for pod exec-volume-test-dynamicpv-8frj to disappear
Oct 14 12:49:40.333: INFO: Pod exec-volume-test-dynamicpv-8frj no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-8frj
Oct 14 12:49:40.333: INFO: Deleting pod "exec-volume-test-dynamicpv-8frj" in namespace "volume-5520"
... skipping 40 lines ...
Oct 14 12:49:35.729: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-2127vf8m5
STEP: creating a claim
Oct 14 12:49:35.796: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-b2t2
STEP: Creating a pod to test exec-volume-test
Oct 14 12:49:35.995: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-b2t2" in namespace "volume-2127" to be "Succeeded or Failed"
Oct 14 12:49:36.060: INFO: Pod "exec-volume-test-dynamicpv-b2t2": Phase="Pending", Reason="", readiness=false. Elapsed: 65.019947ms
Oct 14 12:49:38.126: INFO: Pod "exec-volume-test-dynamicpv-b2t2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130527753s
Oct 14 12:49:40.193: INFO: Pod "exec-volume-test-dynamicpv-b2t2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197320864s
Oct 14 12:49:42.261: INFO: Pod "exec-volume-test-dynamicpv-b2t2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.265196568s
Oct 14 12:49:44.326: INFO: Pod "exec-volume-test-dynamicpv-b2t2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331006234s
Oct 14 12:49:46.393: INFO: Pod "exec-volume-test-dynamicpv-b2t2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.397393952s
STEP: Saw pod success
Oct 14 12:49:46.393: INFO: Pod "exec-volume-test-dynamicpv-b2t2" satisfied condition "Succeeded or Failed"
Oct 14 12:49:46.460: INFO: Trying to get logs from node ip-172-20-93-142.us-west-2.compute.internal pod exec-volume-test-dynamicpv-b2t2 container exec-container-dynamicpv-b2t2: <nil>
STEP: delete the pod
Oct 14 12:49:46.605: INFO: Waiting for pod exec-volume-test-dynamicpv-b2t2 to disappear
Oct 14 12:49:46.670: INFO: Pod exec-volume-test-dynamicpv-b2t2 no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-b2t2
Oct 14 12:49:46.670: INFO: Deleting pod "exec-volume-test-dynamicpv-b2t2" in namespace "volume-2127"
... skipping 104 lines ...
Oct 14 12:50:25.102: INFO: Waiting for pod aws-client to disappear
Oct 14 12:50:25.169: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Oct 14 12:50:25.169: INFO: Deleting PersistentVolumeClaim "pvc-pjvvh"
Oct 14 12:50:25.238: INFO: Deleting PersistentVolume "aws-z6wph"
Oct 14 12:50:25.508: INFO: Couldn't delete PD "aws://us-west-2a/vol-0c43ee0ef18c00c2a", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0c43ee0ef18c00c2a is currently attached to i-01cf783622f628b24
	status code: 400, request id: 18479887-e6f4-42ae-8b6f-02a2c8959349
Oct 14 12:50:30.902: INFO: Couldn't delete PD "aws://us-west-2a/vol-0c43ee0ef18c00c2a", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0c43ee0ef18c00c2a is currently attached to i-01cf783622f628b24
	status code: 400, request id: 4ac112f8-38ec-489c-988f-9809a7ade50a
Oct 14 12:50:36.324: INFO: Successfully deleted PD "aws://us-west-2a/vol-0c43ee0ef18c00c2a".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 14 12:50:36.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-3493" for this suite.
... skipping 10 lines ...
      /home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------


Summarizing 1 Failure:

[Fail] [ebs-csi-migration] EBS CSI Migration [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes [It] should store data 
/home/prow/go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/vendor/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:505

Ran 69 of 485 Specs in 1073.653 seconds
FAIL! -- 68 Passed | 1 Failed | 0 Pending | 416 Skipped


Ginkgo ran 1 suite in 19m53.166898421s
Test Suite Failed
+ TEST_PASSED=1
+ set -e
+ set +x
###
## TEST_PASSED: 1
#
... skipping 33 lines ...
#
0
###
## MIGRATION_PASSED: 0
#
###
## One of test or migration failed
#
###
## Printing pod ebs-csi-controller-544d8bd56d-2xsmr ebs-plugin container logs
#
I1014 12:30:30.962157       1 driver.go:72] Driver: ebs.csi.aws.com Version: v1.4.0
I1014 12:30:30.962225       1 controller.go:80] [Debug] Retrieving region from metadata service
... skipping 348 lines ...
}
I1014 12:33:41.125591       1 cloud.go:606] Waiting for volume "vol-025b3da6441bd5c5a" state: actual=attaching, desired=attached
I1014 12:33:41.825129       1 cloud.go:606] Waiting for volume "vol-0d1fa9e65eeef3add" state: actual=detaching, desired=detached
I1014 12:33:42.029589       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-04eee30a6592749c1 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:33:42.205273       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-025b3da6441bd5c5a
I1014 12:33:42.205293       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-025b3da6441bd5c5a attached to node i-08c519f61bd3c5eb3 through device /dev/xvdba
E1014 12:33:42.327628       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-04eee30a6592749c1" to node "i-08c519f61bd3c5eb3": attachment of disk "vol-04eee30a6592749c1" failed, expected device to be attached but was attaching
I1014 12:33:43.029675       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-04eee30a6592749c1 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
E1014 12:33:43.431932       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-04eee30a6592749c1" to node "i-08c519f61bd3c5eb3": attachment of disk "vol-04eee30a6592749c1" failed, expected device to be attached but was attaching
I1014 12:33:43.703256       1 cloud.go:606] Waiting for volume "vol-0d1fa9e65eeef3add" state: actual=detaching, desired=detached
W1014 12:33:44.728826       1 cloud.go:547] Ignoring error from describe volume for volume "vol-09ff520f4abadae03"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:33:47.061130       1 cloud.go:606] Waiting for volume "vol-0d1fa9e65eeef3add" state: actual=detaching, desired=detached
I1014 12:33:47.437918       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-04eee30a6592749c1 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
E1014 12:33:47.719154       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-04eee30a6592749c1" to node "i-08c519f61bd3c5eb3": attachment of disk "vol-04eee30a6592749c1" failed, expected device to be attached but was attaching
W1014 12:33:50.717750       1 cloud.go:547] Ignoring error from describe volume for volume "vol-04eee30a6592749c1"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:33:52.238085       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0cfa87eec4d52ff85 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:33:52.735815       1 cloud.go:433] [Debug] AttachVolume volume="vol-0cfa87eec4d52ff85" instance="i-08c519f61bd3c5eb3" request returned {
  AttachTime: 2021-10-14 12:33:52.728 +0000 UTC,
  Device: "/dev/xvdbc",
  InstanceId: "i-08c519f61bd3c5eb3",
  State: "attaching",
  VolumeId: "vol-0cfa87eec4d52ff85"
}
I1014 12:33:53.002637       1 cloud.go:606] Waiting for volume "vol-0cfa87eec4d52ff85" state: actual=attaching, desired=attached
E1014 12:33:53.064681       1 manager.go:44] Error releasing device: release on device "/dev/xvdbc" assigned to different volume: "vol-0d1fa9e65eeef3add" vs "vol-0cfa87eec4d52ff85"
I1014 12:33:53.064701       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-0d1fa9e65eeef3add detached from node i-08c519f61bd3c5eb3
I1014 12:33:54.168871       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0cfa87eec4d52ff85
I1014 12:33:54.168894       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0cfa87eec4d52ff85 attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbc
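The `Error releasing device: release on device "/dev/xvdbc" assigned to different volume` line above comes from the driver's in-process device-name bookkeeping: the controller tracks which volume each device name on a node is assigned to, and refuses a late release once the name has been handed to another volume (here `/dev/xvdbc` was reassigned to vol-0cfa87eec4d52ff85 before the release for vol-0d1fa9e65eeef3add arrived). A hypothetical minimal version of that map (`deviceMap` is not the driver's real type):

```go
// Sketch of per-node device-name bookkeeping, assuming a simple
// device-name -> volume-ID map as the log's error message implies.
package main

import "fmt"

// deviceMap is a hypothetical stand-in for the driver's device manager.
type deviceMap map[string]string // device name -> volume ID

func (d deviceMap) assign(dev, vol string) { d[dev] = vol }

// release removes dev only if it is still assigned to vol; a stale release
// for a reassigned device is refused, matching the error in the log.
func (d deviceMap) release(dev, vol string) error {
	if cur, ok := d[dev]; ok && cur != vol {
		return fmt.Errorf("release on device %q assigned to different volume: %q vs %q", dev, cur, vol)
	}
	delete(d, dev)
	return nil
}

func main() {
	m := deviceMap{}
	// The device name was reassigned to the newly attached volume...
	m.assign("/dev/xvdbc", "vol-0cfa87eec4d52ff85")
	// ...so the late release for the detached volume is refused.
	fmt.Println(m.release("/dev/xvdbc", "vol-0d1fa9e65eeef3add"))
}
```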
I1014 12:33:55.726603       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-04eee30a6592749c1 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
E1014 12:33:56.001671       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-04eee30a6592749c1" to node "i-08c519f61bd3c5eb3": attachment of disk "vol-04eee30a6592749c1" failed, expected device to be attached but was attaching
W1014 12:34:03.624899       1 cloud.go:547] Ignoring error from describe volume for volume "vol-09ff520f4abadae03"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
W1014 12:34:09.614601       1 cloud.go:547] Ignoring error from describe volume for volume "vol-04eee30a6592749c1"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:34:10.396791       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-025b3da6441bd5c5a NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:34:10.736444       1 cloud.go:606] Waiting for volume "vol-025b3da6441bd5c5a" state: actual=detaching, desired=detached
I1014 12:34:11.820539       1 cloud.go:606] Waiting for volume "vol-025b3da6441bd5c5a" state: actual=detaching, desired=detached
I1014 12:34:12.008840       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-04eee30a6592749c1 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
E1014 12:34:12.227215       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-04eee30a6592749c1" to node "i-08c519f61bd3c5eb3": attachment of disk "vol-04eee30a6592749c1" failed, expected device to be attached but was attaching
I1014 12:34:13.757124       1 cloud.go:606] Waiting for volume "vol-025b3da6441bd5c5a" state: actual=detaching, desired=detached
I1014 12:34:15.123255       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-024e5a7a9a2d90124 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:34:15.608656       1 cloud.go:433] [Debug] AttachVolume volume="vol-024e5a7a9a2d90124" instance="i-08c519f61bd3c5eb3" request returned {
  AttachTime: 2021-10-14 12:34:15.604 +0000 UTC,
  Device: "/dev/xvdbd",
  InstanceId: "i-08c519f61bd3c5eb3",
... skipping 38 lines ...
  State: "attaching",
  VolumeId: "vol-0cfa87eec4d52ff85"
}
I1014 12:34:33.952269       1 cloud.go:606] Waiting for volume "vol-0cfa87eec4d52ff85" state: actual=attaching, desired=attached
I1014 12:34:35.068167       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0cfa87eec4d52ff85
I1014 12:34:35.068188       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0cfa87eec4d52ff85 attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbc
W1014 12:34:37.637371       1 cloud.go:547] Ignoring error from describe volume for volume "vol-09ff520f4abadae03"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
W1014 12:34:43.627747       1 cloud.go:547] Ignoring error from describe volume for volume "vol-04eee30a6592749c1"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:34:44.233410       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-04eee30a6592749c1 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
E1014 12:34:45.159607       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-04eee30a6592749c1" to node "i-08c519f61bd3c5eb3": attachment of disk "vol-04eee30a6592749c1" failed, expected device to be attached but was attaching
I1014 12:34:50.363811       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-0cfa87eec4d52ff85 NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:34:50.372779       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-024e5a7a9a2d90124 NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:34:50.377954       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-025b3da6441bd5c5a NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:34:50.716810       1 cloud.go:606] Waiting for volume "vol-0cfa87eec4d52ff85" state: actual=detaching, desired=detached
I1014 12:34:50.748000       1 cloud.go:606] Waiting for volume "vol-025b3da6441bd5c5a" state: actual=detaching, desired=detached
I1014 12:34:50.749598       1 cloud.go:606] Waiting for volume "vol-024e5a7a9a2d90124" state: actual=detaching, desired=detached
... skipping 14 lines ...
I1014 12:35:05.383987       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-024e5a7a9a2d90124 NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:35:05.388473       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-025b3da6441bd5c5a NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W1014 12:35:05.501160       1 cloud.go:480] DetachDisk called on non-attached volume: vol-024e5a7a9a2d90124
I1014 12:35:05.717894       1 cloud.go:606] Waiting for volume "vol-025b3da6441bd5c5a" state: actual=detaching, desired=detached
I1014 12:35:06.820940       1 cloud.go:606] Waiting for volume "vol-025b3da6441bd5c5a" state: actual=detaching, desired=detached
I1014 12:35:08.687182       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-025b3da6441bd5c5a detached from node i-08c519f61bd3c5eb3
W1014 12:35:13.482726       1 cloud.go:547] Ignoring error from describe volume for volume "vol-025b3da6441bd5c5a"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
W1014 12:35:13.491764       1 cloud.go:547] Ignoring error from describe volume for volume "vol-024e5a7a9a2d90124"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I1014 12:35:23.689418       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-020cb6078076fdc60 NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:35:24.164977       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-020cb6078076fdc60
E1014 12:35:24.165012       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-020cb6078076fdc60" to node "i-01cf783622f628b24": could not attach volume "vol-020cb6078076fdc60" to node "i-01cf783622f628b24": IncorrectState: vol-020cb6078076fdc60 is not 'available'.
	status code: 400, request id: 12d3c893-2a09-4f6b-8396-8036b5cfa566
I1014 12:35:24.170752       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-020cb6078076fdc60 NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:35:24.581769       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-020cb6078076fdc60
E1014 12:35:24.581821       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-020cb6078076fdc60" to node "i-01cf783622f628b24": could not attach volume "vol-020cb6078076fdc60" to node "i-01cf783622f628b24": IncorrectState: vol-020cb6078076fdc60 is not 'available'.
	status code: 400, request id: 35e140b6-a168-4859-8414-0cbfa674289d
I1014 12:35:25.170844       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-020cb6078076fdc60 NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:35:25.701824       1 cloud.go:433] [Debug] AttachVolume volume="vol-020cb6078076fdc60" instance="i-01cf783622f628b24" request returned {
  AttachTime: 2021-10-14 12:35:25.682 +0000 UTC,
  Device: "/dev/xvdba",
  InstanceId: "i-01cf783622f628b24",
... skipping 13 lines ...
}
I1014 12:35:28.603149       1 cloud.go:606] Waiting for volume "vol-0cf0ac3e54a07d689" state: actual=attaching, desired=attached
I1014 12:35:29.727587       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-0cf0ac3e54a07d689
I1014 12:35:29.727607       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0cf0ac3e54a07d689 attached to node i-01cf783622f628b24 through device /dev/xvdbb
I1014 12:35:29.732583       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0cf0ac3e54a07d689 NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:35:29.955184       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0cf0ac3e54a07d689 attached to node i-01cf783622f628b24 through device /dev/xvdbb
W1014 12:35:32.379537       1 cloud.go:547] Ignoring error from describe volume for volume "vol-025b3da6441bd5c5a"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
W1014 12:35:32.387579       1 cloud.go:547] Ignoring error from describe volume for volume "vol-024e5a7a9a2d90124"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I1014 12:35:33.336327       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-04fde3b3352268d20 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:35:33.848325       1 cloud.go:433] [Debug] AttachVolume volume="vol-04fde3b3352268d20" instance="i-08c519f61bd3c5eb3" request returned {
  AttachTime: 2021-10-14 12:35:33.831 +0000 UTC,
  Device: "/dev/xvdba",
  InstanceId: "i-08c519f61bd3c5eb3",
  State: "attaching",
  VolumeId: "vol-04fde3b3352268d20"
}
I1014 12:35:33.979221       1 cloud.go:606] Waiting for volume "vol-04fde3b3352268d20" state: actual=attaching, desired=attached
I1014 12:35:35.081491       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-04fde3b3352268d20
I1014 12:35:35.081513       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-04fde3b3352268d20 attached to node i-08c519f61bd3c5eb3 through device /dev/xvdba
I1014 12:35:38.667487       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-0cf0ac3e54a07d689 NodeId:i-01cf783622f628b24 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W1014 12:35:38.860237       1 cloud.go:547] Ignoring error from describe volume for volume "vol-09ff520f4abadae03"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:35:39.018011       1 cloud.go:606] Waiting for volume "vol-0cf0ac3e54a07d689" state: actual=detaching, desired=detached
I1014 12:35:40.110487       1 cloud.go:606] Waiting for volume "vol-0cf0ac3e54a07d689" state: actual=detaching, desired=detached
I1014 12:35:42.096584       1 cloud.go:606] Waiting for volume "vol-0cf0ac3e54a07d689" state: actual=detaching, desired=detached
W1014 12:35:44.850019       1 cloud.go:547] Ignoring error from describe volume for volume "vol-04eee30a6592749c1"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:35:45.401772       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-0cf0ac3e54a07d689 detached from node i-01cf783622f628b24
I1014 12:35:46.494465       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-020cb6078076fdc60 NodeId:i-01cf783622f628b24 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:35:46.863282       1 cloud.go:606] Waiting for volume "vol-020cb6078076fdc60" state: actual=attached, desired=detached
I1014 12:35:47.943809       1 cloud.go:606] Waiting for volume "vol-020cb6078076fdc60" state: actual=detaching, desired=detached
I1014 12:35:49.165688       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-04eee30a6592749c1 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
E1014 12:35:49.419608       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-04eee30a6592749c1" to node "i-08c519f61bd3c5eb3": attachment of disk "vol-04eee30a6592749c1" failed, expected device to be attached but was attaching
I1014 12:35:49.800641       1 cloud.go:606] Waiting for volume "vol-020cb6078076fdc60" state: actual=detaching, desired=detached
I1014 12:35:50.922357       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-03f94d13fc067d5ec NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:35:51.398490       1 cloud.go:433] [Debug] AttachVolume volume="vol-03f94d13fc067d5ec" instance="i-08c519f61bd3c5eb3" request returned {
  AttachTime: 2021-10-14 12:35:51.392 +0000 UTC,
  Device: "/dev/xvdbc",
  InstanceId: "i-08c519f61bd3c5eb3",
... skipping 28 lines ...
I1014 12:36:00.600821       1 cloud.go:606] Waiting for volume "vol-01ec2ffdf6fa2f4f7" state: actual=attaching, desired=attached
I1014 12:36:00.894020       1 cloud.go:606] Waiting for volume "vol-04fde3b3352268d20" state: actual=detaching, desired=detached
I1014 12:36:01.702735       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-01ec2ffdf6fa2f4f7
I1014 12:36:01.702757       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-01ec2ffdf6fa2f4f7 attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbd
I1014 12:36:02.006216       1 cloud.go:606] Waiting for volume "vol-04fde3b3352268d20" state: actual=detaching, desired=detached
I1014 12:36:03.890872       1 cloud.go:606] Waiting for volume "vol-04fde3b3352268d20" state: actual=detaching, desired=detached
W1014 12:36:06.392767       1 cloud.go:547] Ignoring error from describe volume for volume "vol-025b3da6441bd5c5a"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
W1014 12:36:06.400814       1 cloud.go:547] Ignoring error from describe volume for volume "vol-024e5a7a9a2d90124"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I1014 12:36:07.241541       1 cloud.go:606] Waiting for volume "vol-04fde3b3352268d20" state: actual=detaching, desired=detached
I1014 12:36:10.454224       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-01ec2ffdf6fa2f4f7 NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:36:11.219144       1 cloud.go:606] Waiting for volume "vol-01ec2ffdf6fa2f4f7" state: actual=detaching, desired=detached
I1014 12:36:12.293732       1 cloud.go:606] Waiting for volume "vol-01ec2ffdf6fa2f4f7" state: actual=detaching, desired=detached
I1014 12:36:13.133435       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-04fde3b3352268d20 detached from node i-08c519f61bd3c5eb3
I1014 12:36:13.148489       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-04fde3b3352268d20 NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
... skipping 75 lines ...
I1014 12:37:03.851859       1 cloud.go:606] Waiting for volume "vol-0f2a6156fe489953c" state: actual=detaching, desired=detached
I1014 12:37:03.920120       1 cloud.go:606] Waiting for volume "vol-0ab4c08ed2ed8e26e" state: actual=detaching, desired=detached
I1014 12:37:03.978482       1 cloud.go:606] Waiting for volume "vol-0c3521b26405691ed" state: actual=detaching, desired=detached
I1014 12:37:07.243619       1 cloud.go:606] Waiting for volume "vol-0ab4c08ed2ed8e26e" state: actual=detaching, desired=detached
I1014 12:37:07.314262       1 cloud.go:606] Waiting for volume "vol-0c3521b26405691ed" state: actual=detaching, desired=detached
I1014 12:37:07.384617       1 cloud.go:606] Waiting for volume "vol-0f2a6156fe489953c" state: actual=detaching, desired=detached
W1014 12:37:07.615692       1 cloud.go:547] Ignoring error from describe volume for volume "vol-025b3da6441bd5c5a"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
W1014 12:37:07.623777       1 cloud.go:547] Ignoring error from describe volume for volume "vol-024e5a7a9a2d90124"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I1014 12:37:11.354595       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0a94a62459ebb6d7f NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:37:11.976246       1 cloud.go:433] [Debug] AttachVolume volume="vol-0a94a62459ebb6d7f" instance="i-08c519f61bd3c5eb3" request returned {
  AttachTime: 2021-10-14 12:37:11.908 +0000 UTC,
  Device: "/dev/xvdbc",
  InstanceId: "i-08c519f61bd3c5eb3",
  State: "attaching",
  VolumeId: "vol-0a94a62459ebb6d7f"
}
I1014 12:37:12.076568       1 cloud.go:606] Waiting for volume "vol-0a94a62459ebb6d7f" state: actual=attaching, desired=attached
I1014 12:37:13.137467       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-0ab4c08ed2ed8e26e detached from node i-08c519f61bd3c5eb3
I1014 12:37:13.190930       1 cloud.go:606] Waiting for volume "vol-0a94a62459ebb6d7f" state: actual=attaching, desired=attached
I1014 12:37:13.234530       1 cloud.go:606] Waiting for volume "vol-0c3521b26405691ed" state: actual=detaching, desired=detached
I1014 12:37:13.253288       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0ab4c08ed2ed8e26e NodeId:i-08c519f61bd3c5eb3 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
E1014 12:37:13.289102       1 manager.go:44] Error releasing device: release on device "/dev/xvdbc" assigned to different volume: "vol-0f2a6156fe489953c" vs "vol-0a94a62459ebb6d7f"
I1014 12:37:13.289120       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-0f2a6156fe489953c detached from node i-08c519f61bd3c5eb3
I1014 12:37:13.353758       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0f2a6156fe489953c NodeId:i-08c519f61bd3c5eb3 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:37:13.780318       1 cloud.go:433] [Debug] AttachVolume volume="vol-0ab4c08ed2ed8e26e" instance="i-08c519f61bd3c5eb3" request returned {
  AttachTime: 2021-10-14 12:37:13.775 +0000 UTC,
  Device: "/dev/xvdbd",
  InstanceId: "i-08c519f61bd3c5eb3",
... skipping 25 lines ...
I1014 12:37:20.315240       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-0ab4c08ed2ed8e26e
I1014 12:37:20.315262       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0ab4c08ed2ed8e26e attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbd
I1014 12:37:20.718623       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbe -> volume vol-0f2a6156fe489953c
I1014 12:37:20.718644       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0f2a6156fe489953c attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbe
I1014 12:37:20.731045       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0f2a6156fe489953c NodeId:i-08c519f61bd3c5eb3 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:37:20.943192       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0f2a6156fe489953c attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbe
W1014 12:37:23.732926       1 cloud.go:547] Ignoring error from describe volume for volume "vol-0c3521b26405691ed"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
W1014 12:37:29.062300       1 cloud.go:547] Ignoring error from describe volume for volume "vol-09ff520f4abadae03"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
W1014 12:37:35.050614       1 cloud.go:547] Ignoring error from describe volume for volume "vol-04eee30a6592749c1"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:37:41.382182       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0b9ad2e726a15c853 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:37:42.172459       1 cloud.go:433] [Debug] AttachVolume volume="vol-0b9ad2e726a15c853" instance="i-08c519f61bd3c5eb3" request returned {
  AttachTime: 2021-10-14 12:37:42.16 +0000 UTC,
  Device: "/dev/xvdba",
  InstanceId: "i-08c519f61bd3c5eb3",
  State: "attaching",
  VolumeId: "vol-0b9ad2e726a15c853"
}
I1014 12:37:42.279016       1 cloud.go:606] Waiting for volume "vol-0b9ad2e726a15c853" state: actual=attaching, desired=attached
W1014 12:37:42.629688       1 cloud.go:547] Ignoring error from describe volume for volume "vol-0c3521b26405691ed"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:37:43.383752       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-0b9ad2e726a15c853
I1014 12:37:43.383787       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0b9ad2e726a15c853 attached to node i-08c519f61bd3c5eb3 through device /dev/xvdba
I1014 12:37:43.390839       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0b9ad2e726a15c853 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:37:43.632808       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0b9ad2e726a15c853 attached to node i-08c519f61bd3c5eb3 through device /dev/xvdba
I1014 12:37:48.624104       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-0b9ad2e726a15c853 NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:37:48.978951       1 cloud.go:606] Waiting for volume "vol-0b9ad2e726a15c853" state: actual=detaching, desired=detached
... skipping 51 lines ...
I1014 12:38:13.556665       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-07dfb3d1cbeafa19f attached to node i-08c519f61bd3c5eb3 through device /dev/xvdba
I1014 12:38:13.565985       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-07dfb3d1cbeafa19f NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:38:13.695099       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-04d4a48bf9dc03e15
I1014 12:38:13.695844       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-04d4a48bf9dc03e15 attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbc
I1014 12:38:13.703031       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-04d4a48bf9dc03e15 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" mount_flags:"debug" mount_flags:"nouid32" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:38:13.805702       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-07dfb3d1cbeafa19f attached to node i-08c519f61bd3c5eb3 through device /dev/xvdba
W1014 12:38:13.816717       1 cloud.go:547] Ignoring error from describe volume for volume "vol-0f2a6156fe489953c"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:38:13.969226       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-04d4a48bf9dc03e15 attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbc
W1014 12:38:16.642978       1 cloud.go:547] Ignoring error from describe volume for volume "vol-0c3521b26405691ed"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:38:31.155823       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0ed5a2fa09447427b NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:38:31.155823       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0cf63a8ab27745e8c NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:38:31.630131       1 cloud.go:433] [Debug] AttachVolume volume="vol-0ed5a2fa09447427b" instance="i-08c519f61bd3c5eb3" request returned {
  AttachTime: 2021-10-14 12:38:31.625 +0000 UTC,
  Device: "/dev/xvdbe",
  InstanceId: "i-08c519f61bd3c5eb3",
... skipping 6 lines ...
  InstanceId: "i-08c519f61bd3c5eb3",
  State: "attaching",
  VolumeId: "vol-0cf63a8ab27745e8c"
}
I1014 12:38:31.736585       1 cloud.go:606] Waiting for volume "vol-0ed5a2fa09447427b" state: actual=attaching, desired=attached
I1014 12:38:31.750207       1 cloud.go:606] Waiting for volume "vol-0cf63a8ab27745e8c" state: actual=attaching, desired=attached
W1014 12:38:32.712773       1 cloud.go:547] Ignoring error from describe volume for volume "vol-0f2a6156fe489953c"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:38:32.838148       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbe -> volume vol-0ed5a2fa09447427b
I1014 12:38:32.838169       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0ed5a2fa09447427b attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbe
I1014 12:38:32.856852       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-0cf63a8ab27745e8c
I1014 12:38:32.856871       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0cf63a8ab27745e8c attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbd
I1014 12:38:32.861291       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0cf63a8ab27745e8c NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:38:33.068391       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0cf63a8ab27745e8c attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbd
... skipping 5 lines ...
I1014 12:38:50.597983       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-04d4a48bf9dc03e15 NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:38:50.981597       1 cloud.go:606] Waiting for volume "vol-04d4a48bf9dc03e15" state: actual=detaching, desired=detached
I1014 12:38:52.061915       1 cloud.go:606] Waiting for volume "vol-04d4a48bf9dc03e15" state: actual=detaching, desired=detached
I1014 12:38:53.354731       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-04eee30a6592749c1 detached from node i-08c519f61bd3c5eb3
I1014 12:38:53.941251       1 cloud.go:606] Waiting for volume "vol-04d4a48bf9dc03e15" state: actual=detaching, desired=detached
I1014 12:38:57.316889       1 cloud.go:606] Waiting for volume "vol-04d4a48bf9dc03e15" state: actual=detaching, desired=detached
W1014 12:38:57.816301       1 cloud.go:547] Ignoring error from describe volume for volume "vol-025b3da6441bd5c5a"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
W1014 12:38:57.824367       1 cloud.go:547] Ignoring error from describe volume for volume "vol-024e5a7a9a2d90124"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I1014 12:39:00.650828       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-0ed5a2fa09447427b NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:39:00.656045       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-0cf63a8ab27745e8c NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:39:01.029554       1 cloud.go:606] Waiting for volume "vol-0cf63a8ab27745e8c" state: actual=detaching, desired=detached
I1014 12:39:01.055507       1 cloud.go:606] Waiting for volume "vol-0ed5a2fa09447427b" state: actual=detaching, desired=detached
I1014 12:39:02.107394       1 cloud.go:606] Waiting for volume "vol-0cf63a8ab27745e8c" state: actual=detaching, desired=detached
I1014 12:39:02.139948       1 cloud.go:606] Waiting for volume "vol-0ed5a2fa09447427b" state: actual=detaching, desired=detached
I1014 12:39:03.214962       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-04d4a48bf9dc03e15 detached from node i-08c519f61bd3c5eb3
I1014 12:39:03.993277       1 cloud.go:606] Waiting for volume "vol-0cf63a8ab27745e8c" state: actual=detaching, desired=detached
I1014 12:39:04.028933       1 cloud.go:606] Waiting for volume "vol-0ed5a2fa09447427b" state: actual=detaching, desired=detached
W1014 12:39:06.726088       1 cloud.go:547] Ignoring error from describe volume for volume "vol-0f2a6156fe489953c"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:39:07.355929       1 cloud.go:606] Waiting for volume "vol-0cf63a8ab27745e8c" state: actual=detaching, desired=detached
I1014 12:39:07.387259       1 cloud.go:606] Waiting for volume "vol-0ed5a2fa09447427b" state: actual=detaching, desired=detached
I1014 12:39:13.256619       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-0cf63a8ab27745e8c detached from node i-08c519f61bd3c5eb3
I1014 12:39:13.271681       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-0cf63a8ab27745e8c NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:39:13.279867       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-0ed5a2fa09447427b detached from node i-08c519f61bd3c5eb3
W1014 12:39:13.313359       1 cloud.go:480] DetachDisk called on non-attached volume: vol-0cf63a8ab27745e8c
... skipping 16 lines ...
}
I1014 12:39:14.142490       1 cloud.go:606] Waiting for volume "vol-0cf63a8ab27745e8c" state: actual=attaching, desired=attached
I1014 12:39:15.095348       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-0ed5a2fa09447427b
I1014 12:39:15.095370       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0ed5a2fa09447427b attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbb
I1014 12:39:15.254186       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0cf63a8ab27745e8c
I1014 12:39:15.254206       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0cf63a8ab27745e8c attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbc
W1014 12:39:17.865238       1 cloud.go:547] Ignoring error from describe volume for volume "vol-0c3521b26405691ed"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:39:27.438266       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-026849b4908c4a3f7 NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:39:28.044141       1 cloud.go:433] [Debug] AttachVolume volume="vol-026849b4908c4a3f7" instance="i-01cf783622f628b24" request returned {
  AttachTime: 2021-10-14 12:39:28.039 +0000 UTC,
  Device: "/dev/xvdba",
  InstanceId: "i-01cf783622f628b24",
  State: "attaching",
... skipping 53 lines ...
}
I1014 12:39:57.616457       1 cloud.go:606] Waiting for volume "vol-02b7a2c665df0731d" state: actual=attaching, desired=attached
I1014 12:39:58.722652       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-02b7a2c665df0731d
I1014 12:39:58.722675       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-02b7a2c665df0731d attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbb
I1014 12:40:06.697893       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-027831df3f144b423 NodeId:i-01cf783622f628b24 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:40:07.206609       1 cloud.go:606] Waiting for volume "vol-027831df3f144b423" state: actual=detaching, desired=detached
W1014 12:40:07.948354       1 cloud.go:547] Ignoring error from describe volume for volume "vol-0f2a6156fe489953c"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:40:08.283556       1 cloud.go:606] Waiting for volume "vol-027831df3f144b423" state: actual=detaching, desired=detached
I1014 12:40:10.192223       1 cloud.go:606] Waiting for volume "vol-027831df3f144b423" state: actual=detaching, desired=detached
I1014 12:40:10.727063       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-02b7a2c665df0731d NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:40:10.738518       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-07dfb3d1cbeafa19f NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:40:11.088084       1 cloud.go:606] Waiting for volume "vol-02b7a2c665df0731d" state: actual=detaching, desired=detached
I1014 12:40:11.105556       1 cloud.go:606] Waiting for volume "vol-07dfb3d1cbeafa19f" state: actual=detaching, desired=detached
... skipping 65 lines ...
I1014 12:40:40.783858       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-02b7a2c665df0731d NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:40:41.170946       1 cloud.go:606] Waiting for volume "vol-02b7a2c665df0731d" state: actual=detaching, desired=detached
I1014 12:40:42.252010       1 cloud.go:606] Waiting for volume "vol-02b7a2c665df0731d" state: actual=detaching, desired=detached
I1014 12:40:44.163466       1 cloud.go:606] Waiting for volume "vol-02b7a2c665df0731d" state: actual=detaching, desired=detached
I1014 12:40:46.700906       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-050c070ccf5f9e101 NodeId:i-01cf783622f628b24 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:40:47.066504       1 cloud.go:606] Waiting for volume "vol-050c070ccf5f9e101" state: actual=detaching, desired=detached
W1014 12:40:47.422721       1 cloud.go:547] Ignoring error from describe volume for volume "vol-09ff520f4abadae03"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:40:47.512855       1 cloud.go:606] Waiting for volume "vol-02b7a2c665df0731d" state: actual=detaching, desired=detached
I1014 12:40:48.139365       1 cloud.go:606] Waiting for volume "vol-050c070ccf5f9e101" state: actual=detaching, desired=detached
I1014 12:40:50.055373       1 cloud.go:606] Waiting for volume "vol-050c070ccf5f9e101" state: actual=detaching, desired=detached
I1014 12:40:53.347437       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-050c070ccf5f9e101 detached from node i-01cf783622f628b24
W1014 12:40:53.410433       1 cloud.go:547] Ignoring error from describe volume for volume "vol-04eee30a6592749c1"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:40:53.411701       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-02b7a2c665df0731d detached from node i-08c519f61bd3c5eb3
I1014 12:40:53.424102       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-02b7a2c665df0731d NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W1014 12:40:53.470545       1 cloud.go:480] DetachDisk called on non-attached volume: vol-02b7a2c665df0731d
I1014 12:40:56.755869       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-08d0fd63735f25a76 NodeId:i-01cf783622f628b24 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:40:57.207702       1 cloud.go:606] Waiting for volume "vol-08d0fd63735f25a76" state: actual=detaching, desired=detached
I1014 12:40:58.275807       1 cloud.go:606] Waiting for volume "vol-08d0fd63735f25a76" state: actual=detaching, desired=detached
... skipping 2 lines ...
I1014 12:41:03.497125       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-08d0fd63735f25a76 NodeId:i-01cf783622f628b24 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W1014 12:41:03.535602       1 cloud.go:480] DetachDisk called on non-attached volume: vol-08d0fd63735f25a76
I1014 12:41:05.554278       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-05bd9661b223e9e1b NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:41:05.766463       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-05aa8cb73a31aa800 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:41:05.767564       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0825a832e76f5170d NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:41:05.943387       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-05bd9661b223e9e1b
E1014 12:41:05.943441       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-05bd9661b223e9e1b" to node "i-01cf783622f628b24": could not attach volume "vol-05bd9661b223e9e1b" to node "i-01cf783622f628b24": IncorrectState: vol-05bd9661b223e9e1b is not 'available'.
	status code: 400, request id: 5c92086c-f9c8-4783-b899-9a2e28763cf5
I1014 12:41:05.948160       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-05bd9661b223e9e1b NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:41:06.300401       1 cloud.go:433] [Debug] AttachVolume volume="vol-0825a832e76f5170d" instance="i-08c519f61bd3c5eb3" request returned {
  AttachTime: 2021-10-14 12:41:06.289 +0000 UTC,
  Device: "/dev/xvdbc",
  InstanceId: "i-08c519f61bd3c5eb3",
... skipping 6 lines ...
  InstanceId: "i-08c519f61bd3c5eb3",
  State: "attaching",
  VolumeId: "vol-05aa8cb73a31aa800"
}
I1014 12:41:06.420784       1 cloud.go:606] Waiting for volume "vol-05aa8cb73a31aa800" state: actual=attaching, desired=attached
I1014 12:41:06.526711       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-05bd9661b223e9e1b
E1014 12:41:06.526757       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-05bd9661b223e9e1b" to node "i-01cf783622f628b24": could not attach volume "vol-05bd9661b223e9e1b" to node "i-01cf783622f628b24": IncorrectState: vol-05bd9661b223e9e1b is not 'available'.
	status code: 400, request id: 36fba2d4-49d4-4ae7-954e-59648a8df6ab
I1014 12:41:06.530121       1 cloud.go:606] Waiting for volume "vol-0825a832e76f5170d" state: actual=attaching, desired=attached
I1014 12:41:06.531617       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-05bd9661b223e9e1b NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:41:07.004472       1 cloud.go:433] [Debug] AttachVolume volume="vol-05bd9661b223e9e1b" instance="i-01cf783622f628b24" request returned {
  AttachTime: 2021-10-14 12:41:06.992 +0000 UTC,
  Device: "/dev/xvdba",
... skipping 7 lines ...
I1014 12:41:07.557777       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-05aa8cb73a31aa800 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:41:07.634879       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0825a832e76f5170d
I1014 12:41:07.634901       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0825a832e76f5170d attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbc
I1014 12:41:07.641243       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0825a832e76f5170d NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:41:07.779320       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-05aa8cb73a31aa800 attached to node i-08c519f61bd3c5eb3 through device /dev/xvdba
I1014 12:41:07.905943       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0825a832e76f5170d attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbc
W1014 12:41:08.065190       1 cloud.go:547] Ignoring error from describe volume for volume "vol-0c3521b26405691ed"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:41:08.201955       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-05bd9661b223e9e1b
I1014 12:41:08.201975       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-05bd9661b223e9e1b attached to node i-01cf783622f628b24 through device /dev/xvdba
I1014 12:41:08.209632       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-05bd9661b223e9e1b NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:41:08.487494       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-05bd9661b223e9e1b attached to node i-01cf783622f628b24 through device /dev/xvdba
I1014 12:41:13.840694       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-029458a279c24a012 NodeId:i-01cf783622f628b24 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:41:14.363651       1 cloud.go:433] [Debug] AttachVolume volume="vol-029458a279c24a012" instance="i-01cf783622f628b24" request returned {
... skipping 89 lines ...
I1014 12:41:52.369681       1 cloud.go:606] Waiting for volume "vol-0a6bb7bc2a8ffd47b" state: actual=detaching, desired=detached
I1014 12:41:53.401758       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-05bd9661b223e9e1b detached from node i-01cf783622f628b24
I1014 12:41:53.414550       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-05bd9661b223e9e1b NodeId:i-01cf783622f628b24 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W1014 12:41:53.454731       1 cloud.go:480] DetachDisk called on non-attached volume: vol-05bd9661b223e9e1b
I1014 12:41:54.248802       1 cloud.go:606] Waiting for volume "vol-0a6bb7bc2a8ffd47b" state: actual=detaching, desired=detached
I1014 12:41:57.608253       1 cloud.go:606] Waiting for volume "vol-0a6bb7bc2a8ffd47b" state: actual=detaching, desired=detached
W1014 12:41:58.148640       1 cloud.go:547] Ignoring error from describe volume for volume "vol-0f2a6156fe489953c"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:42:00.340140       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0bc0ff9fc2a3756ed NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:42:00.851781       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-0825a832e76f5170d NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:42:00.856192       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-05aa8cb73a31aa800 NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:42:00.877037       1 cloud.go:433] [Debug] AttachVolume volume="vol-0bc0ff9fc2a3756ed" instance="i-08c519f61bd3c5eb3" request returned {
  AttachTime: 2021-10-14 12:42:00.873 +0000 UTC,
  Device: "/dev/xvdbb",
... skipping 20 lines ...
I1014 12:42:13.531909       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-05aa8cb73a31aa800 NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W1014 12:42:13.579460       1 cloud.go:480] DetachDisk called on non-attached volume: vol-05aa8cb73a31aa800
I1014 12:42:13.858961       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-0825a832e76f5170d detached from node i-08c519f61bd3c5eb3
I1014 12:42:13.868227       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-0825a832e76f5170d NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W1014 12:42:13.913406       1 cloud.go:480] DetachDisk called on non-attached volume: vol-0825a832e76f5170d
I1014 12:42:14.190749       1 cloud.go:606] Waiting for volume "vol-0fe1ca022ad4f5c28" state: actual=detaching, desired=detached
W1014 12:42:16.176527       1 cloud.go:547] Ignoring error from describe volume for volume "vol-025b3da6441bd5c5a"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
W1014 12:42:16.184583       1 cloud.go:547] Ignoring error from describe volume for volume "vol-024e5a7a9a2d90124"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I1014 12:42:17.539472       1 cloud.go:606] Waiting for volume "vol-0fe1ca022ad4f5c28" state: actual=detaching, desired=detached
I1014 12:42:23.431259       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-0fe1ca022ad4f5c28 detached from node i-08c519f61bd3c5eb3
I1014 12:42:23.445016       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-0fe1ca022ad4f5c28 NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W1014 12:42:23.488351       1 cloud.go:480] DetachDisk called on non-attached volume: vol-0fe1ca022ad4f5c28
I1014 12:42:25.623834       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0a1d6cd39257861de NodeId:i-08c519f61bd3c5eb3 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:42:26.092053       1 cloud.go:433] [Debug] AttachVolume volume="vol-0a1d6cd39257861de" instance="i-08c519f61bd3c5eb3" request returned {
... skipping 7 lines ...
I1014 12:42:27.121955       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-01cd951c99d744fe1 NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:42:27.315842       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-0a1d6cd39257861de
I1014 12:42:27.315865       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0a1d6cd39257861de attached to node i-08c519f61bd3c5eb3 through device /dev/xvdba
I1014 12:42:27.323382       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0a1d6cd39257861de NodeId:i-08c519f61bd3c5eb3 VolumeCapability:block:<> access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:42:27.580383       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0a1d6cd39257861de attached to node i-08c519f61bd3c5eb3 through device /dev/xvdba
I1014 12:42:27.704468       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-01cd951c99d744fe1
E1014 12:42:27.704515       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-01cd951c99d744fe1" to node "i-01cf783622f628b24": could not attach volume "vol-01cd951c99d744fe1" to node "i-01cf783622f628b24": IncorrectState: vol-01cd951c99d744fe1 is not 'available'.
	status code: 400, request id: e46719a3-3f34-4d5c-ab25-53e27c8ff1c9
I1014 12:42:27.710304       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-01cd951c99d744fe1 NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:42:27.834123       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-08100157089d7a47d NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:42:28.266767       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-08100157089d7a47d
E1014 12:42:28.266801       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-08100157089d7a47d" to node "i-01cf783622f628b24": could not attach volume "vol-08100157089d7a47d" to node "i-01cf783622f628b24": IncorrectState: vol-08100157089d7a47d is not 'available'.
	status code: 400, request id: fc0ad5e8-3864-4cb4-9213-802be6a811e3
I1014 12:42:28.272015       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-08100157089d7a47d NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:42:28.535918       1 cloud.go:433] [Debug] AttachVolume volume="vol-01cd951c99d744fe1" instance="i-01cf783622f628b24" request returned {
  AttachTime: 2021-10-14 12:42:28.53 +0000 UTC,
  Device: "/dev/xvdbb",
  InstanceId: "i-01cf783622f628b24",
  State: "attaching",
  VolumeId: "vol-01cd951c99d744fe1"
}
I1014 12:42:28.614759       1 cloud.go:606] Waiting for volume "vol-01cd951c99d744fe1" state: actual=attaching, desired=attached
I1014 12:42:28.824736       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-08100157089d7a47d
E1014 12:42:28.824769       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-08100157089d7a47d" to node "i-01cf783622f628b24": could not attach volume "vol-08100157089d7a47d" to node "i-01cf783622f628b24": IncorrectState: vol-08100157089d7a47d is not 'available'.
	status code: 400, request id: 93fae48b-4a46-4e5f-9653-a254caa7854e
I1014 12:42:28.831993       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-08100157089d7a47d NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:42:29.304335       1 cloud.go:433] [Debug] AttachVolume volume="vol-08100157089d7a47d" instance="i-01cf783622f628b24" request returned {
  AttachTime: 2021-10-14 12:42:29.299 +0000 UTC,
  Device: "/dev/xvdba",
  InstanceId: "i-01cf783622f628b24",
... skipping 164 lines ...
  State: "attaching",
  VolumeId: "vol-028136b96edb35fff"
}
I1014 12:43:42.863871       1 cloud.go:606] Waiting for volume "vol-028136b96edb35fff" state: actual=attaching, desired=attached
I1014 12:43:43.968219       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-028136b96edb35fff
I1014 12:43:43.968239       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-028136b96edb35fff attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbc
W1014 12:43:44.163030       1 cloud.go:547] Ignoring error from describe volume for volume "vol-05a86ebfff7ab6861"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I1014 12:43:51.790744       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-02f85e75aad504d4e NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:43:52.286976       1 cloud.go:433] [Debug] AttachVolume volume="vol-02f85e75aad504d4e" instance="i-08c519f61bd3c5eb3" request returned {
  AttachTime: 2021-10-14 12:43:52.281 +0000 UTC,
  Device: "/dev/xvdbd",
  InstanceId: "i-08c519f61bd3c5eb3",
  State: "attaching",
... skipping 2 lines ...
I1014 12:43:52.362675       1 cloud.go:606] Waiting for volume "vol-02f85e75aad504d4e" state: actual=attaching, desired=attached
I1014 12:43:53.466055       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-02f85e75aad504d4e
I1014 12:43:53.466081       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-02f85e75aad504d4e attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbd
I1014 12:44:01.007426       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-069954665666abf58 NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:44:01.586654       1 cloud.go:606] Waiting for volume "vol-069954665666abf58" state: actual=detaching, desired=detached
I1014 12:44:02.696023       1 cloud.go:606] Waiting for volume "vol-069954665666abf58" state: actual=detaching, desired=detached
W1014 12:44:03.059236       1 cloud.go:547] Ignoring error from describe volume for volume "vol-05a86ebfff7ab6861"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I1014 12:44:04.586666       1 cloud.go:606] Waiting for volume "vol-069954665666abf58" state: actual=detaching, desired=detached
I1014 12:44:07.895955       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-069954665666abf58 detached from node i-08c519f61bd3c5eb3
I1014 12:44:10.937966       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-028136b96edb35fff NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:44:10.949034       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-02f85e75aad504d4e NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:44:11.338460       1 cloud.go:606] Waiting for volume "vol-028136b96edb35fff" state: actual=detaching, desired=detached
I1014 12:44:11.470760       1 cloud.go:606] Waiting for volume "vol-02f85e75aad504d4e" state: actual=detaching, desired=detached
... skipping 19 lines ...
I1014 12:44:23.620843       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-028136b96edb35fff NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W1014 12:44:23.666780       1 cloud.go:480] DetachDisk called on non-attached volume: vol-028136b96edb35fff
I1014 12:44:23.670773       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-02f85e75aad504d4e detached from node i-08c519f61bd3c5eb3
I1014 12:44:23.679622       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-02f85e75aad504d4e NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W1014 12:44:23.723293       1 cloud.go:480] DetachDisk called on non-attached volume: vol-02f85e75aad504d4e
I1014 12:44:23.777147       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0e7f37e51ca983e1d attached to node i-08c519f61bd3c5eb3 through device /dev/xvdba
W1014 12:44:26.424732       1 cloud.go:547] Ignoring error from describe volume for volume "vol-0c3521b26405691ed"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:44:31.193054       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-07202b37260e83742 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:44:31.679650       1 cloud.go:433] [Debug] AttachVolume volume="vol-07202b37260e83742" instance="i-08c519f61bd3c5eb3" request returned {
  AttachTime: 2021-10-14 12:44:31.675 +0000 UTC,
  Device: "/dev/xvdbc",
  InstanceId: "i-08c519f61bd3c5eb3",
  State: "attaching",
  VolumeId: "vol-07202b37260e83742"
}
I1014 12:44:31.781853       1 cloud.go:606] Waiting for volume "vol-07202b37260e83742" state: actual=attaching, desired=attached
I1014 12:44:32.892399       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-07202b37260e83742
I1014 12:44:32.892423       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-07202b37260e83742 attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbc
W1014 12:44:37.071780       1 cloud.go:547] Ignoring error from describe volume for volume "vol-05a86ebfff7ab6861"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I1014 12:44:44.227796       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-095602a7370e040f9 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:44:44.721052       1 cloud.go:433] [Debug] AttachVolume volume="vol-095602a7370e040f9" instance="i-08c519f61bd3c5eb3" request returned {
  AttachTime: 2021-10-14 12:44:44.712 +0000 UTC,
  Device: "/dev/xvdbd",
  InstanceId: "i-08c519f61bd3c5eb3",
  State: "attaching",
... skipping 35 lines ...
I1014 12:45:07.319961       1 cloud.go:606] Waiting for volume "vol-0e7f37e51ca983e1d" state: actual=attaching, desired=attached
I1014 12:45:07.761857       1 cloud.go:606] Waiting for volume "vol-095602a7370e040f9" state: actual=detaching, desired=detached
I1014 12:45:08.303843       1 cloud.go:606] Waiting for volume "vol-07202b37260e83742" state: actual=detaching, desired=detached
I1014 12:45:10.637300       1 cloud.go:606] Waiting for volume "vol-0e7f37e51ca983e1d" state: actual=attaching, desired=attached
I1014 12:45:13.676994       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-095602a7370e040f9 detached from node i-08c519f61bd3c5eb3
I1014 12:45:13.771502       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-095602a7370e040f9 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
E1014 12:45:14.219296       1 manager.go:44] Error releasing device: release on device "/dev/xvdbc" assigned to different volume: "vol-07202b37260e83742" vs "vol-095602a7370e040f9"
I1014 12:45:14.219319       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-07202b37260e83742 detached from node i-08c519f61bd3c5eb3
I1014 12:45:14.229469       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-07202b37260e83742 NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:45:14.258395       1 cloud.go:433] [Debug] AttachVolume volume="vol-095602a7370e040f9" instance="i-08c519f61bd3c5eb3" request returned {
  AttachTime: 2021-10-14 12:45:14.252 +0000 UTC,
  Device: "/dev/xvdbc",
  InstanceId: "i-08c519f61bd3c5eb3",
... skipping 3 lines ...
W1014 12:45:14.271989       1 cloud.go:480] DetachDisk called on non-attached volume: vol-07202b37260e83742
I1014 12:45:14.364652       1 cloud.go:606] Waiting for volume "vol-095602a7370e040f9" state: actual=attaching, desired=attached
I1014 12:45:15.479915       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-095602a7370e040f9
I1014 12:45:15.479937       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-095602a7370e040f9 attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbc
I1014 12:45:15.493700       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-095602a7370e040f9 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:45:15.827488       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-095602a7370e040f9 attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbc
W1014 12:45:16.508685       1 cloud.go:547] Ignoring error from describe volume for volume "vol-0f2a6156fe489953c"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:45:16.553410       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-0e7f37e51ca983e1d
I1014 12:45:16.553430       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0e7f37e51ca983e1d attached to node i-08c519f61bd3c5eb3 through device /dev/xvdba
I1014 12:45:31.055214       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-09a6bf73d81af8bc6 NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:45:31.069416       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-0e7f37e51ca983e1d NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:45:31.434290       1 cloud.go:606] Waiting for volume "vol-09a6bf73d81af8bc6" state: actual=detaching, desired=detached
I1014 12:45:31.547652       1 cloud.go:606] Waiting for volume "vol-0e7f37e51ca983e1d" state: actual=detaching, desired=detached
... skipping 10 lines ...
  VolumeId: "vol-043f7fd9f4ca092d6"
}
I1014 12:45:35.664330       1 cloud.go:606] Waiting for volume "vol-043f7fd9f4ca092d6" state: actual=attaching, desired=attached
I1014 12:45:36.769985       1 cloud.go:606] Waiting for volume "vol-043f7fd9f4ca092d6" state: actual=attaching, desired=attached
I1014 12:45:37.774717       1 cloud.go:606] Waiting for volume "vol-09a6bf73d81af8bc6" state: actual=detaching, desired=detached
I1014 12:45:37.839386       1 cloud.go:606] Waiting for volume "vol-0e7f37e51ca983e1d" state: actual=detaching, desired=detached
W1014 12:45:38.294754       1 cloud.go:547] Ignoring error from describe volume for volume "vol-05a86ebfff7ab6861"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I1014 12:45:38.643372       1 cloud.go:606] Waiting for volume "vol-043f7fd9f4ca092d6" state: actual=attaching, desired=attached
I1014 12:45:41.090942       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-095602a7370e040f9 NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:45:41.437895       1 cloud.go:606] Waiting for volume "vol-095602a7370e040f9" state: actual=detaching, desired=detached
I1014 12:45:41.988678       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-043f7fd9f4ca092d6
I1014 12:45:41.988767       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-043f7fd9f4ca092d6 attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbd
I1014 12:45:41.997691       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-043f7fd9f4ca092d6 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
... skipping 6 lines ...
I1014 12:45:43.749122       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-0e7f37e51ca983e1d NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
W1014 12:45:43.789536       1 cloud.go:480] DetachDisk called on non-attached volume: vol-0e7f37e51ca983e1d
I1014 12:45:44.454112       1 cloud.go:606] Waiting for volume "vol-095602a7370e040f9" state: actual=detaching, desired=detached
I1014 12:45:47.817799       1 cloud.go:606] Waiting for volume "vol-095602a7370e040f9" state: actual=detaching, desired=detached
I1014 12:45:47.822113       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-07b0b1275d051304f NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:45:48.243920       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-07b0b1275d051304f
E1014 12:45:48.243999       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-07b0b1275d051304f" to node "i-01cf783622f628b24": could not attach volume "vol-07b0b1275d051304f" to node "i-01cf783622f628b24": IncorrectState: vol-07b0b1275d051304f is not 'available'.
	status code: 400, request id: 4b586a44-ff69-4c5c-847a-793fcb134d3f
I1014 12:45:48.252450       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-07b0b1275d051304f NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:45:48.616675       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-07b0b1275d051304f
E1014 12:45:48.616709       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-07b0b1275d051304f" to node "i-01cf783622f628b24": could not attach volume "vol-07b0b1275d051304f" to node "i-01cf783622f628b24": IncorrectState: vol-07b0b1275d051304f is not 'available'.
	status code: 400, request id: 8e59b0d4-9287-433c-9f6d-ab2704d48e0f
I1014 12:45:48.624240       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-07b0b1275d051304f NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:45:49.022041       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-07b0b1275d051304f
E1014 12:45:49.022075       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-07b0b1275d051304f" to node "i-01cf783622f628b24": could not attach volume "vol-07b0b1275d051304f" to node "i-01cf783622f628b24": IncorrectState: vol-07b0b1275d051304f is not 'available'.
	status code: 400, request id: 6e0ed07f-3108-4f28-8405-101897c91ca8
I1014 12:45:49.252690       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-07b0b1275d051304f NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:45:49.798637       1 cloud.go:433] [Debug] AttachVolume volume="vol-07b0b1275d051304f" instance="i-01cf783622f628b24" request returned {
  AttachTime: 2021-10-14 12:45:49.782 +0000 UTC,
  Device: "/dev/xvdba",
  InstanceId: "i-01cf783622f628b24",
... skipping 52 lines ...
I1014 12:46:41.236841       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbc -> volume vol-0bfbe6a7ca7183d02
I1014 12:46:41.236862       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0bfbe6a7ca7183d02 attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbc
I1014 12:46:41.242341       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0bfbe6a7ca7183d02 NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:46:41.536897       1 cloud.go:606] Waiting for volume "vol-0c9fe1a4735fb7fb4" state: actual=detaching, desired=detached
I1014 12:46:41.544734       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0bfbe6a7ca7183d02 attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbc
I1014 12:46:42.605352       1 cloud.go:606] Waiting for volume "vol-0c9fe1a4735fb7fb4" state: actual=detaching, desired=detached
W1014 12:46:44.469670       1 cloud.go:547] Ignoring error from describe volume for volume "vol-09ff520f4abadae03"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:46:44.471811       1 cloud.go:606] Waiting for volume "vol-0c9fe1a4735fb7fb4" state: actual=detaching, desired=detached
I1014 12:46:47.811636       1 cloud.go:606] Waiting for volume "vol-0c9fe1a4735fb7fb4" state: actual=detaching, desired=detached
W1014 12:46:50.458612       1 cloud.go:547] Ignoring error from describe volume for volume "vol-04eee30a6592749c1"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:46:51.156318       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-0bfbe6a7ca7183d02 NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:46:51.532950       1 cloud.go:606] Waiting for volume "vol-0bfbe6a7ca7183d02" state: actual=detaching, desired=detached
I1014 12:46:52.610299       1 cloud.go:606] Waiting for volume "vol-0bfbe6a7ca7183d02" state: actual=detaching, desired=detached
I1014 12:46:53.715464       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-0c9fe1a4735fb7fb4 detached from node i-08c519f61bd3c5eb3
I1014 12:46:54.472964       1 cloud.go:606] Waiting for volume "vol-0bfbe6a7ca7183d02" state: actual=detaching, desired=detached
I1014 12:46:57.837422       1 cloud.go:606] Waiting for volume "vol-0bfbe6a7ca7183d02" state: actual=detaching, desired=detached
... skipping 66 lines ...
W1014 12:47:23.816066       1 cloud.go:480] DetachDisk called on non-attached volume: vol-04e08e0865bb7fe2c
I1014 12:47:26.049330       1 controller.go:436] ControllerExpandVolume: called with args {VolumeId:vol-07e071f5e79228a6b CapacityRange:required_bytes:2147483648  Secrets:map[] VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:47:26.183609       1 cloud.go:1063] expanding volume "vol-07e071f5e79228a6b" to size 2
I1014 12:47:27.038048       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-02753b92afb1bb525 NodeId:i-01cf783622f628b24 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:47:27.401083       1 cloud.go:606] Waiting for volume "vol-02753b92afb1bb525" state: actual=detaching, desired=detached
I1014 12:47:28.475733       1 cloud.go:606] Waiting for volume "vol-02753b92afb1bb525" state: actual=detaching, desired=detached
W1014 12:47:28.494792       1 cloud.go:547] Ignoring error from describe volume for volume "vol-05a86ebfff7ab6861"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I1014 12:47:29.991776       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-0906dcebeabac81ed NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:47:30.400225       1 cloud.go:606] Waiting for volume "vol-02753b92afb1bb525" state: actual=detaching, desired=detached
I1014 12:47:30.493417       1 cloud.go:433] [Debug] AttachVolume volume="vol-0906dcebeabac81ed" instance="i-08c519f61bd3c5eb3" request returned {
  AttachTime: 2021-10-14 12:47:30.488 +0000 UTC,
  Device: "/dev/xvdbb",
  InstanceId: "i-08c519f61bd3c5eb3",
  State: "attaching",
  VolumeId: "vol-0906dcebeabac81ed"
}
I1014 12:47:30.596988       1 cloud.go:606] Waiting for volume "vol-0906dcebeabac81ed" state: actual=attaching, desired=attached
I1014 12:47:30.952504       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-032c7b5a329a8a8f4 NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:47:31.400448       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-032c7b5a329a8a8f4
E1014 12:47:31.400498       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-032c7b5a329a8a8f4" to node "i-01cf783622f628b24": could not attach volume "vol-032c7b5a329a8a8f4" to node "i-01cf783622f628b24": IncorrectState: vol-032c7b5a329a8a8f4 is not 'available'.
	status code: 400, request id: 11c1d5bb-a73d-4071-897b-e85ca77b2ce2
I1014 12:47:31.406755       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-032c7b5a329a8a8f4 NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:47:31.746390       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbb -> volume vol-0906dcebeabac81ed
I1014 12:47:31.746410       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0906dcebeabac81ed attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbb
I1014 12:47:31.810827       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-032c7b5a329a8a8f4
E1014 12:47:31.810854       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not attach volume "vol-032c7b5a329a8a8f4" to node "i-01cf783622f628b24": could not attach volume "vol-032c7b5a329a8a8f4" to node "i-01cf783622f628b24": IncorrectState: vol-032c7b5a329a8a8f4 is not 'available'.
	status code: 400, request id: 92c62265-ad66-4056-9ab3-8201a0319f08
I1014 12:47:32.406987       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-032c7b5a329a8a8f4 NodeId:i-01cf783622f628b24 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:47:32.878690       1 cloud.go:433] [Debug] AttachVolume volume="vol-032c7b5a329a8a8f4" instance="i-01cf783622f628b24" request returned {
  AttachTime: 2021-10-14 12:47:32.862 +0000 UTC,
  Device: "/dev/xvdba",
  InstanceId: "i-01cf783622f628b24",
... skipping 7 lines ...
  DeleteOnTermination: false,
  Device: "/dev/xvdba",
  InstanceId: "i-01cf783622f628b24",
  State: "detaching",
  VolumeId: "vol-02753b92afb1bb525"
}
E1014 12:47:33.699422       1 manager.go:44] Error releasing device: release on device "/dev/xvdba" assigned to different volume: "vol-02753b92afb1bb525" vs "vol-032c7b5a329a8a8f4"
I1014 12:47:33.699431       1 controller.go:364] [Debug] ControllerUnpublishVolume: volume vol-02753b92afb1bb525 detached from node i-01cf783622f628b24
I1014 12:47:34.139888       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-032c7b5a329a8a8f4
I1014 12:47:34.139911       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-032c7b5a329a8a8f4 attached to node i-01cf783622f628b24 through device /dev/xvdba
I1014 12:47:37.597972       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-0906dcebeabac81ed NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:47:37.947207       1 cloud.go:606] Waiting for volume "vol-0906dcebeabac81ed" state: actual=detaching, desired=detached
I1014 12:47:39.026519       1 cloud.go:606] Waiting for volume "vol-0906dcebeabac81ed" state: actual=detaching, desired=detached
... skipping 48 lines ...
}
I1014 12:48:09.218240       1 cloud.go:606] Waiting for volume "vol-04507c1e399c2120b" state: actual=attaching, desired=attached
I1014 12:48:10.325918       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdbd -> volume vol-04507c1e399c2120b
I1014 12:48:10.325939       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-04507c1e399c2120b attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbd
I1014 12:48:10.333407       1 controller.go:291] ControllerPublishVolume: called with args {VolumeId:vol-04507c1e399c2120b NodeId:i-08c519f61bd3c5eb3 VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[partition:0] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:48:10.539285       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-04507c1e399c2120b attached to node i-08c519f61bd3c5eb3 through device /dev/xvdbd
W1014 12:48:13.224398       1 cloud.go:547] Ignoring error from describe volume for volume "vol-025b3da6441bd5c5a"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
W1014 12:48:13.231457       1 cloud.go:547] Ignoring error from describe volume for volume "vol-024e5a7a9a2d90124"; will retry: "RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
I1014 12:48:21.278554       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-0262a5215251f0fbb NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:48:21.638123       1 cloud.go:606] Waiting for volume "vol-0262a5215251f0fbb" state: actual=detaching, desired=detached
I1014 12:48:22.713746       1 cloud.go:606] Waiting for volume "vol-0262a5215251f0fbb" state: actual=detaching, desired=detached
I1014 12:48:24.578797       1 cloud.go:606] Waiting for volume "vol-0262a5215251f0fbb" state: actual=detaching, desired=detached
I1014 12:48:27.916415       1 cloud.go:606] Waiting for volume "vol-0262a5215251f0fbb" state: actual=detaching, desired=detached
I1014 12:48:31.313695       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-06e4781701bb91cb2 NodeId:i-08c519f61bd3c5eb3 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
... skipping 138 lines ...
  State: "attaching",
  VolumeId: "vol-0c43ee0ef18c00c2a"
}
I1014 12:50:11.377584       1 cloud.go:606] Waiting for volume "vol-0c43ee0ef18c00c2a" state: actual=attaching, desired=attached
I1014 12:50:12.493941       1 manager.go:197] [Debug] Releasing in-process attachment entry: /dev/xvdba -> volume vol-0c43ee0ef18c00c2a
I1014 12:50:12.493960       1 controller.go:319] [Debug] ControllerPublishVolume: volume vol-0c43ee0ef18c00c2a attached to node i-01cf783622f628b24 through device /dev/xvdba
W1014 12:50:23.471648       1 cloud.go:547] Ignoring error from describe volume for volume "vol-0c3521b26405691ed"; will retry: "RequestCanceled: request context canceled\ncaused by: context canceled"
I1014 12:50:28.012735       1 controller.go:350] ControllerUnpublishVolume: called with args {VolumeId:vol-0c43ee0ef18c00c2a NodeId:i-01cf783622f628b24 Secrets:map[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:50:28.378721       1 cloud.go:606] Waiting for volume "vol-0c43ee0ef18c00c2a" state: actual=detaching, desired=detached
I1014 12:50:29.454309       1 cloud.go:606] Waiting for volume "vol-0c43ee0ef18c00c2a" state: actual=detaching, desired=detached
I1014 12:50:31.349787       1 cloud.go:606] Waiting for volume "vol-0c43ee0ef18c00c2a" state: actual=detaching, desired=detached
I1014 12:50:34.693700       1 cloud.go:606] Waiting for volume "vol-0c43ee0ef18c00c2a" state: actual=detaching, desired=detached
W1014 12:50:40.608369       1 cloud.go:537] Waiting for volume "vol-0c43ee0ef18c00c2a" to be detached but the volume does not exist
... skipping 535 lines ...
I1014 12:38:32.676883       1 node.go:682] NodePublishVolume: mounting /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0321a4c2-ae97-47ad-824f-2b52bb0c2cbf/globalmount at /var/lib/kubelet/pods/23e19d10-ad91-4aaf-8b55-c2713aef3e23/volumes/kubernetes.io~csi/pvc-0321a4c2-ae97-47ad-824f-2b52bb0c2cbf/mount with option [bind debug nouid32] as fstype ext4
I1014 12:38:32.679970       1 node.go:380] NodePublishVolume: volume="vol-04d4a48bf9dc03e15" operation finished
I1014 12:38:32.679985       1 inflight.go:73] Node Service: volume="vol-04d4a48bf9dc03e15" operation finished
I1014 12:38:34.697448       1 node.go:404] NodeUnpublishVolume: called with args {VolumeId:vol-04d4a48bf9dc03e15 TargetPath:/var/lib/kubelet/pods/23e19d10-ad91-4aaf-8b55-c2713aef3e23/volumes/kubernetes.io~csi/pvc-0321a4c2-ae97-47ad-824f-2b52bb0c2cbf/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:38:34.697493       1 node.go:422] NodeUnpublishVolume: unmounting /var/lib/kubelet/pods/23e19d10-ad91-4aaf-8b55-c2713aef3e23/volumes/kubernetes.io~csi/pvc-0321a4c2-ae97-47ad-824f-2b52bb0c2cbf/mount
I1014 12:38:34.697693       1 node.go:404] NodeUnpublishVolume: called with args {VolumeId:vol-04d4a48bf9dc03e15 TargetPath:/var/lib/kubelet/pods/ecd5c60c-5bf6-42a8-9104-ae0bb7aa0a15/volumes/kubernetes.io~csi/pvc-0321a4c2-ae97-47ad-824f-2b52bb0c2cbf/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
E1014 12:38:34.697729       1 driver.go:119] GRPC error: rpc error: code = Aborted desc = An operation with the given volume="vol-04d4a48bf9dc03e15" is already in progress
I1014 12:38:34.699314       1 node.go:418] NodeUnPublishVolume: volume="vol-04d4a48bf9dc03e15" operation finished
I1014 12:38:34.699327       1 inflight.go:73] Node Service: volume="vol-04d4a48bf9dc03e15" operation finished
I1014 12:38:35.298747       1 node.go:404] NodeUnpublishVolume: called with args {VolumeId:vol-04d4a48bf9dc03e15 TargetPath:/var/lib/kubelet/pods/ecd5c60c-5bf6-42a8-9104-ae0bb7aa0a15/volumes/kubernetes.io~csi/pvc-0321a4c2-ae97-47ad-824f-2b52bb0c2cbf/mount XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1014 12:38:35.298811       1 node.go:422] NodeUnpublishVolume: unmounting /var/lib/kubelet/pods/ecd5c60c-5bf6-42a8-9104-ae0bb7aa0a15/volumes/kubernetes.io~csi/pvc-0321a4c2-ae97-47ad-824f-2b52bb0c2cbf/mount
I1014 12:38:35.303060       1 node.go:418] NodeUnPublishVolume: volume="vol-04d4a48bf9dc03e15" operation finished
I1014 12:38:35.303079       1 inflight.go:73] Node Service: volume="vol-04d4a48bf9dc03e15" operation finished
... skipping 2163 lines ...

Deleted cluster: "test-cluster-9359.k8s.local"
###
## OVERALL_TEST_PASSED: 1
#
###
## FAIL!
#
make: *** [Makefile:178: test-e2e-migration] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...