PR wongma7: Run upstream e2e test suites with migration
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-08-23 20:34
Elapsed: 17m40s
Revision: 27abc15ed6312f72a6a6407d78565afeefa46165
Refs: 341

No Test Failures!



Error lines from build-log.txt

... skipping 1833 lines ...

Using cluster from kubectl context: test-cluster-6549.k8s.local

Validating cluster test-cluster-6549.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-6549-k8s-2tvucb-1407117237.us-west-2.elb.amazonaws.com/api/v1/nodes: dial tcp: lookup api-test-cluster-6549-k8s-2tvucb-1407117237.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Waiting cluster to be created
... skipping 14 lines: the same "no such host" validation error repeated 2 more times ...
Using cluster from kubectl context: test-cluster-6549.k8s.local

Validating cluster test-cluster-6549.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-6549-k8s-2tvucb-1407117237.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Waiting cluster to be created
... skipping 56 lines: the same EOF validation error repeated 8 more times ...
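The repetition above is the harness polling cluster validation: at first the API server's ELB hostname has not yet propagated in DNS ("no such host"), then the name resolves but the API server drops connections while it boots ("EOF"). A minimal sketch of such a poll loop, with hypothetical helper names (the real harness drives `kops validate cluster`; `validate_cluster` here is a stub that succeeds once a marker file exists):

```shell
# Hypothetical sketch of the poll loop producing the output above; the real
# harness runs "kops validate cluster", stubbed here as validate_cluster,
# which succeeds once a readiness marker file exists.
validate_cluster() {
  [ -f "$1" ]
}

wait_for_cluster() {
  marker=$1
  attempts=$2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if validate_cluster "$marker"; then
      echo "cluster validated"
      return 0
    fi
    echo "Waiting cluster to be created"  # the line repeated in the log
    i=$((i + 1))
  done
  return 1
}
```

Each failed attempt prints the "Waiting" line and retries; the loop only exits nonzero once the attempt budget is exhausted, which matches the log eventually progressing to INSTANCE GROUPS output once validation starts succeeding.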
Using cluster from kubectl context: test-cluster-6549.k8s.local

Validating cluster test-cluster-6549.k8s.local

INSTANCE GROUPS
... skipping 8 lines ...
KIND	NAME			MESSAGE
Machine	i-07c86187d75990336	machine "i-07c86187d75990336" has not yet joined cluster
Machine	i-0cda914e7bebaecf8	machine "i-0cda914e7bebaecf8" has not yet joined cluster
Machine	i-0eae49f9afb2c1947	machine "i-0eae49f9afb2c1947" has not yet joined cluster
Machine	i-0f9d72399827e45f8	machine "i-0f9d72399827e45f8" has not yet joined cluster

Validation Failed
Waiting cluster to be created
Using cluster from kubectl context: test-cluster-6549.k8s.local

Validating cluster test-cluster-6549.k8s.local

INSTANCE GROUPS
... skipping 10 lines ...
Machine	i-07c86187d75990336					machine "i-07c86187d75990336" has not yet joined cluster
Machine	i-0cda914e7bebaecf8					machine "i-0cda914e7bebaecf8" has not yet joined cluster
Machine	i-0f9d72399827e45f8					machine "i-0f9d72399827e45f8" has not yet joined cluster
Pod	kube-system/kube-dns-66b6848cf6-8zctf			kube-system pod "kube-dns-66b6848cf6-8zctf" is pending
Pod	kube-system/kube-dns-autoscaler-577b4774b5-k4h27	kube-system pod "kube-dns-autoscaler-577b4774b5-k4h27" is pending

Validation Failed
Waiting cluster to be created
Using cluster from kubectl context: test-cluster-6549.k8s.local

Validating cluster test-cluster-6549.k8s.local

INSTANCE GROUPS
... skipping 11 lines ...
VALIDATION ERRORS
KIND	NAME							MESSAGE
Node	ip-172-20-38-177.us-west-2.compute.internal		node "ip-172-20-38-177.us-west-2.compute.internal" is not ready
Pod	kube-system/kube-dns-66b6848cf6-8zctf			kube-system pod "kube-dns-66b6848cf6-8zctf" is pending
Pod	kube-system/kube-dns-autoscaler-577b4774b5-k4h27	kube-system pod "kube-dns-autoscaler-577b4774b5-k4h27" is pending

Validation Failed
Waiting cluster to be created
Using cluster from kubectl context: test-cluster-6549.k8s.local

Validating cluster test-cluster-6549.k8s.local

INSTANCE GROUPS
... skipping 10 lines ...

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-38-177.us-west-2.compute.internal	kube-system pod "kube-proxy-ip-172-20-38-177.us-west-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-53-86.us-west-2.compute.internal	kube-system pod "kube-proxy-ip-172-20-53-86.us-west-2.compute.internal" is pending

Validation Failed
Waiting cluster to be created
Using cluster from kubectl context: test-cluster-6549.k8s.local

Validating cluster test-cluster-6549.k8s.local

INSTANCE GROUPS
... skipping 682 lines ...
  -test.timeout d
    	panic test binary after duration d (default 0, timeout disabled)
  -test.trace file
    	write an execution trace to file
  -test.v
    	verbose: print additional output
FAIL	github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/cloud	0.020s
flag provided but not defined: -kubeconfig
Usage of /tmp/go-build982450430/b220/devicemanager.test:
  -test.bench regexp
    	run only benchmarks matching regexp
  -test.benchmem
    	print memory allocations for benchmarks
... skipping 36 lines ...
  -test.timeout d
    	panic test binary after duration d (default 0, timeout disabled)
  -test.trace file
    	write an execution trace to file
  -test.v
    	verbose: print additional output
FAIL	github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/cloud/devicemanager	0.019s
?   	github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/cloud/mocks	[no test files]
flag provided but not defined: -kubeconfig
Usage of /tmp/go-build982450430/b223/driver.test:
  -test.bench regexp
    	run only benchmarks matching regexp
  -test.benchmem
... skipping 37 lines ...
  -test.timeout d
    	panic test binary after duration d (default 0, timeout disabled)
  -test.trace file
    	write an execution trace to file
  -test.v
    	verbose: print additional output
FAIL	github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/driver	0.020s
flag provided but not defined: -kubeconfig
Usage of /tmp/go-build982450430/b227/internal.test:
  -test.bench regexp
    	run only benchmarks matching regexp
  -test.benchmem
    	print memory allocations for benchmarks
... skipping 36 lines ...
  -test.timeout d
    	panic test binary after duration d (default 0, timeout disabled)
  -test.trace file
    	write an execution trace to file
  -test.v
    	verbose: print additional output
FAIL	github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/driver/internal	0.051s
?   	github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/driver/mocks	[no test files]
flag provided but not defined: -kubeconfig
Usage of /tmp/go-build982450430/b230/util.test:
  -test.bench regexp
    	run only benchmarks matching regexp
  -test.benchmem
... skipping 37 lines ...
  -test.timeout d
    	panic test binary after duration d (default 0, timeout disabled)
  -test.trace file
    	write an execution trace to file
  -test.v
    	verbose: print additional output
FAIL	github.com/kubernetes-sigs/aws-ebs-csi-driver/pkg/util	0.013s
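Every unit-test `FAIL` above has the same cause: the runner passed `-kubeconfig` to every compiled test binary, and Go's `flag` package makes a binary exit with "flag provided but not defined" when it receives a flag no imported package registered; presumably only the e2e suite defines `-kubeconfig`. One possible fix, sketched here with a hypothetical helper (not the repository's actual script), is to filter the flag out of the argument list before invoking binaries that never registered it:

```shell
# Hypothetical helper for a runner script: drop the -kubeconfig flag before
# invoking unit-test binaries that never registered it (Go's flag package
# rejects unknown flags, which is what produced the FAILs above).
# Only handles the -kubeconfig=PATH form; the space-separated form
# "-kubeconfig PATH" would need extra pairing logic.
strip_kubeconfig() {
  for arg in "$@"; do
    case "$arg" in
      -kubeconfig=*) ;;            # drop the flag, keep everything else
      *) printf '%s\n' "$arg" ;;
    esac
  done
}
```

The alternative design is to split the invocations instead: run the unit-test packages with no extra flags, and pass `-kubeconfig` only to the e2e package that defines it.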
Aug 23 20:50:37.442: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
=== RUN   TestE2E
2019/08/23 20:50:37 Starting e2e run "4a2cc61e-91c3-4087-9532-2b52362c94f0" on Ginkgo node 1
Running Suite: AWS EBS CSI Driver End-to-End Tests
==================================================
Random Seed: 1566593437 - Will randomize all specs
Will run 0 of 37 specs

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Ran 0 of 37 Specs in 0.000 seconds
SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 37 Skipped
--- PASS: TestE2E (0.00s)
PASS
ok  	github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e	0.082s
?   	github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e/driver	[no test files]
?   	github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/e2e/testsuites	[no test files]
flag provided but not defined: -kubeconfig
... skipping 2 lines ...
    	If set, ginkgo will emit node output to files when running in parallel.
  -ginkgo.dryRun
    	If set, ginkgo will walk the test hierarchy without actually running anything.  Best paired with -v.
  -ginkgo.failFast
    	If set, ginkgo will stop running a test suite after a failure occurs.
  -ginkgo.failOnPending
    	If set, ginkgo will mark the test suite as failed if any specs are pending.
  -ginkgo.flakeAttempts int
    	Make up to this many attempts to run each spec. Please note that if any of the attempts succeed, the suite will not be failed. But any failures will still be recorded. (default 1)
  -ginkgo.focus string
    	If set, ginkgo will only run specs that match this regular expression.
  -ginkgo.noColor
    	If set, suppress color output in default reporter.
  -ginkgo.noisyPendings
    	If set, default reporter will shout about pending tests. (default true)
... skipping 70 lines ...
  -test.timeout d
    	panic test binary after duration d (default 0, timeout disabled)
  -test.trace file
    	write an execution trace to file
  -test.v
    	verbose: print additional output
FAIL	github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/integration	0.013s
flag provided but not defined: -kubeconfig
Usage of /tmp/go-build982450430/b842/sanity.test:
  -ginkgo.debug
    	If set, ginkgo will emit node output to files when running in parallel.
  -ginkgo.dryRun
    	If set, ginkgo will walk the test hierarchy without actually running anything.  Best paired with -v.
  -ginkgo.failFast
    	If set, ginkgo will stop running a test suite after a failure occurs.
  -ginkgo.failOnPending
    	If set, ginkgo will mark the test suite as failed if any specs are pending.
  -ginkgo.flakeAttempts int
    	Make up to this many attempts to run each spec. Please note that if any of the attempts succeed, the suite will not be failed. But any failures will still be recorded. (default 1)
  -ginkgo.focus string
    	If set, ginkgo will only run specs that match this regular expression.
  -ginkgo.noColor
    	If set, suppress color output in default reporter.
  -ginkgo.noisyPendings
    	If set, default reporter will shout about pending tests. (default true)
... skipping 70 lines ...
  -test.timeout d
    	panic test binary after duration d (default 0, timeout disabled)
  -test.trace file
    	write an execution trace to file
  -test.v
    	verbose: print additional output
FAIL	github.com/kubernetes-sigs/aws-ebs-csi-driver/tests/sanity	0.031s
./hack/run-e2e-test: line 115: popd: ./test/e2e-migration: invalid argument
popd: usage: popd [-n] [+N | -N]
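The `popd` failure above is a script bug rather than an environment issue: bash's `popd` accepts only `-n` or a stack-rotation argument (`+N`/`-N`), never a directory, so a call like `popd ./test/e2e-migration` at line 115 of `hack/run-e2e-test` is rejected with the usage message shown. The directory belongs on the matching `pushd`; a minimal bash sketch of the correct pairing (the temp directory stands in for the test directory):

```shell
#!/usr/bin/env bash
# pushd takes the directory argument; popd takes none (only -n or +N/-N)
# and returns to wherever the matching pushd left us.
start=$PWD
dir=$(mktemp -d)            # stand-in for ./test/e2e-migration
pushd "$dir" >/dev/null     # enter the test directory
: "run the migration e2e tests here"
popd >/dev/null             # no directory argument: back to $start
rmdir "$dir"
```

Since the bad `popd` only fails the directory switch at cleanup time, it likely did not cause the job failure on its own, but it leaves the script in the wrong working directory for anything that follows.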
TODO: gather controller-manager metrics to verify no in-tree calls were made.
Removing driver
release "aws-ebs-csi-driver" deleted
Deleting cluster test-cluster-6549
... skipping 113 lines ...
	vpc:vpc-072f945b98792d7ce
	route-table:rtb-0698b9560777a8f6a
	subnet:subnet-019e3e7a9b02d6256
	dhcp-options:dopt-0c18676478a2450d5
	internet-gateway:igw-021a51aa50504f8c7
subnet:subnet-019e3e7a9b02d6256	still has dependencies, will retry
W0823 20:51:37.907736   27219 retry_handler.go:99] Got RequestLimitExceeded error on AWS request (ec2::DescribeSecurityGroups)
internet-gateway:igw-021a51aa50504f8c7	still has dependencies, will retry
security-group:sg-07f2d98cd761b46fd	still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
	dhcp-options:dopt-0c18676478a2450d5
	route-table:rtb-0698b9560777a8f6a
	subnet:subnet-019e3e7a9b02d6256
... skipping 22 lines ...
route-table:rtb-0698b9560777a8f6a	ok
vpc:vpc-072f945b98792d7ce	ok
dhcp-options:dopt-0c18676478a2450d5	ok
Deleted kubectl config for test-cluster-6549.k8s.local

Deleted cluster: "test-cluster-6549.k8s.local"
Makefile:57: recipe for target 'test-e2e-migration' failed
make: *** [test-e2e-migration] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
[Barnacle] 2019/08/23 20:52:19 Cleaning up Docker data root...
[Barnacle] 2019/08/23 20:52:19 Removing all containers.
... skipping 25 lines ...