PR        leakingtapan: Switch to use new test framework
Result    FAILURE
Tests     0 failed / 0 succeeded
Started   2019-09-07 01:13
Elapsed   21m5s
Revision  89dc52e578834dbebb37d0951b304f64165521d5
Refs      1009

No Test Failures!


Error lines from build-log.txt

... skipping 1129 lines ...

Using cluster from kubectl context: test-cluster-3688.k8s.local

Validating cluster test-cluster-3688.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-3688-k8s-dvkrum-1840575406.us-west-2.elb.amazonaws.com/api/v1/nodes: dial tcp: lookup api-test-cluster-3688-k8s-dvkrum-1840575406.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Using cluster from kubectl context: test-cluster-3688.k8s.local

Validating cluster test-cluster-3688.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-3688-k8s-dvkrum-1840575406.us-west-2.elb.amazonaws.com/api/v1/nodes: dial tcp: lookup api-test-cluster-3688-k8s-dvkrum-1840575406.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Using cluster from kubectl context: test-cluster-3688.k8s.local

Validating cluster test-cluster-3688.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-3688-k8s-dvkrum-1840575406.us-west-2.elb.amazonaws.com/api/v1/nodes: dial tcp: lookup api-test-cluster-3688-k8s-dvkrum-1840575406.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Using cluster from kubectl context: test-cluster-3688.k8s.local

Validating cluster test-cluster-3688.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-3688-k8s-dvkrum-1840575406.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-3688.k8s.local

Validating cluster test-cluster-3688.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-3688-k8s-dvkrum-1840575406.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-3688.k8s.local

Validating cluster test-cluster-3688.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-3688-k8s-dvkrum-1840575406.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-3688.k8s.local

Validating cluster test-cluster-3688.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-3688-k8s-dvkrum-1840575406.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-3688.k8s.local

Validating cluster test-cluster-3688.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-3688-k8s-dvkrum-1840575406.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-3688.k8s.local

Validating cluster test-cluster-3688.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-3688-k8s-dvkrum-1840575406.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-3688.k8s.local

Validating cluster test-cluster-3688.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-3688-k8s-dvkrum-1840575406.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-3688.k8s.local

Validating cluster test-cluster-3688.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
... skipping 9 lines ...
Machine	i-0213e0a860b6f7cd0					machine "i-0213e0a860b6f7cd0" has not yet joined cluster
Machine	i-0c169c9b17d548020					machine "i-0c169c9b17d548020" has not yet joined cluster
Machine	i-0fffe016091a1aeb0					machine "i-0fffe016091a1aeb0" has not yet joined cluster
Pod	kube-system/kube-dns-685fbb458-xkvmv			kube-system pod "kube-dns-685fbb458-xkvmv" is pending
Pod	kube-system/kube-dns-autoscaler-74887878cc-jt4rh	kube-system pod "kube-dns-autoscaler-74887878cc-jt4rh" is pending

Validation Failed
Using cluster from kubectl context: test-cluster-3688.k8s.local

Validating cluster test-cluster-3688.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
... skipping 297 lines ...
Ingress with multi-path echo backend
/home/prow/go/src/github.com/kubernetes-sigs/aws-alb-ingress-controller/test/e2e/ingress/multi_path_echo.go:190
  [mod-ip] should work [It]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-alb-ingress-controller/test/e2e/ingress/multi_path_echo.go:212

  all targets in arn:aws:elasticloadbalancing:us-west-2:607362164682:targetgroup/714cd65e-adc904fd0ca7809c89e/08791ab169d5bf30 should be healthy
  Expected error:
      <*awserr.baseError | 0xc000824340>: {
          code: "RequestCanceled",
          message: "request context canceled",
          errs: [{}],
      }
      RequestCanceled: request context canceled
... skipping 4 lines ...
------------------------------
Sep  7 01:33:25.970: INFO: Running AfterSuite actions on all nodes


Summarizing 1 Failure:

[Fail] Ingress with multi-path echo backend [It] [mod-ip] should work 
/home/prow/go/src/github.com/kubernetes-sigs/aws-alb-ingress-controller/test/e2e/ingress/shared/targets.go:36

Ran 2 of 2 Specs in 441.187 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 0 Skipped
--- FAIL: TestE2E (441.19s)
FAIL

Ginkgo ran 1 suite in 8m51.694651953s
Test Suite Failed
2019/09/07 01:33:26 Deleting cluster test-cluster-3688.k8s.local
TYPE			NAME											ID
autoscaling-config	master-us-west-2a.masters.test-cluster-3688.k8s.local-20190907011809			master-us-west-2a.masters.test-cluster-3688.k8s.local-20190907011809
autoscaling-config	nodes.test-cluster-3688.k8s.local-20190907011809					nodes.test-cluster-3688.k8s.local-20190907011809
autoscaling-group	master-us-west-2a.masters.test-cluster-3688.k8s.local					master-us-west-2a.masters.test-cluster-3688.k8s.local
autoscaling-group	nodes.test-cluster-3688.k8s.local							nodes.test-cluster-3688.k8s.local
... skipping 142 lines ...
dhcp-options:dopt-020b708ff42a36358	ok
Deleted kubectl config for test-cluster-3688.k8s.local

Deleted cluster: "test-cluster-3688.k8s.local"
2019/09/07 01:34:34 exit status 1
exit status 1
Makefile:58: recipe for target 'e2e-test' failed
make: *** [e2e-test] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
[Barnacle] 2019/09/07 01:34:34 Cleaning up Docker data root...
[Barnacle] 2019/09/07 01:34:34 Removing all containers.
... skipping 25 lines ...