PR: leakingtapan: Switch to use new test framework
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-09-07 00:44
Elapsed: 23m5s
Revision: a2e14fee7f33544e135ae23bde3e8b40f50102cb
Refs: 1009

No Test Failures!


Error lines from build-log.txt

... skipping 1109 lines ...

Using cluster from kubectl context: test-cluster-7823.k8s.local

Validating cluster test-cluster-7823.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-7823-k8s-mv6jo0-851034182.us-west-2.elb.amazonaws.com/api/v1/nodes: dial tcp: lookup api-test-cluster-7823-k8s-mv6jo0-851034182.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Using cluster from kubectl context: test-cluster-7823.k8s.local

Validating cluster test-cluster-7823.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-7823-k8s-mv6jo0-851034182.us-west-2.elb.amazonaws.com/api/v1/nodes: dial tcp: lookup api-test-cluster-7823-k8s-mv6jo0-851034182.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Using cluster from kubectl context: test-cluster-7823.k8s.local

Validating cluster test-cluster-7823.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-7823-k8s-mv6jo0-851034182.us-west-2.elb.amazonaws.com/api/v1/nodes: dial tcp: lookup api-test-cluster-7823-k8s-mv6jo0-851034182.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
Using cluster from kubectl context: test-cluster-7823.k8s.local

Validating cluster test-cluster-7823.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-7823-k8s-mv6jo0-851034182.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-7823.k8s.local

Validating cluster test-cluster-7823.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-7823-k8s-mv6jo0-851034182.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-7823.k8s.local

Validating cluster test-cluster-7823.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-7823-k8s-mv6jo0-851034182.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-7823.k8s.local

Validating cluster test-cluster-7823.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-7823-k8s-mv6jo0-851034182.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-7823.k8s.local

Validating cluster test-cluster-7823.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-7823-k8s-mv6jo0-851034182.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
Using cluster from kubectl context: test-cluster-7823.k8s.local

Validating cluster test-cluster-7823.k8s.local


unexpected error during validation: error listing nodes: Get https://api-test-cluster-7823-k8s-mv6jo0-851034182.us-west-2.elb.amazonaws.com/api/v1/nodes: EOF
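(The repeated failures above are the validation loop polling the cluster API through its ELB hostname: first the name does not resolve at all ("no such host"), then the apiserver is not yet serving ("EOF"). A minimal Go sketch of that kind of wait, written here only for illustration and not taken from the actual kops or test-harness code:

// Illustrative sketch, assumptions only: retry until the API ELB hostname
// resolves and accepts TCP connections on :443.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForClusterAPI(host string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// "no such host" above means this lookup is still failing: the ELB's
		// DNS record has not propagated yet.
		if _, err := net.LookupHost(host); err == nil {
			// Resolvable; the EOF errors above correspond to the window where
			// the name resolves but the apiserver is not yet serving reliably.
			if conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "443"), 10*time.Second); err == nil {
				conn.Close()
				return nil
			}
		}
		time.Sleep(15 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s to become reachable", host)
}

func main() {
	// API hostname taken from the errors above.
	err := waitForClusterAPI("api-test-cluster-7823-k8s-mv6jo0-851034182.us-west-2.elb.amazonaws.com", 10*time.Minute)
	fmt.Println(err)
}

Once both conditions clear, validation starts returning instance-group and node state, as in the attempts that follow.)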
Using cluster from kubectl context: test-cluster-7823.k8s.local

Validating cluster test-cluster-7823.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
... skipping 7 lines ...
KIND	NAME			MESSAGE
Machine	i-067699cecbe5d18f8	machine "i-067699cecbe5d18f8" has not yet joined cluster
Machine	i-07d6535476d0619ae	machine "i-07d6535476d0619ae" has not yet joined cluster
Machine	i-0a49ba8c542362277	machine "i-0a49ba8c542362277" has not yet joined cluster
Machine	i-0d89f076900b0dff6	machine "i-0d89f076900b0dff6" has not yet joined cluster

Validation Failed
Using cluster from kubectl context: test-cluster-7823.k8s.local

Validating cluster test-cluster-7823.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
... skipping 9 lines ...
Machine	i-07d6535476d0619ae					machine "i-07d6535476d0619ae" has not yet joined cluster
Machine	i-0a49ba8c542362277					machine "i-0a49ba8c542362277" has not yet joined cluster
Machine	i-0d89f076900b0dff6					machine "i-0d89f076900b0dff6" has not yet joined cluster
Pod	kube-system/kube-dns-66b6848cf6-drtld			kube-system pod "kube-dns-66b6848cf6-drtld" is pending
Pod	kube-system/kube-dns-autoscaler-577b4774b5-48t6b	kube-system pod "kube-dns-autoscaler-577b4774b5-48t6b" is pending

Validation Failed
Using cluster from kubectl context: test-cluster-7823.k8s.local

Validating cluster test-cluster-7823.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
... skipping 9 lines ...

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-82-83.us-west-2.compute.internal	kube-system pod "kube-proxy-ip-172-20-82-83.us-west-2.compute.internal" is pending
Pod	kube-system/kube-proxy-ip-172-20-98-76.us-west-2.compute.internal	kube-system pod "kube-proxy-ip-172-20-98-76.us-west-2.compute.internal" is pending

Validation Failed
Using cluster from kubectl context: test-cluster-7823.k8s.local

Validating cluster test-cluster-7823.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
... skipping 8 lines ...
ip-172-20-98-76.us-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-46-104.us-west-2.compute.internal	kube-system pod "kube-proxy-ip-172-20-46-104.us-west-2.compute.internal" is pending

Validation Failed
Using cluster from kubectl context: test-cluster-7823.k8s.local

Validating cluster test-cluster-7823.k8s.local

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
... skipping 298 lines ...
Ingress with multi-path echo backend
/home/prow/go/src/github.com/kubernetes-sigs/aws-alb-ingress-controller/test/e2e/ingress/multi_path_echo.go:190
  [mod-ip] should work [It]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-alb-ingress-controller/test/e2e/ingress/multi_path_echo.go:212

  all targets in arn:aws:elasticloadbalancing:us-west-2:607362164682:targetgroup/5b103ad0-38ccdfa24a786b9519a/4cdec6c1ead39073 should be healthy
  Expected error:
      <*awserr.baseError | 0xc0004d4340>: {
          code: "RequestCanceled",
          message: "request context canceled",
          errs: [{}],
      }
      RequestCanceled: request context canceled
... skipping 4 lines ...
------------------------------
Sep  7 01:05:59.910: INFO: Running AfterSuite actions on all nodes


Summarizing 1 Failure:

[Fail] Ingress with multi-path echo backend [It] [mod-ip] should work 
/home/prow/go/src/github.com/kubernetes-sigs/aws-alb-ingress-controller/test/e2e/ingress/shared/targets.go:36

Ran 2 of 2 Specs in 435.547 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 0 Skipped
--- FAIL: TestE2E (435.55s)
FAIL

Ginkgo ran 1 suite in 8m3.880924626s
Test Suite Failed
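(The single failure comes from the assertion at test/e2e/ingress/shared/targets.go:36, which expects all targets in the ALB target group to become healthy; the awserr "RequestCanceled: request context canceled" indicates the wait's context was cancelled before that happened. A rough Go sketch of that kind of wait against the ELBv2 API, an illustration only and not the controller's actual helper:

// Illustrative sketch, assumptions only: poll DescribeTargetHealth until every
// target reports healthy or the context is cancelled.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

func waitForHealthyTargets(ctx context.Context, api *elbv2.ELBV2, tgARN string) error {
	for {
		out, err := api.DescribeTargetHealthWithContext(ctx, &elbv2.DescribeTargetHealthInput{
			TargetGroupArn: aws.String(tgARN),
		})
		if err != nil {
			// A cancelled or expired ctx surfaces here as
			// "RequestCanceled: request context canceled".
			return err
		}
		healthy := 0
		for _, d := range out.TargetHealthDescriptions {
			if aws.StringValue(d.TargetHealth.State) == elbv2.TargetHealthStateEnumHealthy {
				healthy++
			}
		}
		if healthy > 0 && healthy == len(out.TargetHealthDescriptions) {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(10 * time.Second):
		}
	}
}

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	// Target group ARN taken from the failure message above.
	err := waitForHealthyTargets(ctx, elbv2.New(sess),
		"arn:aws:elasticloadbalancing:us-west-2:607362164682:targetgroup/5b103ad0-38ccdfa24a786b9519a/4cdec6c1ead39073")
	fmt.Println("wait result:", err)
}

With a context that is cancelled or times out, the SDK returns exactly the RequestCanceled error reported above, which Ginkgo then records as the expectation failure.)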
2019/09/07 01:05:59 Deleting cluster test-cluster-7823.k8s.local
TYPE			NAME											ID
autoscaling-config	master-us-west-2a.masters.test-cluster-7823.k8s.local-20190907005035			master-us-west-2a.masters.test-cluster-7823.k8s.local-20190907005035
autoscaling-config	nodes.test-cluster-7823.k8s.local-20190907005035					nodes.test-cluster-7823.k8s.local-20190907005035
autoscaling-group	master-us-west-2a.masters.test-cluster-7823.k8s.local					master-us-west-2a.masters.test-cluster-7823.k8s.local
autoscaling-group	nodes.test-cluster-7823.k8s.local							nodes.test-cluster-7823.k8s.local
... skipping 166 lines ...
dhcp-options:dopt-05b8579e1a52deda0	ok
Deleted kubectl config for test-cluster-7823.k8s.local

Deleted cluster: "test-cluster-7823.k8s.local"
2019/09/07 01:07:29 exit status 1
exit status 1
Makefile:58: recipe for target 'e2e-test' failed
make: *** [e2e-test] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
[Barnacle] 2019/09/07 01:07:29 Cleaning up Docker data root...
[Barnacle] 2019/09/07 01:07:29 Removing all containers.
... skipping 25 lines ...