PR: fabriziopandini: 📖 Improve implementers guide
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2020-07-08 12:18
Elapsed: 45m36s
Revision: 1de9e41f276c1ae243f5f21fe97b84667180f908
Refs: 3307
ResultStore: https://source.cloud.google.com/results/invocations/26da925d-a26b-47f1-88bd-c8b3816999f9/targets/test

Test Failures


capi-e2e When testing unhealthy machines remediation Should successfully remediate unhealthy machines with MachineHealthCheck (10m28s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\stesting\sunhealthy\smachines\sremediation\sShould\ssuccessfully\sremediate\sunhealthy\smachines\swith\sMachineHealthCheck$'
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/mhc_remediations.go:69
Timed out after 600.010s.
Expected
    <bool>: false
to be true
/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/controlpane_helpers.go:144
Full stdout/stderr is recorded in junit.e2e_suite.1.xml.




Error lines from build-log.txt

... skipping 1278 lines ...
INFO: Creating log watcher for controller capi-webhook-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-56889bc4b7-mqsh9, container manager
STEP: waiting for deployment capi-webhook-system/capi-kubeadm-control-plane-controller-manager to be available
INFO: Creating log watcher for controller capi-webhook-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-77cf5bfdcf-h82qt, container kube-rbac-proxy
INFO: Creating log watcher for controller capi-webhook-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-77cf5bfdcf-h82qt, container manager
STEP: Moving the cluster to self hosted
STEP: Moving workload clusters
INFO: Error starting logs stream for pod capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-6c85dd75c6-nrxs9, container manager: Get https://172.17.0.8:10250/containerLogs/capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager-6c85dd75c6-nrxs9/manager?follow=true: net/http: TLS handshake timeout
INFO: Error starting logs stream for pod capi-webhook-system/capi-kubeadm-control-plane-controller-manager-77cf5bfdcf-h82qt, container manager: Get https://172.17.0.8:10250/containerLogs/capi-webhook-system/capi-kubeadm-control-plane-controller-manager-77cf5bfdcf-h82qt/manager?follow=true: net/http: TLS handshake timeout
INFO: Error starting logs stream for pod capi-webhook-system/capi-kubeadm-control-plane-controller-manager-77cf5bfdcf-h82qt, container kube-rbac-proxy: Get https://172.17.0.8:10250/containerLogs/capi-webhook-system/capi-kubeadm-control-plane-controller-manager-77cf5bfdcf-h82qt/kube-rbac-proxy?follow=true: net/http: TLS handshake timeout
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: waiting for cluster to enter the provisioned phase
STEP: PASSED!
STEP: Moving the cluster back to bootstrap
STEP: Moving workload clusters
INFO: Waiting for the cluster infrastructure to be provisioned
... skipping 384 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/md_upgrades_test.go:27
  Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/md_upgrades.go:72
------------------------------
STEP: Tearing down the management cluster
W0708 13:03:30.552425   19770 reflector.go:328] pkg/mod/k8s.io/client-go@v0.17.7/tools/cache/reflector.go:105: watch of *v1.Event ended with: very short watch: pkg/mod/k8s.io/client-go@v0.17.7/tools/cache/reflector.go:105: Unexpected watch close - watch lasted less than a second and no items received
E0708 13:03:31.555425   19770 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.7/tools/cache/reflector.go:105: Failed to list *v1.Event: Get "https://127.0.0.1:35203/api/v1/namespaces/mhc-remediation-fooupc/events?limit=500&resourceVersion=0": dial tcp 127.0.0.1:35203: connect: connection refused
E0708 13:03:32.556363   19770 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.7/tools/cache/reflector.go:105: Failed to list *v1.Event: Get "https://127.0.0.1:35203/api/v1/namespaces/mhc-remediation-fooupc/events?limit=500&resourceVersion=0": dial tcp 127.0.0.1:35203: connect: connection refused



Summarizing 1 Failure:

[Fail] When testing unhealthy machines remediation [It] Should successfully remediate unhealthy machines with MachineHealthCheck 
/home/prow/go/src/sigs.k8s.io/cluster-api/test/framework/controlpane_helpers.go:144

Ran 8 of 8 Specs in 2279.404 seconds
FAIL! -- 7 Passed | 1 Failed | 0 Pending | 0 Skipped


Ginkgo ran 1 suite in 39m3.542833252s
Test Suite Failed
make: *** [Makefile:62: run] Error 1
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
... skipping 7 lines ...