PR: devigned: do not include customData in AzureMachinePool hash calculation
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-03-03 19:32
Elapsed: 35m35s
Revision: 3aeef19c2ccda264105106f4c9574085b5aa187e
Refs: 1197

No Test Failures!


Error lines from build-log.txt

... skipping 452 lines ...
STEP: Fetching activity logs took 928.867976ms
STEP: Dumping all the Cluster API resources in the "create-workload-cluster-uggvdj" namespace
STEP: Deleting all clusters in the create-workload-cluster-uggvdj namespace
STEP: Deleting cluster capz-e2e-d3i9m0
INFO: Waiting for the Cluster create-workload-cluster-uggvdj/capz-e2e-d3i9m0 to be deleted
STEP: Waiting for cluster capz-e2e-d3i9m0 to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-d3i9m0-control-plane-rshlh, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-d3i9m0-control-plane-ngckn, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-8l44t, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-d3i9m0-control-plane-65ttq, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-d3i9m0-control-plane-ngckn, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bzt69, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-gbw6x, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-d3i9m0-control-plane-65ttq, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-d3i9m0-control-plane-ngckn, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-d3i9m0-control-plane-rshlh, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-djwdf, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-wpsx8, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-d3i9m0-control-plane-65ttq, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-d3i9m0-control-plane-rshlh, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-d3i9m0-control-plane-ngckn, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-2lnps, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-d3i9m0-control-plane-65ttq, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-kpg8d, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-wkmk5, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-dgvn6, container calico-node: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-f9fd979d6-l4cvx, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-kube-controllers-8f59968d4-gfml9, container calico-kube-controllers: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace create-workload-cluster-uggvdj
STEP: Redacting sensitive information from logs
INFO: "With ipv6 worker node" ran for 23m3s on Ginkgo node 2 of 3


... skipping 2 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:39
  Creating a ipv6 control-plane cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:217
    With ipv6 worker node
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:218
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2021-03-03T20:07:58Z"}