PR kaitoii11: Fix simple typo
Result FAILURE
Tests 1 failed / 1 succeeded
Started 2021-11-24 21:25
Elapsed 39m5s
Revision 9acef8b6be6a7c73a18440223aa100b487bf20de
Refs 1888

Test Failures


capz-e2e Workload cluster creation Creating a Windows enabled VMSS cluster with dockershim with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node 29m22s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sWorkload\scluster\screation\sCreating\sa\sWindows\senabled\sVMSS\scluster\swith\sdockershim\swith\sa\ssingle\scontrol\splane\snode\sand\san\sLinux\sAzureMachinePool\swith\s1\snodes\sand\sWindows\sAzureMachinePool\swith\s1\snode$'
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579
Timed out after 900.001s.
Expected
    <int>: 0
to equal
    <int>: 1
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.1/framework/machinepool_helpers.go:85
				
Click to see stdout/stderr from junit.e2e_suite.3.xml




Error lines from build-log.txt

... skipping 435 lines ...
  With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:532

INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" started at Wed, 24 Nov 2021 21:34:49 UTC on Ginkgo node 2 of 3
STEP: Creating namespace "capz-e2e-gnll5g" for hosting the cluster
Nov 24 21:34:49.748: INFO: starting to create namespace for hosting the "capz-e2e-gnll5g" test spec
2021/11/24 21:34:49 failed trying to get namespace (capz-e2e-gnll5g):namespaces "capz-e2e-gnll5g" not found
INFO: Creating namespace capz-e2e-gnll5g
INFO: Creating event watcher for namespace "capz-e2e-gnll5g"
Nov 24 21:34:49.801: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-gnll5g-win-ha
INFO: Creating the workload cluster with name "capz-e2e-gnll5g-win-ha" using the "windows" template (Kubernetes v1.22.4, 3 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 151 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-gnll5g-win-ha-control-plane-z5lfw, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-8cqjq, container kube-proxy
STEP: Creating log watcher for controller kube-system/etcd-capz-e2e-gnll5g-win-ha-control-plane-tffcx, container etcd
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-e2e-gnll5g-win-ha-control-plane-q5nhc, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-e2e-gnll5g-win-ha-control-plane-q5nhc, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/kube-flannel-ds-windows-amd64-ljvm6, container kube-flannel
STEP: Got error while iterating over activity logs for resource group capz-e2e-gnll5g-win-ha: insights.ActivityLogsClient#listNextResults: Failure sending next results request: StatusCode=500 -- Original Error: context deadline exceeded
STEP: Fetching activity logs took 30.000809845s
STEP: Dumping all the Cluster API resources in the "capz-e2e-gnll5g" namespace
STEP: Deleting all clusters in the capz-e2e-gnll5g namespace
STEP: Deleting cluster capz-e2e-gnll5g-win-ha
INFO: Waiting for the Cluster capz-e2e-gnll5g/capz-e2e-gnll5g-win-ha to be deleted
STEP: Waiting for cluster capz-e2e-gnll5g-win-ha to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-gnll5g-win-ha-control-plane-z5lfw, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-gnll5g-win-ha-control-plane-tffcx, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-5xx9t, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-4tnq8, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-vrn8h, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-gnll5g-win-ha-control-plane-z5lfw, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-jdpkg, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-gnll5g-win-ha-control-plane-z5lfw, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-jcgxn, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-t2csg, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-cvrmq, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-gg9nb, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-gnll5g-win-ha-control-plane-tffcx, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-gnll5g-win-ha-control-plane-tffcx, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-windows-amd64-ljvm6, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-tlrnv, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-gnll5g-win-ha-control-plane-z5lfw, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-gnll5g-win-ha-control-plane-tffcx, container kube-apiserver: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-gnll5g
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "With 3 control-plane nodes and 1 Linux worker node and 1 Windows worker node" ran for 24m40s on Ginkgo node 2 of 3

... skipping 10 lines ...
  with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:579

INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" started at Wed, 24 Nov 2021 21:34:50 UTC on Ginkgo node 3 of 3
STEP: Creating namespace "capz-e2e-2efiqh" for hosting the cluster
Nov 24 21:34:50.116: INFO: starting to create namespace for hosting the "capz-e2e-2efiqh" test spec
2021/11/24 21:34:50 failed trying to get namespace (capz-e2e-2efiqh):namespaces "capz-e2e-2efiqh" not found
INFO: Creating namespace capz-e2e-2efiqh
INFO: Creating event watcher for namespace "capz-e2e-2efiqh"
Nov 24 21:34:50.162: INFO: Creating cluster identity secret
%!(EXTRA string=cluster-identity-secret)INFO: Cluster name is capz-e2e-2efiqh-win-vmss
INFO: Creating the workload cluster with name "capz-e2e-2efiqh-win-vmss" using the "machine-pool-windows" template (Kubernetes v1.22.4, 1 control-plane machines, 1 worker machines)
INFO: Getting the cluster template yaml
... skipping 51 lines ...
STEP: Fetching activity logs took 562.248627ms
STEP: Dumping all the Cluster API resources in the "capz-e2e-2efiqh" namespace
STEP: Deleting all clusters in the capz-e2e-2efiqh namespace
STEP: Deleting cluster capz-e2e-2efiqh-win-vmss
INFO: Waiting for the Cluster capz-e2e-2efiqh/capz-e2e-2efiqh-win-vmss to be deleted
STEP: Waiting for cluster capz-e2e-2efiqh-win-vmss to be deleted
STEP: Got error while streaming logs for pod kube-system/etcd-capz-e2e-2efiqh-win-vmss-control-plane-8kths, container etcd: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-qzj4q, container kube-flannel: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-bwj79, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-controller-manager-capz-e2e-2efiqh-win-vmss-control-plane-8kths, container kube-controller-manager: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-scheduler-capz-e2e-2efiqh-win-vmss-control-plane-8kths, container kube-scheduler: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-proxy-v5626, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-6pzjk, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/coredns-78fcd69978-rlqsd, container coredns: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-apiserver-capz-e2e-2efiqh-win-vmss-control-plane-8kths, container kube-apiserver: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/kube-flannel-ds-amd64-h2vnx, container kube-flannel: http2: client connection lost
STEP: Deleting namespace used for hosting the "create-workload-cluster" test spec
INFO: Deleting namespace capz-e2e-2efiqh
STEP: Checking if any resources are left over in Azure for spec "create-workload-cluster"
STEP: Redacting sensitive information from logs
INFO: "with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node" ran for 29m22s on Ginkgo node 3 of 3

... skipping 55 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a Windows enabled VMSS cluster with dockershim [It] with a single control plane node and an Linux AzureMachinePool with 1 nodes and Windows AzureMachinePool with 1 node 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.1/framework/machinepool_helpers.go:85

Ran 2 of 24 Specs in 2007.486 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 22 Skipped


Ginkgo ran 1 suite in 34m57.646337865s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
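Either of the two mechanisms the notice describes suppresses it on subsequent runs; a minimal sketch:

```shell
# Either mechanism silences the Ginkgo 2.0 release-candidate notice:
export ACK_GINKGO_RC=true      # per-shell, via environment variable
touch "$HOME/.ack-ginkgo-rc"   # persistent, via a marker file in $HOME
```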
make[1]: *** [Makefile:176: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:184: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...