PR shysank: v1beta1 cluster upgrade tests (using clusterctl upgrade)
Result FAILURE
Tests 1 failed / 0 succeeded
Started 2021-10-22 17:30
Elapsed 32m11s
Revision 5ca3ec6d902725d81532c7ce0569775a300dbd26
Refs 1771

Test Failures


capz-e2e Running the Cluster API E2E tests upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 Should create a management cluster and then upgrade all the providers 24m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sRunning\sthe\sCluster\sAPI\sE2E\stests\supgrade\sfrom\sv1alpha4\sto\sv1beta1\,\sand\sscale\sworkload\sclusters\screated\sin\sv1alpha4\sShould\screate\sa\smanagement\scluster\sand\sthen\supgrade\sall\sthe\sproviders$'
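
To iterate on this spec locally, the cluster-api-provider-azure docs use the Makefile's test-e2e target (the same target visible near the end of the build log) with a Ginkgo focus; a sketch, assuming your checkout honors the GINKGO_FOCUS variable:

GINKGO_FOCUS='upgrade from v1alpha4 to v1beta1' make test-e2e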
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/clusterctl_upgrade.go:145
failed to run clusterctl init:
Fetching providers
Installing cert-manager Version="v1.5.3"
Waiting for cert-manager to be available...
Error: timed out waiting for the condition

Unexpected error:
    <*exec.ExitError | 0xc00055a260>: {
        ProcessState: {
            pid: 28341,
            status: 256,
            rusage: {
                Utime: {Sec: 3, Usec: 29862},
                Stime: {Sec: 0, Usec: 871500},
                Maxrss: 76412,
                Ixrss: 0,
                Idrss: 0,
                Isrss: 0,
                Minflt: 15539,
                Majflt: 0,
                Nswap: 0,
                Inblock: 0,
                Oublock: 0,
                Msgsnd: 0,
                Msgrcv: 0,
                Nsignals: 0,
                Nvcsw: 48302,
                Nivcsw: 1707,
            },
        },
        Stderr: nil,
    }
    exit status 1
occurred
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/clusterctl/client.go:108
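
The root cause here is clusterctl init timing out while waiting for cert-manager to come up on the management cluster, so the providers were never installed. A minimal triage sketch against the management cluster, assuming kubectl access and cert-manager's default namespace (both assumptions, not shown in this log):

    # Check whether the cert-manager pods ever became Ready
    kubectl get pods -n cert-manager -o wide

    # Inspect rollout status and recent events for the stuck deployment
    kubectl describe deployment -n cert-manager cert-manager
    kubectl get events -n cert-manager --sort-by=.lastTimestamp

    # Webhook readiness is the usual blocker for "available" checks
    kubectl logs -n cert-manager deploy/cert-manager-webhook --tail=100

Pods stuck in Pending or ImagePullBackOff typically point at registry throttling or node capacity rather than a cert-manager regression.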
				
				Click to see stdout/stderrfrom junit.e2e_suite.3.xml




Error lines from build-log.txt

... skipping 476 lines ...
Oct 22 17:55:27.825: INFO: Collecting boot logs for AzureMachine clusterctl-upgrade-hftbmq-md-0-thslt

Oct 22 17:55:28.140: INFO: Collecting logs for node 10.1.0.5 in cluster clusterctl-upgrade-hftbmq in namespace clusterctl-upgrade-hg3dik

Oct 22 17:55:54.828: INFO: Collecting boot logs for AzureMachine clusterctl-upgrade-hftbmq-md-win-5w5rf

Failed to get logs for machine clusterctl-upgrade-hftbmq-md-win-78c9ccd977-rv4d4, cluster clusterctl-upgrade-hg3dik/clusterctl-upgrade-hftbmq: [running command "get-eventlog -LogName Application -Source Docker | Select-Object Index, TimeGenerated, EntryType, Message | Sort-Object Index | Format-Table -Wrap -Autosize": Process exited with status 1, running command "docker ps -a": Process exited with status 1]
STEP: Dumping workload cluster clusterctl-upgrade-hg3dik/clusterctl-upgrade-hftbmq kube-system pod logs
STEP: Fetching kube-system pod logs took 223.021961ms
STEP: Creating log watcher for controller kube-system/calico-node-windows-8j7pn, container calico-node-startup
STEP: Creating log watcher for controller kube-system/calico-node-cdrvn, container calico-node
STEP: Creating log watcher for controller kube-system/kube-controller-manager-clusterctl-upgrade-hftbmq-control-plane-qgc4g, container kube-controller-manager
STEP: Dumping workload cluster clusterctl-upgrade-hg3dik/clusterctl-upgrade-hftbmq Azure activity log
... skipping 5 lines ...
STEP: Creating log watcher for controller kube-system/etcd-clusterctl-upgrade-hftbmq-control-plane-qgc4g, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-7zqg8, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-7blnr, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-windows-h4k8k, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-clusterctl-upgrade-hftbmq-control-plane-qgc4g, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-scheduler-clusterctl-upgrade-hftbmq-control-plane-qgc4g, container kube-scheduler
STEP: Error fetching activity logs for resource group : insights.ActivityLogsClient#List: Failure responding to request: StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="BadRequest" Message="Query parameter cannot be null empty or whitespace: resourceGroupName."
STEP: Fetching activity logs took 211.705352ms
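The 400 above is a harness problem rather than an Azure failure: the resourceGroupName handed to the ActivityLogsClient was empty, so the query was rejected before reaching the service. If the activity log is needed, it can be fetched by hand; a sketch using the Azure CLI, assuming the resource group follows the usual CAPZ convention of matching the cluster name (an assumption, not confirmed by this log):

    # List recent activity-log entries for the workload cluster's resource group
    az monitor activity-log list --resource-group clusterctl-upgrade-hftbmq --offset 2h --output table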
STEP: Dumping all the Cluster API resources in the "clusterctl-upgrade-hg3dik" namespace
STEP: Deleting cluster clusterctl-upgrade-hg3dik/clusterctl-upgrade-hftbmq
STEP: Deleting cluster clusterctl-upgrade-hftbmq
INFO: Waiting for the Cluster clusterctl-upgrade-hg3dik/clusterctl-upgrade-hftbmq to be deleted
STEP: Waiting for cluster clusterctl-upgrade-hftbmq to be deleted
STEP: Got error while streaming logs for pod kube-system/kube-proxy-windows-h4k8k, container kube-proxy: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-8j7pn, container calico-node-startup: http2: client connection lost
STEP: Got error while streaming logs for pod kube-system/calico-node-windows-8j7pn, container calico-node-felix: http2: client connection lost
STEP: Deleting namespace used for hosting the "clusterctl-upgrade" test spec
INFO: Deleting namespace clusterctl-upgrade-hg3dik
STEP: Redacting sensitive information from logs


• Failure [1498.814 seconds]
Running the Cluster API E2E tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:39
  upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/capi_test.go:115
    Should create a management cluster and then upgrade all the providers [It]
    /home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/e2e/clusterctl_upgrade.go:145

    failed to run clusterctl init:
    Fetching providers
    Installing cert-manager Version="v1.5.3"
    Waiting for cert-manager to be available...
    Error: timed out waiting for the condition
    
    Unexpected error:
        <*exec.ExitError | 0xc00055a260>: {
            ProcessState: {
                pid: 28341,
                status: 256,
                rusage: {
                    Utime: {Sec: 3, Usec: 29862},
... skipping 60 lines ...
STEP: Tearing down the management cluster



Summarizing 1 Failure:

[Fail] Running the Cluster API E2E tests upgrade from v1alpha4 to v1beta1, and scale workload clusters created in v1alpha4 [It] Should create a management cluster and then upgrade all the providers 
/home/prow/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.0.0/framework/clusterctl/client.go:108

Ran 1 of 11 Specs in 1616.302 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 10 Skipped


Ginkgo ran 1 suite in 28m15.530230266s
Test Suite Failed

Ginkgo 2.0 is coming soon!
==========================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
  - For instructions on using the Release Candidate visit https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md#using-the-beta
  - To comment, chime in at https://github.com/onsi/ginkgo/issues/711

To silence this notice, set the environment variable: ACK_GINKGO_RC=true
Alternatively you can: touch $HOME/.ack-ginkgo-rc
make[1]: *** [Makefile:173: test-e2e-run] Error 1
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:181: test-e2e] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 5 lines ...