Result: FAILURE
Tests: 4 failed / 19 succeeded
Started: 2020-10-23 19:00
Elapsed: 27m12s
Revision:
Builder: 01b88349-1562-11eb-b256-6ee25ea2e440
infra-commit: 18bef7827
job-version: v1.18.11-rc.0.13+806617c8cf1f0d
master_os_image: cos-81-12871-59-0
node_os_image: gke-1134-gke-rc5-cos-69-10895-138-0-v190320-pre-nvda-gpu
revision: v1.18.11-rc.0.13+806617c8cf1f0d

Test Failures


GPU master upgrade [sig-node] gpu-master-upgrade 4m42s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=GPU\smaster\supgrade\s\[sig\-node\]\sgpu\-master\-upgrade$'
Oct 23 19:17:34.827: Unexpected error:
    <*errors.errorString | 0xc0029ae050>: {
        s: "error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.19.4-rc.0.22+9e8ad8ce9d8a30]; got error exit status 1, stdout \"Fetching the previously installed CoreDNS version\\n\\n***WARNING***\\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\\nbefore running this script:\\n\\n# example: pin to etcd v3.0.17\\nexport ETCD_IMAGE=3.0.17\\nexport ETCD_VERSION=3.0.17\\n\\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\\n\\n== Pre-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.11-rc.0.13+806617c8cf1f0d\\\"\\nname: \\\"bootstrap-e2e-minion-group-1hgv\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.11-rc.0.13+806617c8cf1f0d\\\"\\nname: \\\"bootstrap-e2e-minion-group-gqbh\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.11-rc.0.13+806617c8cf1f0d\\\"\\nname: \\\"bootstrap-e2e-minion-group-mvhm\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.11-rc.0.13+806617c8cf1f0d\\\"\\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.19.4-rc.0.22+9e8ad8ce9d8a30/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\\n== Upgrading master environment variables. 
==\\n== Waiting for new master to respond to API requests ==\\n.............................== Done ==\\nWaiting for CoreDNS to update\\nFetching the latest installed CoreDNS version\\n== Downloading the CoreDNS migration tool ==\\n== Upgrading the CoreDNS ConfigMap ==\\nconfigmap/coredns configured\\n== The CoreDNS Config has been updated ==\\n== Validating cluster post-upgrade ==\\nValidating gce cluster, MULTIZONE=\\nFound 4 node(s).\\nNAME                              STATUS                     ROLES    AGE   VERSION\\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   10m   v1.19.4-rc.0.22+9e8ad8ce9d8a30\\nbootstrap-e2e-minion-group-1hgv   Ready                      <none>   10m   v1.18.11-rc.0.13+806617c8cf1f0d\\nbootstrap-e2e-minion-group-gqbh   Ready                      <none>   10m   v1.18.11-rc.0.13+806617c8cf1f0d\\nbootstrap-e2e-minion-group-mvhm   Ready                      <none>   10m   v1.18.11-rc.0.13+806617c8cf1f0d\\nValidate output:\\nNAME                 STATUS    MESSAGE             ERROR\\ncontroller-manager   Healthy   ok                  \\netcd-1               Healthy   {\\\"health\\\":\\\"true\\\"}   \\nscheduler            Healthy   ok                  \\netcd-0               Healthy   {\\\"health\\\":\\\"true\\\"}   \\n\\x1b[0;32mCluster validation succeeded\\x1b[0m\\n== Post-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.19.4-rc.0.22+9e8ad8ce9d8a30\\\"\\nname: \\\"bootstrap-e2e-minion-group-1hgv\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.11-rc.0.13+806617c8cf1f0d\\\"\\nname: \\\"bootstrap-e2e-minion-group-gqbh\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.11-rc.0.13+806617c8cf1f0d\\\"\\nname: \\\"bootstrap-e2e-minion-group-mvhm\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.11-rc.0.13+806617c8cf1f0d\\\"\\n\", stderr \"Project: k8s-jkns-e2e-gce-gpus-beta\\nNetwork Project: k8s-jkns-e2e-gce-gpus-beta\\nZone: us-west1-b\\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\\nNODE_NAMES=bootstrap-e2e-minion-group-1hgv bootstrap-e2e-minion-group-gqbh bootstrap-e2e-minion-group-mvhm\\nTrying to find master named 'bootstrap-e2e-master'\\nLooking for address 'bootstrap-e2e-master-ip'\\nUsing master: bootstrap-e2e-master (external IP: 35.233.214.177; internal IP: (not set))\\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-gpus-beta/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\\nCreated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-gpus-beta/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: Some requests generated warnings:\\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\\n - The resource 'projects/cos-cloud/global/images/cos-81-12871-59-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-81-12871-69-0'.\\n\\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS\\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   35.233.214.177  RUNNING\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nProject: k8s-jkns-e2e-gce-gpus-beta\\nNetwork Project: k8s-jkns-e2e-gce-gpus-beta\\nZone: us-west1-b\\nWarning: v1 ComponentStatus is deprecated in v1.19+\\nWarning: v1 ComponentStatus is deprecated in v1.19+\\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 465: download_dir: unbound variable\\n\"",
    }
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.19.4-rc.0.22+9e8ad8ce9d8a30]; got error exit status 1, stdout "Fetching the previously installed CoreDNS version\n\n***WARNING***\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\nbefore running this script:\n\n# example: pin to etcd v3.0.17\nexport ETCD_IMAGE=3.0.17\nexport ETCD_VERSION=3.0.17\n\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\n\n== Pre-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.11-rc.0.13+806617c8cf1f0d\"\nname: \"bootstrap-e2e-minion-group-1hgv\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.11-rc.0.13+806617c8cf1f0d\"\nname: \"bootstrap-e2e-minion-group-gqbh\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.11-rc.0.13+806617c8cf1f0d\"\nname: \"bootstrap-e2e-minion-group-mvhm\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.11-rc.0.13+806617c8cf1f0d\"\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.19.4-rc.0.22+9e8ad8ce9d8a30/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\n== Upgrading master environment variables. 
==\n== Waiting for new master to respond to API requests ==\n.............................== Done ==\nWaiting for CoreDNS to update\nFetching the latest installed CoreDNS version\n== Downloading the CoreDNS migration tool ==\n== Upgrading the CoreDNS ConfigMap ==\nconfigmap/coredns configured\n== The CoreDNS Config has been updated ==\n== Validating cluster post-upgrade ==\nValidating gce cluster, MULTIZONE=\nFound 4 node(s).\nNAME                              STATUS                     ROLES    AGE   VERSION\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   10m   v1.19.4-rc.0.22+9e8ad8ce9d8a30\nbootstrap-e2e-minion-group-1hgv   Ready                      <none>   10m   v1.18.11-rc.0.13+806617c8cf1f0d\nbootstrap-e2e-minion-group-gqbh   Ready                      <none>   10m   v1.18.11-rc.0.13+806617c8cf1f0d\nbootstrap-e2e-minion-group-mvhm   Ready                      <none>   10m   v1.18.11-rc.0.13+806617c8cf1f0d\nValidate output:\nNAME                 STATUS    MESSAGE             ERROR\ncontroller-manager   Healthy   ok                  \netcd-1               Healthy   {\"health\":\"true\"}   \nscheduler            Healthy   ok                  \netcd-0               Healthy   {\"health\":\"true\"}   \n\x1b[0;32mCluster validation succeeded\x1b[0m\n== Post-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.19.4-rc.0.22+9e8ad8ce9d8a30\"\nname: \"bootstrap-e2e-minion-group-1hgv\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.11-rc.0.13+806617c8cf1f0d\"\nname: \"bootstrap-e2e-minion-group-gqbh\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.11-rc.0.13+806617c8cf1f0d\"\nname: \"bootstrap-e2e-minion-group-mvhm\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.11-rc.0.13+806617c8cf1f0d\"\n", stderr "Project: k8s-jkns-e2e-gce-gpus-beta\nNetwork Project: k8s-jkns-e2e-gce-gpus-beta\nZone: us-west1-b\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\nNODE_NAMES=bootstrap-e2e-minion-group-1hgv bootstrap-e2e-minion-group-gqbh bootstrap-e2e-minion-group-mvhm\nTrying to find master named 'bootstrap-e2e-master'\nLooking for address 'bootstrap-e2e-master-ip'\nUsing master: bootstrap-e2e-master (external IP: 35.233.214.177; internal IP: (not set))\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-gpus-beta/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\nCreated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-gpus-beta/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: Some requests generated warnings:\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\n - The resource 'projects/cos-cloud/global/images/cos-81-12871-59-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-81-12871-69-0'.\n\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   35.233.214.177  RUNNING\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nProject: k8s-jkns-e2e-gce-gpus-beta\nNetwork Project: k8s-jkns-e2e-gce-gpus-beta\nZone: us-west1-b\nWarning: v1 ComponentStatus is deprecated in v1.19+\nWarning: v1 ComponentStatus is deprecated in v1.19+\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 465: download_dir: unbound variable\n"
occurred

k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func5.1.1.1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:238 +0x13c
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do(0xc001d05258)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:111 +0x38d
k8s.io/kubernetes/test/e2e/cloud/gcp.runUpgradeSuite(0xc001180c60, 0x75175e0, 0x1, 0x1, 0xc001159230, 0xc0033b2000, 0x0, 0xc002a78360)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:516 +0x454
k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func5.1.1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:241 +0x1fc
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00262de00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc00262de00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc00262de00, 0x4dca3e8)
	/usr/local/go/src/testing/testing.go:1127 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1178 +0x386
				from junit_upgradeupgrades.xml

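Note: the upgrade itself appears to have completed; the stdout above reports "Cluster validation succeeded" with the master already at v1.19.4-rc.0.22+9e8ad8ce9d8a30. The non-zero exit comes from the last stderr line, "/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 465: download_dir: unbound variable". That message is what bash prints when a script running with the nounset option expands a variable that was never assigned, and it aborts the script with exit status 1. Below is a minimal sketch of that failure mode, not the actual upgrade.sh code: only the variable name is taken from the log, the paths and the default-expansion guard are hypothetical.

#!/usr/bin/env bash
# Sketch of the reported failure mode (illustrative, not upgrade.sh itself).
set -o errexit
set -o nounset    # same behaviour as `set -u`

echo "== Upgrade finished, cleaning up =="
# download_dir was never assigned on this code path, so under nounset this
# expansion aborts the script: "download_dir: unbound variable", exit 1.
echo "Removing staged release from: ${download_dir}/kubernetes"

# One conventional guard is a default expansion (hypothetical default path):
# echo "Removing staged release from: ${download_dir:-/tmp/kube-download}/kubernetes"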


GPU master upgrade nvidia-gpu-upgrade [sig-node] [sig-scheduling] 9m41s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=GPU\smaster\supgrade\snvidia\-gpu\-upgrade\s\[sig\-node\]\s\[sig\-scheduling\]$'
Oct 23 19:17:35.091: Job pods failed during master upgrade: 0
Expected
    <int32>: 0
to equal
    <int>: 0

k8s.io/kubernetes/test/e2e/upgrades.(*NvidiaGPUUpgradeTest).Test(0x77198a0, 0xc001180dc0, 0xc0030726c0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/nvidia-gpu.go:56 +0x1d5
k8s.io/kubernetes/test/e2e/cloud/gcp.(*chaosMonkeyAdapter).Test(0xc00114b680, 0xc003007b20)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:431 +0x36a
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc003007b20, 0xc002fa3660)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x6d
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xc9
				from junit_upgradeupgrades.xml

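Note: both sides of this assertion print as 0, so the failure is about types rather than values. gomega's Equal matcher compares with reflect.DeepEqual, which never treats values of distinct types as equal, so what looks like an int32 failure count from the Job status does not match an untyped 0 literal (an int). Judging by the stack trace, the comparison is the one at test/e2e/upgrades/nvidia-gpu.go:56, running while the master was being replaced by the upgrade above.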


Kubernetes e2e suite [k8s.io] [sig-cloud-provider-gcp] gpu Upgrade [Feature:GPUUpgrade] master upgrade should NOT disrupt gpu pod [Feature:GPUMasterUpgrade] 9m47s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-cloud\-provider\-gcp\]\sgpu\sUpgrade\s\[Feature\:GPUUpgrade\]\smaster\supgrade\sshould\sNOT\sdisrupt\sgpu\spod\s\[Feature\:GPUMasterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:227
Oct 23 19:17:34.827: Unexpected error:
    <*errors.errorString | 0xc0029ae050>: {
        s: "error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.19.4-rc.0.22+9e8ad8ce9d8a30]; got error exit status 1, stdout \"Fetching the previously installed CoreDNS version\\n\\n***WARNING***\\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\\nbefore running this script:\\n\\n# example: pin to etcd v3.0.17\\nexport ETCD_IMAGE=3.0.17\\nexport ETCD_VERSION=3.0.17\\n\\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\\n\\n== Pre-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.11-rc.0.13+806617c8cf1f0d\\\"\\nname: \\\"bootstrap-e2e-minion-group-1hgv\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.11-rc.0.13+806617c8cf1f0d\\\"\\nname: \\\"bootstrap-e2e-minion-group-gqbh\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.11-rc.0.13+806617c8cf1f0d\\\"\\nname: \\\"bootstrap-e2e-minion-group-mvhm\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.11-rc.0.13+806617c8cf1f0d\\\"\\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.19.4-rc.0.22+9e8ad8ce9d8a30/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\\n== Upgrading master environment variables. 
==\\n== Waiting for new master to respond to API requests ==\\n.............................== Done ==\\nWaiting for CoreDNS to update\\nFetching the latest installed CoreDNS version\\n== Downloading the CoreDNS migration tool ==\\n== Upgrading the CoreDNS ConfigMap ==\\nconfigmap/coredns configured\\n== The CoreDNS Config has been updated ==\\n== Validating cluster post-upgrade ==\\nValidating gce cluster, MULTIZONE=\\nFound 4 node(s).\\nNAME                              STATUS                     ROLES    AGE   VERSION\\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   10m   v1.19.4-rc.0.22+9e8ad8ce9d8a30\\nbootstrap-e2e-minion-group-1hgv   Ready                      <none>   10m   v1.18.11-rc.0.13+806617c8cf1f0d\\nbootstrap-e2e-minion-group-gqbh   Ready                      <none>   10m   v1.18.11-rc.0.13+806617c8cf1f0d\\nbootstrap-e2e-minion-group-mvhm   Ready                      <none>   10m   v1.18.11-rc.0.13+806617c8cf1f0d\\nValidate output:\\nNAME                 STATUS    MESSAGE             ERROR\\ncontroller-manager   Healthy   ok                  \\netcd-1               Healthy   {\\\"health\\\":\\\"true\\\"}   \\nscheduler            Healthy   ok                  \\netcd-0               Healthy   {\\\"health\\\":\\\"true\\\"}   \\n\\x1b[0;32mCluster validation succeeded\\x1b[0m\\n== Post-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.19.4-rc.0.22+9e8ad8ce9d8a30\\\"\\nname: \\\"bootstrap-e2e-minion-group-1hgv\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.11-rc.0.13+806617c8cf1f0d\\\"\\nname: \\\"bootstrap-e2e-minion-group-gqbh\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.11-rc.0.13+806617c8cf1f0d\\\"\\nname: \\\"bootstrap-e2e-minion-group-mvhm\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.11-rc.0.13+806617c8cf1f0d\\\"\\n\", stderr \"Project: k8s-jkns-e2e-gce-gpus-beta\\nNetwork Project: k8s-jkns-e2e-gce-gpus-beta\\nZone: us-west1-b\\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\\nNODE_NAMES=bootstrap-e2e-minion-group-1hgv bootstrap-e2e-minion-group-gqbh bootstrap-e2e-minion-group-mvhm\\nTrying to find master named 'bootstrap-e2e-master'\\nLooking for address 'bootstrap-e2e-master-ip'\\nUsing master: bootstrap-e2e-master (external IP: 35.233.214.177; internal IP: (not set))\\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-gpus-beta/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\\nCreated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-gpus-beta/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: Some requests generated warnings:\\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\\n - The resource 'projects/cos-cloud/global/images/cos-81-12871-59-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-81-12871-69-0'.\\n\\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS\\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   35.233.214.177  RUNNING\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nProject: k8s-jkns-e2e-gce-gpus-beta\\nNetwork Project: k8s-jkns-e2e-gce-gpus-beta\\nZone: us-west1-b\\nWarning: v1 ComponentStatus is deprecated in v1.19+\\nWarning: v1 ComponentStatus is deprecated in v1.19+\\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 465: download_dir: unbound variable\\n\"",
    }
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.19.4-rc.0.22+9e8ad8ce9d8a30]; got error exit status 1, stdout "Fetching the previously installed CoreDNS version\n\n***WARNING***\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\nbefore running this script:\n\n# example: pin to etcd v3.0.17\nexport ETCD_IMAGE=3.0.17\nexport ETCD_VERSION=3.0.17\n\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\n\n== Pre-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.11-rc.0.13+806617c8cf1f0d\"\nname: \"bootstrap-e2e-minion-group-1hgv\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.11-rc.0.13+806617c8cf1f0d\"\nname: \"bootstrap-e2e-minion-group-gqbh\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.11-rc.0.13+806617c8cf1f0d\"\nname: \"bootstrap-e2e-minion-group-mvhm\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.11-rc.0.13+806617c8cf1f0d\"\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.19.4-rc.0.22+9e8ad8ce9d8a30/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\n== Upgrading master environment variables. 
==\n== Waiting for new master to respond to API requests ==\n.............................== Done ==\nWaiting for CoreDNS to update\nFetching the latest installed CoreDNS version\n== Downloading the CoreDNS migration tool ==\n== Upgrading the CoreDNS ConfigMap ==\nconfigmap/coredns configured\n== The CoreDNS Config has been updated ==\n== Validating cluster post-upgrade ==\nValidating gce cluster, MULTIZONE=\nFound 4 node(s).\nNAME                              STATUS                     ROLES    AGE   VERSION\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   10m   v1.19.4-rc.0.22+9e8ad8ce9d8a30\nbootstrap-e2e-minion-group-1hgv   Ready                      <none>   10m   v1.18.11-rc.0.13+806617c8cf1f0d\nbootstrap-e2e-minion-group-gqbh   Ready                      <none>   10m   v1.18.11-rc.0.13+806617c8cf1f0d\nbootstrap-e2e-minion-group-mvhm   Ready                      <none>   10m   v1.18.11-rc.0.13+806617c8cf1f0d\nValidate output:\nNAME                 STATUS    MESSAGE             ERROR\ncontroller-manager   Healthy   ok                  \netcd-1               Healthy   {\"health\":\"true\"}   \nscheduler            Healthy   ok                  \netcd-0               Healthy   {\"health\":\"true\"}   \n\x1b[0;32mCluster validation succeeded\x1b[0m\n== Post-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.19.4-rc.0.22+9e8ad8ce9d8a30\"\nname: \"bootstrap-e2e-minion-group-1hgv\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.11-rc.0.13+806617c8cf1f0d\"\nname: \"bootstrap-e2e-minion-group-gqbh\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.11-rc.0.13+806617c8cf1f0d\"\nname: \"bootstrap-e2e-minion-group-mvhm\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.11-rc.0.13+806617c8cf1f0d\"\n", stderr "Project: k8s-jkns-e2e-gce-gpus-beta\nNetwork Project: k8s-jkns-e2e-gce-gpus-beta\nZone: us-west1-b\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\nNODE_NAMES=bootstrap-e2e-minion-group-1hgv bootstrap-e2e-minion-group-gqbh bootstrap-e2e-minion-group-mvhm\nTrying to find master named 'bootstrap-e2e-master'\nLooking for address 'bootstrap-e2e-master-ip'\nUsing master: bootstrap-e2e-master (external IP: 35.233.214.177; internal IP: (not set))\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-gpus-beta/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\nCreated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-gpus-beta/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: Some requests generated warnings:\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\n - The resource 'projects/cos-cloud/global/images/cos-81-12871-59-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-81-12871-69-0'.\n\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   35.233.214.177  RUNNING\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nProject: k8s-jkns-e2e-gce-gpus-beta\nNetwork Project: k8s-jkns-e2e-gce-gpus-beta\nZone: us-west1-b\nWarning: v1 ComponentStatus is deprecated in v1.19+\nWarning: v1 ComponentStatus is deprecated in v1.19+\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 465: download_dir: unbound variable\n"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:238
				
from junit_upgrade01.xml



UpgradeTest 10m2s

error during kubetest --test --test_args=--ginkgo.focus=\[Feature:GPUMasterUpgrade\] --upgrade-target=ci/k8s-beta --upgrade-image=gci --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=upgrade --check-version-skew=false: exit status 1
				from junit_runner.xml

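Note: this last entry appears to be the kubetest wrapper itself rather than a separate test. It runs the focused [Feature:GPUMasterUpgrade] suite shown above and propagates its non-zero exit status, so it reports the same underlying upgrade.sh failure.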


Passed tests: 19 (not shown)

Skipped tests: 10226 (not shown)