Result: FAILURE
Tests: 22 failed / 1182 succeeded
Started: 2020-10-23 00:43
Elapsed: 14h43m
Revision:
Builder: c0de9b11-14c8-11eb-8a3c-66d0ef5d093b
infra-commit: 085d616b4
job-version: v1.17.14-rc.0.7+a70c45e3b1926e
master_os_image: cos-77-12371-175-0
node_os_image: cos-77-12371-175-0
revision: v1.17.14-rc.0.7+a70c45e3b1926e

Test Failures


Cluster upgrade [sig-cloud-provider-gcp] cluster-upgrade 4m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-cloud\-provider\-gcp\]\scluster\-upgrade$'
Oct 23 00:57:28.718: Unexpected error:
    <*errors.errorString | 0xc001e8b630>: {
        s: "error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.18.11-rc.0.7+c194df43db09e4]; got error exit status 1, stdout \"Fetching the previously installed CoreDNS version\\n\\n***WARNING***\\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\\nbefore running this script:\\n\\n# example: pin to etcd v3.0.17\\nexport ETCD_IMAGE=3.0.17\\nexport ETCD_VERSION=3.0.17\\n\\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\\n\\n== Pre-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.14-rc.0.7+a70c45e3b1926e\\\"\\nname: \\\"bootstrap-e2e-minion-group-gn0p\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.14-rc.0.7+a70c45e3b1926e\\\"\\nname: \\\"bootstrap-e2e-minion-group-k64x\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.14-rc.0.7+a70c45e3b1926e\\\"\\nname: \\\"bootstrap-e2e-minion-group-nrh8\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.14-rc.0.7+a70c45e3b1926e\\\"\\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.18.11-rc.0.7+c194df43db09e4/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\\n== Upgrading master environment variables. 
==\\n== Waiting for new master to respond to API requests ==\\n........................== Done ==\\nWaiting for CoreDNS to update\\nFetching the latest installed CoreDNS version\\n== Downloading the CoreDNS migration tool ==\\n== Applying the latest default CoreDNS configuration ==\\nserviceaccount/coredns unchanged\\nclusterrole.rbac.authorization.k8s.io/system:coredns unchanged\\nclusterrolebinding.rbac.authorization.k8s.io/system:coredns configured\\nconfigmap/coredns configured\\ndeployment.apps/coredns unchanged\\nservice/kube-dns unchanged\\n== The CoreDNS Config has been updated ==\\n== Validating cluster post-upgrade ==\\nValidating gce cluster, MULTIZONE=\\nFound 4 node(s).\\nNAME                              STATUS                     ROLES    AGE     VERSION\\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   8m19s   v1.18.11-rc.0.7+c194df43db09e4\\nbootstrap-e2e-minion-group-gn0p   Ready                      <none>   8m19s   v1.17.14-rc.0.7+a70c45e3b1926e\\nbootstrap-e2e-minion-group-k64x   Ready                      <none>   8m19s   v1.17.14-rc.0.7+a70c45e3b1926e\\nbootstrap-e2e-minion-group-nrh8   Ready                      <none>   8m21s   v1.17.14-rc.0.7+a70c45e3b1926e\\nValidate output:\\nNAME                 STATUS    MESSAGE             ERROR\\netcd-1               Healthy   {\\\"health\\\":\\\"true\\\"}   \\nscheduler            Healthy   ok                  \\ncontroller-manager   Healthy   ok                  \\netcd-0               Healthy   {\\\"health\\\":\\\"true\\\"}   \\n\\x1b[0;32mCluster validation succeeded\\x1b[0m\\n== Post-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.11-rc.0.7+c194df43db09e4\\\"\\nname: \\\"bootstrap-e2e-minion-group-gn0p\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.14-rc.0.7+a70c45e3b1926e\\\"\\nname: \\\"bootstrap-e2e-minion-group-k64x\\\", 
osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.14-rc.0.7+a70c45e3b1926e\\\"\\nname: \\\"bootstrap-e2e-minion-group-nrh8\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.14-rc.0.7+a70c45e3b1926e\\\"\\n\", stderr \"Project: k8s-jkns-e2e-kubeadm-per-1-6\\nNetwork Project: k8s-jkns-e2e-kubeadm-per-1-6\\nZone: us-west1-b\\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\\nNODE_NAMES=bootstrap-e2e-minion-group-gn0p bootstrap-e2e-minion-group-k64x bootstrap-e2e-minion-group-nrh8\\nTrying to find master named 'bootstrap-e2e-master'\\nLooking for address 'bootstrap-e2e-master-ip'\\nUsing master: bootstrap-e2e-master (external IP: 35.227.184.183; internal IP: (not set))\\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-per-1-6/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\\nCreated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-per-1-6/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: Some requests generated warnings:\\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\\n - The resource 'projects/cos-cloud/global/images/cos-77-12371-175-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-77-12371-183-0'.\\n\\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS\\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   35.227.184.183  RUNNING\\nWarning: Permanently added 'compute.7791357980359759646' (ED25519) to the list of known hosts.\\r\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nProject: k8s-jkns-e2e-kubeadm-per-1-6\\nNetwork Project: k8s-jkns-e2e-kubeadm-per-1-6\\nZone: us-west1-b\\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\\n\"",
    }
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.18.11-rc.0.7+c194df43db09e4]; got error exit status 1, stdout "Fetching the previously installed CoreDNS version\n\n***WARNING***\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\nbefore running this script:\n\n# example: pin to etcd v3.0.17\nexport ETCD_IMAGE=3.0.17\nexport ETCD_VERSION=3.0.17\n\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\n\n== Pre-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.14-rc.0.7+a70c45e3b1926e\"\nname: \"bootstrap-e2e-minion-group-gn0p\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.14-rc.0.7+a70c45e3b1926e\"\nname: \"bootstrap-e2e-minion-group-k64x\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.14-rc.0.7+a70c45e3b1926e\"\nname: \"bootstrap-e2e-minion-group-nrh8\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.14-rc.0.7+a70c45e3b1926e\"\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.18.11-rc.0.7+c194df43db09e4/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\n== Upgrading master environment variables. 
==\n== Waiting for new master to respond to API requests ==\n........................== Done ==\nWaiting for CoreDNS to update\nFetching the latest installed CoreDNS version\n== Downloading the CoreDNS migration tool ==\n== Applying the latest default CoreDNS configuration ==\nserviceaccount/coredns unchanged\nclusterrole.rbac.authorization.k8s.io/system:coredns unchanged\nclusterrolebinding.rbac.authorization.k8s.io/system:coredns configured\nconfigmap/coredns configured\ndeployment.apps/coredns unchanged\nservice/kube-dns unchanged\n== The CoreDNS Config has been updated ==\n== Validating cluster post-upgrade ==\nValidating gce cluster, MULTIZONE=\nFound 4 node(s).\nNAME                              STATUS                     ROLES    AGE     VERSION\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   8m19s   v1.18.11-rc.0.7+c194df43db09e4\nbootstrap-e2e-minion-group-gn0p   Ready                      <none>   8m19s   v1.17.14-rc.0.7+a70c45e3b1926e\nbootstrap-e2e-minion-group-k64x   Ready                      <none>   8m19s   v1.17.14-rc.0.7+a70c45e3b1926e\nbootstrap-e2e-minion-group-nrh8   Ready                      <none>   8m21s   v1.17.14-rc.0.7+a70c45e3b1926e\nValidate output:\nNAME                 STATUS    MESSAGE             ERROR\netcd-1               Healthy   {\"health\":\"true\"}   \nscheduler            Healthy   ok                  \ncontroller-manager   Healthy   ok                  \netcd-0               Healthy   {\"health\":\"true\"}   \n\x1b[0;32mCluster validation succeeded\x1b[0m\n== Post-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.11-rc.0.7+c194df43db09e4\"\nname: \"bootstrap-e2e-minion-group-gn0p\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.14-rc.0.7+a70c45e3b1926e\"\nname: \"bootstrap-e2e-minion-group-k64x\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: 
\"v1.17.14-rc.0.7+a70c45e3b1926e\"\nname: \"bootstrap-e2e-minion-group-nrh8\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.14-rc.0.7+a70c45e3b1926e\"\n", stderr "Project: k8s-jkns-e2e-kubeadm-per-1-6\nNetwork Project: k8s-jkns-e2e-kubeadm-per-1-6\nZone: us-west1-b\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\nNODE_NAMES=bootstrap-e2e-minion-group-gn0p bootstrap-e2e-minion-group-k64x bootstrap-e2e-minion-group-nrh8\nTrying to find master named 'bootstrap-e2e-master'\nLooking for address 'bootstrap-e2e-master-ip'\nUsing master: bootstrap-e2e-master (external IP: 35.227.184.183; internal IP: (not set))\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-per-1-6/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\nCreated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-kubeadm-per-1-6/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: Some requests generated warnings:\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\n - The resource 'projects/cos-cloud/global/images/cos-77-12371-175-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-77-12371-183-0'.\n\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   35.227.184.183  RUNNING\nWarning: Permanently added 'compute.7791357980359759646' (ED25519) to the list of known hosts.\r\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nProject: k8s-jkns-e2e-kubeadm-per-1-6\nNetwork Project: k8s-jkns-e2e-kubeadm-per-1-6\nZone: us-west1-b\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\n"
occurred

k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func2.3.1.1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:153 +0x132
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do(0xc001b0d1e8)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:111 +0x38a
k8s.io/kubernetes/test/e2e/cloud/gcp.runUpgradeSuite(0xc000879b80, 0x799a240, 0xc, 0xc, 0xc000458e40, 0xc0033921e0, 0x2, 0xc003399c60)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:485 +0x47a
k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func2.3.1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:158 +0x222
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000401a00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324
k8s.io/kubernetes/test/e2e.TestE2E(0xc000401a00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc000401a00, 0x4af5d58)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
from junit_upgradeupgrades.xml

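The root cause of this failure is visible at the very end of stderr: `/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable`. The script runs under bash's `nounset` option, so expanding a variable that was never assigned aborts it immediately. A minimal sketch of the failure mode and the usual guard follows; the variable name matches the log, but the default path is a hypothetical stand-in, not a value from upgrade.sh:

```shell
#!/usr/bin/env bash
# Under `set -u` (nounset), referencing an unassigned variable kills the
# script with "unbound variable", which is what happened at upgrade.sh:452.
set -u

# This line would abort the script if download_dir was never assigned:
#   echo "$download_dir"

# The ${var:-default} expansion is safe under set -u. The default path
# here is a hypothetical placeholder for illustration only.
download_dir="${download_dir:-/tmp/kube-upgrade}"
echo "download dir: ${download_dir}"
```

The fix in a script like this is either to initialize the variable before first use or to guard every expansion with `${download_dir:-}`.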


Kubernetes e2e suite [k8s.io] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] 17m13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-cloud\-provider\-gcp\]\sUpgrade\s\[Feature\:Upgrade\]\scluster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:ClusterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:142
Oct 23 00:57:28.718: Unexpected error:
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.18.11-rc.0.7+c194df43db09e4]; got error exit status 1
    (stdout and stderr are byte-for-byte identical to the cluster-upgrade failure above; the root cause is "/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable")
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:153
				
from junit_upgrade01.xml



Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json" 0.12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sclient\-go\sshould\snegotiate\swatch\sand\sreport\serrors\swith\saccept\s\"application\/json\"$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/protocol.go:45
unexpected error: &v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"too old resource version: 149 (1863)", Reason:"Expired", Details:(*v1.StatusDetails)(nil), Code:410}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/protocol.go:74
				
from junit_01.xml

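This failure and the three client-go negotiation failures that follow are the same condition: the test opens a watch from resourceVersion 149, which the API server has already compacted away, so the server returns a 410 Expired status ("too old resource version: 149 (1863)"). The standard recovery for any watch client is to re-list to obtain a fresh resourceVersion and restart the watch from there. A mocked sketch of that relist-on-410 loop (no real API server; `start_watch` and the version numbers are stand-ins taken from the error message):

```shell
#!/usr/bin/env bash
set -u

# Hypothetical stand-in for opening a watch at a given resourceVersion.
# Real clients use client-go; this just mimics the server's 410 behavior
# for versions older than the compaction floor seen in the failure (1863).
start_watch() {
  local rv="$1"
  if [ "$rv" -lt 1863 ]; then
    echo "410"   # Expired: too old resource version
  else
    echo "200"
  fi
}

rv=149                      # stale resourceVersion from the failed test
code="$(start_watch "$rv")"
if [ "$code" = "410" ]; then
  rv=1863                   # re-list would return the current resourceVersion
  code="$(start_watch "$rv")"
fi
echo "watch status: ${code} at rv ${rv}"
```

In real client-go code this loop is what reflectors do automatically; a bare watch call, as these protocol tests make, sees the 410 directly.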


Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf" 0.14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sclient\-go\sshould\snegotiate\swatch\sand\sreport\serrors\swith\saccept\s\"application\/json\,application\/vnd\.kubernetes\.protobuf\"$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/protocol.go:45
unexpected error: &v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"too old resource version: 149 (1863)", Reason:"Expired", Details:(*v1.StatusDetails)(nil), Code:410}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/protocol.go:74
				
from junit_01.xml



Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf" 0.12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sclient\-go\sshould\snegotiate\swatch\sand\sreport\serrors\swith\saccept\s\"application\/vnd\.kubernetes\.protobuf\"$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/protocol.go:45
unexpected error: &v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"too old resource version: 149 (195115)", Reason:"Expired", Details:(*v1.StatusDetails)(nil), Code:410}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/protocol.go:74
				
from junit_01.xml



Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json" 0.11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sclient\-go\sshould\snegotiate\swatch\sand\sreport\serrors\swith\saccept\s\"application\/vnd\.kubernetes\.protobuf\,application\/json\"$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/protocol.go:45
unexpected error: &v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"too old resource version: 149 (33198)", Reason:"Expired", Details:(*v1.StatusDetails)(nil), Code:410}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/protocol.go:74
				
from junit_01.xml



Kubernetes e2e suite [sig-cli] Kubectl alpha client Kubectl run CronJob should create a CronJob [Deprecated] 2.08s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\salpha\sclient\sKubectl\srun\sCronJob\sshould\screate\sa\sCronJob\s\[Deprecated\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:230
Oct 23 08:38:15.417: Failed getting CronJob e2e-test-echo-cronjob-alpha: cronjobs.batch "e2e-test-echo-cronjob-alpha" not found
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:239
				
from junit_01.xml



Kubernetes e2e suite [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance] 2.31s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sKubectl\srolling\-update\sshould\ssupport\srolling\-update\sto\ssame\simage\s\[Deprecated\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Oct 23 02:59:07.796: Failed getting rc e2e-test-httpd-rc: replicationcontrollers "e2e-test-httpd-rc" not found
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1609
				
from junit_01.xml



Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run CronJob should create a CronJob [Deprecated] 2.28s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sKubectl\srun\sCronJob\sshould\screate\sa\sCronJob\s\[Deprecated\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1732
Oct 23 03:33:46.125: Failed getting CronJob e2e-test-echo-cronjob-beta: cronjobs.batch "e2e-test-echo-cronjob-beta" not found
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1741
				
from junit_01.xml



Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] 1.95s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sKubectl\srun\sdefault\sshould\screate\san\src\sor\sdeployment\sfrom\san\simage\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496
Oct 23 02:48:38.250: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running ../../../../kubernetes_skew/cluster/kubectl.sh --server=https://35.227.184.183 --kubeconfig=/workspace/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2661:\nCommand stdout:\n\nstderr:\nError from server (NotFound): deployments.apps \"e2e-test-httpd-deployment\" not found\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
    error running ../../../../kubernetes_skew/cluster/kubectl.sh --server=https://35.227.184.183 --kubeconfig=/workspace/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2661:
    Command stdout:
    
    stderr:
    Error from server (NotFound): deployments.apps "e2e-test-httpd-deployment" not found
    
    error:
    exit status 1
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:750
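This last failure is a cleanup race: by the time the version-skewed kubectl ran the delete, the `e2e-test-httpd-deployment` object was already gone, and `kubectl delete` exits 1 on NotFound. kubectl's `--ignore-not-found` flag makes deletion of an absent object succeed, which is why test cleanup paths typically pass it. The function below is a hypothetical stand-in mimicking that behavior, not the e2e framework's actual code:

```shell
#!/usr/bin/env bash
set -u

# Hypothetical stand-in for `kubectl delete deployment NAME [--ignore-not-found]`
# run against an object that no longer exists, as in the failure above.
delete_deployment() {
  local name="$1" ignore_missing="${2:-false}"
  if [ "$ignore_missing" = "true" ]; then
    return 0   # with --ignore-not-found, NotFound is not an error
  fi
  echo "Error from server (NotFound): deployments.apps \"$name\" not found" >&2
  return 1
}

if delete_deployment e2e-test-httpd-deployment true; then
  result="cleanup ok"
else
  result="cleanup failed"
fi
echo "$result"
```

With the real CLI the equivalent would be `kubectl delete deployment e2e-test-httpd-deployment --ignore-not-found --namespace=kubectl-2661`, which exits 0 whether or not the deployment still exists.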