Result: FAILURE
Tests: 5 failed / 770 succeeded
Started: 2020-10-13 07:47
Elapsed: 54m47s
Revision
Builder: 4203bc59-0d28-11eb-acdd-663f8ab42c5e
infra-commit: fbd8fdf50
job-version: v1.18.10-rc.0.34+92bf8a2b53f9bd
master_os_image: cos-77-12371-175-0
node_os_image: cos-77-12371-175-0
revision: v1.18.10-rc.0.34+92bf8a2b53f9bd

Test Failures


Cluster downgrade [sig-cloud-provider-gcp] cluster-downgrade 10m49s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\sdowngrade\s\[sig\-cloud\-provider\-gcp\]\scluster\-downgrade$'
Oct 13 08:09:34.192: Unexpected error:
    <*errors.errorString | 0xc000cd47a0>: {
        s: "error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-N -o v1.17.13-rc.0.23+744dec9d5d21b5]; got error exit status 1, stdout \"Fetching the previously installed CoreDNS version\\n== Pre-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.10-rc.0.34+92bf8a2b53f9bd\\\"\\nname: \\\"bootstrap-e2e-minion-group-662s\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.10-rc.0.34+92bf8a2b53f9bd\\\"\\nname: \\\"bootstrap-e2e-minion-group-7v0t\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.10-rc.0.34+92bf8a2b53f9bd\\\"\\nname: \\\"bootstrap-e2e-minion-group-pn5n\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.10-rc.0.34+92bf8a2b53f9bd\\\"\\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\\n== Upgrading node environment variables. ==\\nUsing subnet bootstrap-e2e\\nInstance template name: bootstrap-e2e-minion-template-v1-17-13-rc-0-23-744dec9d5d21b5\\nnode/bootstrap-e2e-minion-group-662s cordoned\\nevicting pod \\\"coredns-7876554b79-66cpc\\\"\\nevicting pod \\\"coredns-7876554b79-f7tlj\\\"\\nevicting pod \\\"test-apparmor-2rk2m\\\"\\nevicting pod \\\"test-apparmor-rxnrq\\\"\\nevicting pod \\\"volume-snapshot-controller-0\\\"\\nevicting pod \\\"kube-dns-autoscaler-579dbcdc47-qqb9d\\\"\\nevicting pod \\\"service-test-l4gg5\\\"\\nevicting pod \\\"l7-default-backend-f947d4dd5-5cjqw\\\"\\nevicting pod \\\"res-cons-upgrade-h5s4h\\\"\\nevicting pod \\\"apparmor-loader-2jfgd\\\"\\nevicting pod \\\"kubernetes-dashboard-864d864f44-bbk9s\\\"\\nevicting pod \\\"ss-1\\\"\\npod/test-apparmor-rxnrq evicted\\npod/ss-1 evicted\\npod/l7-default-backend-f947d4dd5-5cjqw evicted\\npod/volume-snapshot-controller-0 evicted\\npod/kube-dns-autoscaler-579dbcdc47-qqb9d evicted\\npod/kubernetes-dashboard-864d864f44-bbk9s 
evicted\\npod/res-cons-upgrade-h5s4h evicted\\npod/service-test-l4gg5 evicted\\npod/apparmor-loader-2jfgd evicted\\npod/coredns-7876554b79-f7tlj evicted\\npod/coredns-7876554b79-66cpc evicted\\npod/test-apparmor-2rk2m evicted\\nnode/bootstrap-e2e-minion-group-662s evicted\\n.........................................................................................................Node bootstrap-e2e-minion-group-662s recreated.\\nNode bootstrap-e2e-minion-group-662s Ready=True\\nnode/bootstrap-e2e-minion-group-662s uncordoned\\nnode/bootstrap-e2e-minion-group-7v0t cordoned\\nevicting pod \\\"volume-snapshot-controller-0\\\"\\nevicting pod \\\"res-cons-upgrade-ctrl-hcdgj\\\"\\nevicting pod \\\"res-cons-upgrade-ttn5m\\\"\\nevicting pod \\\"foo-v47kg\\\"\\nevicting pod \\\"dp-57bb6bd67b-wvzjk\\\"\\nevicting pod \\\"service-test-drtn2\\\"\\nevicting pod \\\"rs-ss5rf\\\"\\nevicting pod \\\"res-cons-upgrade-lpb4k\\\"\\nevicting pod \\\"coredns-7876554b79-qsb4l\\\"\\nevicting pod \\\"ss-0\\\"\\npod/dp-57bb6bd67b-wvzjk evicted\\npod/volume-snapshot-controller-0 evicted\\npod/service-test-drtn2 evicted\\npod/res-cons-upgrade-ttn5m evicted\\npod/res-cons-upgrade-lpb4k evicted\\npod/rs-ss5rf evicted\\npod/res-cons-upgrade-ctrl-hcdgj evicted\\npod/ss-0 evicted\\npod/coredns-7876554b79-qsb4l evicted\\npod/foo-v47kg evicted\\nnode/bootstrap-e2e-minion-group-7v0t evicted\\n...........................................................................................Node bootstrap-e2e-minion-group-7v0t recreated.\\nNode bootstrap-e2e-minion-group-7v0t Ready=True\\nnode/bootstrap-e2e-minion-group-7v0t uncordoned\\nnode/bootstrap-e2e-minion-group-pn5n cordoned\\nevicting pod \\\"apparmor-loader-lg7z6\\\"\\nevicting pod \\\"kubernetes-dashboard-864d864f44-5hxk7\\\"\\nevicting pod \\\"res-cons-upgrade-9nx4s\\\"\\nevicting pod \\\"res-cons-upgrade-lc4r4\\\"\\nevicting pod \\\"res-cons-upgrade-vw2k9\\\"\\nevicting pod \\\"coredns-7876554b79-9lw7c\\\"\\nevicting pod 
\\\"kube-dns-autoscaler-579dbcdc47-d9z8k\\\"\\nevicting pod \\\"service-test-rz672\\\"\\nevicting pod \\\"l7-default-backend-f947d4dd5-zpdp2\\\"\\nevicting pod \\\"metrics-server-v0.3.6-7d85574868-dwmgl\\\"\\nevicting pod \\\"foo-kgmcf\\\"\\nevicting pod \\\"ss-1\\\"\\nevicting pod \\\"ss-2\\\"\\npod/res-cons-upgrade-vw2k9 evicted\\npod/res-cons-upgrade-9nx4s evicted\\npod/metrics-server-v0.3.6-7d85574868-dwmgl evicted\\npod/l7-default-backend-f947d4dd5-zpdp2 evicted\\npod/kube-dns-autoscaler-579dbcdc47-d9z8k evicted\\npod/ss-1 evicted\\npod/ss-2 evicted\\npod/kubernetes-dashboard-864d864f44-5hxk7 evicted\\npod/apparmor-loader-lg7z6 evicted\\npod/service-test-rz672 evicted\\npod/res-cons-upgrade-lc4r4 evicted\\npod/coredns-7876554b79-9lw7c evicted\\npod/foo-kgmcf evicted\\nnode/bootstrap-e2e-minion-group-pn5n evicted\\n......................................................................................................Node bootstrap-e2e-minion-group-pn5n recreated.\\nNode bootstrap-e2e-minion-group-pn5n Ready=True\\nnode/bootstrap-e2e-minion-group-pn5n uncordoned\\nWaiting for CoreDNS to update\\nFetching the latest installed CoreDNS version\\n== Downloading the CoreDNS migration tool ==\\n== Applying the latest default CoreDNS configuration ==\\nserviceaccount/coredns unchanged\\nclusterrole.rbac.authorization.k8s.io/system:coredns unchanged\\nclusterrolebinding.rbac.authorization.k8s.io/system:coredns configured\\nconfigmap/coredns configured\\ndeployment.apps/coredns unchanged\\nservice/kube-dns unchanged\\n== The CoreDNS Config has been updated ==\\n== Validating cluster post-upgrade ==\\nValidating gce cluster, MULTIZONE=\\nFound 4 node(s).\\nNAME                              STATUS                     ROLES    AGE   VERSION\\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   15m   v1.18.10-rc.0.34+92bf8a2b53f9bd\\nbootstrap-e2e-minion-group-662s   Ready                      <none>   15m   
v1.17.13-rc.0.23+744dec9d5d21b5\\nbootstrap-e2e-minion-group-7v0t   Ready                      <none>   15m   v1.17.13-rc.0.23+744dec9d5d21b5\\nbootstrap-e2e-minion-group-pn5n   Ready                      <none>   15m   v1.17.13-rc.0.23+744dec9d5d21b5\\nValidate output:\\nNAME                 STATUS    MESSAGE             ERROR\\netcd-1               Healthy   {\\\"health\\\":\\\"true\\\"}   \\nscheduler            Healthy   ok                  \\ncontroller-manager   Healthy   ok                  \\netcd-0               Healthy   {\\\"health\\\":\\\"true\\\"}   \\n\\x1b[0;32mCluster validation succeeded\\x1b[0m\\n== Post-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.10-rc.0.34+92bf8a2b53f9bd\\\"\\nname: \\\"bootstrap-e2e-minion-group-662s\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.13-rc.0.23+744dec9d5d21b5\\\"\\nname: \\\"bootstrap-e2e-minion-group-7v0t\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.13-rc.0.23+744dec9d5d21b5\\\"\\nname: \\\"bootstrap-e2e-minion-group-pn5n\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.13-rc.0.23+744dec9d5d21b5\\\"\\n\", stderr \"Project: gce-gci-upg-1-4-1-5-ctl-skew\\nNetwork Project: gce-gci-upg-1-4-1-5-ctl-skew\\nZone: us-west1-b\\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\\nNODE_NAMES=bootstrap-e2e-minion-group-662s bootstrap-e2e-minion-group-7v0t bootstrap-e2e-minion-group-pn5n\\n== Preparing node upgrade (to v1.17.13-rc.0.23+744dec9d5d21b5). ==\\nAttempt 1 to create bootstrap-e2e-minion-template-v1-17-13-rc-0-23-744dec9d5d21b5\\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. 
For more information, see: https://developers.google.com/compute/docs/disks#performance.\\nCreated [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-4-1-5-ctl-skew/global/instanceTemplates/bootstrap-e2e-minion-template-v1-17-13-rc-0-23-744dec9d5d21b5].\\nNAME                                                           MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP\\nbootstrap-e2e-minion-template-v1-17-13-rc-0-23-744dec9d5d21b5  n1-standard-2               2020-10-13T00:59:02.702-07:00\\n== Finished preparing node upgrade (to v1.17.13-rc.0.23+744dec9d5d21b5). ==\\n== Upgrading nodes to v1.17.13-rc.0.23+744dec9d5d21b5 with max parallelism of 1. ==\\n== Draining bootstrap-e2e-minion-group-662s. == \\nWARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: apparmor-upgrade-5901/test-apparmor-2rk2m\\n== Recreating instance bootstrap-e2e-minion-group-662s. ==\\n== Waiting for new node to be added to k8s.  ==\\n== Waiting for bootstrap-e2e-minion-group-662s to become ready. ==\\n== Uncordon bootstrap-e2e-minion-group-662s. == \\n== Draining bootstrap-e2e-minion-group-7v0t. == \\n== Recreating instance bootstrap-e2e-minion-group-7v0t. ==\\n== Waiting for new node to be added to k8s.  ==\\n== Waiting for bootstrap-e2e-minion-group-7v0t to become ready. ==\\n== Uncordon bootstrap-e2e-minion-group-7v0t. == \\n== Draining bootstrap-e2e-minion-group-pn5n. == \\n== Recreating instance bootstrap-e2e-minion-group-pn5n. ==\\n== Waiting for new node to be added to k8s.  ==\\n== Waiting for bootstrap-e2e-minion-group-pn5n to become ready. ==\\n== Uncordon bootstrap-e2e-minion-group-pn5n. == \\n== Deleting old templates in gce-gci-upg-1-4-1-5-ctl-skew. ==\\nDeleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-4-1-5-ctl-skew/global/instanceTemplates/bootstrap-e2e-minion-template].\\n== Finished upgrading nodes to v1.17.13-rc.0.23+744dec9d5d21b5. 
==\\nWarning: Permanently added 'compute.7142419891158049391' (ED25519) to the list of known hosts.\\r\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nProject: gce-gci-upg-1-4-1-5-ctl-skew\\nNetwork Project: gce-gci-upg-1-4-1-5-ctl-skew\\nZone: us-west1-b\\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\\n\"",
    }
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-N -o v1.17.13-rc.0.23+744dec9d5d21b5]; got error exit status 1, stdout "Fetching the previously installed CoreDNS version\n== Pre-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.10-rc.0.34+92bf8a2b53f9bd\"\nname: \"bootstrap-e2e-minion-group-662s\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.10-rc.0.34+92bf8a2b53f9bd\"\nname: \"bootstrap-e2e-minion-group-7v0t\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.10-rc.0.34+92bf8a2b53f9bd\"\nname: \"bootstrap-e2e-minion-group-pn5n\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.10-rc.0.34+92bf8a2b53f9bd\"\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\n== Upgrading node environment variables. ==\nUsing subnet bootstrap-e2e\nInstance template name: bootstrap-e2e-minion-template-v1-17-13-rc-0-23-744dec9d5d21b5\nnode/bootstrap-e2e-minion-group-662s cordoned\nevicting pod \"coredns-7876554b79-66cpc\"\nevicting pod \"coredns-7876554b79-f7tlj\"\nevicting pod \"test-apparmor-2rk2m\"\nevicting pod \"test-apparmor-rxnrq\"\nevicting pod \"volume-snapshot-controller-0\"\nevicting pod \"kube-dns-autoscaler-579dbcdc47-qqb9d\"\nevicting pod \"service-test-l4gg5\"\nevicting pod \"l7-default-backend-f947d4dd5-5cjqw\"\nevicting pod \"res-cons-upgrade-h5s4h\"\nevicting pod \"apparmor-loader-2jfgd\"\nevicting pod \"kubernetes-dashboard-864d864f44-bbk9s\"\nevicting pod \"ss-1\"\npod/test-apparmor-rxnrq evicted\npod/ss-1 evicted\npod/l7-default-backend-f947d4dd5-5cjqw evicted\npod/volume-snapshot-controller-0 evicted\npod/kube-dns-autoscaler-579dbcdc47-qqb9d evicted\npod/kubernetes-dashboard-864d864f44-bbk9s evicted\npod/res-cons-upgrade-h5s4h evicted\npod/service-test-l4gg5 evicted\npod/apparmor-loader-2jfgd evicted\npod/coredns-7876554b79-f7tlj 
evicted\npod/coredns-7876554b79-66cpc evicted\npod/test-apparmor-2rk2m evicted\nnode/bootstrap-e2e-minion-group-662s evicted\n.........................................................................................................Node bootstrap-e2e-minion-group-662s recreated.\nNode bootstrap-e2e-minion-group-662s Ready=True\nnode/bootstrap-e2e-minion-group-662s uncordoned\nnode/bootstrap-e2e-minion-group-7v0t cordoned\nevicting pod \"volume-snapshot-controller-0\"\nevicting pod \"res-cons-upgrade-ctrl-hcdgj\"\nevicting pod \"res-cons-upgrade-ttn5m\"\nevicting pod \"foo-v47kg\"\nevicting pod \"dp-57bb6bd67b-wvzjk\"\nevicting pod \"service-test-drtn2\"\nevicting pod \"rs-ss5rf\"\nevicting pod \"res-cons-upgrade-lpb4k\"\nevicting pod \"coredns-7876554b79-qsb4l\"\nevicting pod \"ss-0\"\npod/dp-57bb6bd67b-wvzjk evicted\npod/volume-snapshot-controller-0 evicted\npod/service-test-drtn2 evicted\npod/res-cons-upgrade-ttn5m evicted\npod/res-cons-upgrade-lpb4k evicted\npod/rs-ss5rf evicted\npod/res-cons-upgrade-ctrl-hcdgj evicted\npod/ss-0 evicted\npod/coredns-7876554b79-qsb4l evicted\npod/foo-v47kg evicted\nnode/bootstrap-e2e-minion-group-7v0t evicted\n...........................................................................................Node bootstrap-e2e-minion-group-7v0t recreated.\nNode bootstrap-e2e-minion-group-7v0t Ready=True\nnode/bootstrap-e2e-minion-group-7v0t uncordoned\nnode/bootstrap-e2e-minion-group-pn5n cordoned\nevicting pod \"apparmor-loader-lg7z6\"\nevicting pod \"kubernetes-dashboard-864d864f44-5hxk7\"\nevicting pod \"res-cons-upgrade-9nx4s\"\nevicting pod \"res-cons-upgrade-lc4r4\"\nevicting pod \"res-cons-upgrade-vw2k9\"\nevicting pod \"coredns-7876554b79-9lw7c\"\nevicting pod \"kube-dns-autoscaler-579dbcdc47-d9z8k\"\nevicting pod \"service-test-rz672\"\nevicting pod \"l7-default-backend-f947d4dd5-zpdp2\"\nevicting pod \"metrics-server-v0.3.6-7d85574868-dwmgl\"\nevicting pod \"foo-kgmcf\"\nevicting pod \"ss-1\"\nevicting pod 
\"ss-2\"\npod/res-cons-upgrade-vw2k9 evicted\npod/res-cons-upgrade-9nx4s evicted\npod/metrics-server-v0.3.6-7d85574868-dwmgl evicted\npod/l7-default-backend-f947d4dd5-zpdp2 evicted\npod/kube-dns-autoscaler-579dbcdc47-d9z8k evicted\npod/ss-1 evicted\npod/ss-2 evicted\npod/kubernetes-dashboard-864d864f44-5hxk7 evicted\npod/apparmor-loader-lg7z6 evicted\npod/service-test-rz672 evicted\npod/res-cons-upgrade-lc4r4 evicted\npod/coredns-7876554b79-9lw7c evicted\npod/foo-kgmcf evicted\nnode/bootstrap-e2e-minion-group-pn5n evicted\n......................................................................................................Node bootstrap-e2e-minion-group-pn5n recreated.\nNode bootstrap-e2e-minion-group-pn5n Ready=True\nnode/bootstrap-e2e-minion-group-pn5n uncordoned\nWaiting for CoreDNS to update\nFetching the latest installed CoreDNS version\n== Downloading the CoreDNS migration tool ==\n== Applying the latest default CoreDNS configuration ==\nserviceaccount/coredns unchanged\nclusterrole.rbac.authorization.k8s.io/system:coredns unchanged\nclusterrolebinding.rbac.authorization.k8s.io/system:coredns configured\nconfigmap/coredns configured\ndeployment.apps/coredns unchanged\nservice/kube-dns unchanged\n== The CoreDNS Config has been updated ==\n== Validating cluster post-upgrade ==\nValidating gce cluster, MULTIZONE=\nFound 4 node(s).\nNAME                              STATUS                     ROLES    AGE   VERSION\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   15m   v1.18.10-rc.0.34+92bf8a2b53f9bd\nbootstrap-e2e-minion-group-662s   Ready                      <none>   15m   v1.17.13-rc.0.23+744dec9d5d21b5\nbootstrap-e2e-minion-group-7v0t   Ready                      <none>   15m   v1.17.13-rc.0.23+744dec9d5d21b5\nbootstrap-e2e-minion-group-pn5n   Ready                      <none>   15m   v1.17.13-rc.0.23+744dec9d5d21b5\nValidate output:\nNAME                 STATUS    MESSAGE             ERROR\netcd-1               Healthy   
{\"health\":\"true\"}   \nscheduler            Healthy   ok                  \ncontroller-manager   Healthy   ok                  \netcd-0               Healthy   {\"health\":\"true\"}   \n\x1b[0;32mCluster validation succeeded\x1b[0m\n== Post-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.18.10-rc.0.34+92bf8a2b53f9bd\"\nname: \"bootstrap-e2e-minion-group-662s\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.13-rc.0.23+744dec9d5d21b5\"\nname: \"bootstrap-e2e-minion-group-7v0t\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.13-rc.0.23+744dec9d5d21b5\"\nname: \"bootstrap-e2e-minion-group-pn5n\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.13-rc.0.23+744dec9d5d21b5\"\n", stderr "Project: gce-gci-upg-1-4-1-5-ctl-skew\nNetwork Project: gce-gci-upg-1-4-1-5-ctl-skew\nZone: us-west1-b\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\nNODE_NAMES=bootstrap-e2e-minion-group-662s bootstrap-e2e-minion-group-7v0t bootstrap-e2e-minion-group-pn5n\n== Preparing node upgrade (to v1.17.13-rc.0.23+744dec9d5d21b5). ==\nAttempt 1 to create bootstrap-e2e-minion-template-v1-17-13-rc-0-23-744dec9d5d21b5\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\nCreated [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-4-1-5-ctl-skew/global/instanceTemplates/bootstrap-e2e-minion-template-v1-17-13-rc-0-23-744dec9d5d21b5].\nNAME                                                           MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP\nbootstrap-e2e-minion-template-v1-17-13-rc-0-23-744dec9d5d21b5  n1-standard-2               2020-10-13T00:59:02.702-07:00\n== Finished preparing node upgrade (to v1.17.13-rc.0.23+744dec9d5d21b5). 
==\n== Upgrading nodes to v1.17.13-rc.0.23+744dec9d5d21b5 with max parallelism of 1. ==\n== Draining bootstrap-e2e-minion-group-662s. == \nWARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: apparmor-upgrade-5901/test-apparmor-2rk2m\n== Recreating instance bootstrap-e2e-minion-group-662s. ==\n== Waiting for new node to be added to k8s.  ==\n== Waiting for bootstrap-e2e-minion-group-662s to become ready. ==\n== Uncordon bootstrap-e2e-minion-group-662s. == \n== Draining bootstrap-e2e-minion-group-7v0t. == \n== Recreating instance bootstrap-e2e-minion-group-7v0t. ==\n== Waiting for new node to be added to k8s.  ==\n== Waiting for bootstrap-e2e-minion-group-7v0t to become ready. ==\n== Uncordon bootstrap-e2e-minion-group-7v0t. == \n== Draining bootstrap-e2e-minion-group-pn5n. == \n== Recreating instance bootstrap-e2e-minion-group-pn5n. ==\n== Waiting for new node to be added to k8s.  ==\n== Waiting for bootstrap-e2e-minion-group-pn5n to become ready. ==\n== Uncordon bootstrap-e2e-minion-group-pn5n. == \n== Deleting old templates in gce-gci-upg-1-4-1-5-ctl-skew. ==\nDeleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-4-1-5-ctl-skew/global/instanceTemplates/bootstrap-e2e-minion-template].\n== Finished upgrading nodes to v1.17.13-rc.0.23+744dec9d5d21b5. ==\nWarning: Permanently added 'compute.7142419891158049391' (ED25519) to the list of known hosts.\r\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nProject: gce-gci-upg-1-4-1-5-ctl-skew\nNetwork Project: gce-gci-upg-1-4-1-5-ctl-skew\nZone: us-west1-b\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\n"
occurred

k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func3.1.1.1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:178 +0x14a
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do(0xc002453200)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:111 +0x38a
k8s.io/kubernetes/test/e2e/cloud/gcp.runUpgradeSuite(0xc000c74dc0, 0x7773ca0, 0xc, 0xc, 0xc00075a720, 0xc002e41b30, 0x2, 0xc003376080)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:479 +0x47a
k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func3.1.1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:183 +0x222
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0022f4100)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc0022f4100)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b
testing.tRunner(0xc0022f4100, 0x4a0a310)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
				from junit_upgradeupgrades.xml
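The actual failure is the last stderr line in the dump above: `/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable`. Under bash's `set -u` (nounset), expanding a variable that was never assigned aborts the script, which is why the downgrade step exits with status 1 even though node recreation and cluster validation succeeded. A minimal sketch of the failure mode and the usual defensive expansions (only `download_dir` is taken from the log; the fallback path is illustrative):

```shell
#!/usr/bin/env bash
# Reproduce the failure mode: under `set -u`, expanding an unset
# variable aborts the script with "unbound variable".
set -u

# This is what upgrade.sh hits at line 452 when download_dir was
# never assigned on this code path:
#   echo "${download_dir}"   # bash: download_dir: unbound variable

# Defensive patterns that keep `set -u` scripts alive.
# ${var:-default} substitutes a fallback when var is unset or empty:
dir="${download_dir:-/tmp/k8s-download}"
echo "download dir: ${dir}"

# ${var+x} is exempt from nounset, so it can test set-ness safely:
if [ -z "${download_dir+x}" ]; then
  echo "download_dir is unset"
fi
```

Note that both `${var:-default}` and `${var+x}` are explicitly exempt from `set -u`, so either pattern would let line 452 run on code paths where `download_dir` is never initialized.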

Find coredns-7876554b79-66cpc mentions in log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-cloud-provider-gcp] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade] 17m42s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-cloud\-provider\-gcp\]\sDowngrade\s\[Feature\:Downgrade\]\scluster\sdowngrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:ClusterDowngrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:165
Oct 13 08:09:34.192: Unexpected error:
    <*errors.errorString | 0xc000cd47a0>: {
        s: "error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-N -o v1.17.13-rc.0.23+744dec9d5d21b5]; got error exit status 1, stdout \"Fetching the previously installed CoreDNS version\\n== Pre-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.10-rc.0.34+92bf8a2b53f9bd\\\"\\nname: \\\"bootstrap-e2e-minion-group-662s\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.10-rc.0.34+92bf8a2b53f9bd\\\"\\nname: \\\"bootstrap-e2e-minion-group-7v0t\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.10-rc.0.34+92bf8a2b53f9bd\\\"\\nname: \\\"bootstrap-e2e-minion-group-pn5n\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.10-rc.0.34+92bf8a2b53f9bd\\\"\\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\\n== Upgrading node environment variables. ==\\nUsing subnet bootstrap-e2e\\nInstance template name: bootstrap-e2e-minion-template-v1-17-13-rc-0-23-744dec9d5d21b5\\nnode/bootstrap-e2e-minion-group-662s cordoned\\nevicting pod \\\"coredns-7876554b79-66cpc\\\"\\nevicting pod \\\"coredns-7876554b79-f7tlj\\\"\\nevicting pod \\\"test-apparmor-2rk2m\\\"\\nevicting pod \\\"test-apparmor-rxnrq\\\"\\nevicting pod \\\"volume-snapshot-controller-0\\\"\\nevicting pod \\\"kube-dns-autoscaler-579dbcdc47-qqb9d\\\"\\nevicting pod \\\"service-test-l4gg5\\\"\\nevicting pod \\\"l7-default-backend-f947d4dd5-5cjqw\\\"\\nevicting pod \\\"res-cons-upgrade-h5s4h\\\"\\nevicting pod \\\"apparmor-loader-2jfgd\\\"\\nevicting pod \\\"kubernetes-dashboard-864d864f44-bbk9s\\\"\\nevicting pod \\\"ss-1\\\"\\npod/test-apparmor-rxnrq evicted\\npod/ss-1 evicted\\npod/l7-default-backend-f947d4dd5-5cjqw evicted\\npod/volume-snapshot-controller-0 evicted\\npod/kube-dns-autoscaler-579dbcdc47-qqb9d evicted\\npod/kubernetes-dashboard-864d864f44-bbk9s 
evicted\\npod/res-cons-upgrade-h5s4h evicted\\npod/service-test-l4gg5 evicted\\npod/apparmor-loader-2jfgd evicted\\npod/coredns-7876554b79-f7tlj evicted\\npod/coredns-7876554b79-66cpc evicted\\npod/test-apparmor-2rk2m evicted\\nnode/bootstrap-e2e-minion-group-662s evicted\\n.........................................................................................................Node bootstrap-e2e-minion-group-662s recreated.\\nNode bootstrap-e2e-minion-group-662s Ready=True\\nnode/bootstrap-e2e-minion-group-662s uncordoned\\nnode/bootstrap-e2e-minion-group-7v0t cordoned\\nevicting pod \\\"volume-snapshot-controller-0\\\"\\nevicting pod \\\"res-cons-upgrade-ctrl-hcdgj\\\"\\nevicting pod \\\"res-cons-upgrade-ttn5m\\\"\\nevicting pod \\\"foo-v47kg\\\"\\nevicting pod \\\"dp-57bb6bd67b-wvzjk\\\"\\nevicting pod \\\"service-test-drtn2\\\"\\nevicting pod \\\"rs-ss5rf\\\"\\nevicting pod \\\"res-cons-upgrade-lpb4k\\\"\\nevicting pod \\\"coredns-7876554b79-qsb4l\\\"\\nevicting pod \\\"ss-0\\\"\\npod/dp-57bb6bd67b-wvzjk evicted\\npod/volume-snapshot-controller-0 evicted\\npod/service-test-drtn2 evicted\\npod/res-cons-upgrade-ttn5m evicted\\npod/res-cons-upgrade-lpb4k evicted\\npod/rs-ss5rf evicted\\npod/res-cons-upgrade-ctrl-hcdgj evicted\\npod/ss-0 evicted\\npod/coredns-7876554b79-qsb4l evicted\\npod/foo-v47kg evicted\\nnode/bootstrap-e2e-minion-group-7v0t evicted\\n...........................................................................................Node bootstrap-e2e-minion-group-7v0t recreated.\\nNode bootstrap-e2e-minion-group-7v0t Ready=True\\nnode/bootstrap-e2e-minion-group-7v0t uncordoned\\nnode/bootstrap-e2e-minion-group-pn5n cordoned\\nevicting pod \\\"apparmor-loader-lg7z6\\\"\\nevicting pod \\\"kubernetes-dashboard-864d864f44-5hxk7\\\"\\nevicting pod \\\"res-cons-upgrade-9nx4s\\\"\\nevicting pod \\\"res-cons-upgrade-lc4r4\\\"\\nevicting pod \\\"res-cons-upgrade-vw2k9\\\"\\nevicting pod \\\"coredns-7876554b79-9lw7c\\\"\\nevicting pod 
\\\"kube-dns-autoscaler-579dbcdc47-d9z8k\\\"\\nevicting pod \\\"service-test-rz672\\\"\\nevicting pod \\\"l7-default-backend-f947d4dd5-zpdp2\\\"\\nevicting pod \\\"metrics-server-v0.3.6-7d85574868-dwmgl\\\"\\nevicting pod \\\"foo-kgmcf\\\"\\nevicting pod \\\"ss-1\\\"\\nevicting pod \\\"ss-2\\\"\\npod/res-cons-upgrade-vw2k9 evicted\\npod/res-cons-upgrade-9nx4s evicted\\npod/metrics-server-v0.3.6-7d85574868-dwmgl evicted\\npod/l7-default-backend-f947d4dd5-zpdp2 evicted\\npod/kube-dns-autoscaler-579dbcdc47-d9z8k evicted\\npod/ss-1 evicted\\npod/ss-2 evicted\\npod/kubernetes-dashboard-864d864f44-5hxk7 evicted\\npod/apparmor-loader-lg7z6 evicted\\npod/service-test-rz672 evicted\\npod/res-cons-upgrade-lc4r4 evicted\\npod/coredns-7876554b79-9lw7c evicted\\npod/foo-kgmcf evicted\\nnode/bootstrap-e2e-minion-group-pn5n evicted\\n......................................................................................................Node bootstrap-e2e-minion-group-pn5n recreated.\\nNode bootstrap-e2e-minion-group-pn5n Ready=True\\nnode/bootstrap-e2e-minion-group-pn5n uncordoned\\nWaiting for CoreDNS to update\\nFetching the latest installed CoreDNS version\\n== Downloading the CoreDNS migration tool ==\\n== Applying the latest default CoreDNS configuration ==\\nserviceaccount/coredns unchanged\\nclusterrole.rbac.authorization.k8s.io/system:coredns unchanged\\nclusterrolebinding.rbac.authorization.k8s.io/system:coredns configured\\nconfigmap/coredns configured\\ndeployment.apps/coredns unchanged\\nservice/kube-dns unchanged\\n== The CoreDNS Config has been updated ==\\n== Validating cluster post-upgrade ==\\nValidating gce cluster, MULTIZONE=\\nFound 4 node(s).\\nNAME                              STATUS                     ROLES    AGE   VERSION\\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   15m   v1.18.10-rc.0.34+92bf8a2b53f9bd\\nbootstrap-e2e-minion-group-662s   Ready                      <none>   15m   
v1.17.13-rc.0.23+744dec9d5d21b5\\nbootstrap-e2e-minion-group-7v0t   Ready                      <none>   15m   v1.17.13-rc.0.23+744dec9d5d21b5\\nbootstrap-e2e-minion-group-pn5n   Ready                      <none>   15m   v1.17.13-rc.0.23+744dec9d5d21b5\\nValidate output:\\nNAME                 STATUS    MESSAGE             ERROR\\netcd-1               Healthy   {\\\"health\\\":\\\"true\\\"}   \\nscheduler            Healthy   ok                  \\ncontroller-manager   Healthy   ok                  \\netcd-0               Healthy   {\\\"health\\\":\\\"true\\\"}   \\n\\x1b[0;32mCluster validation succeeded\\x1b[0m\\n== Post-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.18.10-rc.0.34+92bf8a2b53f9bd\\\"\\nname: \\\"bootstrap-e2e-minion-group-662s\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.13-rc.0.23+744dec9d5d21b5\\\"\\nname: \\\"bootstrap-e2e-minion-group-7v0t\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.13-rc.0.23+744dec9d5d21b5\\\"\\nname: \\\"bootstrap-e2e-minion-group-pn5n\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.13-rc.0.23+744dec9d5d21b5\\\"\\n\", stderr \"Project: gce-gci-upg-1-4-1-5-ctl-skew\\nNetwork Project: gce-gci-upg-1-4-1-5-ctl-skew\\nZone: us-west1-b\\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\\nNODE_NAMES=bootstrap-e2e-minion-group-662s bootstrap-e2e-minion-group-7v0t bootstrap-e2e-minion-group-pn5n\\n== Preparing node upgrade (to v1.17.13-rc.0.23+744dec9d5d21b5). ==\\nAttempt 1 to create bootstrap-e2e-minion-template-v1-17-13-rc-0-23-744dec9d5d21b5\\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. 
For more information, see: https://developers.google.com/compute/docs/disks#performance.\\nCreated [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-4-1-5-ctl-skew/global/instanceTemplates/bootstrap-e2e-minion-template-v1-17-13-rc-0-23-744dec9d5d21b5].\\nNAME                                                           MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP\\nbootstrap-e2e-minion-template-v1-17-13-rc-0-23-744dec9d5d21b5  n1-standard-2               2020-10-13T00:59:02.702-07:00\\n== Finished preparing node upgrade (to v1.17.13-rc.0.23+744dec9d5d21b5). ==\\n== Upgrading nodes to v1.17.13-rc.0.23+744dec9d5d21b5 with max parallelism of 1. ==\\n== Draining bootstrap-e2e-minion-group-662s. == \\nWARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: apparmor-upgrade-5901/test-apparmor-2rk2m\\n== Recreating instance bootstrap-e2e-minion-group-662s. ==\\n== Waiting for new node to be added to k8s.  ==\\n== Waiting for bootstrap-e2e-minion-group-662s to become ready. ==\\n== Uncordon bootstrap-e2e-minion-group-662s. == \\n== Draining bootstrap-e2e-minion-group-7v0t. == \\n== Recreating instance bootstrap-e2e-minion-group-7v0t. ==\\n== Waiting for new node to be added to k8s.  ==\\n== Waiting for bootstrap-e2e-minion-group-7v0t to become ready. ==\\n== Uncordon bootstrap-e2e-minion-group-7v0t. == \\n== Draining bootstrap-e2e-minion-group-pn5n. == \\n== Recreating instance bootstrap-e2e-minion-group-pn5n. ==\\n== Waiting for new node to be added to k8s.  ==\\n== Waiting for bootstrap-e2e-minion-group-pn5n to become ready. ==\\n== Uncordon bootstrap-e2e-minion-group-pn5n. == \\n== Deleting old templates in gce-gci-upg-1-4-1-5-ctl-skew. ==\\nDeleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-4-1-5-ctl-skew/global/instanceTemplates/bootstrap-e2e-minion-template].\\n== Finished upgrading nodes to v1.17.13-rc.0.23+744dec9d5d21b5. 
==\\nWarning: Permanently added 'compute.7142419891158049391' (ED25519) to the list of known hosts.\\r\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nProject: gce-gci-upg-1-4-1-5-ctl-skew\\nNetwork Project: gce-gci-upg-1-4-1-5-ctl-skew\\nZone: us-west1-b\\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\\n\"",
    }
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-N -o v1.17.13-rc.0.23+744dec9d5d21b5]; got error exit status 1

    stdout:
    Fetching the previously installed CoreDNS version
    == Pre-Upgrade Node OS and Kubelet Versions ==
    name: "bootstrap-e2e-master", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.18.10-rc.0.34+92bf8a2b53f9bd"
    name: "bootstrap-e2e-minion-group-662s", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.18.10-rc.0.34+92bf8a2b53f9bd"
    name: "bootstrap-e2e-minion-group-7v0t", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.18.10-rc.0.34+92bf8a2b53f9bd"
    name: "bootstrap-e2e-minion-group-pn5n", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.18.10-rc.0.34+92bf8a2b53f9bd"
    Found subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e
    == Upgrading node environment variables. ==
    Using subnet bootstrap-e2e
    Instance template name: bootstrap-e2e-minion-template-v1-17-13-rc-0-23-744dec9d5d21b5
    node/bootstrap-e2e-minion-group-662s cordoned
    evicting pod "coredns-7876554b79-66cpc"
    evicting pod "coredns-7876554b79-f7tlj"
    evicting pod "test-apparmor-2rk2m"
    evicting pod "test-apparmor-rxnrq"
    evicting pod "volume-snapshot-controller-0"
    evicting pod "kube-dns-autoscaler-579dbcdc47-qqb9d"
    evicting pod "service-test-l4gg5"
    evicting pod "l7-default-backend-f947d4dd5-5cjqw"
    evicting pod "res-cons-upgrade-h5s4h"
    evicting pod "apparmor-loader-2jfgd"
    evicting pod "kubernetes-dashboard-864d864f44-bbk9s"
    evicting pod "ss-1"
    pod/test-apparmor-rxnrq evicted
    pod/ss-1 evicted
    pod/l7-default-backend-f947d4dd5-5cjqw evicted
    pod/volume-snapshot-controller-0 evicted
    pod/kube-dns-autoscaler-579dbcdc47-qqb9d evicted
    pod/kubernetes-dashboard-864d864f44-bbk9s evicted
    pod/res-cons-upgrade-h5s4h evicted
    pod/service-test-l4gg5 evicted
    pod/apparmor-loader-2jfgd evicted
    pod/coredns-7876554b79-f7tlj evicted
    pod/coredns-7876554b79-66cpc evicted
    pod/test-apparmor-2rk2m evicted
    node/bootstrap-e2e-minion-group-662s evicted
    .........................................................................................................
    Node bootstrap-e2e-minion-group-662s recreated.
    Node bootstrap-e2e-minion-group-662s Ready=True
    node/bootstrap-e2e-minion-group-662s uncordoned
    node/bootstrap-e2e-minion-group-7v0t cordoned
    evicting pod "volume-snapshot-controller-0"
    evicting pod "res-cons-upgrade-ctrl-hcdgj"
    evicting pod "res-cons-upgrade-ttn5m"
    evicting pod "foo-v47kg"
    evicting pod "dp-57bb6bd67b-wvzjk"
    evicting pod "service-test-drtn2"
    evicting pod "rs-ss5rf"
    evicting pod "res-cons-upgrade-lpb4k"
    evicting pod "coredns-7876554b79-qsb4l"
    evicting pod "ss-0"
    pod/dp-57bb6bd67b-wvzjk evicted
    pod/volume-snapshot-controller-0 evicted
    pod/service-test-drtn2 evicted
    pod/res-cons-upgrade-ttn5m evicted
    pod/res-cons-upgrade-lpb4k evicted
    pod/rs-ss5rf evicted
    pod/res-cons-upgrade-ctrl-hcdgj evicted
    pod/ss-0 evicted
    pod/coredns-7876554b79-qsb4l evicted
    pod/foo-v47kg evicted
    node/bootstrap-e2e-minion-group-7v0t evicted
    ...........................................................................................
    Node bootstrap-e2e-minion-group-7v0t recreated.
    Node bootstrap-e2e-minion-group-7v0t Ready=True
    node/bootstrap-e2e-minion-group-7v0t uncordoned
    node/bootstrap-e2e-minion-group-pn5n cordoned
    evicting pod "apparmor-loader-lg7z6"
    evicting pod "kubernetes-dashboard-864d864f44-5hxk7"
    evicting pod "res-cons-upgrade-9nx4s"
    evicting pod "res-cons-upgrade-lc4r4"
    evicting pod "res-cons-upgrade-vw2k9"
    evicting pod "coredns-7876554b79-9lw7c"
    evicting pod "kube-dns-autoscaler-579dbcdc47-d9z8k"
    evicting pod "service-test-rz672"
    evicting pod "l7-default-backend-f947d4dd5-zpdp2"
    evicting pod "metrics-server-v0.3.6-7d85574868-dwmgl"
    evicting pod "foo-kgmcf"
    evicting pod "ss-1"
    evicting pod "ss-2"
    pod/res-cons-upgrade-vw2k9 evicted
    pod/res-cons-upgrade-9nx4s evicted
    pod/metrics-server-v0.3.6-7d85574868-dwmgl evicted
    pod/l7-default-backend-f947d4dd5-zpdp2 evicted
    pod/kube-dns-autoscaler-579dbcdc47-d9z8k evicted
    pod/ss-1 evicted
    pod/ss-2 evicted
    pod/kubernetes-dashboard-864d864f44-5hxk7 evicted
    pod/apparmor-loader-lg7z6 evicted
    pod/service-test-rz672 evicted
    pod/res-cons-upgrade-lc4r4 evicted
    pod/coredns-7876554b79-9lw7c evicted
    pod/foo-kgmcf evicted
    node/bootstrap-e2e-minion-group-pn5n evicted
    ......................................................................................................
    Node bootstrap-e2e-minion-group-pn5n recreated.
    Node bootstrap-e2e-minion-group-pn5n Ready=True
    node/bootstrap-e2e-minion-group-pn5n uncordoned
    Waiting for CoreDNS to update
    Fetching the latest installed CoreDNS version
    == Downloading the CoreDNS migration tool ==
    == Applying the latest default CoreDNS configuration ==
    serviceaccount/coredns unchanged
    clusterrole.rbac.authorization.k8s.io/system:coredns unchanged
    clusterrolebinding.rbac.authorization.k8s.io/system:coredns configured
    configmap/coredns configured
    deployment.apps/coredns unchanged
    service/kube-dns unchanged
    == The CoreDNS Config has been updated ==
    == Validating cluster post-upgrade ==
    Validating gce cluster, MULTIZONE=
    Found 4 node(s).
    NAME                              STATUS                     ROLES    AGE   VERSION
    bootstrap-e2e-master              Ready,SchedulingDisabled   <none>   15m   v1.18.10-rc.0.34+92bf8a2b53f9bd
    bootstrap-e2e-minion-group-662s   Ready                      <none>   15m   v1.17.13-rc.0.23+744dec9d5d21b5
    bootstrap-e2e-minion-group-7v0t   Ready                      <none>   15m   v1.17.13-rc.0.23+744dec9d5d21b5
    bootstrap-e2e-minion-group-pn5n   Ready                      <none>   15m   v1.17.13-rc.0.23+744dec9d5d21b5
    Validate output:
    NAME                 STATUS    MESSAGE             ERROR
    etcd-1               Healthy   {"health":"true"}
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-0               Healthy   {"health":"true"}
    Cluster validation succeeded
    == Post-Upgrade Node OS and Kubelet Versions ==
    name: "bootstrap-e2e-master", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.18.10-rc.0.34+92bf8a2b53f9bd"
    name: "bootstrap-e2e-minion-group-662s", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.17.13-rc.0.23+744dec9d5d21b5"
    name: "bootstrap-e2e-minion-group-7v0t", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.17.13-rc.0.23+744dec9d5d21b5"
    name: "bootstrap-e2e-minion-group-pn5n", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.17.13-rc.0.23+744dec9d5d21b5"

    stderr:
    Project: gce-gci-upg-1-4-1-5-ctl-skew
    Network Project: gce-gci-upg-1-4-1-5-ctl-skew
    Zone: us-west1-b
    INSTANCE_GROUPS=bootstrap-e2e-minion-group
    NODE_NAMES=bootstrap-e2e-minion-group-662s bootstrap-e2e-minion-group-7v0t bootstrap-e2e-minion-group-pn5n
    == Preparing node upgrade (to v1.17.13-rc.0.23+744dec9d5d21b5). ==
    Attempt 1 to create bootstrap-e2e-minion-template-v1-17-13-rc-0-23-744dec9d5d21b5
    WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
    Created [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-4-1-5-ctl-skew/global/instanceTemplates/bootstrap-e2e-minion-template-v1-17-13-rc-0-23-744dec9d5d21b5].
    NAME                                                           MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
    bootstrap-e2e-minion-template-v1-17-13-rc-0-23-744dec9d5d21b5  n1-standard-2               2020-10-13T00:59:02.702-07:00
    == Finished preparing node upgrade (to v1.17.13-rc.0.23+744dec9d5d21b5). ==
    == Upgrading nodes to v1.17.13-rc.0.23+744dec9d5d21b5 with max parallelism of 1. ==
    == Draining bootstrap-e2e-minion-group-662s. ==
    WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: apparmor-upgrade-5901/test-apparmor-2rk2m
    == Recreating instance bootstrap-e2e-minion-group-662s. ==
    == Waiting for new node to be added to k8s. ==
    == Waiting for bootstrap-e2e-minion-group-662s to become ready. ==
    == Uncordon bootstrap-e2e-minion-group-662s. ==
    == Draining bootstrap-e2e-minion-group-7v0t. ==
    == Recreating instance bootstrap-e2e-minion-group-7v0t. ==
    == Waiting for new node to be added to k8s. ==
    == Waiting for bootstrap-e2e-minion-group-7v0t to become ready. ==
    == Uncordon bootstrap-e2e-minion-group-7v0t. ==
    == Draining bootstrap-e2e-minion-group-pn5n. ==
    == Recreating instance bootstrap-e2e-minion-group-pn5n. ==
    == Waiting for new node to be added to k8s. ==
    == Waiting for bootstrap-e2e-minion-group-pn5n to become ready. ==
    == Uncordon bootstrap-e2e-minion-group-pn5n. ==
    == Deleting old templates in gce-gci-upg-1-4-1-5-ctl-skew. ==
    Deleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-1-4-1-5-ctl-skew/global/instanceTemplates/bootstrap-e2e-minion-template].
    == Finished upgrading nodes to v1.17.13-rc.0.23+744dec9d5d21b5. ==
    Warning: Permanently added 'compute.7142419891158049391' (ED25519) to the list of known hosts.
    Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
    Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
    Project: gce-gci-upg-1-4-1-5-ctl-skew
    Network Project: gce-gci-upg-1-4-1-5-ctl-skew
    Zone: us-west1-b
    /workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:178
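Note the actionable line in the stderr above: `/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable`. The downgrade itself completed and the cluster validated; the step still fails because bash's `set -u` (nounset) option aborts a script with a non-zero exit status the moment an unset variable is expanded. A minimal sketch of that failure mode and the conventional `${var:-default}` guard — the variable name is taken from the log, but the code below is illustrative only, not the actual upgrade.sh:

```shell
#!/usr/bin/env bash
# Illustrative sketch only -- not the actual upgrade.sh code.
# Under `set -u`, expanding a variable that was never assigned aborts the
# shell with "<name>: unbound variable", matching the error at line 452.
unset download_dir   # ensure the variable is unset for the demo
set -u

# Run the unguarded expansion in a subshell so only the subshell dies.
if ! (echo "$download_dir") 2>/dev/null; then
  echo "unguarded expansion failed as expected"
fi

# The usual guard: ${var:-default} is safe under `set -u` even when unset.
echo "guarded: ${download_dir:-/tmp/downloads}"
```

Because the unbound-variable abort propagates as exit status 1 from upgrade.sh, the harness reports `got error exit status 1` even though every node finished draining, recreating, and uncordoning.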