Result: FAILURE
Tests: 3 failed / 443 succeeded
Started: 2020-05-24 05:19
Elapsed: 11h57m
Builder: 0d93574e-9d7e-11ea-b32e-1a4cbe0cfe79
resultstore: https://source.cloud.google.com/results/invocations/1155ae08-8737-479c-9b1e-4568d252b900/targets/test
infra-commit: 2259624c5
job-version: v1.17.7-rc.0.3+34646fff129742
master_os_image: cos-77-12371-175-0
node_os_image: cos-73-11647-163-0
revision: v1.17.7-rc.0.3+34646fff129742

Test Failures


Cluster downgrade [sig-cluster-lifecycle] cluster-downgrade (11m1s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\sdowngrade\s\[sig\-cluster\-lifecycle\]\scluster\-downgrade$'
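
The harness command above is the standard hack/e2e.go wrapper. The same focus expression can also be handed straight to a prebuilt e2e.test binary against a live cluster; a minimal sketch, assuming a built binary and cluster credentials (the binary path and the --provider and --kubeconfig values are assumptions, while the --ginkgo.focus value is verbatim from the command above):

    ./e2e.test \
        --provider=gce \
        --kubeconfig="${HOME}/.kube/config" \
        --ginkgo.focus='Cluster\sdowngrade\s\[sig\-cluster\-lifecycle\]\scluster\-downgrade$'
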
May 24 05:41:27.143: Unexpected error:
    <*errors.errorString | 0xc0004801c0>:
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-N -o v1.16.11-rc.0.1+a33df4b740b54a]; got error exit status 1, stdout "Fetching the previously installed CoreDNS version\n== Pre-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.7-rc.0.3+34646fff129742\"\nname: \"bootstrap-e2e-minion-group-9l2t\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.7-rc.0.3+34646fff129742\"\nname: \"bootstrap-e2e-minion-group-gsd8\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.7-rc.0.3+34646fff129742\"\nname: \"bootstrap-e2e-minion-group-r0sj\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.7-rc.0.3+34646fff129742\"\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\n== Upgrading node environment variables. ==\nUsing subnet bootstrap-e2e\nInstance template name: bootstrap-e2e-minion-template-v1-16-11-rc-0-1-a33df4b740b54a\nnode/bootstrap-e2e-minion-group-9l2t cordoned\nevicting pod \"ss-0\"\nevicting pod \"res-cons-upgrade-j9ddf\"\nevicting pod \"res-cons-upgrade-mt69x\"\nevicting pod \"volume-snapshot-controller-0\"\nevicting pod \"foo-7ssvc\"\nevicting pod \"kube-dns-autoscaler-65bc6d4889-5krq7\"\nevicting pod \"kubernetes-dashboard-7778f8b456-kdd8w\"\npod/res-cons-upgrade-j9ddf evicted\npod/ss-0 evicted\npod/res-cons-upgrade-mt69x evicted\npod/kubernetes-dashboard-7778f8b456-kdd8w evicted\npod/volume-snapshot-controller-0 evicted\npod/kube-dns-autoscaler-65bc6d4889-5krq7 evicted\npod/foo-7ssvc evicted\nnode/bootstrap-e2e-minion-group-9l2t evicted\n.............................................................................Node bootstrap-e2e-minion-group-9l2t recreated.\nNode bootstrap-e2e-minion-group-9l2t Ready=True\nnode/bootstrap-e2e-minion-group-9l2t uncordoned\nnode/bootstrap-e2e-minion-group-gsd8 cordoned\nevicting pod \"ss-2\"\nevicting pod \"res-cons-upgrade-bn9g4\"\nevicting pod \"res-cons-upgrade-ctrl-qj48n\"\nevicting pod \"res-cons-upgrade-wgt4d\"\nevicting pod \"res-cons-upgrade-zdqb4\"\nevicting pod \"coredns-65567c7b57-q7dmf\"\nevicting pod \"kubernetes-dashboard-7778f8b456-kl4nb\"\nevicting pod \"metrics-server-v0.3.6-5f859c87d6-qpm8w\"\nevicting pod \"service-test-q8rbq\"\nevicting pod \"foo-6tzcs\"\nevicting pod \"rs-75f8g\"\nevicting pod \"ss-0\"\npod/service-test-q8rbq evicted\npod/res-cons-upgrade-bn9g4 evicted\npod/metrics-server-v0.3.6-5f859c87d6-qpm8w evicted\npod/ss-0 evicted\npod/res-cons-upgrade-ctrl-qj48n evicted\npod/res-cons-upgrade-zdqb4 evicted\npod/res-cons-upgrade-wgt4d evicted\npod/rs-75f8g evicted\npod/ss-2 evicted\npod/kubernetes-dashboard-7778f8b456-kl4nb evicted\npod/coredns-65567c7b57-q7dmf evicted\npod/foo-6tzcs evicted\nnode/bootstrap-e2e-minion-group-gsd8 evicted\n.........................................................................................................................Node bootstrap-e2e-minion-group-gsd8 recreated.\nNode bootstrap-e2e-minion-group-gsd8 Ready=True\nnode/bootstrap-e2e-minion-group-gsd8 uncordoned\nnode/bootstrap-e2e-minion-group-r0sj cordoned\nevicting pod \"ss-1\"\nevicting pod \"test-apparmor-ntrqg\"\nevicting pod \"l7-default-backend-678889f899-gsh7r\"\nevicting pod \"apparmor-loader-d46k9\"\nevicting pod \"test-apparmor-mks5s\"\nevicting pod \"heapster-v1.6.0-beta.1-6cf46d596d-k5m9f\"\nevicting pod \"volume-snapshot-controller-0\"\nevicting pod \"coredns-65567c7b57-2mbz9\"\nevicting pod 
\"kube-dns-autoscaler-65bc6d4889-6vpjc\"\nevicting pod \"service-test-5n8rg\"\nevicting pod \"foo-xzg96\"\nevicting pod \"dp-657fc4b57d-r7wjr\"\nevicting pod \"res-cons-upgrade-2jvjb\"\npod/test-apparmor-mks5s evicted\npod/dp-657fc4b57d-r7wjr evicted\npod/apparmor-loader-d46k9 evicted\npod/volume-snapshot-controller-0 evicted\npod/service-test-5n8rg evicted\npod/ss-1 evicted\npod/l7-default-backend-678889f899-gsh7r evicted\npod/heapster-v1.6.0-beta.1-6cf46d596d-k5m9f evicted\npod/coredns-65567c7b57-2mbz9 evicted\npod/res-cons-upgrade-2jvjb evicted\npod/kube-dns-autoscaler-65bc6d4889-6vpjc evicted\npod/test-apparmor-ntrqg evicted\npod/foo-xzg96 evicted\nnode/bootstrap-e2e-minion-group-r0sj evicted\n................................................................................................................................................Node bootstrap-e2e-minion-group-r0sj recreated.\nNode bootstrap-e2e-minion-group-r0sj Ready=True\nnode/bootstrap-e2e-minion-group-r0sj uncordoned\nWaiting for CoreDNS to update\nFetching the latest installed CoreDNS version\n== Downloading the CoreDNS migration tool ==\n== Applying the latest default CoreDNS configuration ==\nserviceaccount/coredns unchanged\nclusterrole.rbac.authorization.k8s.io/system:coredns unchanged\nclusterrolebinding.rbac.authorization.k8s.io/system:coredns configured\nconfigmap/coredns configured\ndeployment.apps/coredns unchanged\nservice/kube-dns unchanged\n== The CoreDNS Config has been updated ==\n== Validating cluster post-upgrade ==\nValidating gce cluster, MULTIZONE=\nFound 4 node(s).\nNAME                              STATUS                     ROLES    AGE   VERSION\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   15m   v1.17.7-rc.0.3+34646fff129742\nbootstrap-e2e-minion-group-9l2t   Ready                      <none>   15m   v1.16.11-rc.0.1+a33df4b740b54a\nbootstrap-e2e-minion-group-gsd8   Ready                      <none>   15m   v1.16.11-rc.0.1+a33df4b740b54a\nbootstrap-e2e-minion-group-r0sj   Ready                      <none>   15m   v1.16.11-rc.0.1+a33df4b740b54a\nValidate output:\nNAME                 STATUS    MESSAGE             ERROR\netcd-1               Healthy   {\"health\":\"true\"}   \ncontroller-manager   Healthy   ok                  \nscheduler            Healthy   ok                  \netcd-0               Healthy   {\"health\":\"true\"}   \n\x1b[0;32mCluster validation succeeded\x1b[0m\n== Post-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.7-rc.0.3+34646fff129742\"\nname: \"bootstrap-e2e-minion-group-9l2t\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.11-rc.0.1+a33df4b740b54a\"\nname: \"bootstrap-e2e-minion-group-gsd8\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.11-rc.0.1+a33df4b740b54a\"\nname: \"bootstrap-e2e-minion-group-r0sj\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.11-rc.0.1+a33df4b740b54a\"\n", stderr "Project: k8s-jkns-gci-etcd3\nNetwork Project: k8s-jkns-gci-etcd3\nZone: us-west1-b\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\nNODE_NAMES=bootstrap-e2e-minion-group-9l2t bootstrap-e2e-minion-group-gsd8 bootstrap-e2e-minion-group-r0sj\n== Preparing node upgrade (to v1.16.11-rc.0.1+a33df4b740b54a). ==\nAttempt 1 to create bootstrap-e2e-minion-template-v1-16-11-rc-0-1-a33df4b740b54a\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. 
For more information, see: https://developers.google.com/compute/docs/disks#performance.\nCreated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-etcd3/global/instanceTemplates/bootstrap-e2e-minion-template-v1-16-11-rc-0-1-a33df4b740b54a].\nNAME                                                          MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP\nbootstrap-e2e-minion-template-v1-16-11-rc-0-1-a33df4b740b54a  n1-standard-2               2020-05-23T22:30:45.658-07:00\n== Finished preparing node upgrade (to v1.16.11-rc.0.1+a33df4b740b54a). ==\n== Upgrading nodes to v1.16.11-rc.0.1+a33df4b740b54a with max parallelism of 1. ==\n== Draining bootstrap-e2e-minion-group-9l2t. == \nWARNING: ignoring DaemonSet-managed Pods: kube-system/metadata-proxy-v0.1-qc2c2, sig-apps-daemonset-upgrade-8333/ds1-4gft2\n== Recreating instance bootstrap-e2e-minion-group-9l2t. ==\n== Waiting for new node to be added to k8s.  ==\n== Waiting for bootstrap-e2e-minion-group-9l2t to become ready. ==\n== Uncordon bootstrap-e2e-minion-group-9l2t. == \n== Draining bootstrap-e2e-minion-group-gsd8. == \nWARNING: ignoring DaemonSet-managed Pods: kube-system/metadata-proxy-v0.1-rk8qn, sig-apps-daemonset-upgrade-8333/ds1-2bq5h\n== Recreating instance bootstrap-e2e-minion-group-gsd8. ==\n== Waiting for new node to be added to k8s.  ==\n== Waiting for bootstrap-e2e-minion-group-gsd8 to become ready. ==\n== Uncordon bootstrap-e2e-minion-group-gsd8. == \n== Draining bootstrap-e2e-minion-group-r0sj. == \nWARNING: ignoring DaemonSet-managed Pods: kube-system/metadata-proxy-v0.1-c2nrb, sig-apps-daemonset-upgrade-8333/ds1-2lz5r; deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: apparmor-upgrade-5665/test-apparmor-ntrqg\n== Recreating instance bootstrap-e2e-minion-group-r0sj. ==\n== Waiting for new node to be added to k8s.  ==\n== Waiting for bootstrap-e2e-minion-group-r0sj to become ready. ==\n== Uncordon bootstrap-e2e-minion-group-r0sj. == \n== Deleting old templates in k8s-jkns-gci-etcd3. ==\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-etcd3/global/instanceTemplates/bootstrap-e2e-minion-template].\n== Finished upgrading nodes to v1.16.11-rc.0.1+a33df4b740b54a. ==\nWarning: Permanently added 'compute.4409736207352183347' (ED25519) to the list of known hosts.\r\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nProject: k8s-jkns-gci-etcd3\nNetwork Project: k8s-jkns-gci-etcd3\nZone: us-west1-b\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\n"
occurred
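
The stdout above records a rolling, one-node-at-a-time downgrade: each minion is cordoned, its pods are evicted, the VM is recreated from the v1.16 instance template, and the node is uncordoned once it reports Ready=True. A minimal sketch of that per-node cycle, assuming standard kubectl and gcloud tooling (node, group, and zone names are taken from the log; the real logic lives in cluster/gce/upgrade.sh):

    # One iteration of the drain/recreate/uncordon loop seen in the log.
    NODE=bootstrap-e2e-minion-group-9l2t
    GROUP=bootstrap-e2e-minion-group
    ZONE=us-west1-b

    kubectl drain "${NODE}" --ignore-daemonsets --force          # cordon, then evict pods
    gcloud compute instance-groups managed recreate-instances "${GROUP}" \
        --instances="${NODE}" --zone="${ZONE}"                   # rebuild from the new template
    kubectl wait --for=condition=Ready "node/${NODE}" --timeout=10m
    kubectl uncordon "${NODE}"
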

k8s.io/kubernetes/test/e2e/lifecycle.glob..func3.1.1.1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:178 +0x14a
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do(0xc002f8b218)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:111 +0x38a
k8s.io/kubernetes/test/e2e/lifecycle.runUpgradeSuite(0xc0009e1180, 0x76e1c40, 0xc, 0xc, 0xc000c85140, 0xc000897e00, 0xc001aa8800, 0x2, 0xc001aa8180)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:483 +0x47a
k8s.io/kubernetes/test/e2e/lifecycle.glob..func3.1.1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:183 +0x227
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000198800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:94 +0x242
k8s.io/kubernetes/test/e2e.TestE2E(0xc000198800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:116 +0x2b
testing.tRunner(0xc000198800, 0x49b0a50)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
from junit_upgradeupgrades.xml

Find ss-0 evicting mentions in log files | View test history on testgrid
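
Note that every node finished the downgrade and the cluster validated as healthy; the run fails only at the very end, when the script aborts with "/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable". That message is bash's set -u (nounset) behavior: referencing a variable that was never assigned on the taken code path kills the script with a non-zero exit status, which the e2e suite then surfaces as this failure. A minimal reproduction of the failure mode and the usual guards (illustrative only, not the script's actual code):

    #!/usr/bin/env bash
    set -u                     # abort on any reference to an unset variable
    echo "${download_dir}"     # dies here: "download_dir: unbound variable", exit 1

    # Either guard below prevents the abort:
    #   echo "${download_dir:-}"      # expand to an empty string if unset
    #   : "${download_dir:=/tmp}"     # assign a default on first use
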


Kubernetes e2e suite [sig-cluster-lifecycle] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade] (21m44s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sDowngrade\s\[Feature\:Downgrade\]\scluster\sdowngrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:ClusterDowngrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:165
May 24 05:41:27.143: Unexpected error:
    <*errors.errorString | 0xc0004801c0>: error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-N -o v1.16.11-rc.0.1+a33df4b740b54a]; got error exit status 1 (stdout and stderr identical to the Cluster downgrade failure above)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:178
				
from junit_upgrade01.xml

Find ss-0 evicting mentions in log files | View test history on testgrid
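
This entry fails with the same single root cause as the one above. The "Validate output" table in the log shows etcd, the scheduler, and the controller-manager all Healthy before the abort, so the cluster itself survived the downgrade. That post-downgrade check can be reproduced with standard kubectl queries (a sketch; the output columns match the log):

    kubectl get nodes              # master on v1.17.7-rc.0, minions on v1.16.11-rc.0
    kubectl get componentstatuses  # etcd-0/etcd-1, scheduler, controller-manager health
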


UpgradeTest (22m5s)

error during kubetest --test --test_args=--ginkgo.focus=\[Feature:ClusterDowngrade\] --upgrade-target=ci/k8s-stable2 --upgrade-image=gci --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=upgrade --check-version-skew=false: exit status 1
from junit_runner.xml
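
This runner entry is the harness propagating exit status 1 from the downgrade suite above rather than an independent failure. The step can be replayed with the same kubetest invocation (flags copied verbatim from the error message; a local kubetest install and cluster credentials are assumed):

    kubetest --test \
        --test_args='--ginkgo.focus=\[Feature:ClusterDowngrade\]' \
        --upgrade-target=ci/k8s-stable2 \
        --upgrade-image=gci \
        --report-dir=/workspace/_artifacts \
        --disable-log-dump=true \
        --report-prefix=upgrade \
        --check-version-skew=false
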

Filter through log files | View test history on testgrid
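
When triaging from downloaded artifacts rather than this page, the root-cause string is quick to locate (a sketch; the _artifacts directory layout is an assumption):

    grep -R "download_dir: unbound variable" _artifacts/
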


443 Passed Tests

9051 Skipped Tests