Result: FAILURE
Tests: 5 failed / 441 succeeded
Started: 2020-05-06 05:07
Elapsed: 12h39m
Builder: 71979f0b-8f57-11ea-990f-52c6b219fb85
resultstore: https://source.cloud.google.com/results/invocations/abb04e4d-26a2-47ce-b44c-5a71b4faa586/targets/test
infra-commit: 8d5aee220
job-version: v1.17.6-beta.0.19+7148120a96140a
master_os_image: cos-77-12371-175-0
node_os_image: cos-73-11647-163-0
revision: v1.17.6-beta.0.19+7148120a96140a

Test Failures


Cluster downgrade [sig-cluster-lifecycle] cluster-downgrade 10m32s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\sdowngrade\s\[sig\-cluster\-lifecycle\]\scluster\-downgrade$'
May  6 05:28:51.561: Unexpected error:
    <*errors.errorString | 0xc00397aaf0>: {
        s: "error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-N -o v1.16.10-beta.0.16+9b2f377af995d3]; got error exit status 1, stdout \"Fetching the previously installed CoreDNS version\\n== Pre-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.6-beta.0.19+7148120a96140a\\\"\\nname: \\\"bootstrap-e2e-minion-group-25ts\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.6-beta.0.19+7148120a96140a\\\"\\nname: \\\"bootstrap-e2e-minion-group-bkx8\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.6-beta.0.19+7148120a96140a\\\"\\nname: \\\"bootstrap-e2e-minion-group-j89v\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.6-beta.0.19+7148120a96140a\\\"\\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\\n== Upgrading node environment variables. ==\\nUsing subnet bootstrap-e2e\\nInstance template name: bootstrap-e2e-minion-template-v1-16-10-beta-0-16-9b2f377af995d3\\nnode/bootstrap-e2e-minion-group-25ts cordoned\\nevicting pod \\\"ss-1\\\"\\nevicting pod \\\"res-cons-upgrade-kkvxp\\\"\\nevicting pod \\\"service-test-5wv26\\\"\\nevicting pod \\\"rs-cxhzm\\\"\\nevicting pod \\\"foo-mkpv5\\\"\\nevicting pod \\\"coredns-65567c7b57-bmf8j\\\"\\nevicting pod \\\"dp-657fc4b57d-sxtn5\\\"\\nevicting pod \\\"heapster-v1.6.0-beta.1-6cf46d596d-xgv5p\\\"\\nevicting pod \\\"res-cons-upgrade-z6pnh\\\"\\npod/dp-657fc4b57d-sxtn5 evicted\\npod/res-cons-upgrade-kkvxp evicted\\npod/heapster-v1.6.0-beta.1-6cf46d596d-xgv5p evicted\\npod/rs-cxhzm evicted\\npod/ss-1 evicted\\npod/service-test-5wv26 evicted\\npod/res-cons-upgrade-z6pnh evicted\\npod/coredns-65567c7b57-bmf8j evicted\\npod/foo-mkpv5 evicted\\nnode/bootstrap-e2e-minion-group-25ts evicted\\n......................................................................................................Node bootstrap-e2e-minion-group-25ts recreated.\\nNode bootstrap-e2e-minion-group-25ts Ready=True\\nnode/bootstrap-e2e-minion-group-25ts uncordoned\\nnode/bootstrap-e2e-minion-group-bkx8 cordoned\\nevicting pod \\\"ss-2\\\"\\nevicting pod \\\"res-cons-upgrade-ctrl-pf8rd\\\"\\nevicting pod \\\"res-cons-upgrade-jc4zq\\\"\\nevicting pod \\\"res-cons-upgrade-tqpmk\\\"\\nevicting pod \\\"coredns-65567c7b57-7lfdc\\\"\\nevicting pod \\\"fluentd-gcp-scaler-76d9c77b4d-d9glx\\\"\\nevicting pod \\\"kube-dns-autoscaler-65bc6d4889-wvnzt\\\"\\nevicting pod \\\"kubernetes-dashboard-7778f8b456-5ps9t\\\"\\nevicting pod \\\"l7-default-backend-678889f899-dwwmd\\\"\\nevicting pod \\\"service-test-7zs7l\\\"\\nevicting pod \\\"foo-bxzs5\\\"\\nevicting pod \\\"test-apparmor-rkxx2\\\"\\nevicting pod \\\"rs-rx7hv\\\"\\nevicting pod \\\"ss-1\\\"\\nevicting pod \\\"apparmor-loader-68q29\\\"\\nevicting pod \\\"test-apparmor-rsnbw\\\"\\npod/test-apparmor-rkxx2 evicted\\npod/kube-dns-autoscaler-65bc6d4889-wvnzt evicted\\npod/res-cons-upgrade-ctrl-pf8rd evicted\\npod/service-test-7zs7l evicted\\npod/ss-1 evicted\\npod/l7-default-backend-678889f899-dwwmd evicted\\npod/ss-2 evicted\\npod/kubernetes-dashboard-7778f8b456-5ps9t evicted\\npod/res-cons-upgrade-jc4zq evicted\\npod/res-cons-upgrade-tqpmk evicted\\npod/rs-rx7hv evicted\\npod/apparmor-loader-68q29 evicted\\npod/coredns-65567c7b57-7lfdc evicted\\npod/foo-bxzs5 evicted\\npod/test-apparmor-rsnbw evicted\\npod/fluentd-gcp-scaler-76d9c77b4d-d9glx evicted\\nnode/bootstrap-e2e-minion-group-bkx8 
evicted\\n....................................................................................................Node bootstrap-e2e-minion-group-bkx8 recreated.\\nNode bootstrap-e2e-minion-group-bkx8 Ready=True\\nnode/bootstrap-e2e-minion-group-bkx8 uncordoned\\nnode/bootstrap-e2e-minion-group-j89v cordoned\\nevicting pod \\\"ss-0\\\"\\nevicting pod \\\"heapster-v1.6.0-beta.1-6cf46d596d-5phlz\\\"\\nevicting pod \\\"res-cons-upgrade-qr572\\\"\\nevicting pod \\\"service-test-jthnr\\\"\\nevicting pod \\\"res-cons-upgrade-dpwdr\\\"\\nevicting pod \\\"res-cons-upgrade-pvbft\\\"\\nevicting pod \\\"dp-657fc4b57d-8t2zc\\\"\\nevicting pod \\\"foo-4jxpn\\\"\\nevicting pod \\\"coredns-65567c7b57-clwfj\\\"\\nevicting pod \\\"event-exporter-v0.3.1-747b47fcd-s7pnt\\\"\\nevicting pod \\\"metrics-server-v0.3.6-5f859c87d6-7gfml\\\"\\nevicting pod \\\"volume-snapshot-controller-0\\\"\\npod/dp-657fc4b57d-8t2zc evicted\\npod/service-test-jthnr evicted\\npod/heapster-v1.6.0-beta.1-6cf46d596d-5phlz evicted\\npod/volume-snapshot-controller-0 evicted\\npod/ss-0 evicted\\npod/res-cons-upgrade-qr572 evicted\\npod/metrics-server-v0.3.6-5f859c87d6-7gfml evicted\\npod/res-cons-upgrade-pvbft evicted\\npod/res-cons-upgrade-dpwdr evicted\\npod/coredns-65567c7b57-clwfj evicted\\npod/event-exporter-v0.3.1-747b47fcd-s7pnt evicted\\npod/foo-4jxpn evicted\\nnode/bootstrap-e2e-minion-group-j89v evicted\\n.........................................................................................................Node bootstrap-e2e-minion-group-j89v recreated.\\nNode bootstrap-e2e-minion-group-j89v Ready=True\\nnode/bootstrap-e2e-minion-group-j89v uncordoned\\nWaiting for CoreDNS to update\\nFetching the latest installed CoreDNS version\\n== Downloading the CoreDNS migration tool ==\\n== Applying the latest default CoreDNS configuration ==\\nserviceaccount/coredns unchanged\\nclusterrole.rbac.authorization.k8s.io/system:coredns unchanged\\nclusterrolebinding.rbac.authorization.k8s.io/system:coredns configured\\nconfigmap/coredns configured\\ndeployment.apps/coredns unchanged\\nservice/kube-dns unchanged\\n== The CoreDNS Config has been updated ==\\n== Validating cluster post-upgrade ==\\nValidating gce cluster, MULTIZONE=\\nFound 4 node(s).\\nNAME                              STATUS                     ROLES    AGE   VERSION\\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   14m   v1.17.6-beta.0.19+7148120a96140a\\nbootstrap-e2e-minion-group-25ts   Ready                      <none>   14m   v1.16.10-beta.0.16+9b2f377af995d3\\nbootstrap-e2e-minion-group-bkx8   Ready                      <none>   14m   v1.16.10-beta.0.16+9b2f377af995d3\\nbootstrap-e2e-minion-group-j89v   Ready                      <none>   14m   v1.16.10-beta.0.16+9b2f377af995d3\\nValidate output:\\nNAME                 STATUS    MESSAGE             ERROR\\nscheduler            Healthy   ok                  \\ncontroller-manager   Healthy   ok                  \\netcd-1               Healthy   {\\\"health\\\":\\\"true\\\"}   \\netcd-0               Healthy   {\\\"health\\\":\\\"true\\\"}   \\n\\x1b[0;32mCluster validation succeeded\\x1b[0m\\n== Post-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.6-beta.0.19+7148120a96140a\\\"\\nname: \\\"bootstrap-e2e-minion-group-25ts\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.10-beta.0.16+9b2f377af995d3\\\"\\nname: \\\"bootstrap-e2e-minion-group-bkx8\\\", osImage: 
\\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.10-beta.0.16+9b2f377af995d3\\\"\\nname: \\\"bootstrap-e2e-minion-group-j89v\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.10-beta.0.16+9b2f377af995d3\\\"\\n\", stderr \"Project: e2e-gce-gci-ci-1-5\\nNetwork Project: e2e-gce-gci-ci-1-5\\nZone: us-west1-b\\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\\nNODE_NAMES=bootstrap-e2e-minion-group-25ts bootstrap-e2e-minion-group-bkx8 bootstrap-e2e-minion-group-j89v\\n== Preparing node upgrade (to v1.16.10-beta.0.16+9b2f377af995d3). ==\\nAttempt 1 to create bootstrap-e2e-minion-template-v1-16-10-beta-0-16-9b2f377af995d3\\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\\nCreated [https://www.googleapis.com/compute/v1/projects/e2e-gce-gci-ci-1-5/global/instanceTemplates/bootstrap-e2e-minion-template-v1-16-10-beta-0-16-9b2f377af995d3].\\nNAME                                                             MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP\\nbootstrap-e2e-minion-template-v1-16-10-beta-0-16-9b2f377af995d3  n1-standard-2               2020-05-05T22:18:34.281-07:00\\n== Finished preparing node upgrade (to v1.16.10-beta.0.16+9b2f377af995d3). ==\\n== Upgrading nodes to v1.16.10-beta.0.16+9b2f377af995d3 with max parallelism of 1. ==\\n== Draining bootstrap-e2e-minion-group-25ts. == \\nWARNING: ignoring DaemonSet-managed Pods: kube-system/fluentd-gcp-v3.2.0-jsng7, kube-system/metadata-proxy-v0.1-qzv9d, sig-apps-daemonset-upgrade-2077/ds1-zlrcl\\n== Recreating instance bootstrap-e2e-minion-group-25ts. ==\\n== Waiting for new node to be added to k8s.  ==\\n== Waiting for bootstrap-e2e-minion-group-25ts to become ready. ==\\n== Uncordon bootstrap-e2e-minion-group-25ts. == \\n== Draining bootstrap-e2e-minion-group-bkx8. == \\nWARNING: ignoring DaemonSet-managed Pods: kube-system/fluentd-gcp-v3.2.0-7fbfr, kube-system/metadata-proxy-v0.1-fg5nw, sig-apps-daemonset-upgrade-2077/ds1-69qhr; deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: apparmor-upgrade-5640/test-apparmor-rsnbw\\n== Recreating instance bootstrap-e2e-minion-group-bkx8. ==\\n== Waiting for new node to be added to k8s.  ==\\n== Waiting for bootstrap-e2e-minion-group-bkx8 to become ready. ==\\n== Uncordon bootstrap-e2e-minion-group-bkx8. == \\n== Draining bootstrap-e2e-minion-group-j89v. == \\nWARNING: ignoring DaemonSet-managed Pods: kube-system/fluentd-gcp-v3.2.0-mgn4z, kube-system/metadata-proxy-v0.1-9x8r6, sig-apps-daemonset-upgrade-2077/ds1-57pkl\\n== Recreating instance bootstrap-e2e-minion-group-j89v. ==\\n== Waiting for new node to be added to k8s.  ==\\n== Waiting for bootstrap-e2e-minion-group-j89v to become ready. ==\\n== Uncordon bootstrap-e2e-minion-group-j89v. == \\n== Deleting old templates in e2e-gce-gci-ci-1-5. ==\\nDeleted [https://www.googleapis.com/compute/v1/projects/e2e-gce-gci-ci-1-5/global/instanceTemplates/bootstrap-e2e-minion-template].\\n== Finished upgrading nodes to v1.16.10-beta.0.16+9b2f377af995d3. 
==\\nWarning: Permanently added 'compute.6168682871639767528' (ED25519) to the list of known hosts.\\r\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nProject: e2e-gce-gci-ci-1-5\\nNetwork Project: e2e-gce-gci-ci-1-5\\nZone: us-west1-b\\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\\n\"",
    }
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-N -o v1.16.10-beta.0.16+9b2f377af995d3]; got error exit status 1, stdout "Fetching the previously installed CoreDNS version\n== Pre-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.6-beta.0.19+7148120a96140a\"\nname: \"bootstrap-e2e-minion-group-25ts\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.6-beta.0.19+7148120a96140a\"\nname: \"bootstrap-e2e-minion-group-bkx8\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.6-beta.0.19+7148120a96140a\"\nname: \"bootstrap-e2e-minion-group-j89v\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.6-beta.0.19+7148120a96140a\"\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\n== Upgrading node environment variables. ==\nUsing subnet bootstrap-e2e\nInstance template name: bootstrap-e2e-minion-template-v1-16-10-beta-0-16-9b2f377af995d3\nnode/bootstrap-e2e-minion-group-25ts cordoned\nevicting pod \"ss-1\"\nevicting pod \"res-cons-upgrade-kkvxp\"\nevicting pod \"service-test-5wv26\"\nevicting pod \"rs-cxhzm\"\nevicting pod \"foo-mkpv5\"\nevicting pod \"coredns-65567c7b57-bmf8j\"\nevicting pod \"dp-657fc4b57d-sxtn5\"\nevicting pod \"heapster-v1.6.0-beta.1-6cf46d596d-xgv5p\"\nevicting pod \"res-cons-upgrade-z6pnh\"\npod/dp-657fc4b57d-sxtn5 evicted\npod/res-cons-upgrade-kkvxp evicted\npod/heapster-v1.6.0-beta.1-6cf46d596d-xgv5p evicted\npod/rs-cxhzm evicted\npod/ss-1 evicted\npod/service-test-5wv26 evicted\npod/res-cons-upgrade-z6pnh evicted\npod/coredns-65567c7b57-bmf8j evicted\npod/foo-mkpv5 evicted\nnode/bootstrap-e2e-minion-group-25ts evicted\n......................................................................................................Node bootstrap-e2e-minion-group-25ts recreated.\nNode bootstrap-e2e-minion-group-25ts Ready=True\nnode/bootstrap-e2e-minion-group-25ts uncordoned\nnode/bootstrap-e2e-minion-group-bkx8 cordoned\nevicting pod \"ss-2\"\nevicting pod \"res-cons-upgrade-ctrl-pf8rd\"\nevicting pod \"res-cons-upgrade-jc4zq\"\nevicting pod \"res-cons-upgrade-tqpmk\"\nevicting pod \"coredns-65567c7b57-7lfdc\"\nevicting pod \"fluentd-gcp-scaler-76d9c77b4d-d9glx\"\nevicting pod \"kube-dns-autoscaler-65bc6d4889-wvnzt\"\nevicting pod \"kubernetes-dashboard-7778f8b456-5ps9t\"\nevicting pod \"l7-default-backend-678889f899-dwwmd\"\nevicting pod \"service-test-7zs7l\"\nevicting pod \"foo-bxzs5\"\nevicting pod \"test-apparmor-rkxx2\"\nevicting pod \"rs-rx7hv\"\nevicting pod \"ss-1\"\nevicting pod \"apparmor-loader-68q29\"\nevicting pod \"test-apparmor-rsnbw\"\npod/test-apparmor-rkxx2 evicted\npod/kube-dns-autoscaler-65bc6d4889-wvnzt evicted\npod/res-cons-upgrade-ctrl-pf8rd evicted\npod/service-test-7zs7l evicted\npod/ss-1 evicted\npod/l7-default-backend-678889f899-dwwmd evicted\npod/ss-2 evicted\npod/kubernetes-dashboard-7778f8b456-5ps9t evicted\npod/res-cons-upgrade-jc4zq evicted\npod/res-cons-upgrade-tqpmk evicted\npod/rs-rx7hv evicted\npod/apparmor-loader-68q29 evicted\npod/coredns-65567c7b57-7lfdc evicted\npod/foo-bxzs5 evicted\npod/test-apparmor-rsnbw evicted\npod/fluentd-gcp-scaler-76d9c77b4d-d9glx evicted\nnode/bootstrap-e2e-minion-group-bkx8 evicted\n....................................................................................................Node bootstrap-e2e-minion-group-bkx8 recreated.\nNode bootstrap-e2e-minion-group-bkx8 
Ready=True\nnode/bootstrap-e2e-minion-group-bkx8 uncordoned\nnode/bootstrap-e2e-minion-group-j89v cordoned\nevicting pod \"ss-0\"\nevicting pod \"heapster-v1.6.0-beta.1-6cf46d596d-5phlz\"\nevicting pod \"res-cons-upgrade-qr572\"\nevicting pod \"service-test-jthnr\"\nevicting pod \"res-cons-upgrade-dpwdr\"\nevicting pod \"res-cons-upgrade-pvbft\"\nevicting pod \"dp-657fc4b57d-8t2zc\"\nevicting pod \"foo-4jxpn\"\nevicting pod \"coredns-65567c7b57-clwfj\"\nevicting pod \"event-exporter-v0.3.1-747b47fcd-s7pnt\"\nevicting pod \"metrics-server-v0.3.6-5f859c87d6-7gfml\"\nevicting pod \"volume-snapshot-controller-0\"\npod/dp-657fc4b57d-8t2zc evicted\npod/service-test-jthnr evicted\npod/heapster-v1.6.0-beta.1-6cf46d596d-5phlz evicted\npod/volume-snapshot-controller-0 evicted\npod/ss-0 evicted\npod/res-cons-upgrade-qr572 evicted\npod/metrics-server-v0.3.6-5f859c87d6-7gfml evicted\npod/res-cons-upgrade-pvbft evicted\npod/res-cons-upgrade-dpwdr evicted\npod/coredns-65567c7b57-clwfj evicted\npod/event-exporter-v0.3.1-747b47fcd-s7pnt evicted\npod/foo-4jxpn evicted\nnode/bootstrap-e2e-minion-group-j89v evicted\n.........................................................................................................Node bootstrap-e2e-minion-group-j89v recreated.\nNode bootstrap-e2e-minion-group-j89v Ready=True\nnode/bootstrap-e2e-minion-group-j89v uncordoned\nWaiting for CoreDNS to update\nFetching the latest installed CoreDNS version\n== Downloading the CoreDNS migration tool ==\n== Applying the latest default CoreDNS configuration ==\nserviceaccount/coredns unchanged\nclusterrole.rbac.authorization.k8s.io/system:coredns unchanged\nclusterrolebinding.rbac.authorization.k8s.io/system:coredns configured\nconfigmap/coredns configured\ndeployment.apps/coredns unchanged\nservice/kube-dns unchanged\n== The CoreDNS Config has been updated ==\n== Validating cluster post-upgrade ==\nValidating gce cluster, MULTIZONE=\nFound 4 node(s).\nNAME                              STATUS                     ROLES    AGE   VERSION\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   14m   v1.17.6-beta.0.19+7148120a96140a\nbootstrap-e2e-minion-group-25ts   Ready                      <none>   14m   v1.16.10-beta.0.16+9b2f377af995d3\nbootstrap-e2e-minion-group-bkx8   Ready                      <none>   14m   v1.16.10-beta.0.16+9b2f377af995d3\nbootstrap-e2e-minion-group-j89v   Ready                      <none>   14m   v1.16.10-beta.0.16+9b2f377af995d3\nValidate output:\nNAME                 STATUS    MESSAGE             ERROR\nscheduler            Healthy   ok                  \ncontroller-manager   Healthy   ok                  \netcd-1               Healthy   {\"health\":\"true\"}   \netcd-0               Healthy   {\"health\":\"true\"}   \n\x1b[0;32mCluster validation succeeded\x1b[0m\n== Post-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.6-beta.0.19+7148120a96140a\"\nname: \"bootstrap-e2e-minion-group-25ts\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.10-beta.0.16+9b2f377af995d3\"\nname: \"bootstrap-e2e-minion-group-bkx8\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.10-beta.0.16+9b2f377af995d3\"\nname: \"bootstrap-e2e-minion-group-j89v\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.10-beta.0.16+9b2f377af995d3\"\n", stderr "Project: e2e-gce-gci-ci-1-5\nNetwork Project: e2e-gce-gci-ci-1-5\nZone: 
us-west1-b\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\nNODE_NAMES=bootstrap-e2e-minion-group-25ts bootstrap-e2e-minion-group-bkx8 bootstrap-e2e-minion-group-j89v\n== Preparing node upgrade (to v1.16.10-beta.0.16+9b2f377af995d3). ==\nAttempt 1 to create bootstrap-e2e-minion-template-v1-16-10-beta-0-16-9b2f377af995d3\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\nCreated [https://www.googleapis.com/compute/v1/projects/e2e-gce-gci-ci-1-5/global/instanceTemplates/bootstrap-e2e-minion-template-v1-16-10-beta-0-16-9b2f377af995d3].\nNAME                                                             MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP\nbootstrap-e2e-minion-template-v1-16-10-beta-0-16-9b2f377af995d3  n1-standard-2               2020-05-05T22:18:34.281-07:00\n== Finished preparing node upgrade (to v1.16.10-beta.0.16+9b2f377af995d3). ==\n== Upgrading nodes to v1.16.10-beta.0.16+9b2f377af995d3 with max parallelism of 1. ==\n== Draining bootstrap-e2e-minion-group-25ts. == \nWARNING: ignoring DaemonSet-managed Pods: kube-system/fluentd-gcp-v3.2.0-jsng7, kube-system/metadata-proxy-v0.1-qzv9d, sig-apps-daemonset-upgrade-2077/ds1-zlrcl\n== Recreating instance bootstrap-e2e-minion-group-25ts. ==\n== Waiting for new node to be added to k8s.  ==\n== Waiting for bootstrap-e2e-minion-group-25ts to become ready. ==\n== Uncordon bootstrap-e2e-minion-group-25ts. == \n== Draining bootstrap-e2e-minion-group-bkx8. == \nWARNING: ignoring DaemonSet-managed Pods: kube-system/fluentd-gcp-v3.2.0-7fbfr, kube-system/metadata-proxy-v0.1-fg5nw, sig-apps-daemonset-upgrade-2077/ds1-69qhr; deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: apparmor-upgrade-5640/test-apparmor-rsnbw\n== Recreating instance bootstrap-e2e-minion-group-bkx8. ==\n== Waiting for new node to be added to k8s.  ==\n== Waiting for bootstrap-e2e-minion-group-bkx8 to become ready. ==\n== Uncordon bootstrap-e2e-minion-group-bkx8. == \n== Draining bootstrap-e2e-minion-group-j89v. == \nWARNING: ignoring DaemonSet-managed Pods: kube-system/fluentd-gcp-v3.2.0-mgn4z, kube-system/metadata-proxy-v0.1-9x8r6, sig-apps-daemonset-upgrade-2077/ds1-57pkl\n== Recreating instance bootstrap-e2e-minion-group-j89v. ==\n== Waiting for new node to be added to k8s.  ==\n== Waiting for bootstrap-e2e-minion-group-j89v to become ready. ==\n== Uncordon bootstrap-e2e-minion-group-j89v. == \n== Deleting old templates in e2e-gce-gci-ci-1-5. ==\nDeleted [https://www.googleapis.com/compute/v1/projects/e2e-gce-gci-ci-1-5/global/instanceTemplates/bootstrap-e2e-minion-template].\n== Finished upgrading nodes to v1.16.10-beta.0.16+9b2f377af995d3. ==\nWarning: Permanently added 'compute.6168682871639767528' (ED25519) to the list of known hosts.\r\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nProject: e2e-gce-gci-ci-1-5\nNetwork Project: e2e-gce-gci-ci-1-5\nZone: us-west1-b\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\n"
occurred

k8s.io/kubernetes/test/e2e/lifecycle.glob..func3.1.1.1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:178 +0x14a
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do(0xc0028ed218)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:111 +0x38a
k8s.io/kubernetes/test/e2e/lifecycle.runUpgradeSuite(0xc000c39400, 0x76e0c40, 0xc, 0xc, 0xc000641320, 0xc002e8b0e0, 0xc002017520, 0x2, 0xc0029fb280)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:483 +0x47a
k8s.io/kubernetes/test/e2e/lifecycle.glob..func3.1.1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:183 +0x227
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001d26200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:94 +0x242
k8s.io/kubernetes/test/e2e.TestE2E(0xc001d26200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:116 +0x2b
testing.tRunner(0xc001d26200, 0x49afa00)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
				from junit_upgradeupgrades.xml
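
The stderr tail carries the actual failure: /workspace/kubernetes_skew/cluster/gce/upgrade.sh aborts at line 452 with "download_dir: unbound variable". That message is what bash prints when a script running under set -o nounset expands a variable that was never assigned, so the script is evidently using nounset; note the stdout above shows node recreation and cluster validation had already succeeded before this point. The sketch below only illustrates that failure mode and a guarded default — the variable name comes from the error, but the preamble and the default path are assumptions, not the real upgrade.sh.

#!/usr/bin/env bash
# Minimal sketch of the "unbound variable" failure mode; not the real upgrade.sh.
set -o errexit -o nounset -o pipefail

# Under nounset, expanding a variable that was never assigned aborts the script:
#   sketch.sh: line 7: download_dir: unbound variable
# echo "staging release into ${download_dir}"

# A guarded expansion substitutes a default instead (path is illustrative only):
download_dir="${download_dir:-/tmp/k8s-download}"
echo "staging release into ${download_dir}"

A fix along these lines would assign download_dir (or guard its expansion) before the code path reached at line 452.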



Kubernetes e2e suite [sig-cluster-lifecycle] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade] 21m28s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sDowngrade\s\[Feature\:Downgrade\]\scluster\sdowngrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:ClusterDowngrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:165
May  6 05:28:51.561: Unexpected error:
    <*errors.errorString | 0xc00397aaf0>: {
        s: "error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-N -o v1.16.10-beta.0.16+9b2f377af995d3]; got error exit status 1, stdout \"Fetching the previously installed CoreDNS version\\n== Pre-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.6-beta.0.19+7148120a96140a\\\"\\nname: \\\"bootstrap-e2e-minion-group-25ts\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.6-beta.0.19+7148120a96140a\\\"\\nname: \\\"bootstrap-e2e-minion-group-bkx8\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.6-beta.0.19+7148120a96140a\\\"\\nname: \\\"bootstrap-e2e-minion-group-j89v\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.6-beta.0.19+7148120a96140a\\\"\\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\\n== Upgrading node environment variables. ==\\nUsing subnet bootstrap-e2e\\nInstance template name: bootstrap-e2e-minion-template-v1-16-10-beta-0-16-9b2f377af995d3\\nnode/bootstrap-e2e-minion-group-25ts cordoned\\nevicting pod \\\"ss-1\\\"\\nevicting pod \\\"res-cons-upgrade-kkvxp\\\"\\nevicting pod \\\"service-test-5wv26\\\"\\nevicting pod \\\"rs-cxhzm\\\"\\nevicting pod \\\"foo-mkpv5\\\"\\nevicting pod \\\"coredns-65567c7b57-bmf8j\\\"\\nevicting pod \\\"dp-657fc4b57d-sxtn5\\\"\\nevicting pod \\\"heapster-v1.6.0-beta.1-6cf46d596d-xgv5p\\\"\\nevicting pod \\\"res-cons-upgrade-z6pnh\\\"\\npod/dp-657fc4b57d-sxtn5 evicted\\npod/res-cons-upgrade-kkvxp evicted\\npod/heapster-v1.6.0-beta.1-6cf46d596d-xgv5p evicted\\npod/rs-cxhzm evicted\\npod/ss-1 evicted\\npod/service-test-5wv26 evicted\\npod/res-cons-upgrade-z6pnh evicted\\npod/coredns-65567c7b57-bmf8j evicted\\npod/foo-mkpv5 evicted\\nnode/bootstrap-e2e-minion-group-25ts evicted\\n......................................................................................................Node bootstrap-e2e-minion-group-25ts recreated.\\nNode bootstrap-e2e-minion-group-25ts Ready=True\\nnode/bootstrap-e2e-minion-group-25ts uncordoned\\nnode/bootstrap-e2e-minion-group-bkx8 cordoned\\nevicting pod \\\"ss-2\\\"\\nevicting pod \\\"res-cons-upgrade-ctrl-pf8rd\\\"\\nevicting pod \\\"res-cons-upgrade-jc4zq\\\"\\nevicting pod \\\"res-cons-upgrade-tqpmk\\\"\\nevicting pod \\\"coredns-65567c7b57-7lfdc\\\"\\nevicting pod \\\"fluentd-gcp-scaler-76d9c77b4d-d9glx\\\"\\nevicting pod \\\"kube-dns-autoscaler-65bc6d4889-wvnzt\\\"\\nevicting pod \\\"kubernetes-dashboard-7778f8b456-5ps9t\\\"\\nevicting pod \\\"l7-default-backend-678889f899-dwwmd\\\"\\nevicting pod \\\"service-test-7zs7l\\\"\\nevicting pod \\\"foo-bxzs5\\\"\\nevicting pod \\\"test-apparmor-rkxx2\\\"\\nevicting pod \\\"rs-rx7hv\\\"\\nevicting pod \\\"ss-1\\\"\\nevicting pod \\\"apparmor-loader-68q29\\\"\\nevicting pod \\\"test-apparmor-rsnbw\\\"\\npod/test-apparmor-rkxx2 evicted\\npod/kube-dns-autoscaler-65bc6d4889-wvnzt evicted\\npod/res-cons-upgrade-ctrl-pf8rd evicted\\npod/service-test-7zs7l evicted\\npod/ss-1 evicted\\npod/l7-default-backend-678889f899-dwwmd evicted\\npod/ss-2 evicted\\npod/kubernetes-dashboard-7778f8b456-5ps9t evicted\\npod/res-cons-upgrade-jc4zq evicted\\npod/res-cons-upgrade-tqpmk evicted\\npod/rs-rx7hv evicted\\npod/apparmor-loader-68q29 evicted\\npod/coredns-65567c7b57-7lfdc evicted\\npod/foo-bxzs5 evicted\\npod/test-apparmor-rsnbw evicted\\npod/fluentd-gcp-scaler-76d9c77b4d-d9glx evicted\\nnode/bootstrap-e2e-minion-group-bkx8 
evicted\\n....................................................................................................Node bootstrap-e2e-minion-group-bkx8 recreated.\\nNode bootstrap-e2e-minion-group-bkx8 Ready=True\\nnode/bootstrap-e2e-minion-group-bkx8 uncordoned\\nnode/bootstrap-e2e-minion-group-j89v cordoned\\nevicting pod \\\"ss-0\\\"\\nevicting pod \\\"heapster-v1.6.0-beta.1-6cf46d596d-5phlz\\\"\\nevicting pod \\\"res-cons-upgrade-qr572\\\"\\nevicting pod \\\"service-test-jthnr\\\"\\nevicting pod \\\"res-cons-upgrade-dpwdr\\\"\\nevicting pod \\\"res-cons-upgrade-pvbft\\\"\\nevicting pod \\\"dp-657fc4b57d-8t2zc\\\"\\nevicting pod \\\"foo-4jxpn\\\"\\nevicting pod \\\"coredns-65567c7b57-clwfj\\\"\\nevicting pod \\\"event-exporter-v0.3.1-747b47fcd-s7pnt\\\"\\nevicting pod \\\"metrics-server-v0.3.6-5f859c87d6-7gfml\\\"\\nevicting pod \\\"volume-snapshot-controller-0\\\"\\npod/dp-657fc4b57d-8t2zc evicted\\npod/service-test-jthnr evicted\\npod/heapster-v1.6.0-beta.1-6cf46d596d-5phlz evicted\\npod/volume-snapshot-controller-0 evicted\\npod/ss-0 evicted\\npod/res-cons-upgrade-qr572 evicted\\npod/metrics-server-v0.3.6-5f859c87d6-7gfml evicted\\npod/res-cons-upgrade-pvbft evicted\\npod/res-cons-upgrade-dpwdr evicted\\npod/coredns-65567c7b57-clwfj evicted\\npod/event-exporter-v0.3.1-747b47fcd-s7pnt evicted\\npod/foo-4jxpn evicted\\nnode/bootstrap-e2e-minion-group-j89v evicted\\n.........................................................................................................Node bootstrap-e2e-minion-group-j89v recreated.\\nNode bootstrap-e2e-minion-group-j89v Ready=True\\nnode/bootstrap-e2e-minion-group-j89v uncordoned\\nWaiting for CoreDNS to update\\nFetching the latest installed CoreDNS version\\n== Downloading the CoreDNS migration tool ==\\n== Applying the latest default CoreDNS configuration ==\\nserviceaccount/coredns unchanged\\nclusterrole.rbac.authorization.k8s.io/system:coredns unchanged\\nclusterrolebinding.rbac.authorization.k8s.io/system:coredns configured\\nconfigmap/coredns configured\\ndeployment.apps/coredns unchanged\\nservice/kube-dns unchanged\\n== The CoreDNS Config has been updated ==\\n== Validating cluster post-upgrade ==\\nValidating gce cluster, MULTIZONE=\\nFound 4 node(s).\\nNAME                              STATUS                     ROLES    AGE   VERSION\\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   14m   v1.17.6-beta.0.19+7148120a96140a\\nbootstrap-e2e-minion-group-25ts   Ready                      <none>   14m   v1.16.10-beta.0.16+9b2f377af995d3\\nbootstrap-e2e-minion-group-bkx8   Ready                      <none>   14m   v1.16.10-beta.0.16+9b2f377af995d3\\nbootstrap-e2e-minion-group-j89v   Ready                      <none>   14m   v1.16.10-beta.0.16+9b2f377af995d3\\nValidate output:\\nNAME                 STATUS    MESSAGE             ERROR\\nscheduler            Healthy   ok                  \\ncontroller-manager   Healthy   ok                  \\netcd-1               Healthy   {\\\"health\\\":\\\"true\\\"}   \\netcd-0               Healthy   {\\\"health\\\":\\\"true\\\"}   \\n\\x1b[0;32mCluster validation succeeded\\x1b[0m\\n== Post-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.6-beta.0.19+7148120a96140a\\\"\\nname: \\\"bootstrap-e2e-minion-group-25ts\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.10-beta.0.16+9b2f377af995d3\\\"\\nname: \\\"bootstrap-e2e-minion-group-bkx8\\\", osImage: 
\\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.10-beta.0.16+9b2f377af995d3\\\"\\nname: \\\"bootstrap-e2e-minion-group-j89v\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.10-beta.0.16+9b2f377af995d3\\\"\\n\", stderr \"Project: e2e-gce-gci-ci-1-5\\nNetwork Project: e2e-gce-gci-ci-1-5\\nZone: us-west1-b\\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\\nNODE_NAMES=bootstrap-e2e-minion-group-25ts bootstrap-e2e-minion-group-bkx8 bootstrap-e2e-minion-group-j89v\\n== Preparing node upgrade (to v1.16.10-beta.0.16+9b2f377af995d3). ==\\nAttempt 1 to create bootstrap-e2e-minion-template-v1-16-10-beta-0-16-9b2f377af995d3\\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\\nCreated [https://www.googleapis.com/compute/v1/projects/e2e-gce-gci-ci-1-5/global/instanceTemplates/bootstrap-e2e-minion-template-v1-16-10-beta-0-16-9b2f377af995d3].\\nNAME                                                             MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP\\nbootstrap-e2e-minion-template-v1-16-10-beta-0-16-9b2f377af995d3  n1-standard-2               2020-05-05T22:18:34.281-07:00\\n== Finished preparing node upgrade (to v1.16.10-beta.0.16+9b2f377af995d3). ==\\n== Upgrading nodes to v1.16.10-beta.0.16+9b2f377af995d3 with max parallelism of 1. ==\\n== Draining bootstrap-e2e-minion-group-25ts. == \\nWARNING: ignoring DaemonSet-managed Pods: kube-system/fluentd-gcp-v3.2.0-jsng7, kube-system/metadata-proxy-v0.1-qzv9d, sig-apps-daemonset-upgrade-2077/ds1-zlrcl\\n== Recreating instance bootstrap-e2e-minion-group-25ts. ==\\n== Waiting for new node to be added to k8s.  ==\\n== Waiting for bootstrap-e2e-minion-group-25ts to become ready. ==\\n== Uncordon bootstrap-e2e-minion-group-25ts. == \\n== Draining bootstrap-e2e-minion-group-bkx8. == \\nWARNING: ignoring DaemonSet-managed Pods: kube-system/fluentd-gcp-v3.2.0-7fbfr, kube-system/metadata-proxy-v0.1-fg5nw, sig-apps-daemonset-upgrade-2077/ds1-69qhr; deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: apparmor-upgrade-5640/test-apparmor-rsnbw\\n== Recreating instance bootstrap-e2e-minion-group-bkx8. ==\\n== Waiting for new node to be added to k8s.  ==\\n== Waiting for bootstrap-e2e-minion-group-bkx8 to become ready. ==\\n== Uncordon bootstrap-e2e-minion-group-bkx8. == \\n== Draining bootstrap-e2e-minion-group-j89v. == \\nWARNING: ignoring DaemonSet-managed Pods: kube-system/fluentd-gcp-v3.2.0-mgn4z, kube-system/metadata-proxy-v0.1-9x8r6, sig-apps-daemonset-upgrade-2077/ds1-57pkl\\n== Recreating instance bootstrap-e2e-minion-group-j89v. ==\\n== Waiting for new node to be added to k8s.  ==\\n== Waiting for bootstrap-e2e-minion-group-j89v to become ready. ==\\n== Uncordon bootstrap-e2e-minion-group-j89v. == \\n== Deleting old templates in e2e-gce-gci-ci-1-5. ==\\nDeleted [https://www.googleapis.com/compute/v1/projects/e2e-gce-gci-ci-1-5/global/instanceTemplates/bootstrap-e2e-minion-template].\\n== Finished upgrading nodes to v1.16.10-beta.0.16+9b2f377af995d3. 
==\\nWarning: Permanently added 'compute.6168682871639767528' (ED25519) to the list of known hosts.\\r\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nProject: e2e-gce-gci-ci-1-5\\nNetwork Project: e2e-gce-gci-ci-1-5\\nZone: us-west1-b\\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\\n\"",
    }
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-N -o v1.16.10-beta.0.16+9b2f377af995d3]; got error exit status 1, stdout "Fetching the previously installed CoreDNS version\n== Pre-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.6-beta.0.19+7148120a96140a\"\nname: \"bootstrap-e2e-minion-group-25ts\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.6-beta.0.19+7148120a96140a\"\nname: \"bootstrap-e2e-minion-group-bkx8\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.6-beta.0.19+7148120a96140a\"\nname: \"bootstrap-e2e-minion-group-j89v\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.6-beta.0.19+7148120a96140a\"\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\n== Upgrading node environment variables. ==\nUsing subnet bootstrap-e2e\nInstance template name: bootstrap-e2e-minion-template-v1-16-10-beta-0-16-9b2f377af995d3\nnode/bootstrap-e2e-minion-group-25ts cordoned\nevicting pod \"ss-1\"\nevicting pod \"res-cons-upgrade-kkvxp\"\nevicting pod \"service-test-5wv26\"\nevicting pod \"rs-cxhzm\"\nevicting pod \"foo-mkpv5\"\nevicting pod \"coredns-65567c7b57-bmf8j\"\nevicting pod \"dp-657fc4b57d-sxtn5\"\nevicting pod \"heapster-v1.6.0-beta.1-6cf46d596d-xgv5p\"\nevicting pod \"res-cons-upgrade-z6pnh\"\npod/dp-657fc4b57d-sxtn5 evicted\npod/res-cons-upgrade-kkvxp evicted\npod/heapster-v1.6.0-beta.1-6cf46d596d-xgv5p evicted\npod/rs-cxhzm evicted\npod/ss-1 evicted\npod/service-test-5wv26 evicted\npod/res-cons-upgrade-z6pnh evicted\npod/coredns-65567c7b57-bmf8j evicted\npod/foo-mkpv5 evicted\nnode/bootstrap-e2e-minion-group-25ts evicted\n......................................................................................................Node bootstrap-e2e-minion-group-25ts recreated.\nNode bootstrap-e2e-minion-group-25ts Ready=True\nnode/bootstrap-e2e-minion-group-25ts uncordoned\nnode/bootstrap-e2e-minion-group-bkx8 cordoned\nevicting pod \"ss-2\"\nevicting pod \"res-cons-upgrade-ctrl-pf8rd\"\nevicting pod \"res-cons-upgrade-jc4zq\"\nevicting pod \"res-cons-upgrade-tqpmk\"\nevicting pod \"coredns-65567c7b57-7lfdc\"\nevicting pod \"fluentd-gcp-scaler-76d9c77b4d-d9glx\"\nevicting pod \"kube-dns-autoscaler-65bc6d4889-wvnzt\"\nevicting pod \"kubernetes-dashboard-7778f8b456-5ps9t\"\nevicting pod \"l7-default-backend-678889f899-dwwmd\"\nevicting pod \"service-test-7zs7l\"\nevicting pod \"foo-bxzs5\"\nevicting pod \"test-apparmor-rkxx2\"\nevicting pod \"rs-rx7hv\"\nevicting pod \"ss-1\"\nevicting pod \"apparmor-loader-68q29\"\nevicting pod \"test-apparmor-rsnbw\"\npod/test-apparmor-rkxx2 evicted\npod/kube-dns-autoscaler-65bc6d4889-wvnzt evicted\npod/res-cons-upgrade-ctrl-pf8rd evicted\npod/service-test-7zs7l evicted\npod/ss-1 evicted\npod/l7-default-backend-678889f899-dwwmd evicted\npod/ss-2 evicted\npod/kubernetes-dashboard-7778f8b456-5ps9t evicted\npod/res-cons-upgrade-jc4zq evicted\npod/res-cons-upgrade-tqpmk evicted\npod/rs-rx7hv evicted\npod/apparmor-loader-68q29 evicted\npod/coredns-65567c7b57-7lfdc evicted\npod/foo-bxzs5 evicted\npod/test-apparmor-rsnbw evicted\npod/fluentd-gcp-scaler-76d9c77b4d-d9glx evicted\nnode/bootstrap-e2e-minion-group-bkx8 evicted\n....................................................................................................Node bootstrap-e2e-minion-group-bkx8 recreated.\nNode bootstrap-e2e-minion-group-bkx8 
Ready=True\nnode/bootstrap-e2e-minion-group-bkx8 uncordoned\nnode/bootstrap-e2e-minion-group-j89v cordoned\nevicting pod \"ss-0\"\nevicting pod \"heapster-v1.6.0-beta.1-6cf46d596d-5phlz\"\nevicting pod \"res-cons-upgrade-qr572\"\nevicting pod \"service-test-jthnr\"\nevicting pod \"res-cons-upgrade-dpwdr\"\nevicting pod \"res-cons-upgrade-pvbft\"\nevicting pod \"dp-657fc4b57d-8t2zc\"\nevicting pod \"foo-4jxpn\"\nevicting pod \"coredns-65567c7b57-clwfj\"\nevicting pod \"event-exporter-v0.3.1-747b47fcd-s7pnt\"\nevicting pod \"metrics-server-v0.3.6-5f859c87d6-7gfml\"\nevicting pod \"volume-snapshot-controller-0\"\npod/dp-657fc4b57d-8t2zc evicted\npod/service-test-jthnr evicted\npod/heapster-v1.6.0-beta.1-6cf46d596d-5phlz evicted\npod/volume-snapshot-controller-0 evicted\npod/ss-0 evicted\npod/res-cons-upgrade-qr572 evicted\npod/metrics-server-v0.3.6-5f859c87d6-7gfml evicted\npod/res-cons-upgrade-pvbft evicted\npod/res-cons-upgrade-dpwdr evicted\npod/coredns-65567c7b57-clwfj evicted\npod/event-exporter-v0.3.1-747b47fcd-s7pnt evicted\npod/foo-4jxpn evicted\nnode/bootstrap-e2e-minion-group-j89v evicted\n.........................................................................................................Node bootstrap-e2e-minion-group-j89v recreated.\nNode bootstrap-e2e-minion-group-j89v Ready=True\nnode/bootstrap-e2e-minion-group-j89v uncordoned\nWaiting for CoreDNS to update\nFetching the latest installed CoreDNS version\n== Downloading the CoreDNS migration tool ==\n== Applying the latest default CoreDNS configuration ==\nserviceaccount/coredns unchanged\nclusterrole.rbac.authorization.k8s.io/system:coredns unchanged\nclusterrolebinding.rbac.authorization.k8s.io/system:coredns configured\nconfigmap/coredns configured\ndeployment.apps/coredns unchanged\nservice/kube-dns unchanged\n== The CoreDNS Config has been updated ==\n== Validating cluster post-upgrade ==\nValidating gce cluster, MULTIZONE=\nFound 4 node(s).\nNAME                              STATUS                     ROLES    AGE   VERSION\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   14m   v1.17.6-beta.0.19+7148120a96140a\nbootstrap-e2e-minion-group-25ts   Ready                      <none>   14m   v1.16.10-beta.0.16+9b2f377af995d3\nbootstrap-e2e-minion-group-bkx8   Ready                      <none>   14m   v1.16.10-beta.0.16+9b2f377af995d3\nbootstrap-e2e-minion-group-j89v   Ready                      <none>   14m   v1.16.10-beta.0.16+9b2f377af995d3\nValidate output:\nNAME                 STATUS    MESSAGE             ERROR\nscheduler            Healthy   ok                  \ncontroller-manager   Healthy   ok                  \netcd-1               Healthy   {\"health\":\"true\"}   \netcd-0               Healthy   {\"health\":\"true\"}   \n\x1b[0;32mCluster validation succeeded\x1b[0m\n== Post-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.6-beta.0.19+7148120a96140a\"\nname: \"bootstrap-e2e-minion-group-25ts\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.10-beta.0.16+9b2f377af995d3\"\nname: \"bootstrap-e2e-minion-group-bkx8\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.10-beta.0.16+9b2f377af995d3\"\nname: \"bootstrap-e2e-minion-group-j89v\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.10-beta.0.16+9b2f377af995d3\"\n", stderr "Project: e2e-gce-gci-ci-1-5\nNetwork Project: e2e-gce-gci-ci-1-5\nZone: 
us-west1-b\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\nNODE_NAMES=bootstrap-e2e-minion-group-25ts bootstrap-e2e-minion-group-bkx8 bootstrap-e2e-minion-group-j89v\n== Preparing node upgrade (to v1.16.10-beta.0.16+9b2f377af995d3). ==\nAttempt 1 to create bootstrap-e2e-minion-template-v1-16-10-beta-0-16-9b2f377af995d3\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\nCreated [https://www.googleapis.com/compute/v1/projects/e2e-gce-gci-ci-1-5/global/instanceTemplates/bootstrap-e2e-minion-template-v1-16-10-beta-0-16-9b2f377af995d3].\nNAME                                                             MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP\nbootstrap-e2e-minion-template-v1-16-10-beta-0-16-9b2f377af995d3  n1-standard-2               2020-05-05T22:18:34.281-07:00\n== Finished preparing node upgrade (to v1.16.10-beta.0.16+9b2f377af995d3). ==\n== Upgrading nodes to v1.16.10-beta.0.16+9b2f377af995d3 with max parallelism of 1. ==\n== Draining bootstrap-e2e-minion-group-25ts. == \nWARNING: ignoring DaemonSet-managed Pods: kube-system/fluentd-gcp-v3.2.0-jsng7, kube-system/metadata-proxy-v0.1-qzv9d, sig-apps-daemonset-upgrade-2077/ds1-zlrcl\n== Recreating instance bootstrap-e2e-minion-group-25ts. ==\n== Waiting for new node to be added to k8s.  ==\n== Waiting for bootstrap-e2e-minion-group-25ts to become ready. ==\n== Uncordon bootstrap-e2e-minion-group-25ts. == \n== Draining bootstrap-e2e-minion-group-bkx8. == \nWARNING: ignoring DaemonSet-managed Pods: kube-system/fluentd-gcp-v3.2.0-7fbfr, kube-system/metadata-proxy-v0.1-fg5nw, sig-apps-daemonset-upgrade-2077/ds1-69qhr; deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: apparmor-upgrade-5640/test-apparmor-rsnbw\n== Recreating instance bootstrap-e2e-minion-group-bkx8. ==\n== Waiting for new node to be added to k8s.  ==\n== Waiting for bootstrap-e2e-minion-group-bkx8 to become ready. ==\n== Uncordon bootstrap-e2e-minion-group-bkx8. == \n== Draining bootstrap-e2e-minion-group-j89v. == \nWARNING: ignoring DaemonSet-managed Pods: kube-system/fluentd-gcp-v3.2.0-mgn4z, kube-system/metadata-proxy-v0.1-9x8r6, sig-apps-daemonset-upgrade-2077/ds1-57pkl\n== Recreating instance bootstrap-e2e-minion-group-j89v. ==\n== Waiting for new node to be added to k8s.  ==\n== Waiting for bootstrap-e2e-minion-group-j89v to become ready. ==\n== Uncordon bootstrap-e2e-minion-group-j89v. == \n== Deleting old templates in e2e-gce-gci-ci-1-5. ==\nDeleted [https://www.googleapis.com/compute/v1/projects/e2e-gce-gci-ci-1-5/global/instanceTemplates/bootstrap-e2e-minion-template].\n== Finished upgrading nodes to v1.16.10-beta.0.16+9b2f377af995d3. ==\nWarning: Permanently added 'compute.6168682871639767528' (ED25519) to the list of known hosts.\r\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nProject: e2e-gce-gci-ci-1-5\nNetwork Project: e2e-gce-gci-ci-1-5\nZone: us-west1-b\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\n"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:178
				
				from junit_upgrade01.xml



Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart. 2m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\spd\.csi\.storage\.gke\.io\]\[Serial\]\s\[Testpattern\:\sDynamic\sPV\s\(filesystem\svolmode\)\]\sdisruptive\[Disruptive\]\sShould\stest\sthat\spv\swritten\sbefore\skubelet\srestart\sis\sreadable\safter\srestart\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/disruptive.go:144
May  6 15:47:57.201: deploying csi gce-pd driver: create ClusterRoleBinding: clusterrolebindings.rbac.authorization.k8s.io "psp-csi-controller-driver-registrar-role-disruptive-3255" already exists
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/csi.go:443
				
				from junit_skew01.xml
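
This spec failed while deploying the csi gce-pd driver: creating the ClusterRoleBinding psp-csi-controller-driver-registrar-role-disruptive-3255 returned "already exists", i.e. a cluster-scoped object with that name was left behind, most likely by an earlier disruptive run against the same cluster. A hedged triage sketch (manual inspection only, not part of the test harness; the object name is taken verbatim from the error above):

# Manual triage only: list bindings matching the colliding name, inspect the leftover,
# and delete it only after confirming nothing running still references it.
kubectl get clusterrolebindings | grep psp-csi-controller-driver-registrar-role
kubectl get clusterrolebinding psp-csi-controller-driver-registrar-role-disruptive-3255 -o yaml
kubectl delete clusterrolebinding psp-csi-controller-driver-registrar-role-disruptive-3255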



SkewTest 11h59m

error during kubetest --test --test_args=--ginkgo.focus=\[Slow\]|\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\] --kubectl-path=../../../../kubernetes_skew/cluster/kubectl.sh --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=skew --check-version-skew=false: exit status 1
				from junit_runner.xml



UpgradeTest 21m44s

error during kubetest --test --test_args=--ginkgo.focus=\[Feature:ClusterDowngrade\] --upgrade-target=ci/k8s-stable2 --upgrade-image=gci --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=upgrade --check-version-skew=false: exit status 1
				from junit_runner.xml



Passed Tests: 441
Skipped Tests: 9051