Result: FAILURE
Tests: 5 failed / 441 succeeded
Started: 2020-05-17 16:23
Elapsed: 13h7m
Builder: aa5d8825-985a-11ea-aa9b-c69591b579a5
resultstore: https://source.cloud.google.com/results/invocations/c0c7fd93-abd6-480f-b1fb-ca9b49b35b1f/targets/test
infra-commit: 088d21b2a
job-version: v1.17.6-beta.0.35+1596ecd5da9c7b
master_os_image: cos-77-12371-175-0
node_os_image: cos-73-11647-163-0
revision: v1.17.6-beta.0.35+1596ecd5da9c7b

Test Failures


Cluster downgrade [sig-cluster-lifecycle] cluster-downgrade 11m32s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\sdowngrade\s\[sig\-cluster\-lifecycle\]\scluster\-downgrade$'
May 17 16:44:15.570: Unexpected error:
    <*errors.errorString | 0xc001e522f0>
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-N -o v1.16.10-beta.0.29+d4d11e74d02748]; got error exit status 1

    stdout:
    Fetching the previously installed CoreDNS version
    == Pre-Upgrade Node OS and Kubelet Versions ==
    name: "bootstrap-e2e-master", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.17.6-beta.0.35+1596ecd5da9c7b"
    name: "bootstrap-e2e-minion-group-3n62", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.17.6-beta.0.35+1596ecd5da9c7b"
    name: "bootstrap-e2e-minion-group-5rxc", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.17.6-beta.0.35+1596ecd5da9c7b"
    name: "bootstrap-e2e-minion-group-c49r", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.17.6-beta.0.35+1596ecd5da9c7b"
    Found subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e
    == Upgrading node environment variables. ==
    Using subnet bootstrap-e2e
    Instance template name: bootstrap-e2e-minion-template-v1-16-10-beta-0-29-d4d11e74d02748
    node/bootstrap-e2e-minion-group-3n62 cordoned
    evicting pod "ss-0"
    evicting pod "foo-sggzt"
    evicting pod "metrics-server-v0.3.6-5f859c87d6-xc2sp"
    evicting pod "res-cons-upgrade-5tvnz"
    evicting pod "res-cons-upgrade-cm46g"
    evicting pod "l7-default-backend-678889f899-8chnw"
    evicting pod "fluentd-gcp-scaler-76d9c77b4d-rbzs2"
    pod/res-cons-upgrade-5tvnz evicted
    pod/l7-default-backend-678889f899-8chnw evicted
    pod/res-cons-upgrade-cm46g evicted
    pod/metrics-server-v0.3.6-5f859c87d6-xc2sp evicted
    pod/ss-0 evicted
    pod/foo-sggzt evicted
    pod/fluentd-gcp-scaler-76d9c77b4d-rbzs2 evicted
    node/bootstrap-e2e-minion-group-3n62 evicted
    ........................Node bootstrap-e2e-minion-group-3n62 recreated.
    Node bootstrap-e2e-minion-group-3n62 Ready=True
    node/bootstrap-e2e-minion-group-3n62 uncordoned
    node/bootstrap-e2e-minion-group-5rxc cordoned
    evicting pod "ss-1"
    evicting pod "volume-snapshot-controller-0"
    evicting pod "service-test-x7765"
    evicting pod "dp-657fc4b57d-qnmkj"
    evicting pod "res-cons-upgrade-ctrl-r5bxq"
    evicting pod "fluentd-gcp-scaler-76d9c77b4d-4jm9q"
    evicting pod "foo-8k4fb"
    evicting pod "coredns-65567c7b57-fjgkr"
    evicting pod "res-cons-upgrade-8hh82"
    evicting pod "rs-smcrc"
    evicting pod "ss-0"
    evicting pod "kube-dns-autoscaler-65bc6d4889-bzp2b"
    evicting pod "res-cons-upgrade-q5qdk"
    evicting pod "kubernetes-dashboard-7778f8b456-fmd87"
    evicting pod "res-cons-upgrade-7tvsf"
    pod/dp-657fc4b57d-qnmkj evicted
    pod/kube-dns-autoscaler-65bc6d4889-bzp2b evicted
    pod/service-test-x7765 evicted
    pod/rs-smcrc evicted
    pod/res-cons-upgrade-8hh82 evicted
    pod/ss-0 evicted
    pod/res-cons-upgrade-q5qdk evicted
    pod/ss-1 evicted
    pod/kubernetes-dashboard-7778f8b456-fmd87 evicted
    pod/res-cons-upgrade-7tvsf evicted
    pod/coredns-65567c7b57-fjgkr evicted
    pod/res-cons-upgrade-ctrl-r5bxq evicted
    pod/volume-snapshot-controller-0 evicted
    pod/foo-8k4fb evicted
    pod/fluentd-gcp-scaler-76d9c77b4d-4jm9q evicted
    node/bootstrap-e2e-minion-group-5rxc evicted
    ........................Node bootstrap-e2e-minion-group-5rxc recreated.
    Node bootstrap-e2e-minion-group-5rxc Ready=True
    node/bootstrap-e2e-minion-group-5rxc uncordoned
    node/bootstrap-e2e-minion-group-c49r cordoned
    evicting pod "ss-2"
    evicting pod "coredns-65567c7b57-kwlks"
    evicting pod "l7-default-backend-678889f899-8pbm2"
    evicting pod "event-exporter-v0.3.1-747b47fcd-ttqxd"
    evicting pod "foo-6vrq8"
    evicting pod "heapster-v1.6.0-beta.1-6cf46d596d-7k52n"
    evicting pod "apparmor-loader-7k7dd"
    evicting pod "test-apparmor-lhxrv"
    evicting pod "res-cons-upgrade-5tnz7"
    evicting pod "res-cons-upgrade-nrbrt"
    evicting pod "test-apparmor-tlkjf"
    evicting pod "service-test-x2zzp"
    evicting pod "metrics-server-v0.3.6-5f859c87d6-lqhq4"
    pod/test-apparmor-lhxrv evicted
    pod/ss-2 evicted
    pod/l7-default-backend-678889f899-8pbm2 evicted
    pod/apparmor-loader-7k7dd evicted
    pod/res-cons-upgrade-5tnz7 evicted
    pod/metrics-server-v0.3.6-5f859c87d6-lqhq4 evicted
    pod/res-cons-upgrade-nrbrt evicted
    pod/heapster-v1.6.0-beta.1-6cf46d596d-7k52n evicted
    pod/service-test-x2zzp evicted
    pod/coredns-65567c7b57-kwlks evicted
    pod/test-apparmor-tlkjf evicted
    pod/foo-6vrq8 evicted
    pod/event-exporter-v0.3.1-747b47fcd-ttqxd evicted
    node/bootstrap-e2e-minion-group-c49r evicted
    ........................Node bootstrap-e2e-minion-group-c49r recreated.
    Node bootstrap-e2e-minion-group-c49r Ready=True
    node/bootstrap-e2e-minion-group-c49r uncordoned
    Waiting for CoreDNS to update
    Fetching the latest installed CoreDNS version
    == Downloading the CoreDNS migration tool ==
    == Applying the latest default CoreDNS configuration ==
    serviceaccount/coredns unchanged
    clusterrole.rbac.authorization.k8s.io/system:coredns unchanged
    clusterrolebinding.rbac.authorization.k8s.io/system:coredns configured
    configmap/coredns configured
    deployment.apps/coredns unchanged
    service/kube-dns unchanged
    == The CoreDNS Config has been updated ==
    == Validating cluster post-upgrade ==
    Validating gce cluster, MULTIZONE=
    Found 4 node(s).
    NAME                              STATUS                     ROLES    AGE   VERSION
    bootstrap-e2e-master              Ready,SchedulingDisabled   <none>   15m   v1.17.6-beta.0.35+1596ecd5da9c7b
    bootstrap-e2e-minion-group-3n62   Ready                      <none>   15m   v1.16.10-beta.0.29+d4d11e74d02748
    bootstrap-e2e-minion-group-5rxc   Ready                      <none>   15m   v1.16.10-beta.0.29+d4d11e74d02748
    bootstrap-e2e-minion-group-c49r   Ready                      <none>   15m   v1.16.10-beta.0.29+d4d11e74d02748
    Validate output:
    NAME                 STATUS    MESSAGE             ERROR
    etcd-1               Healthy   {"health":"true"}
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-0               Healthy   {"health":"true"}
    Cluster validation succeeded
    == Post-Upgrade Node OS and Kubelet Versions ==
    name: "bootstrap-e2e-master", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.17.6-beta.0.35+1596ecd5da9c7b"
    name: "bootstrap-e2e-minion-group-3n62", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.16.10-beta.0.29+d4d11e74d02748"
    name: "bootstrap-e2e-minion-group-5rxc", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.16.10-beta.0.29+d4d11e74d02748"
    name: "bootstrap-e2e-minion-group-c49r", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.16.10-beta.0.29+d4d11e74d02748"

    stderr:
    Project: k8s-jkns-gce-slow-1-6
    Network Project: k8s-jkns-gce-slow-1-6
    Zone: us-west1-b
    INSTANCE_GROUPS=bootstrap-e2e-minion-group
    NODE_NAMES=bootstrap-e2e-minion-group-3n62 bootstrap-e2e-minion-group-5rxc bootstrap-e2e-minion-group-c49r
    == Preparing node upgrade (to v1.16.10-beta.0.29+d4d11e74d02748). ==
    Attempt 1 to create bootstrap-e2e-minion-template-v1-16-10-beta-0-29-d4d11e74d02748
    WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
    Created [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-slow-1-6/global/instanceTemplates/bootstrap-e2e-minion-template-v1-16-10-beta-0-29-d4d11e74d02748].
    NAME                                                             MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
    bootstrap-e2e-minion-template-v1-16-10-beta-0-29-d4d11e74d02748  n1-standard-2               2020-05-17T09:32:54.152-07:00
    == Finished preparing node upgrade (to v1.16.10-beta.0.29+d4d11e74d02748). ==
    == Upgrading nodes to v1.16.10-beta.0.29+d4d11e74d02748 with max parallelism of 1. ==
    == Draining bootstrap-e2e-minion-group-3n62. ==
    WARNING: ignoring DaemonSet-managed Pods: kube-system/fluentd-gcp-v3.2.0-nr7r7, kube-system/metadata-proxy-v0.1-n7l7m, sig-apps-daemonset-upgrade-8481/ds1-7f7qj
    == Recreating instance bootstrap-e2e-minion-group-3n62. ==
    == Waiting for new node to be added to k8s. ==
    == Waiting for bootstrap-e2e-minion-group-3n62 to become ready. ==
    == Uncordon bootstrap-e2e-minion-group-3n62. ==
    == Draining bootstrap-e2e-minion-group-5rxc. ==
    WARNING: ignoring DaemonSet-managed Pods: kube-system/fluentd-gcp-v3.2.0-9jn6m, kube-system/metadata-proxy-v0.1-wnfdq, sig-apps-daemonset-upgrade-8481/ds1-c62fk
    == Recreating instance bootstrap-e2e-minion-group-5rxc. ==
    == Waiting for new node to be added to k8s. ==
    == Waiting for bootstrap-e2e-minion-group-5rxc to become ready. ==
    == Uncordon bootstrap-e2e-minion-group-5rxc. ==
    == Draining bootstrap-e2e-minion-group-c49r. ==
    WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: apparmor-upgrade-3041/test-apparmor-tlkjf; ignoring DaemonSet-managed Pods: kube-system/fluentd-gcp-v3.2.0-jv7cf, kube-system/metadata-proxy-v0.1-5jjf4, sig-apps-daemonset-upgrade-8481/ds1-d9m6k
    == Recreating instance bootstrap-e2e-minion-group-c49r. ==
    == Waiting for new node to be added to k8s. ==
    == Waiting for bootstrap-e2e-minion-group-c49r to become ready. ==
    == Uncordon bootstrap-e2e-minion-group-c49r. ==
    == Deleting old templates in k8s-jkns-gce-slow-1-6. ==
    Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gce-slow-1-6/global/instanceTemplates/bootstrap-e2e-minion-template].
    == Finished upgrading nodes to v1.16.10-beta.0.29+d4d11e74d02748. ==
    Warning: Permanently added 'compute.1034839008819862361' (ED25519) to the list of known hosts.
    Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
    Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
    Project: k8s-jkns-gce-slow-1-6
    Network Project: k8s-jkns-gce-slow-1-6
    Zone: us-west1-b
    /workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable
occurred

k8s.io/kubernetes/test/e2e/lifecycle.glob..func3.1.1.1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:178 +0x14a
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do(0xc0027c7218)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:111 +0x38a
k8s.io/kubernetes/test/e2e/lifecycle.runUpgradeSuite(0xc000bd83c0, 0x76e0c40, 0xc, 0xc, 0xc00061ccc0, 0xc00355ee10, 0xc003704d40, 0x2, 0xc0036746e0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:483 +0x47a
k8s.io/kubernetes/test/e2e/lifecycle.glob..func3.1.1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:183 +0x227
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002bd6400)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:94 +0x242
k8s.io/kubernetes/test/e2e.TestE2E(0xc002bd6400)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:116 +0x2b
testing.tRunner(0xc002bd6400, 0x49afa50)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
				from junit_upgradeupgrades.xml

Find ss-0, evicting mentions in log files | View test history on testgrid
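
The failure reduces to the final stderr line: /workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable (the same error is reported again below by the e2e-suite variant of this test). That message is what bash emits under "set -u" (nounset) when a script reads a variable that was never assigned. A minimal sketch of the failure mode and the usual guards; only the variable name comes from the log, the surrounding lines are illustrative rather than quoted from upgrade.sh:

    #!/usr/bin/env bash
    set -o nounset   # aka 'set -u': reading an unset variable aborts the script

    # With nounset active, any bare read of an unset variable, e.g.
    #   cd "${download_dir}"
    # dies with "download_dir: unbound variable" -- the message seen above.

    # Guard A: supply a fallback (the default path here is made up):
    download_dir="${download_dir:-/tmp/kubernetes-download}"
    echo "using download dir: ${download_dir}"

    # Guard B (alternative): fail loudly with a diagnostic instead:
    # if [[ -z "${download_dir:-}" ]]; then
    #   echo "download_dir must be set before calling this script" >&2
    #   exit 1
    # fi

Since the job drives the 1.16 copy of upgrade.sh (under kubernetes_skew/) from a 1.17 harness, a variable that one release exports and the other expects is a plausible way for download_dir to end up unset.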


Kubernetes e2e suite [sig-cluster-lifecycle] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade] 22m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sDowngrade\s\[Feature\:Downgrade\]\scluster\sdowngrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:ClusterDowngrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:165
May 17 16:44:15.570: Unexpected error:
    <*errors.errorString | 0xc001e522f0>
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-N -o v1.16.10-beta.0.29+d4d11e74d02748]; got error exit status 1
    (identical stdout/stderr to the "Cluster downgrade" failure above, ending in "upgrade.sh: line 452: download_dir: unbound variable")
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:178
				
				from junit_upgrade01.xml

Find ss-0, evicting mentions in log files | View test history on testgrid


Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow] 1h1m

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sgluster\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\svolumeIO\sshould\swrite\sfiles\sof\svarious\ssizes\,\sverify\ssize\,\svalidate\scontent\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:137
May 18 02:00:45.120: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running ../../../../kubernetes_skew/cluster/kubectl.sh --server=https://104.198.13.149 --kubeconfig=/workspace/.kube/config exec --namespace=volumeio-1056 gluster-io-client -- /bin/sh -c i=0; while [ $i -lt 100 ]; do dd if=/opt/gluster-volumeio-1056-dd_if bs=1048576 >>/opt/gluster_io_test_volumeio-1056-104857600 2>/dev/null; let i+=1; done:\nCommand stdout:\n\nstderr:\ncommand terminated with exit code 137\n\nerror:\nexit status 137",
        },
        Code: 137,
    }
    error running ../../../../kubernetes_skew/cluster/kubectl.sh --server=https://104.198.13.149 --kubeconfig=/workspace/.kube/config exec --namespace=volumeio-1056 gluster-io-client -- /bin/sh -c i=0; while [ $i -lt 100 ]; do dd if=/opt/gluster-volumeio-1056-dd_if bs=1048576 >>/opt/gluster_io_test_volumeio-1056-104857600 2>/dev/null; let i+=1; done:
    Command stdout:
    
    stderr:
    command terminated with exit code 137
    
    error:
    exit status 137
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:153
				
				from junit_skew01.xml

Filter through log files | View test history on testgrid
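
Exit code 137 is 128 + 9: the shell that kubectl exec started inside gluster-io-client was terminated by SIGKILL rather than failing on its own, which usually points at an OOM kill or the pod being torn down mid-write; the log above does not say which. For reference, a standalone sketch of the I/O loop the test runs in the pod (paths are the per-namespace ones from the log; the seed file is presumably 1 MiB, since 100 iterations must produce the 104857600-byte target):

    #!/bin/sh
    # Append the seed file to the target 100 times. The original command used
    # the bash-ism "let i+=1"; POSIX arithmetic is used here instead.
    SRC=/opt/gluster-volumeio-1056-dd_if
    DST=/opt/gluster_io_test_volumeio-1056-104857600

    i=0
    while [ "$i" -lt 100 ]; do
      dd if="$SRC" bs=1048576 >>"$DST" 2>/dev/null
      i=$((i + 1))
    done

    # If this process is SIGKILLed, the shell exits 128 + 9 = 137, which
    # kubectl exec surfaces as "command terminated with exit code 137".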


SkewTest 12h28m

error during kubetest --test --test_args=--ginkgo.focus=\[Slow\]|\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\] --kubectl-path=../../../../kubernetes_skew/cluster/kubectl.sh --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=skew --check-version-skew=false: exit status 1
				from junit_runner.xml

Filter through log files | View test history on testgrid


UpgradeTest 22m28s

error during kubetest --test --test_args=--ginkgo.focus=\[Feature:ClusterDowngrade\] --upgrade-target=ci/k8s-stable2 --upgrade-image=gci --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=upgrade --check-version-skew=false: exit status 1
				from junit_runner.xml

Filter through log files | View test history on testgrid


441 Passed Tests

9051 Skipped Tests