Result: FAILURE
Tests: 10 failed / 897 succeeded
Started: 2020-03-18 20:20
Elapsed: 15h13m
Builder: d39f412e-6955-11ea-a525-bee7ca66797b
resultstore: https://source.cloud.google.com/results/invocations/2a7031f9-cfb4-4967-b80c-d4c785048dbe/targets/test
infra-commit: 50fa6062e
job-version: v1.16.9-beta.0.7+5116ee4b159565
master_os_image: cos-77-12371-175-0
node_os_image: cos-73-11647-163-0
revision: v1.16.9-beta.0.7+5116ee4b159565

Test Failures


Kubernetes e2e suite [k8s.io] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade] (17m29s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-cloud\-provider\-gcp\]\sUpgrade\s\[Feature\:Upgrade\]\smaster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:MasterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:91
Mar 18 20:37:31.989: Unexpected error:
    <*errors.errorString | 0xc00306d700>:
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.17.5-beta.0.11+4dfcd1cc87879f]; got error exit status 1

    stdout:
    Fetching the previously installed CoreDNS version

    ***WARNING***
    Upgrading Kubernetes with this script might result in an upgrade to a new etcd version.
    Some etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.
    To pin the etcd version to your current one (e.g. v3.0.17), set the following variables
    before running this script:

    # example: pin to etcd v3.0.17
    export ETCD_IMAGE=3.0.17
    export ETCD_VERSION=3.0.17

    Alternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,
    you might still be able to downgrade Kubernetes by pinning to the newer etcd version.
    In all cases, it is strongly recommended to have an etcd backup before upgrading.

    == Pre-Upgrade Node OS and Kubelet Versions ==
    name: "bootstrap-e2e-master", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.16.9-beta.0.7+5116ee4b159565"
    name: "bootstrap-e2e-minion-group-010n", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.16.9-beta.0.7+5116ee4b159565"
    name: "bootstrap-e2e-minion-group-qg1r", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.16.9-beta.0.7+5116ee4b159565"
    name: "bootstrap-e2e-minion-group-z5t2", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.16.9-beta.0.7+5116ee4b159565"
    Found subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e
    == Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.17.5-beta.0.11+4dfcd1cc87879f/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==
    == Upgrading master environment variables. ==
    == Waiting for new master to respond to API requests ==
    ........................== Done ==
    Waiting for CoreDNS to update
    Fetching the latest installed CoreDNS version
    == Downloading the CoreDNS migration tool ==
    == Upgrading the CoreDNS ConfigMap ==
    configmap/coredns configured
    == The CoreDNS Config has been updated ==
    == Validating cluster post-upgrade ==
    Validating gce cluster, MULTIZONE=
    Found 4 node(s).
    NAME                              STATUS                     ROLES    AGE     VERSION
    bootstrap-e2e-master              Ready,SchedulingDisabled   <none>   9m47s   v1.17.5-beta.0.11+4dfcd1cc87879f
    bootstrap-e2e-minion-group-010n   Ready                      <none>   9m48s   v1.16.9-beta.0.7+5116ee4b159565
    bootstrap-e2e-minion-group-qg1r   Ready                      <none>   9m40s   v1.16.9-beta.0.7+5116ee4b159565
    bootstrap-e2e-minion-group-z5t2   Ready                      <none>   9m40s   v1.16.9-beta.0.7+5116ee4b159565
    Validate output:
    NAME                 STATUS    MESSAGE             ERROR
    etcd-1               Healthy   {"health":"true"}
    controller-manager   Healthy   ok
    scheduler            Healthy   ok
    etcd-0               Healthy   {"health":"true"}
    Cluster validation succeeded
    == Post-Upgrade Node OS and Kubelet Versions ==
    name: "bootstrap-e2e-master", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.17.5-beta.0.11+4dfcd1cc87879f"
    name: "bootstrap-e2e-minion-group-010n", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.16.9-beta.0.7+5116ee4b159565"
    name: "bootstrap-e2e-minion-group-qg1r", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.16.9-beta.0.7+5116ee4b159565"
    name: "bootstrap-e2e-minion-group-z5t2", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.16.9-beta.0.7+5116ee4b159565"

    stderr:
    Project: k8s-jkns-gci-gce-kubenet
    Network Project: k8s-jkns-gci-gce-kubenet
    Zone: us-west1-b
    INSTANCE_GROUPS=bootstrap-e2e-minion-group
    NODE_NAMES=bootstrap-e2e-minion-group-010n bootstrap-e2e-minion-group-qg1r bootstrap-e2e-minion-group-z5t2
    Trying to find master named 'bootstrap-e2e-master'
    Looking for address 'bootstrap-e2e-master-ip'
    Using master: bootstrap-e2e-master (external IP: 35.233.238.215; internal IP: (not set))
    Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-gce-kubenet/zones/us-west1-b/instances/bootstrap-e2e-master].
    WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
    Created [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-gce-kubenet/zones/us-west1-b/instances/bootstrap-e2e-master].
    WARNING: Some requests generated warnings:
     - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.
     - The resource 'projects/cos-cloud/global/images/cos-77-12371-175-0' is deprecated. A suggested replacement is 'projects/cos-cloud/global/images/cos-77-12371-183-0'.

    NAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
    bootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   35.233.238.215  RUNNING
    Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
    Project: k8s-jkns-gci-gce-kubenet
    Network Project: k8s-jkns-gci-gce-kubenet
    Zone: us-west1-b
    /workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:106
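
The actual failure is the last line of stderr: upgrade.sh aborted at line 452 on "download_dir: unbound variable", even though cluster validation itself succeeded. The script runs under bash's nounset option, so expanding a variable that was never assigned kills it. A minimal sketch of that failure mode and the usual guards (only the variable name is taken from the log; the default path is hypothetical):

set -o nounset

# echo "${download_dir}"    # would abort here: "download_dir: unbound variable"

echo "download dir: ${download_dir:-}"     # expands to empty when unset
: "${download_dir:=/tmp/kube-download}"    # or assign a default when unset (path is hypothetical)
echo "download dir: ${download_dir}"       # safe once the default is set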



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart. (11m54s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sgcepd\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\sdisruptive\[Disruptive\]\sShould\stest\sthat\spv\swritten\sbefore\skubelet\srestart\sis\sreadable\safter\srestart\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/disruptive.go:144
Mar 19 08:40:14.850: while deleting pod
Unexpected error:
    <*errors.errorString | 0xc0048b7d60>: {
        s: "pod \"security-context-271ab723-d885-45e6-a608-92b5d85a5c3f\" was not deleted: timed out waiting for the condition",
    }
    pod "security-context-271ab723-d885-45e6-a608-92b5d85a5c3f" was not deleted: timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/disruptive.go:102
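
The pod was still present when the test's deletion wait timed out. A sketch of re-running the same check by hand, assuming kubectl access to the cluster; the namespace placeholder is hypothetical (the e2e framework creates a fresh namespace per test):

# Wait for the deleted pod to actually disappear (pod name from the failure above):
kubectl wait --for=delete \
  pod/security-context-271ab723-d885-45e6-a608-92b5d85a5c3f \
  -n <test-namespace> --timeout=5m

# If it stays stuck terminating, inspect the deletion timestamp and finalizers:
kubectl get pod security-context-271ab723-d885-45e6-a608-92b5d85a5c3f \
  -n <test-namespace> -o jsonpath='{.metadata.deletionTimestamp} {.metadata.finalizers}'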



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly] (6m49s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sgluster\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\sunmount\sif\spod\sis\sgracefully\sdeleted\swhile\skubelet\sis\sdown\s\[Disruptive\]\[Slow\]\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:328
Mar 19 10:09:37.337: Expected pod to be not found.
Unexpected error:
    <*errors.errorString | 0xc0000d9000>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:278
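
This test stops the kubelet on the pod's node, deletes the pod gracefully while the kubelet is down, restarts the kubelet, and then expects the pod object to disappear; here it never did. The sequence can be replayed roughly as follows (zone and node name come from this job's logs; the pod name and namespace are placeholders, and a systemd-managed kubelet, as on COS, is assumed):

gcloud compute ssh bootstrap-e2e-minion-group-qg1r --zone us-west1-b \
  --command 'sudo systemctl stop kubelet'
kubectl delete pod <test-pod> -n <test-namespace> --wait=false    # graceful delete while kubelet is down
gcloud compute ssh bootstrap-e2e-minion-group-qg1r --zone us-west1-b \
  --command 'sudo systemctl start kubelet'
kubectl wait --for=delete pod/<test-pod> -n <test-namespace> --timeout=5m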



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly] (1m6s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\srestarting\scontainers\susing\sfile\sas\ssubpath\s\[Slow\]\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:318
Mar 19 11:21:01.775: while waiting for container to stabilize
Unexpected error:
    <*errors.StatusError | 0xc0013e2320>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "pods \"pod-subpath-test-local-preprovisionedpv-jz5r\" not found",
            Reason: "NotFound",
            Details: {
                Name: "pod-subpath-test-local-preprovisionedpv-jz5r",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "pod-subpath-test-local-preprovisionedpv-jz5r" not found
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:874
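
Here the pod object itself vanished while the test was polling for its container to stabilize: the API returned Reason: NotFound with Code: 404. That status is exactly what any client sees when fetching a pod that no longer exists, e.g. (pod name from the failure above; namespace is a placeholder):

kubectl get pod pod-subpath-test-local-preprovisionedpv-jz5r -n <test-namespace>
# Error from server (NotFound): pods "pod-subpath-test-local-preprovisionedpv-jz5r" not found
echo $?    # 1 -- the NotFound surfaces as a non-zero exit status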



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly] (6m46s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\snfs\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\sunmount\sif\spod\sis\sgracefully\sdeleted\swhile\skubelet\sis\sdown\s\[Disruptive\]\[Slow\]\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:328
Mar 19 09:44:23.827: Expected pod to be not found.
Unexpected error:
    <*errors.errorString | 0xc0000d9000>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:278