Result: FAILURE
Tests: 8 failed / 925 succeeded
Started: 2020-03-15 16:10
Elapsed: 15h14m
Revision:
Builder: 7303299b-66d7-11ea-8023-9a87cb763276
resultstore: https://source.cloud.google.com/results/invocations/e034e8eb-dc21-49bd-a87d-c3e763fe9f6e/targets/test
infra-commit: a0cadb92a
job-version: v1.16.9-beta.0.1+92e71139aa1639
master_os_image: cos-77-12371-175-0
node_os_image: cos-73-11647-163-0
revision: v1.16.9-beta.0.1+92e71139aa1639

Test Failures


Kubernetes e2e suite [k8s.io] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade] 17m20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-cloud\-provider\-gcp\]\sUpgrade\s\[Feature\:Upgrade\]\smaster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:MasterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:91
Mar 15 16:25:34.640: Unexpected error:
    <*errors.errorString | 0xc002e59bc0>: {
        s: "error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.17.5-beta.0.1+106c255ad7ab80]; got error exit status 1, stdout \"Fetching the previously installed CoreDNS version\\n\\n***WARNING***\\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\\nbefore running this script:\\n\\n# example: pin to etcd v3.0.17\\nexport ETCD_IMAGE=3.0.17\\nexport ETCD_VERSION=3.0.17\\n\\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\\n\\n== Pre-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.9-beta.0.1+92e71139aa1639\\\"\\nname: \\\"bootstrap-e2e-minion-group-5fc7\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.9-beta.0.1+92e71139aa1639\\\"\\nname: \\\"bootstrap-e2e-minion-group-cfvq\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.9-beta.0.1+92e71139aa1639\\\"\\nname: \\\"bootstrap-e2e-minion-group-rgqq\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.9-beta.0.1+92e71139aa1639\\\"\\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.17.5-beta.0.1+106c255ad7ab80/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\\n== Upgrading master environment variables. 
==\\n== Waiting for new master to respond to API requests ==\\n.......................== Done ==\\nWaiting for CoreDNS to update\\nFetching the latest installed CoreDNS version\\n== Downloading the CoreDNS migration tool ==\\n== Upgrading the CoreDNS ConfigMap ==\\nconfigmap/coredns configured\\n== The CoreDNS Config has been updated ==\\n== Validating cluster post-upgrade ==\\nValidating gce cluster, MULTIZONE=\\nFound 4 node(s).\\nNAME                              STATUS                     ROLES    AGE     VERSION\\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   9m8s    v1.17.5-beta.0.1+106c255ad7ab80\\nbootstrap-e2e-minion-group-5fc7   Ready                      <none>   9m11s   v1.16.9-beta.0.1+92e71139aa1639\\nbootstrap-e2e-minion-group-cfvq   Ready                      <none>   9m12s   v1.16.9-beta.0.1+92e71139aa1639\\nbootstrap-e2e-minion-group-rgqq   Ready                      <none>   9m11s   v1.16.9-beta.0.1+92e71139aa1639\\nValidate output:\\nNAME                 STATUS    MESSAGE             ERROR\\ncontroller-manager   Healthy   ok                  \\netcd-1               Healthy   {\\\"health\\\":\\\"true\\\"}   \\nscheduler            Healthy   ok                  \\netcd-0               Healthy   {\\\"health\\\":\\\"true\\\"}   \\n\\x1b[0;32mCluster validation succeeded\\x1b[0m\\n== Post-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.5-beta.0.1+106c255ad7ab80\\\"\\nname: \\\"bootstrap-e2e-minion-group-5fc7\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.9-beta.0.1+92e71139aa1639\\\"\\nname: \\\"bootstrap-e2e-minion-group-cfvq\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.9-beta.0.1+92e71139aa1639\\\"\\nname: \\\"bootstrap-e2e-minion-group-rgqq\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.9-beta.0.1+92e71139aa1639\\\"\\n\", stderr \"Project: k8s-jkns-gci-gce-protobuf\\nNetwork Project: k8s-jkns-gci-gce-protobuf\\nZone: us-west1-b\\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\\nNODE_NAMES=bootstrap-e2e-minion-group-5fc7 bootstrap-e2e-minion-group-cfvq bootstrap-e2e-minion-group-rgqq\\nTrying to find master named 'bootstrap-e2e-master'\\nLooking for address 'bootstrap-e2e-master-ip'\\nUsing master: bootstrap-e2e-master (external IP: 35.247.48.77; internal IP: (not set))\\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-gce-protobuf/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\\nCreated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-gce-protobuf/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: Some requests generated warnings:\\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\\n - The resource 'projects/cos-cloud/global/images/cos-77-12371-175-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-77-12371-183-0'.\\n\\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS\\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   35.247.48.77  RUNNING\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nProject: k8s-jkns-gci-gce-protobuf\\nNetwork Project: k8s-jkns-gci-gce-protobuf\\nZone: us-west1-b\\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\\n\"",
    }
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.17.5-beta.0.1+106c255ad7ab80]; got error exit status 1, stdout "Fetching the previously installed CoreDNS version\n\n***WARNING***\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\nbefore running this script:\n\n# example: pin to etcd v3.0.17\nexport ETCD_IMAGE=3.0.17\nexport ETCD_VERSION=3.0.17\n\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\n\n== Pre-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.9-beta.0.1+92e71139aa1639\"\nname: \"bootstrap-e2e-minion-group-5fc7\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.9-beta.0.1+92e71139aa1639\"\nname: \"bootstrap-e2e-minion-group-cfvq\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.9-beta.0.1+92e71139aa1639\"\nname: \"bootstrap-e2e-minion-group-rgqq\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.9-beta.0.1+92e71139aa1639\"\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.17.5-beta.0.1+106c255ad7ab80/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\n== Upgrading master environment variables. 
==\n== Waiting for new master to respond to API requests ==\n.......................== Done ==\nWaiting for CoreDNS to update\nFetching the latest installed CoreDNS version\n== Downloading the CoreDNS migration tool ==\n== Upgrading the CoreDNS ConfigMap ==\nconfigmap/coredns configured\n== The CoreDNS Config has been updated ==\n== Validating cluster post-upgrade ==\nValidating gce cluster, MULTIZONE=\nFound 4 node(s).\nNAME                              STATUS                     ROLES    AGE     VERSION\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   9m8s    v1.17.5-beta.0.1+106c255ad7ab80\nbootstrap-e2e-minion-group-5fc7   Ready                      <none>   9m11s   v1.16.9-beta.0.1+92e71139aa1639\nbootstrap-e2e-minion-group-cfvq   Ready                      <none>   9m12s   v1.16.9-beta.0.1+92e71139aa1639\nbootstrap-e2e-minion-group-rgqq   Ready                      <none>   9m11s   v1.16.9-beta.0.1+92e71139aa1639\nValidate output:\nNAME                 STATUS    MESSAGE             ERROR\ncontroller-manager   Healthy   ok                  \netcd-1               Healthy   {\"health\":\"true\"}   \nscheduler            Healthy   ok                  \netcd-0               Healthy   {\"health\":\"true\"}   \n\x1b[0;32mCluster validation succeeded\x1b[0m\n== Post-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.5-beta.0.1+106c255ad7ab80\"\nname: \"bootstrap-e2e-minion-group-5fc7\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.9-beta.0.1+92e71139aa1639\"\nname: \"bootstrap-e2e-minion-group-cfvq\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.9-beta.0.1+92e71139aa1639\"\nname: \"bootstrap-e2e-minion-group-rgqq\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.9-beta.0.1+92e71139aa1639\"\n", stderr "Project: k8s-jkns-gci-gce-protobuf\nNetwork Project: k8s-jkns-gci-gce-protobuf\nZone: us-west1-b\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\nNODE_NAMES=bootstrap-e2e-minion-group-5fc7 bootstrap-e2e-minion-group-cfvq bootstrap-e2e-minion-group-rgqq\nTrying to find master named 'bootstrap-e2e-master'\nLooking for address 'bootstrap-e2e-master-ip'\nUsing master: bootstrap-e2e-master (external IP: 35.247.48.77; internal IP: (not set))\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-gce-protobuf/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\nCreated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-gce-protobuf/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: Some requests generated warnings:\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\n - The resource 'projects/cos-cloud/global/images/cos-77-12371-175-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-77-12371-183-0'.\n\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   35.247.48.77  RUNNING\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nProject: k8s-jkns-gci-gce-protobuf\nNetwork Project: k8s-jkns-gci-gce-protobuf\nZone: us-west1-b\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\n"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:106
				
Click to see stdout/stderr from junit_upgrade01.xml

Filter through log files | View test history on testgrid
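
The upgrade itself completed and post-upgrade validation passed; the run only fails at the very end of stderr, where upgrade.sh aborts with "line 452: download_dir: unbound variable". A minimal bash sketch of that failure mode under "set -o nounset" (the default value below is illustrative, not the actual fix in cluster/gce/upgrade.sh):

    #!/usr/bin/env bash
    set -o errexit -o nounset -o pipefail   # nounset makes any unset variable fatal

    # Under nounset, expanding an unset variable aborts the script with
    # "download_dir: unbound variable", which is what stderr shows above.
    # echo "${download_dir}"

    # A defensive default keeps the expansion safe if nothing set the variable:
    download_dir="${download_dir:-/tmp/kube-upgrade-download}"   # illustrative path
    echo "Using download dir: ${download_dir}"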


Kubernetes e2e suite [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] should run without error 2m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sNodeProblemDetector\s\[DisabledForLargeClusters\]\sshould\srun\swithout\serror$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:58
Mar 16 06:15:48.713: Timed out after 60.001s.
Expected success, but got an error:
    <*errors.errorString | 0xc001fddef0>: {
        s: "Event KubeletStart does not exist: [{{ } {bootstrap-e2e-minion-group-s1pr.15fcb2e6f962b126  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-s1pr.15fcb2e6f962b126 d9c5991c-62ee-4652-a6d7-66c34df3a442 30422 0 2020-03-16 05:51:14 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-s1pr bootstrap-e2e-minion-group-s1pr   } Starting Starting kube-proxy. {kube-proxy bootstrap-e2e-minion-group-s1pr} 2020-03-16 05:51:14 +0000 UTC 2020-03-16 05:51:14 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-s1pr.15fcb42f31184934  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-s1pr.15fcb42f31184934 00eb36f3-17f5-49bc-a77f-ec3c50165d00 31375 0 2020-03-16 06:14:43 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-s1pr bootstrap-e2e-minion-group-s1pr   } TaskHung kernel: INFO: task umount.aufs:21568 blocked for more than 120 seconds. {kernel-monitor bootstrap-e2e-minion-group-s1pr} 2020-03-16 06:14:43 +0000 UTC 2020-03-16 06:14:43 +0000 UTC 1 Warning 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-s1pr.15fcb42f31187772  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-s1pr.15fcb42f31187772 d516a330-d8b8-4487-9268-00d2a1751bb1 31374 0 2020-03-16 06:14:43 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-s1pr bootstrap-e2e-minion-group-s1pr   } AUFSUmountHung Node condition KernelDeadlock is now: True, reason: AUFSUmountHung {kernel-monitor bootstrap-e2e-minion-group-s1pr} 2020-03-16 06:14:43 +0000 UTC 2020-03-16 06:14:43 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-ss08.15fcb42fc86e28db  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-ss08.15fcb42fc86e28db c209a498-b950-4a26-a7a5-4c34ad399baa 31377 0 2020-03-16 06:14:46 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-ss08 bootstrap-e2e-minion-group-ss08   } TaskHung kernel: INFO: task umount.aufs:21568 blocked for more than 120 seconds. {kernel-monitor bootstrap-e2e-minion-group-ss08} 2020-03-16 06:14:46 +0000 UTC 2020-03-16 06:14:46 +0000 UTC 1 Warning 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-ss08.15fcb42fc86e61bb  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-ss08.15fcb42fc86e61bb 15722687-7498-4cf3-ac57-c69b0c703420 31376 0 2020-03-16 06:14:46 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-ss08 bootstrap-e2e-minion-group-ss08   } AUFSUmountHung Node condition KernelDeadlock is now: True, reason: AUFSUmountHung {kernel-monitor bootstrap-e2e-minion-group-ss08} 2020-03-16 06:14:46 +0000 UTC 2020-03-16 06:14:46 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcac309e5801a0  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcac309e5801a0 83bfdcab-65c3-4938-8970-e8a52eb7bf80 30672 0 2020-03-16 05:14:20 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } KubeletStart Started Kubernetes kubelet. 
{systemd-monitor bootstrap-e2e-minion-group-x4dz} 2020-03-16 03:48:13 +0000 UTC 2020-03-16 06:01:19 +0000 UTC 4 Warning 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcac6a18ecc9cf  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcac6a18ecc9cf 0bd7aab9-4f5f-4634-8552-0c2b53d03989 30671 0 2020-03-16 06:01:17 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz cc9574d4-9e08-4d0f-8313-11269f5b1899   } NodeNotReady Node bootstrap-e2e-minion-group-x4dz status is now: NodeNotReady {node-controller } 2020-03-16 03:52:20 +0000 UTC 2020-03-16 06:01:17 +0000 UTC 2 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcb374008a197e  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb374008a197e c4c4b3fb-40a6-46b6-b5c7-f4be0ceb2fcd 30673 0 2020-03-16 06:01:19 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } Starting Starting kubelet. {kubelet bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:01:19 +0000 UTC 2020-03-16 06:01:19 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcb3740992c397  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb3740992c397 891c571b-96aa-4bc8-b139-879856b0cb8e 30677 0 2020-03-16 06:01:19 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } NodeHasSufficientMemory Node bootstrap-e2e-minion-group-x4dz status is now: NodeHasSufficientMemory {kubelet bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:01:19 +0000 UTC 2020-03-16 06:01:19 +0000 UTC 2 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcb37409930bb7  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb37409930bb7 37b5d3d1-4d27-417b-ae50-98e4bf6d3778 30678 0 2020-03-16 06:01:19 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } NodeHasNoDiskPressure Node bootstrap-e2e-minion-group-x4dz status is now: NodeHasNoDiskPressure {kubelet bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:01:19 +0000 UTC 2020-03-16 06:01:19 +0000 UTC 2 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcb37409934cd4  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb37409934cd4 4f408fc4-ae78-4c9f-a183-5599ec6497b3 30679 0 2020-03-16 06:01:19 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } NodeHasSufficientPID Node bootstrap-e2e-minion-group-x4dz status is now: NodeHasSufficientPID {kubelet bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:01:19 +0000 UTC 2020-03-16 06:01:19 +0000 UTC 2 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcb3740cfdf367  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb3740cfdf367 cb039065-e8cd-438d-8368-45a79786ce6f 30680 0 2020-03-16 06:01:20 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } NodeNotReady Node bootstrap-e2e-minion-group-x4dz status is now: NodeNotReady {kubelet bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:01:19 +0000 UTC 2020-03-16 06:01:19 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } 
{bootstrap-e2e-minion-group-x4dz.15fcb37427f9514c  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb37427f9514c e5e2fa76-1383-4d73-8730-7eb29a58e50b 30681 0 2020-03-16 06:01:20 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } NodeAllocatableEnforced Updated Node Allocatable limit across pods {kubelet bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:01:20 +0000 UTC 2020-03-16 06:01:20 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcb3742aede75f  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb3742aede75f 1e746790-b47b-4738-b519-1aefa77e2a28 30682 0 2020-03-16 06:01:20 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } NodeReady Node bootstrap-e2e-minion-group-x4dz status is now: NodeReady {kubelet bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:01:20 +0000 UTC 2020-03-16 06:01:20 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcb4305559e725  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb4305559e725 4a4c8139-a969-409c-8beb-a4dea82d5bae 31379 0 2020-03-16 06:14:48 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } TaskHung kernel: INFO: task umount.aufs:21568 blocked for more than 120 seconds. {kernel-monitor bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:14:48 +0000 UTC 2020-03-16 06:14:48 +0000 UTC 1 Warning 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcb430555a2e4a  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb430555a2e4a 70e27f75-4420-4d2a-9051-315230ef234a 31378 0 2020-03-16 06:14:48 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } AUFSUmountHung Node condition KernelDeadlock is now: True, reason: AUFSUmountHung {kernel-monitor bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:14:48 +0000 UTC 2020-03-16 06:14:48 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  }]",
    }
    Event KubeletStart does not exist: [{{ } {bootstrap-e2e-minion-group-s1pr.15fcb2e6f962b126  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-s1pr.15fcb2e6f962b126 d9c5991c-62ee-4652-a6d7-66c34df3a442 30422 0 2020-03-16 05:51:14 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-s1pr bootstrap-e2e-minion-group-s1pr   } Starting Starting kube-proxy. {kube-proxy bootstrap-e2e-minion-group-s1pr} 2020-03-16 05:51:14 +0000 UTC 2020-03-16 05:51:14 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-s1pr.15fcb42f31184934  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-s1pr.15fcb42f31184934 00eb36f3-17f5-49bc-a77f-ec3c50165d00 31375 0 2020-03-16 06:14:43 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-s1pr bootstrap-e2e-minion-group-s1pr   } TaskHung kernel: INFO: task umount.aufs:21568 blocked for more than 120 seconds. {kernel-monitor bootstrap-e2e-minion-group-s1pr} 2020-03-16 06:14:43 +0000 UTC 2020-03-16 06:14:43 +0000 UTC 1 Warning 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-s1pr.15fcb42f31187772  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-s1pr.15fcb42f31187772 d516a330-d8b8-4487-9268-00d2a1751bb1 31374 0 2020-03-16 06:14:43 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-s1pr bootstrap-e2e-minion-group-s1pr   } AUFSUmountHung Node condition KernelDeadlock is now: True, reason: AUFSUmountHung {kernel-monitor bootstrap-e2e-minion-group-s1pr} 2020-03-16 06:14:43 +0000 UTC 2020-03-16 06:14:43 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-ss08.15fcb42fc86e28db  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-ss08.15fcb42fc86e28db c209a498-b950-4a26-a7a5-4c34ad399baa 31377 0 2020-03-16 06:14:46 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-ss08 bootstrap-e2e-minion-group-ss08   } TaskHung kernel: INFO: task umount.aufs:21568 blocked for more than 120 seconds. {kernel-monitor bootstrap-e2e-minion-group-ss08} 2020-03-16 06:14:46 +0000 UTC 2020-03-16 06:14:46 +0000 UTC 1 Warning 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-ss08.15fcb42fc86e61bb  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-ss08.15fcb42fc86e61bb 15722687-7498-4cf3-ac57-c69b0c703420 31376 0 2020-03-16 06:14:46 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-ss08 bootstrap-e2e-minion-group-ss08   } AUFSUmountHung Node condition KernelDeadlock is now: True, reason: AUFSUmountHung {kernel-monitor bootstrap-e2e-minion-group-ss08} 2020-03-16 06:14:46 +0000 UTC 2020-03-16 06:14:46 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcac309e5801a0  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcac309e5801a0 83bfdcab-65c3-4938-8970-e8a52eb7bf80 30672 0 2020-03-16 05:14:20 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } KubeletStart Started Kubernetes kubelet. 
{systemd-monitor bootstrap-e2e-minion-group-x4dz} 2020-03-16 03:48:13 +0000 UTC 2020-03-16 06:01:19 +0000 UTC 4 Warning 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcac6a18ecc9cf  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcac6a18ecc9cf 0bd7aab9-4f5f-4634-8552-0c2b53d03989 30671 0 2020-03-16 06:01:17 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz cc9574d4-9e08-4d0f-8313-11269f5b1899   } NodeNotReady Node bootstrap-e2e-minion-group-x4dz status is now: NodeNotReady {node-controller } 2020-03-16 03:52:20 +0000 UTC 2020-03-16 06:01:17 +0000 UTC 2 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcb374008a197e  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb374008a197e c4c4b3fb-40a6-46b6-b5c7-f4be0ceb2fcd 30673 0 2020-03-16 06:01:19 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } Starting Starting kubelet. {kubelet bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:01:19 +0000 UTC 2020-03-16 06:01:19 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcb3740992c397  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb3740992c397 891c571b-96aa-4bc8-b139-879856b0cb8e 30677 0 2020-03-16 06:01:19 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } NodeHasSufficientMemory Node bootstrap-e2e-minion-group-x4dz status is now: NodeHasSufficientMemory {kubelet bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:01:19 +0000 UTC 2020-03-16 06:01:19 +0000 UTC 2 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcb37409930bb7  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb37409930bb7 37b5d3d1-4d27-417b-ae50-98e4bf6d3778 30678 0 2020-03-16 06:01:19 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } NodeHasNoDiskPressure Node bootstrap-e2e-minion-group-x4dz status is now: NodeHasNoDiskPressure {kubelet bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:01:19 +0000 UTC 2020-03-16 06:01:19 +0000 UTC 2 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcb37409934cd4  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb37409934cd4 4f408fc4-ae78-4c9f-a183-5599ec6497b3 30679 0 2020-03-16 06:01:19 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } NodeHasSufficientPID Node bootstrap-e2e-minion-group-x4dz status is now: NodeHasSufficientPID {kubelet bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:01:19 +0000 UTC 2020-03-16 06:01:19 +0000 UTC 2 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcb3740cfdf367  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb3740cfdf367 cb039065-e8cd-438d-8368-45a79786ce6f 30680 0 2020-03-16 06:01:20 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } NodeNotReady Node bootstrap-e2e-minion-group-x4dz status is now: NodeNotReady {kubelet bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:01:19 +0000 UTC 2020-03-16 06:01:19 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } 
{bootstrap-e2e-minion-group-x4dz.15fcb37427f9514c  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb37427f9514c e5e2fa76-1383-4d73-8730-7eb29a58e50b 30681 0 2020-03-16 06:01:20 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } NodeAllocatableEnforced Updated Node Allocatable limit across pods {kubelet bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:01:20 +0000 UTC 2020-03-16 06:01:20 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcb3742aede75f  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb3742aede75f 1e746790-b47b-4738-b519-1aefa77e2a28 30682 0 2020-03-16 06:01:20 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } NodeReady Node bootstrap-e2e-minion-group-x4dz status is now: NodeReady {kubelet bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:01:20 +0000 UTC 2020-03-16 06:01:20 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcb4305559e725  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb4305559e725 4a4c8139-a969-409c-8beb-a4dea82d5bae 31379 0 2020-03-16 06:14:48 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } TaskHung kernel: INFO: task umount.aufs:21568 blocked for more than 120 seconds. {kernel-monitor bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:14:48 +0000 UTC 2020-03-16 06:14:48 +0000 UTC 1 Warning 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {bootstrap-e2e-minion-group-x4dz.15fcb430555a2e4a  default /api/v1/namespaces/default/events/bootstrap-e2e-minion-group-x4dz.15fcb430555a2e4a 70e27f75-4420-4d2a-9051-315230ef234a 31378 0 2020-03-16 06:14:48 +0000 UTC <nil> <nil> map[] map[] [] []  []} {Node  bootstrap-e2e-minion-group-x4dz bootstrap-e2e-minion-group-x4dz   } AUFSUmountHung Node condition KernelDeadlock is now: True, reason: AUFSUmountHung {kernel-monitor bootstrap-e2e-minion-group-x4dz} 2020-03-16 06:14:48 +0000 UTC 2020-03-16 06:14:48 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  }]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:141
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid
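
The test times out waiting for a KubeletStart event that never shows up in the node events it dumps above. A hand check against a live cluster (assuming kubectl access to the test cluster; the node name is taken from this run):

    # Look for the KubeletStart event the test expects, recorded against Node objects
    kubectl get events -n default \
      --field-selector involvedObject.kind=Node,reason=KubeletStart

    # Dump everything recorded against one of the affected nodes
    kubectl get events -n default \
      --field-selector involvedObject.name=bootstrap-e2e-minion-group-s1pr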


Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly] 7m11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\spd\.csi\.storage\.gke\.io\]\[Serial\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\sunmount\sif\spod\sis\sgracefully\sdeleted\swhile\skubelet\sis\sdown\s\[Disruptive\]\[Slow\]\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:328
Mar 16 06:54:46.681: Expected pod to be not found.
Unexpected error:
    <*errors.errorString | 0xc00009d010>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:278
				
Click to see stdout/stderr from junit_01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a file written to the mount before kubelet restart is readable after restart. 11m19s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sGenericPersistentVolume\[Disruptive\]\sWhen\skubelet\srestarts\sShould\stest\sthat\sa\sfile\swritten\sto\sthe\smount\sbefore\skubelet\srestart\sis\sreadable\safter\srestart\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/generic_persistent_volume-disruptive.go:80
Mar 16 07:13:32.896: Persistent Volume pvc-d9355f28-5033-4d09-a774-94b66d4f7386 not deleted by dynamic provisioner
Unexpected error:
    <*errors.errorString | 0xc008599180>: {
        s: "PersistentVolume pvc-d9355f28-5033-4d09-a774-94b66d4f7386 still exists within 5m0s",
    }
    PersistentVolume pvc-d9355f28-5033-4d09-a774-94b66d4f7386 still exists within 5m0s
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/nfs_persistent_volume-disruptive.go:292
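
The PV named in the error survived past the 5m wait, so the dynamic provisioner never reclaimed it after the claim was removed. A quick status check against the cluster (assuming kubectl access; the PV name is taken from the failure above):

    # Phase and reclaim policy of the leftover volume
    kubectl get pv pvc-d9355f28-5033-4d09-a774-94b66d4f7386 \
      -o jsonpath='{.status.phase} {.spec.persistentVolumeReclaimPolicy}{"\n"}'

    # Events and the claim reference usually show why deletion stalled
    kubectl describe pv pvc-d9355f28-5033-4d09-a774-94b66d4f7386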