Result: FAILURE
Tests: 18 failed / 192 succeeded
Started: 2020-01-16 11:19
Elapsed: 4h53m
Builder: gke-prow-default-pool-cf4891d4-sr6z
pod: 036318b2-3852-11ea-8f3e-66e48c863062
resultstore: https://source.cloud.google.com/results/invocations/c15421c3-5d63-4681-beb2-96143ec3e9b4/targets/test
infra-commit: a9b921ef5
job-version: v1.16.5-beta.1.51+e7f962ba86f4ce
master_os_image: cos-77-12371-89-0
node_os_image: cos-73-11647-163-0
revision: v1.16.5-beta.1.51+e7f962ba86f4ce

Test Failures


Cluster upgrade [sig-cloud-provider-gcp] cluster-upgrade 5m15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-cloud\-provider\-gcp\]\scluster\-upgrade$'
Jan 16 11:34:24.995: Unexpected error:
    <*errors.errorString | 0xc00071ad70>: {
        s: "error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.17.2-beta.0]; got error exit status 1, stdout \"Fetching the previously installed CoreDNS version\\n\\n***WARNING***\\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\\nbefore running this script:\\n\\n# example: pin to etcd v3.0.17\\nexport ETCD_IMAGE=3.0.17\\nexport ETCD_VERSION=3.0.17\\n\\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\\n\\n== Pre-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.5-beta.1.51+e7f962ba86f4ce\\\"\\nname: \\\"bootstrap-e2e-minion-group-dmz1\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.5-beta.1.51+e7f962ba86f4ce\\\"\\nname: \\\"bootstrap-e2e-minion-group-drlt\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.5-beta.1.51+e7f962ba86f4ce\\\"\\nname: \\\"bootstrap-e2e-minion-group-v29x\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.5-beta.1.51+e7f962ba86f4ce\\\"\\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release/release/v1.17.2-beta.0/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\\n== Upgrading master environment variables. 
==\\n== Waiting for new master to respond to API requests ==\\n......................== Done ==\\nWaiting for CoreDNS to update\\nFetching the latest installed CoreDNS version\\n== Downloading the CoreDNS migration tool ==\\n== Upgrading the CoreDNS ConfigMap ==\\nconfigmap/coredns configured\\n== The CoreDNS Config has been updated ==\\n== Validating cluster post-upgrade ==\\nValidating gce cluster, MULTIZONE=\\nFound 4 node(s).\\nNAME                              STATUS                     ROLES    AGE     VERSION\\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   9m10s   v1.17.1\\nbootstrap-e2e-minion-group-dmz1   Ready                      <none>   9m14s   v1.16.5-beta.1.51+e7f962ba86f4ce\\nbootstrap-e2e-minion-group-drlt   Ready                      <none>   9m14s   v1.16.5-beta.1.51+e7f962ba86f4ce\\nbootstrap-e2e-minion-group-v29x   Ready                      <none>   9m14s   v1.16.5-beta.1.51+e7f962ba86f4ce\\nValidate output:\\nNAME                 STATUS    MESSAGE             ERROR\\netcd-0               Healthy   {\\\"health\\\":\\\"true\\\"}   \\netcd-1               Healthy   {\\\"health\\\":\\\"true\\\"}   \\ncontroller-manager   Healthy   ok                  \\nscheduler            Healthy   ok                  \\n\\x1b[0;32mCluster validation succeeded\\x1b[0m\\n== Post-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.1\\\"\\nname: \\\"bootstrap-e2e-minion-group-dmz1\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.5-beta.1.51+e7f962ba86f4ce\\\"\\nname: \\\"bootstrap-e2e-minion-group-drlt\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.5-beta.1.51+e7f962ba86f4ce\\\"\\nname: \\\"bootstrap-e2e-minion-group-v29x\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.5-beta.1.51+e7f962ba86f4ce\\\"\\n\", stderr \"Project: gce-gci-upg-lat-1-4-ctl-skew\\nNetwork Project: gce-gci-upg-lat-1-4-ctl-skew\\nZone: us-west1-b\\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\\nNODE_NAMES=bootstrap-e2e-minion-group-dmz1 bootstrap-e2e-minion-group-drlt bootstrap-e2e-minion-group-v29x\\nTrying to find master named 'bootstrap-e2e-master'\\nLooking for address 'bootstrap-e2e-master-ip'\\nUsing master: bootstrap-e2e-master (external IP: 34.83.235.203; internal IP: (not set))\\nDeleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-4-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\\nCreated [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-4-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: Some requests generated warnings:\\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\\n - The resource 'projects/cos-cloud/global/images/cos-77-12371-89-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-77-12371-114-0'.\\n\\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS\\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   34.83.235.203  RUNNING\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nProject: gce-gci-upg-lat-1-4-ctl-skew\\nNetwork Project: gce-gci-upg-lat-1-4-ctl-skew\\nZone: us-west1-b\\nWARNING: Some requests did not succeed.\\n - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.\\n - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.\\n - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.\\n\\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\\n\"",
    }
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.17.2-beta.0]; got error exit status 1, stdout "Fetching the previously installed CoreDNS version\n\n***WARNING***\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\nbefore running this script:\n\n# example: pin to etcd v3.0.17\nexport ETCD_IMAGE=3.0.17\nexport ETCD_VERSION=3.0.17\n\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\n\n== Pre-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.5-beta.1.51+e7f962ba86f4ce\"\nname: \"bootstrap-e2e-minion-group-dmz1\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.5-beta.1.51+e7f962ba86f4ce\"\nname: \"bootstrap-e2e-minion-group-drlt\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.5-beta.1.51+e7f962ba86f4ce\"\nname: \"bootstrap-e2e-minion-group-v29x\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.5-beta.1.51+e7f962ba86f4ce\"\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release/release/v1.17.2-beta.0/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\n== Upgrading master environment variables. ==\n== Waiting for new master to respond to API requests ==\n......................== Done ==\nWaiting for CoreDNS to update\nFetching the latest installed CoreDNS version\n== Downloading the CoreDNS migration tool ==\n== Upgrading the CoreDNS ConfigMap ==\nconfigmap/coredns configured\n== The CoreDNS Config has been updated ==\n== Validating cluster post-upgrade ==\nValidating gce cluster, MULTIZONE=\nFound 4 node(s).\nNAME                              STATUS                     ROLES    AGE     VERSION\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   9m10s   v1.17.1\nbootstrap-e2e-minion-group-dmz1   Ready                      <none>   9m14s   v1.16.5-beta.1.51+e7f962ba86f4ce\nbootstrap-e2e-minion-group-drlt   Ready                      <none>   9m14s   v1.16.5-beta.1.51+e7f962ba86f4ce\nbootstrap-e2e-minion-group-v29x   Ready                      <none>   9m14s   v1.16.5-beta.1.51+e7f962ba86f4ce\nValidate output:\nNAME                 STATUS    MESSAGE             ERROR\netcd-0               Healthy   {\"health\":\"true\"}   \netcd-1               Healthy   {\"health\":\"true\"}   \ncontroller-manager   Healthy   ok                  \nscheduler            Healthy   ok                  \n\x1b[0;32mCluster validation succeeded\x1b[0m\n== Post-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.1\"\nname: \"bootstrap-e2e-minion-group-dmz1\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.5-beta.1.51+e7f962ba86f4ce\"\nname: \"bootstrap-e2e-minion-group-drlt\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.5-beta.1.51+e7f962ba86f4ce\"\nname: \"bootstrap-e2e-minion-group-v29x\", osImage: \"Container-Optimized 
OS from Google\", kubeletVersion: \"v1.16.5-beta.1.51+e7f962ba86f4ce\"\n", stderr "Project: gce-gci-upg-lat-1-4-ctl-skew\nNetwork Project: gce-gci-upg-lat-1-4-ctl-skew\nZone: us-west1-b\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\nNODE_NAMES=bootstrap-e2e-minion-group-dmz1 bootstrap-e2e-minion-group-drlt bootstrap-e2e-minion-group-v29x\nTrying to find master named 'bootstrap-e2e-master'\nLooking for address 'bootstrap-e2e-master-ip'\nUsing master: bootstrap-e2e-master (external IP: 34.83.235.203; internal IP: (not set))\nDeleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-4-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\nCreated [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-4-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: Some requests generated warnings:\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\n - The resource 'projects/cos-cloud/global/images/cos-77-12371-89-0' is deprecated. A suggested replacement is 'projects/cos-cloud/global/images/cos-77-12371-114-0'.\n\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   34.83.235.203  RUNNING\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nProject: gce-gci-upg-lat-1-4-ctl-skew\nNetwork Project: gce-gci-upg-lat-1-4-ctl-skew\nZone: us-west1-b\nWARNING: Some requests did not succeed.\n - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.\n - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.\n - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.\n\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\n"
occurred

k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func2.3.1.1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:147 +0x12f
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do(0xc002e69200)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:111 +0x38a
k8s.io/kubernetes/test/e2e/cloud/gcp.runUpgradeSuite(0xc000a1bcc0, 0x7bdf120, 0xc, 0xc, 0xc0006c3d70, 0xc000c3fa40, 0x2, 0xc002569300)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:479 +0x47a
k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func2.3.1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:152 +0x222
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0003de300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc0003de300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b
testing.tRunner(0xc0003de300, 0x4c764b0)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
				from junit_upgradeupgrades.xml

Filter through log files | View test history on testgrid
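Note: both upgrade failures in this run bottom out in the same stderr line, "/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable". That is the message bash prints when a script running with `set -o nounset` (`set -u`) expands a variable that was never assigned, so the skewed upgrade.sh appears to expand download_dir before it is set. A minimal sketch of how this class of failure arises (illustrative only, not the actual upgrade.sh code):

    #!/usr/bin/env bash
    set -o errexit
    set -o nounset   # same effect as `set -u`: expanding an unset variable is fatal

    # download_dir is never assigned, so this expansion aborts with
    # "download_dir: unbound variable" and the script exits non-zero,
    # which the harness surfaces as the exit-status error above.
    echo "using ${download_dir}"

Guarding the expansion with a default (e.g. "${download_dir:-}") or assigning the variable before use avoids the abort.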


Kubernetes e2e suite [k8s.io] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to delete nodes 5m40s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-cloud\-provider\-gcp\]\sNodes\s\[Disruptive\]\sResize\s\[Slow\]\sshould\sbe\sable\sto\sdelete\snodes$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/resize_nodes.go:111
Jan 16 14:05:03.224: Unexpected error:
    <*errors.errorString | 0xc004364ee0>: {
        s: "failed to wait for pods responding: pod with UID 63644ca8-0c39-4da9-a39a-70de48cce2b6 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &PodList{ListMeta:{/api/v1/namespaces/resize-nodes-2658/pods 44292  <nil>},Items:[]Pod{Pod{ObjectMeta:{my-hostname-delete-node-kbf7x my-hostname-delete-node- resize-nodes-2658 /api/v1/namespaces/resize-nodes-2658/pods/my-hostname-delete-node-kbf7x 272d1d08-0f41-4995-abfa-9b0b746c1e6d 44204 0 2020-01-16 14:04:31 +0000 UTC <nil> <nil> map[name:my-hostname-delete-node] map[] [{v1 ReplicationController my-hostname-delete-node d60b8f9e-58b8-4488-aef6-cc15ba63ed3e 0xc002aca9ae 0xc002aca9af}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8wwbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8wwbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:my-hostname-delete-node,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9376,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8wwbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-f6rs,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:04:31 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:04:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:04:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:04:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.8,PodIP:10.64.0.125,StartTime:2020-01-16 14:04:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:my-hostname-delete-node,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-16 14:04:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://f8cda843edb1cb6d6b32009aff0f8358124988f8062ec2b1aa5efbf27497d292,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.0.125,},},EphemeralContainerStatuses:[]ContainerStatus{},},},Pod{ObjectMeta:{my-hostname-delete-node-pchlt my-hostname-delete-node- resize-nodes-2658 /api/v1/namespaces/resize-nodes-2658/pods/my-hostname-delete-node-pchlt 231f8dad-c4ac-41b2-b4fa-5f4cb5311f4c 43518 0 2020-01-16 14:00:56 +0000 UTC <nil> <nil> map[name:my-hostname-delete-node] map[] [{v1 ReplicationController my-hostname-delete-node d60b8f9e-58b8-4488-aef6-cc15ba63ed3e 0xc002acaae6 0xc002acaae7}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8wwbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8wwbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:my-hostname-delete-node,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9376,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8wwbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-f6rs,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,Suppleme
ntalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:00:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:00:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:00:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:00:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.8,PodIP:10.64.0.124,StartTime:2020-01-16 14:00:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:my-hostname-delete-node,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-16 14:00:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://9e22d72607f58fec5fed72403ace65c06035419cc398d243f557c9ff708ab7ab,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.0.124,},},EphemeralContainerStatuses:[]ContainerStatus{},},},Pod{ObjectMeta:{my-hostname-delete-node-t8qpb my-hostname-delete-node- resize-nodes-2658 /api/v1/namespaces/resize-nodes-2658/pods/my-hostname-delete-node-t8qpb 1d819805-646d-4728-95a8-b903f98eba3d 43514 0 2020-01-16 14:00:56 +0000 UTC <nil> <nil> map[name:my-hostname-delete-node] map[] [{v1 ReplicationController my-hostname-delete-node d60b8f9e-58b8-4488-aef6-cc15ba63ed3e 0xc002acac16 0xc002acac17}] []  
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8wwbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8wwbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:my-hostname-delete-node,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9376,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8wwbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-v29x,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:00:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:00:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:00:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:00:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:10.64.1.225,StartTime:2020-01-16 14:00:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:my-hostname-delete-node,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-16 
14:00:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://60c91b663ec2cfdab49fbab8b6e343e340f3f4ac6dbc8dab16a65efdf8264c44,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.1.225,},},EphemeralContainerStatuses:[]ContainerStatus{},},},},}",
    }
    failed to wait for pods responding: pod with UID 63644ca8-0c39-4da9-a39a-70de48cce2b6 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &PodList{ListMeta:{/api/v1/namespaces/resize-nodes-2658/pods 44292  <nil>},Items:[]Pod{Pod{ObjectMeta:{my-hostname-delete-node-kbf7x my-hostname-delete-node- resize-nodes-2658 /api/v1/namespaces/resize-nodes-2658/pods/my-hostname-delete-node-kbf7x 272d1d08-0f41-4995-abfa-9b0b746c1e6d 44204 0 2020-01-16 14:04:31 +0000 UTC <nil> <nil> map[name:my-hostname-delete-node] map[] [{v1 ReplicationController my-hostname-delete-node d60b8f9e-58b8-4488-aef6-cc15ba63ed3e 0xc002aca9ae 0xc002aca9af}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8wwbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8wwbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:my-hostname-delete-node,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9376,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8wwbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-f6rs,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:04:31 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:04:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:04:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:04:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.8,PodIP:10.64.0.125,StartTime:2020-01-16 14:04:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:my-hostname-delete-node,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-16 14:04:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://f8cda843edb1cb6d6b32009aff0f8358124988f8062ec2b1aa5efbf27497d292,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.0.125,},},EphemeralContainerStatuses:[]ContainerStatus{},},},Pod{ObjectMeta:{my-hostname-delete-node-pchlt my-hostname-delete-node- resize-nodes-2658 /api/v1/namespaces/resize-nodes-2658/pods/my-hostname-delete-node-pchlt 231f8dad-c4ac-41b2-b4fa-5f4cb5311f4c 43518 0 2020-01-16 14:00:56 +0000 UTC <nil> <nil> map[name:my-hostname-delete-node] map[] [{v1 ReplicationController my-hostname-delete-node d60b8f9e-58b8-4488-aef6-cc15ba63ed3e 0xc002acaae6 0xc002acaae7}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8wwbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8wwbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:my-hostname-delete-node,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9376,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8wwbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-f6rs,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,Suppleme
ntalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:00:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:00:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:00:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:00:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.8,PodIP:10.64.0.124,StartTime:2020-01-16 14:00:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:my-hostname-delete-node,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-16 14:00:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://9e22d72607f58fec5fed72403ace65c06035419cc398d243f557c9ff708ab7ab,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.0.124,},},EphemeralContainerStatuses:[]ContainerStatus{},},},Pod{ObjectMeta:{my-hostname-delete-node-t8qpb my-hostname-delete-node- resize-nodes-2658 /api/v1/namespaces/resize-nodes-2658/pods/my-hostname-delete-node-t8qpb 1d819805-646d-4728-95a8-b903f98eba3d 43514 0 2020-01-16 14:00:56 +0000 UTC <nil> <nil> map[name:my-hostname-delete-node] map[] [{v1 ReplicationController my-hostname-delete-node d60b8f9e-58b8-4488-aef6-cc15ba63ed3e 0xc002acac16 0xc002acac17}] []  
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8wwbb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8wwbb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:my-hostname-delete-node,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9376,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8wwbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:bootstrap-e2e-minion-group-v29x,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:00:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:00:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:00:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-16 14:00:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.138.0.5,PodIP:10.64.1.225,StartTime:2020-01-16 14:00:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:my-hostname-delete-node,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-16 
14:00:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://60c91b663ec2cfdab49fbab8b6e343e340f3f4ac6dbc8dab16a65efdf8264c44,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.1.225,},},EphemeralContainerStatuses:[]ContainerStatus{},},},},}
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/resize_nodes.go:137
				
				from junit_skew01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] 17m38s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-cloud\-provider\-gcp\]\sUpgrade\s\[Feature\:Upgrade\]\scluster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:ClusterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:136
Jan 16 11:34:24.995: Unexpected error:
    <*errors.errorString | 0xc00071ad70>: {
        s: "error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.17.2-beta.0]; got error exit status 1, stdout \"Fetching the previously installed CoreDNS version\\n\\n***WARNING***\\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\\nbefore running this script:\\n\\n# example: pin to etcd v3.0.17\\nexport ETCD_IMAGE=3.0.17\\nexport ETCD_VERSION=3.0.17\\n\\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\\n\\n== Pre-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.5-beta.1.51+e7f962ba86f4ce\\\"\\nname: \\\"bootstrap-e2e-minion-group-dmz1\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.5-beta.1.51+e7f962ba86f4ce\\\"\\nname: \\\"bootstrap-e2e-minion-group-drlt\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.5-beta.1.51+e7f962ba86f4ce\\\"\\nname: \\\"bootstrap-e2e-minion-group-v29x\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.5-beta.1.51+e7f962ba86f4ce\\\"\\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release/release/v1.17.2-beta.0/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\\n== Upgrading master environment variables. 
==\\n== Waiting for new master to respond to API requests ==\\n......................== Done ==\\nWaiting for CoreDNS to update\\nFetching the latest installed CoreDNS version\\n== Downloading the CoreDNS migration tool ==\\n== Upgrading the CoreDNS ConfigMap ==\\nconfigmap/coredns configured\\n== The CoreDNS Config has been updated ==\\n== Validating cluster post-upgrade ==\\nValidating gce cluster, MULTIZONE=\\nFound 4 node(s).\\nNAME                              STATUS                     ROLES    AGE     VERSION\\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   9m10s   v1.17.1\\nbootstrap-e2e-minion-group-dmz1   Ready                      <none>   9m14s   v1.16.5-beta.1.51+e7f962ba86f4ce\\nbootstrap-e2e-minion-group-drlt   Ready                      <none>   9m14s   v1.16.5-beta.1.51+e7f962ba86f4ce\\nbootstrap-e2e-minion-group-v29x   Ready                      <none>   9m14s   v1.16.5-beta.1.51+e7f962ba86f4ce\\nValidate output:\\nNAME                 STATUS    MESSAGE             ERROR\\netcd-0               Healthy   {\\\"health\\\":\\\"true\\\"}   \\netcd-1               Healthy   {\\\"health\\\":\\\"true\\\"}   \\ncontroller-manager   Healthy   ok                  \\nscheduler            Healthy   ok                  \\n\\x1b[0;32mCluster validation succeeded\\x1b[0m\\n== Post-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.1\\\"\\nname: \\\"bootstrap-e2e-minion-group-dmz1\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.5-beta.1.51+e7f962ba86f4ce\\\"\\nname: \\\"bootstrap-e2e-minion-group-drlt\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.5-beta.1.51+e7f962ba86f4ce\\\"\\nname: \\\"bootstrap-e2e-minion-group-v29x\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.16.5-beta.1.51+e7f962ba86f4ce\\\"\\n\", stderr \"Project: gce-gci-upg-lat-1-4-ctl-skew\\nNetwork Project: gce-gci-upg-lat-1-4-ctl-skew\\nZone: us-west1-b\\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\\nNODE_NAMES=bootstrap-e2e-minion-group-dmz1 bootstrap-e2e-minion-group-drlt bootstrap-e2e-minion-group-v29x\\nTrying to find master named 'bootstrap-e2e-master'\\nLooking for address 'bootstrap-e2e-master-ip'\\nUsing master: bootstrap-e2e-master (external IP: 34.83.235.203; internal IP: (not set))\\nDeleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-4-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\\nCreated [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-4-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: Some requests generated warnings:\\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\\n - The resource 'projects/cos-cloud/global/images/cos-77-12371-89-0' is deprecated. 
A suggested replacement is 'projects/cos-cloud/global/images/cos-77-12371-114-0'.\\n\\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS\\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   34.83.235.203  RUNNING\\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\\nProject: gce-gci-upg-lat-1-4-ctl-skew\\nNetwork Project: gce-gci-upg-lat-1-4-ctl-skew\\nZone: us-west1-b\\nWARNING: Some requests did not succeed.\\n - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.\\n - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.\\n - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.\\n\\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\\n\"",
    }
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.17.2-beta.0]; got error exit status 1, stdout "Fetching the previously installed CoreDNS version\n\n***WARNING***\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\nbefore running this script:\n\n# example: pin to etcd v3.0.17\nexport ETCD_IMAGE=3.0.17\nexport ETCD_VERSION=3.0.17\n\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\n\n== Pre-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.5-beta.1.51+e7f962ba86f4ce\"\nname: \"bootstrap-e2e-minion-group-dmz1\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.5-beta.1.51+e7f962ba86f4ce\"\nname: \"bootstrap-e2e-minion-group-drlt\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.5-beta.1.51+e7f962ba86f4ce\"\nname: \"bootstrap-e2e-minion-group-v29x\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.5-beta.1.51+e7f962ba86f4ce\"\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release/release/v1.17.2-beta.0/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\n== Upgrading master environment variables. ==\n== Waiting for new master to respond to API requests ==\n......................== Done ==\nWaiting for CoreDNS to update\nFetching the latest installed CoreDNS version\n== Downloading the CoreDNS migration tool ==\n== Upgrading the CoreDNS ConfigMap ==\nconfigmap/coredns configured\n== The CoreDNS Config has been updated ==\n== Validating cluster post-upgrade ==\nValidating gce cluster, MULTIZONE=\nFound 4 node(s).\nNAME                              STATUS                     ROLES    AGE     VERSION\nbootstrap-e2e-master              Ready,SchedulingDisabled   <none>   9m10s   v1.17.1\nbootstrap-e2e-minion-group-dmz1   Ready                      <none>   9m14s   v1.16.5-beta.1.51+e7f962ba86f4ce\nbootstrap-e2e-minion-group-drlt   Ready                      <none>   9m14s   v1.16.5-beta.1.51+e7f962ba86f4ce\nbootstrap-e2e-minion-group-v29x   Ready                      <none>   9m14s   v1.16.5-beta.1.51+e7f962ba86f4ce\nValidate output:\nNAME                 STATUS    MESSAGE             ERROR\netcd-0               Healthy   {\"health\":\"true\"}   \netcd-1               Healthy   {\"health\":\"true\"}   \ncontroller-manager   Healthy   ok                  \nscheduler            Healthy   ok                  \n\x1b[0;32mCluster validation succeeded\x1b[0m\n== Post-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.1\"\nname: \"bootstrap-e2e-minion-group-dmz1\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.5-beta.1.51+e7f962ba86f4ce\"\nname: \"bootstrap-e2e-minion-group-drlt\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.16.5-beta.1.51+e7f962ba86f4ce\"\nname: \"bootstrap-e2e-minion-group-v29x\", osImage: \"Container-Optimized 
OS from Google\", kubeletVersion: \"v1.16.5-beta.1.51+e7f962ba86f4ce\"\n", stderr "Project: gce-gci-upg-lat-1-4-ctl-skew\nNetwork Project: gce-gci-upg-lat-1-4-ctl-skew\nZone: us-west1-b\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\nNODE_NAMES=bootstrap-e2e-minion-group-dmz1 bootstrap-e2e-minion-group-drlt bootstrap-e2e-minion-group-v29x\nTrying to find master named 'bootstrap-e2e-master'\nLooking for address 'bootstrap-e2e-master-ip'\nUsing master: bootstrap-e2e-master (external IP: 34.83.235.203; internal IP: (not set))\nDeleted [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-4-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\nCreated [https://www.googleapis.com/compute/v1/projects/gce-gci-upg-lat-1-4-ctl-skew/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: Some requests generated warnings:\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\n - The resource 'projects/cos-cloud/global/images/cos-77-12371-89-0' is deprecated. A suggested replacement is 'projects/cos-cloud/global/images/cos-77-12371-114-0'.\n\nNAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS\nbootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   34.83.235.203  RUNNING\nWarning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\nProject: gce-gci-upg-lat-1-4-ctl-skew\nNetwork Project: gce-gci-upg-lat-1-4-ctl-skew\nZone: us-west1-b\nWARNING: Some requests did not succeed.\n - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.\n - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.\n - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.\n\n/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable\n"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:147
				
				from junit_upgrade01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns. 1m33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\sdisruptive\[Disruptive\]\sShould\stest\sthat\spv\sused\sin\sa\spod\sthat\sis\sdeleted\swhile\sthe\skubelet\sis\sdown\scleans\sup\swhen\sthe\skubelet\sreturns\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/disruptive.go:149
Jan 16 12:30:16.515: Expected find stdout to be empty.
Expected
    <string>: /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-978aa3e2-173f-4974-9874-1d2078cda16c/dev/effbefb1-a4f4-48e6-84ab-ac52800b8dbb
    
to be empty
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:414
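
Note: the assertion above means the test listed the pod's CSI block-volume publish directory on the node after the kubelet came back and expected no output; the device path that was printed shows the volumeDevices entry for the deleted pod was left behind. A rough illustration of that kind of check (assumed shape only, not the exact e2e helper):

    # Run on the node (the e2e suite performs an equivalent check over SSH).
    # Empty output means the kubelet cleaned up the block-volume publish path
    # for the deleted pod; any remaining entry, like the pvc-.../dev/... path
    # reported above, fails the test.
    find /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/ -mindepth 1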