Result: FAILURE
Tests: 16 failed / 15 succeeded
Started: 2020-10-08 20:32
Elapsed: 42m16s
Revision: v1.17.13-rc.0.23+744dec9d5d21b5
Builder: 46b7e45b-09a5-11eb-8b01-e6b73feeeb4e
infra-commit: 27b797365
job-version: v1.17.13-rc.0.23+744dec9d5d21b5
master_os_image: cos-77-12371-175-0
node_os_image: cos-77-12371-175-0
revision: v1.17.13-rc.0.23+744dec9d5d21b5

Test Failures


Cluster upgrade [sig-apps] daemonset-upgrade 6m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-apps\]\sdaemonset\-upgrade$'
Oct  8 20:47:34.932: Unexpected error:
    <*url.Error | 0xc001c16240>: {
        Op: "Get",
        URL: "https://34.83.199.21/api/v1/nodes",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 199, 21],
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://34.83.199.21/api/v1/nodes: dial tcp 34.83.199.21:443: i/o timeout
occurred

k8s.io/kubernetes/test/e2e/upgrades/apps.(*DaemonSetUpgradeTest).validateRunningDaemonSet(0x7b26330, 0xc0009371e0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apps/daemonsets.go:115 +0xbe
k8s.io/kubernetes/test/e2e/upgrades/apps.(*DaemonSetUpgradeTest).Test(0x7b26330, 0xc0009371e0, 0xc00355c180, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apps/daemonsets.go:104 +0xa4
k8s.io/kubernetes/test/e2e/cloud/gcp.(*chaosMonkeyAdapter).Test(0xc002998980, 0xc0034c0ca0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:400 +0x369
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0034c0ca0, 0xc003437720)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x93
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xab
				from junit_upgradeupgrades.xml



Cluster upgrade [sig-apps] deployment-upgrade 6m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-apps\]\sdeployment\-upgrade$'
Oct  8 20:47:34.934: Unexpected error:
    <*url.Error | 0xc00297b050>: {
        Op: "Get",
        URL: "https://34.83.199.21/apis/apps/v1/namespaces/sig-apps-deployment-upgrade-3826/deployments/dp",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 199, 21],
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://34.83.199.21/apis/apps/v1/namespaces/sig-apps-deployment-upgrade-3826/deployments/dp: dial tcp 34.83.199.21:443: i/o timeout
occurred

k8s.io/kubernetes/test/e2e/upgrades/apps.(*DeploymentUpgradeTest).Test(0x7b3c340, 0xc0009366e0, 0xc00355c180, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apps/deployments.go:136 +0x271
k8s.io/kubernetes/test/e2e/cloud/gcp.(*chaosMonkeyAdapter).Test(0xc002998780, 0xc0034c0c00)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:400 +0x369
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0034c0c00, 0xc0034376d0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x93
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xab
				from junit_upgradeupgrades.xml



Cluster upgrade [sig-apps] job-upgrade 6m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-apps\]\sjob\-upgrade$'
Oct  8 20:47:34.932: Unexpected error:
    <*url.Error | 0xc002959680>: {
        Op: "Get",
        URL: "https://34.83.199.21/api/v1/namespaces/sig-apps-job-upgrade-9750/pods?labelSelector=job%3Dfoo",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 199, 21],
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://34.83.199.21/api/v1/namespaces/sig-apps-job-upgrade-9750/pods?labelSelector=job%3Dfoo: dial tcp 34.83.199.21:443: i/o timeout
occurred

k8s.io/kubernetes/test/e2e/upgrades/apps.(*JobUpgradeTest).Test(0x7b308f0, 0xc0009369a0, 0xc00355c180, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apps/job.go:65 +0xe4
k8s.io/kubernetes/test/e2e/cloud/gcp.(*chaosMonkeyAdapter).Test(0xc002998840, 0xc0034c0c20)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:400 +0x369
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0034c0c20, 0xc0034376e0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x93
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xab
				from junit_upgradeupgrades.xml



Cluster upgrade [sig-apps] replicaset-upgrade 6m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-apps\]\sreplicaset\-upgrade$'
Oct  8 20:47:34.932: Unexpected error:
    <*url.Error | 0xc00297ae70>: {
        Op: "Get",
        URL: "https://34.83.199.21/apis/apps/v1/namespaces/sig-apps-replicaset-upgrade-4013/replicasets/rs",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 199, 21],
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://34.83.199.21/apis/apps/v1/namespaces/sig-apps-replicaset-upgrade-4013/replicasets/rs: dial tcp 34.83.199.21:443: i/o timeout
occurred

k8s.io/kubernetes/test/e2e/upgrades/apps.(*ReplicaSetUpgradeTest).Test(0x7b2ad90, 0xc0009362c0, 0xc00355c180, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apps/replicasets.go:84 +0x28a
k8s.io/kubernetes/test/e2e/cloud/gcp.(*chaosMonkeyAdapter).Test(0xc002998680, 0xc0034c0ba0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:400 +0x369
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0034c0ba0, 0xc0034376b0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x93
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xab
				from junit_upgradeupgrades.xml



Cluster upgrade [sig-apps] statefulset-upgrade 6m46s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-apps\]\sstatefulset\-upgrade$'
Oct  8 20:48:04.934: Unexpected error:
    <*url.Error | 0xc002959e00>: {
        Op: "Get",
        URL: "https://34.83.199.21/apis/apps/v1/namespaces/ss/statefulsets",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 199, 21],
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://34.83.199.21/apis/apps/v1/namespaces/ss/statefulsets: dial tcp 34.83.199.21:443: i/o timeout
occurred

k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets(0x535a5a0, 0xc0029b4160, 0x49119e2, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:75 +0x1c7
k8s.io/kubernetes/test/e2e/upgrades/apps.(*StatefulSetUpgradeTest).Teardown(0x7b2ada0, 0xc000936580)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apps/statefulset.go:111 +0x53
panic(0x44fcaa0, 0xc002998280)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc001aaedc0, 0x2b0, 0x790f465, 0x6c, 0x44, 0xc00195e700, 0x6ff)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa1
panic(0x3d254a0, 0x5108d50)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc001aaedc0, 0x2b0, 0xc0022f7020, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1cc
k8s.io/kubernetes/test/e2e/framework.Fail(0xc001aaeb00, 0x29b, 0xc0030f0350, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:67 +0x1ee
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc002998000, 0x5238080, 0x7b5cdb8, 0x0, 0x0, 0x0, 0x0, 0xc002998000)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f0
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc002998000, 0x5238080, 0x7b5cdb8, 0x0, 0x0, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x51d0c60, 0xc002908540, 0x0, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:46 +0xf5
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:40
k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList(0x535a5a0, 0xc0029b4160, 0xc0006d0a00, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68 +0x216
k8s.io/kubernetes/test/e2e/framework/statefulset.ExecInStatefulPods(0x535a5a0, 0xc0029b4160, 0xc0006d0a00, 0xc001e06000, 0xe, 0xc00071c0a0, 0x1f)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:237 +0x5d
k8s.io/kubernetes/test/e2e/framework/statefulset.CheckMount(0x535a5a0, 0xc0029b4160, 0xc0006d0a00, 0x4913f38, 0x5, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:216 +0x3c7
k8s.io/kubernetes/test/e2e/upgrades/apps.(*StatefulSetUpgradeTest).verify(0x7b2ada0, 0xc000936580)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apps/statefulset.go:116 +0x8c
k8s.io/kubernetes/test/e2e/upgrades/apps.(*StatefulSetUpgradeTest).Test(0x7b2ada0, 0xc000936580, 0xc00355c180, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apps/statefulset.go:106 +0x4c
k8s.io/kubernetes/test/e2e/cloud/gcp.(*chaosMonkeyAdapter).Test(0xc002998740, 0xc0034c0be0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:400 +0x369
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0034c0be0, 0xc0034376c0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x93
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xab
				from junit_upgradeupgrades.xml



Cluster upgrade [sig-cloud-provider-gcp] cluster-upgrade 2m34s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-cloud\-provider\-gcp\]\scluster\-upgrade$'
Oct  8 20:47:04.929: Unexpected error:
    <*errors.errorString | 0xc001e021a0>: {
        s: "error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.18.10-rc.0.34+92bf8a2b53f9bd]; got error exit status 1, stdout \"Fetching the previously installed CoreDNS version\\n\\n***WARNING***\\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\\nbefore running this script:\\n\\n# example: pin to etcd v3.0.17\\nexport ETCD_IMAGE=3.0.17\\nexport ETCD_VERSION=3.0.17\\n\\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\\n\\n== Pre-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.13-rc.0.23+744dec9d5d21b5\\\"\\nname: \\\"bootstrap-e2e-minion-group-klgc\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.13-rc.0.23+744dec9d5d21b5\\\"\\nname: \\\"bootstrap-e2e-minion-group-v86z\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.13-rc.0.23+744dec9d5d21b5\\\"\\nname: \\\"bootstrap-e2e-minion-group-vbzr\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.13-rc.0.23+744dec9d5d21b5\\\"\\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\\nFailure trying to curl release .sha1\\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.18.10-rc.0.34+92bf8a2b53f9bd/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\\n== Upgrading master environment variables. ==\\n\", stderr \"Project: k8s-jkns-e2e-protobuf\\nNetwork Project: k8s-jkns-e2e-protobuf\\nZone: us-west1-b\\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\\nNODE_NAMES=bootstrap-e2e-minion-group-klgc bootstrap-e2e-minion-group-v86z bootstrap-e2e-minion-group-vbzr\\ncurl: (22) The requested URL returned error: 404 \\nTrying to find master named 'bootstrap-e2e-master'\\nLooking for address 'bootstrap-e2e-master-ip'\\nUsing master: bootstrap-e2e-master (external IP: 34.83.199.21; internal IP: (not set))\\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-protobuf/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\\nWARNING: Some requests generated warnings:\\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\\n - The resource 'projects/cos-cloud/global/images/cos-77-12371-175-0' is deprecated. A suggested replacement is 'projects/cos-cloud/global/images/cos-77-12371-183-0'.\\n\\nERROR: (gcloud.compute.instances.create) Could not fetch resource:\\n - The zone 'projects/k8s-jkns-e2e-protobuf/zones/us-west1-b' does not have enough resources available to fulfill the request.  '(resource type:compute)'.\\nFailed to create master instance due to non-retryable error\\n\"",
    }
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.18.10-rc.0.34+92bf8a2b53f9bd]; got error exit status 1, stdout "Fetching the previously installed CoreDNS version\n\n***WARNING***\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\nbefore running this script:\n\n# example: pin to etcd v3.0.17\nexport ETCD_IMAGE=3.0.17\nexport ETCD_VERSION=3.0.17\n\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\n\n== Pre-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.13-rc.0.23+744dec9d5d21b5\"\nname: \"bootstrap-e2e-minion-group-klgc\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.13-rc.0.23+744dec9d5d21b5\"\nname: \"bootstrap-e2e-minion-group-v86z\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.13-rc.0.23+744dec9d5d21b5\"\nname: \"bootstrap-e2e-minion-group-vbzr\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.13-rc.0.23+744dec9d5d21b5\"\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\nFailure trying to curl release .sha1\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.18.10-rc.0.34+92bf8a2b53f9bd/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\n== Upgrading master environment variables. ==\n", stderr "Project: k8s-jkns-e2e-protobuf\nNetwork Project: k8s-jkns-e2e-protobuf\nZone: us-west1-b\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\nNODE_NAMES=bootstrap-e2e-minion-group-klgc bootstrap-e2e-minion-group-v86z bootstrap-e2e-minion-group-vbzr\ncurl: (22) The requested URL returned error: 404 \nTrying to find master named 'bootstrap-e2e-master'\nLooking for address 'bootstrap-e2e-master-ip'\nUsing master: bootstrap-e2e-master (external IP: 34.83.199.21; internal IP: (not set))\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-protobuf/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\nWARNING: Some requests generated warnings:\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\n - The resource 'projects/cos-cloud/global/images/cos-77-12371-175-0' is deprecated. A suggested replacement is 'projects/cos-cloud/global/images/cos-77-12371-183-0'.\n\nERROR: (gcloud.compute.instances.create) Could not fetch resource:\n - The zone 'projects/k8s-jkns-e2e-protobuf/zones/us-west1-b' does not have enough resources available to fulfill the request.  '(resource type:compute)'.\nFailed to create master instance due to non-retryable error\n"
occurred

k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func2.3.1.1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:153 +0x132
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do(0xc003c611e8)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:111 +0x38a
k8s.io/kubernetes/test/e2e/cloud/gcp.runUpgradeSuite(0xc000293a20, 0x7999240, 0xc, 0xc, 0xc000380c00, 0xc0024230e0, 0x2, 0xc0034f9d60)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:485 +0x47a
k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func2.3.1()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:158 +0x222
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0028b8100)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:125 +0x324
k8s.io/kubernetes/test/e2e.TestE2E(0xc0028b8100)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc0028b8100, 0x4af5d18)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
				from junit_upgradeupgrades.xml



Cluster upgrade [sig-storage] [sig-api-machinery] configmap-upgrade 6m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-storage\]\s\[sig\-api\-machinery\]\sconfigmap\-upgrade$'
Oct  8 20:47:34.934: Error creating Pod
Unexpected error:
    <*url.Error | 0xc002908d80>: {
        Op: "Post",
        URL: "https://34.83.199.21/api/v1/namespaces/sig-storage-sig-api-machinery-configmap-upgrade-8730/pods",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 199, 21],
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Post https://34.83.199.21/api/v1/namespaces/sig-storage-sig-api-machinery-configmap-upgrade-8730/pods: dial tcp 34.83.199.21:443: i/o timeout
occurred

k8s.io/kubernetes/test/e2e/framework.(*PodClient).Create(0xc001dbe280, 0xc0009f8000, 0x34)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83 +0x157
k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc000936b00, 0xc0009f8000, 0x4953006, 0x15, 0xc002491d30, 0x2, 0x2, 0x4af9f28, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:798 +0xad
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc000936b00, 0x496613f, 0x18, 0xc0009f8000, 0x0, 0xc00009ad30, 0x2, 0x2, 0x4af9f28)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:782 +0x1bc
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:561
k8s.io/kubernetes/test/e2e/upgrades.(*ConfigMapUpgradeTest).testPod(0x7b26320, 0xc000936b00)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/configmaps.go:148 +0x9a8
k8s.io/kubernetes/test/e2e/upgrades.(*ConfigMapUpgradeTest).Test(0x7b26320, 0xc000936b00, 0xc00355c180, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/configmaps.go:74 +0x76
k8s.io/kubernetes/test/e2e/cloud/gcp.(*chaosMonkeyAdapter).Test(0xc002998880, 0xc0034c0c40)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:400 +0x369
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0034c0c40, 0xc0034376f0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x93
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xab
				from junit_upgradeupgrades.xml



Cluster upgrade [sig-storage] [sig-api-machinery] secret-upgrade 6m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-storage\]\s\[sig\-api\-machinery\]\ssecret\-upgrade$'
Oct  8 20:47:34.933: Error creating Pod
Unexpected error:
    <*url.Error | 0xc002946e40>: {
        Op: "Post",
        URL: "https://34.83.199.21/api/v1/namespaces/sig-storage-sig-api-machinery-secret-upgrade-546/pods",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 199, 21],
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Post https://34.83.199.21/api/v1/namespaces/sig-storage-sig-api-machinery-secret-upgrade-546/pods: dial tcp 34.83.199.21:443: i/o timeout
occurred

k8s.io/kubernetes/test/e2e/framework.(*PodClient).Create(0xc001d5a0e0, 0xc003bae800, 0x30)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83 +0x157
k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc000936160, 0xc003bae800, 0x4943ac7, 0x12, 0xc002545d30, 0x2, 0x2, 0x4af9f28, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:798 +0xad
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc000936160, 0x495a224, 0x16, 0xc003bae800, 0x0, 0xc001283d30, 0x2, 0x2, 0x4af9f28)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:782 +0x1bc
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:561
k8s.io/kubernetes/test/e2e/upgrades.(*SecretUpgradeTest).testPod(0x7b26318, 0xc000936160)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/secrets.go:145 +0x9a5
k8s.io/kubernetes/test/e2e/upgrades.(*SecretUpgradeTest).Test(0x7b26318, 0xc000936160, 0xc00355c180, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/secrets.go:72 +0x76
k8s.io/kubernetes/test/e2e/cloud/gcp.(*chaosMonkeyAdapter).Test(0xc002998640, 0xc0034c0b80)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:400 +0x369
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0034c0b80, 0xc003437690)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x93
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xab
				from junit_upgradeupgrades.xml



Cluster upgrade [sig-storage] persistent-volume-upgrade 6m46s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-storage\]\spersistent\-volume\-upgrade$'
Oct  8 20:48:04.934: Failed to delete 1 or more PVs/PVCs. Errors: failed to delete PVC "pvc-bm22g": PVC Delete API error: Delete https://34.83.199.21/api/v1/namespaces/sig-storage-persistent-volume-upgrade-1208/persistentvolumeclaims/pvc-bm22g: dial tcp 34.83.199.21:443: i/o timeout

k8s.io/kubernetes/test/e2e/upgrades/storage.(*PersistentVolumeUpgradeTest).Teardown(0x7b26328, 0xc000936f20)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/storage/persistent_volumes.go:78 +0xfc
panic(0x44fcaa0, 0xc00193ba40)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc000c84000, 0x28d, 0x7941977, 0x60, 0x53, 0xc001fe6000, 0x7eb)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa1
panic(0x3d254a0, 0x5108d50)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc000c84000, 0x28d, 0xc00248d7a0, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1cc
k8s.io/kubernetes/test/e2e/framework.Fail(0xc000a5c280, 0x278, 0xc000dda850, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:67 +0x1ee
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc00193b340, 0x5238080, 0x7b5cdb8, 0x0, 0xc001dcc8d0, 0x1, 0x1, 0xc00193b340)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f0
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc00193b340, 0x5238080, 0x7b5cdb8, 0xc001dcc8d0, 0x1, 0x1, 0x3d64180)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x51d0c60, 0xc0029c6e70, 0xc001dcc8d0, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:46 +0xf5
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:40
k8s.io/kubernetes/test/e2e/framework.(*PodClient).Create(0xc001de0080, 0xc00033ac00, 0x2a)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:83 +0x157
k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc000936f20, 0xc00033ac00, 0x491f30b, 0x9, 0xc00248dd70, 0x1, 0x1, 0x4af9f28, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:798 +0xad
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc000936f20, 0x4934960, 0xf, 0xc00033ac00, 0x0, 0xc0026d0d70, 0x1, 0x1, 0x4af9f28)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:782 +0x1bc
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:561
k8s.io/kubernetes/test/e2e/upgrades/storage.(*PersistentVolumeUpgradeTest).testPod(0x7b26328, 0xc000936f20, 0x4996ebf, 0x20)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/storage/persistent_volumes.go:86 +0x117
k8s.io/kubernetes/test/e2e/upgrades/storage.(*PersistentVolumeUpgradeTest).Test(0x7b26328, 0xc000936f20, 0xc00355c180, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/storage/persistent_volumes.go:71 +0x8f
k8s.io/kubernetes/test/e2e/cloud/gcp.(*chaosMonkeyAdapter).Test(0xc002998940, 0xc0034c0c80)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:400 +0x369
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0034c0c80, 0xc003437710)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x93
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xab
				from junit_upgradeupgrades.xml



Cluster upgrade apparmor-upgrade 6m46s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\sapparmor\-upgrade$'
Oct  8 20:47:34.931: Failed to list nodes
Unexpected error:
    <*url.Error | 0xc0028e9170>: {
        Op: "Get",
        URL: "https://34.83.199.21/api/v1/nodes",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 199, 21],
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://34.83.199.21/api/v1/nodes: dial tcp 34.83.199.21:443: i/o timeout
occurred

k8s.io/kubernetes/test/e2e/upgrades.(*AppArmorUpgradeTest).verifyNodesAppArmorEnabled(0x7b26338, 0xc000937340)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:105 +0x17c
k8s.io/kubernetes/test/e2e/upgrades.(*AppArmorUpgradeTest).Test(0x7b26338, 0xc000937340, 0xc00355c180, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:77 +0x5a
k8s.io/kubernetes/test/e2e/cloud/gcp.(*chaosMonkeyAdapter).Test(0xc0029989c0, 0xc0034c0cc0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:400 +0x369
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0034c0cc0, 0xc003437730)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x93
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xab
				from junit_upgradeupgrades.xml



Cluster upgrade hpa-upgrade 9m41s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\shpa\-upgrade$'
Oct  8 20:50:59.939: Unexpected error:
    <*url.Error | 0xc002926030>: {
        Op: "Get",
        URL: "https://34.83.199.21/api/v1/namespaces/hpa-upgrade-8677/replicationcontrollers/res-cons-upgrade",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 199, 21],
                Port: 443,
                Zone: "",
            },
            Err: {},
        },
    }
    Get https://34.83.199.21/api/v1/namespaces/hpa-upgrade-8677/replicationcontrollers/res-cons-upgrade: dial tcp 34.83.199.21:443: i/o timeout
occurred

k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).CleanUp(0xc00386e2d0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/autoscaling/autoscaling_utils.go:417 +0x212
k8s.io/kubernetes/test/e2e/upgrades.(*HPAUpgradeTest).Teardown(0x7b2adb0, 0xc000936dc0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:75 +0x55
panic(0x44fcaa0, 0xc0021a4700)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
panic(0x44fcaa0, 0xc0021a4700)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00104a000, 0x28a, 0x790dc62, 0x79, 0x141, 0xc001044a80, 0xa0c)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa1
panic(0x3d254a0, 0x5108d50)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00104a000, 0x28a, 0xc0024ef858, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1cc
k8s.io/kubernetes/test/e2e/framework.Fail(0xc000b38000, 0x275, 0xc00280a5b0, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:67 +0x1ee
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc0021a4380, 0x5238080, 0x7b5cdb8, 0x0, 0x0, 0x0, 0x0, 0xc0021a4380)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f0
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc0021a4380, 0x5238080, 0x7b5cdb8, 0x0, 0x0, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x51d0c60, 0xc002926690, 0x0, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:46 +0xf5
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:40
k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).GetReplicas(0xc00386e2d0, 0x26)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/autoscaling/autoscaling_utils.go:321 +0x6cb
k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas.func1(0xc0000a0b80, 0xc00034a380, 0x7f6d92c926d0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/autoscaling/autoscaling_utils.go:355 +0x37
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0000a0c88, 0xc69e00, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x6f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc001d88500, 0xc0026dfc88, 0xc001d88500, 0x26)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x4a817c800, 0xd18c2e2800, 0xc0000a0c88, 0xc00011a8c0, 0xc0000a0c90)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas(0xc00386e2d0, 0x1, 0xd18c2e2800)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/autoscaling/autoscaling_utils.go:354 +0x7f
k8s.io/kubernetes/test/e2e/upgrades.(*HPAUpgradeTest).test(0x7b2adb0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:85 +0x211
k8s.io/kubernetes/test/e2e/upgrades.(*HPAUpgradeTest).Test(0x7b2adb0, 0xc000936dc0, 0xc00355c180, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:68 +0x99
k8s.io/kubernetes/test/e2e/cloud/gcp.(*chaosMonkeyAdapter).Test(0xc0029988c0, 0xc0034c0c60)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:400 +0x369
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0034c0c60, 0xc003437700)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x93
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xab
				from junit_upgradeupgrades.xml



Cluster upgrade service-upgrade 6m46s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\sservice\-upgrade$'
Oct  8 20:48:05.004: Failed to delete service service-upgrade-5070/service-test

k8s.io/kubernetes/test/e2e/framework/service.WaitForServiceDeletedWithFinalizer(0x535a5a0, 0xc0024afa20, 0xc00392b7c0, 0x14, 0xc003982fd0, 0xc)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service/wait.go:37 +0x384
k8s.io/kubernetes/test/e2e/upgrades.(*ServiceUpgradeTest).test.func2(0x7b3bac0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/services.go:124 +0x90
panic(0x44fcaa0, 0xc0021a4ac0)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc000b38280, 0x241, 0x79212d3, 0x6c, 0x65, 0xc001954700, 0x60e)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa1
panic(0x3d254a0, 0x5108d50)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc000b38280, 0x241, 0xc001a6da18, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1cc
k8s.io/kubernetes/test/e2e/framework.Fail(0xc001680900, 0x22c, 0xc00280a880, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:67 +0x1ee
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc0021a47c0, 0x5238080, 0x7b5cdb8, 0x0, 0x0, 0x0, 0x0, 0xc0021a47c0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f0
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc0021a47c0, 0x5238080, 0x7b5cdb8, 0x0, 0x0, 0x0, 0x7e2fc9)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x51ca8e0, 0xc002ab0ad0, 0x0, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:46 +0xf5
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:40
k8s.io/kubernetes/test/e2e/framework/service.GetServiceLoadBalancerCreationTimeout(0x535a5a0, 0xc0024afa20, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service/resource.go:101 +0x76
k8s.io/kubernetes/test/e2e/framework/service.WaitForServiceUpdatedWithFinalizer(0x535a5a0, 0xc0024afa20, 0xc00392b7c0, 0x14, 0xc003982fd0, 0xc, 0x1bf08eb001)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/service/wait.go:61 +0xdb
k8s.io/kubernetes/test/e2e/upgrades.(*ServiceUpgradeTest).test(0x7b3bac0, 0xc000293ce0, 0xc00355c180, 0x35e0101)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/services.go:127 +0x1fa
k8s.io/kubernetes/test/e2e/upgrades.(*ServiceUpgradeTest).Test(0x7b3bac0, 0xc000293ce0, 0xc00355c180, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/services.go:91 +0x54
k8s.io/kubernetes/test/e2e/cloud/gcp.(*chaosMonkeyAdapter).Test(0xc002998600, 0xc0034c0b60)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:400 +0x369
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0034c0b60, 0xc003437680)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x93
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xab
				from junit_upgradeupgrades.xml



IsUp 30s

error during ./hack/e2e-internal/e2e-status.sh: exit status 1
				from junit_runner.xml



Kubernetes e2e suite [k8s.io] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] 22m50s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-cloud\-provider\-gcp\]\sUpgrade\s\[Feature\:Upgrade\]\scluster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:ClusterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:142
Oct  8 20:47:04.929: Unexpected error:
    <*errors.errorString | 0xc001e021a0>: {
        s: "error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.18.10-rc.0.34+92bf8a2b53f9bd]; got error exit status 1, stdout \"Fetching the previously installed CoreDNS version\\n\\n***WARNING***\\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\\nbefore running this script:\\n\\n# example: pin to etcd v3.0.17\\nexport ETCD_IMAGE=3.0.17\\nexport ETCD_VERSION=3.0.17\\n\\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\\n\\n== Pre-Upgrade Node OS and Kubelet Versions ==\\nname: \\\"bootstrap-e2e-master\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.13-rc.0.23+744dec9d5d21b5\\\"\\nname: \\\"bootstrap-e2e-minion-group-klgc\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.13-rc.0.23+744dec9d5d21b5\\\"\\nname: \\\"bootstrap-e2e-minion-group-v86z\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.13-rc.0.23+744dec9d5d21b5\\\"\\nname: \\\"bootstrap-e2e-minion-group-vbzr\\\", osImage: \\\"Container-Optimized OS from Google\\\", kubeletVersion: \\\"v1.17.13-rc.0.23+744dec9d5d21b5\\\"\\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\\nFailure trying to curl release .sha1\\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.18.10-rc.0.34+92bf8a2b53f9bd/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\\n== Upgrading master environment variables. ==\\n\", stderr \"Project: k8s-jkns-e2e-protobuf\\nNetwork Project: k8s-jkns-e2e-protobuf\\nZone: us-west1-b\\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\\nNODE_NAMES=bootstrap-e2e-minion-group-klgc bootstrap-e2e-minion-group-v86z bootstrap-e2e-minion-group-vbzr\\ncurl: (22) The requested URL returned error: 404 \\nTrying to find master named 'bootstrap-e2e-master'\\nLooking for address 'bootstrap-e2e-master-ip'\\nUsing master: bootstrap-e2e-master (external IP: 34.83.199.21; internal IP: (not set))\\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-protobuf/zones/us-west1-b/instances/bootstrap-e2e-master].\\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\\nWARNING: Some requests generated warnings:\\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\\n - The resource 'projects/cos-cloud/global/images/cos-77-12371-175-0' is deprecated. A suggested replacement is 'projects/cos-cloud/global/images/cos-77-12371-183-0'.\\n\\nERROR: (gcloud.compute.instances.create) Could not fetch resource:\\n - The zone 'projects/k8s-jkns-e2e-protobuf/zones/us-west1-b' does not have enough resources available to fulfill the request.  '(resource type:compute)'.\\nFailed to create master instance due to non-retryable error\\n\"",
    }
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.18.10-rc.0.34+92bf8a2b53f9bd]; got error exit status 1, stdout "Fetching the previously installed CoreDNS version\n\n***WARNING***\nUpgrading Kubernetes with this script might result in an upgrade to a new etcd version.\nSome etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.\nTo pin the etcd version to your current one (e.g. v3.0.17), set the following variables\nbefore running this script:\n\n# example: pin to etcd v3.0.17\nexport ETCD_IMAGE=3.0.17\nexport ETCD_VERSION=3.0.17\n\nAlternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,\nyou might still be able to downgrade Kubernetes by pinning to the newer etcd version.\nIn all cases, it is strongly recommended to have an etcd backup before upgrading.\n\n== Pre-Upgrade Node OS and Kubelet Versions ==\nname: \"bootstrap-e2e-master\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.13-rc.0.23+744dec9d5d21b5\"\nname: \"bootstrap-e2e-minion-group-klgc\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.13-rc.0.23+744dec9d5d21b5\"\nname: \"bootstrap-e2e-minion-group-v86z\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.13-rc.0.23+744dec9d5d21b5\"\nname: \"bootstrap-e2e-minion-group-vbzr\", osImage: \"Container-Optimized OS from Google\", kubeletVersion: \"v1.17.13-rc.0.23+744dec9d5d21b5\"\nFound subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e\nFailure trying to curl release .sha1\n== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.18.10-rc.0.34+92bf8a2b53f9bd/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==\n== Upgrading master environment variables. ==\n", stderr "Project: k8s-jkns-e2e-protobuf\nNetwork Project: k8s-jkns-e2e-protobuf\nZone: us-west1-b\nINSTANCE_GROUPS=bootstrap-e2e-minion-group\nNODE_NAMES=bootstrap-e2e-minion-group-klgc bootstrap-e2e-minion-group-v86z bootstrap-e2e-minion-group-vbzr\ncurl: (22) The requested URL returned error: 404 \nTrying to find master named 'bootstrap-e2e-master'\nLooking for address 'bootstrap-e2e-master-ip'\nUsing master: bootstrap-e2e-master (external IP: 34.83.199.21; internal IP: (not set))\nDeleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-protobuf/zones/us-west1-b/instances/bootstrap-e2e-master].\nWARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.\nWARNING: Some requests generated warnings:\n - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.\n - The resource 'projects/cos-cloud/global/images/cos-77-12371-175-0' is deprecated. A suggested replacement is 'projects/cos-cloud/global/images/cos-77-12371-183-0'.\n\nERROR: (gcloud.compute.instances.create) Could not fetch resource:\n - The zone 'projects/k8s-jkns-e2e-protobuf/zones/us-west1-b' does not have enough resources available to fulfill the request.  '(resource type:compute)'.\nFailed to create master instance due to non-retryable error\n"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cloud/gcp/cluster_upgrade.go:153
				
				from junit_upgrade01.xml



UpgradeTest 23m16s

error during kubetest --test --test_args=--ginkgo.focus=\[Feature:ClusterUpgrade\] --upgrade-target=ci/k8s-stable1 --upgrade-image=gci --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=upgrade --check-version-skew=false: exit status 1
				from junit_runner.xml



diffResources 0.00s

Error: 9 leaked resources
+NAME                                                            LOCATION    LOCATION_SCOPE  SIZE_GB  TYPE         STATUS
+bootstrap-e2e-dynamic-pvc-059248bf-1d7a-4a94-9756-cb3abaab3a96  us-west1-b  zone            1        pd-standard  READY
+bootstrap-e2e-dynamic-pvc-6d26c64e-f381-4fbd-b6e0-8a5c293bcc18  us-west1-b  zone            1        pd-standard  READY
+bootstrap-e2e-dynamic-pvc-74e3fb82-b251-413a-acc8-d828778d182c  us-west1-b  zone            1        pd-standard  READY
+bootstrap-e2e-dynamic-pvc-a448421e-cdd0-446d-977c-12a406370296  us-west1-b  zone            2        pd-standard  READY
+NAME                              REGION    IP_ADDRESS     IP_PROTOCOL  TARGET
+a7fe1c192fbd341bd83de51bfb26d748  us-west1  35.203.153.29  TCP          us-west1/targetPools/a7fe1c192fbd341bd83de51bfb26d748
+NAME                              REGION    SESSION_AFFINITY  BACKUP  HEALTH_CHECKS
+a7fe1c192fbd341bd83de51bfb26d748  us-west1  NONE                      k8s-360225ebf137ffa9-node
				from junit_runner.xml



15 Passed Tests

4993 Skipped Tests