PR: liggitt: Ensure all new API versions of resources default to DeleteDependents
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2018-12-07 00:57
Elapsed: 27m29s
Version: v1.14.0-alpha.0.895+1dc10e7238a23e
Builder: gke-prow-default-pool-3c8994a8-tc56
Refs: master:1cd6ccb3, 71792:a215829e
pod: 07782b81-f9bb-11e8-8e0e-0a580a6c001a
infra-commit: d88a10807
job-version: v1.14.0-alpha.0.895+1dc10e7238a23e
repo: k8s.io/kubernetes
repo-commit: 1dc10e7238a23e662c32ccaefca66ef077175812
repos: {u'k8s.io/kubernetes': u'master:1cd6ccb34458def1347ae96b2e8aacb5338f8e1d,71792:a215829eeccccd7df6d4f15030e3ccc28c398ea8', u'k8s.io/release': u'master'}

Test Failures


Up 3m19s

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
				from junit_runner.xml
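The failure mode, per the log below, is that cluster bring-up never completed: kube-up.sh looked for the master's reserved IP address resource 'e2e-71792-ac87c-master-ip' in region us-east1, gcloud reported it as not found, and the job aborted before any e2e tests ran. A minimal manual check of that same lookup, assuming gcloud access to the k8s-presubmit-scale project (resource, project, and region names are taken verbatim from the log):

# Re-run the address lookup that failed during e2e-up;
# a NOT_FOUND error here reproduces the bring-up failure.
gcloud compute addresses describe e2e-71792-ac87c-master-ip \
    --project k8s-presubmit-scale \
    --region us-east1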

Error lines from build-log.txt

... skipping 10 lines ...
I1207 00:57:36.946] process 233 exited with code 0 after 0.0m
I1207 00:57:36.947] Call:  gcloud config get-value account
I1207 00:57:37.223] process 246 exited with code 0 after 0.0m
I1207 00:57:37.224] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1207 00:57:37.224] Call:  kubectl get -oyaml pods/07782b81-f9bb-11e8-8e0e-0a580a6c001a
W1207 00:57:37.334] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E1207 00:57:37.337] Command failed
I1207 00:57:37.337] process 259 exited with code 1 after 0.0m
E1207 00:57:37.337] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/07782b81-f9bb-11e8-8e0e-0a580a6c001a']' returned non-zero exit status 1
I1207 00:57:37.338] Root: /go/src
I1207 00:57:37.338] cd to /go/src
I1207 00:57:37.338] Checkout: /go/src/k8s.io/kubernetes master:1cd6ccb34458def1347ae96b2e8aacb5338f8e1d,71792:a215829eeccccd7df6d4f15030e3ccc28c398ea8 to /go/src/k8s.io/kubernetes
I1207 00:57:37.338] Call:  git init k8s.io/kubernetes
... skipping 1000 lines ...
I1207 01:18:53.856] Waiting for group to become stable, current operations: creating: 7
I1207 01:18:53.856] Group is stable
W1207 01:18:56.300] INSTANCE_GROUPS=e2e-71792-ac87c-minion-group
W1207 01:18:56.301] NODE_NAMES=e2e-71792-ac87c-minion-group-3czb e2e-71792-ac87c-minion-group-3h2q e2e-71792-ac87c-minion-group-7607 e2e-71792-ac87c-minion-group-kwv6 e2e-71792-ac87c-minion-group-nn7r e2e-71792-ac87c-minion-group-tr80 e2e-71792-ac87c-minion-group-v53d
W1207 01:18:56.301] Trying to find master named 'e2e-71792-ac87c-master'
W1207 01:18:56.301] Looking for address 'e2e-71792-ac87c-master-ip'
W1207 01:18:56.939] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W1207 01:18:56.939]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-71792-ac87c-master-ip' was not found
W1207 01:18:56.939] 
W1207 01:18:57.005] Could not detect Kubernetes master node.  Make sure you've launched a cluster with 'kube-up.sh'
W1207 01:18:57.012] 2018/12/07 01:18:57 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 3m19.726157215s
W1207 01:18:57.012] 2018/12/07 01:18:57 e2e.go:538: Dumping logs locally to: /workspace/_artifacts
W1207 01:18:57.012] 2018/12/07 01:18:57 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts
... skipping 3 lines ...
I1207 01:18:57.164] Sourcing kube-util.sh
I1207 01:18:57.164] Detecting project
I1207 01:18:57.164] Project: k8s-presubmit-scale
I1207 01:18:57.164] Network Project: k8s-presubmit-scale
I1207 01:18:57.164] Zone: us-east1-b
I1207 01:18:57.164] Dumping logs from master locally to '/workspace/_artifacts'
W1207 01:18:57.670] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W1207 01:18:57.671]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-71792-ac87c-master-ip' was not found
W1207 01:18:57.671] 
W1207 01:18:57.752] Could not detect Kubernetes master node.  Make sure you've launched a cluster with 'kube-up.sh'
I1207 01:18:57.852] Master not detected. Is the cluster up?
I1207 01:18:57.853] Dumping logs from nodes locally to '/workspace/_artifacts'
I1207 01:18:57.853] Detecting nodes in the cluster
... skipping 29 lines ...
W1207 01:20:49.004] scp: /var/log/node-problem-detector.log*: No such file or directory
W1207 01:20:49.004] scp: /var/log/kubelet.cov*: No such file or directory
W1207 01:20:49.004] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W1207 01:20:49.004] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W1207 01:20:49.005] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W1207 01:20:49.005] scp: /var/log/startupscript.log*: No such file or directory
W1207 01:20:49.040] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W1207 01:20:49.073] scp: /var/log/fluentd.log*: No such file or directory
W1207 01:20:49.073] scp: /var/log/node-problem-detector.log*: No such file or directory
W1207 01:20:49.073] scp: /var/log/kubelet.cov*: No such file or directory
W1207 01:20:49.073] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W1207 01:20:49.074] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W1207 01:20:49.074] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W1207 01:20:49.074] scp: /var/log/startupscript.log*: No such file or directory
W1207 01:20:49.077] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W1207 01:20:49.362] scp: /var/log/fluentd.log*: No such file or directory
W1207 01:20:49.362] scp: /var/log/node-problem-detector.log*: No such file or directory
W1207 01:20:49.362] scp: /var/log/kubelet.cov*: No such file or directory
W1207 01:20:49.362] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W1207 01:20:49.362] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W1207 01:20:49.363] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W1207 01:20:49.363] scp: /var/log/startupscript.log*: No such file or directory
W1207 01:20:49.365] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W1207 01:20:49.427] scp: /var/log/fluentd.log*: No such file or directory
W1207 01:20:49.428] scp: /var/log/node-problem-detector.log*: No such file or directory
W1207 01:20:49.428] scp: /var/log/kubelet.cov*: No such file or directory
W1207 01:20:49.428] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W1207 01:20:49.428] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W1207 01:20:49.428] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W1207 01:20:49.428] scp: /var/log/startupscript.log*: No such file or directory
W1207 01:20:49.431] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W1207 01:20:49.815] scp: /var/log/fluentd.log*: No such file or directory
W1207 01:20:49.815] scp: /var/log/node-problem-detector.log*: No such file or directory
W1207 01:20:49.815] scp: /var/log/kubelet.cov*: No such file or directory
W1207 01:20:49.815] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W1207 01:20:49.815] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W1207 01:20:49.816] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W1207 01:20:49.816] scp: /var/log/startupscript.log*: No such file or directory
W1207 01:20:49.818] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W1207 01:20:50.031] scp: /var/log/fluentd.log*: No such file or directory
W1207 01:20:50.032] scp: /var/log/node-problem-detector.log*: No such file or directory
W1207 01:20:50.032] scp: /var/log/kubelet.cov*: No such file or directory
W1207 01:20:50.032] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W1207 01:20:50.032] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W1207 01:20:50.032] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W1207 01:20:50.032] scp: /var/log/startupscript.log*: No such file or directory
W1207 01:20:50.035] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W1207 01:20:50.542] scp: /var/log/fluentd.log*: No such file or directory
W1207 01:20:50.542] scp: /var/log/node-problem-detector.log*: No such file or directory
W1207 01:20:50.542] scp: /var/log/kubelet.cov*: No such file or directory
W1207 01:20:50.542] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W1207 01:20:50.542] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W1207 01:20:50.543] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W1207 01:20:50.543] scp: /var/log/startupscript.log*: No such file or directory
W1207 01:20:50.545] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W1207 01:20:50.624] 2018/12/07 01:20:50 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 1m53.611701659s
W1207 01:20:50.624] 2018/12/07 01:20:50 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
W1207 01:20:50.662] Project: k8s-presubmit-scale
W1207 01:20:50.663] Network Project: k8s-presubmit-scale
W1207 01:20:50.663] Zone: us-east1-b
I1207 01:20:50.763] Shutting down test cluster in background.
... skipping 6 lines ...
W1207 01:20:52.524] Zone: us-east1-b
W1207 01:20:54.308] INSTANCE_GROUPS=e2e-71792-ac87c-minion-group
W1207 01:20:54.308] NODE_NAMES=e2e-71792-ac87c-minion-group-3czb e2e-71792-ac87c-minion-group-3h2q e2e-71792-ac87c-minion-group-7607 e2e-71792-ac87c-minion-group-kwv6 e2e-71792-ac87c-minion-group-nn7r e2e-71792-ac87c-minion-group-tr80 e2e-71792-ac87c-minion-group-v53d
I1207 01:20:54.409] Bringing down cluster
W1207 01:20:56.471] Deleting Managed Instance Group...
W1207 01:20:56.810] done.
W1207 01:20:56.813] ERROR: (gcloud.compute.instance-groups.managed.delete) Some requests did not succeed:
W1207 01:20:56.814]  - The resource 'projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-71792-ac87c-minion-group' is not ready
W1207 01:20:56.814] 
W1207 01:20:56.886] Failed to delete instance group(s).
W1207 01:20:58.664] ERROR: (gcloud.compute.instance-templates.delete) Could not fetch resource:
W1207 01:20:58.665]  - The instance_template resource 'projects/k8s-presubmit-scale/global/instanceTemplates/e2e-71792-ac87c-minion-template' is already being used by 'projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-71792-ac87c-minion-group'
W1207 01:20:58.665] 
W1207 01:21:06.231] Warning: Permanently added 'compute.6585432602203944150' (RSA) to the list of known hosts.
I1207 01:21:06.715] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-71792-ac87c-master, port: 2379, result: 0
I1207 01:21:08.174] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-71792-ac87c-master, port: 4002, result: 0
W1207 01:21:14.357] Updated [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-71792-ac87c-master].
W1207 01:23:39.376] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-71792-ac87c-master].
W1207 01:24:05.456] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-71792-ac87c-master-https].
W1207 01:24:28.305] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-71792-ac87c-default-internal-master].
W1207 01:24:29.128] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-71792-ac87c-default-internal-node].
W1207 01:24:34.984] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-71792-ac87c-default-ssh].
I1207 01:24:36.012] Deleting firewall rules remaining in network e2e-71792-ac87c: 
I1207 01:24:36.726] Deleting custom subnet...
W1207 01:24:37.738] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W1207 01:24:37.739]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/subnetworks/e2e-71792-ac87c-custom-subnet' is not ready
W1207 01:24:37.739] 
W1207 01:24:46.497] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W1207 01:24:46.498]  - The network resource 'projects/k8s-presubmit-scale/global/networks/e2e-71792-ac87c' is already being used by 'projects/k8s-presubmit-scale/regions/us-east1/subnetworks/e2e-71792-ac87c-custom-subnet'
W1207 01:24:46.498] 
I1207 01:24:46.598] Failed to delete network 'e2e-71792-ac87c'. Listing firewall-rules:
W1207 01:24:47.427] 
W1207 01:24:47.427] To show all fields of the firewall, please show in JSON format: --format=json
W1207 01:24:47.427] To show all fields in table format, please see the examples in --help.
W1207 01:24:47.427] 
I1207 01:24:47.765] Property "clusters.k8s-presubmit-scale_e2e-71792-ac87c" unset.
I1207 01:24:47.894] Property "users.k8s-presubmit-scale_e2e-71792-ac87c" unset.
I1207 01:24:48.024] Property "users.k8s-presubmit-scale_e2e-71792-ac87c-basic-auth" unset.
I1207 01:24:48.159] Property "contexts.k8s-presubmit-scale_e2e-71792-ac87c" unset.
I1207 01:24:48.163] Cleared config for k8s-presubmit-scale_e2e-71792-ac87c from /workspace/.kube/config
I1207 01:24:48.164] Done
W1207 01:24:48.264] 2018/12/07 01:24:48 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 3m57.54158034s
W1207 01:24:48.265] 2018/12/07 01:24:48 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W1207 01:24:48.265] 2018/12/07 01:24:48 main.go:313: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
W1207 01:24:48.265] Traceback (most recent call last):
W1207 01:24:48.265]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 758, in <module>
W1207 01:24:48.265]     main(parse_args())
W1207 01:24:48.265]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 609, in main
W1207 01:24:48.265]     mode.start(runner_args)
W1207 01:24:48.266]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W1207 01:24:48.266]     check_env(env, self.command, *args)
W1207 01:24:48.266]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W1207 01:24:48.266]     subprocess.check_call(cmd, env=env)
W1207 01:24:48.266]   File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
W1207 01:24:48.266]     raise CalledProcessError(retcode, cmd)
W1207 01:24:48.267] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-kubemark-e2e-gce-big', '--up', '--down', '--provider=gce', '--cluster=e2e-71792-ac87c', '--gcp-network=e2e-71792-ac87c', '--extract=local', '--gcp-master-size=n1-standard-4', '--gcp-node-size=n1-standard-8', '--gcp-nodes=7', '--gcp-project=k8s-presubmit-scale', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=500', '--test_args=--ginkgo.focus=\\[Feature:Performance\\] --gather-resource-usage=true --gather-metrics-at-teardown=true', '--timeout=100m')' returned non-zero exit status 1
E1207 01:24:48.270] Command failed
I1207 01:24:48.270] process 729 exited with code 1 after 26.1m
E1207 01:24:48.271] FAIL: pull-kubernetes-kubemark-e2e-gce-big
I1207 01:24:48.271] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1207 01:24:48.704] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I1207 01:24:48.795] process 87691 exited with code 0 after 0.0m
I1207 01:24:48.795] Call:  gcloud config get-value account
I1207 01:24:49.099] process 87704 exited with code 0 after 0.0m
I1207 01:24:49.100] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1207 01:24:49.100] Upload result and artifacts...
I1207 01:24:49.100] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/71792/pull-kubernetes-kubemark-e2e-gce-big/31381
I1207 01:24:49.100] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/71792/pull-kubernetes-kubemark-e2e-gce-big/31381/artifacts
W1207 01:24:50.845] CommandException: One or more URLs matched no objects.
E1207 01:24:51.062] Command failed
I1207 01:24:51.062] process 87717 exited with code 1 after 0.0m
W1207 01:24:51.062] Remote dir gs://kubernetes-jenkins/pr-logs/pull/71792/pull-kubernetes-kubemark-e2e-gce-big/31381/artifacts not exist yet
I1207 01:24:51.062] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/71792/pull-kubernetes-kubemark-e2e-gce-big/31381/artifacts
I1207 01:24:54.212] process 87862 exited with code 0 after 0.1m
I1207 01:24:54.213] Call:  git rev-parse HEAD
I1207 01:24:54.217] process 88535 exited with code 0 after 0.0m
... skipping 21 lines ...