PR: liggitt: Ensure all new API versions of resources default to DeleteDependents
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2018-12-07 00:59
Elapsed: 26m21s
Version: v1.14.0-alpha.0.895+96b769c99178e8
Builder: gke-prow-default-pool-3c8994a8-w3w6
Refs: master:1cd6ccb3, 71792:3b099ddf
pod: 408a8a7a-f9bb-11e8-bc26-0a580a6c030d
infra-commit: d88a10807
job-version: v1.14.0-alpha.0.895+96b769c99178e8
repo: k8s.io/kubernetes
repo-commit: 96b769c99178e8f6f1a51e9291eacd398ae04225
repos: {u'k8s.io/kubernetes': u'master:1cd6ccb34458def1347ae96b2e8aacb5338f8e1d,71792:3b099ddf860ee60f8a9d3670310e6f636c2c4b76', u'k8s.io/release': u'master'}

Test Failures


Up 47s

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
				from junit_runner.xml




Error lines from build-log.txt

... skipping 10 lines ...
I1207 00:59:39.147] process 233 exited with code 0 after 0.0m
I1207 00:59:39.148] Call:  gcloud config get-value account
I1207 00:59:39.383] process 246 exited with code 0 after 0.0m
I1207 00:59:39.383] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1207 00:59:39.383] Call:  kubectl get -oyaml pods/408a8a7a-f9bb-11e8-bc26-0a580a6c030d
W1207 00:59:39.477] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E1207 00:59:39.480] Command failed
I1207 00:59:39.480] process 259 exited with code 1 after 0.0m
E1207 00:59:39.480] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/408a8a7a-f9bb-11e8-bc26-0a580a6c030d']' returned non-zero exit status 1
I1207 00:59:39.481] Root: /go/src
I1207 00:59:39.481] cd to /go/src
I1207 00:59:39.481] Checkout: /go/src/k8s.io/kubernetes master:1cd6ccb34458def1347ae96b2e8aacb5338f8e1d,71792:3b099ddf860ee60f8a9d3670310e6f636c2c4b76 to /go/src/k8s.io/kubernetes
I1207 00:59:39.481] Call:  git init k8s.io/kubernetes
... skipping 885 lines ...
W1207 01:18:24.259] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-71792-ac87c-default-ssh].
I1207 01:18:25.228] Deleting firewall rules remaining in network e2e-71792-ac87c: e2e-71792-ac87c-master-etcd
I1207 01:18:25.228] e2e-71792-ac87c-minion-all
W1207 01:18:42.989] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-71792-ac87c-master-etcd].
W1207 01:18:48.744] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-71792-ac87c-minion-all].
I1207 01:18:49.786] Deleting custom subnet...
W1207 01:18:50.685] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W1207 01:18:50.685]  - The subnetwork resource 'projects/k8s-presubmit-scale/regions/us-east1/subnetworks/e2e-71792-ac87c-custom-subnet' is already being used by 'projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-71792-ac87c-minion-group-v53d'
W1207 01:18:50.686] 
W1207 01:18:55.821] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W1207 01:18:55.821]  - The network resource 'projects/k8s-presubmit-scale/global/networks/e2e-71792-ac87c' is already being used by 'projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-71792-ac87c-minion-group'
W1207 01:18:55.822] 
I1207 01:18:55.922] Failed to delete network 'e2e-71792-ac87c'. Listing firewall-rules:
W1207 01:18:56.765] 
W1207 01:18:56.765] To show all fields of the firewall, please show in JSON format: --format=json
W1207 01:18:56.766] To show all fields in table format, please see the examples in --help.
W1207 01:18:56.766] 
I1207 01:18:57.064] Property "clusters.k8s-presubmit-scale_e2e-71792-ac87c" unset.
I1207 01:18:57.180] Property "users.k8s-presubmit-scale_e2e-71792-ac87c" unset.
... skipping 22 lines ...
W1207 01:19:42.334] Creating firewall...
I1207 01:19:42.601] IP aliases are enabled. Creating subnetworks.
W1207 01:19:43.080] .Creating firewall...
W1207 01:19:43.855] ..Creating firewall...
I1207 01:19:43.956] Using subnet e2e-71792-ac87c-custom-subnet
I1207 01:19:43.956] Starting master and configuring firewalls
W1207 01:19:44.688] ..ERROR: (gcloud.compute.disks.create) Could not fetch resource:
W1207 01:19:44.688]  - The resource 'projects/k8s-presubmit-scale/zones/us-east1-b/disks/e2e-71792-ac87c-master-pd' already exists
W1207 01:19:44.689] 
W1207 01:19:44.773] .2018/12/07 01:19:44 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 47.36050046s
W1207 01:19:44.773] 2018/12/07 01:19:44 e2e.go:538: Dumping logs locally to: /workspace/_artifacts
W1207 01:19:44.774] 2018/12/07 01:19:44 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts
W1207 01:19:44.819] Trying to find master named 'e2e-71792-ac87c-master'
W1207 01:19:44.820] Looking for address 'e2e-71792-ac87c-master-ip'
W1207 01:19:45.471] ...ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W1207 01:19:45.471]  - The resource 'projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-71792-ac87c-master-ip' was not found
W1207 01:19:45.471] 
W1207 01:19:45.544] Could not detect Kubernetes master node.  Make sure you've launched a cluster with 'kube-up.sh'
I1207 01:19:45.645] Checking for custom logdump instances, if any
I1207 01:19:45.645] Sourcing kube-util.sh
I1207 01:19:45.645] Detecting project
... skipping 52 lines ...
W1207 01:20:40.084] scp: /var/log/node-problem-detector.log*: No such file or directory
W1207 01:20:40.084] scp: /var/log/kubelet.cov*: No such file or directory
W1207 01:20:40.084] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W1207 01:20:40.084] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W1207 01:20:40.084] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W1207 01:20:40.085] scp: /var/log/startupscript.log*: No such file or directory
W1207 01:20:40.100] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W1207 01:20:40.790] scp: /var/log/fluentd.log*: No such file or directory
W1207 01:20:40.791] scp: /var/log/node-problem-detector.log*: No such file or directory
W1207 01:20:40.791] scp: /var/log/kubelet.cov*: No such file or directory
W1207 01:20:40.791] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W1207 01:20:40.791] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W1207 01:20:40.791] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W1207 01:20:40.791] scp: /var/log/startupscript.log*: No such file or directory
W1207 01:20:40.794] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W1207 01:20:40.859] scp: /var/log/fluentd.log*: No such file or directory
W1207 01:20:40.859] scp: /var/log/node-problem-detector.log*: No such file or directory
W1207 01:20:40.859] scp: /var/log/kubelet.cov*: No such file or directory
W1207 01:20:40.859] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W1207 01:20:40.860] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W1207 01:20:40.860] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W1207 01:20:40.860] scp: /var/log/startupscript.log*: No such file or directory
W1207 01:20:40.863] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W1207 01:20:41.055] scp: /var/log/fluentd.log*: No such file or directory
W1207 01:20:41.055] scp: /var/log/node-problem-detector.log*: No such file or directory
W1207 01:20:41.055] scp: /var/log/kubelet.cov*: No such file or directory
W1207 01:20:41.055] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W1207 01:20:41.055] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W1207 01:20:41.056] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
... skipping 2 lines ...
W1207 01:20:41.056] scp: /var/log/node-problem-detector.log*: No such file or directory
W1207 01:20:41.056] scp: /var/log/kubelet.cov*: No such file or directory
W1207 01:20:41.056] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W1207 01:20:41.056] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W1207 01:20:41.057] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W1207 01:20:41.057] scp: /var/log/startupscript.log*: No such file or directory
W1207 01:20:41.059] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W1207 01:20:41.059] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W1207 01:20:41.132] scp: /var/log/fluentd.log*: No such file or directory
W1207 01:20:41.132] scp: /var/log/node-problem-detector.log*: No such file or directory
W1207 01:20:41.133] scp: /var/log/kubelet.cov*: No such file or directory
W1207 01:20:41.133] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W1207 01:20:41.133] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W1207 01:20:41.133] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W1207 01:20:41.133] scp: /var/log/startupscript.log*: No such file or directory
W1207 01:20:41.135] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W1207 01:20:41.589] scp: /var/log/fluentd.log*: No such file or directory
W1207 01:20:41.590] scp: /var/log/node-problem-detector.log*: No such file or directory
W1207 01:20:41.590] scp: /var/log/kubelet.cov*: No such file or directory
W1207 01:20:41.590] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W1207 01:20:41.590] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W1207 01:20:41.591] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W1207 01:20:41.591] scp: /var/log/startupscript.log*: No such file or directory
W1207 01:20:41.594] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W1207 01:20:41.665] 2018/12/07 01:20:41 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 56.892194677s
W1207 01:20:41.665] 2018/12/07 01:20:41 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
W1207 01:20:41.702] Project: k8s-presubmit-scale
W1207 01:20:41.702] Network Project: k8s-presubmit-scale
W1207 01:20:41.702] Zone: us-east1-b
I1207 01:20:41.803] Shutting down test cluster in background.
... skipping 8 lines ...
W1207 01:20:45.148] NODE_NAMES=e2e-71792-ac87c-minion-group-3czb e2e-71792-ac87c-minion-group-3h2q e2e-71792-ac87c-minion-group-7607 e2e-71792-ac87c-minion-group-kwv6 e2e-71792-ac87c-minion-group-nn7r e2e-71792-ac87c-minion-group-tr80 e2e-71792-ac87c-minion-group-v53d
I1207 01:20:45.249] Bringing down cluster
W1207 01:20:47.380] Deleting Managed Instance Group...
W1207 01:23:38.376] ....................................Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-71792-ac87c-minion-group].
W1207 01:23:38.376] done.
W1207 01:23:46.958] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-71792-ac87c-minion-template].
W1207 01:23:58.836] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W1207 01:23:58.836]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-71792-ac87c-master-https' is not ready
W1207 01:23:58.836] 
W1207 01:23:59.462] Failed to delete firewall rules.
W1207 01:24:17.066] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W1207 01:24:17.067]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-71792-ac87c-default-internal-master' is not ready
W1207 01:24:17.067] 
W1207 01:24:18.057] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W1207 01:24:18.057]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-71792-ac87c-default-internal-node' is not ready
W1207 01:24:18.057] 
W1207 01:24:19.127] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W1207 01:24:19.127]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-71792-ac87c-default-ssh' is not ready
W1207 01:24:19.127] 
W1207 01:24:19.200] Failed to delete firewall rules.
I1207 01:24:19.983] Deleting firewall rules remaining in network e2e-71792-ac87c: e2e-71792-ac87c-default-internal-master
I1207 01:24:19.984] e2e-71792-ac87c-default-internal-node
I1207 01:24:19.984] e2e-71792-ac87c-default-ssh
W1207 01:24:21.665] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W1207 01:24:21.666]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-71792-ac87c-default-internal-master' is not ready
W1207 01:24:21.666] 
W1207 01:24:22.463] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W1207 01:24:22.464]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-71792-ac87c-default-internal-node' is not ready
W1207 01:24:22.464] 
W1207 01:24:23.297] ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
W1207 01:24:23.298]  - The resource 'projects/k8s-presubmit-scale/global/firewalls/e2e-71792-ac87c-default-ssh' is not ready
W1207 01:24:23.298] 
W1207 01:24:23.376] Failed to delete firewall rules.
I1207 01:24:24.052] Deleting custom subnet...
W1207 01:24:57.371] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/regions/us-east1/subnetworks/e2e-71792-ac87c-custom-subnet].
W1207 01:25:41.067] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/networks/e2e-71792-ac87c].
I1207 01:25:41.445] Property "clusters.k8s-presubmit-scale_e2e-71792-ac87c" unset.
I1207 01:25:41.561] Property "users.k8s-presubmit-scale_e2e-71792-ac87c" unset.
I1207 01:25:41.678] Property "users.k8s-presubmit-scale_e2e-71792-ac87c-basic-auth" unset.
I1207 01:25:41.792] Property "contexts.k8s-presubmit-scale_e2e-71792-ac87c" unset.
I1207 01:25:41.795] Cleared config for k8s-presubmit-scale_e2e-71792-ac87c from /workspace/.kube/config
I1207 01:25:41.795] Done
W1207 01:25:41.893] 2018/12/07 01:25:41 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 5m0.131651894s
W1207 01:25:41.894] 2018/12/07 01:25:41 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W1207 01:25:41.894] 2018/12/07 01:25:41 main.go:313: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
W1207 01:25:41.894] Traceback (most recent call last):
W1207 01:25:41.894]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 758, in <module>
W1207 01:25:41.894]     main(parse_args())
W1207 01:25:41.894]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 609, in main
W1207 01:25:41.895]     mode.start(runner_args)
W1207 01:25:41.895]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W1207 01:25:41.895]     check_env(env, self.command, *args)
W1207 01:25:41.895]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W1207 01:25:41.895]     subprocess.check_call(cmd, env=env)
W1207 01:25:41.895]   File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
W1207 01:25:41.895]     raise CalledProcessError(retcode, cmd)
W1207 01:25:41.896] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-kubemark-e2e-gce-big', '--up', '--down', '--provider=gce', '--cluster=e2e-71792-ac87c', '--gcp-network=e2e-71792-ac87c', '--extract=local', '--gcp-master-size=n1-standard-4', '--gcp-node-size=n1-standard-8', '--gcp-nodes=7', '--gcp-project=k8s-presubmit-scale', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=500', '--test_args=--ginkgo.focus=\\[Feature:Performance\\] --gather-resource-usage=true --gather-metrics-at-teardown=true', '--timeout=100m')' returned non-zero exit status 1
E1207 01:25:41.896] Command failed
I1207 01:25:41.896] process 731 exited with code 1 after 25.0m
E1207 01:25:41.896] FAIL: pull-kubernetes-kubemark-e2e-gce-big
I1207 01:25:41.897] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1207 01:25:42.431] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I1207 01:25:42.492] process 85865 exited with code 0 after 0.0m
I1207 01:25:42.492] Call:  gcloud config get-value account
I1207 01:25:42.810] process 85878 exited with code 0 after 0.0m
I1207 01:25:42.810] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I1207 01:25:42.810] Upload result and artifacts...
I1207 01:25:42.811] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/71792/pull-kubernetes-kubemark-e2e-gce-big/31382
I1207 01:25:42.811] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/71792/pull-kubernetes-kubemark-e2e-gce-big/31382/artifacts
W1207 01:25:44.454] CommandException: One or more URLs matched no objects.
E1207 01:25:44.626] Command failed
I1207 01:25:44.626] process 85891 exited with code 1 after 0.0m
W1207 01:25:44.627] Remote dir gs://kubernetes-jenkins/pr-logs/pull/71792/pull-kubernetes-kubemark-e2e-gce-big/31382/artifacts not exist yet
I1207 01:25:44.627] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/71792/pull-kubernetes-kubemark-e2e-gce-big/31382/artifacts
I1207 01:25:48.127] process 86036 exited with code 0 after 0.1m
I1207 01:25:48.128] Call:  git rev-parse HEAD
I1207 01:25:48.131] process 86709 exited with code 0 after 0.0m
... skipping 21 lines ...