PR oomichi: Add check-conformance-test-requirements.go
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2019-03-21 02:41
Elapsed: 36m32s
Revision: master:ed4258e5, 75524:e77f640b
Builder: gke-prow-containerd-pool-99179761-mrdv
pod: cb67d297-4b82-11e9-b1a5-0a580a6c1332
infra-commit: 524058c8d
job-version: v1.15.0-alpha.0.1381+e1c3fd4fa2b3a6
pod: cb67d297-4b82-11e9-b1a5-0a580a6c1332
repo: k8s.io/kubernetes
repo-commit: e1c3fd4fa2b3a66d0bffb4205b7cc6d5ab315b00
repos: {'k8s.io/kubernetes': 'master:ed4258e5c0d722425b1c7744b2bf09ad0d9fbfea,75524:e77f640b7bd4ceb820498545d97f28ef03c1f0fe', 'k8s.io/perf-tests': 'master', 'k8s.io/release': 'master'}
revision: v1.15.0-alpha.0.1381+e1c3fd4fa2b3a6

Test Failures


Up: 2m2s

error during ./hack/e2e-internal/e2e-up.sh: exit status 2
				from junit_runner.xml




Error lines from build-log.txt

... skipping 1040 lines ...
W0321 03:06:48.097] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-75524-ac87c-minion-group-dnmh].
W0321 03:07:08.118] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75524-ac87c-default-internal-master].
W0321 03:07:13.981] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75524-ac87c-default-internal-node].
W0321 03:07:14.573] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75524-ac87c-default-ssh].
I0321 03:07:15.788] Deleting firewall rules remaining in network e2e-75524-ac87c: 
I0321 03:07:17.239] Deleting custom subnet...
W0321 03:07:18.240] ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
W0321 03:07:18.240]  - The subnetwork resource 'projects/k8s-presubmit-scale/regions/us-east1/subnetworks/e2e-75524-ac87c-custom-subnet' is already being used by 'projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-75524-ac87c-minion-group-ccxw'
W0321 03:07:18.240] 
W0321 03:07:23.639] ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
W0321 03:07:23.639]  - The network resource 'projects/k8s-presubmit-scale/global/networks/e2e-75524-ac87c' is already being used by 'projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-75524-ac87c-minion-group'
W0321 03:07:23.639] 
I0321 03:07:23.740] Failed to delete network 'e2e-75524-ac87c'. Listing firewall-rules:
W0321 03:07:24.660] 
W0321 03:07:24.660] To show all fields of the firewall, please show in JSON format: --format=json
W0321 03:07:24.660] To show all fields in table format, please see the examples in --help.
W0321 03:07:24.660] 
I0321 03:07:25.008] Property "clusters.k8s-presubmit-scale_e2e-75524-ac87c" unset.
I0321 03:07:25.145] Property "users.k8s-presubmit-scale_e2e-75524-ac87c" unset.
... skipping 99 lines ...
W0321 03:09:24.030] 
W0321 03:09:24.030] NAME                    ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
W0321 03:09:24.030] e2e-75524-ac87c-master  us-east1-b  n1-standard-4               10.40.0.32   35.231.57.183  RUNNING
I0321 03:09:24.131] Creating nodes.
I0321 03:09:26.022] Using subnet e2e-75524-ac87c-custom-subnet
W0321 03:09:27.141] Instance template e2e-75524-ac87c-minion-template already exists; deleting.
W0321 03:09:28.316] Failed to delete existing instance template
W0321 03:09:28.325] 2019/03/21 03:09:28 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 2m2.869179941s
W0321 03:09:28.326] 2019/03/21 03:09:28 e2e.go:522: Dumping logs locally to: /workspace/_artifacts
W0321 03:09:28.326] 2019/03/21 03:09:28 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts
W0321 03:09:28.393] Trying to find master named 'e2e-75524-ac87c-master'
W0321 03:09:28.393] Looking for address 'e2e-75524-ac87c-master-ip'
I0321 03:09:28.493] Checking for custom logdump instances, if any
... skipping 17 lines ...
W0321 03:10:02.498] scp: /var/log/glbc.log*: No such file or directory
W0321 03:10:02.498] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0321 03:10:02.498] scp: /var/log/kube-addon-manager.log*: No such file or directory
W0321 03:10:02.498] scp: /var/log/fluentd.log*: No such file or directory
W0321 03:10:02.498] scp: /var/log/kubelet.cov*: No such file or directory
W0321 03:10:02.498] scp: /var/log/startupscript.log*: No such file or directory
W0321 03:10:02.503] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0321 03:10:02.612] Dumping logs from nodes locally to '/workspace/_artifacts'
I0321 03:10:02.612] Detecting nodes in the cluster
I0321 03:10:39.359] Changing logfiles to be world-readable for download
I0321 03:10:39.449] Changing logfiles to be world-readable for download
I0321 03:10:40.654] Changing logfiles to be world-readable for download
I0321 03:10:40.913] Changing logfiles to be world-readable for download
... skipping 19 lines ...
W0321 03:10:45.593] scp: /var/log/node-problem-detector.log*: No such file or directory
W0321 03:10:45.593] scp: /var/log/kubelet.cov*: No such file or directory
W0321 03:10:45.593] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0321 03:10:45.594] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0321 03:10:45.594] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0321 03:10:45.594] scp: /var/log/startupscript.log*: No such file or directory
W0321 03:10:45.597] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0321 03:10:45.649] 
W0321 03:10:45.649] Specify --start=42977 in the next get-serial-port-output invocation to get only the new output starting from here.
W0321 03:10:45.681] 
W0321 03:10:45.682] Specify --start=42780 in the next get-serial-port-output invocation to get only the new output starting from here.
W0321 03:10:45.717] scp: /var/log/fluentd.log*: No such file or directory
W0321 03:10:45.717] scp: /var/log/node-problem-detector.log*: No such file or directory
W0321 03:10:45.718] scp: /var/log/kubelet.cov*: No such file or directory
W0321 03:10:45.718] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0321 03:10:45.718] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0321 03:10:45.718] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0321 03:10:45.718] scp: /var/log/startupscript.log*: No such file or directory
W0321 03:10:45.722] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0321 03:10:45.801] 
W0321 03:10:45.802] Specify --start=42882 in the next get-serial-port-output invocation to get only the new output starting from here.
W0321 03:10:46.660] scp: /var/log/fluentd.log*: No such file or directory
W0321 03:10:46.660] scp: /var/log/node-problem-detector.log*: No such file or directory
W0321 03:10:46.661] scp: /var/log/kubelet.cov*: No such file or directory
W0321 03:10:46.661] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0321 03:10:46.661] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0321 03:10:46.661] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0321 03:10:46.661] scp: /var/log/startupscript.log*: No such file or directory
W0321 03:10:46.664] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0321 03:10:47.244] scp: /var/log/fluentd.log*: No such file or directory
W0321 03:10:47.244] scp: /var/log/node-problem-detector.log*: No such file or directory
W0321 03:10:47.245] scp: /var/log/kubelet.cov*: No such file or directory
W0321 03:10:47.245] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0321 03:10:47.245] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0321 03:10:47.245] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0321 03:10:47.245] scp: /var/log/startupscript.log*: No such file or directory
W0321 03:10:47.249] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0321 03:10:47.375] scp: /var/log/fluentd.log*: No such file or directory
W0321 03:10:47.375] scp: /var/log/node-problem-detector.log*: No such file or directory
W0321 03:10:47.375] scp: /var/log/kubelet.cov*: No such file or directory
W0321 03:10:47.376] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0321 03:10:47.376] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0321 03:10:47.376] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0321 03:10:47.376] scp: /var/log/startupscript.log*: No such file or directory
W0321 03:10:47.379] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0321 03:10:47.513] scp: /var/log/fluentd.log*: No such file or directory
W0321 03:10:47.513] scp: /var/log/node-problem-detector.log*: No such file or directory
W0321 03:10:47.513] scp: /var/log/kubelet.cov*: No such file or directory
W0321 03:10:47.513] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0321 03:10:47.514] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0321 03:10:47.514] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0321 03:10:47.514] scp: /var/log/startupscript.log*: No such file or directory
W0321 03:10:47.517] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0321 03:10:47.539] scp: /var/log/fluentd.log*: No such file or directory
W0321 03:10:47.539] scp: /var/log/node-problem-detector.log*: No such file or directory
W0321 03:10:47.539] scp: /var/log/kubelet.cov*: No such file or directory
W0321 03:10:47.539] scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
W0321 03:10:47.540] scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
W0321 03:10:47.540] scp: /var/log/npd-hollow-node-*.log*: No such file or directory
W0321 03:10:47.540] scp: /var/log/startupscript.log*: No such file or directory
W0321 03:10:47.543] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0321 03:10:50.641] INSTANCE_GROUPS=e2e-75524-ac87c-minion-group
W0321 03:10:50.642] NODE_NAMES=e2e-75524-ac87c-minion-group-2k2c e2e-75524-ac87c-minion-group-66vd e2e-75524-ac87c-minion-group-ccxw e2e-75524-ac87c-minion-group-dnmh e2e-75524-ac87c-minion-group-ksln e2e-75524-ac87c-minion-group-mqdl e2e-75524-ac87c-minion-group-nfrm
I0321 03:10:51.464] Failures for e2e-75524-ac87c-minion-group
W0321 03:10:52.046] 2019/03/21 03:10:52 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 1m23.720935868s
W0321 03:10:52.047] 2019/03/21 03:10:52 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
W0321 03:10:52.099] Project: k8s-presubmit-scale
... skipping 12 lines ...
I0321 03:10:57.579] Bringing down cluster
W0321 03:11:00.596] Deleting Managed Instance Group...
W0321 03:13:14.674] .............................Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instanceGroupManagers/e2e-75524-ac87c-minion-group].
W0321 03:13:14.675] done.
W0321 03:13:20.345] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-75524-ac87c-minion-template].
W0321 03:13:29.599] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/instanceTemplates/e2e-75524-ac87c-windows-node-template].
I0321 03:13:42.079] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-75524-ac87c-master, port: 2379, result: 0
I0321 03:13:43.599] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-75524-ac87c-master, port: 4002, result: 0
W0321 03:13:49.874] Updated [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-75524-ac87c-master].
W0321 03:15:53.920] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/zones/us-east1-b/instances/e2e-75524-ac87c-master].
W0321 03:16:20.411] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75524-ac87c-master-https].
W0321 03:16:21.194] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75524-ac87c-master-etcd].
W0321 03:16:22.092] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/global/firewalls/e2e-75524-ac87c-minion-all].
W0321 03:16:30.781] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-presubmit-scale/regions/us-east1/addresses/e2e-75524-ac87c-master-ip].
... skipping 9 lines ...
I0321 03:18:04.613] Property "users.k8s-presubmit-scale_e2e-75524-ac87c-basic-auth" unset.
I0321 03:18:04.748] Property "contexts.k8s-presubmit-scale_e2e-75524-ac87c" unset.
I0321 03:18:04.752] Cleared config for k8s-presubmit-scale_e2e-75524-ac87c from /workspace/.kube/config
I0321 03:18:04.752] Done
W0321 03:18:04.790] 2019/03/21 03:18:04 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 7m12.708544964s
W0321 03:18:04.790] 2019/03/21 03:18:04 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0321 03:18:04.791] 2019/03/21 03:18:04 main.go:307: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 2
W0321 03:18:04.791] Traceback (most recent call last):
W0321 03:18:04.791]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 764, in <module>
W0321 03:18:04.791]     main(parse_args())
W0321 03:18:04.791]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 615, in main
W0321 03:18:04.791]     mode.start(runner_args)
W0321 03:18:04.792]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0321 03:18:04.792]     check_env(env, self.command, *args)
W0321 03:18:04.792]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0321 03:18:04.792]     subprocess.check_call(cmd, env=env)
W0321 03:18:04.792]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0321 03:18:04.792]     raise CalledProcessError(retcode, cmd)
W0321 03:18:04.793] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-kubemark-e2e-gce-big', '--up', '--down', '--provider=gce', '--cluster=e2e-75524-ac87c', '--gcp-network=e2e-75524-ac87c', '--extract=local', '--gcp-master-size=n1-standard-4', '--gcp-node-size=n1-standard-8', '--gcp-nodes=7', '--gcp-project=k8s-presubmit-scale', '--gcp-zone=us-east1-b', '--kubemark', '--kubemark-nodes=500', '--test_args=--ginkgo.focus=xxxx', '--test-cmd=/go/src/k8s.io/perf-tests/run-e2e.sh', '--test-cmd-args=cluster-loader2', '--test-cmd-args=--nodes=500', '--test-cmd-args=--provider=kubemark', '--test-cmd-args=--report-dir=/workspace/_artifacts', '--test-cmd-args=--testconfig=testing/density/config.yaml', '--test-cmd-args=--testconfig=testing/load/config.yaml', '--test-cmd-args=--testoverrides=./testing/load/kubemark/500_nodes/override.yaml', '--test-cmd-name=ClusterLoaderV2', '--timeout=100m')' returned non-zero exit status 1
E0321 03:18:04.793] Command failed
I0321 03:18:04.794] process 720 exited with code 1 after 35.3m
E0321 03:18:04.794] FAIL: pull-kubernetes-kubemark-e2e-gce-big
I0321 03:18:04.795] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0321 03:18:05.245] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0321 03:18:05.311] process 81809 exited with code 0 after 0.0m
I0321 03:18:05.312] Call:  gcloud config get-value account
I0321 03:18:05.636] process 81821 exited with code 0 after 0.0m
I0321 03:18:05.636] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0321 03:18:05.636] Upload result and artifacts...
I0321 03:18:05.636] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/75524/pull-kubernetes-kubemark-e2e-gce-big/41852
I0321 03:18:05.637] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/75524/pull-kubernetes-kubemark-e2e-gce-big/41852/artifacts
W0321 03:18:06.746] CommandException: One or more URLs matched no objects.
E0321 03:18:06.901] Command failed
I0321 03:18:06.902] process 81833 exited with code 1 after 0.0m
W0321 03:18:06.902] Remote dir gs://kubernetes-jenkins/pr-logs/pull/75524/pull-kubernetes-kubemark-e2e-gce-big/41852/artifacts not exist yet
I0321 03:18:06.902] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/75524/pull-kubernetes-kubemark-e2e-gce-big/41852/artifacts
I0321 03:18:09.580] process 81975 exited with code 0 after 0.0m
I0321 03:18:09.581] Call:  git rev-parse HEAD
I0321 03:18:09.584] process 82653 exited with code 0 after 0.0m
... skipping 21 lines ...