Result: FAILURE
Tests: 1 failed / 5 succeeded
Started: 2020-01-15 15:11
Elapsed: 17m57s
Revision: master
resultstore: https://source.cloud.google.com/results/invocations/41c0f436-6bed-4b88-9ad1-2e816eaf10fa/targets/test
job-version: v1.18.0-alpha.1.755+05209312b74eac
revision: v1.18.0-alpha.1.755+05209312b74eac

Test Failures


Up 7m35s

error during ./hack/e2e-internal/e2e-up.sh: exit status 2
				from junit_runner.xml




Error lines from build-log.txt

... skipping 261 lines ...
Trying to find master named 'e2e-test-prow-master'
Looking for address 'e2e-test-prow-master-ip'
Using master: e2e-test-prow-master (external IP: 35.223.160.237; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

......................................................................................................Cluster failed to initialize within 300 seconds.
Last output from querying API server follows:
-----------------------------------------------------
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed to connect to 35.223.160.237 port 443: Connection refused
-----------------------------------------------------
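The connection-refused output above means the poll described earlier (repeatedly checking whether the Kubernetes API is reachable) never succeeded within the 300-second window. A minimal sketch of reproducing that check by hand, assuming the master's external IP is still allocated and that the API server serves /healthz on port 443 (neither holds after teardown):

  MASTER_IP=35.223.160.237   # external IP reported for e2e-test-prow-master above
  for _ in $(seq 1 100); do
    # -k skips certificate verification; any successful response means the API server is up
    if curl -sk --max-time 3 "https://${MASTER_IP}/healthz" >/dev/null; then
      echo "API server reachable"
      break
    fi
    printf '.'
    sleep 3
  done

Since this run tore the cluster down afterwards (see e2e-down.sh below), the address will no longer answer; the sketch only illustrates the check that timed out here.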
2020/01/15 15:20:25 process.go:155: Step './hack/e2e-internal/e2e-up.sh' finished in 7m35.512237984s
2020/01/15 15:20:25 e2e.go:534: Dumping logs locally to: /logs/artifacts
2020/01/15 15:20:25 process.go:153: Running: ./cluster/log-dump/log-dump.sh /logs/artifacts
Checking for custom logdump instances, if any
Sourcing kube-util.sh
... skipping 11 lines ...
Specify --start=46902 in the next get-serial-port-output invocation to get only the new output starting from here.
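The hint above corresponds to gcloud's serial-port command; a hypothetical invocation (instance name taken from the master being dumped here, project from the teardown output further down, zone left to the active gcloud config) would be:

  gcloud compute instances get-serial-port-output e2e-test-prow-master \
      --project=k8s-jkns-gce-soak-1-4 --start=46902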
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/kube-addon-manager.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from e2e-test-prow-minion-group-cmtf
... skipping 6 lines ...

Specify --start=49293 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=e2e-test-prow-minion-group
NODE_NAMES=e2e-test-prow-minion-group-04v9 e2e-test-prow-minion-group-cmtf e2e-test-prow-minion-group-dvlg
Failures for e2e-test-prow-minion-group (if any):
2020/01/15 15:22:13 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts' finished in 1m48.675167354s
2020/01/15 15:22:13 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
Project: k8s-jkns-gce-soak-1-4
... skipping 46 lines ...
W0115 15:29:27.315295   10025 loader.go:223] Config not found: /root/.kube/config
Property "contexts.k8s-jkns-gce-soak-1-4_e2e-test-prow" unset.
Cleared config for k8s-jkns-gce-soak-1-4_e2e-test-prow from /root/.kube/config
Done
2020/01/15 15:29:27 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 7m13.622779672s
2020/01/15 15:29:27 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2020/01/15 15:29:39 main.go:316: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 2
2020/01/15 15:29:39 e2e.go:82: err: exit status 1
exit status 1
make: *** [Makefile:54: e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
[Barnacle] 2020/01/15 15:29:39 Cleaning up Docker data root...
[Barnacle] 2020/01/15 15:29:39 Removing all containers.
... skipping 12 lines ...