PR caesarxuchao: Adding e2e test checking the triggering controller working with the migrator
Result: FAILURE
Tests: 1 failed / 15 succeeded
Started: 2019-03-18 19:42
Elapsed: 16m43s
Revision: 1185508233935c3109eb42a482b64a8f2ee08a44
Refs: 25
job-version: v1.15.0-alpha.0.1259+aa9cbd112cc34f
revision: v1.15.0-alpha.0.1259+aa9cbd112cc34f

Test Failures

error during ../test/e2e/test-cmd.sh: exit status 1 (duration: 0.06s)
				from junit_runner.xml
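The only failure is the kubectl command-line suite driven by `test/e2e/test-cmd.sh`. A hedged sketch of reproducing it locally, assuming a `kubernetes/kubernetes` checkout at a comparable revision (this is not the exact job invocation, which wraps the script in the e2e runner):

```shell
# Hypothetical local repro of the failing suite; test/e2e/test-cmd.sh in the
# CI job ultimately runs the kubectl command-line tests, which the repo's
# Makefile exposes as a target.
cd "$GOPATH/src/k8s.io/kubernetes"
make test-cmd
```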

Error lines from build-log.txt

... skipping 333 lines ...
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 35.239.81.70)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

.......Kubernetes cluster created.
Cluster "k8s-jkns-gci-gce-flaky_bootstrap-e2e" set.
User "k8s-jkns-gci-gce-flaky_bootstrap-e2e" set.
Context "k8s-jkns-gci-gce-flaky_bootstrap-e2e" created.
Switched to context "k8s-jkns-gci-gce-flaky_bootstrap-e2e".
... skipping 17 lines ...
Waiting for 2 ready nodes. 0 ready nodes, 0 registered. Retrying.
Found 2 node(s).
NAME                              STATUS                     ROLES    AGE   VERSION
bootstrap-e2e-master              Ready,SchedulingDisabled   <none>   6s    v1.15.0-alpha.0.1259+aa9cbd112cc34f
bootstrap-e2e-minion-group-hjpn   Ready                      <none>   11s   v1.15.0-alpha.0.1259+aa9cbd112cc34f
Validate output:
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
Cluster validation succeeded
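The health table above is the standard componentstatus listing that cluster validation prints. Assuming a kubeconfig pointing at the cluster (here, the context set a few lines earlier), it can be reproduced directly:

```shell
# Prints the controller-manager / scheduler / etcd health table shown above.
# Requires credentials for the cluster; 'cs' is an accepted short name.
kubectl get componentstatuses
```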
Done, listing cluster services:
... skipping 97 lines ...

Specify --start=41078 in the next get-serial-port-output invocation to get only the new output starting from here.
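The `--start` hint above refers to `gcloud compute instances get-serial-port-output`, which the log-dump step uses to fetch the master's serial console. A sketch of the suggested follow-up call — the zone and project are inferred from the teardown URLs later in this log, and the command requires gcloud credentials for that project:

```shell
# Hypothetical re-invocation; --start=41078 resumes from the byte offset the
# previous call reported, so earlier serial output is not re-downloaded.
gcloud compute instances get-serial-port-output bootstrap-e2e-master \
    --project k8s-jkns-gci-gce-flaky \
    --zone us-central1-f \
    --start 41078
```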
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-hjpn

Specify --start=45747 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=bootstrap-e2e-minion-group
NODE_NAMES=bootstrap-e2e-minion-group-hjpn
Failures for bootstrap-e2e-minion-group
2019/03/18 19:49:33 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts' finished in 1m1.002463266s
2019/03/18 19:49:33 e2e.go:444: Listing resources...
2019/03/18 19:49:33 process.go:153: Running: ./cluster/gce/list-resources.sh
... skipping 24 lines ...
Bringing down cluster
Deleting Managed Instance Group...
....................................Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-gce-flaky/zones/us-central1-f/instanceGroupManagers/bootstrap-e2e-minion-group].
done.
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-gce-flaky/global/instanceTemplates/bootstrap-e2e-minion-template].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-gce-flaky/global/instanceTemplates/bootstrap-e2e-windows-node-template].
{"message":"Internal Server Error"}Removing etcd replica, name: bootstrap-e2e-master, port: 2379, result: 0
{"message":"Internal Server Error"}Removing etcd replica, name: bootstrap-e2e-master, port: 4002, result: 0
Updated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-gce-flaky/zones/us-central1-f/instances/bootstrap-e2e-master].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-gce-flaky/zones/us-central1-f/instances/bootstrap-e2e-master].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-gce-flaky/global/firewalls/bootstrap-e2e-master-https].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-gce-flaky/global/firewalls/bootstrap-e2e-master-etcd].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-gce-flaky/global/firewalls/bootstrap-e2e-minion-all].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-gci-gce-flaky/regions/us-central1/addresses/bootstrap-e2e-master-ip].
... skipping 28 lines ...
Listed 0 items.
Listed 0 items.
2019/03/18 19:59:05 process.go:155: Step './cluster/gce/list-resources.sh' finished in 8.008974617s
2019/03/18 19:59:05 process.go:153: Running: diff -sw -U0 -F^\[.*\]$ /logs/artifacts/gcp-resources-before.txt /logs/artifacts/gcp-resources-after.txt
2019/03/18 19:59:05 process.go:155: Step 'diff -sw -U0 -F^\[.*\]$ /logs/artifacts/gcp-resources-before.txt /logs/artifacts/gcp-resources-after.txt' finished in 1.424714ms
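The `diff` step above is kubetest's resource-leak check: it compares the GCP resource listings taken before cluster bring-up and after teardown, and any hunk indicates a leaked resource. A minimal, locally runnable sketch of the same check — the real file paths are `/logs/artifacts/gcp-resources-{before,after}.txt`; the sample contents here are hypothetical:

```shell
# Hypothetical before/after listings; a leftover instance shows up as a
# removed line in the diff.
printf '[instances]\nbootstrap-e2e-master\n' > gcp-resources-before.txt
printf '[instances]\n' > gcp-resources-after.txt

# -s reports identical files, -w ignores whitespace, -U0 suppresses context
# lines, and -F'^\[.*\]$' labels each hunk with its enclosing [section]
# header. A non-zero exit means the listings differ.
diff -sw -U0 -F'^\[.*\]$' gcp-resources-before.txt gcp-resources-after.txt \
    || echo "resource diff detected"
```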
2019/03/18 19:59:05 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2019/03/18 19:59:15 main.go:307: Something went wrong: encountered 1 errors: [error during ../test/e2e/test-cmd.sh: exit status 1]
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 764, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 615, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 262, in start
... skipping 33 lines ...