PR: wangzhen127: Install system lib in build.sh
Result: FAILURE
Tests: 1 failed / 5 succeeded
Started: 2019-03-15 00:10
Elapsed: 40m37s
Revision: 89ebd838ff2765fef770514ef9302f216163d1f1
Refs: 261
job-version: v1.15.0-alpha.0.1216+b3ec6c17f137ea
revision: v1.15.0-alpha.0.1216+b3ec6c17f137ea

Test Failures

Up 29m42s
error during ./hack/e2e-internal/e2e-up.sh: exit status 1
	from junit_runner.xml


Error lines from build-log.txt

... skipping 117 lines ...
Fetched 1707 kB in 0s (7353 kB/s)
Selecting previously unselected package bash.
(Reading database ... 3937 files and directories currently installed.)
Preparing to unpack .../archives/bash_4.4-5_amd64.deb ...
Unpacking bash (4.4-5) ...
Setting up bash (4.4-5) ...
update-alternatives: error: alternative path /usr/share/man/man7/bash-builtins.7.gz doesn't exist

Selecting previously unselected package libsystemd0:amd64.
(Reading database ... 4004 files and directories currently installed.)
Preparing to unpack .../libsystemd0_232-25+deb9u9_amd64.deb ...
Unpacking libsystemd0:amd64 (232-25+deb9u9) ...
Setting up libsystemd0:amd64 (232-25+deb9u9) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
... skipping 18 lines ...
gcloud docker -- push gcr.io/node-problem-detector-staging/pr/pr261-1552608653.682481492/node-problem-detector:v0.6.2-10-g2a44a3a
WARNING: `gcloud docker` will not be supported for Docker client versions above 18.03.

As an alternative, use `gcloud auth configure-docker` to configure `docker` to
use `gcloud` as a credential helper, then use `docker` as you would for non-GCR
registries, e.g. `docker pull gcr.io/project-id/my-image`. Add
`--verbosity=error` to silence this warning: `gcloud docker
--verbosity=error -- pull gcr.io/project-id/my-image`.

See: https://cloud.google.com/container-registry/docs/support/deprecation-notices#gcloud-docker
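The deprecation notice above amounts to a two-step migration. A minimal sketch of the replacement workflow, assuming the Google Cloud SDK and Docker are installed and you are already authenticated with `gcloud`:

```shell
# One-time setup: register gcloud as a Docker credential helper for
# gcr.io registries (writes to ~/.docker/config.json).
gcloud auth configure-docker

# After that, plain docker commands work against GCR directly,
# with no `gcloud docker --` wrapper. For example, the push above becomes:
docker push gcr.io/node-problem-detector-staging/pr/pr261-1552608653.682481492/node-problem-detector:v0.6.2-10-g2a44a3a
```

The credential helper rewrite is what lets Docker clients newer than 18.03 keep working, since those versions drop the API compatibility the `gcloud docker` shim relied on.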

The push refers to repository [gcr.io/node-problem-detector-staging/pr/pr261-1552608653.682481492/node-problem-detector]
2bda373654fb: Preparing
484fcd1d48b3: Preparing
... skipping 332 lines ...
Trying to find master named 'e2e-8-429e8-master'
Looking for address 'e2e-8-429e8-master-ip'
Using master: e2e-8-429e8-master (external IP: 35.247.94.127)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

..............Kubernetes cluster created.
Cluster "k8s-jkns-e2e-gce-ci-reboot_e2e-8-429e8" set.
User "k8s-jkns-e2e-gce-ci-reboot_e2e-8-429e8" set.
Context "k8s-jkns-e2e-gce-ci-reboot_e2e-8-429e8" created.
Switched to context "k8s-jkns-e2e-gce-ci-reboot_e2e-8-429e8".
... skipping 134 lines ...

Specify --start=41176 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from e2e-8-429e8-minion-group-v85w
... skipping 7 lines ...
Specify --start=38827 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/kube-proxy.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=e2e-8-429e8-minion-group
NODE_NAMES=e2e-8-429e8-minion-group-6s09 e2e-8-429e8-minion-group-92f4 e2e-8-429e8-minion-group-v85w
Failures for e2e-8-429e8-minion-group
2019/03/15 00:43:54 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts' finished in 1m29.058978297s
2019/03/15 00:43:54 process.go:153: Running: ./hack/e2e-internal/e2e-down.sh
Project: k8s-jkns-e2e-gce-ci-reboot
... skipping 12 lines ...
Bringing down cluster
Deleting Managed Instance Group...
..............Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-ci-reboot/zones/us-west1-b/instanceGroupManagers/e2e-8-429e8-minion-group].
done.
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-ci-reboot/global/instanceTemplates/e2e-8-429e8-minion-template].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-ci-reboot/global/instanceTemplates/e2e-8-429e8-windows-node-template].
{"message":"Internal Server Error"}
Removing etcd replica, name: e2e-8-429e8-master, port: 2379, result: 0
{"message":"Internal Server Error"}
Removing etcd replica, name: e2e-8-429e8-master, port: 4002, result: 0
Updated [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-ci-reboot/zones/us-west1-b/instances/e2e-8-429e8-master].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-ci-reboot/zones/us-west1-b/instances/e2e-8-429e8-master].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-ci-reboot/global/firewalls/e2e-8-429e8-master-https].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-ci-reboot/global/firewalls/e2e-8-429e8-master-etcd].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-ci-reboot/global/firewalls/e2e-8-429e8-minion-all].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-jkns-e2e-gce-ci-reboot/regions/us-west1/addresses/e2e-8-429e8-master-ip].
... skipping 10 lines ...
Property "users.k8s-jkns-e2e-gce-ci-reboot_e2e-8-429e8-basic-auth" unset.
Property "contexts.k8s-jkns-e2e-gce-ci-reboot_e2e-8-429e8" unset.
Cleared config for k8s-jkns-e2e-gce-ci-reboot_e2e-8-429e8 from /workspace/.kube/config
Done
2019/03/15 00:51:11 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 7m16.219728594s
2019/03/15 00:51:11 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2019/03/15 00:51:23 main.go:307: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 764, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 615, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 262, in start
... skipping 35 lines ...