Result: FAILURE
Tests: 1 failed / 5 succeeded
Started: 2020-01-16 12:47
Elapsed: 1h11m
Revision:
resultstore: https://source.cloud.google.com/results/invocations/6e2eaef6-edc6-4aa0-bb34-105f8d37846d/targets/test
job-version: v1.18.0-alpha.1.812+718714a9359e62
revision: v1.18.0-alpha.1.812+718714a9359e62

Test Failures

Up 59m25s

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
from junit_runner.xml


Error lines from build-log.txt

... skipping 12 lines ...
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
DRIVER              VOLUME NAME
Cleaning up binfmt_misc ...
================================================================================
Done setting up docker in docker.
Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
fatal: not a git repository (or any of the parent directories): .git
+ /workspace/scenarios/kubernetes_e2e.py --cluster=kubemark-100-canary --extract=ci/latest --gcp-master-size=n1-standard-2 --gcp-node-size=n1-standard-4 --gcp-nodes=3 --gcp-zone=us-west1-b --kubemark --kubemark-nodes=100 --provider=gce --test=false '--test_args=--ginkgo.focus=\[Feature:Performance\] --gather-resource-usage=true --gather-metrics-at-teardown=true' --timeout=240m
starts with local mode
Environment:
ALLOWED_NOTREADY_NODES=1
API_SERVER_TEST_LOG_LEVEL=--v=3
ARTIFACTS=/logs/artifacts
... skipping 167 lines ...
Project: kubernetes-ingress
Network Project: kubernetes-ingress
Zone: us-west1-b
INSTANCE_GROUPS=
NODE_NAMES=
Bringing down cluster
ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

Deleting firewall rules remaining in network kubemark-100-canary: 
Removing auto-created subnet kubemark-100-canary:kubemark-100-canary-custom-subnet
... skipping 330 lines ...
Looking for address 'kubemark-100-canary-master-ip'
Looking for address 'kubemark-100-canary-master-internal-ip'
Using master: kubemark-100-canary-master (external IP: 34.82.75.222; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "kubernetes-ingress_kubemark-100-canary" set.
User "kubernetes-ingress_kubemark-100-canary" set.
Context "kubernetes-ingress_kubemark-100-canary" created.
Switched to context "kubernetes-ingress_kubemark-100-canary".
... skipping 140 lines ...

Specify --start=47494 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
... skipping 9 lines ...
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-minion-group-91p7' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-minion-group-b46p' was not found

External IP address was not found; defaulting to using IAP tunneling.
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-minion-group-91p7' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-minion-group-b46p' was not found

Changing logfiles to be world-readable for download
External IP address was not found; defaulting to using IAP tunneling.
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-minion-group-91p7' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-minion-group-b46p' was not found

External IP address was not found; defaulting to using IAP tunneling.
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov kubelet-hollow-node-*.log kubeproxy-hollow-node-*.log npd-hollow-node-*.log startupscript.log' from kubemark-100-canary-minion-group-sxvs

Specify --start=51075 in the next get-serial-port-output invocation to get only the new output starting from here.
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-minion-group-91p7' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-minion-group-b46p' was not found

External IP address was not found; defaulting to using IAP tunneling.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
scp: /var/log/npd-hollow-node-*.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-minion-group-91p7' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-minion-group-b46p' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-minion-group-91p7' was not found

Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov kubelet-hollow-node-*.log kubeproxy-hollow-node-*.log npd-hollow-node-*.log startupscript.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from kubemark-100-canary-minion-group-91p7
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-minion-group-b46p' was not found

Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov kubelet-hollow-node-*.log kubeproxy-hollow-node-*.log npd-hollow-node-*.log startupscript.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from kubemark-100-canary-minion-group-b46p
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-minion-group-91p7' was not found
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-minion-group-b46p' was not found
ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-minion-group-91p7' was not found

ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-minion-group-b46p' was not found

INSTANCE_GROUPS=kubemark-100-canary-minion-group
NODE_NAMES=kubemark-100-canary-minion-group-91p7 kubemark-100-canary-minion-group-b46p kubemark-100-canary-minion-group-sxvs
Failures for kubemark-100-canary-minion-group (if any):
2020/01/16 13:51:17 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts' finished in 2m49.294787783s
... skipping 25 lines ...
Successfully executed 'curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members/$(curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members -XGET | sed 's/{\"id/\n/g' | grep kubemark-100-canary-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on kubemark-100-canary-master
Removing etcd replica, name: kubemark-100-canary-master, port: 2379, result: 0
Successfully executed 'curl -s  http://127.0.0.1:4002/v2/members/$(curl -s  http://127.0.0.1:4002/v2/members -XGET | sed 's/{\"id/\n/g' | grep kubemark-100-canary-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on kubemark-100-canary-master
Removing etcd replica, name: kubemark-100-canary-master, port: 4002, result: 0
Updated [https://www.googleapis.com/compute/v1/projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-master].
Deleted [https://www.googleapis.com/compute/v1/projects/kubernetes-ingress/zones/us-west1-b/instances/kubemark-100-canary-master].
ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

Deleted [https://www.googleapis.com/compute/v1/projects/kubernetes-ingress/global/firewalls/kubemark-100-canary-master-https].
Deleted [https://www.googleapis.com/compute/v1/projects/kubernetes-ingress/global/firewalls/kubemark-100-canary-master-etcd].
... skipping 15 lines ...
Property "users.kubernetes-ingress_kubemark-100-canary-basic-auth" unset.
Property "contexts.kubernetes-ingress_kubemark-100-canary" unset.
Cleared config for kubernetes-ingress_kubemark-100-canary from /workspace/.kube/config
Done
2020/01/16 13:58:43 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 7m26.579450889s
2020/01/16 13:58:43 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2020/01/16 13:58:44 main.go:316: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 778, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 626, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 262, in start
... skipping 24 lines ...