Result: FAILURE
Tests: 1 failed / 5 succeeded
Started: 2020-01-16 14:00
Elapsed: 1h10m
resultstore: https://source.cloud.google.com/results/invocations/1c56104d-0de8-4879-af9a-33ec544b24ff/targets/test
job-version: v1.18.0-alpha.1.814+ac1832b7096a7c
revision: v1.18.0-alpha.1.814+ac1832b7096a7c

Test Failures

Up (58m54s)

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
	from junit_runner.xml




Error lines from build-log.txt

... skipping 12 lines ...
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
DRIVER              VOLUME NAME
Cleaning up binfmt_misc ...
================================================================================
Done setting up docker in docker.
Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
fatal: not a git repository (or any of the parent directories): .git
+ /workspace/scenarios/kubernetes_e2e.py --cluster=kubemark-100-canary --extract=ci/latest --gcp-master-size=n1-standard-2 --gcp-node-size=n1-standard-4 --gcp-nodes=3 --gcp-zone=us-west1-b --kubemark --kubemark-nodes=100 --provider=gce --test=false '--test_args=--ginkgo.focus=\[Feature:Performance\] --gather-resource-usage=true --gather-metrics-at-teardown=true' --timeout=240m
starts with local mode
Environment:
ALLOWED_NOTREADY_NODES=1
API_SERVER_TEST_LOG_LEVEL=--v=3
ARTIFACTS=/logs/artifacts
... skipping 167 lines ...
Project: k8s-jenkins-gci-kubemark
Network Project: k8s-jenkins-gci-kubemark
Zone: us-west1-b
INSTANCE_GROUPS=
NODE_NAMES=
Bringing down cluster
ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

Deleting firewall rules remaining in network kubemark-100-canary: 
Removing auto-created subnet kubemark-100-canary:kubemark-100-canary-custom-subnet
... skipping 330 lines ...
Looking for address 'kubemark-100-canary-master-ip'
Looking for address 'kubemark-100-canary-master-internal-ip'
Using master: kubemark-100-canary-master (external IP: 35.197.29.96; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "k8s-jenkins-gci-kubemark_kubemark-100-canary" set.
User "k8s-jenkins-gci-kubemark_kubemark-100-canary" set.
Context "k8s-jenkins-gci-kubemark_kubemark-100-canary" created.
Switched to context "k8s-jenkins-gci-kubemark_kubemark-100-canary".
... skipping 135 lines ...

Specify --start=47503 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
... skipping 10 lines ...
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
External IP address was not found; defaulting to using IAP tunneling.
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/kubemark-100-canary-minion-group-tmql' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/kubemark-100-canary-minion-group-692l' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/kubemark-100-canary-minion-group-tmql' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/kubemark-100-canary-minion-group-692l' was not found

Changing logfiles to be world-readable for download
External IP address was not found; defaulting to using IAP tunneling.
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/kubemark-100-canary-minion-group-tmql' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/kubemark-100-canary-minion-group-692l' was not found

External IP address was not found; defaulting to using IAP tunneling.
Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov kubelet-hollow-node-*.log kubeproxy-hollow-node-*.log npd-hollow-node-*.log startupscript.log' from kubemark-100-canary-minion-group-h1lq

Specify --start=51084 in the next get-serial-port-output invocation to get only the new output starting from here.
External IP address was not found; defaulting to using IAP tunneling.
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/kubemark-100-canary-minion-group-tmql' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/kubemark-100-canary-minion-group-692l' was not found

scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
scp: /var/log/npd-hollow-node-*.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/kubemark-100-canary-minion-group-tmql' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/kubemark-100-canary-minion-group-692l' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/kubemark-100-canary-minion-group-tmql' was not found

Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov kubelet-hollow-node-*.log kubeproxy-hollow-node-*.log npd-hollow-node-*.log startupscript.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from kubemark-100-canary-minion-group-tmql
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/kubemark-100-canary-minion-group-692l' was not found

Copying 'kern.log kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov kubelet-hollow-node-*.log kubeproxy-hollow-node-*.log npd-hollow-node-*.log startupscript.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from kubemark-100-canary-minion-group-692l
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/kubemark-100-canary-minion-group-tmql' was not found
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/kubemark-100-canary-minion-group-692l' was not found
ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/kubemark-100-canary-minion-group-tmql' was not found

ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/kubemark-100-canary-minion-group-692l' was not found

INSTANCE_GROUPS=kubemark-100-canary-minion-group
NODE_NAMES=kubemark-100-canary-minion-group-692l kubemark-100-canary-minion-group-h1lq kubemark-100-canary-minion-group-tmql
Failures for kubemark-100-canary-minion-group (if any):
2020/01/16 15:03:14 process.go:155: Step './cluster/log-dump/log-dump.sh /logs/artifacts' finished in 2m48.394341602s
... skipping 48 lines ...
Property "users.k8s-jenkins-gci-kubemark_kubemark-100-canary-basic-auth" unset.
Property "contexts.k8s-jenkins-gci-kubemark_kubemark-100-canary" unset.
Cleared config for k8s-jenkins-gci-kubemark_kubemark-100-canary from /workspace/.kube/config
Done
2020/01/16 15:10:50 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 7m35.703326614s
2020/01/16 15:10:50 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2020/01/16 15:11:08 main.go:316: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 778, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 626, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 262, in start
... skipping 24 lines ...