Result: FAILURE
Tests: 1 failed / 5 succeeded
Started: 2021-10-02 23:39
Elapsed: 15m47s
Revision: master
job-version: v1.23.0-alpha.3.105+0ac956ff2bef9c
kubetest-version:
revision: v1.23.0-alpha.3.105+0ac956ff2bef9c

Test Failures


kubetest Up 7m47s

error during ./hack/e2e-internal/e2e-up.sh: exit status 2
from junit_runner.xml




Error lines from build-log.txt

... skipping 337 lines ...
2021/10/02 23:42:40 [INFO] signed certificate with serial number 504300948290103940974978602774226992137600113384
2021/10/02 23:42:40 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
ERROR: (gcloud.compute.instances.create) Could not fetch resource:
 - Setting minimum CPU platform is not supported for the selected machine type e2-standard-2.
Failed to create master instance due to non-retryable error
Creating firewall...
..Created [https://www.googleapis.com/compute/v1/projects/k8s-jenkins-scalability-2/global/firewalls/kubemark-100-scheduler-highqps-minion-all].
NAME                                       NETWORK                         DIRECTION  PRIORITY  ALLOW                     DENY  DISABLED
kubemark-100-scheduler-highqps-minion-all  kubemark-100-scheduler-highqps  INGRESS    1000      tcp,udp,icmp,esp,ah,sctp        False
done.
Some commands failed.
Creating nodes.
Using subnet kubemark-100-scheduler-highqps-custom-subnet
Attempt 1 to create kubemark-100-scheduler-highqps-minion-template
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
Created [https://www.googleapis.com/compute/v1/projects/k8s-jenkins-scalability-2/global/instanceTemplates/kubemark-100-scheduler-highqps-minion-template].
NAME                                            MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
... skipping 16 lines ...
Looking for address 'kubemark-100-scheduler-highqps-master-ip'
Looking for address 'kubemark-100-scheduler-highqps-master-internal-ip'
Using master: kubemark-100-scheduler-highqps-master (external IP: 35.231.25.6; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

...........................................Cluster failed to initialize within 300 seconds.
Last output from querying API server follows:
-----------------------------------------------------
* Expire in 0 ms for 6 (transfer 0x55ead7ce8fb0)
* Expire in 5000 ms for 8 (transfer 0x55ead7ce8fb0)
*   Trying 35.231.25.6...
* TCP_NODELAY set
... skipping 14 lines ...
Dumping logs from master locally to '/tmp/tmp.FHI0UbmCDe/logs'
Trying to find master named 'kubemark-100-scheduler-highqps-master'
Looking for address 'kubemark-100-scheduler-highqps-master-ip'
Looking for address 'kubemark-100-scheduler-highqps-master-internal-ip'
Using master: kubemark-100-scheduler-highqps-master (external IP: 35.231.25.6; internal IP: 10.40.0.2)
Changing logfiles to be world-readable for download
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-scalability-2/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

... skipping 15 lines (5 repeats of the same 'gcloud.compute.ssh' error) ...
Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log konnectivity-server.log fluentd.log kubelet.cov startupscript.log kern.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from kubemark-100-scheduler-highqps-master
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/k8s-jenkins-scalability-2/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found
ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-scalability-2/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

Dumping logs from nodes to GCS directly at 'gs://sig-scalability-logs/ci-kubernetes-kubemark-100-gce-scheduler-highqps/1444446593825640448' using logexporter
Detecting nodes in the cluster
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Failed to create logexporter daemonset.. falling back to logdump through SSH
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Dumping logs for nodes provided as args to dump_nodes() function
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
... skipping 97 lines ...
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
scp: /var/log/npd-hollow-node-*.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
scp: /var/log/npd-hollow-node-*.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
External IP address was not found; defaulting to using IAP tunneling.
... skipping 18 lines (2 repeats of the same scp error group) ...
Uploading '/tmp/tmp.FHI0UbmCDe/logs' to 'gs://sig-scalability-logs/ci-kubernetes-kubemark-100-gce-scheduler-highqps/1444446593825640448'
CommandException: One or more URLs matched no objects.
Copying file:///tmp/tmp.FHI0UbmCDe/logs/kubemark-100-scheduler-highqps-minion-group-ln88/kubelet.log [Content-Type=application/octet-stream]...
Copying file:///tmp/tmp.FHI0UbmCDe/logs/kubemark-100-scheduler-highqps-minion-group-ln88/node-problem-detector.log [Content-Type=application/octet-stream]...
Copying file:///tmp/tmp.FHI0UbmCDe/logs/kubemark-100-scheduler-highqps-minion-group-ln88/kube-node-installation.log [Content-Type=application/octet-stream]...
Copying file:///tmp/tmp.FHI0UbmCDe/logs/kubemark-100-scheduler-highqps-minion-group-ln88/docker.log [Content-Type=application/octet-stream]...
... skipping 97 lines ...
W1002 23:55:22.633631   12299 loader.go:221] Config not found: /workspace/.kube/config
Property "contexts.k8s-jenkins-scalability-2_kubemark-100-scheduler-highqps" unset.
Cleared config for k8s-jenkins-scalability-2_kubemark-100-scheduler-highqps from /workspace/.kube/config
Done
2021/10/02 23:55:22 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 2m36.11161961s
2021/10/02 23:55:22 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2021/10/02 23:55:22 main.go:331: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 2
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 15 lines ...