Result: FAILURE
Tests: 1 failed / 5 succeeded
Started: 2021-09-13 23:34
Elapsed: 16m34s
Revision: master
job-version: v1.23.0-alpha.1.428+caf853b5964679
kubetest-version:
revision: v1.23.0-alpha.1.428+caf853b5964679

Test Failures


kubetest Up 7m52s

error during ./hack/e2e-internal/e2e-up.sh: exit status 2
(from junit_runner.xml)
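
Root cause (from the build log below): e2e-up.sh exits 2 because the GCE master VM cannot be created; the request asks for a minimum CPU platform, which GCE does not support on the e2-standard-2 machine type. The sketch below is illustrative only, assuming gcloud and hypothetical flag values (the excerpt does not show the exact arguments kube-up.sh passed, and "Intel Haswell" is an assumed platform value); it reproduces the rejected combination and one variant that avoids the error:

  # Hypothetical reconstruction of the failing request. E2 machine types do not
  # accept --min-cpu-platform, so GCE rejects the create call:
  gcloud compute instances create kubemark-100-scheduler-highqps-master \
      --project k8s-periodic-scale-2 \
      --zone us-east1-b \
      --machine-type e2-standard-2 \
      --min-cpu-platform "Intel Haswell"

  # Dropping the flag (or moving to a machine series that supports it,
  # e.g. n1-standard-2) lets the instance creation succeed:
  gcloud compute instances create kubemark-100-scheduler-highqps-master \
      --project k8s-periodic-scale-2 \
      --zone us-east1-b \
      --machine-type e2-standard-2

The later gcloud.compute.ssh, scp, and serial-port errors in the log are secondary: the master instance was never created, so every attempt to reach it for log collection fails with "resource ... was not found".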




Error lines from build-log.txt

... skipping 336 lines ...
2021/09/13 23:37:58 [INFO] signed certificate with serial number 257460165276904307366852227713175001198999812111
2021/09/13 23:37:58 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
ERROR: (gcloud.compute.instances.create) Could not fetch resource:
 - Setting minimum CPU platform is not supported for the selected machine type e2-standard-2.
Failed to create master instance due to non-retryable error
Creating firewall...
..Created [https://www.googleapis.com/compute/v1/projects/k8s-periodic-scale-2/global/firewalls/kubemark-100-scheduler-highqps-minion-all].
NAME                                       NETWORK                         DIRECTION  PRIORITY  ALLOW                     DENY  DISABLED
kubemark-100-scheduler-highqps-minion-all  kubemark-100-scheduler-highqps  INGRESS    1000      tcp,udp,icmp,esp,ah,sctp        False
done.
Some commands failed.
Creating nodes.
Using subnet kubemark-100-scheduler-highqps-custom-subnet
Attempt 1 to create kubemark-100-scheduler-highqps-minion-template
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
Created [https://www.googleapis.com/compute/v1/projects/k8s-periodic-scale-2/global/instanceTemplates/kubemark-100-scheduler-highqps-minion-template].
NAME                                            MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
... skipping 15 lines ...
Looking for address 'kubemark-100-scheduler-highqps-master-ip'
Looking for address 'kubemark-100-scheduler-highqps-master-internal-ip'
Using master: kubemark-100-scheduler-highqps-master (external IP: 34.139.253.184; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

...........................................Cluster failed to initialize within 300 seconds.
Last output from querying API server follows:
-----------------------------------------------------
* Expire in 0 ms for 6 (transfer 0x561f45e36fb0)
* Expire in 5000 ms for 8 (transfer 0x561f45e36fb0)
*   Trying 34.139.253.184...
* TCP_NODELAY set
... skipping 14 lines ...
Dumping logs from master locally to '/tmp/tmp.MJ1XfZXha3/logs'
Trying to find master named 'kubemark-100-scheduler-highqps-master'
Looking for address 'kubemark-100-scheduler-highqps-master-ip'
Looking for address 'kubemark-100-scheduler-highqps-master-internal-ip'
Using master: kubemark-100-scheduler-highqps-master (external IP: 34.139.253.184; internal IP: 10.40.0.2)
Changing logfiles to be world-readable for download
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-periodic-scale-2/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-periodic-scale-2/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-periodic-scale-2/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-periodic-scale-2/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-periodic-scale-2/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-periodic-scale-2/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log konnectivity-server.log fluentd.log kubelet.cov startupscript.log kern.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from kubemark-100-scheduler-highqps-master
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/k8s-periodic-scale-2/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found
ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/k8s-periodic-scale-2/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

Dumping logs from nodes to GCS directly at 'gs://sig-scalability-logs/ci-kubernetes-kubemark-100-gce-scheduler-highqps/1437559632137555968' using logexporter
Detecting nodes in the cluster
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Failed to create logexporter daemonset.. falling back to logdump through SSH
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Dumping logs for nodes provided as args to dump_nodes() function
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
ERROR: gcloud crashed (OperationalError): database is locked

If you would like to report this issue, please run the following command:
  gcloud feedback

To check gcloud for common problems, please run the following command:
  gcloud info --run-diagnostics
... skipping 41 lines ...
  File "/google-cloud-sdk/lib/googlecloudsdk/core/credentials/creds.py", line 311, in _Execute
    cur.Execute(*args)
  File "/google-cloud-sdk/lib/googlecloudsdk/core/credentials/creds.py", line 225, in Execute
    return self._cursor.execute(*args)
sqlite3.OperationalError: database is locked
External IP address was not found; defaulting to using IAP tunneling.
Fatal Python error: could not acquire lock for <_io.BufferedReader name='<stdin>'> at interpreter shutdown, possibly due to daemon threads

Thread 0x00007fc38e976700 (most recent call first):
  File "/google-cloud-sdk/lib/googlecloudsdk/command_lib/compute/iap_tunnel.py", line 449 in _ReadFromStdinAndEnqueueMessageUnix
  File "/usr/lib/python3.7/threading.py", line 865 in run
  File "/usr/lib/python3.7/threading.py", line 917 in _bootstrap_inner
  File "/usr/lib/python3.7/threading.py", line 885 in _bootstrap

Current thread 0x00007fc3918ac740 (most recent call first):
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
ERROR: gcloud crashed (OperationalError): database is locked

If you would like to report this issue, please run the following command:
  gcloud feedback

To check gcloud for common problems, please run the following command:
  gcloud info --run-diagnostics
... skipping 86 lines ...
scp: /var/log/kubelet.log*: No such file or directory
scp: /var/log/supervisor/supervisord.log*: No such file or directory
scp: /var/log/supervisor/kubelet-stdout.log*: No such file or directory
scp: /var/log/supervisor/kubelet-stderr.log*: No such file or directory
scp: /var/log/supervisor/docker-stdout.log*: No such file or directory
scp: /var/log/supervisor/docker-stderr.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
... skipping 49 lines ...
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
scp: /var/log/npd-hollow-node-*.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
scp: /var/log/npd-hollow-node-*.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov kubelet-hollow-node-*.log kubeproxy-hollow-node-*.log npd-hollow-node-*.log startupscript.log' from kubemark-100-scheduler-highqps-minion-group-3nbw

Specify --start=104991 in the next get-serial-port-output invocation to get only the new output starting from here.
External IP address was not found; defaulting to using IAP tunneling.
scp: /var/log/containers/konnectivity-agent-*.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
scp: /var/log/npd-hollow-node-*.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Uploading '/tmp/tmp.MJ1XfZXha3/logs' to 'gs://sig-scalability-logs/ci-kubernetes-kubemark-100-gce-scheduler-highqps/1437559632137555968'
CommandException: One or more URLs matched no objects.
Copying file:///tmp/tmp.MJ1XfZXha3/logs/kubemark-100-scheduler-highqps-minion-group-3nbw/serial-1.log [Content-Type=application/octet-stream]...
Copying file:///tmp/tmp.MJ1XfZXha3/logs/kubemark-100-scheduler-highqps-minion-group-3nbw/kubelet.log [Content-Type=application/octet-stream]...
Copying file:///tmp/tmp.MJ1XfZXha3/logs/kubemark-100-scheduler-highqps-minion-group-3nbw/kube-node-installation.log [Content-Type=application/octet-stream]...
Copying file:///tmp/tmp.MJ1XfZXha3/logs/kubemark-100-scheduler-highqps-minion-group-3nbw/kube-node-configuration.log [Content-Type=application/octet-stream]...
... skipping 88 lines ...
W0913 23:50:47.984024   11917 loader.go:221] Config not found: /workspace/.kube/config
Property "contexts.k8s-periodic-scale-2_kubemark-100-scheduler-highqps" unset.
Cleared config for k8s-periodic-scale-2_kubemark-100-scheduler-highqps from /workspace/.kube/config
Done
2021/09/13 23:50:47 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 2m46.455768945s
2021/09/13 23:50:47 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2021/09/13 23:50:47 main.go:331: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 2
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 15 lines ...