Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2022-08-07 01:23
Elapsed: 38m32s
Revision: master
job-version: v1.25.0-beta.0.16+985c9202ccd250
kubetest-version: v20220804-4fa19ea91a
revision: v1.25.0-beta.0.16+985c9202ccd250

Test Failures


kubetest Up 7m25s

error during ./hack/e2e-internal/e2e-up.sh: exit status 2
				from junit_runner.xml
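The "exit status 2" above is kubetest relaying the wrapped script's own exit code unchanged. A minimal sketch of that propagation (the `sh -c 'exit 2'` stands in for the real `./hack/e2e-internal/e2e-up.sh` failing; the path is quoted from the log, not executed):

```shell
#!/usr/bin/env sh
# A child process exiting nonzero is all "exit status 2" means here;
# kubetest formats the child's status into its error message.
sh -c 'exit 2'
status=$?
echo "error during ./hack/e2e-internal/e2e-up.sh: exit status ${status}"
```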




Error lines from build-log.txt

... skipping 345 lines ...
2022/08/07 01:25:53 [INFO] signed certificate with serial number 587279480197553056449493388339117225321171082222
2022/08/07 01:25:53 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
ERROR: (gcloud.compute.instances.create) Could not fetch resource:
 - Setting minimum CPU platform is not supported for the selected machine type e2-standard-2.
Failed to create master instance due to non-retryable error
Creating firewall...
..Created [https://www.googleapis.com/compute/v1/projects/k8s-jenkins-kubemark/global/firewalls/kubemark-100-scheduler-highqps-minion-all].
NAME                                       NETWORK                         DIRECTION  PRIORITY  ALLOW                     DENY  DISABLED
kubemark-100-scheduler-highqps-minion-all  kubemark-100-scheduler-highqps  INGRESS    1000      tcp,udp,icmp,esp,ah,sctp        False
done.
Some commands failed.
Creating nodes.
/home/prow/go/src/k8s.io/kubernetes/kubernetes/cluster/../cluster/../cluster/gce/util.sh: line 1527: WINDOWS_CONTAINER_RUNTIME: unbound variable
/home/prow/go/src/k8s.io/kubernetes/kubernetes/cluster/../cluster/../cluster/gce/util.sh: line 1527: WINDOWS_ENABLE_HYPERV: unbound variable
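The two `unbound variable` failures above are bash's `set -u` (nounset) aborting when `util.sh` expands Windows-node variables this Linux-only job never exports. A minimal sketch of the failure mode and the conventional `${VAR:-}` guard (the variable name is reused from the log purely for illustration):

```shell
#!/usr/bin/env bash
set -u  # nounset: expanding an unset variable is fatal, as in util.sh

# A bare expansion would abort the script here:
#   echo "${WINDOWS_CONTAINER_RUNTIME}"   # -> "unbound variable"

# A default expansion is safe under `set -u` even when the caller
# never exported the variable:
echo "runtime=${WINDOWS_CONTAINER_RUNTIME:-}"
```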
Using subnet kubemark-100-scheduler-highqps-custom-subnet
Attempt 1 to create kubemark-100-scheduler-highqps-minion-template
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
... skipping 17 lines ...
Looking for address 'kubemark-100-scheduler-highqps-master-ip'
Looking for address 'kubemark-100-scheduler-highqps-master-internal-ip'
Using master: kubemark-100-scheduler-highqps-master (external IP: 104.196.99.12; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

...........................................Cluster failed to initialize within 300 seconds.
Last output from querying API server follows:
-----------------------------------------------------
* Expire in 0 ms for 6 (transfer 0x55694d8970f0)
* Expire in 5000 ms for 8 (transfer 0x55694d8970f0)
*   Trying 104.196.99.12...
* TCP_NODELAY set
... skipping 14 lines ...
Dumping logs from master locally to '/tmp/tmp.AbNrfIldpP/logs'
Trying to find master named 'kubemark-100-scheduler-highqps-master'
Looking for address 'kubemark-100-scheduler-highqps-master-ip'
Looking for address 'kubemark-100-scheduler-highqps-master-internal-ip'
Using master: kubemark-100-scheduler-highqps-master (external IP: 104.196.99.12; internal IP: 10.40.0.2)
Changing logfiles to be world-readable for download
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-kubemark/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-kubemark/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-kubemark/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-kubemark/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-kubemark/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-kubemark/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

Copying 'kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log konnectivity-server.log fluentd.log kubelet.cov startupscript.log kern.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from kubemark-100-scheduler-highqps-master
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/k8s-jenkins-kubemark/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found
ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-kubemark/zones/us-east1-b/instances/kubemark-100-scheduler-highqps-master' was not found

Dumping logs from nodes to GCS directly at 'gs://sig-scalability-logs/ci-kubernetes-kubemark-100-gce-scheduler-highqps/1556087991665954816' using logexporter
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Failed to create logexporter daemonset.. falling back to logdump through SSH
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Dumping logs for nodes provided as args to dump_nodes() function
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
... skipping 212 lines ...
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
scp: /var/log/npd-hollow-node-*.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
External IP address was not found; defaulting to using IAP tunneling.
External IP address was not found; defaulting to using IAP tunneling.
WARNING: 

To increase the performance of the tunnel, consider installing NumPy. For instructions,
please see https://cloud.google.com/iap/docs/using-tcp-forwarding#increasing_the_tcp_upload_bandwidth
... skipping 222 lines ...
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
scp: /var/log/npd-hollow-node-*.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
External IP address was not found; defaulting to using IAP tunneling.
WARNING: 

To increase the performance of the tunnel, consider installing NumPy. For instructions,
please see https://cloud.google.com/iap/docs/using-tcp-forwarding#increasing_the_tcp_upload_bandwidth

... skipping 67 lines ...
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
scp: /var/log/npd-hollow-node-*.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.start-iap-tunnel) Unexpected error while reconnecting. Check logs for more details.

Recommendation: To check for possible causes of SSH connectivity issues and get
recommendations, rerun the ssh command with the --troubleshoot option.

gcloud compute ssh kubemark-100-scheduler-highqps-minion-group-tx8l --project=k8s-jenkins-kubemark --zone=us-east1-b --troubleshoot

Or, to investigate an IAP tunneling issue:

gcloud compute ssh kubemark-100-scheduler-highqps-minion-group-tx8l --project=k8s-jenkins-kubemark --zone=us-east1-b --troubleshoot --tunnel-through-iap

ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov kubelet-hollow-node-*.log kubeproxy-hollow-node-*.log npd-hollow-node-*.log startupscript.log' from kubemark-100-scheduler-highqps-minion-group-tx8l

Specify --start=116486 in the next get-serial-port-output invocation to get only the new output starting from here.
External IP address was not found; defaulting to using IAP tunneling.
WARNING: 

... skipping 5 lines ...
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
scp: /var/log/npd-hollow-node-*.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Detecting nodes in the cluster
INSTANCE_GROUPS=kubemark-100-scheduler-highqps-minion-group
NODE_NAMES=kubemark-100-scheduler-highqps-minion-group-j1jd kubemark-100-scheduler-highqps-minion-group-ntdv kubemark-100-scheduler-highqps-minion-group-nvhf kubemark-100-scheduler-highqps-minion-group-tx8l
WINDOWS_INSTANCE_GROUPS=
WINDOWS_NODE_NAMES=
The connection to the server localhost:8080 was refused - did you specify the right host or port?
... skipping 110 lines ...
W0807 02:01:19.339798   12736 loader.go:223] Config not found: /workspace/.kube/config
Property "contexts.k8s-jenkins-kubemark_kubemark-100-scheduler-highqps" unset.
Cleared config for k8s-jenkins-kubemark_kubemark-100-scheduler-highqps from /workspace/.kube/config
Done
2022/08/07 02:01:19 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 2m57.814198632s
2022/08/07 02:01:19 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2022/08/07 02:01:19 main.go:331: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 2
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 15 lines ...