Result: FAILURE
Tests: 1 failed / 19 succeeded
Started: 2022-11-09 05:41
Elapsed: 32m2s
Revision: master
job-version: v1.26.0-alpha.3.451+e62cfabf9326cd
kubetest-version: v20221107-33c989e684
revision: v1.26.0-alpha.3.451+e62cfabf9326cd

Test Failures


kubetest ClusterLoaderV2 (3m0s)

error during /home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=600 --provider=kubemark --report-dir=/logs/artifacts --testconfig=testing/density/high-density-config.yaml --testoverrides=./testing/experiments/use_simple_latency_query.yaml --testoverrides=./testing/overrides/600_nodes_high_density.yaml: exit status 1
				from junit_runner.xml
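
The failing step is the ClusterLoaderV2 invocation quoted above. To reproduce it outside of CI, the same command can be run from a checkout of k8s.io/perf-tests against an already-provisioned kubemark cluster. This is a minimal sketch, assuming the cluster and kubeconfig from the setup steps in the log below already exist; the report directory is illustrative:

    cd "$GOPATH/src/k8s.io/perf-tests"
    ./run-e2e.sh cluster-loader2 \
      --nodes=600 \
      --provider=kubemark \
      --report-dir=/tmp/artifacts \
      --testconfig=testing/density/high-density-config.yaml \
      --testoverrides=./testing/experiments/use_simple_latency_query.yaml \
      --testoverrides=./testing/overrides/600_nodes_high_density.yaml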




Error lines from build-log.txt

... skipping 382 lines ...
Looking for address 'kubemark-100pods-master-ip'
Looking for address 'kubemark-100pods-master-internal-ip'
Using master: kubemark-100pods-master (external IP: 34.73.170.193; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-13_kubemark-100pods" set.
User "k8s-infra-e2e-boskos-scale-13_kubemark-100pods" set.
Context "k8s-infra-e2e-boskos-scale-13_kubemark-100pods" created.
Switched to context "k8s-infra-e2e-boskos-scale-13_kubemark-100pods".
... skipping 45 lines ...
kubemark-100pods-minion-group-m2hs   Ready                         <none>   59s    v1.26.0-alpha.3.451+e62cfabf9326cd
kubemark-100pods-minion-group-qhmb   Ready                         <none>   54s    v1.26.0-alpha.3.451+e62cfabf9326cd
kubemark-100pods-minion-group-vcdr   Ready                         <none>   60s    v1.26.0-alpha.3.451+e62cfabf9326cd
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
Done, listing cluster services:

Kubernetes control plane is running at https://34.73.170.193
GLBCDefaultBackend is running at https://34.73.170.193/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://34.73.170.193/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://34.73.170.193/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
... skipping 225 lines ...
Looking for address 'kubemark-100pods-kubemark-master-ip'
Looking for address 'kubemark-100pods-kubemark-master-internal-ip'
Using master: kubemark-100pods-kubemark-master (external IP: 35.196.236.191; internal IP: 10.40.0.13)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-13_kubemark-100pods-kubemark" set.
User "k8s-infra-e2e-boskos-scale-13_kubemark-100pods-kubemark" set.
Context "k8s-infra-e2e-boskos-scale-13_kubemark-100pods-kubemark" created.
Switched to context "k8s-infra-e2e-boskos-scale-13_kubemark-100pods-kubemark".
... skipping 20 lines ...
Found 1 node(s).
NAME                               STATUS                     ROLES    AGE   VERSION
kubemark-100pods-kubemark-master   Ready,SchedulingDisabled   <none>   22s   v1.26.0-alpha.3.451+e62cfabf9326cd
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-0               Healthy   {"health":"true","reason":""}   
etcd-1               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
Cluster validation succeeded
Done, listing cluster services:
... skipping 849 lines ...
I1109 05:58:52.284770   57421 cluster.go:86] Name: hollow-node-zndxx, clusterIP: 10.64.2.26, externalIP: , isSchedulable: false
I1109 05:58:52.284777   57421 cluster.go:86] Name: hollow-node-zrcff, clusterIP: 10.64.5.55, externalIP: , isSchedulable: false
I1109 05:58:52.284783   57421 cluster.go:86] Name: hollow-node-ztxbs, clusterIP: 10.64.3.3, externalIP: , isSchedulable: false
I1109 05:58:52.284789   57421 cluster.go:86] Name: hollow-node-zxf46, clusterIP: 10.64.7.51, externalIP: , isSchedulable: false
I1109 05:58:52.284795   57421 cluster.go:86] Name: hollow-node-zzwxk, clusterIP: 10.64.5.36, externalIP: , isSchedulable: false
I1109 05:58:52.284802   57421 cluster.go:86] Name: kubemark-100pods-kubemark-master, clusterIP: 10.40.0.14, externalIP: 35.196.236.191, isSchedulable: false
F1109 05:58:52.499290   57421 clusterloader.go:303] Cluster verification error: no schedulable nodes in the cluster
2022/11/09 05:58:52 process.go:155: Step '/home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=600 --provider=kubemark --report-dir=/logs/artifacts --testconfig=testing/density/high-density-config.yaml --testoverrides=./testing/experiments/use_simple_latency_query.yaml --testoverrides=./testing/overrides/600_nodes_high_density.yaml' finished in 3m0.959057711s
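
The fatal "Cluster verification error: no schedulable nodes in the cluster" above is the direct cause of the failure: every node ClusterLoaderV2 enumerated, including all hollow nodes, is reported with isSchedulable: false. A hedged triage sketch, assuming access to the kubemark kubeconfig that the job writes under test/kubemark/resources/kubeconfig.kubemark (see the cleanup lines near the end of this log):

    # List each node's unschedulable flag and Ready condition to see why
    # cluster verification found no schedulable nodes.
    kubectl --kubeconfig=test/kubemark/resources/kubeconfig.kubemark get nodes \
      -o 'custom-columns=NAME:.metadata.name,UNSCHEDULABLE:.spec.unschedulable,READY:.status.conditions[?(@.type=="Ready")].status'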
2022/11/09 05:58:52 e2e.go:776: Dumping logs for kubemark master to GCS directly at path: gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1590217475226603520
2022/11/09 05:58:52 process.go:153: Running: /workspace/log-dump.sh /logs/artifacts gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1590217475226603520
Checking for custom logdump instances, if any
Using gce provider, skipping check for LOG_DUMP_SSH_KEY and LOG_DUMP_SSH_USER
Project: k8s-infra-e2e-boskos-scale-13
... skipping 8 lines ...
scp: /var/log/glbc.log*: No such file or directory
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Skipping dumping of node logs
Detecting nodes in the cluster
INSTANCE_GROUPS=kubemark-100pods-minion-group
NODE_NAMES=kubemark-100pods-minion-group-3sb4 kubemark-100pods-minion-group-8hn9 kubemark-100pods-minion-group-cxpt kubemark-100pods-minion-group-fth0 kubemark-100pods-minion-group-hrvv kubemark-100pods-minion-group-lknt kubemark-100pods-minion-group-m2hs kubemark-100pods-minion-group-qhmb kubemark-100pods-minion-group-vcdr
WINDOWS_INSTANCE_GROUPS=
WINDOWS_NODE_NAMES=
... skipping 107 lines ...
Specify --start=72102 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes to GCS directly at 'gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1590217475226603520' using logexporter
namespace/logexporter created
secret/google-service-account created
daemonset.apps/logexporter created
Listing marker files (gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1590217475226603520/logexported-nodes-registry) for successful nodes...
CommandException: One or more URLs matched no objects.
... skipping 126 lines ...
W1109 06:13:07.750348   62275 loader.go:222] Config not found: /home/prow/go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
Property "contexts.k8s-infra-e2e-boskos-scale-13_kubemark-100pods" unset.
Cleared config for k8s-infra-e2e-boskos-scale-13_kubemark-100pods from /home/prow/go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
Done
2022/11/09 06:13:07 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 6m35.603598509s
2022/11/09 06:13:07 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2022/11/09 06:13:07 main.go:328: Something went wrong: encountered 1 errors: [error during /home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=600 --provider=kubemark --report-dir=/logs/artifacts --testconfig=testing/density/high-density-config.yaml --testoverrides=./testing/experiments/use_simple_latency_query.yaml --testoverrides=./testing/overrides/600_nodes_high_density.yaml: exit status 1]
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 15 lines ...