Result           | FAILURE
Tests            | 1 failed / 19 succeeded
Started          |
Elapsed          | 30m38s
Revision         | master
job-version      | v1.26.0-beta.0.9+cf12a74b18b66e
kubetest-version | v20221109-489d560851
revision         | v1.26.0-beta.0.9+cf12a74b18b66e
error during /home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=600 --provider=kubemark --report-dir=/logs/artifacts --testconfig=testing/density/high-density-config.yaml --testoverrides=./testing/experiments/use_simple_latency_query.yaml --testoverrides=./testing/overrides/600_nodes_high_density.yaml: exit status 1
from junit_runner.xml
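The failing step is the ClusterLoader2 invocation itself. A minimal reproduction sketch, assuming a local checkout of k8s.io/perf-tests and a kubeconfig pointing at the kubemark cluster; the checkout location and report directory below are illustrative assumptions, not the CI job's values. The flags are copied verbatim from the failing step above.

    # Sketch only: flags taken from the failing step; paths are assumptions.
    cd "$GOPATH/src/k8s.io/perf-tests"
    ./run-e2e.sh cluster-loader2 \
      --nodes=600 \
      --provider=kubemark \
      --report-dir=/tmp/artifacts \
      --testconfig=testing/density/high-density-config.yaml \
      --testoverrides=./testing/experiments/use_simple_latency_query.yaml \
      --testoverrides=./testing/overrides/600_nodes_high_density.yaml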
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest Extract
kubetest GetDeployer
kubetest IsUp
kubetest Kubemark MasterLogDump
kubetest Kubemark Overall
kubetest Kubemark TearDown
kubetest Kubemark TearDown Previous
kubetest Kubemark Up
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest list kubemark nodes
kubetest list nodes
kubetest test setup
... skipping 350 lines ...
NAME                     ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
kubemark-100pods-master  us-east1-b  n1-standard-2               10.40.0.3    35.196.198.235  RUNNING
Setting kubemark-100pods-master's aliases to 'pods-default:10.64.0.0/24;10.40.0.2/32' (added 10.40.0.2)
Updating network interface [nic0] of instance [kubemark-100pods-master]...
..........done.
Updated [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-19/zones/us-east1-b/instances/kubemark-100pods-master].
Failed to execute 'sudo /bin/bash /home/kubernetes/bin/kube-master-internal-route.sh' on kubemark-100pods-master despite 5 attempts
Last attempt failed with: /bin/bash: /home/kubernetes/bin/kube-master-internal-route.sh: No such file or directory
Creating nodes.
/home/prow/go/src/k8s.io/kubernetes/kubernetes/cluster/../cluster/../cluster/gce/util.sh: line 1551: WINDOWS_CONTAINER_RUNTIME: unbound variable
/home/prow/go/src/k8s.io/kubernetes/kubernetes/cluster/../cluster/../cluster/gce/util.sh: line 1551: WINDOWS_ENABLE_HYPERV: unbound variable
/home/prow/go/src/k8s.io/kubernetes/kubernetes/cluster/../cluster/../cluster/gce/util.sh: line 1551: ENABLE_AUTH_PROVIDER_GCP: unbound variable
Using subnet kubemark-100pods-custom-subnet
Attempt 1 to create kubemark-100pods-minion-template
... skipping 18 lines ...
Looking for address 'kubemark-100pods-master-ip'
Looking for address 'kubemark-100pods-master-internal-ip'
Using master: kubemark-100pods-master (external IP: 35.196.198.235; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.
This will continually check to see if the API for kubernetes is reachable.
This may time out if there was some uncaught error during start up.
Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-19_kubemark-100pods" set.
User "k8s-infra-e2e-boskos-scale-19_kubemark-100pods" set.
Context "k8s-infra-e2e-boskos-scale-19_kubemark-100pods" created.
Switched to context "k8s-infra-e2e-boskos-scale-19_kubemark-100pods".
... skipping 64 lines ...
kubemark-100pods-minion-group-qnmp   Ready   <none>   37s   v1.26.0-beta.0.9+cf12a74b18b66e
kubemark-100pods-minion-group-x7pq   Ready   <none>   35s   v1.26.0-beta.0.9+cf12a74b18b66e
kubemark-100pods-minion-group-z3x0   Ready   <none>   37s   v1.26.0-beta.0.9+cf12a74b18b66e
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}
etcd-0               Healthy   {"health":"true","reason":""}
scheduler            Healthy   ok
controller-manager   Healthy   ok
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
Done, listing cluster services:
Kubernetes control plane is running at https://35.196.198.235
GLBCDefaultBackend is running at https://35.196.198.235/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://35.196.198.235/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://35.196.198.235/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
... skipping 225 lines ...
Looking for address 'kubemark-100pods-kubemark-master-ip'
Looking for address 'kubemark-100pods-kubemark-master-internal-ip'
Using master: kubemark-100pods-kubemark-master (external IP: 35.237.40.138; internal IP: 10.40.0.13)
Waiting up to 300 seconds for cluster initialization.
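The three 'unbound variable' failures above are bash's 'set -u' mode at work: cluster/gce/util.sh expands variables that the kubemark path never exports, and 'set -u' turns each such expansion into a hard error. A minimal sketch of the failure mode and the usual ${VAR:-default} guard; this is the generic bash idiom, not necessarily the fix applied upstream.

    #!/usr/bin/env bash
    set -u  # expanding an unset variable is now a fatal error

    # This would abort with "WINDOWS_CONTAINER_RUNTIME: unbound variable"
    # if the caller never exported it:
    #   echo "runtime: ${WINDOWS_CONTAINER_RUNTIME}"

    # The usual guard: fall back to an empty default so 'set -u' is satisfied.
    echo "runtime: ${WINDOWS_CONTAINER_RUNTIME:-}"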
This will continually check to see if the API for kubernetes is reachable.
This may time out if there was some uncaught error during start up.
Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-19_kubemark-100pods-kubemark" set.
User "k8s-infra-e2e-boskos-scale-19_kubemark-100pods-kubemark" set.
Context "k8s-infra-e2e-boskos-scale-19_kubemark-100pods-kubemark" created.
Switched to context "k8s-infra-e2e-boskos-scale-19_kubemark-100pods-kubemark".
... skipping 20 lines ...
Found 1 node(s).
NAME                               STATUS                     ROLES    AGE   VERSION
kubemark-100pods-kubemark-master   Ready,SchedulingDisabled   <none>   19s   v1.26.0-beta.0.9+cf12a74b18b66e
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}
etcd-0               Healthy   {"health":"true","reason":""}
scheduler            Healthy   ok
controller-manager   Healthy   ok
Cluster validation succeeded
Done, listing cluster services:
... skipping 848 lines ...
I1111 05:59:24.639256   56990 cluster.go:86] Name: hollow-node-ztxxs, clusterIP: 10.64.6.8, externalIP: , isSchedulable: false
I1111 05:59:24.639261   56990 cluster.go:86] Name: hollow-node-zx596, clusterIP: 10.64.7.63, externalIP: , isSchedulable: false
I1111 05:59:24.639266   56990 cluster.go:86] Name: hollow-node-zxqhh, clusterIP: 10.64.8.45, externalIP: , isSchedulable: false
I1111 05:59:24.639271   56990 cluster.go:86] Name: hollow-node-zz2p6, clusterIP: 10.64.6.22, externalIP: , isSchedulable: false
I1111 05:59:24.639276   56990 cluster.go:86] Name: hollow-node-zzk5z, clusterIP: 10.64.9.19, externalIP: , isSchedulable: false
I1111 05:59:24.639282   56990 cluster.go:86] Name: kubemark-100pods-kubemark-master, clusterIP: 10.40.0.14, externalIP: 35.237.40.138, isSchedulable: false
F1111 05:59:24.845950   56990 clusterloader.go:303] Cluster verification error: no schedulable nodes in the cluster
2022/11/11 05:59:24 process.go:155: Step '/home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=600 --provider=kubemark --report-dir=/logs/artifacts --testconfig=testing/density/high-density-config.yaml --testoverrides=./testing/experiments/use_simple_latency_query.yaml --testoverrides=./testing/overrides/600_nodes_high_density.yaml' finished in 3m19.78701677s
2022/11/11 05:59:24 e2e.go:781: Dumping logs for kubemark master to GCS directly at path: gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1590942599600934912
2022/11/11 05:59:24 process.go:153: Running: /workspace/log-dump.sh /logs/artifacts gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1590942599600934912
Checking for custom logdump instances, if any
Using gce provider, skipping check for LOG_DUMP_SSH_KEY and LOG_DUMP_SSH_USER
Project: k8s-infra-e2e-boskos-scale-19
... skipping 8 lines ...
scp: /var/log/glbc.log*: No such file or directory
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
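ClusterLoader2 aborts here because every node it can see reports isSchedulable: false, the hollow nodes as well as the kubemark master, which registers as Ready,SchedulingDisabled. A quick manual check for schedulability, using standard kubectl; the kubeconfig path is an assumption about where the kubemark credentials live.

    # Show each node's cordon flag and taint keys; a node counts as
    # schedulable only if it is Ready, not cordoned, and not tainted NoSchedule.
    kubectl --kubeconfig="$HOME/.kube/kubemark" get nodes \
      -o custom-columns='NAME:.metadata.name,UNSCHEDULABLE:.spec.unschedulable,TAINTS:.spec.taints[*].key'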
Skipping dumping of node logs
Detecting nodes in the cluster
INSTANCE_GROUPS=kubemark-100pods-minion-group
NODE_NAMES=kubemark-100pods-minion-group-0s8q kubemark-100pods-minion-group-2sv0 kubemark-100pods-minion-group-5k19 kubemark-100pods-minion-group-6b4f kubemark-100pods-minion-group-lcpv kubemark-100pods-minion-group-nxgs kubemark-100pods-minion-group-qnmp kubemark-100pods-minion-group-x7pq kubemark-100pods-minion-group-z3x0
WINDOWS_INSTANCE_GROUPS=
WINDOWS_NODE_NAMES=
... skipping 107 lines ...
Specify --start=71307 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes to GCS directly at 'gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1590942599600934912' using logexporter
namespace/logexporter created
secret/google-service-account created
daemonset.apps/logexporter created
Listing marker files (gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1590942599600934912/logexported-nodes-registry) for successful nodes...
CommandException: One or more URLs matched no objects.
... skipping 126 lines ...
W1111 06:12:55.951825   61845 loader.go:222] Config not found: /home/prow/go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
Property "contexts.k8s-infra-e2e-boskos-scale-19_kubemark-100pods" unset.
Cleared config for k8s-infra-e2e-boskos-scale-19_kubemark-100pods from /home/prow/go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
Done
2022/11/11 06:12:55 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 6m12.146086995s
2022/11/11 06:12:55 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2022/11/11 06:12:55 main.go:328: Something went wrong: encountered 1 errors: [error during /home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=600 --provider=kubemark --report-dir=/logs/artifacts --testconfig=testing/density/high-density-config.yaml --testoverrides=./testing/experiments/use_simple_latency_query.yaml --testoverrides=./testing/overrides/600_nodes_high_density.yaml: exit status 1]
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 15 lines ...
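Whatever log-dump.sh and logexporter did manage to collect lands under the GCS path printed above and can be browsed with stock gsutil; read access to the bucket is assumed.

    gsutil ls gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1590942599600934912/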