Result | FAILURE
Tests | 1 failed / 19 succeeded
Started |
Elapsed | 29m39s |
Revision | master |
job-version | v1.26.0-alpha.3.239+1f9e20eb8617e3 |
kubetest-version | v20221024-d0c013ee2d |
revision | v1.26.0-alpha.3.239+1f9e20eb8617e3 |
error during /home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=600 --provider=kubemark --report-dir=/logs/artifacts --testconfig=testing/density/high-density-config.yaml --testoverrides=./testing/experiments/use_simple_latency_query.yaml --testoverrides=./testing/overrides/600_nodes_high_density.yaml: exit status 1
from junit_runner.xml
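The failed step above is the ClusterLoader2 invocation run by kubetest. A minimal sketch of re-running that step by hand, assuming a local checkout of k8s.io/perf-tests and credentials for an already-provisioned kubemark cluster (the flags are copied from the error line; the checkout path, kubeconfig location, and report directory are placeholders, not values taken from this job):

    # Placeholder checkout location; the CI job uses /home/prow/go/src/k8s.io/perf-tests.
    cd "${GOPATH:-$HOME/go}/src/k8s.io/perf-tests"

    # Assumption: the kubemark cluster's kubeconfig is picked up via KUBECONFIG (placeholder path).
    export KUBECONFIG="$HOME/.kube/kubemark.kubeconfig"

    ./run-e2e.sh cluster-loader2 \
      --nodes=600 \
      --provider=kubemark \
      --report-dir=/tmp/clusterloader2-artifacts \
      --testconfig=testing/density/high-density-config.yaml \
      --testoverrides=./testing/experiments/use_simple_latency_query.yaml \
      --testoverrides=./testing/overrides/600_nodes_high_density.yaml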
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest Extract
kubetest GetDeployer
kubetest IsUp
kubetest Kubemark MasterLogDump
kubetest Kubemark Overall
kubetest Kubemark TearDown
kubetest Kubemark TearDown Previous
kubetest Kubemark Up
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest list kubemark nodes
kubetest list nodes
kubetest test setup
... skipping 350 lines ...
NAME                     ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
kubemark-100pods-master  us-east1-b  n1-standard-2               10.40.0.3    34.148.190.237  RUNNING
Setting kubemark-100pods-master's aliases to 'pods-default:10.64.0.0/24;10.40.0.2/32' (added 10.40.0.2)
Updating network interface [nic0] of instance [kubemark-100pods-master]...
...........done.
Updated [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-15/zones/us-east1-b/instances/kubemark-100pods-master].
Failed to execute 'sudo /bin/bash /home/kubernetes/bin/kube-master-internal-route.sh' on kubemark-100pods-master despite 5 attempts
Last attempt failed with: /bin/bash: /home/kubernetes/bin/kube-master-internal-route.sh: No such file or directory
Creating nodes.
/home/prow/go/src/k8s.io/kubernetes/kubernetes/cluster/../cluster/../cluster/gce/util.sh: line 1551: WINDOWS_CONTAINER_RUNTIME: unbound variable
/home/prow/go/src/k8s.io/kubernetes/kubernetes/cluster/../cluster/../cluster/gce/util.sh: line 1551: WINDOWS_ENABLE_HYPERV: unbound variable
/home/prow/go/src/k8s.io/kubernetes/kubernetes/cluster/../cluster/../cluster/gce/util.sh: line 1551: ENABLE_AUTH_PROVIDER_GCP: unbound variable
Using subnet kubemark-100pods-custom-subnet
Attempt 1 to create kubemark-100pods-minion-template
... skipping 19 lines ...
Looking for address 'kubemark-100pods-master-ip'
Looking for address 'kubemark-100pods-master-internal-ip'
Using master: kubemark-100pods-master (external IP: 34.148.190.237; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.
This will continually check to see if the API for kubernetes is reachable.
This may time out if there was some uncaught error during start up.
Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-15_kubemark-100pods" set.
User "k8s-infra-e2e-boskos-scale-15_kubemark-100pods" set.
Context "k8s-infra-e2e-boskos-scale-15_kubemark-100pods" created.
Switched to context "k8s-infra-e2e-boskos-scale-15_kubemark-100pods".
... skipping 45 lines ...
kubemark-100pods-minion-group-plnw   Ready   <none>   58s   v1.26.0-alpha.3.239+1f9e20eb8617e3
kubemark-100pods-minion-group-t6vm   Ready   <none>   57s   v1.26.0-alpha.3.239+1f9e20eb8617e3
kubemark-100pods-minion-group-zk6h   Ready   <none>   60s   v1.26.0-alpha.3.239+1f9e20eb8617e3
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-0               Healthy   {"health":"true","reason":""}
etcd-1               Healthy   {"health":"true","reason":""}
controller-manager   Healthy   ok
scheduler            Healthy   ok
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
Done, listing cluster services:
Kubernetes control plane is running at https://34.148.190.237
GLBCDefaultBackend is running at https://34.148.190.237/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://34.148.190.237/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://34.148.190.237/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
... skipping 225 lines ...
Looking for address 'kubemark-100pods-kubemark-master-ip'
Looking for address 'kubemark-100pods-kubemark-master-internal-ip'
Using master: kubemark-100pods-kubemark-master (external IP: 34.148.239.13; internal IP: 10.40.0.13)
Waiting up to 300 seconds for cluster initialization.
This will continually check to see if the API for kubernetes is reachable.
This may time out if there was some uncaught error during start up.
Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-15_kubemark-100pods-kubemark" set.
User "k8s-infra-e2e-boskos-scale-15_kubemark-100pods-kubemark" set.
Context "k8s-infra-e2e-boskos-scale-15_kubemark-100pods-kubemark" created.
Switched to context "k8s-infra-e2e-boskos-scale-15_kubemark-100pods-kubemark".
... skipping 20 lines ...
Found 1 node(s).
NAME                               STATUS                     ROLES    AGE   VERSION
kubemark-100pods-kubemark-master   Ready,SchedulingDisabled   <none>   18s   v1.26.0-alpha.3.239+1f9e20eb8617e3
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}
etcd-0               Healthy   {"health":"true","reason":""}
scheduler            Healthy   ok
controller-manager   Healthy   ok
Cluster validation succeeded
Done, listing cluster services:
... skipping 849 lines ...
I1106 05:55:44.973163   57228 cluster.go:86] Name: hollow-node-zs469, clusterIP: 10.64.2.7, externalIP: , isSchedulable: false
I1106 05:55:44.973169   57228 cluster.go:86] Name: hollow-node-ztk6l, clusterIP: 10.64.4.63, externalIP: , isSchedulable: false
I1106 05:55:44.973174   57228 cluster.go:86] Name: hollow-node-ztpd6, clusterIP: 10.64.6.69, externalIP: , isSchedulable: false
I1106 05:55:44.973179   57228 cluster.go:86] Name: hollow-node-zttjb, clusterIP: 10.64.6.14, externalIP: , isSchedulable: false
I1106 05:55:44.973185   57228 cluster.go:86] Name: hollow-node-zxd5b, clusterIP: 10.64.8.59, externalIP: , isSchedulable: false
I1106 05:55:44.973191   57228 cluster.go:86] Name: kubemark-100pods-kubemark-master, clusterIP: 10.40.0.14, externalIP: 34.148.239.13, isSchedulable: false
F1106 05:55:45.190718   57228 clusterloader.go:303] Cluster verification error: no schedulable nodes in the cluster
2022/11/06 05:55:45 process.go:155: Step '/home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=600 --provider=kubemark --report-dir=/logs/artifacts --testconfig=testing/density/high-density-config.yaml --testoverrides=./testing/experiments/use_simple_latency_query.yaml --testoverrides=./testing/overrides/600_nodes_high_density.yaml' finished in 2m56.236933609s
2022/11/06 05:55:45 e2e.go:776: Dumping logs for kubemark master to GCS directly at path: gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1589130055051644928
2022/11/06 05:55:45 process.go:153: Running: /workspace/log-dump.sh /logs/artifacts gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1589130055051644928
Checking for custom logdump instances, if any
Using gce provider, skipping check for LOG_DUMP_SSH_KEY and LOG_DUMP_SSH_USER
Project: k8s-infra-e2e-boskos-scale-15
... skipping 8 lines ...
scp: /var/log/glbc.log*: No such file or directory
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Skipping dumping of node logs
Detecting nodes in the cluster
INSTANCE_GROUPS=kubemark-100pods-minion-group
NODE_NAMES=kubemark-100pods-minion-group-3f6w kubemark-100pods-minion-group-7l0h kubemark-100pods-minion-group-7pws kubemark-100pods-minion-group-c418 kubemark-100pods-minion-group-fms5 kubemark-100pods-minion-group-k98q kubemark-100pods-minion-group-plnw kubemark-100pods-minion-group-t6vm kubemark-100pods-minion-group-zk6h
WINDOWS_INSTANCE_GROUPS=
WINDOWS_NODE_NAMES=
... skipping 107 lines ...
Specify --start=71768 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes to GCS directly at 'gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1589130055051644928' using logexporter
namespace/logexporter created
secret/google-service-account created
daemonset.apps/logexporter created
Listing marker files (gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1589130055051644928/logexported-nodes-registry) for successful nodes...
CommandException: One or more URLs matched no objects.
... skipping 126 lines ...
W1106 06:09:22.617637   62076 loader.go:222] Config not found: /home/prow/go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
Property "contexts.k8s-infra-e2e-boskos-scale-15_kubemark-100pods" unset.
Cleared config for k8s-infra-e2e-boskos-scale-15_kubemark-100pods from /home/prow/go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
Done
2022/11/06 06:09:22 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 6m19.642020236s
2022/11/06 06:09:22 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2022/11/06 06:09:22 main.go:328: Something went wrong: encountered 1 errors: [error during /home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=600 --provider=kubemark --report-dir=/logs/artifacts --testconfig=testing/density/high-density-config.yaml --testoverrides=./testing/experiments/use_simple_latency_query.yaml --testoverrides=./testing/overrides/600_nodes_high_density.yaml: exit status 1]
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 15 lines ...
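The run fails at cluster verification: ClusterLoader2 logs the registered nodes (the visible tail of the listing shows the hollow nodes and the kubemark master all with isSchedulable: false) and then aborts with "Cluster verification error: no schedulable nodes in the cluster", so the density test never starts and the rest of the output is only log dumping and teardown. A quick, illustrative way to inspect what a cluster is actually offering as schedulable when debugging this by hand (kubectl against the kubemark cluster's kubeconfig; this only approximates ClusterLoader2's internal check, and the kubeconfig path below is a placeholder):

    # Placeholder path; this job reads the kubemark kubeconfig from
    # test/kubemark/resources/kubeconfig.kubemark inside the kubernetes checkout.
    KUBECONFIG=./kubeconfig.kubemark

    # Node STATUS shows readiness and cordoning, e.g. "Ready,SchedulingDisabled"
    # as reported for the kubemark master earlier in this log.
    kubectl --kubeconfig "$KUBECONFIG" get nodes

    # Cordoned nodes (spec.unschedulable=true) are never schedulable; a healthy
    # kubemark bring-up should leave the hollow nodes out of this list.
    kubectl --kubeconfig "$KUBECONFIG" get nodes --field-selector spec.unschedulable=true

    # NoSchedule taints also keep test pods off a node.
    kubectl --kubeconfig "$KUBECONFIG" get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints'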