Result: FAILURE
Tests: 1 failed / 19 succeeded
Started: 2022-11-08 05:40
Elapsed: 30m3s
Revision: master
job-version: v1.26.0-alpha.3.357+aa66cec6fa6e68
kubetest-version: v20221107-33c989e684
revision: v1.26.0-alpha.3.357+aa66cec6fa6e68

Test Failures


kubetest ClusterLoaderV2 3m10s

error during /home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=600 --provider=kubemark --report-dir=/logs/artifacts --testconfig=testing/density/high-density-config.yaml --testoverrides=./testing/experiments/use_simple_latency_query.yaml --testoverrides=./testing/overrides/600_nodes_high_density.yaml: exit status 1
				from junit_runner.xml
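
The step exits non-zero because clusterloader2's cluster verification later reports "no schedulable nodes in the cluster" (see the error lines below). A minimal sketch for re-running the same step outside CI, assuming a local checkout of k8s.io/perf-tests and a KUBECONFIG pointing at the kubemark cluster (the report directory here is illustrative, not the CI path):

# Sketch only: flags copied verbatim from the failing invocation above.
cd "$GOPATH/src/k8s.io/perf-tests"
./run-e2e.sh cluster-loader2 \
  --nodes=600 \
  --provider=kubemark \
  --report-dir=/tmp/artifacts \
  --testconfig=testing/density/high-density-config.yaml \
  --testoverrides=./testing/experiments/use_simple_latency_query.yaml \
  --testoverrides=./testing/overrides/600_nodes_high_density.yaml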




Error lines from build-log.txt

... skipping 350 lines ...
NAME                     ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS
kubemark-100pods-master  us-east1-b  n1-standard-2               10.40.0.3    34.73.227.34  RUNNING
Setting kubemark-100pods-master's aliases to 'pods-default:10.64.0.0/24;10.40.0.2/32' (added 10.40.0.2)
Updating network interface [nic0] of instance [kubemark-100pods-master]...
..........done.
Updated [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-03/zones/us-east1-b/instances/kubemark-100pods-master].
Failed to execute 'sudo /bin/bash /home/kubernetes/bin/kube-master-internal-route.sh' on kubemark-100pods-master despite 5 attempts
Last attempt failed with: /bin/bash: /home/kubernetes/bin/kube-master-internal-route.sh: No such file or directory
Creating nodes.
/home/prow/go/src/k8s.io/kubernetes/kubernetes/cluster/../cluster/../cluster/gce/util.sh: line 1551: WINDOWS_CONTAINER_RUNTIME: unbound variable
/home/prow/go/src/k8s.io/kubernetes/kubernetes/cluster/../cluster/../cluster/gce/util.sh: line 1551: WINDOWS_ENABLE_HYPERV: unbound variable
/home/prow/go/src/k8s.io/kubernetes/kubernetes/cluster/../cluster/../cluster/gce/util.sh: line 1551: ENABLE_AUTH_PROVIDER_GCP: unbound variable
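
The three "unbound variable" messages above are bash's set -u diagnostic: cluster/gce/util.sh expands Windows and auth-provider variables that this kubemark job never sets. A minimal illustration of the pattern, not the actual util.sh code:

# Under set -u, expanding an unset variable aborts with "unbound variable";
# the ${VAR:-default} form substitutes a default instead.
set -u
# echo "${WINDOWS_CONTAINER_RUNTIME}"                  # fails: unbound variable
echo "runtime: ${WINDOWS_CONTAINER_RUNTIME:-<unset>}"  # prints "<unset>" safely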
Using subnet kubemark-100pods-custom-subnet
Attempt 1 to create kubemark-100pods-minion-template
... skipping 18 lines ...
Looking for address 'kubemark-100pods-master-ip'
Looking for address 'kubemark-100pods-master-internal-ip'
Using master: kubemark-100pods-master (external IP: 34.73.227.34; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-03_kubemark-100pods" set.
User "k8s-infra-e2e-boskos-scale-03_kubemark-100pods" set.
Context "k8s-infra-e2e-boskos-scale-03_kubemark-100pods" created.
Switched to context "k8s-infra-e2e-boskos-scale-03_kubemark-100pods".
... skipping 45 lines ...
kubemark-100pods-minion-group-dt00   Ready                         <none>   49s   v1.26.0-alpha.3.357+aa66cec6fa6e68
kubemark-100pods-minion-group-hxp4   Ready                         <none>   46s   v1.26.0-alpha.3.357+aa66cec6fa6e68
kubemark-100pods-minion-group-n53d   Ready                         <none>   47s   v1.26.0-alpha.3.357+aa66cec6fa6e68
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
scheduler            Healthy   ok                              
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
Done, listing cluster services:

Kubernetes control plane is running at https://34.73.227.34
GLBCDefaultBackend is running at https://34.73.227.34/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://34.73.227.34/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://34.73.227.34/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
... skipping 194 lines ...
NAME                              ZONE        MACHINE_TYPE    PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS
kubemark-100pods-kubemark-master  us-east1-b  n1-standard-32               10.40.0.14   34.23.22.195  RUNNING
Setting kubemark-100pods-kubemark-master's aliases to '10.40.0.13/32' (added 10.40.0.13)
Updating network interface [nic0] of instance [kubemark-100pods-kubemark-master]...
.........done.
Updated [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-03/zones/us-east1-b/instances/kubemark-100pods-kubemark-master].
Failed to execute 'sudo /bin/bash /home/kubernetes/bin/kube-master-internal-route.sh' on kubemark-100pods-kubemark-master despite 5 attempts
Last attempt failed with: /bin/bash: /home/kubernetes/bin/kube-master-internal-route.sh: No such file or directory
Creating firewall...
..Created [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-03/global/firewalls/kubemark-100pods-kubemark-minion-all].
done.
NAME                                  NETWORK           DIRECTION  PRIORITY  ALLOW                     DENY  DISABLED
kubemark-100pods-kubemark-minion-all  kubemark-100pods  INGRESS    1000      tcp,udp,icmp,esp,ah,sctp        False
Creating nodes.
... skipping 18 lines ...
Looking for address 'kubemark-100pods-kubemark-master-ip'
Looking for address 'kubemark-100pods-kubemark-master-internal-ip'
Using master: kubemark-100pods-kubemark-master (external IP: 34.23.22.195; internal IP: 10.40.0.13)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-03_kubemark-100pods-kubemark" set.
User "k8s-infra-e2e-boskos-scale-03_kubemark-100pods-kubemark" set.
Context "k8s-infra-e2e-boskos-scale-03_kubemark-100pods-kubemark" created.
Switched to context "k8s-infra-e2e-boskos-scale-03_kubemark-100pods-kubemark".
... skipping 20 lines ...
Found 1 node(s).
NAME                               STATUS                     ROLES    AGE   VERSION
kubemark-100pods-kubemark-master   Ready,SchedulingDisabled   <none>   23s   v1.26.0-alpha.3.357+aa66cec6fa6e68
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
Cluster validation succeeded
Done, listing cluster services:
... skipping 849 lines ...
I1108 05:56:09.717962   57230 cluster.go:86] Name: hollow-node-zhj8g, clusterIP: 10.64.4.13, externalIP: , isSchedulable: false
I1108 05:56:09.717967   57230 cluster.go:86] Name: hollow-node-zjqwc, clusterIP: 10.64.9.11, externalIP: , isSchedulable: false
I1108 05:56:09.717973   57230 cluster.go:86] Name: hollow-node-ztt8s, clusterIP: 10.64.5.60, externalIP: , isSchedulable: false
I1108 05:56:09.717979   57230 cluster.go:86] Name: hollow-node-zvf45, clusterIP: 10.64.9.54, externalIP: , isSchedulable: false
I1108 05:56:09.717985   57230 cluster.go:86] Name: hollow-node-zvhmn, clusterIP: 10.64.3.26, externalIP: , isSchedulable: false
I1108 05:56:09.717995   57230 cluster.go:86] Name: kubemark-100pods-kubemark-master, clusterIP: 10.40.0.14, externalIP: 34.23.22.195, isSchedulable: false
F1108 05:56:09.952929   57230 clusterloader.go:303] Cluster verification error: no schedulable nodes in the cluster
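
Every hollow node listed above is registered with isSchedulable: false, leaving only the (unschedulable) kubemark master, which is what trips clusterloader2's verification. A hedged diagnostic sketch for a live reproduction, assuming kubectl access to the kubemark context named in this log:

# List each node with its Unschedulable flag and taint keys to see why the
# framework counts zero schedulable nodes (context name taken from the log).
kubectl --context=k8s-infra-e2e-boskos-scale-03_kubemark-100pods-kubemark get nodes \
  -o custom-columns='NAME:.metadata.name,UNSCHEDULABLE:.spec.unschedulable,TAINTS:.spec.taints[*].key'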
2022/11/08 05:56:09 process.go:155: Step '/home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=600 --provider=kubemark --report-dir=/logs/artifacts --testconfig=testing/density/high-density-config.yaml --testoverrides=./testing/experiments/use_simple_latency_query.yaml --testoverrides=./testing/overrides/600_nodes_high_density.yaml' finished in 3m10.198399269s
2022/11/08 05:56:09 e2e.go:776: Dumping logs for kubemark master to GCS directly at path: gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1589854839012069376
2022/11/08 05:56:09 process.go:153: Running: /workspace/log-dump.sh /logs/artifacts gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1589854839012069376
Checking for custom logdump instances, if any
Using gce provider, skipping check for LOG_DUMP_SSH_KEY and LOG_DUMP_SSH_USER
Project: k8s-infra-e2e-boskos-scale-03
... skipping 8 lines ...
scp: /var/log/glbc.log*: No such file or directory
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Skipping dumping of node logs
Detecting nodes in the cluster
INSTANCE_GROUPS=kubemark-100pods-minion-group
NODE_NAMES=kubemark-100pods-minion-group-0d5v kubemark-100pods-minion-group-12l4 kubemark-100pods-minion-group-9ddc kubemark-100pods-minion-group-bbpr kubemark-100pods-minion-group-cp9l kubemark-100pods-minion-group-dszv kubemark-100pods-minion-group-dt00 kubemark-100pods-minion-group-hxp4 kubemark-100pods-minion-group-n53d
WINDOWS_INSTANCE_GROUPS=
WINDOWS_NODE_NAMES=
... skipping 107 lines ...
Specify --start=71546 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes to GCS directly at 'gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1589854839012069376' using logexporter
namespace/logexporter created
secret/google-service-account created
daemonset.apps/logexporter created
Listing marker files (gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-high-density-100-gce/1589854839012069376/logexported-nodes-registry) for successful nodes...
CommandException: One or more URLs matched no objects.
... skipping 128 lines ...
W1108 06:09:51.840947   62228 loader.go:222] Config not found: /home/prow/go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
Property "contexts.k8s-infra-e2e-boskos-scale-03_kubemark-100pods" unset.
Cleared config for k8s-infra-e2e-boskos-scale-03_kubemark-100pods from /home/prow/go/src/k8s.io/kubernetes/kubernetes/test/kubemark/resources/kubeconfig.kubemark
Done
2022/11/08 06:09:51 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 6m42.413159005s
2022/11/08 06:09:51 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2022/11/08 06:09:51 main.go:328: Something went wrong: encountered 1 errors: [error during /home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=600 --provider=kubemark --report-dir=/logs/artifacts --testconfig=testing/density/high-density-config.yaml --testoverrides=./testing/experiments/use_simple_latency_query.yaml --testoverrides=./testing/overrides/600_nodes_high_density.yaml: exit status 1]
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 15 lines ...