PR tkashem: [WIP] [TEST ONLY] run kubemark-500 test with watch enabled
Result: FAILURE
Tests: 3 failed / 60 succeeded
Started: 2021-10-19 16:06
Elapsed: 1h8m
Revision: 04680e13135f14a46b521a73f8eff514af567cb0
Refs: 105768
job-version: v1.23.0-alpha.3.389+f35b6786a05893
kubetest-version:
revision: v1.23.0-alpha.3.389+f35b6786a05893

Test Failures


ClusterLoaderV2 load overall (testing/load/config.yaml) 22m48s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\sload\soverall\s\(testing\/load\/config\.yaml\)$'
[measurement call APIResponsivenessPrometheus - APIResponsivenessPrometheusSimple error: top latency metric: there should be no high-latency requests, but: [got: &{Resource:pods Subresource:binding Verb:POST Scope:resource Latency:perc50: 179.060056ms, perc90: 634.539877ms, perc99: 2.926463414s Count:19489 SlowCount:789}; expected perc99 <= 1s got: &{Resource:pods Subresource: Verb:POST Scope:resource Latency:perc50: 176.423027ms, perc90: 1.076033464s, perc99: 2.868969849s Count:19489 SlowCount:2104}; expected perc99 <= 1s]]
				from junit.xml
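
The failure above is ClusterLoaderV2's API responsiveness SLO check: for each (resource, subresource, verb, scope) bucket, the 99th-percentile request latency must stay at or below 1s, and two POST-pod buckets exceeded it. A minimal sketch of that comparison follows; the class and function names are illustrative, not clusterloader2's actual API, and the numbers are taken from this run's failure message:

```python
# Sketch of the perc99 <= 1s SLO check reported in the failure above.
# Names here are hypothetical; the real check lives in perf-tests/clusterloader2.
from dataclasses import dataclass

SLO_PERC99_SECONDS = 1.0  # "expected perc99 <= 1s" from the failure message

@dataclass
class APICallLatency:
    resource: str
    subresource: str
    verb: str
    perc99_seconds: float

def violations(metrics):
    """Return the metrics whose 99th-percentile latency breaks the SLO."""
    return [m for m in metrics if m.perc99_seconds > SLO_PERC99_SECONDS]

# The two offending buckets from this run:
observed = [
    APICallLatency("pods", "binding", "POST", 2.926463414),
    APICallLatency("pods", "", "POST", 2.868969849),
]
bad = violations(observed)
assert len(bad) == 2  # both POST pod buckets exceeded the 1s SLO
```

Any non-empty result from such a check fails the `gathering measurements` step, which is why the same error text appears under two test entries below.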



ClusterLoaderV2 load: [step: 31] gathering measurements 3.79s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\sload\:\s\[step\:\s31\]\sgathering\smeasurements$'
[measurement call APIResponsivenessPrometheus - APIResponsivenessPrometheusSimple error: top latency metric: there should be no high-latency requests, but: [got: &{Resource:pods Subresource:binding Verb:POST Scope:resource Latency:perc50: 179.060056ms, perc90: 634.539877ms, perc99: 2.926463414s Count:19489 SlowCount:789}; expected perc99 <= 1s got: &{Resource:pods Subresource: Verb:POST Scope:resource Latency:perc50: 176.423027ms, perc90: 1.076033464s, perc99: 2.868969849s Count:19489 SlowCount:2104}; expected perc99 <= 1s]]
				from junit.xml



kubetest ClusterLoaderV2 32m39s

error during /home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=pull-kubernetes-kubemark-e2e-gce-big-1450492775228248064 --nodes=500 --provider=kubemark --report-dir=/logs/artifacts --testconfig=testing/load/config.yaml --testconfig=testing/access-tokens/config.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/use_simple_latency_query.yaml --testoverrides=./testing/overrides/kubemark_500_nodes.yaml: exit status 1
				from junit_runner.xml


