PR | tsmetana: Kubelet: Fix volumemanager test race |
Result | FAILURE |
Tests | 1 failed / 7 succeeded |
Started |
Elapsed | 11m28s |
Revision |
Builder | gke-prow-containerd-pool-99179761-3w8w |
Refs | master:0c2613c7 73404:45464f03
pod | 888d8335-2763-11e9-bac2-0a580a6c013e |
infra-commit | 40269330c |
job-version | v1.14.0-alpha.2.225+e48ac9f04ca0bc |
repo | k8s.io/kubernetes |
repo-commit | e48ac9f04ca0bcecf79a9dc7f17c63324e037b10 |
repos | {u'k8s.io/kubernetes': u'master:0c2613c71a87f850190a8c1084d4de1e18336c07,73404:45464f03494bf5c7efb09e44cc07e7125c748cb3', u'k8s.io/release': u'master'} |
revision | v1.14.0-alpha.2.225+e48ac9f04ca0bc |
kops configuration failed: error during /workspace/kops create cluster --name e2e-121481-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones us-west-1b --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.225+e48ac9f04ca0bc --admin-access 35.188.149.18/32 --cloud aws --override cluster.spec.nodePortAccess=35.188.149.18/32: exit status 1
from junit_runner.xml
Build
Deferred TearDown
DumpClusterLogs (--up failed)
Extract
Stage
TearDown Previous
Timeout
... skipping 961 lines ...
I0203 03:38:10.494] sha1sum(kubernetes-test.tar.gz)=e57f0e5bf73e8e5025254e47bbc1f43a67890423
I0203 03:38:10.494]
I0203 03:38:10.494] Extracting kubernetes-test.tar.gz into /go/src/k8s.io/kubernetes/kubernetes
W0203 03:38:17.657] 2019/02/03 03:38:17 process.go:155: Step '/workspace/get-kube.sh' finished in 13.318952923s
W0203 03:38:17.657] 2019/02/03 03:38:17 process.go:153: Running: /workspace/kops get clusters e2e-121481-dba53.test-cncf-aws.k8s.io
W0203 03:38:27.733]
W0203 03:38:27.733] error reading cluster configuration "e2e-121481-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121481-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0203 03:38:27.733]   status code: 401, request id: ad3cae39-beb1-4c9d-9c28-ae365bd03780
W0203 03:38:27.739] 2019/02/03 03:38:27 process.go:155: Step '/workspace/kops get clusters e2e-121481-dba53.test-cncf-aws.k8s.io' finished in 10.082498086s
W0203 03:38:27.740] 2019/02/03 03:38:27 process.go:153: Running: /workspace/kops create cluster --name e2e-121481-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones us-west-1b --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.225+e48ac9f04ca0bc --admin-access 35.188.149.18/32 --cloud aws --override cluster.spec.nodePortAccess=35.188.149.18/32
W0203 03:38:27.866] I0203 03:38:27.865984 4181 create_cluster.go:1448] Using SSH public key: /workspace/.ssh/kube_aws_rsa.pub
W0203 03:38:28.312]
W0203 03:38:28.312] error reading cluster configuration "e2e-121481-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121481-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0203 03:38:28.312]   status code: 401, request id: d9f98d8a-5613-4b1a-8aae-934d33310d73
W0203 03:38:28.318] 2019/02/03 03:38:28 process.go:155: Step '/workspace/kops create cluster --name e2e-121481-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones us-west-1b --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.225+e48ac9f04ca0bc --admin-access 35.188.149.18/32 --cloud aws --override cluster.spec.nodePortAccess=35.188.149.18/32' finished in 578.218164ms
W0203 03:38:28.358] 2019/02/03 03:38:28 process.go:153: Running: /workspace/kops export kubecfg e2e-121481-dba53.test-cncf-aws.k8s.io
W0203 03:38:28.897]
W0203 03:38:28.898] error reading cluster configuration: error reading cluster configuration "e2e-121481-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121481-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0203 03:38:28.898]   status code: 401, request id: 3bb6d7bb-f298-48d3-b453-c72db865cda8
W0203 03:38:28.902] 2019/02/03 03:38:28 process.go:155: Step '/workspace/kops export kubecfg e2e-121481-dba53.test-cncf-aws.k8s.io' finished in 544.206533ms
W0203 03:38:28.902] 2019/02/03 03:38:28 process.go:153: Running: /workspace/kops get clusters e2e-121481-dba53.test-cncf-aws.k8s.io
W0203 03:38:29.470]
W0203 03:38:29.471] error reading cluster configuration "e2e-121481-dba53.test-cncf-aws.k8s.io": error reading s3://k8s-kops-prow/e2e-121481-dba53.test-cncf-aws.k8s.io/config: Unable to list AWS regions: AuthFailure: AWS was not able to validate the provided access credentials
W0203 03:38:29.471]   status code: 401, request id: 0355f57c-22b0-4444-893b-02379e4e6bc5
W0203 03:38:29.475] 2019/02/03 03:38:29 process.go:155: Step '/workspace/kops get clusters e2e-121481-dba53.test-cncf-aws.k8s.io' finished in 573.038654ms
W0203 03:38:29.496] 2019/02/03 03:38:29 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0203 03:38:29.496] 2019/02/03 03:38:29 main.go:297: Something went wrong: starting e2e cluster: kops configuration failed: error during /workspace/kops create cluster --name e2e-121481-dba53.test-cncf-aws.k8s.io --ssh-public-key /workspace/.ssh/kube_aws_rsa.pub --node-count 4 --node-volume-size 48 --master-volume-size 48 --master-count 1 --zones us-west-1b --master-size c4.large --kubernetes-version https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.225+e48ac9f04ca0bc --admin-access 35.188.149.18/32 --cloud aws --override cluster.spec.nodePortAccess=35.188.149.18/32: exit status 1
W0203 03:38:29.499] Traceback (most recent call last):
W0203 03:38:29.499]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 764, in <module>
W0203 03:38:29.538]     main(parse_args())
W0203 03:38:29.538]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 615, in main
W0203 03:38:29.538]     mode.start(runner_args)
W0203 03:38:29.538]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0203 03:38:29.538]     check_env(env, self.command, *args)
W0203 03:38:29.538]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0203 03:38:29.538]     subprocess.check_call(cmd, env=env)
W0203 03:38:29.539]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0203 03:38:29.560]     raise CalledProcessError(retcode, cmd)
W0203 03:38:29.561] subprocess.CalledProcessError: Command '('/workspace/kops-e2e-runner.sh', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=bazel', '--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws', '--kops-kubernetes-version=https://storage.googleapis.com/kubernetes-release-pull/ci/pull-kubernetes-e2e-kops-aws/v1.14.0-alpha.2.225+e48ac9f04ca0bc', '--up', '--down', '--test', '--provider=aws', '--cluster=e2e-121481-dba53', '--gcp-network=e2e-121481-dba53', '--extract=local', '--ginkgo-parallel', '--test_args=--ginkgo.flakeAttempts=2 --ginkgo.skip=\\[Slow\\]|\\[Serial\\]|\\[Disruptive\\]|\\[Flaky\\]|\\[Feature:.+\\]|\\[HPA\\]|Dashboard|Services.*functioning.*NodePort', '--timeout=55m', '--kops-cluster=e2e-121481-dba53.test-cncf-aws.k8s.io', '--kops-zones=us-west-1b', '--kops-state=s3://k8s-kops-prow/', '--kops-nodes=4', '--kops-ssh-key=/workspace/.ssh/kube_aws_rsa', '--kops-ssh-user=admin')' returned non-zero exit status 1
E0203 03:38:29.569] Command failed
I0203 03:38:29.569] process 539 exited with code 1 after 10.4m
E0203 03:38:29.570] FAIL: pull-kubernetes-e2e-kops-aws
I0203 03:38:29.570] Call: gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0203 03:38:30.311] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0203 03:38:30.356] process 4231 exited with code 0 after 0.0m
I0203 03:38:30.357] Call: gcloud config get-value account
I0203 03:38:30.788] process 4243 exited with code 0 after 0.0m
I0203 03:38:30.788] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0203 03:38:30.788] Upload result and artifacts...
I0203 03:38:30.788] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121481
I0203 03:38:30.789] Call: gsutil ls gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121481/artifacts
W0203 03:38:31.712] CommandException: One or more URLs matched no objects.
E0203 03:38:31.825] Command failed
I0203 03:38:31.825] process 4255 exited with code 1 after 0.0m
W0203 03:38:31.825] Remote dir gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121481/artifacts not exist yet
I0203 03:38:31.825] Call: gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/73404/pull-kubernetes-e2e-kops-aws/121481/artifacts
I0203 03:38:33.591] process 4397 exited with code 0 after 0.0m
I0203 03:38:33.592] Call: git rev-parse HEAD
I0203 03:38:33.596] process 4921 exited with code 0 after 0.0m
... skipping 21 lines ...
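Every kops call in the log above fails the same way before any cluster resources are touched: reading the s3://k8s-kops-prow state store aborts with "Unable to list AWS regions: AuthFailure" (status code 401), so the --up phase never gets past configuration and the e2e suite is not run. As a minimal standalone sketch of that credential check (assuming the AWS SDK for Go v1 and the us-west-1 region implied by --zones us-west-1b; the program itself is illustrative and not part of kops), the region-listing call that kops reports as failing looks like this:

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Illustrative credential check: kops reports "Unable to list AWS regions"
	// when an EC2 DescribeRegions call is rejected. The region is assumed from
	// the job's --zones us-west-1b; credentials come from the usual SDK chain
	// (environment variables, shared credentials file, or instance profile).
	sess, err := session.NewSession(&aws.Config{Region: aws.String("us-west-1")})
	if err != nil {
		log.Fatalf("creating AWS session: %v", err)
	}

	out, err := ec2.New(sess).DescribeRegions(&ec2.DescribeRegionsInput{})
	if err != nil {
		// An AuthFailure here reproduces the 401 seen in the build log.
		log.Fatalf("DescribeRegions failed: %v", err)
	}
	fmt.Printf("credentials accepted; %d regions visible\n", len(out.Regions))
}

If this check fails with the same AuthFailure in the job's environment, the rejected credentials injected into the prow job are the likely culprit rather than anything in the PR under test, which only touches a kubelet volumemanager unit test.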