Result: FAILURE
Tests: 1 failed / 25 succeeded
Started: 2020-01-17 14:40
Elapsed: 30m38s
Revision:
resultstore: https://source.cloud.google.com/results/invocations/31836290-2b63-47bb-8524-bfd6b2bc600f/targets/test
job-version: v1.18.0-alpha.1.888+50f9ea79994634
master_os_image: cos-77-12371-89-0
node_os_image: cos-77-12371-89-0
revision: v1.18.0-alpha.1.888+50f9ea79994634

Test Failures


listResources After (1m21s)

Failed to list resources (error during ./cluster/gce/list-resources.sh: exit status 2):
Project: k8s-boskos-gce-project-09
Region: 
Zone: 
Instance prefix: canary-e2e
Network: canary-e2e
Provider: gce


[ compute instance-templates ]



[ compute instance-groups ]



[ compute instances ]

				from junit_runner.xml
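The three empty resource sections above come from the listing step in ./cluster/gce/list-resources.sh, which enumerates GCE resources matching the instance prefix. A minimal sketch of the equivalent gcloud calls follows; the loop structure, variable names, and --filter expression are illustrative assumptions, not the script's exact code:

```shell
#!/usr/bin/env bash
# Illustrative sketch only; the real list-resources.sh logic may differ.
PROJECT="k8s-boskos-gce-project-09"   # project from the failure header above
PREFIX="canary-e2e"                   # instance prefix from the failure header

# Name filter applied to each resource kind.
FILTER="name ~ '^${PREFIX}'"

for kind in instance-templates instance-groups instances; do
  echo "[ compute ${kind} ]"
  gcloud compute "${kind}" list --project "${PROJECT}" --filter="${FILTER}"
done
```

An exit status of 2 from gcloud (as in the failure above) typically indicates that one of these list calls itself failed, rather than simply returning no resources.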



Passed tests: 25

Skipped tests: 4835

Error lines from build-log.txt

Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
fatal: not a git repository (or any of the parent directories): .git
+ /workspace/scenarios/kubernetes_e2e.py --check-leaked-resources --check-version-skew=false --cluster=canary-e2e --env=ENABLE_POD_SECURITY_POLICY=true --extract=ci/k8s-stable1 --extract=ci/latest --gcp-cloud-sdk=gs://cloud-sdk-testing/ci/staging --gcp-nodes=4 --gcp-zone=us-west1-b --provider=gce '--test_args=--ginkgo.focus=Variable.Expansion --ginkgo.skip=\[Feature:.+\] --kubectl-path=../../../../kubernetes_skew/cluster/kubectl.sh --minStartupPods=8' --timeout=40m
starts with local mode
Environment:
ARTIFACTS=/logs/artifacts
BAZEL_REMOTE_CACHE_ENABLED=false
BAZEL_VERSION=0.23.2
... skipping 440 lines ...
Project: k8s-boskos-gce-project-09
Network Project: k8s-boskos-gce-project-09
Zone: us-west1-b
INSTANCE_GROUPS=
NODE_NAMES=
Bringing down cluster
ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

... same gcloud.compute.instances.list error block repeated 3 more times ...
Deleting firewall rules remaining in network canary-e2e: 
W0117 14:43:47.734391    1601 loader.go:223] Config not found: /workspace/.kube/config
... skipping 159 lines ...
Trying to find master named 'canary-e2e-master'
Looking for address 'canary-e2e-master-ip'
Using master: canary-e2e-master (external IP: 34.83.43.254; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

...............Kubernetes cluster created.
Cluster "k8s-boskos-gce-project-09_canary-e2e" set.
User "k8s-boskos-gce-project-09_canary-e2e" set.
Context "k8s-boskos-gce-project-09_canary-e2e" created.
Switched to context "k8s-boskos-gce-project-09_canary-e2e".
User "k8s-boskos-gce-project-09_canary-e2e-basic-auth" set.
Wrote config for k8s-boskos-gce-project-09_canary-e2e to /workspace/.kube/config
ERROR: (gcloud.compute.instances.add-metadata) Could not fetch resource:
 - Required 'compute.instances.get' permission for 'projects/k8s-prow-builds/zones/us-west1-b/instances/canary-e2e-master'


Kubernetes cluster is running.  The master is running at:

  https://34.83.43.254
... skipping 11 lines ...
canary-e2e-master              Ready,SchedulingDisabled   <none>   22s   v1.18.0-alpha.1.888+50f9ea79994634
canary-e2e-minion-group-f6gd   Ready                      <none>   20s   v1.18.0-alpha.1.888+50f9ea79994634
canary-e2e-minion-group-glrw   Ready                      <none>   19s   v1.18.0-alpha.1.888+50f9ea79994634
canary-e2e-minion-group-nv25   Ready                      <none>   20s   v1.18.0-alpha.1.888+50f9ea79994634
canary-e2e-minion-group-wjn1   Ready                      <none>   20s   v1.18.0-alpha.1.888+50f9ea79994634
Validate output:
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
Cluster validation succeeded
Done, listing cluster services:
... skipping 95 lines ...
Using master: canary-e2e-master (external IP: 34.83.43.254; internal IP: (not set))
Jan 17 14:48:54.457: INFO: Fetching cloud provider for "gce"
I0117 14:48:54.457645    9896 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0117 14:48:54.458595    9896 gce.go:860] Using DefaultTokenSource &oauth2.reuseTokenSource{new:jwt.jwtSource{ctx:(*context.emptyCtx)(0xc0000fa010), conf:(*jwt.Config)(0xc001d6f540)}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
W0117 14:48:54.545515    9896 gce.go:460] No network name or URL specified.
I0117 14:48:54.545663    9896 e2e.go:109] Starting e2e run "4f42f42f-c58b-4b0d-83bf-263607126937" on Ginkgo node 1
{"msg":"Test Suite starting","total":9,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579272532 - Will randomize all specs
Will run 9 of 4844 specs

Jan 17 14:48:59.851: INFO: cluster-master-image: cos-77-12371-89-0
... skipping 10 lines ...
Jan 17 14:49:00.283: INFO: kube-apiserver version: v1.18.0-alpha.1.888+50f9ea79994634
Jan 17 14:49:00.283: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 14:49:00.330: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:221
[BeforeEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 17 14:49:00.336: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename var-expansion
Jan 17 14:49:00.551: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Jan 17 14:49:00.719: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-486
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:221
Jan 17 14:51:01.160: INFO: Deleting pod "var-expansion-b6145e42-f187-49b0-891b-c979a02bcab6" in namespace "var-expansion-486"
Jan 17 14:51:01.203: INFO: Wait up to 5m0s for pod "var-expansion-b6145e42-f187-49b0-891b-c979a02bcab6" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 14:51:05.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-486" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow]","total":9,"completed":1,"skipped":226,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a volume subpath [sig-storage]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:161
[BeforeEach] [k8s.io] Variable Expansion
... skipping 16 lines ...
Jan 17 14:51:08.005: INFO: Waiting for pod var-expansion-04f56d5a-b394-4041-bf99-0afb0f421efc to disappear
Jan 17 14:51:08.045: INFO: Pod var-expansion-04f56d5a-b394-4041-bf99-0afb0f421efc no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 14:51:08.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-945" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage]","total":9,"completed":2,"skipped":301,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS[