Result: FAILURE
Tests: 1 failed / 25 succeeded
Started: 2020-01-17 07:20
Elapsed: 31m8s
Revision: v1.18.0-alpha.1.848+916edd922e528f
resultstore: https://source.cloud.google.com/results/invocations/e6a5eece-5c40-4673-9dc6-f29dda861b90/targets/test
job-version: v1.18.0-alpha.1.848+916edd922e528f
master_os_image: cos-77-12371-89-0
node_os_image: cos-77-12371-89-0

Test Failures


listResources After (1m23s)

Failed to list resources (error during ./cluster/gce/list-resources.sh: exit status 2):
Project: k8s-boskos-gce-project-19
Region: 
Zone: 
Instance prefix: canary-e2e
Network: canary-e2e
Provider: gce


[ compute instance-templates ]



[ compute instance-groups ]



[ compute instances ]

				from junit_runner.xml
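The failing step, listResources After, is the leak check enabled by --check-leaked-resources (visible in the kubernetes_e2e.py invocation below): kubetest snapshots GCP resources with ./cluster/gce/list-resources.sh before the run and again after teardown, then diffs the two snapshots. Here the "after" snapshot itself aborted (exit status 2 from a gcloud call inside the script, matching the "Unknown zone" errors later in this log), which is why the resource sections above are empty. A minimal sketch of the flow, with artifact file names assumed:

    # Sketch of the --check-leaked-resources flow (inferred from this job's
    # output; the snapshot file names are assumptions, not kubetest's exact code).
    ./cluster/gce/list-resources.sh > "${ARTIFACTS}/gcp-resources-before.txt"
    # ... cluster up, e2e run, cluster down ...
    ./cluster/gce/list-resources.sh > "${ARTIFACTS}/gcp-resources-after.txt"   # failed here: exit status 2
    diff "${ARTIFACTS}/gcp-resources-before.txt" "${ARTIFACTS}/gcp-resources-after.txt"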

25 passed tests and 4835 skipped tests (not shown)

Error lines from build-log.txt

Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
fatal: not a git repository (or any of the parent directories): .git
+ /workspace/scenarios/kubernetes_e2e.py --check-leaked-resources --check-version-skew=false --cluster=canary-e2e --env=ENABLE_POD_SECURITY_POLICY=true --extract=ci/k8s-stable1 --extract=ci/latest --gcp-cloud-sdk=gs://cloud-sdk-testing/ci/staging --gcp-nodes=4 --gcp-zone=us-west1-b --provider=gce '--test_args=--ginkgo.focus=Variable.Expansion --ginkgo.skip=\[Feature:.+\] --kubectl-path=../../../../kubernetes_skew/cluster/kubectl.sh --minStartupPods=8' --timeout=40m
starts with local mode
Environment:
ARTIFACTS=/logs/artifacts
BAZEL_REMOTE_CACHE_ENABLED=false
BAZEL_VERSION=0.23.2
... skipping 440 lines ...
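The kubernetes_e2e.py invocation above narrows the run with Ginkgo regexes: --ginkgo.focus=Variable.Expansion selects only the Variable Expansion specs and --ginkgo.skip=\[Feature:.+\] drops feature-gated specs, which is why only 9 of 4844 specs execute later in the log. A sketch of reproducing the same filter by hand, assuming a built kubernetes checkout and a kubeconfig pointing at a live cluster:

    # Sketch: re-run the same spec selection locally (assumptions: built tree,
    # live cluster; hack/ginkgo-e2e.sh forwards these flags to the e2e binary).
    export KUBECONFIG="${HOME}/.kube/config"
    ./hack/ginkgo-e2e.sh \
      --ginkgo.focus='Variable.Expansion' \
      --ginkgo.skip='\[Feature:.+\]' \
      --minStartupPods=8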
Project: k8s-boskos-gce-project-19
Network Project: k8s-boskos-gce-project-19
Zone: us-west1-b
INSTANCE_GROUPS=
NODE_NAMES=
Bringing down cluster
ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.
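The repeated "Unknown zone" failures are gcloud listing instances in zones the project cannot use: asia-northeast3 (Seoul) had only just launched when this job ran, so the zones show up during enumeration but the instances API rejects them. A sketch of the failing pattern (the per-zone loop is an assumption about the script's logic, not a quote from it):

    # Sketch: enumerate zones, then list instances per zone; a zone that is
    # visible in the zone list but not usable by the project makes the
    # instances call exit non-zero, as seen above.
    for zone in $(gcloud compute zones list --format='value(name)'); do
      gcloud compute instances list --zones="${zone}" || echo "listing failed in ${zone}"
    done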

Deleting firewall rules remaining in network canary-e2e: 
W0117 07:22:59.446166    1601 loader.go:223] Config not found: /workspace/.kube/config
... skipping 159 lines ...
Trying to find master named 'canary-e2e-master'
Looking for address 'canary-e2e-master-ip'
Using master: canary-e2e-master (external IP: 34.82.96.180; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

...............Kubernetes cluster created.
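The dots above are the readiness poll printing progress; conceptually it just retries a request against the master until the API answers. A minimal sketch using the master IP from this log (not the literal kube-up.sh loop):

    # Sketch of the "Waiting up to 300 seconds" poll (simplified; no timeout shown).
    until curl --silent --insecure --max-time 5 https://34.82.96.180/healthz >/dev/null; do
      printf '.'
      sleep 2
    done
    echo "Kubernetes cluster created."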
Cluster "k8s-boskos-gce-project-19_canary-e2e" set.
User "k8s-boskos-gce-project-19_canary-e2e" set.
Context "k8s-boskos-gce-project-19_canary-e2e" created.
Switched to context "k8s-boskos-gce-project-19_canary-e2e".
User "k8s-boskos-gce-project-19_canary-e2e-basic-auth" set.
Wrote config for k8s-boskos-gce-project-19_canary-e2e to /workspace/.kube/config
ERROR: (gcloud.compute.instances.add-metadata) Could not fetch resource:
 - Required 'compute.instances.get' permission for 'projects/k8s-prow-builds/zones/us-west1-b/instances/canary-e2e-master'
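This add-metadata failure is a project mismatch: the resource path in the error points at projects/k8s-prow-builds, while the cluster actually lives in k8s-boskos-gce-project-19, so the job's service account lacks compute.instances.get there. Pinning the project explicitly avoids inheriting a stale gcloud default; a sketch using names from this log (the metadata key/value is a placeholder):

    # Sketch: target the boskos project explicitly instead of the gcloud default.
    gcloud compute instances add-metadata canary-e2e-master \
      --project=k8s-boskos-gce-project-19 \
      --zone=us-west1-b \
      --metadata=example-key=example-value   # placeholder; real metadata not shown in the log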


Kubernetes cluster is running.  The master is running at:

  https://34.82.96.180
... skipping 12 lines ...
canary-e2e-master              Ready,SchedulingDisabled   <none>   29s   v1.18.0-alpha.1.848+916edd922e528f
canary-e2e-minion-group-8npv   Ready                      <none>   23s   v1.18.0-alpha.1.848+916edd922e528f
canary-e2e-minion-group-cxrh   Ready                      <none>   23s   v1.18.0-alpha.1.848+916edd922e528f
canary-e2e-minion-group-kkmz   Ready                      <none>   23s   v1.18.0-alpha.1.848+916edd922e528f
canary-e2e-minion-group-qk1v   Ready                      <none>   33s   v1.18.0-alpha.1.848+916edd922e528f
Validate output:
NAME                 STATUS    MESSAGE             ERROR
etcd-1               Healthy   {"health":"true"}   
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
Cluster validation succeeded
Done, listing cluster services:
... skipping 90 lines ...
Using master: canary-e2e-master (external IP: 34.82.96.180; internal IP: (not set))
Jan 17 07:28:26.343: INFO: Fetching cloud provider for "gce"
I0117 07:28:26.343536    9963 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0117 07:28:26.344272    9963 gce.go:860] Using DefaultTokenSource &oauth2.reuseTokenSource{new:jwt.jwtSource{ctx:(*context.emptyCtx)(0xc000060090), conf:(*jwt.Config)(0xc002550820)}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
W0117 07:28:26.424394    9963 gce.go:460] No network name or URL specified.
I0117 07:28:26.424565    9963 e2e.go:109] Starting e2e run "23ab4ad3-d97f-47fd-a41e-71e56dd8deed" on Ginkgo node 1
{"msg":"Test Suite starting","total":9,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579246104 - Will randomize all specs
Will run 9 of 4844 specs
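Ginkgo randomizes spec order per run; the seed printed above makes the ordering reproducible. A sketch of replaying this exact ordering (assuming hack/ginkgo-e2e.sh forwards the flag to the e2e binary; the run's outcome still depends on cluster state):

    # Sketch: pin the printed seed to replay the same spec ordering.
    ./hack/ginkgo-e2e.sh \
      --ginkgo.seed=1579246104 \
      --ginkgo.focus='Variable.Expansion' \
      --ginkgo.skip='\[Feature:.+\]'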

Jan 17 07:28:31.189: INFO: cluster-master-image: cos-77-12371-89-0
Jan 17 07:28:31.189: INFO: cluster-node-image: cos-77-12371-89-0
Jan 17 07:28:31.190: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 07:28:31.192: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 17 07:28:31.335: INFO: Waiting up to 10m0s for all pods (need at least 8) in namespace 'kube-system' to be running and ready
Jan 17 07:28:31.500: INFO: The status of Pod fluentd-gcp-v3.2.0-fp4sg is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 17 07:28:31.500: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 17 07:28:31.500: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jan 17 07:28:31.500: INFO: POD                       NODE               PHASE    GRACE  CONDITIONS
Jan 17 07:28:31.500: INFO: fluentd-gcp-v3.2.0-fp4sg  canary-e2e-master  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-17 07:28:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-17 07:28:27 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-17 07:28:27 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-17 07:28:27 +0000 UTC  }]
Jan 17 07:28:31.500: INFO: 
Jan 17 07:28:33.648: INFO: 31 / 31 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
... skipping 32 lines ...
Jan 17 07:28:36.646: INFO: Waiting for pod var-expansion-11cb3783-b826-47f7-aa05-f69fe185576e to disappear
Jan 17 07:28:36.682: INFO: Pod var-expansion-11cb3783-b826-47f7-aa05-f69fe185576e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 07:28:36.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6962" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":9,"completed":1,"skipped":1333,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
... skipping 16 lines ...
Jan 17 07:28:39.352: INFO: Waiting for pod var-expansion-bac31596-e1f5-4181-8e6c-981cc45c92ee to disappear
Jan 17 07:28:39.388: INFO: Pod var-expansion-bac31596-e1f5-4181-8e6c-981cc45c92ee no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 07:28:39.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1104" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":9,"completed":2,"skipped":1430,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS