Result: FAILURE
Tests: 1 failed / 25 succeeded
Started: 2020-01-17 03:34
Elapsed: 31m20s
Revision: v1.18.0-alpha.1.844+90d6484f1c5911
links: {u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/75e61406-0b76-4b41-b3c2-699e31cbf486/targets/test'}}
resultstore: https://source.cloud.google.com/results/invocations/75e61406-0b76-4b41-b3c2-699e31cbf486/targets/test
job-version: v1.18.0-alpha.1.844+90d6484f1c5911
master_os_image: cos-77-12371-89-0
node_os_image: cos-77-12371-89-0
revision: v1.18.0-alpha.1.844+90d6484f1c5911

Test Failures


listResources After 1m22s

Failed to list resources (error during ./cluster/gce/list-resources.sh: exit status 2):
Project: k8s-gci-gce-reboot-1-5
Region: 
Zone: 
Instance prefix: canary-e2e
Network: canary-e2e
Provider: gce


[ compute instance-templates ]



[ compute instance-groups ]



[ compute instances ]

				from junit_runner.xml
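Every resource type in the listing above came back empty before the script aborted. Below is a minimal sketch of the kind of per-type gcloud listing that ./cluster/gce/list-resources.sh performs (an assumption for illustration, not the script's exact code); the PROJECT and PREFIX values mirror the header printed above, and any gcloud call that exits non-zero (for example the "Unknown zone" errors shown later in the build log) would propagate up as the reported exit status 2:

  # Hypothetical sketch; the real script enumerates more resource types.
  PROJECT=k8s-gci-gce-reboot-1-5   # "Project:" from the header above
  PREFIX=canary-e2e                # "Instance prefix:" from the header above
  for resource in instance-templates instance-groups instances; do
    echo "[ compute ${resource} ]"
    # List one resource type, filtered by the instance prefix; a failing
    # gcloud call aborts the whole listing with a non-zero exit.
    gcloud compute "${resource}" list \
      --project="${PROJECT}" \
      --filter="name ~ '^${PREFIX}'" || exit 2
  done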



25 passed tests and 4835 skipped tests are not shown here.

Error lines from build-log.txt

Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
fatal: not a git repository (or any of the parent directories): .git
+ /workspace/scenarios/kubernetes_e2e.py --check-leaked-resources --check-version-skew=false --cluster=canary-e2e --env=ENABLE_POD_SECURITY_POLICY=true --extract=ci/k8s-stable1 --extract=ci/latest --gcp-cloud-sdk=gs://cloud-sdk-testing/ci/staging --gcp-nodes=4 --gcp-zone=us-west1-b --provider=gce '--test_args=--ginkgo.focus=Variable.Expansion --ginkgo.skip=\[Feature:.+\] --kubectl-path=../../../../kubernetes_skew/cluster/kubectl.sh --minStartupPods=8' --timeout=40m
starts with local mode
Environment:
ARTIFACTS=/logs/artifacts
BAZEL_REMOTE_CACHE_ENABLED=false
BAZEL_VERSION=0.23.2
... skipping 440 lines ...
Project: k8s-gci-gce-reboot-1-5
Network Project: k8s-gci-gce-reboot-1-5
Zone: us-west1-b
INSTANCE_GROUPS=
NODE_NAMES=
Bringing down cluster
ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Invalid value for field 'zone': 'asia-northeast3-a'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-b'. Unknown zone.
 - Invalid value for field 'zone': 'asia-northeast3-c'. Unknown zone.

Deleting firewall rules remaining in network canary-e2e: 
W0117 03:37:42.480767    1602 loader.go:223] Config not found: /workspace/.kube/config
... skipping 159 lines ...
Trying to find master named 'canary-e2e-master'
Looking for address 'canary-e2e-master-ip'
Using master: canary-e2e-master (external IP: 34.83.103.102; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.
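
A minimal sketch of the readiness loop this message describes (assumed behavior, not kube-up.sh's exact code): poll the apiserver's /healthz endpoint, print a dot per attempt, and give up once the 300-second budget is exhausted.

  MASTER_IP=34.83.103.102          # the master's external IP printed above
  deadline=$(( $(date +%s) + 300 ))
  until curl -ks --max-time 5 "https://${MASTER_IP}/healthz" >/dev/null; do
    printf '.'
    if [ "$(date +%s)" -ge "${deadline}" ]; then
      echo "Timed out waiting for cluster initialization." >&2
      exit 1
    fi
    sleep 2
  done
  echo "Kubernetes cluster created."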

.............Kubernetes cluster created.
Cluster "k8s-gci-gce-reboot-1-5_canary-e2e" set.
User "k8s-gci-gce-reboot-1-5_canary-e2e" set.
Context "k8s-gci-gce-reboot-1-5_canary-e2e" created.
Switched to context "k8s-gci-gce-reboot-1-5_canary-e2e".
User "k8s-gci-gce-reboot-1-5_canary-e2e-basic-auth" set.
Wrote config for k8s-gci-gce-reboot-1-5_canary-e2e to /workspace/.kube/config
ERROR: (gcloud.compute.instances.add-metadata) Could not fetch resource:
 - Required 'compute.instances.get' permission for 'projects/k8s-prow-builds/zones/us-west1-b/instances/canary-e2e-master'


Kubernetes cluster is running.  The master is running at:

  https://34.83.103.102
... skipping 17 lines ...
canary-e2e-master              Ready,SchedulingDisabled   <none>   35s   v1.18.0-alpha.1.844+90d6484f1c5911
canary-e2e-minion-group-58sv   Ready                      <none>   32s   v1.18.0-alpha.1.844+90d6484f1c5911
canary-e2e-minion-group-lc6g   Ready                      <none>   33s   v1.18.0-alpha.1.844+90d6484f1c5911
canary-e2e-minion-group-qjqm   Ready                      <none>   32s   v1.18.0-alpha.1.844+90d6484f1c5911
canary-e2e-minion-group-zq6w   Ready                      <none>   32s   v1.18.0-alpha.1.844+90d6484f1c5911
Validate output:
NAME                 STATUS    MESSAGE             ERROR
etcd-1               Healthy   {"health":"true"}   
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
Cluster validation succeeded
Done, listing cluster services:
... skipping 90 lines ...
Using master: canary-e2e-master (external IP: 34.83.103.102; internal IP: (not set))
Jan 17 03:43:11.640: INFO: Fetching cloud provider for "gce"
I0117 03:43:11.640582    9945 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0117 03:43:11.641285    9945 gce.go:860] Using DefaultTokenSource &oauth2.reuseTokenSource{new:jwt.jwtSource{ctx:(*context.emptyCtx)(0xc000060090), conf:(*jwt.Config)(0xc001ed5ae0)}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
W0117 03:43:11.722398    9945 gce.go:460] No network name or URL specified.
I0117 03:43:11.722564    9945 e2e.go:109] Starting e2e run "04d0e89e-30ab-42d1-9482-2d44a41538d6" on Ginkgo node 1
{"msg":"Test Suite starting","total":9,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579232590 - Will randomize all specs
Will run 9 of 4844 specs
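
The 9 specs chosen out of 4844 follow from the focus/skip regexes passed via --test_args in the kubernetes_e2e.py invocation near the top of this log. Roughly the equivalent direct run of the e2e test binary (a sketch under that assumption; the binary path is illustrative):

  ./e2e.test \
    --kubeconfig=/workspace/.kube/config \
    --ginkgo.focus='Variable.Expansion' \
    --ginkgo.skip='\[Feature:.+\]' \
    --minStartupPods=8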

Jan 17 03:43:16.332: INFO: cluster-master-image: cos-77-12371-89-0
Jan 17 03:43:16.332: INFO: cluster-node-image: cos-77-12371-89-0
Jan 17 03:43:16.332: INFO: >>> kubeConfig: /workspace/.kube/config
Jan 17 03:43:16.335: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 17 03:43:16.477: INFO: Waiting up to 10m0s for all pods (need at least 8) in namespace 'kube-system' to be running and ready
Jan 17 03:43:16.619: INFO: The status of Pod fluentd-gcp-v3.2.0-f4vs2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 17 03:43:16.619: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 17 03:43:16.619: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jan 17 03:43:16.619: INFO: POD                       NODE               PHASE    GRACE  CONDITIONS
Jan 17 03:43:16.619: INFO: fluentd-gcp-v3.2.0-f4vs2  canary-e2e-master  Pending  60s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-17 03:41:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-17 03:43:07 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-17 03:43:07 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-17 03:41:58 +0000 UTC  }]
Jan 17 03:43:16.619: INFO: 
Jan 17 03:43:18.747: INFO: The status of Pod fluentd-gcp-v3.2.0-f4vs2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 17 03:43:18.747: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Jan 17 03:43:18.747: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jan 17 03:43:18.747: INFO: POD                       NODE               PHASE    GRACE  CONDITIONS
Jan 17 03:43:18.747: INFO: fluentd-gcp-v3.2.0-f4vs2  canary-e2e-master  Pending  60s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-17 03:41:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-17 03:43:07 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-17 03:43:07 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-17 03:41:58 +0000 UTC  }]
Jan 17 03:43:18.747: INFO: 
Jan 17 03:43:20.760: INFO: The status of Pod fluentd-gcp-v3.2.0-bjx4s is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 17 03:43:20.760: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
Jan 17 03:43:20.760: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jan 17 03:43:20.760: INFO: POD                       NODE               PHASE    GRACE  CONDITIONS
Jan 17 03:43:20.760: INFO: fluentd-gcp-v3.2.0-bjx4s  canary-e2e-master  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-17 03:43:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-17 03:43:19 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-17 03:43:19 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-17 03:43:19 +0000 UTC  }]
Jan 17 03:43:20.760: INFO: 
Jan 17 03:43:22.747: INFO: The status of Pod fluentd-gcp-v3.2.0-bjx4s is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 17 03:43:22.747: INFO: 30 / 31 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
Jan 17 03:43:22.747: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jan 17 03:43:22.747: INFO: POD                       NODE               PHASE    GRACE  CONDITIONS
Jan 17 03:43:22.747: INFO: fluentd-gcp-v3.2.0-bjx4s  canary-e2e-master  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-17 03:43:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-17 03:43:19 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-17 03:43:19 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-17 03:43:19 +0000 UTC  }]
Jan 17 03:43:22.747: INFO: 
Jan 17 03:43:24.744: INFO: 31 / 31 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
... skipping 32 lines ...
Jan 17 03:43:27.681: INFO: Waiting for pod var-expansion-879d8da0-9f0a-488a-81a5-5319801e3191 to disappear
Jan 17 03:43:27.716: INFO: Pod var-expansion-879d8da0-9f0a-488a-81a5-5319801e3191 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 17 03:43:27.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2996" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":9,"completed":1,"skipped":255,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS[0