Result: FAILURE
Tests: 1 failed / 6 succeeded
Started: 2021-12-05 02:10
Elapsed: 1h23m
Revision: master
job-version: v1.23.0-rc.1.1+532d2a36e39fb0
kubetest-version:
revision: v1.23.0-rc.1.1+532d2a36e39fb0

Test Failures


kubetest Up 58m50s

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
from junit_runner.xml
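The `e2e-up.sh` failure corresponds to the `gcloud compute instance-groups managed wait-until` timeout visible in the log below: the cluster-up script polls the managed instance group until it reports stable, and gives up after a deadline. A minimal sketch of that poll-with-timeout pattern follows; the group name and timeout value are hypothetical, and the real gcloud call is stubbed out so the sketch is self-contained.

```shell
#!/bin/sh
# Sketch of a poll-until-stable loop, as e2e-up.sh effectively does via
# `gcloud compute instance-groups managed wait-until --stable`.
# GROUP and TIMEOUT are placeholder values, not taken from this job.
GROUP="${GROUP:-example-minion-group}"
TIMEOUT=600   # seconds; assumed, not the job's actual deadline

check_stable() {
  # In a real run this would query GCE, e.g.:
  #   gcloud compute instance-groups managed describe "$GROUP" \
  #     --zone "$ZONE" --format='value(status.isStable)'
  # Stubbed to "true" here so the sketch runs without cloud credentials.
  echo true
}

elapsed=0
until [ "$(check_stable)" = "true" ]; do
  if [ "$elapsed" -ge "$TIMEOUT" ]; then
    echo "Timeout while waiting for group to become stable." >&2
    exit 1
  fi
  sleep 5
  elapsed=$((elapsed + 5))
done
echo "Group $GROUP is stable after ${elapsed}s."
```

When the group never stabilizes (here, two instances stuck in `creating`), the loop exits non-zero, which propagates up as the `exit status 1` recorded above.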




Error lines from build-log.txt

... skipping 501 lines ...
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
ERROR: (gcloud.compute.instance-groups.managed.wait-until) Timeout while waiting for group to become stable.
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
... skipping 44 lines ...
Trying to find master named 'e2e-a5b44f8b64-c355d-master'
Looking for address 'e2e-a5b44f8b64-c355d-master-ip'
Using master: e2e-a5b44f8b64-c355d-master (external IP: 34.83.200.175; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "kubernetes-ingress_e2e-a5b44f8b64-c355d" set.
User "kubernetes-ingress_e2e-a5b44f8b64-c355d" set.
Context "kubernetes-ingress_e2e-a5b44f8b64-c355d" created.
Switched to context "kubernetes-ingress_e2e-a5b44f8b64-c355d".
... skipping 133 lines ...

Specify --start=52988 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
Warning: Permanently added 'compute.4247720481400135265' (ECDSA) to the list of known hosts.
Warning: Permanently added 'compute.6912573944256110177' (ECDSA) to the list of known hosts.
Get-EventLog : No matches found
At line:1 char:9
... skipping 22 lines ...
At line:1 char:9
+ $logs=$(docker image list); $logs | Out-File -FilePath 'C:\etc\kubern ...
+         ~~~~~~
    + CategoryInfo          : ObjectNotFound: (docker:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
 
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/e2e-a5b44f8b64-c355d-minion-group-6sk7' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/e2e-a5b44f8b64-c355d-minion-group-gcnm' was not found

ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/e2e-a5b44f8b64-c355d-minion-group-6sk7' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/e2e-a5b44f8b64-c355d-minion-group-gcnm' was not found

ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/e2e-a5b44f8b64-c355d-minion-group-gcnm' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/e2e-a5b44f8b64-c355d-minion-group-6sk7' was not found

ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/e2e-a5b44f8b64-c355d-minion-group-gcnm' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/e2e-a5b44f8b64-c355d-minion-group-6sk7' was not found


Specify --start=242594 in the next get-serial-port-output invocation to get only the new output starting from here.

Specify --start=301252 in the next get-serial-port-output invocation to get only the new output starting from here.
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/e2e-a5b44f8b64-c355d-minion-group-gcnm' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/e2e-a5b44f8b64-c355d-minion-group-6sk7' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/e2e-a5b44f8b64-c355d-minion-group-gcnm' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/e2e-a5b44f8b64-c355d-minion-group-6sk7' was not found

Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log kern.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from e2e-a5b44f8b64-c355d-minion-group-gcnm
Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log kern.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from e2e-a5b44f8b64-c355d-minion-group-6sk7
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/e2e-a5b44f8b64-c355d-minion-group-gcnm' was not found
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/e2e-a5b44f8b64-c355d-minion-group-6sk7' was not found
ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/e2e-a5b44f8b64-c355d-minion-group-gcnm' was not found

ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/kubernetes-ingress/zones/us-west1-b/instances/e2e-a5b44f8b64-c355d-minion-group-6sk7' was not found

ssh: connect to host 35.227.182.51 port 22: Connection timed out
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
ssh: connect to host 35.227.182.51 port 22: Connection timed out
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
ssh: connect to host 35.227.182.51 port 22: Connection timed out
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
ssh: connect to host 35.227.182.51 port 22: Connection timed out
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
ssh: connect to host 35.227.182.51 port 22: Connection timed out
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
ssh: connect to host 35.227.182.51 port 22: Connection refused
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
ssh: connect to host 35.227.182.51 port 22: Connection timed out
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ssh: connect to host 35.227.182.51 port 22: Connection refused
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ssh: connect to host 35.227.182.51 port 22: Connection refused
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ssh: connect to host 35.227.182.51 port 22: Connection timed out
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Warning: Permanently added 'compute.5232020311540604513' (ECDSA) to the list of known hosts.
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].

Specify --start=290098 in the next get-serial-port-output invocation to get only the new output starting from here.
INSTANCE_GROUPS=e2e-a5b44f8b64-c355d-minion-group
NODE_NAMES=e2e-a5b44f8b64-c355d-minion-group-6sk7 e2e-a5b44f8b64-c355d-minion-group-gcnm
Failures for e2e-a5b44f8b64-c355d-minion-group (if any):
Failures for e2e-a5b44f8b64-c355d-windows-node-group (if any):
... skipping 48 lines ...
Property "users.kubernetes-ingress_e2e-a5b44f8b64-c355d-basic-auth" unset.
Property "contexts.kubernetes-ingress_e2e-a5b44f8b64-c355d" unset.
Cleared config for kubernetes-ingress_e2e-a5b44f8b64-c355d from /workspace/.kube/config
Done
2021/12/05 03:34:16 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 4m38.179543711s
2021/12/05 03:34:16 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2021/12/05 03:34:16 main.go:331: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 9 lines ...