Result: FAILURE
Tests: 1 failed / 6 succeeded
Started: 2021-12-07 18:14
Elapsed: 1h7m
Revision: master
job-version: v1.23.0-rc.1.1+532d2a36e39fb0
kubetest-version:
revision: v1.23.0-rc.1.1+532d2a36e39fb0

Test Failures


kubetest Up 58m52s

error during ./hack/e2e-internal/e2e-up.sh: exit status 1
				from junit_runner.xml
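The failure is the kubetest "Up" step: the GCE cluster bring-up script exited non-zero before tests could run. As a rough sketch only (none of the following appears in this log), assuming a kubernetes/kubernetes checkout and gcloud credentials for a GCE project, the same step can be rerun with something like:

    # Hedged sketch, not taken from this log: rerun the failing bring-up step.
    export KUBERNETES_PROVIDER=gce     # assumption: this job targets the GCE provider
    export KUBE_GCE_ZONE=us-west1-b    # zone taken from resource paths later in the log
    ./hack/e2e-internal/e2e-up.sh      # the script whose "exit status 1" failed the job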




Error lines from build-log.txt

... skipping 501 lines ...
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
ERROR: (gcloud.compute.instance-groups.managed.wait-until) Timeout while waiting for group to become stable.
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
Waiting for group to become stable, current operations: creating: 2
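The timeout above is gcloud giving up while waiting for the minion managed instance group to become stable (two instances were still in "creating"). As a hedged example, not part of this log, the group can be inspected by hand; the group, zone, and project names below are taken from lines further down in the log:

    # Hedged example: inspect the managed instance group that never became stable.
    gcloud compute instance-groups managed describe e2e-80a55e4d83-c355d-minion-group \
        --zone us-west1-b --project k8s-jenkins-gci-kubemark
    # List its instances and the operation each one is stuck on (e.g. CREATING).
    gcloud compute instance-groups managed list-instances e2e-80a55e4d83-c355d-minion-group \
        --zone us-west1-b --project k8s-jenkins-gci-kubemark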
... skipping 44 lines ...
Trying to find master named 'e2e-80a55e4d83-c355d-master'
Looking for address 'e2e-80a55e4d83-c355d-master-ip'
Using master: e2e-80a55e4d83-c355d-master (external IP: 104.196.241.138; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "k8s-jenkins-gci-kubemark_e2e-80a55e4d83-c355d" set.
User "k8s-jenkins-gci-kubemark_e2e-80a55e4d83-c355d" set.
Context "k8s-jenkins-gci-kubemark_e2e-80a55e4d83-c355d" created.
Switched to context "k8s-jenkins-gci-kubemark_e2e-80a55e4d83-c355d".
... skipping 135 lines ...

Specify --start=53006 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts'
Detecting nodes in the cluster
Warning: Permanently added 'compute.1582899656000674670' (ECDSA) to the list of known hosts.
Warning: Permanently added 'compute.4697118453067291502' (ECDSA) to the list of known hosts.
Warning: Permanently added 'compute.1196461360817511278' (ECDSA) to the list of known hosts.
Get-EventLog : No matches found
... skipping 38 lines ...
At line:1 char:9
+ $logs=$(docker image list); $logs | Out-File -FilePath 'C:\etc\kubern ...
+         ~~~~~~
    + CategoryInfo          : ObjectNotFound: (docker:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
 
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/e2e-80a55e4d83-c355d-minion-group-2snx' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/e2e-80a55e4d83-c355d-minion-group-h5j2' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/e2e-80a55e4d83-c355d-minion-group-2snx' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/e2e-80a55e4d83-c355d-minion-group-h5j2' was not found

ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/e2e-80a55e4d83-c355d-minion-group-2snx' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/e2e-80a55e4d83-c355d-minion-group-h5j2' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/e2e-80a55e4d83-c355d-minion-group-2snx' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/e2e-80a55e4d83-c355d-minion-group-h5j2' was not found


Specify --start=300983 in the next get-serial-port-output invocation to get only the new output starting from here.

Specify --start=302135 in the next get-serial-port-output invocation to get only the new output starting from here.

Specify --start=301840 in the next get-serial-port-output invocation to get only the new output starting from here.
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/e2e-80a55e4d83-c355d-minion-group-2snx' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/e2e-80a55e4d83-c355d-minion-group-h5j2' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/e2e-80a55e4d83-c355d-minion-group-2snx' was not found

ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/e2e-80a55e4d83-c355d-minion-group-h5j2' was not found

Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log kern.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from e2e-80a55e4d83-c355d-minion-group-2snx
Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log kern.log docker/log kubelet.log supervisor/supervisord.log supervisor/kubelet-stdout.log supervisor/kubelet-stderr.log supervisor/docker-stdout.log supervisor/docker-stderr.log' from e2e-80a55e4d83-c355d-minion-group-h5j2
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/e2e-80a55e4d83-c355d-minion-group-2snx' was not found
ERROR: (gcloud.compute.instances.get-serial-port-output) Could not fetch serial port output: The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/e2e-80a55e4d83-c355d-minion-group-h5j2' was not found
ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/e2e-80a55e4d83-c355d-minion-group-2snx' was not found

ERROR: (gcloud.compute.scp) Could not fetch resource:
 - The resource 'projects/k8s-jenkins-gci-kubemark/zones/us-west1-b/instances/e2e-80a55e4d83-c355d-minion-group-h5j2' was not found

INSTANCE_GROUPS=e2e-80a55e4d83-c355d-minion-group
NODE_NAMES=e2e-80a55e4d83-c355d-minion-group-2snx e2e-80a55e4d83-c355d-minion-group-h5j2
Failures for e2e-80a55e4d83-c355d-minion-group (if any):
Failures for e2e-80a55e4d83-c355d-windows-node-group (if any):
... skipping 48 lines ...
Property "users.k8s-jenkins-gci-kubemark_e2e-80a55e4d83-c355d-basic-auth" unset.
Property "contexts.k8s-jenkins-gci-kubemark_e2e-80a55e4d83-c355d" unset.
Cleared config for k8s-jenkins-gci-kubemark_e2e-80a55e4d83-c355d from /workspace/.kube/config
Done
2021/12/07 19:22:27 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 4m42.011875727s
2021/12/07 19:22:27 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2021/12/07 19:22:27 main.go:331: Something went wrong: starting e2e cluster: error during ./hack/e2e-internal/e2e-up.sh: exit status 1
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 9 lines ...