PR youhonglian: update github.com/pkg/errors to go native errors pkg in staging
Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2021-06-22 05:01
Elapsed: 5m22s
Revision:
Builder: dd58645a-d316-11eb-8198-82b50b807b78
Refs: master:f4e78286, 103079:8685559a
infra-commit: f9f3b1310
job-version: v1.22.0-alpha.3.368+4a5fd292d930df
kubetest-version:
repo: k8s.io/kubernetes
repo-commit: 4a5fd292d930dfae0e41f21f9aea60838f87de55
repos: {u'k8s.io/kubernetes': u'master:f4e7828674d1526f2b04def99c20242f1548c2d8,103079:8685559a20d444a4ba3de0cc44f125cf243d68cb'}
revision: v1.22.0-alpha.3.368+4a5fd292d930df

Test Failures


kubetest Node Tests 3m54s

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=core --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]|\[NodeFeature:.+\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --non-masquerade-cidr=0.0.0.0/0" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=3h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1.yaml: exit status 1
				from junit_runner.xml




Error lines from build-log.txt

... skipping 173 lines ...
W0622 05:03:06.336]   "ignition": {
W0622 05:03:06.336]     "version": "3.1.0"
W0622 05:03:06.336]   },
W0622 05:03:06.336]   "systemd": {
W0622 05:03:06.337]     "units": [
W0622 05:03:06.337]       {
W0622 05:03:06.337]         "contents": "[Unit]\nDescription=Download and install crio binaries and configurations.\nAfter=network-online.target\n\n[Service]\nType=oneshot\nExecStartPre=/usr/sbin/setenforce 1\nExecStartPre=/usr/bin/bash -c '/usr/bin/curl --fail --retry 5 --retry-delay 3 --silent --show-error -o /usr/local/crio-install.sh  https://raw.githubusercontent.com/cri-o/cri-o/master/scripts/get'\nExecStartPre=/usr/bin/bash /usr/local/crio-install.sh\nExecStartPre=/usr/bin/mkdir -p /var/lib/kubelet\nExecStartPre=/usr/bin/chcon -R -u system_u -r object_r -t var_lib_t /var/lib/kubelet\nExecStartPre=/usr/bin/mount /tmp /tmp -o remount,exec,suid\nExecStartPre=/usr/bin/chcon -u system_u -r object_r -t container_runtime_exec_t /usr/local/bin/crio /usr/local/bin/crio-status /usr/local/bin/runc /usr/local/bin/crun\nExecStartPre=/usr/bin/chcon -u system_u -r object_r -t bin_t /usr/local/bin/conmon /usr/local/bin/crictl /usr/local/bin/pinns\nExecStartPre=/usr/bin/chcon -R -u system_u -r object_r -t bin_t /opt/cni/bin/\nExecStartPre=/usr/bin/rm -f  /etc/cni/net.d/87-podman-bridge.conflist\nExecStartPre=/usr/bin/bash -c 'echo -e \"[crio.runtime]\\n  default_runtime = \\\\\\\"runc\\\\\\\"\\n[crio.runtime.runtimes]\\n  [crio.runtime.runtimes.runc]\\n    runtime_path=\\\\\\\"/usr/local/bin/runc\\\\\\\"\" \u003e /etc/crio/crio.conf.d/20-runc.conf'\nExecStartPre=/usr/bin/bash -c 'echo -e \"[crio.runtime]\\n  [crio.runtime.runtimes]\\n  [crio.runtime.runtimes.test-handler]\\n    runtime_path=\\\\\\\"/usr/local/bin/crun\\\\\\\"\" \u003e /etc/crio/crio.conf.d/10-crun.conf'\nExecStartPre=/usr/bin/chcon -R -u system_u -r object_r -t container_config_t /etc/crio /etc/crio/crio.conf /usr/local/share/oci-umount/oci-umount.d/crio-umount.conf\nExecStartPre=/usr/bin/systemctl enable crio.service\nExecStartPre=/usr/bin/chcon -R -u system_u -r object_r -t systemd_unit_file_t /usr/local/lib/systemd/system/crio.service\nExecStart=/usr/bin/systemctl start 
crio.service\n\n[Install]\nWantedBy=multi-user.target\n",
W0622 05:03:06.337]         "enabled": true,
W0622 05:03:06.338]         "name": "crio-install.service"
W0622 05:03:06.338]       }
W0622 05:03:06.338]     ]
W0622 05:03:06.338]   }
W0622 05:03:06.338] }
... skipping 4 lines ...
I0622 05:03:06.439] make: Entering directory '/go/src/k8s.io/kubernetes'
I0622 05:03:06.439] make[1]: Entering directory '/go/src/k8s.io/kubernetes'
W0622 05:03:06.560] I0622 05:03:06.560385    6139 run_remote.go:579] Creating instance {image:fedora-coreos-34-20210529-3-0-gcp-x86-64 imageDesc:fedora-coreos-34-20210529-3-0-gcp-x86-64 kernelArguments:[] project:fedora-coreos-cloud resources:{Accelerators:[]} metadata:0xc0002351f0 machine: tests:[]} with service account "1046294573453-compute@developer.gserviceaccount.com"
I0622 05:03:16.027] +++ [0622 05:03:16] Building go targets for linux/amd64:
I0622 05:03:16.027]     ./vendor/k8s.io/code-generator/cmd/prerelease-lifecycle-gen
I0622 05:03:26.202] Generating prerelease lifecycle code for 27 targets
W0622 05:03:28.240] I0622 05:03:28.240211    6139 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.127.78.140 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e docker -e containerd -e crio']
I0622 05:03:28.754] +++ [0622 05:03:28] Building go targets for linux/amd64:
I0622 05:03:28.754]     ./vendor/k8s.io/code-generator/cmd/deepcopy-gen
I0622 05:03:30.529] Generating deepcopy code for 229 targets
I0622 05:03:39.328] +++ [0622 05:03:39] Building go targets for linux/amd64:
I0622 05:03:39.329]     ./vendor/k8s.io/code-generator/cmd/defaulter-gen
I0622 05:03:40.659] Generating defaulter code for 91 targets
... skipping 13 lines ...
I0622 05:04:49.266] +++ [0622 05:04:49] Building go targets for linux/amd64:
I0622 05:04:49.267]     cmd/kubelet
I0622 05:04:49.267]     test/e2e_node/e2e_node.test
I0622 05:04:49.267]     vendor/github.com/onsi/ginkgo/ginkgo
I0622 05:04:49.267]     cluster/gce/gci/mounter
W0622 05:05:32.564] # k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol
W0622 05:05:32.565] vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller.go:659:15: undefined: fmt.Error
W0622 05:05:32.565] vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller.go:663:15: undefined: fmt.Error
W0622 05:05:32.565] vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller.go:669:15: undefined: fmt.Error
W0622 05:05:39.140] E0622 05:05:39.140134    6139 ssh.go:116] failed to run SSH command: out: ssh: connect to host 34.127.78.140 port 22: Connection timed out

W0622 05:05:39.140] , err: exit status 255
W0622 05:05:59.531] I0622 05:05:59.530886    6139 ssh.go:113] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@34.127.78.140 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e docker -e containerd -e crio']
W0622 05:06:20.059] !!! [0622 05:06:20] Call tree:
W0622 05:06:20.062] !!! [0622 05:06:20]  1: /go/src/k8s.io/kubernetes/hack/lib/golang.sh:731 kube::golang::build_some_binaries(...)
W0622 05:06:20.066] !!! [0622 05:06:20]  2: /go/src/k8s.io/kubernetes/hack/lib/golang.sh:875 kube::golang::build_binaries_for_platform(...)
W0622 05:06:20.069] !!! [0622 05:06:20]  3: hack/make-rules/build.sh:27 kube::golang::build_binaries(...)
W0622 05:06:20.073] !!! [0622 05:06:20] Call tree:
W0622 05:06:20.075] !!! [0622 05:06:20]  1: hack/make-rules/build.sh:27 kube::golang::build_binaries(...)
W0622 05:06:20.081] !!! [0622 05:06:20] Call tree:
W0622 05:06:20.083] !!! [0622 05:06:20]  1: hack/make-rules/build.sh:27 kube::golang::build_binaries(...)
W0622 05:06:20.084] make: *** [Makefile:92: all] Error 1
I0622 05:06:20.188] make: Leaving directory '/go/src/k8s.io/kubernetes'
W0622 05:06:20.653] I0622 05:06:20.652955    6139 run_remote.go:856] Deleting instance "tmp-node-e2e-a5d22fb6-fedora-coreos-34-20210529-3-0-gcp-x86-64"
I0622 05:06:21.129] 
I0622 05:06:21.129] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0622 05:06:21.129] >                              START TEST                                >
I0622 05:06:21.129] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0622 05:06:21.130] Start Test Suite on Host 
I0622 05:06:21.130] 
I0622 05:06:21.130] Failure Finished Test Suite on Host 
I0622 05:06:21.130] unable to create test archive: failed to setup test package "/tmp/node-e2e-archive096581908": failed to build the dependencies: failed to build go packages exit status 2
I0622 05:06:21.130] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0622 05:06:21.130] <                              FINISH TEST                               <
I0622 05:06:21.130] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0622 05:06:21.130] 
I0622 05:06:21.130] Failure: 1 errors encountered.
W0622 05:06:21.230] exit status 1
... skipping 11 lines ...
I0622 05:06:21.443] Sourcing kube-util.sh
I0622 05:06:21.443] Detecting project
I0622 05:06:21.444] Project: k8s-jkns-pr-node-e2e
I0622 05:06:21.444] Network Project: k8s-jkns-pr-node-e2e
I0622 05:06:21.444] Zone: us-west1-b
I0622 05:06:21.444] Dumping logs from master locally to '/workspace/_artifacts'
W0622 05:06:22.136] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W0622 05:06:22.136]  - The resource 'projects/k8s-jkns-pr-node-e2e/regions/us-west1/addresses/bootstrap-e2e-master-ip' was not found
W0622 05:06:22.136] 
W0622 05:06:22.306] Could not detect Kubernetes master node.  Make sure you've launched a cluster with 'kube-up.sh'
I0622 05:06:22.407] Master not detected. Is the cluster up?
I0622 05:06:22.407] Dumping logs from nodes locally to '/workspace/_artifacts'
I0622 05:06:22.407] Detecting nodes in the cluster
... skipping 4 lines ...
W0622 05:06:26.320] NODE_NAMES=
W0622 05:06:26.323] 2021/06/22 05:06:26 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 5.048480453s
W0622 05:06:26.323] 2021/06/22 05:06:26 node.go:53: Noop - Node Down()
W0622 05:06:26.323] 2021/06/22 05:06:26 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0622 05:06:26.324] 2021/06/22 05:06:26 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0622 05:06:26.546] 2021/06/22 05:06:26 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 222.566695ms
W0622 05:06:26.547] 2021/06/22 05:06:26 main.go:327: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=core --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]|\[NodeFeature:.+\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --non-masquerade-cidr=0.0.0.0/0" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=3h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1.yaml: exit status 1]
W0622 05:06:26.552] Traceback (most recent call last):
W0622 05:06:26.552]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 723, in <module>
W0622 05:06:26.552]     main(parse_args())
W0622 05:06:26.552]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 569, in main
W0622 05:06:26.552]     mode.start(runner_args)
W0622 05:06:26.553]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 228, in start
W0622 05:06:26.553]     check_env(env, self.command, *args)
W0622 05:06:26.553]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0622 05:06:26.553]     subprocess.check_call(cmd, env=env)
W0622 05:06:26.553]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0622 05:06:26.553]     raise CalledProcessError(retcode, cmd)
W0622 05:06:26.554] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--gcp-project=k8s-jkns-pr-node-e2e', '--gcp-zone=us-west1-b', '--node-test-args=--container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --non-masquerade-cidr=0.0.0.0/0" --extra-log="{\\"name\\": \\"crio.log\\", \\"journalctl\\": [\\"-u\\", \\"crio\\"]}"', '--node-tests=true', '--test_args=--nodes=8 --focus="\\[NodeConformance\\]|\\[NodeFeature:.+\\]" --skip="\\[Flaky\\]|\\[Slow\\]|\\[Serial\\]"', '--timeout=180m', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1.yaml')' returned non-zero exit status 1
E0622 05:06:26.558] Command failed
I0622 05:06:26.559] process 561 exited with code 1 after 4.0m
E0622 05:06:26.559] FAIL: pull-kubernetes-node-crio-e2e
I0622 05:06:26.559] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0622 05:06:27.145] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0622 05:06:27.286] process 24371 exited with code 0 after 0.0m
I0622 05:06:27.286] Call:  gcloud config get-value account
I0622 05:06:27.804] process 24384 exited with code 0 after 0.0m
I0622 05:06:27.805] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0622 05:06:27.805] Upload result and artifacts...
I0622 05:06:27.805] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/103079/pull-kubernetes-node-crio-e2e/1407201983751786496
I0622 05:06:27.806] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/103079/pull-kubernetes-node-crio-e2e/1407201983751786496/artifacts
W0622 05:06:28.790] CommandException: One or more URLs matched no objects.
E0622 05:06:29.126] Command failed
I0622 05:06:29.126] process 24397 exited with code 1 after 0.0m
W0622 05:06:29.126] Remote dir gs://kubernetes-jenkins/pr-logs/pull/103079/pull-kubernetes-node-crio-e2e/1407201983751786496/artifacts not exist yet
I0622 05:06:29.126] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/103079/pull-kubernetes-node-crio-e2e/1407201983751786496/artifacts
I0622 05:06:31.060] process 24544 exited with code 0 after 0.0m
I0622 05:06:31.061] Call:  git rev-parse HEAD
I0622 05:06:31.066] process 25088 exited with code 0 after 0.0m
... skipping 20 lines ...