Result: Not Finished
Started: 2022-08-15 22:26
Revision:

Test Failures


task-10-reset 3.75s

exit status 1
				from junit_runner.xml



42 Passed Tests

6 Skipped Tests

Error lines from build-log.txt

... skipping 298 lines ...
time="22:29:07" level=debug msg="generated config:\napiServer:\n  certSANs:\n  - localhost\n  - 172.17.0.2\napiVersion: kubeadm.k8s.io/v1beta3\nclusterName: kinder-discovery\ncontrolPlaneEndpoint: 172.17.0.2:6443\ncontrollerManager:\n  extraArgs:\n    enable-hostpath-provisioner: \"true\"\nkind: ClusterConfiguration\nkubernetesVersion: v1.26.0-alpha.0.18+d5fdf3135e7c99\nmetadata:\n  name: config\nnetworking:\n  podSubnet: 192.168.0.0/16\n  serviceSubnet: \"\"\nscheduler:\n  extraArgs: null\n---\napiVersion: kubeadm.k8s.io/v1beta3\nbootstrapTokens:\n- token: abcdef.0123456789abcdef\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 172.17.0.2\n  bindPort: 6443\nmetadata:\n  name: config\nnodeRegistration:\n  criSocket: /run/containerd/containerd.sock\n  kubeletExtraArgs:\n    node-ip: 172.17.0.2\npatches:\n  directory: /kinder/patches\n---\napiVersion: kubelet.config.k8s.io/v1beta1\ncgroupDriver: systemd\nevictionHard:\n  imagefs.available: 0%\n  nodefs.available: 0%\n  nodefs.inodesFree: 0%\nfailSwapOn: false\nimageGCHighThresholdPercent: 100\nkind: KubeletConfiguration\nmetadata:\n  name: config\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nconntrack:\n  maxPerCore: 0\nkind: KubeProxyConfiguration\nmetadata:\n  name: config\n"
time="22:29:07" level=debug msg="Running: docker cp /tmp/kinder-discovery-control-plane-1-712991737 kinder-discovery-control-plane-1:/kind/kubeadm.conf"

kinder-discovery-control-plane-1:$ kubeadm init --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables --config=/kind/kubeadm.conf --v=6
time="22:29:07" level=debug msg="Running: docker exec kinder-discovery-control-plane-1 kubeadm init --ignore-preflight-errors=Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables --config=/kind/kubeadm.conf --v=6"
I0815 22:29:07.967354     175 initconfiguration.go:254] loading configuration from "/kind/kubeadm.conf"
W0815 22:29:07.969059     175 initconfiguration.go:305] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeproxy.config.k8s.io", Version:"v1alpha1", Kind:"KubeProxyConfiguration"}: strict decoding error: unknown field "metadata"
W0815 22:29:07.969758     175 initconfiguration.go:305] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta3", Kind:"ClusterConfiguration"}: strict decoding error: unknown field "metadata"
W0815 22:29:07.970467     175 initconfiguration.go:305] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta3", Kind:"InitConfiguration"}: strict decoding error: unknown field "metadata"
W0815 22:29:07.971452     175 initconfiguration.go:305] error unmarshaling configuration schema.GroupVersionKind{Group:"kubelet.config.k8s.io", Version:"v1beta1", Kind:"KubeletConfiguration"}: strict decoding error: unknown field "metadata"
W0815 22:29:07.971607     175 configset.go:177] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeproxy.config.k8s.io", Version:"v1alpha1", Kind:"KubeProxyConfiguration"}: strict decoding error: unknown field "metadata"
W0815 22:29:07.972119     175 configset.go:177] error unmarshaling configuration schema.GroupVersionKind{Group:"kubelet.config.k8s.io", Version:"v1beta1", Kind:"KubeletConfiguration"}: strict decoding error: unknown field "metadata"
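The 'unknown field "metadata"' warnings above come from the metadata: stanza that kinder appends to every document in the generated config shown earlier (presumably so its /kinder/patches machinery can address documents by name); kubeadm's strict decoder does not know that field on these types, so it warns and continues. A document the strict decoder would accept cleanly simply drops that stanza, e.g. a minimal sketch of the kube-proxy piece:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
conntrack:
  maxPerCore: 0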
W0815 22:29:07.972312     175 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I0815 22:29:07.980938     175 common.go:128] WARNING: tolerating control plane version v1.26.0-alpha.0.18+d5fdf3135e7c99 as a pre-release version
[init] Using Kubernetes version: v1.26.0-alpha.0.18+d5fdf3135e7c99
[preflight] Running pre-flight checks
I0815 22:29:07.981544     175 checks.go:568] validating Kubernetes and kubeadm version
I0815 22:29:07.981615     175 checks.go:168] validating if the firewall is enabled and active
... skipping 20 lines ...
I0815 22:29:08.005095     175 checks.go:370] validating the presence of executable ebtables
I0815 22:29:08.005138     175 checks.go:370] validating the presence of executable ethtool
I0815 22:29:08.005181     175 checks.go:370] validating the presence of executable socat
I0815 22:29:08.005222     175 checks.go:370] validating the presence of executable tc
I0815 22:29:08.005258     175 checks.go:370] validating the presence of executable touch
I0815 22:29:08.005371     175 checks.go:516] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-1068-gke\n", err: exit status 1
I0815 22:29:08.010466     175 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.4.0-1068-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 102 lines ...
I0815 22:29:18.907369     175 round_trippers.go:553] GET https://172.17.0.2:6443/healthz?timeout=10s  in 0 milliseconds
I0815 22:29:19.407447     175 round_trippers.go:553] GET https://172.17.0.2:6443/healthz?timeout=10s  in 0 milliseconds
I0815 22:29:19.907462     175 round_trippers.go:553] GET https://172.17.0.2:6443/healthz?timeout=10s  in 0 milliseconds
I0815 22:29:20.407517     175 round_trippers.go:553] GET https://172.17.0.2:6443/healthz?timeout=10s  in 0 milliseconds
I0815 22:29:20.907603     175 round_trippers.go:553] GET https://172.17.0.2:6443/healthz?timeout=10s  in 0 milliseconds
I0815 22:29:21.407610     175 round_trippers.go:553] GET https://172.17.0.2:6443/healthz?timeout=10s  in 0 milliseconds
I0815 22:29:25.228992     175 round_trippers.go:553] GET https://172.17.0.2:6443/healthz?timeout=10s 500 Internal Server Error in 3321 milliseconds
I0815 22:29:25.409479     175 round_trippers.go:553] GET https://172.17.0.2:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
I0815 22:29:25.909475     175 round_trippers.go:553] GET https://172.17.0.2:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I0815 22:29:26.409996     175 round_trippers.go:553] GET https://172.17.0.2:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I0815 22:29:26.910242     175 round_trippers.go:553] GET https://172.17.0.2:6443/healthz?timeout=10s 200 OK in 2 milliseconds
I0815 22:29:26.910349     175 uploadconfig.go:110] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[apiclient] All control plane components are healthy after 14.504110 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0815 22:29:26.915407     175 round_trippers.go:553] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 3 milliseconds
I0815 22:29:26.919185     175 round_trippers.go:553] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 3 milliseconds
... skipping 1480 lines ...
    [ReportAfterSuite] TOP-LEVEL
      test/e2e/framework/test_context.go:559
  << End Captured GinkgoWriter Output
------------------------------

Ran 31 of 37 Specs in 0.258 seconds
SUCCESS! -- 31 Passed | 0 Failed | 0 Pending | 6 Skipped
PASS

Ginkgo ran 1 suite in 316.325408ms
Test Suite Passed
[--skip=\[copy-certs\] /home/prow/go/src/k8s.io/kubernetes/_output/bin/e2e_kubeadm.test -- --report-dir=/logs/artifacts --kubeconfig=/root/.kube/kind-config-kinder-discovery]
 completed!

# task-09-get-logs
kinder export logs --loglevel=debug --name=kinder-discovery /logs/artifacts

Error: [command "docker exec --privileged kinder-discovery-worker-1 sh -c 'tar --hard-dereference -C /var/log/ -chf - . || (r=$?; [ $r -eq 1 ] || exit $r)'" failed with error: exit status 1, [command "docker exec --privileged kinder-discovery-worker-1 journalctl --no-pager -u containerd.service" failed with error: exit status 1, command "docker exec --privileged kinder-discovery-worker-1 journalctl --no-pager -u kubelet.service" failed with error: exit status 1, command "docker exec --privileged kinder-discovery-worker-1 journalctl --no-pager" failed with error: exit status 1, command "docker exec --privileged kinder-discovery-worker-1 cat /kind/version" failed with error: exit status 1]]
 completed!

# task-10-reset
kinder do kubeadm-reset --name=kinder-discovery --loglevel=debug --kubeadm-verbosity=6

time="22:34:03" level=debug msg="Running: docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Label \"io.k8s.sigs.kind.cluster\"}}"
... skipping 64 lines ...

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
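As the reset output above notes, the kubeconfig written during init is left behind and cleaning it up is a manual step, e.g. (sketch, using the path from the message):

rm -f "$HOME/.kube/config"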

kinder-discovery-worker-1:$ kubeadm reset --force --v=6
time="22:34:07" level=debug msg="Running: docker exec kinder-discovery-worker-1 kubeadm reset --force --v=6"
Error response from daemon: Container d077002a46f5f2e7d3d38cdac8ecc9a7e4c7a9f39273932a563fd0e2d7691553 is not running
Error: failed to exec action kubeadm-reset: exit status 1
 exit status 1

# task-11-delete
kinder delete cluster --name=kinder-discovery --loglevel=debug

Deleting cluster "kinder-discovery" ...
 completed!

Ran 12 of 12 tasks in 0.000 seconds
FAIL! -- 11 tasks Passed | 1 Failed | 0 Skipped

see junit-runner.xml and task log files for more details

Error: failed executing the workflow
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...