PR: saschagrunert: Refactor event creating into own method
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-08-12 12:57
Elapsed: 6m56s
Revision: 3c8c90a5899e676d13df4ddf30fbf165e045fc74
Refs: 92

No Test Failures!


Error lines from build-log.txt

... skipping 106 lines ...
localAPIEndpoint:
  advertiseAddress: ""
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: ""
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: ""
... skipping 4 lines ...
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: ""
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 29 lines ...
localAPIEndpoint:
  advertiseAddress: ""
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: ""
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: so-e2e-1597237074-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: ""
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 29 lines ...
localAPIEndpoint:
  advertiseAddress: ""
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: ""
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: so-e2e-1597237074-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: ""
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 44 lines ...
I0812 12:58:50.454084     192 checks.go:376] validating the presence of executable ebtables
I0812 12:58:50.454266     192 checks.go:376] validating the presence of executable ethtool
I0812 12:58:50.454492     192 checks.go:376] validating the presence of executable socat
I0812 12:58:50.454633     192 checks.go:376] validating the presence of executable tc
I0812 12:58:50.454745     192 checks.go:376] validating the presence of executable touch
I0812 12:58:50.454829     192 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
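The warning above is benign on GKE images: the kernel was built without a loadable "configs" module, so `modprobe configs` fails and kubeadm cannot read the kernel config; it warns and continues. A minimal sketch of that lookup order (illustrative only, not kubeadm's actual implementation; the helper name `findKernelConfig` is invented here):

```go
package main

import (
	"fmt"
	"os"
)

// findKernelConfig checks the usual places a kernel config can be read from.
// The existence check is injected so the lookup order stays testable.
func findKernelConfig(release string, exists func(string) bool) (string, bool) {
	candidates := []string{
		"/proc/config.gz",         // present when the kernel exposes its config (CONFIG_IKCONFIG_PROC)
		"/boot/config-" + release, // shipped by most distro kernel packages
	}
	for _, p := range candidates {
		if exists(p) {
			return p, true
		}
	}
	return "", false
}

func main() {
	exists := func(p string) bool { _, err := os.Stat(p); return err == nil }
	if p, ok := findKernelConfig("4.15.0-1044-gke", exists); ok {
		fmt.Println("kernel config found at", p)
	} else {
		fmt.Println("no kernel config found; the verification warns and continues")
	}
}
```

On the GKE node above, neither location yields a config, hence the warning rather than a hard preflight failure.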
I0812 12:58:50.464899     192 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0812 12:58:50.466177     192 checks.go:618] validating kubelet version
I0812 12:58:50.565004     192 checks.go:128] validating if the service is enabled and active
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
... skipping 77 lines ...
I0812 12:58:56.332957     192 round_trippers.go:443] GET https://so-e2e-1597237074-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0812 12:58:56.833029     192 round_trippers.go:443] GET https://so-e2e-1597237074-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0812 12:58:57.332665     192 round_trippers.go:443] GET https://so-e2e-1597237074-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0812 12:58:57.832794     192 round_trippers.go:443] GET https://so-e2e-1597237074-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0812 12:58:58.332918     192 round_trippers.go:443] GET https://so-e2e-1597237074-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0812 12:58:58.832471     192 round_trippers.go:443] GET https://so-e2e-1597237074-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0812 12:59:04.497385     192 round_trippers.go:443] GET https://so-e2e-1597237074-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 5166 milliseconds
I0812 12:59:04.834126     192 round_trippers.go:443] GET https://so-e2e-1597237074-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 3 milliseconds
I0812 12:59:05.334344     192 round_trippers.go:443] GET https://so-e2e-1597237074-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 3 milliseconds
I0812 12:59:05.838626     192 round_trippers.go:443] GET https://so-e2e-1597237074-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 7 milliseconds
I0812 12:59:06.334058     192 round_trippers.go:443] GET https://so-e2e-1597237074-control-plane:6443/healthz?timeout=10s 200 OK in 3 milliseconds
I0812 12:59:06.334233     192 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[apiclient] All control plane components are healthy after 12.008202 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0812 12:59:06.342020     192 round_trippers.go:443] POST https://so-e2e-1597237074-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 6 milliseconds
I0812 12:59:06.347736     192 round_trippers.go:443] POST https://so-e2e-1597237074-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 4 milliseconds
... skipping 108 lines ...
I0812 12:59:12.917456     419 checks.go:376] validating the presence of executable ebtables
I0812 12:59:12.917557     419 checks.go:376] validating the presence of executable ethtool
I0812 12:59:12.917586     419 checks.go:376] validating the presence of executable socat
I0812 12:59:12.917660     419 checks.go:376] validating the presence of executable tc
I0812 12:59:12.917698     419 checks.go:376] validating the presence of executable touch
I0812 12:59:12.917740     419 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0812 12:59:12.928896     419 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 106 lines ...
I0812 12:59:12.918661     408 checks.go:376] validating the presence of executable ebtables
I0812 12:59:12.918706     408 checks.go:376] validating the presence of executable ethtool
I0812 12:59:12.918738     408 checks.go:376] validating the presence of executable socat
I0812 12:59:12.918783     408 checks.go:376] validating the presence of executable tc
I0812 12:59:12.918818     408 checks.go:376] validating the presence of executable touch
I0812 12:59:12.918855     408 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0812 12:59:12.926010     408 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 351 lines ...
pod/test-pod created
I0812 13:02:45.765964    6149 suite_test.go:164]  "msg"="Waiting for test pod to be ready"  
time="2020-08-12T13:02:45Z" level=info msg="+ /usr/local/bin/kubectl wait --for condition=ready pod --all"
pod/test-pod condition met
I0812 13:03:02.290360    6149 suite_test.go:164]  "msg"="Testing that `rmdir` is not possible inside the pod"  
time="2020-08-12T13:03:02Z" level=info msg="+ /usr/local/bin/kubectl exec test-pod -- rmdir /home"
rmdir: failed to remove '/home': Operation not permitted
command terminated with exit code 1
time="2020-08-12T13:03:03Z" level=info msg="+ /usr/local/bin/kubectl delete -f examples/pod.yaml"
pod "test-pod" deleted
I0812 13:03:15.472882    6149 suite_test.go:164]  "msg"="> Running testcase: Re-deploy the operator"  
I0812 13:03:15.472916    6149 suite_test.go:164]  "msg"="Cleaning up operator"  
time="2020-08-12T13:03:15Z" level=info msg="+ /usr/local/bin/kubectl delete -f deploy/operator.yaml"
... skipping 45 lines ...
42s         Normal    Pulling                   pod/test-pod                           Pulling image "nginx:1.19.1"
35s         Normal    Pulled                    pod/test-pod                           Successfully pulled image "nginx:1.19.1"
34s         Normal    Created                   pod/test-pod                           Created container test-container
32s         Normal    Started                   pod/test-pod                           Started container test-container
29s         Normal    Killing                   pod/test-pod                           Stopping container test-container
    tc_invalid_profile_test.go:58: 
        	Error Trace:	tc_invalid_profile_test.go:58
        	            				e2e_test.go:60
        	Error:      	"LAST SEEN   TYPE      REASON                    OBJECT                                 MESSAGE\n0s          Warning   cannot validate profile   configmap/invalid-profile              decoding seccomp profile: json: cannot unmarshal bool into Go struct field Seccomp.defaultAction of type seccomp.Action\n0s          Warning   cannot validate profile   configmap/invalid-profile              decoding seccomp profile: json: cannot unmarshal bool into Go struct field Seccomp.defaultAction of type seccomp.Action\n4m25s       Normal    Starting                  node/so-e2e-1597237074-control-plane   Starting kubelet.\n4m25s       Normal    NodeHasSufficientMemory   node/so-e2e-1597237074-control-plane   Node so-e2e-1597237074-control-plane status is now: NodeHasSufficientMemory\n4m25s       Normal    NodeHasNoDiskPressure     node/so-e2e-1597237074-control-plane   Node so-e2e-1597237074-control-plane status is now: NodeHasNoDiskPressure\n4m25s       Normal    NodeHasSufficientPID      node/so-e2e-1597237074-control-plane   Node so-e2e-1597237074-control-plane status is now: NodeHasSufficientPID\n4m25s       Normal    NodeAllocatableEnforced   node/so-e2e-1597237074-control-plane   Updated Node Allocatable limit across pods\n4m10s       Normal    RegisteredNode            node/so-e2e-1597237074-control-plane   Node so-e2e-1597237074-control-plane event: Registered Node so-e2e-1597237074-control-plane in Controller\n4m7s        Normal    Starting                  node/so-e2e-1597237074-control-plane   Starting kube-proxy.\n3m55s       Normal    NodeReady                 node/so-e2e-1597237074-control-plane   Node so-e2e-1597237074-control-plane status is now: NodeReady\n3m54s       Normal    NodeHasSufficientMemory   node/so-e2e-1597237074-worker          Node so-e2e-1597237074-worker status is now: NodeHasSufficientMemory\n3m54s       Normal    NodeHasNoDiskPressure     node/so-e2e-1597237074-worker          Node so-e2e-1597237074-worker status is now: NodeHasNoDiskPressure\n3m50s       Normal    RegisteredNode            node/so-e2e-1597237074-worker          Node so-e2e-1597237074-worker event: Registered Node so-e2e-1597237074-worker in Controller\n3m50s       Normal    Starting                  node/so-e2e-1597237074-worker          Starting kube-proxy.\n3m54s       Normal    NodeHasSufficientMemory   node/so-e2e-1597237074-worker2         Node so-e2e-1597237074-worker2 status is now: NodeHasSufficientMemory\n3m54s       Normal    NodeHasNoDiskPressure     node/so-e2e-1597237074-worker2         Node so-e2e-1597237074-worker2 status is now: NodeHasNoDiskPressure\n3m50s       Normal    RegisteredNode            node/so-e2e-1597237074-worker2         Node so-e2e-1597237074-worker2 event: Registered Node so-e2e-1597237074-worker2 in Controller\n3m47s       Normal    Starting                  node/so-e2e-1597237074-worker2         Starting kube-proxy.\n48s         Normal    Scheduled                 pod/test-pod                           Successfully assigned default/test-pod to so-e2e-1597237074-worker\n42s         Normal    Pulling                   pod/test-pod                           Pulling image \"nginx:1.19.1\"\n35s         Normal    Pulled                    pod/test-pod                           Successfully pulled image \"nginx:1.19.1\"\n34s         Normal    Created                   pod/test-pod                           Created container test-container\n32s         Normal    Started                   pod/test-pod                           Started container test-container\n29s         Normal    Killing                   pod/test-pod                           Stopping container test-container" does not contain "cannot validate profile profile-invalid.json"
        	Test:       	TestSuite/TestSeccompOperator
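The actual failure is the assertion above: the events do contain "cannot validate profile" warnings, but not the expected message mentioning `profile-invalid.json`. The underlying decode error is easy to reproduce: the invalid ConfigMap sets `defaultAction` to a JSON bool, which cannot decode into a string-typed field. A minimal reproduction (the struct and field names mirror the log; the operator's real `seccomp.Seccomp` type is not shown here):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Seccomp sketches just the field involved in the failure; in the operator,
// DefaultAction is a string-like seccomp.Action rather than a plain string.
type Seccomp struct {
	DefaultAction string `json:"defaultAction"`
}

// decodeProfile attempts to decode a seccomp profile from JSON.
func decodeProfile(data []byte) error {
	var s Seccomp
	return json.Unmarshal(data, &s)
}

func main() {
	err := decodeProfile([]byte(`{"defaultAction": true}`))
	fmt.Println(err) // prints: json: cannot unmarshal bool into Go struct field Seccomp.defaultAction of type string
}
```

This matches the `cannot unmarshal bool into Go struct field Seccomp.defaultAction` events in the output; the test failed only because the event message lacked the profile file name it expected.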
time="2020-08-12T13:03:34Z" level=info msg="+ /usr/local/bin/kubectl delete -f /tmp/invalid-profile-433131258"
configmap "invalid-profile" deleted
time="2020-08-12T13:03:34Z" level=info msg="+ /usr/bin/git checkout deploy/operator.yaml"
I0812 13:03:34.183405    6149 suite_test.go:164]  "msg"="Destroying cluster"  
time="2020-08-12T13:03:34Z" level=info msg="+ /home/prow/go/src/github.com/kubernetes-sigs/seccomp-operator/build/kind delete cluster --name=so-e2e-1597237074 -v=3"
Deleting cluster "so-e2e-1597237074" ...
--- FAIL: TestSuite (349.45s)
    --- FAIL: TestSuite/TestSeccompOperator (348.83s)
FAIL
FAIL	sigs.k8s.io/seccomp-operator/test	349.483s
FAIL
make: *** [Makefile:134: test-e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...