PR: hasheddan: Update eventing and reconcile logic
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-08-13 21:10
Elapsed: 7m29s
Revision: bcda0f608026de7e7e175ca352386ddfec03d87a
Refs: 101

No Test Failures!


Error lines from build-log.txt

... skipping 106 lines ...
localAPIEndpoint:
  advertiseAddress: ""
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: ""
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: ""
... skipping 4 lines ...
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: ""
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 29 lines ...
localAPIEndpoint:
  advertiseAddress: ""
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: ""
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: so-e2e-1597353116-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: ""
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 29 lines ...
localAPIEndpoint:
  advertiseAddress: ""
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: ""
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: so-e2e-1597353116-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: ""
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 44 lines ...
I0813 21:13:07.854199     205 checks.go:376] validating the presence of executable ebtables
I0813 21:13:07.854251     205 checks.go:376] validating the presence of executable ethtool
I0813 21:13:07.854291     205 checks.go:376] validating the presence of executable socat
I0813 21:13:07.854350     205 checks.go:376] validating the presence of executable tc
I0813 21:13:07.854387     205 checks.go:376] validating the presence of executable touch
I0813 21:13:07.854739     205 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0813 21:13:07.865359     205 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0813 21:13:07.868825     205 checks.go:618] validating kubelet version
I0813 21:13:07.962843     205 checks.go:128] validating if the service is enabled and active
I0813 21:13:07.979640     205 checks.go:201] validating availability of port 10250
I0813 21:13:07.979745     205 checks.go:201] validating availability of port 2379
I0813 21:13:07.979786     205 checks.go:201] validating availability of port 2380
... skipping 80 lines ...
I0813 21:13:16.200284     205 round_trippers.go:443] GET https://so-e2e-1597353116-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0813 21:13:16.701221     205 round_trippers.go:443] GET https://so-e2e-1597353116-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0813 21:13:17.202977     205 round_trippers.go:443] GET https://so-e2e-1597353116-control-plane:6443/healthz?timeout=10s  in 4 milliseconds
I0813 21:13:17.701597     205 round_trippers.go:443] GET https://so-e2e-1597353116-control-plane:6443/healthz?timeout=10s  in 2 milliseconds
I0813 21:13:18.201256     205 round_trippers.go:443] GET https://so-e2e-1597353116-control-plane:6443/healthz?timeout=10s  in 2 milliseconds
I0813 21:13:18.706298     205 round_trippers.go:443] GET https://so-e2e-1597353116-control-plane:6443/healthz?timeout=10s  in 7 milliseconds
I0813 21:13:25.605744     205 round_trippers.go:443] GET https://so-e2e-1597353116-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 6406 milliseconds
I0813 21:13:25.723297     205 round_trippers.go:443] GET https://so-e2e-1597353116-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 20 milliseconds
I0813 21:13:26.202027     205 round_trippers.go:443] GET https://so-e2e-1597353116-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 3 milliseconds
I0813 21:13:26.702011     205 round_trippers.go:443] GET https://so-e2e-1597353116-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I0813 21:13:27.201519     205 round_trippers.go:443] GET https://so-e2e-1597353116-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I0813 21:13:27.702189     205 round_trippers.go:443] GET https://so-e2e-1597353116-control-plane:6443/healthz?timeout=10s 200 OK in 2 milliseconds
I0813 21:13:27.702849     205 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[apiclient] All control plane components are healthy after 15.006338 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0813 21:13:27.710215     205 round_trippers.go:443] POST https://so-e2e-1597353116-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 5 milliseconds
I0813 21:13:27.717981     205 round_trippers.go:443] POST https://so-e2e-1597353116-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 6 milliseconds
... skipping 108 lines ...
I0813 21:13:35.302040     476 checks.go:376] validating the presence of executable ebtables
I0813 21:13:35.302076     476 checks.go:376] validating the presence of executable ethtool
I0813 21:13:35.302212     476 checks.go:376] validating the presence of executable socat
I0813 21:13:35.302286     476 checks.go:376] validating the presence of executable tc
I0813 21:13:35.302325     476 checks.go:376] validating the presence of executable touch
I0813 21:13:35.302384     476 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0813 21:13:35.313342     476 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 107 lines ...
I0813 21:13:35.297920     459 checks.go:376] validating the presence of executable ebtables
I0813 21:13:35.297966     459 checks.go:376] validating the presence of executable ethtool
I0813 21:13:35.298134     459 checks.go:376] validating the presence of executable socat
I0813 21:13:35.298207     459 checks.go:376] validating the presence of executable tc
I0813 21:13:35.298233     459 checks.go:376] validating the presence of executable touch
I0813 21:13:35.298294     459 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0813 21:13:35.308096     459 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 356 lines ...
pod/test-pod created
I0813 21:17:09.289536    6187 suite_test.go:164]  "msg"="Waiting for test pod to be ready"  
time="2020-08-13T21:17:09Z" level=info msg="+ /usr/local/bin/kubectl wait --for condition=ready pod --all"
pod/test-pod condition met
I0813 21:17:17.776348    6187 suite_test.go:164]  "msg"="Testing that `rmdir` is not possible inside the pod"  
time="2020-08-13T21:17:17Z" level=info msg="+ /usr/local/bin/kubectl exec test-pod -- rmdir /home"
rmdir: failed to remove '/home': Operation not permitted
command terminated with exit code 1
time="2020-08-13T21:17:18Z" level=info msg="+ /usr/local/bin/kubectl delete -f examples/pod.yaml"
pod "test-pod" deleted
I0813 21:17:28.255729    6187 suite_test.go:164]  "msg"="> Running testcase: Re-deploy the operator"  
I0813 21:17:28.255776    6187 suite_test.go:164]  "msg"="Cleaning up operator"  
time="2020-08-13T21:17:28Z" level=info msg="+ /usr/local/bin/kubectl delete -f deploy/operator.yaml"
... skipping 57 lines ...
26s         Normal    Started                   pod/test-pod                           Started container test-container
23s         Normal    Killing                   pod/test-pod                           Stopping container test-container
38s         Normal    SavedSeccompProfile       configmap/test-profile                 Successfully saved profile to disk
38s         Normal    SavedSeccompProfile       configmap/test-profile                 Successfully saved profile to disk
38s         Normal    SavedSeccompProfile       configmap/test-profile                 Successfully saved profile to disk
    tc_invalid_profile_test.go:72: 
        	Error Trace:	tc_invalid_profile_test.go:72
        	            				e2e_test.go:63
        	Error:      	"LAST SEEN   TYPE      REASON                    OBJECT                                 MESSAGE\n0s          Warning   InvalidSeccompProfile     configmap/invalid-profile              decoding seccomp profile: json: cannot unmarshal bool into Go struct field Seccomp.defaultAction of type seccomp.Action\n0s          Warning   InvalidSeccompProfile     configmap/invalid-profile              decoding seccomp profile: json: cannot unmarshal bool into Go struct field Seccomp.defaultAction of type seccomp.Action\n0s          Warning   InvalidSeccompProfile     configmap/invalid-profile              decoding seccomp profile: json: cannot unmarshal bool into Go struct field Seccomp.defaultAction of type seccomp.Action\n4m29s       Normal    NodeHasSufficientMemory   node/so-e2e-1597353116-control-plane   Node so-e2e-1597353116-control-plane status is now: NodeHasSufficientMemory\n4m29s       Normal    NodeHasNoDiskPressure     node/so-e2e-1597353116-control-plane   Node so-e2e-1597353116-control-plane status is now: NodeHasNoDiskPressure\n4m29s       Normal    NodeHasSufficientPID      node/so-e2e-1597353116-control-plane   Node so-e2e-1597353116-control-plane status is now: NodeHasSufficientPID\n4m13s       Normal    Starting                  node/so-e2e-1597353116-control-plane   Starting kubelet.\n4m13s       Normal    NodeHasSufficientMemory   node/so-e2e-1597353116-control-plane   Node so-e2e-1597353116-control-plane status is now: NodeHasSufficientMemory\n4m13s       Normal    NodeHasNoDiskPressure     node/so-e2e-1597353116-control-plane   Node so-e2e-1597353116-control-plane status is now: NodeHasNoDiskPressure\n4m13s       Normal    NodeHasSufficientPID      node/so-e2e-1597353116-control-plane   Node so-e2e-1597353116-control-plane status is now: NodeHasSufficientPID\n4m13s       Normal    NodeAllocatableEnforced   node/so-e2e-1597353116-control-plane   Updated Node Allocatable limit across pods\n3m57s       Normal    RegisteredNode            node/so-e2e-1597353116-control-plane   Node so-e2e-1597353116-control-plane event: Registered Node so-e2e-1597353116-control-plane in Controller\n3m54s       Normal    Starting                  node/so-e2e-1597353116-control-plane   Starting kube-proxy.\n3m43s       Normal    NodeReady                 node/so-e2e-1597353116-control-plane   Node so-e2e-1597353116-control-plane status is now: NodeReady\n3m40s       Normal    NodeHasSufficientMemory   node/so-e2e-1597353116-worker          Node so-e2e-1597353116-worker status is now: NodeHasSufficientMemory\n3m40s       Normal    NodeHasNoDiskPressure     node/so-e2e-1597353116-worker          Node so-e2e-1597353116-worker status is now: NodeHasNoDiskPressure\n3m37s       Normal    RegisteredNode            node/so-e2e-1597353116-worker          Node so-e2e-1597353116-worker event: Registered Node so-e2e-1597353116-worker in Controller\n3m31s       Normal    Starting                  node/so-e2e-1597353116-worker          Starting kube-proxy.\n3m40s       Normal    NodeHasSufficientMemory   node/so-e2e-1597353116-worker2         Node so-e2e-1597353116-worker2 status is now: NodeHasSufficientMemory\n3m40s       Normal    NodeHasNoDiskPressure     node/so-e2e-1597353116-worker2         Node so-e2e-1597353116-worker2 status is now: NodeHasNoDiskPressure\n3m37s       Normal    RegisteredNode            node/so-e2e-1597353116-worker2         Node so-e2e-1597353116-worker2 event: Registered Node so-e2e-1597353116-worker2 in Controller\n3m31s       Normal    Starting                  
node/so-e2e-1597353116-worker2         Starting kube-proxy.\n33s         Normal    Scheduled                 pod/test-pod                           Successfully assigned default/test-pod to so-e2e-1597353116-worker\n32s         Normal    Pulling                   pod/test-pod                           Pulling image \"nginx:1.19.1\"\n27s         Normal    Pulled                    pod/test-pod                           Successfully pulled image \"nginx:1.19.1\"\n26s         Normal    Created                   pod/test-pod                           Created container test-container\n26s         Normal    Started                   pod/test-pod                           Started container test-container\n23s         Normal    Killing                   pod/test-pod                           Stopping container test-container\n38s         Normal    SavedSeccompProfile       configmap/test-profile                 Successfully saved profile to disk\n38s         Normal    SavedSeccompProfile       configmap/test-profile                 Successfully saved profile to disk\n38s         Normal    SavedSeccompProfile       configmap/test-profile                 Successfully saved profile to disk" does not contain "cannot validate profile profile-invalid.json"
        	Test:       	TestSuite/TestSeccompOperator
I0813 21:17:42.632302    6187 suite_test.go:164]  "msg"="Verifying node content"  
time="2020-08-13T21:17:42Z" level=info msg="+ /usr/local/bin/kubectl -n default get configmap invalid-profile -o json"
{
    "apiVersion": "v1",
    "data": {
... skipping 44 lines ...
time="2020-08-13T21:17:45Z" level=info msg="+ /usr/local/bin/kubectl delete -f /tmp/invalid-profile325478232"
configmap "invalid-profile" deleted
time="2020-08-13T21:17:45Z" level=info msg="+ /usr/bin/git checkout deploy/operator.yaml"
I0813 21:17:45.932503    6187 suite_test.go:164]  "msg"="Destroying cluster"  
time="2020-08-13T21:17:45Z" level=info msg="+ /home/prow/go/src/github.com/kubernetes-sigs/seccomp-operator/build/kind delete cluster --name=so-e2e-1597353116 -v=3"
Deleting cluster "so-e2e-1597353116" ...
--- FAIL: TestSuite (358.34s)
    --- FAIL: TestSuite/TestSeccompOperator (357.68s)
FAIL
FAIL	sigs.k8s.io/seccomp-operator/test	358.377s
FAIL
make: *** [Makefile:136: test-e2e] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...