PR: BenTheElder: fix extraPortMappings / extraMounts and add more tests
Result: FAILURE
Tests: 0 failed / 531 succeeded
Started: 2019-10-16 06:58
Elapsed: 28m18s
Revision: 07f9fbb534566d1f5e16c69dc4d2fb83e584e455
Refs: 954
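
For context on the PR title above, these are the kind Cluster config fields it refers to, shown here as a minimal illustrative sketch (the apiVersion and values are assumptions for this example, not taken from this run):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha3
nodes:
- role: control-plane
  # extraPortMappings forward a host port to a port on the node container
  extraPortMappings:
  - containerPort: 30080
    hostPort: 8080
  # extraMounts bind-mount a host path into the node container
  extraMounts:
  - hostPath: /tmp/shared
    containerPath: /shared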

No Test Failures!


Passed tests: 531

Skipped tests: 4216

Error lines from build-log.txt

... skipping 632 lines ...
localAPIEndpoint:
  advertiseAddress: "172.17.0.2"
  bindPort: 6443
nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.2"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
metadata:
  name: config

nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.2"
discovery:
  bootstrapToken:
    apiServerEndpoint: "172.17.0.3:6443"
    token: "abcdef.0123456789abcdef"
    unsafeSkipCAVerification: true
... skipping 57 lines ...
localAPIEndpoint:
  advertiseAddress: "172.17.0.4"
  bindPort: 6443
nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.4"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
metadata:
  name: config

nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.4"
discovery:
  bootstrapToken:
    apiServerEndpoint: "172.17.0.3:6443"
    token: "abcdef.0123456789abcdef"
    unsafeSkipCAVerification: true
... skipping 58 lines ...
localAPIEndpoint:
  advertiseAddress: "172.17.0.3"
  bindPort: 6443
nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.3"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
metadata:
... skipping 2 lines ...
  localAPIEndpoint:
    advertiseAddress: "172.17.0.3"
    bindPort: 6443
nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.3"
discovery:
  bootstrapToken:
    apiServerEndpoint: "172.17.0.3:6443"
    token: "abcdef.0123456789abcdef"
    unsafeSkipCAVerification: true
... skipping 44 lines ...
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.17.0.3
... skipping 4 lines ...
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 30 lines ...
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.3:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 30 lines ...
localAPIEndpoint:
  advertiseAddress: 172.17.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.3:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 122 lines ...
I1016 07:04:32.831418      81 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1016 07:04:33.331614      81 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1016 07:04:33.831302      81 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1016 07:04:34.331885      81 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1016 07:04:34.831280      81 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1016 07:04:35.331048      81 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1016 07:04:39.757681      81 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 3926 milliseconds
I1016 07:04:39.837675      81 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 6 milliseconds
I1016 07:04:40.333665      81 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds
I1016 07:04:40.833056      81 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds
I1016 07:04:41.333928      81 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=32s 200 OK in 2 milliseconds
I1016 07:04:41.334206      81 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[apiclient] All control plane components are healthy after 11.509138 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1016 07:04:41.339513      81 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 3 milliseconds
I1016 07:04:41.345545      81 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 5 milliseconds
... skipping 145 lines ...
I1016 07:04:46.405821     282 checks.go:287] validating the existence of file /etc/kubernetes/pki/ca.crt
I1016 07:04:46.405839     282 checks.go:433] validating if the connectivity type is via proxy or direct
I1016 07:04:46.405886     282 join.go:441] [preflight] Discovering cluster-info
I1016 07:04:46.406013     282 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I1016 07:04:46.406803     282 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I1016 07:04:46.414293     282 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 7 milliseconds
I1016 07:04:46.415351     282 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1016 07:04:51.415532     282 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I1016 07:04:51.416345     282 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I1016 07:04:51.419865     282 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds
I1016 07:04:51.420197     282 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1016 07:04:56.420412     282 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I1016 07:04:56.421343     282 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I1016 07:04:56.424810     282 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 3 milliseconds
I1016 07:04:56.425096     282 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1016 07:05:01.425462     282 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I1016 07:05:01.426066     282 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I1016 07:05:01.431655     282 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 5 milliseconds
I1016 07:05:01.432847     282 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "172.17.0.3:6443"
I1016 07:05:01.432870     282 token.go:205] [discovery] Successfully established connection with API Server "172.17.0.3:6443"
I1016 07:05:01.432892     282 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
... skipping 70 lines ...
I1016 07:04:46.406993     277 checks.go:287] validating the existence of file /etc/kubernetes/pki/ca.crt
I1016 07:04:46.407055     277 checks.go:433] validating if the connectivity type is via proxy or direct
I1016 07:04:46.407136     277 join.go:441] [preflight] Discovering cluster-info
I1016 07:04:46.407287     277 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I1016 07:04:46.407929     277 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I1016 07:04:46.417763     277 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 9 milliseconds
I1016 07:04:46.418489     277 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1016 07:04:51.418651     277 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I1016 07:04:51.419518     277 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I1016 07:04:51.421556     277 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 1 milliseconds
I1016 07:04:51.421812     277 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1016 07:04:56.421994     277 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I1016 07:04:56.422688     277 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I1016 07:04:56.425459     277 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds
I1016 07:04:56.425770     277 token.go:202] [discovery] Failed to connect to API Server "172.17.0.3:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1016 07:05:01.426019     277 token.go:199] [discovery] Trying to connect to API Server "172.17.0.3:6443"
I1016 07:05:01.426644     277 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.3:6443"
I1016 07:05:01.428983     277 round_trippers.go:443] GET https://172.17.0.3:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 2 milliseconds
I1016 07:05:01.430662     277 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "172.17.0.3:6443"
I1016 07:05:01.430694     277 token.go:205] [discovery] Successfully established connection with API Server "172.17.0.3:6443"
I1016 07:05:01.430737     277 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
... skipping 970 lines ...
  test/e2e/kubectl/kubectl.go:180
Oct 16 07:05:51.704: INFO: Could not find batch/v2alpha1, Resource=cronjobs resource, skipping test: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"the server could not find the requested resource", Reason:"NotFound", Details:(*v1.StatusDetails)(0xc001933f80), Code:404}}
[AfterEach] Kubectl run CronJob
  test/e2e/kubectl/kubectl.go:176
Oct 16 07:05:51.705: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44471 --kubeconfig=/root/.kube/kind-config-kind delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-7225'
Oct 16 07:05:51.844: INFO: rc: 1
Oct 16 07:05:51.844: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl [kubectl --server=https://127.0.0.1:44471 --kubeconfig=/root/.kube/kind-config-kind delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-7225] []  <nil>  Error from server (NotFound): cronjobs.batch \"e2e-test-echo-cronjob-alpha\" not found\n [] <nil> 0xc0021589f0 exit status 1 <nil> <nil> true [0xc000f94920 0xc000f94938 0xc000f94950] [0xc000f94920 0xc000f94938 0xc000f94950] [0xc000f94930 0xc000f94948] [0x10f14b0 0x10f14b0] 0xc001e9a720 <nil>}:\nCommand stdout:\n\nstderr:\nError from server (NotFound): cronjobs.batch \"e2e-test-echo-cronjob-alpha\" not found\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
    error running &{/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl [kubectl --server=https://127.0.0.1:44471 --kubeconfig=/root/.kube/kind-config-kind delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-7225] []  <nil>  Error from server (NotFound): cronjobs.batch "e2e-test-echo-cronjob-alpha" not found
     [] <nil> 0xc0021589f0 exit status 1 <nil> <nil> true [0xc000f94920 0xc000f94938 0xc000f94950] [0xc000f94920 0xc000f94938 0xc000f94950] [0xc000f94930 0xc000f94948] [0x10f14b0 0x10f14b0] 0xc001e9a720 <nil>}:
    Command stdout:
    
    stderr:
    Error from server (NotFound): cronjobs.batch "e2e-test-echo-cronjob-alpha" not found
    
    error:
    exit status 1
occurred
[AfterEach] [sig-cli] Kubectl alpha client
  test/e2e/framework/framework.go:151
Oct 16 07:05:51.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7225" for this suite.
... skipping 19 lines ...
Oct 16 07:05:50.541: INFO: >>> kubeConfig: /root/.kube/kind-config-kind
STEP: Building a namespace api object, basename job
Oct 16 07:05:52.106: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Oct 16 07:05:52.119: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-3299
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  test/e2e/apps/job.go:133
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:151
Oct 16 07:05:54.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 585 lines ...
Oct 16 07:06:07.675: INFO: >>> kubeConfig: /root/.kube/kind-config-kind
STEP: Building a namespace api object, basename volume-provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-provisioning-3878
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  test/e2e/storage/volume_provisioning.go:136
[It] should report an error and create no PV
  test/e2e/storage/volume_provisioning.go:778
Oct 16 07:06:07.848: INFO: Only supported for providers [aws] (not skeleton)
[AfterEach] [sig-storage] Dynamic Provisioning
  test/e2e/framework/framework.go:151
Oct 16 07:06:07.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-3878" for this suite.


S [SKIPPING] [0.182 seconds]
[sig-storage] Dynamic Provisioning
test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  test/e2e/storage/volume_provisioning.go:777
    should report an error and create no PV [It]
    test/e2e/storage/volume_provisioning.go:778

    Only supported for providers [aws] (not skeleton)

    test/e2e/storage/volume_provisioning.go:779
------------------------------
... skipping 565 lines ...
Oct 16 07:06:15.107: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6e4531c3-968a-41e1-b06a-358d64f7beb0"
Oct 16 07:06:15.108: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6e4531c3-968a-41e1-b06a-358d64f7beb0" in namespace "pods-7282" to be "terminated due to deadline exceeded"
Oct 16 07:06:15.113: INFO: Pod "pod-update-activedeadlineseconds-6e4531c3-968a-41e1-b06a-358d64f7beb0": Phase="Running", Reason="", readiness=true. Elapsed: 5.169925ms
Oct 16 07:06:17.157: INFO: Pod "pod-update-activedeadlineseconds-6e4531c3-968a-41e1-b06a-358d64f7beb0": Phase="Running", Reason="", readiness=true. Elapsed: 2.049070744s
Oct 16 07:06:19.162: INFO: Pod "pod-update-activedeadlineseconds-6e4531c3-968a-41e1-b06a-358d64f7beb0": Phase="Running", Reason="", readiness=true. Elapsed: 4.054318616s
Oct 16 07:06:21.166: INFO: Pod "pod-update-activedeadlineseconds-6e4531c3-968a-41e1-b06a-358d64f7beb0": Phase="Running", Reason="", readiness=true. Elapsed: 6.0580525s
Oct 16 07:06:23.169: INFO: Pod "pod-update-activedeadlineseconds-6e4531c3-968a-41e1-b06a-358d64f7beb0": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 8.061726461s
Oct 16 07:06:23.169: INFO: Pod "pod-update-activedeadlineseconds-6e4531c3-968a-41e1-b06a-358d64f7beb0" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:151
Oct 16 07:06:23.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7282" for this suite.

... skipping 644 lines ...
Oct 16 07:06:30.442: INFO: >>> kubeConfig: /root/.kube/kind-config-kind
STEP: Building a namespace api object, basename node-problem-detector
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in node-problem-detector-9191
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
  test/e2e/node/node_problem_detector.go:49
Oct 16 07:06:30.575: INFO: No SSH Key for provider skeleton: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
  test/e2e/framework/framework.go:151
Oct 16 07:06:30.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-problem-detector-9191" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.152 seconds]
[k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
test/e2e/framework/framework.go:686
  should run without error [BeforeEach]
  test/e2e/node/node_problem_detector.go:57

  No SSH Key for provider skeleton: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  test/e2e/node/node_problem_detector.go:50
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:98
Oct 16 07:06:30.596: INFO: Only supported for providers [aws] (not skeleton)
... skipping 5436 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:63
[It] should support unsafe sysctls which are actually whitelisted
  test/e2e/common/sysctl.go:110
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:151
... skipping 189 lines ...
Oct 16 07:07:54.155: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44471 --kubeconfig=/root/.kube/kind-config-kind explain e2e-test-crd-publish-openapi-5750-crds.spec'
Oct 16 07:07:54.387: INFO: stderr: ""
Oct 16 07:07:54.387: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5750-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Oct 16 07:07:54.387: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44471 --kubeconfig=/root/.kube/kind-config-kind explain e2e-test-crd-publish-openapi-5750-crds.spec.bars'
Oct 16 07:07:54.632: INFO: stderr: ""
Oct 16 07:07:54.632: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5750-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Oct 16 07:07:54.632: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44471 --kubeconfig=/root/.kube/kind-config-kind explain e2e-test-crd-publish-openapi-5750-crds.spec.bars2'
Oct 16 07:07:54.851: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:151
Oct 16 07:07:58.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9821" for this suite.
... skipping 1248 lines ...
Oct 16 07:07:24.528: INFO: PersistentVolumeClaim pvc-hwf8f found but phase is Pending instead of Bound.
Oct 16 07:07:26.533: INFO: PersistentVolumeClaim pvc-hwf8f found but phase is Pending instead of Bound.
Oct 16 07:07:28.551: INFO: PersistentVolumeClaim pvc-hwf8f found but phase is Pending instead of Bound.
Oct 16 07:07:30.554: INFO: PersistentVolumeClaim pvc-hwf8f found but phase is Pending instead of Bound.
Oct 16 07:07:32.557: INFO: PersistentVolumeClaim pvc-hwf8f found and phase=Bound (18.123899928s)
STEP: checking for CSIInlineVolumes feature
Oct 16 07:08:00.611: INFO: Error getting logs for pod csi-inline-volume-nfgnr: the server rejected our request for an unknown reason (get pods csi-inline-volume-nfgnr)
STEP: Deleting pod csi-inline-volume-nfgnr in namespace csi-mock-volumes-8784
WARNING: pod log: csi-inline-volume-nfgnr/csi-volume-tester: pods "csi-inline-volume-nfgnr" not found
STEP: Deleting the previously created pod
Oct 16 07:08:04.650: INFO: Deleting pod "pvc-volume-tester-hnsg5" in namespace "csi-mock-volumes-8784"
Oct 16 07:08:04.656: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hnsg5" to be fully deleted
STEP: Checking CSI driver logs
Oct 16 07:08:14.680: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8784","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8784","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8784","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities",