Result: FAILURE
Tests: 1 failed / 145 succeeded
Started: 2017-05-07 03:25
Elapsed: 7m48s
Revision:

Test Failures


[k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] 7.83s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Proxy\sversion\sv1\sshould\sproxy\slogs\son\snode\susing\sproxy\ssubresource\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:67
Expected error:
    <*errors.errorString | 0xc420c75db0>: {
        s: "no nodes exist, can't test node proxy",
    }
    no nodes exist, can't test node proxy
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:316
				
Click to see stdout/stderr from junit_10.xml
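For reference, the focus pattern in the repro command above is just the literal test name with regex metacharacters escaped, each space rewritten as `\s`, and an end anchor appended. A sketch of generating such a pattern (this assumes nothing about how hack/e2e.go actually builds it):

```python
import re

name = ("Proxy version v1 should proxy logs on node "
        "using proxy subresource [Conformance]")

# Escape regex metacharacters (the [Conformance] brackets), then turn
# spaces into \s and anchor the end, matching the command shown above.
# The first replace covers Python < 3.7, where re.escape also escapes spaces.
focus = re.escape(name).replace("\\ ", "\\s").replace(" ", "\\s") + "$"
print(focus)
```

The resulting string can be passed directly to `--ginkgo.focus` to run just this spec.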



145 Passed Tests

442 Skipped Tests

Error lines from build-log.txt

... skipping 1868 lines ...
[BeforeEach] [k8s.io] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:52
[It] should serve multiport endpoints from pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:215
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-3psjv
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-3psjv to expose endpoints map[]
May  7 03:25:15.675: INFO: Get endpoints failed (31.689948ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May  7 03:25:16.678: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-3psjv exposes endpoints map[] (1.033887739s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-3psjv
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-3psjv to expose endpoints map[pod1:[100]]
May  7 03:25:20.740: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.046859889s elapsed, will retry)
May  7 03:25:25.832: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.139111717s elapsed, will retry)
May  7 03:25:27.842: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-3psjv exposes endpoints map[pod1:[100]] (11.149051186s elapsed)
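The endpoint validation above is a bounded poll: an early "not found" error is tolerated, a mismatched endpoints map is retried, and success is declared once the observed map equals the expected one or the 1m0s budget runs out. A minimal sketch of that pattern (all names here are hypothetical, not the e2e framework's API):

```python
import time

def wait_for_endpoints(get_endpoints, expected, timeout=60.0, interval=1.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_endpoints() until it returns `expected` or `timeout` elapses.

    Mirrors the log above: a LookupError ("endpoints not found") early on
    is ignored, mismatches are retried, and the wait is bounded.
    """
    deadline = clock() + timeout
    while True:
        try:
            found = get_endpoints()
        except LookupError:      # endpoints object not created yet
            found = None
        if found == expected:
            return True
        if clock() >= deadline:
            return False
        sleep(interval)
```

Injecting `clock` and `sleep` keeps the helper unit-testable without real delays.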
... skipping 3043 lines ...
May  7 03:27:30.436: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May  7 03:27:30.436: INFO: Running '/usr/local/bin/kubectl --server=https://10.240.0.7:6443 --kubeconfig=/home/ubuntu/.kube/config describe pod redis-master-qmq6j --namespace=e2e-tests-kubectl-640bm'
May  7 03:27:30.622: INFO: stderr: ""
May  7 03:27:30.622: INFO: stdout: "Name:\t\tredis-master-qmq6j\nNamespace:\te2e-tests-kubectl-640bm\nNode:\t\tjuju-5b54c5-2/10.240.0.16\nStart Time:\tSun, 07 May 2017 03:26:58 +0000\nLabels:\t\tapp=redis\n\t\trole=master\nAnnotations:\tkubernetes.io/created-by={\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-kubectl-640bm\",\"name\":\"redis-master\",\"uid\":\"06c43c5c-32d5-11...\nStatus:\t\tRunning\nIP:\t\t10.1.62.51\nControllers:\tReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:\tdocker://83209a4e992c440e309e0487b2997d419b966fe5c88789648a8eeaf82705da22\n    Image:\t\tgcr.io/google_containers/redis:e2e\n    Image ID:\t\tdocker-pullable://gcr.io/google_containers/redis@sha256:f066bcf26497fbc55b9bf0769cb13a35c0afa2aa42e737cc46b7fb04b23a2f25\n    Port:\t\t6379/TCP\n    State:\t\tRunning\n      Started:\t\tSun, 07 May 2017 03:27:23 +0000\n    Ready:\t\tTrue\n    Restart Count:\t0\n    Environment:\t<none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rjp4p (ro)\nConditions:\n  Type\t\tStatus\n  Initialized \tTrue \n  Ready \tTrue \n  PodScheduled \tTrue \nVolumes:\n  default-token-rjp4p:\n    Type:\tSecret (a volume populated by a Secret)\n    SecretName:\tdefault-token-rjp4p\n    Optional:\tfalse\nQoS Class:\tBestEffort\nNode-Selectors:\t<none>\nTolerations:\tnode.alpha.kubernetes.io/notReady=:Exists:NoExecute for 300s\n\t\tnode.alpha.kubernetes.io/unreachable=:Exists:NoExecute for 300s\nEvents:\n  FirstSeen\tLastSeen\tCount\tFrom\t\t\tSubObjectPath\t\t\tType\t\tReason\t\tMessage\n  ---------\t--------\t-----\t----\t\t\t-------------\t\t\t--------\t------\t\t-------\n  32s\t\t32s\t\t1\tdefault-scheduler\t\t\t\t\tNormal\t\tScheduled\tSuccessfully assigned redis-master-qmq6j to juju-5b54c5-2\n  31s\t\t31s\t\t1\tkubelet, juju-5b54c5-2\tspec.containers{redis-master}\tNormal\t\tPulling\t\tpulling image \"gcr.io/google_containers/redis:e2e\"\n  7s\t\t7s\t\t1\tkubelet, juju-5b54c5-2\tspec.containers{redis-master}\tNormal\t\tPulled\t\tSuccessfully pulled image \"gcr.io/google_containers/redis:e2e\"\n  7s\t\t7s\t\t1\tkubelet, juju-5b54c5-2\tspec.containers{redis-master}\tNormal\t\tCreated\t\tCreated container with id 83209a4e992c440e309e0487b2997d419b966fe5c88789648a8eeaf82705da22\n  7s\t\t7s\t\t1\tkubelet, juju-5b54c5-2\tspec.containers{redis-master}\tNormal\t\tStarted\t\tStarted container with id 83209a4e992c440e309e0487b2997d419b966fe5c88789648a8eeaf82705da22\n"
May  7 03:27:30.622: INFO: Running '/usr/local/bin/kubectl --server=https://10.240.0.7:6443 --kubeconfig=/home/ubuntu/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-640bm'
May  7 03:27:30.922: INFO: stderr: ""
May  7 03:27:30.922: INFO: stdout: "Name:\t\tredis-master\nNamespace:\te2e-tests-kubectl-640bm\nSelector:\tapp=redis,role=master\nLabels:\t\tapp=redis\n\t\trole=master\nAnnotations:\t<none>\nReplicas:\t1 current / 1 desired\nPods Status:\t1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:\tapp=redis\n\t\trole=master\n  Containers:\n   redis-master:\n    Image:\t\tgcr.io/google_containers/redis:e2e\n    Port:\t\t6379/TCP\n    Environment:\t<none>\n    Mounts:\t\t<none>\n  Volumes:\t\t<none>\nEvents:\n  FirstSeen\tLastSeen\tCount\tFrom\t\t\tSubObjectPath\tType\t\tReason\t\t\tMessage\n  ---------\t--------\t-----\t----\t\t\t-------------\t--------\t------\t\t\t-------\n  32s\t\t32s\t\t1\treplication-controller\t\t\tNormal\t\tSuccessfulCreate\tCreated pod: redis-master-qmq6j\n"
May  7 03:27:30.923: INFO: Running '/usr/local/bin/kubectl --server=https://10.240.0.7:6443 --kubeconfig=/home/ubuntu/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-640bm'
May  7 03:27:31.356: INFO: stderr: ""
May  7 03:27:31.356: INFO: stdout: "Name:\t\t\tredis-master\nNamespace:\t\te2e-tests-kubectl-640bm\nLabels:\t\t\tapp=redis\n\t\t\trole=master\nAnnotations:\t\t<none>\nSelector:\t\tapp=redis,role=master\nType:\t\t\tClusterIP\nIP:\t\t\t10.152.183.95\nPort:\t\t\t<unset>\t6379/TCP\nEndpoints:\t\t10.1.62.51:6379\nSession Affinity:\tNone\nEvents:\t\t\t<none>\n"
May  7 03:27:31.360: INFO: Running '/usr/local/bin/kubectl --server=https://10.240.0.7:6443 --kubeconfig=/home/ubuntu/.kube/config describe node juju-5b54c5-1'
May  7 03:27:31.840: INFO: stderr: ""
May  7 03:27:31.840: INFO: stdout: "Name:\t\t\tjuju-5b54c5-1\nRole:\t\t\t\nLabels:\t\t\tbeta.kubernetes.io/arch=amd64\n\t\t\tbeta.kubernetes.io/os=linux\n\t\t\tkubernetes.io/hostname=juju-5b54c5-1\nAnnotations:\t\tnode.alpha.kubernetes.io/ttl=0\n\t\t\tvolumes.kubernetes.io/controller-managed-attach-detach=true\nTaints:\t\t\t<none>\nCreationTimestamp:\tSun, 07 May 2017 03:23:20 +0000\nPhase:\t\t\t\nConditions:\n  Type\t\t\tStatus\tLastHeartbeatTime\t\t\tLastTransitionTime\t\t\tReason\t\t\t\tMessage\n  ----\t\t\t------\t-----------------\t\t\t------------------\t\t\t------\t\t\t\t-------\n  OutOfDisk \t\tFalse \tSun, 07 May 2017 03:27:24 +0000 \tSun, 07 May 2017 03:23:20 +0000 \tKubeletHasSufficientDisk \tkubelet has sufficient disk space available\n  MemoryPressure \tFalse \tSun, 07 May 2017 03:27:24 +0000 \tSun, 07 May 2017 03:23:20 +0000 \tKubeletHasSufficientMemory \tkubelet has sufficient memory available\n  DiskPressure \t\tFalse \tSun, 07 May 2017 03:27:24 +0000 \tSun, 07 May 2017 03:23:20 +0000 \tKubeletHasNoDiskPressure \tkubelet has no disk pressure\n  Ready \t\tTrue \tSun, 07 May 2017 03:27:24 +0000 \tSun, 07 May 2017 03:23:20 +0000 \tKubeletReady \t\t\tkubelet is posting ready status. AppArmor enabled\nAddresses:\t\t10.240.0.14,10.240.0.14,juju-5b54c5-1\nCapacity:\n cpu:\t\t1\n memory:\t1736672Ki\n pods:\t\t110\nAllocatable:\n cpu:\t\t1\n memory:\t1634272Ki\n pods:\t\t110\nSystem Info:\n Machine ID:\t\t\t\t7c93538356bb86714530301b6c38013d\n System UUID:\t\t\t\t7C935383-56BB-8671-4530-301B6C38013D\n Boot ID:\t\t\t\t739de311-baad-469e-b1cf-c453c545cc9d\n Kernel Version:\t\t\t4.8.0-51-generic\n OS Image:\t\t\t\tUbuntu 16.04.2 LTS\n Operating System:\t\t\tlinux\n Architecture:\t\t\t\tamd64\n Container Runtime Version:\t\tdocker://1.12.6\n Kubelet Version:\t\t\tv1.6.2\n Kube-Proxy Version:\t\t\tv1.6.2\nExternalID:\t\t\t\tjuju-5b54c5-1\nNon-terminated Pods:\t\t\t(9 in total)\n  Namespace\t\t\t\tName\t\t\t\t\t\t\t\tCPU Requests\tCPU Limits\tMemory Requests\tMemory Limits\n  ---------\t\t\t\t----\t\t\t\t\t\t\t\t------------\t----------\t---------------\t-------------\n  default\t\t\t\tnginx-ingress-controller-13sq2\t\t\t\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n  e2e-tests-container-probe-bc88h\tliveness-http\t\t\t\t\t\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n  e2e-tests-container-probe-kb1p7\ttest-webserver-0e069365-32d5-11e7-831b-42010af00011\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n  e2e-tests-downward-api-7g355\t\tannotationupdate16db56dc-32d5-11e7-a743-42010af00011\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n  e2e-tests-downward-api-j3x4b\t\tlabelsupdate1a1becb4-32d5-11e7-a794-42010af00011\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n  e2e-tests-kubectl-q3zs1\t\tredis-master-hh8s6\t\t\t\t\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n  e2e-tests-kubectl-txqb4\t\tupdate-demo-nautilus-mbnm5\t\t\t\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n  e2e-tests-pod-network-test-2s4rh\tnetserver-1\t\t\t\t\t\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\n  kube-system\t\t\t\tkubernetes-dashboard-2917854236-cglv2\t\t\t\t100m (10%)\t100m (10%)\t50Mi (3%)\t50Mi (3%)\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  CPU Requests\tCPU Limits\tMemory Requests\tMemory Limits\n  ------------\t----------\t---------------\t-------------\n  100m (10%)\t100m (10%)\t50Mi (3%)\t50Mi (3%)\nEvents:\n  FirstSeen\tLastSeen\tCount\tFrom\t\t\t\tSubObjectPath\tType\t\tReason\t\t\tMessage\n  ---------\t--------\t-----\t----\t\t\t\t-------------\t--------\t------\t\t\t-------\n  4m\t\t4m\t\t1\tkubelet, juju-5b54c5-1\t\t\t\tNormal\t\tStarting\t\tStarting kubelet.\n  4m\t\t4m\t\t1\tkubelet, juju-5b54c5-1\t\t\t\tWarning\t\tImageGCFailed\t\tunable to find data for container /\n  4m\t\t4m\t\t2\tkubelet, juju-5b54c5-1\t\t\t\tNormal\t\tNodeHasSufficientDisk\tNode juju-5b54c5-1 status is now: NodeHasSufficientDisk\n  4m\t\t4m\t\t2\tkubelet, juju-5b54c5-1\t\t\t\tNormal\t\tNodeHasSufficientMemory\tNode juju-5b54c5-1 status is now: NodeHasSufficientMemory\n  4m\t\t4m\t\t2\tkubelet, juju-5b54c5-1\t\t\t\tNormal\t\tNodeHasNoDiskPressure\tNode juju-5b54c5-1 status is now: NodeHasNoDiskPressure\n  4m\t\t4m\t\t1\tkube-proxy, juju-5b54c5-1\t\t\tNormal\t\tStarting\t\tStarting kube-proxy.\n"
... skipping 1137 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:650
  version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:275
    should proxy logs on node using proxy subresource [Conformance] [It]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:67

    Expected error:
        <*errors.errorString | 0xc420c75db0>: {
            s: "no nodes exist, can't test node proxy",
        }
        no nodes exist, can't test node proxy
    not to have occurred

... skipping 1122 lines ...
May  7 03:29:03.742: INFO: Running '/usr/local/bin/kubectl --server=https://10.240.0.7:6443 --kubeconfig=/home/ubuntu/.kube/config create -f - --namespace=e2e-tests-kubectl-95jnr'
May  7 03:29:03.971: INFO: stderr: ""
May  7 03:29:03.971: INFO: stdout: "service \"redis-slave\" created\n"
STEP: validating guestbook app
May  7 03:29:03.971: INFO: Waiting for all frontend pods to be Running.
May  7 03:29:43.973: INFO: Waiting for frontend to serve content.
May  7 03:29:43.982: INFO: Failed to get response from guestbook. err: an error on the server ("Error: 'dial tcp 10.1.53.85:80: getsockopt: connection refused'\nTrying to reach: 'http://10.1.53.85:80/guestbook.php?cmd=get&key=messages&value='") has prevented the request from succeeding (get services frontend), response: 
May  7 03:29:49.140: INFO: Trying to add a new entry to the guestbook.
May  7 03:29:49.284: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May  7 03:29:49.355: INFO: Running '/usr/local/bin/kubectl --server=https://10.240.0.7:6443 --kubeconfig=/home/ubuntu/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-95jnr'
May  7 03:29:52.572: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  7 03:29:52.572: INFO: stdout: "deployment \"frontend\" deleted\n"
... skipping 468 lines ...
------------------------------
May  7 03:32:50.767: INFO: Running AfterSuite actions on all node


May  7 03:27:28.559: INFO: Running AfterSuite actions on all node
May  7 03:32:50.804: INFO: Running AfterSuite actions on node 1
May  7 03:32:50.806: INFO: Error running cluster/log-dump.sh: fork/exec ../../cluster/log-dump.sh: no such file or directory
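The fork/exec failure above is a working-directory problem: `../../cluster/log-dump.sh` is resolved relative to wherever the runner happens to be, not the repo root. A sketch of resolving the script against a known base directory instead (the helper name is hypothetical, not the e2e harness's code):

```python
import os

def find_log_dump(base_dir):
    """Resolve cluster/log-dump.sh two levels above base_dir rather than
    relying on the caller's CWD. Returns the absolute path if the script
    exists and is executable, else None so the caller can skip the dump
    instead of failing with a fork/exec error.
    """
    candidate = os.path.abspath(
        os.path.join(base_dir, "..", "..", "cluster", "log-dump.sh"))
    return candidate if os.access(candidate, os.X_OK) else None
```

A caller would pass the directory of the script doing the invoking (e.g. `os.path.dirname(__file__)` in Python, or `$(dirname "$0")` in shell) as `base_dir`.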



Summarizing 1 Failure:

[Fail] [k8s.io] Proxy version v1 [It] should proxy logs on node using proxy subresource [Conformance] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:316

Ran 146 of 588 Specs in 457.349 seconds
FAIL! -- 145 Passed | 1 Failed | 0 Pending | 442 Skipped 

Ginkgo ran 1 suite in 7m48.506287923s
Test Suite Failed
JUJU_E2E_END=1494127970