| Result | FAILURE |
| --- | --- |
| Tests | 33 failed / 80 succeeded |
| Started | |
| Elapsed | 43m49s |
| Version | 1487733394 |
| Builder | master |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ConfigMap\sshould\sbe\sconsumable\sin\smultiple\svolumes\sin\sthe\ssame\spod\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:269 wait for pod "pod-configmaps-95868ed8-f8ad-11e6-87ca-42010af0001f" to disappear Expected success, but got an error: <*errors.errorString | 0xc420201270>: { s: "timed out waiting for the condition", } timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121 from junit_13.xml
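Most of the failures in this run share this shape: the wait at test/e2e/framework/pods.go:121 is a wait for the test pod to disappear after deletion, and it gives up with "timed out waiting for the condition" (the message of `wait.ErrWaitTimeout`). As a rough illustration of that kind of check, here is a minimal sketch against current client-go; it is not the framework's actual helper.

```go
// Minimal sketch of a "wait for pod to disappear" style poll, assuming current
// client-go. Illustrative only; the e2e framework's real helper differs.
package e2edebug

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodToDisappear polls until the named pod is gone or the timeout expires,
// returning wait.ErrWaitTimeout ("timed out waiting for the condition") on timeout.
func waitForPodToDisappear(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // pod is gone; condition satisfied
		}
		if err != nil {
			return false, err // unexpected API error; abort the poll
		}
		return false, nil // pod still exists; keep polling
	})
}
```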
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ConfigMap\sshould\sbe\sconsumable\svia\senvironment\svariable\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:197 wait for pod "pod-configmaps-2f1bfc1a-f8ae-11e6-ab79-42010af0001f" to disappear Expected success, but got an error: <*errors.errorString | 0xc420203210>: { s: "timed out waiting for the condition", } timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121 from junit_12.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ConfigMap\supdates\sshould\sbe\sreflected\sin\svolume\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:154 Expected error: <*errors.errorString | 0xc420203220>: { s: "timed out waiting for the condition", } timed out waiting for the condition not to have occurred /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67 from junit_15.xml
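The other recurring shape, first seen here and at framework/pods.go:67 in several later entries, is an "Expected error ... not to have occurred" timeout that appears to happen while waiting for a freshly created test pod to come up. Assuming that wait is essentially a poll on the pod phase, a minimal client-go sketch of such a check follows (illustrative only, not the framework's implementation).

```go
// Minimal sketch of a "wait for pod running" style poll, assuming current client-go.
// Illustrative only; not the framework code at test/e2e/framework/pods.go:67.
package e2edebug

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunning polls until the pod reaches phase Running or the timeout expires.
// A pod that terminates first (Succeeded or Failed) is reported as an error.
func waitForPodRunning(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodRunning:
			return true, nil
		case corev1.PodSucceeded, corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s ended in phase %s before running", ns, name, pod.Status.Phase)
		}
		return false, nil // still Pending; keep polling
	})
}
```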
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Downward\sAPI\svolume\sshould\sprovide\scontainer\'s\scpu\slimit\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:162 wait for pod "downwardapi-volume-7d06d70c-f8ad-11e6-81db-42010af0001f" to disappear Expected success, but got an error: <*errors.errorString | 0xc420201080>: { s: "timed out waiting for the condition", } timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121 from junit_04.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Downward\sAPI\svolume\sshould\sprovide\scontainer\'s\smemory\srequest\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:189 wait for pod "downwardapi-volume-9bd50c25-f8ae-11e6-81db-42010af0001f" to disappear Expected success, but got an error: <*errors.errorString | 0xc420201080>: { s: "timed out waiting for the condition", } timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121 from junit_04.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Downward\sAPI\svolume\sshould\sprovide\snode\sallocatable\s\(memory\)\sas\sdefault\smemory\slimit\sif\sthe\slimit\sis\snot\sset\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:203 wait for pod "downwardapi-volume-8a6e19b7-f8ad-11e6-9deb-42010af0001f" to disappear Expected success, but got an error: <*errors.errorString | 0xc4201df4d0>: { s: "timed out waiting for the condition", } timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121 from junit_02.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Downward\sAPI\svolume\sshould\supdate\slabels\son\smodification\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124 Expected error: <*errors.errorString | 0xc4201df6c0>: { s: "timed out waiting for the condition", } timed out waiting for the condition not to have occurred /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67 from junit_18.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=EmptyDir\svolumes\sshould\ssupport\s\(non\-root\,0644\,default\)\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:109 wait for pod "pod-8f9aab88-f8ad-11e6-83e1-42010af0001f" to disappear Expected success, but got an error: <*errors.errorString | 0xc420429800>: { s: "timed out waiting for the condition", } timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121 from junit_03.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=EmptyDir\svolumes\sshould\ssupport\s\(root\,0644\,default\)\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:97 wait for pod "pod-8176b2f0-f8ad-11e6-91bf-42010af0001f" to disappear Expected success, but got an error: <*errors.errorString | 0xc4201df260>: { s: "timed out waiting for the condition", } timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121 from junit_06.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=EmptyDir\svolumes\sshould\ssupport\s\(root\,0777\,default\)\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:105 wait for pod "pod-79a83ae3-f8ad-11e6-9cdd-42010af0001f" to disappear Expected success, but got an error: <*errors.errorString | 0xc420201240>: { s: "timed out waiting for the condition", } timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121 from junit_01.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=EmptyDir\svolumes\svolume\son\sdefault\smedium\sshould\shave\sthe\scorrect\smode\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:93 wait for pod "pod-2f34b53c-f8af-11e6-b2c6-42010af0001f" to disappear Expected success, but got an error: <*errors.errorString | 0xc4201defa0>: { s: "timed out waiting for the condition", } timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121 from junit_14.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubectl\sclient\s\[k8s\.io\]\sGuestbook\sapplication\sshould\screate\sand\sstop\sa\sworking\sapplication\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366 Expected error: <*errors.errorString | 0xc420992090>: { s: "Timeout while waiting for pods with labels \"app=guestbook,tier=frontend\" to be running", } Timeout while waiting for pods with labels "app=guestbook,tier=frontend" to be running not to have occurred /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1577 from junit_18.xml
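For the Guestbook failure above, the timeout was on pods selected by the labels "app=guestbook,tier=frontend" reaching Running. When triaging, it helps to list the matching pods and their phases; a minimal client-go sketch of such an assumed debugging helper (not part of the test):

```go
// Minimal sketch for listing the pods behind a label selector such as
// "app=guestbook,tier=frontend" together with their phases.
package e2edebug

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listPodsByLabel prints name and phase for every pod matching the selector,
// to show which replicas never reached Running.
func listPodsByLabel(c kubernetes.Interface, ns, selector string) error {
	pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
	return nil
}
```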
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubectl\sclient\s\[k8s\.io\]\sKubectl\sdescribe\sshould\scheck\sif\skubectl\sdescribe\sprints\srelevant\sinformation\sfor\src\sand\spods\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:733 Feb 22 03:24:34.708: Verified 0 of 1 pods , error : timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293 from junit_22.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubectl\sclient\s\[k8s\.io\]\sKubectl\sexpose\sshould\screate\sservices\sfor\src\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:812 Feb 22 03:24:44.204: Verified 0 of 1 pods , error : timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293 from junit_25.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubectl\sclient\s\[k8s\.io\]\sKubectl\spatch\sshould\sadd\sannotations\sfor\spods\sin\src\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:943 Feb 22 03:49:46.376: Verified 0 of 1 pods , error : timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:293 from junit_14.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubectl\sclient\s\[k8s\.io\]\sUpdate\sDemo\sshould\screate\sand\sstop\sa\sreplication\scontroller\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:310 Feb 22 03:29:33.076: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985 from junit_17.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubectl\sclient\s\[k8s\.io\]\sUpdate\sDemo\sshould\sdo\sa\srolling\supdate\sof\sa\sreplication\scontroller\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334 Feb 22 03:22:57.246: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985 from junit_14.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubectl\sclient\s\[k8s\.io\]\sUpdate\sDemo\sshould\sscale\sa\sreplication\scontroller\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Feb 22 03:23:00.710: Couldn't delete ns: "e2e-tests-kubectl-z2jtp": namespace e2e-tests-kubectl-z2jtp was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-kubectl-z2jtp was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"}) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353 from junit_11.xml
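This entry, like the Networking (udp), PreStop, and readiness-probe entries later in the list, is a namespace teardown timeout: "namespace ... was not deleted with limit: timed out waiting for the condition, pods remaining: 1". A useful first step is to see which pods are still in the namespace and whether they carry a deletion timestamp or finalizers; a minimal client-go sketch of such an assumed debugging helper:

```go
// Minimal sketch for inspecting a namespace that failed to delete: list the pods
// still present with their deletion timestamps and finalizers.
package e2edebug

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpRemainingPods prints what is still blocking namespace teardown.
func dumpRemainingPods(c kubernetes.Interface, ns string) error {
	pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\tphase=%s\tdeletionTimestamp=%v\tfinalizers=%v\n",
			p.Name, p.Status.Phase, p.DeletionTimestamp, p.Finalizers)
	}
	return nil
}
```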
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=KubeletManagedEtcHosts\sshould\stest\skubelet\smanaged\s\/etc\/hosts\sfile\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:54 Expected error: <*errors.errorString | 0xc420201290>: { s: "timed out waiting for the condition", } timed out waiting for the condition not to have occurred /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67 from junit_19.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Networking\s\[k8s\.io\]\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\sintra\-pod\scommunication\:\shttp\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38 Expected error: <*errors.errorString | 0xc4201df110>: { s: "timed out waiting for the condition", } timed out waiting for the condition not to have occurred /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423 from junit_24.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Networking\s\[k8s\.io\]\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\sintra\-pod\scommunication\:\sudp\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Feb 22 03:22:47.866: Couldn't delete ns: "e2e-tests-pod-network-test-64tgc": namespace e2e-tests-pod-network-test-64tgc was not deleted with limit: timed out waiting for the condition, pods remaining: 2, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-pod-network-test-64tgc was not deleted with limit: timed out waiting for the condition, pods remaining: 2, pods missing deletion timestamp: 0"}) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353 from junit_12.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Networking\s\[k8s\.io\]\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\snode\-pod\scommunication\:\shttp\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52 Expected error: <*errors.errorString | 0xc4202034b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition not to have occurred /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544 from junit_11.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Networking\s\[k8s\.io\]\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\snode\-pod\scommunication\:\sudp\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59 Expected error: <*errors.errorString | 0xc4201df6c0>: { s: "timed out waiting for the condition", } timed out waiting for the condition not to have occurred /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544 from junit_18.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Pods\sshould\sget\sa\shost\sIP\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:143 Expected error: <*errors.errorString | 0xc4201df260>: { s: "timed out waiting for the condition", } timed out waiting for the condition not to have occurred /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67 from junit_06.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=PreStop\sshould\scall\sprestop\swhen\skilling\sa\spod\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Feb 22 03:22:22.901: Couldn't delete ns: "e2e-tests-prestop-gjl2g": namespace e2e-tests-prestop-gjl2g was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-prestop-gjl2g was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"}) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353 from junit_19.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Probing\scontainer\sshould\shave\smonotonically\sincreasing\srestart\scount\s\[Conformance\]\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:205 Feb 22 03:22:13.858: pod e2e-tests-container-probe-48l3v/liveness-http - expected number of restarts: 5, found restarts: 1 /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:403 from junit_17.xml
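Unlike the timeouts above, this probe failure is an assertion mismatch: the test expected the liveness-http container to have restarted 5 times and found only 1 restart. The number it checks comes from the pod's container statuses, which can be read as in the following minimal client-go sketch (not the test's own code):

```go
// Minimal sketch for reading a container's restart count from pod status,
// the value the probe test compares against.
package e2edebug

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// restartCount returns the kubelet-reported restart count for one container.
func restartCount(c kubernetes.Interface, ns, podName, container string) (int32, error) {
	pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
	if err != nil {
		return 0, err
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.Name == container {
			return cs.RestartCount, nil
		}
	}
	return 0, fmt.Errorf("container %q not found in pod %s/%s", container, ns, podName)
}
```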
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Probing\scontainer\swith\sreadiness\sprobe\sthat\sfails\sshould\snever\sbe\sready\sand\snever\srestart\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142 Feb 22 03:23:32.029: Couldn't delete ns: "e2e-tests-container-probe-dcgf7": namespace e2e-tests-container-probe-dcgf7 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-container-probe-dcgf7 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"}) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353 from junit_05.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ReplicationController\sshould\sserve\sa\sbasic\simage\son\seach\sreplica\swith\sa\spublic\simage\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:40 Expected error: <*errors.errorString | 0xc420201190>: { s: "timed out waiting for the condition", } timed out waiting for the condition not to have occurred /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:140 from junit_09.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Secrets\sshould\sbe\sconsumable\sfrom\spods\sin\svolume\swith\smappings\sand\sItem\sMode\sset\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:56 wait for pod "pod-secrets-3790d1b4-f8af-11e6-b069-42010af0001f" to disappear Expected success, but got an error: <*errors.errorString | 0xc420201090>: { s: "timed out waiting for the condition", } timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121 from junit_21.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ServiceAccounts\sshould\smount\san\sAPI\stoken\sinto\spods\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service_accounts.go:240 wait for pod "pod-service-account-77ed6228-f8ad-11e6-b069-42010af0001f-61dm1" to disappear Expected success, but got an error: <*errors.errorString | 0xc420201090>: { s: "timed out waiting for the condition", } timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121 from junit_21.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Services\sshould\sserve\sa\sbasic\sendpoint\sfrom\spods\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:169 Feb 22 03:24:04.218: Timed out waiting for service endpoint-test2 in namespace e2e-tests-services-335rq to expose endpoints map[pod1:[80] pod2:[80]] (1m0s elapsed) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1603 from junit_11.xml
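Here the service endpoint-test2 in namespace e2e-tests-services-335rq never exposed the expected endpoints map[pod1:[80] pod2:[80]] within 1m0s. Dumping the service's Endpoints object shows which pod addresses and ports actually registered; a minimal client-go sketch of such an assumed debugging helper:

```go
// Minimal sketch for dumping the addresses and ports a service's Endpoints
// object actually exposes.
package e2edebug

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpEndpoints prints the ready backends currently behind a service.
func dumpEndpoints(c kubernetes.Interface, ns, service string) error {
	ep, err := c.CoreV1().Endpoints(ns).Get(context.TODO(), service, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, subset := range ep.Subsets {
		for _, addr := range subset.Addresses {
			target := "<unknown pod>"
			if addr.TargetRef != nil {
				target = addr.TargetRef.Name
			}
			for _, port := range subset.Ports {
				fmt.Printf("%s -> %s:%d\n", target, addr.IP, port.Port)
			}
		}
	}
	return nil
}
```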
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Variable\sExpansion\sshould\sallow\ssubstituting\svalues\sin\sa\scontainer\'s\sargs\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:131 wait for pod "var-expansion-c7a29f47-f8ad-11e6-818a-42010af0001f" to disappear Expected success, but got an error: <*errors.errorString | 0xc420203230>: { s: "timed out waiting for the condition", } timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121 from junit_10.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Variable\sExpansion\sshould\sallow\ssubstituting\svalues\sin\sa\scontainer\'s\scommand\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:100 wait for pod "var-expansion-835e8bf8-f8ad-11e6-8c5a-42010af0001f" to disappear Expected success, but got an error: <*errors.errorString | 0xc4201df1f0>: { s: "timed out waiting for the condition", } timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:121 from junit_16.xml
[k8s.io] ConfigMap should be consumable from pods in volume [Conformance]
[k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance]
[k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance]
[k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance]
[k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance]
[k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance]
[k8s.io] DNS should provide DNS for services [Conformance]
[k8s.io] DNS should provide DNS for the cluster [Conformance]
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance]
[k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance]
[k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance]
[k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance]
[k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance]
[k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance]
[k8s.io] Downward API should provide pod IP as an env var [Conformance]
[k8s.io] Downward API should provide pod name and namespace as env vars [Conformance]
[k8s.io] Downward API volume should provide container's cpu request [Conformance]
[k8s.io] Downward API volume should provide container's memory limit [Conformance]
[k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance]
[k8s.io] Downward API volume should provide podname only [Conformance]
[k8s.io] Downward API volume should set DefaultMode on files [Conformance]
[k8s.io] Downward API volume should set mode on item file [Conformance]
[k8s.io] Downward API volume should update annotations on modification [Conformance]
[k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance]
[k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance]
[k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance]
[k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance]
[k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance]
[k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance]
[k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance]
[k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance]
[k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance]
[k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance]
[k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
[k8s.io] HostPath should give a volume the correct mode [Conformance]
[k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
[k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
[k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
[k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
[k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
[k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
[k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
[k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
[k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
[k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
[k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
[k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
[k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
[k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
[k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
[k8s.io] Networking should provide Internet connection for containers [Conformance]
[k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance]
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance]
[k8s.io] Pods should be submitted and removed [Conformance]
[k8s.io] Pods should be updated [Conformance]
[k8s.io] Pods should contain environment variables for services [Conformance]
[k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance]
[k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance]
[k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance]
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [Conformance]
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance]
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance]
[k8s.io] Proxy version v1 should proxy logs on node [Conformance]
[k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
[k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance]
[k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
[k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance]
[k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
[k8s.io] Secrets should be consumable from pods in env vars [Conformance]
[k8s.io] Secrets should be consumable from pods in volume [Conformance]
[k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance]
[k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance]
[k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance]
[k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance]
[k8s.io] Service endpoints latency should not be very high [Conformance]
[k8s.io] Services should provide secure master service [Conformance]
[k8s.io] Services should serve multiport endpoints from pods [Conformance]
[k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance]
ThirdParty resources [Flaky] [Disruptive] Simple Third Party creating/deleting thirdparty objects works [Conformance]
[k8s.io] Addon update should propagate add-on file changes [Slow]
[k8s.io] Cadvisor should be healthy on every node.
[k8s.io] Cluster level logging using Elasticsearch [Feature:Elasticsearch] should check that logs from containers are ingested into Elasticsearch
[k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL
[k8s.io] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
[k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
[k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
[k8s.io] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
[k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
[k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
[k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
[k8s.io] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
[k8s.io] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
[k8s.io] ClusterDns [Feature:Example] should create pod that uses dns
[k8s.io] ConfigMap should be consumable from pods in volume as non-root with FSGroup [Feature:FSGroup]
[k8s.io] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup]
[k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup]
[k8s.io] CronJob should not emit unexpected warnings
[k8s.io] CronJob should not schedule jobs when suspended [Slow]
[k8s.io] CronJob should not schedule new jobs when ForbidConcurrent [Slow]
[k8s.io] CronJob should replace jobs when ReplaceConcurrent
[k8s.io] CronJob should schedule multiple jobs concurrently
[k8s.io] DNS config map should be able to change configuration
[k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
[k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
[k8s.io] DNS should provide DNS for ExternalName services
[k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation
[k8s.io] Daemon set [Serial] should run and stop complex daemon
[k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity
[k8s.io] Daemon set [Serial] should run and stop simple daemon
[k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
[k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
[k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
[k8s.io] Density [Feature:HighDensityPerformance] should allow starting 95 pods per node
[k8s.io] Density [Feature:ManualPerformance] should allow running maximum capacity pods on nodes
[k8s.io] Density [Feature:ManualPerformance] should allow starting 100 pods per node
[k8s.io] Density [Feature:ManualPerformance] should allow starting 3 pods per node
[k8s.io] Density [Feature:ManualPerformance] should allow starting 50 pods per node
[k8s.io] Density [Feature:Performance] should allow starting 30 pods per node
[k8s.io] Deployment RecreateDeployment should delete old pods and create new ones
[k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones
[k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order
[k8s.io] Deployment deployment reaping should cascade to its replica sets and pods
[k8s.io] Deployment deployment should create new pods
[k8s.io] Deployment deployment should delete old replica sets
[k8s.io] Deployment deployment should label adopted RSs and pods
[k8s.io] Deployment deployment should support rollback
[k8s.io] Deployment deployment should support rollback when there's replica set with no revision
[k8s.io] Deployment deployment should support rollover
[k8s.io] Deployment iterative rollouts should eventually progress
[k8s.io] Deployment lack of progress should be reported in the deployment status
[k8s.io] Deployment overlapping deployment should not fight with each other
[k8s.io] Deployment paused deployment should be able to scale
[k8s.io] Deployment paused deployment should be ignored by the controller
[k8s.io] Deployment scaled rollout deployment should not block on annotation check
[k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction
[k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
[k8s.io] DisruptionController evictions: no PDB => should allow an eviction
[k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction
[k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction
[k8s.io] DisruptionController should create a PodDisruptionBudget
[k8s.io] DisruptionController should update PodDisruptionBudget status
[k8s.io] Downward API volume should provide podname as non-root with fsgroup [Feature:FSGroup]
[k8s.io] Downward API volume should provide podname as non-root with fsgroup and defaultMode [Feature:FSGroup]
[k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Alpha should create and delete alpha persistent volumes [Slow]
[k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow]
[k8s.io] ESIPP [Slow][Feature:ExternalTrafficLocalOnly] should handle updates to source ip annotation [Slow][Feature:ExternalTrafficLocalOnly]
[k8s.io] ESIPP [Slow][Feature:ExternalTrafficLocalOnly] should only target nodes with endpoints [Slow][Feature:ExternalTrafficLocalOnly]
[k8s.io] ESIPP [Slow][Feature:ExternalTrafficLocalOnly] should work for type=LoadBalancer [Slow][Feature:ExternalTrafficLocalOnly]
[k8s.io] ESIPP [Slow][Feature:ExternalTrafficLocalOnly] should work for type=NodePort [Slow][Feature:ExternalTrafficLocalOnly]
[k8s.io] ESIPP [Slow][Feature:ExternalTrafficLocalOnly] should work from pods [Slow][Feature:ExternalTrafficLocalOnly]
[k8s.io] Empty [Feature:Empty] does nothing
[k8s.io] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
[k8s.io] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] new files should be created with FSGroup ownership when container is non-root
[k8s.io] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] new files should be created with FSGroup ownership when container is root
[k8s.io] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] volume on default medium should have the correct mode using FSGroup
[k8s.io] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
[k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow]
[k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow]
[k8s.io] EmptyDir wrapper volumes should not conflict
[k8s.io] Etcd failure [Disruptive] should recover from SIGKILL
[k8s.io] Etcd failure [Disruptive] should recover from network partition with master
[k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses Ingress connectivity and DNS should be able to connect to a federated ingress via its load balancer
[k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses Ingress connectivity and DNS should be able to discover a federated ingress service via DNS
[k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should be created and deleted successfully
[k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should be deleted from underlying clusters when OrphanDependents is false
[k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should create and update matching ingresses in underlying clusters
[k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should not be deleted from underlying clusters when OrphanDependents is nil
[k8s.io] Federated ingresses [Feature:Federation] Federated Ingresses should not be deleted from underlying clusters when OrphanDependents is true
[k8s.io] Federation apiserver [Feature:Federation] Admission control should not be able to create resources if namespace does not exist
[k8s.io] Federation apiserver [Feature:Federation] Cluster objects should be created and deleted successfully
[k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should be created and deleted successfully
[k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should be deleted from underlying clusters when OrphanDependents is false
[k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should not be deleted from underlying clusters when OrphanDependents is nil
[k8s.io] Federation daemonsets [Feature:Federation] DaemonSet objects should not be deleted from underlying clusters when OrphanDependents is true
[k8s.io] Federation deployments [Feature:Federation] Deployment objects should be created and deleted successfully
[k8s.io] Federation deployments [Feature:Federation] Federated Deployment should be deleted from underlying clusters when OrphanDependents is false
[k8s.io] Federation deployments [Feature:Federation] Federated Deployment should create and update matching deployments in underling clusters
[k8s.io] Federation deployments [Feature:Federation] Federated Deployment should not be deleted from underlying clusters when OrphanDependents is nil
[k8s.io] Federation deployments [Feature:Federation] Federated Deployment should not be deleted from underlying clusters when OrphanDependents is true
[k8s.io] Federation events [Feature:Federation] Event objects should be created and deleted successfully
[k8s.io] Federation namespace [Feature:Federation] Namespace objects all resources in the namespace should be deleted when namespace is deleted
[k8s.io] Federation namespace [Feature:Federation] Namespace objects should be created and deleted successfully
[k8s.io] Federation namespace [Feature:Federation] Namespace objects should be deleted from underlying clusters when OrphanDependents is false
[k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is nil
[k8s.io] Federation namespace [Feature:Federation] Namespace objects should not be deleted from underlying clusters when OrphanDependents is true
[k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should be deleted from underlying clusters when OrphanDependents is false
[k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should create and update matching replicasets in underling clusters
[k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is nil
[k8s.io] Federation replicasets [Feature:Federation] Federated ReplicaSet should not be deleted from underlying clusters when OrphanDependents is true
[k8s.io] Federation replicasets [Feature:Federation] ReplicaSet objects should be created and deleted successfully
[k8s.io] Federation secrets [Feature:Federation] Secret objects should be created and deleted successfully
[k8s.io] Federation secrets [Feature:Federation] Secret objects should be deleted from underlying clusters when OrphanDependents is false
[k8s.io] Federation secrets [Feature:Federation] Secret objects should not be deleted from underlying clusters when OrphanDependents is nil
[k8s.io] Federation secrets [Feature:Federation] Secret objects should not be deleted from underlying clusters when OrphanDependents is true
[k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable
[k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4
[k8s.io] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD]
[k8s.io] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
[k8s.io] Garbage collector [Feature:GarbageCollector] should delete pods created by rc when not orphaning
[k8s.io] Garbage collector [Feature:GarbageCollector] should orphan pods created by rc if delete options say so
[k8s.io] Garbage collector [Feature:GarbageCollector] should orphan pods created by rc if deleteOptions.OrphanDependents is nil
[k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods
[k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs
[k8s.io] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
[k8s.io] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
[k8s.io] HostPath should support r/w
[k8s.io] HostPath should support subPath
[k8s.io] InitContainer should invoke init containers on a RestartAlways pod
[k8s.io] InitContainer should invoke init containers on a RestartNever pod
[k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod
[k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod
[k8s.io] Initial Resources [Feature:InitialResources] [Flaky] should set initial resources based on historical data
[k8s.io] Job should delete a job
[k8s.io] Job should fail a job
[k8s.io] Job should keep restarting failed pods
[k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted
[k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted
[k8s.io] Job should run a job to completion when tasks succeed
[k8s.io] Job should scale a job down
[k8s.io] Job should scale a job up
[k8s.io] Kibana Logging Instances Is Alive [Feature:Elasticsearch] should check that the Kibana logging instance is alive
[k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob
[k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob should create a ScheduledJob
[k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC
[k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC
[k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes
[k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes
[k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes
[k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node
[k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node
[k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes
[k8s.io] Kubectl client [k8s.io] Simple pod should support exec
[k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy
[k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach
[k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward
[k8s.io] Kubelet [Serial] [Slow] [k8s.io] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
[k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node
[k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node
[k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive
[k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
[k8s.io] Load capacity [Feature:ManualPerformance] should be able to handle 3 pods per node
[k8s.io] Load capacity [Feature:Performance] should be able to handle 30 pods per node
[k8s.io] Loadbalancing: L7 [Feature:Ingress] [k8s.io] GCE [Slow] [Feature: Ingress] shoud create ingress with given static-ip
[k8s.io] Loadbalancing: L7 [Feature:Ingress] [k8s.io] GCE [Slow] [Feature: Ingress] should conform to Ingress spec
[k8s.io] Loadbalancing: L7 [Feature:Ingress] [k8s.io] Nginx [Slow] should conform to Ingress spec
[k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node
[k8s.io] Mesos applies slave attributes as labels
[k8s.io] Mesos schedules pods annotated with roles on correct slaves
[k8s.io] Mesos starts static pods on every node in the mesos cluster
[k8s.io] MetricsGrabber should grab all metrics from API server.
[k8s.io] MetricsGrabber should grab all metrics from a ControllerManager.
[k8s.io] MetricsGrabber should grab all metrics from a Kubelet.
[k8s.io] MetricsGrabber should grab all metrics from a Scheduler.
[k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster.
[k8s.io] Multi-AZ Clusters should spread the pods of a replication controller across zones
[k8s.io] Multi-AZ Clusters should spread the pods of a service across zones
[k8s.io] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
[k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
[k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted.
[k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted.
[k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout
[k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned
[k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero
[k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster
[k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive]
[k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive]
[k8s.io] Network should set TCP CLOSE_WAIT timeout
[k8s.io] Networking IPerf [Experimental] [Slow] [Feature:Networking-Performance] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients)
[k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http
[k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp
[k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http
[k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp
[k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http
[k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp
[k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http
[k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp
[k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow]
[k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow]
[k8s.io] Networking should check kube-proxy urls
[k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services
[k8s.io] NodeOutOfDisk [Serial] [Flaky] [Disruptive] runs out of disk space
[k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors
[k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes
[k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes
[k8s.io] Opaque resources [Feature:OpaqueResources] should account opaque integer resources in pods with multiple containers.
[k8s.io] Opaque resources [Feature:OpaqueResources] should not break pods that do not consume opaque integer resources.
[k8s.io] Opaque resources [Feature:OpaqueResources] should not schedule pods that exceed the available amount of opaque integer resource.
[k8s.io] Opaque resources [Feature:OpaqueResources] should schedule pods that do consume opaque integer resources.
[k8s.io] PersistentVolumes with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access [Flaky]
[k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access [Flaky]
[k8s.io] PersistentVolumes with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access [Flaky]
[k8s.io] PersistentVolumes with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access [Flaky]
[k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access[Flaky]
[k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access[Flaky]
[k8s.io] PersistentVolumes with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access[Flaky]
[k8s.io] Pet Store [Feature:Example] should scale to persist a nominal number ( 50 ) of transactions in 1m0s seconds
[k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow]
[k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow]
[k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow]
[k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow]
[k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow]
[k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow]
[k8s.io] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
[k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow]
[k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow]
[k8s.io] Pods should support remote command execution over websockets
[k8s.io] Pods should support retrieving logs from the container over websockets
[k8s.io] PrivilegedPod should test privileged pod
[k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout [Conformance]
[k8s.io] Proxy version v1 should proxy to cadvisor
[k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource
[k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
[k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
[k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart
[k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
[k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
[k8s.io] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
[k8s.io] ReplicaSet should serve a basic image on each replica with a private image
[k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota
[k8s.io] ReplicationController should serve a basic image on each replica with a private image
[k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota
[k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available
[k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap.
[k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim.
[k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod.
[k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller.
[k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret.
[k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service.
[k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated.
[k8s.io] ResourceQuota should verify ResourceQuota with best effort scope.
[k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes.
[k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
[k8s.io] SSH should SSH to all nodes and run commands
[k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow]
[k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
[k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching
[k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching
[k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching
[k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities
[k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2
[k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
[k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
[k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
[k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected
[k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid
[k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work
[k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work
[k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
[k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
[k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
[k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace
[k8s.io] Security Context [Feature:SecurityContext] should support container.SecurityContext.RunAsUser
[k8s.io] Security Context [Feature:SecurityContext] should support pod.Spec.SecurityContext.RunAsUser
[k8s.io] Security Context [Feature:SecurityContext] should support pod.Spec.SecurityContext.SupplementalGroups
[k8s.io] Security Context [Feature:SecurityContext] should support seccomp alpha docker/default annotation [Feature:Seccomp]
[k8s.io] Security Context [Feature:SecurityContext] should support seccomp alpha unconfined annotation on the container [Feature:Seccomp]
[k8s.io] Security Context [Feature:SecurityContext] should support seccomp alpha unconfined annotation on the pod [Feature:Seccomp]
[k8s.io] Security Context [Feature:SecurityContext] should support seccomp default which is unconfined [Feature:Seccomp]
[k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling
[k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling when using hostIPC
[k8s.io] Security Context [Feature:SecurityContext] should support volume SELinux relabeling when using hostPID
[k8s.io] ServiceAccounts should ensure a single API token exists
[k8s.io] ServiceLoadBalancer [Feature:ServiceLoadBalancer] should support simple GET on Ingress ips
[k8s.io] Services should be able to change the type and ports of a service [Slow]
[k8s.io] Services should be able to create a functioning NodePort service
[k8s.io] Services should be able to up and down services
[k8s.io] Services should check NodePort out-of-range
[k8s.io] Services should create endpoints for unready pods
[k8s.io] Services should only allow access from service loadbalancer source ranges [Slow]
[k8s.io] Services should preserve source pod IP for traffic thru service cluster IP
[k8s.io] Services should prevent NodePort collisions
[k8s.io] Services should release NodePorts on delete
[k8s.io] Services should use same NodePort with same port but different protocols
[k8s.io] Services should work after restarting apiserver [Disruptive]
[k8s.io] Services should work after restarting kube-proxy [Disruptive]
[k8s.io] Staging client repo client should create pods, delete pods, watch pods
[k8s.io] Stateful Set recreate should recreate evicted statefulset
[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed
[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy
[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates
[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale
[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity
[k8s.io] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
[k8s.io] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
[k8s.io] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
[k8s.io] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
[k8s.io] Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node
[k8s.io] Sysctls should reject invalid sysctls
[k8s.io] Sysctls should support sysctls
[k8s.io] Sysctls should support unsafe sysctls which are actually whitelisted
[k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
[k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain responsive services [Feature:ExperimentalClusterUpgrade]
[k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade]
[k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain a functioning cluster [Feature:NodeUpgrade]
[k8s.io] Upgrade [Feature:Upgrade] [k8s.io] node upgrade should maintain responsive services [Feature:ExperimentalNodeUpgrade]
[k8s.io] V1Job should delete a job
[k8s.io] V1Job should fail a job
[k8s.io] V1Job should keep restarting failed pods
[k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted
[k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted
[k8s.io] V1Job should run a job to completion when tasks succeed
[k8s.io] V1Job should scale a job down
[k8s.io] V1Job should scale a job up
[k8s.io] Volumes [Feature:Volumes] [k8s.io] Ceph RBD should be mountable
[k8s.io] Volumes [Feature:Volumes] [k8s.io] CephFS should be mountable
[k8s.io] Volumes [Feature:Volumes] [k8s.io] Cinder should be mountable
[k8s.io] Volumes [Feature:Volumes] [k8s.io] GlusterFS should be mountable
[k8s.io] Volumes [Feature:Volumes] [k8s.io] NFS should be mountable
[k8s.io] Volumes [Feature:Volumes] [k8s.io] PD should be mountable
[k8s.io] Volumes [Feature:Volumes] [k8s.io] iSCSI should be mountable
[k8s.io] [Feature:Example] [k8s.io] Cassandra should create and scale cassandra
[k8s.io] [Feature:Example] [k8s.io] CassandraStatefulSet should create statefulset
[k8s.io] [Feature:Example] [k8s.io] Downward API should create a pod that prints his name and namespace
[k8s.io] [Feature:Example] [k8s.io] Hazelcast should create and scale hazelcast
[k8s.io] [Feature:Example] [k8s.io] Liveness liveness pods should be automatically restarted
[k8s.io] [Feature:Example] [k8s.io] Redis should create and stop redis servers
[k8s.io] [Feature:Example] [k8s.io] RethinkDB should create and stop rethinkdb servers
[k8s.io] [Feature:Example] [k8s.io] Secret should create a pod that reads a secret
[k8s.io] [Feature:Example] [k8s.io] Spark should start spark master, driver and workers
[k8s.io] [Feature:Example] [k8s.io] Storm should create and stop Zookeeper, Nimbus and Storm worker servers
[k8s.io] [Feature:Federation] Federated Services DNS non-local federated service [Slow] missing local service should never find DNS entries for a missing local service
[k8s.io] [Feature:Federation] Federated Services DNS non-local federated service should be able to discover a non-local federated service
[k8s.io] [Feature:Federation] Federated Services DNS should be able to discover a federated service
[k8s.io] [Feature:Federation] Federated Services Service creation should create matching services in underlying clusters
[k8s.io] [Feature:Federation] Federated Services Service creation should not be deleted from underlying clusters when it is deleted
[k8s.io] [Feature:Federation] Federated Services Service creation should succeed
[k8s.io] [Feature:Federation] Federation API server authentication should accept cluster resources when the client has right authentication credentials
[k8s.io] [Feature:Federation] Federation API server authentication should not accept cluster resources when the client has invalid authentication credentials
[k8s.io] [Feature:Federation] Federation API server authentication should not accept cluster resources when the client has no authentication credentials
[k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods
[k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod
[k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
[k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
[k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
[k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
[k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
[k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
[k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.