PR #88160 (pohly): hostpath topology + CSI ephemeral inline volume scheduling

Result: FAILURE
Tests: 1 failed / 15 succeeded
Started: 2020-02-14 13:56
Elapsed: 37m36s
Revision: v1.18.0-alpha.5.112+9f573b1e79599a
Builder: gke-prow-default-pool-cf4891d4-msqg
Refs: master:208df382, 88160:0080ff32
Pod: aa8a4d1b-4f31-11ea-94dd-2ad10083244e
Resultstore: https://source.cloud.google.com/results/invocations/99811c1a-22e8-407f-ac08-95861ceea4d9/targets/test
infra-commit: db7f45788
job-version: v1.18.0-alpha.5.112+9f573b1e79599a
master_os_image: cos-77-12371-175-0
node_os_image: cos-77-12371-175-0
repo: k8s.io/kubernetes
repo-commit: 9f573b1e79599a485b533dab5a198a34649ca0c5
repos: k8s.io/kubernetes: master:208df3828d7dbdf2872550b8bd2e947748d7c0e7, 88160:0080ff32edf5c51dc9bafbbd05ade4676e5e7864; k8s.io/release: master

Test Failures


DumpClusterLogs 3m14s

error during ./cluster/log-dump/log-dump.sh /workspace/_artifacts: exit status 1
				from junit_runner.xml
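
The log lines further down show kubetest driving each job step as a child process (e.g. "process.go:153: Running: ..."). A minimal Go sketch, assuming only standard os/exec behavior (this is not kubetest's actual process.go), of how a nonzero exit from log-dump.sh surfaces as the "exit status 1" error above:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runStep runs a step command and wraps a nonzero exit the way the
// failure above reads: os/exec reports it as "exit status 1".
func runStep(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("error during %s: %v",
			strings.Join(append([]string{name}, args...), " "), err)
	}
	return nil
}

func main() {
	if err := runStep("./cluster/log-dump/log-dump.sh", "/workspace/_artifacts"); err != nil {
		// e.g. error during ./cluster/log-dump/log-dump.sh /workspace/_artifacts: exit status 1
		fmt.Println(err)
	}
}
```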



15 passed tests (not shown)

4841 skipped tests (not shown)

Error lines from build-log.txt

... skipping 235 lines ...
W0214 14:02:33.811] INFO: Build completed successfully, 4546 total actions
W0214 14:02:33.813] INFO: Build completed successfully, 4546 total actions
W0214 14:02:33.847] $TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
I0214 14:02:36.061] make: Leaving directory '/go/src/k8s.io/kubernetes'
W0214 14:02:36.221] 2020/02/14 14:02:36 process.go:155: Step 'make -C /go/src/k8s.io/kubernetes bazel-release' finished in 4m35.785472629s
W0214 14:02:36.221] 2020/02/14 14:02:36 util.go:265: Flushing memory.
W0214 14:02:48.368] 2020/02/14 14:02:48 util.go:275: flushMem error (page cache): exit status 1
W0214 14:02:48.369] 2020/02/14 14:02:48 process.go:153: Running: /go/src/k8s.io/release/push-build.sh --nomock --verbose --noupdatelatest --bucket=kubernetes-release-pull --ci --gcs-suffix=/pull-kubernetes-e2e-gce-device-plugin-gpu --allow-dup
I0214 14:02:48.473] push-build.sh: BEGIN main on aa8a4d1b-4f31-11ea-94dd-2ad10083244e Fri Feb 14 14:02:48 UTC 2020
I0214 14:02:48.473] 
I0214 14:02:48.500] 
I0214 14:02:48.507] push-build.sh is running a *REAL* push!!
I0214 14:02:48.517] 
... skipping 754 lines ...
W0214 14:11:30.514] Trying to find master named 'e2e-30dd4105c2-8baae-master'
W0214 14:11:30.515] Looking for address 'e2e-30dd4105c2-8baae-master-ip'
W0214 14:11:31.507] Using master: e2e-30dd4105c2-8baae-master (external IP: 35.230.55.109; internal IP: (not set))
I0214 14:11:31.608] Waiting up to 300 seconds for cluster initialization.
I0214 14:11:31.608] 
I0214 14:11:31.609]   This will continually check to see if the API for kubernetes is reachable.
I0214 14:11:31.609]   This may time out if there was some uncaught error during start up.
I0214 14:11:31.609] 
I0214 14:11:31.727] Kubernetes cluster created.
I0214 14:11:31.947] Cluster "k8s-jkns-pr-gce-gpus_e2e-30dd4105c2-8baae" set.
I0214 14:11:32.162] User "k8s-jkns-pr-gce-gpus_e2e-30dd4105c2-8baae" set.
I0214 14:11:32.361] Context "k8s-jkns-pr-gce-gpus_e2e-30dd4105c2-8baae" created.
I0214 14:11:32.563] Switched to context "k8s-jkns-pr-gce-gpus_e2e-30dd4105c2-8baae".
... skipping 22 lines ...
I0214 14:13:00.379] e2e-30dd4105c2-8baae-master              Ready,SchedulingDisabled   <none>   88s   v1.18.0-alpha.5.112+9f573b1e79599a
I0214 14:13:00.380] e2e-30dd4105c2-8baae-minion-group-0lbs   Ready                      <none>   38s   v1.18.0-alpha.5.112+9f573b1e79599a
I0214 14:13:00.380] e2e-30dd4105c2-8baae-minion-group-1qqc   Ready                      <none>   25s   v1.18.0-alpha.5.112+9f573b1e79599a
I0214 14:13:00.381] e2e-30dd4105c2-8baae-minion-group-jk7m   Ready                      <none>   69s   v1.18.0-alpha.5.112+9f573b1e79599a
I0214 14:13:00.381] e2e-30dd4105c2-8baae-minion-group-jnp6   Ready                      <none>   23s   v1.18.0-alpha.5.112+9f573b1e79599a
I0214 14:13:00.829] Validate output:
I0214 14:13:01.266] NAME                 STATUS    MESSAGE             ERROR
I0214 14:13:01.266] controller-manager   Healthy   ok                  
I0214 14:13:01.266] etcd-0               Healthy   {"health":"true"}   
I0214 14:13:01.267] scheduler            Healthy   ok                  
I0214 14:13:01.267] etcd-1               Healthy   {"health":"true"}   
I0214 14:13:01.278] Cluster validation succeeded
W0214 14:13:01.378] Done, listing cluster services:
... skipping 2548 lines ...
I0214 14:22:09.452] Feb 14 14:20:13.990: INFO: gpuResourceName nvidia.com/gpu
I0214 14:22:09.452] Feb 14 14:20:13.990: INFO: Nvidia GPUs exist on all schedulable nodes
I0214 14:22:09.452] Feb 14 14:20:14.007: INFO: Get container nvidia-driver-installer-tsdps/pause usage on node e2e-30dd4105c2-8baae-minion-group-jk7m. CPUUsageInCores: 0, MemoryUsageInBytes: 2158592, MemoryWorkingSetInBytes: 2158592
I0214 14:22:09.453] Feb 14 14:20:14.007: INFO: Get container nvidia-gpu-device-plugin-rntgn/nvidia-gpu-device-plugin usage on node e2e-30dd4105c2-8baae-minion-group-jk7m. CPUUsageInCores: 0.000575298, MemoryUsageInBytes: 9719808, MemoryWorkingSetInBytes: 5435392
I0214 14:22:09.453] Feb 14 14:20:14.026: INFO: Creating 4 pods and have the pods run a CUDA app
I0214 14:22:09.453] Feb 14 14:20:14.193: INFO: Wait for all test pods to succeed
I0214 14:22:09.453] Feb 14 14:20:14.193: INFO: Waiting up to 5m0s for pod "nvidia-gpu-6863b6bf-dba3-45d0-b1fa-4fa2a90e3f1f" in namespace "device-plugin-gpus-4561" to be "Succeeded or Failed"
I0214 14:22:09.453] Feb 14 14:20:14.241: INFO: Pod "nvidia-gpu-6863b6bf-dba3-45d0-b1fa-4fa2a90e3f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 47.102586ms
I0214 14:22:09.454] Feb 14 14:20:14.604: INFO: Get container nvidia-driver-installer-mfp7r/pause usage on node e2e-30dd4105c2-8baae-minion-group-0lbs. CPUUsageInCores: 0, MemoryUsageInBytes: 2252800, MemoryWorkingSetInBytes: 2252800
I0214 14:22:09.454] Feb 14 14:20:14.604: INFO: Get container nvidia-gpu-device-plugin-5h2tf/nvidia-gpu-device-plugin usage on node e2e-30dd4105c2-8baae-minion-group-0lbs. CPUUsageInCores: 0.000799517, MemoryUsageInBytes: 9781248, MemoryWorkingSetInBytes: 5525504
I0214 14:22:09.454] Feb 14 14:20:15.439: INFO: Get container nvidia-gpu-device-plugin-gjp6z/nvidia-gpu-device-plugin usage on node e2e-30dd4105c2-8baae-minion-group-jnp6. CPUUsageInCores: 0.000398736, MemoryUsageInBytes: 9678848, MemoryWorkingSetInBytes: 5992448
I0214 14:22:09.455] Feb 14 14:20:15.439: INFO: Get container nvidia-driver-installer-stbkq/pause usage on node e2e-30dd4105c2-8baae-minion-group-jnp6. CPUUsageInCores: 0, MemoryUsageInBytes: 2166784, MemoryWorkingSetInBytes: 2166784
I0214 14:22:09.455] Feb 14 14:20:15.867: INFO: Get container nvidia-driver-installer-rx8m8/pause usage on node e2e-30dd4105c2-8baae-minion-group-1qqc. CPUUsageInCores: 0, MemoryUsageInBytes: 2084864, MemoryWorkingSetInBytes: 2084864
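
The "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above come from polling the pod's phase until it reaches a terminal state. A minimal client-go sketch of such a loop (an illustration, not the e2e framework's actual code; the kubeconfig path and pod name are placeholders):

```go
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodSucceededOrFailed polls the pod's phase until it is terminal,
// printing progress lines in the same spirit as the e2e log above.
func waitForPodSucceededOrFailed(cs kubernetes.Interface, ns, name string) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		return pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Namespace taken from the log; the pod name here is a placeholder.
	if err := waitForPodSucceededOrFailed(cs, "device-plugin-gpus-4561", "nvidia-gpu-example"); err != nil {
		panic(err)
	}
}
```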
... skipping 469 lines ...
I0214 14:22:09.612] Feb 14 14:22:02.682: INFO: Get container nvidia-driver-installer-rx8m8/pause usage on node e2e-30dd4105c2-8baae-minion-group-1qqc. CPUUsageInCores: 0, MemoryUsageInBytes: 2084864, MemoryWorkingSetInBytes: 2084864
I0214 14:22:09.612] Feb 14 14:22:03.350: INFO: Get container nvidia-driver-installer-mfp7r/pause usage on node e2e-30dd4105c2-8baae-minion-group-0lbs. CPUUsageInCores: 0, MemoryUsageInBytes: 2252800, MemoryWorkingSetInBytes: 2252800
I0214 14:22:09.613] Feb 14 14:22:03.350: INFO: Get container nvidia-gpu-device-plugin-5h2tf/nvidia-gpu-device-plugin usage on node e2e-30dd4105c2-8baae-minion-group-0lbs. CPUUsageInCores: 0.00026954, MemoryUsageInBytes: 8814592, MemoryWorkingSetInBytes: 5804032
I0214 14:22:09.613] Feb 14 14:22:03.470: INFO: Get container nvidia-driver-installer-tsdps/pause usage on node e2e-30dd4105c2-8baae-minion-group-jk7m. CPUUsageInCores: 0, MemoryUsageInBytes: 2158592, MemoryWorkingSetInBytes: 2158592
I0214 14:22:09.613] Feb 14 14:22:03.470: INFO: Get container nvidia-gpu-device-plugin-rntgn/nvidia-gpu-device-plugin usage on node e2e-30dd4105c2-8baae-minion-group-jk7m. CPUUsageInCores: 0.000226303, MemoryUsageInBytes: 9490432, MemoryWorkingSetInBytes: 6189056
I0214 14:22:09.614] Feb 14 14:22:04.206: INFO: Pod "nvidia-gpu-6863b6bf-dba3-45d0-b1fa-4fa2a90e3f1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m50.012133335s
I0214 14:22:09.614] Feb 14 14:22:04.206: INFO: Pod "nvidia-gpu-6863b6bf-dba3-45d0-b1fa-4fa2a90e3f1f" satisfied condition "Succeeded or Failed"
I0214 14:22:09.614] Feb 14 14:22:04.261: INFO: Got container logs for vector-addition-cuda8:
I0214 14:22:09.614] [Vector addition of 50000 elements]
I0214 14:22:09.614] Copy input data from the host memory to the CUDA device
I0214 14:22:09.614] CUDA kernel launch with 196 blocks of 256 threads
I0214 14:22:09.615] Copy output data from the CUDA device to the host memory
I0214 14:22:09.615] Test PASSED
... skipping 6 lines ...
I0214 14:22:09.616] Copy input data from the host memory to the CUDA device
I0214 14:22:09.617] CUDA kernel launch with 196 blocks of 256 threads
I0214 14:22:09.617] Copy output data from the CUDA device to the host memory
I0214 14:22:09.617] Test PASSED
I0214 14:22:09.617] Done
I0214 14:22:09.617] 
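
The launch geometry in the container logs ("196 blocks of 256 threads" for 50000 elements) follows the usual ceil-division grid sizing, blocks = (n + threadsPerBlock - 1) / threadsPerBlock. A quick check of the arithmetic in Go:

```go
package main

import "fmt"

func main() {
	const n, threadsPerBlock = 50000, 256
	// Integer ceil division: ceil(50000 / 256) = 196.
	blocks := (n + threadsPerBlock - 1) / threadsPerBlock
	fmt.Printf("CUDA kernel launch with %d blocks of %d threads\n", blocks, threadsPerBlock)
}
```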
I0214 14:22:09.617] Feb 14 14:22:04.302: INFO: Waiting up to 5m0s for pod "nvidia-gpu-b0d6f007-8b3d-46f8-a82a-abfc83e10975" in namespace "device-plugin-gpus-4561" to be "Succeeded or Failed"
I0214 14:22:09.618] Feb 14 14:22:04.337: INFO: Pod "nvidia-gpu-b0d6f007-8b3d-46f8-a82a-abfc83e10975": Phase="Pending", Reason="", readiness=false. Elapsed: 35.295567ms
I0214 14:22:09.618] Feb 14 14:22:04.736: INFO: Get container nvidia-driver-installer-rx8m8/pause usage on node e2e-30dd4105c2-8baae-minion-group-1qqc. CPUUsageInCores: 0, MemoryUsageInBytes: 2084864, MemoryWorkingSetInBytes: 2084864
I0214 14:22:09.618] Feb 14 14:22:04.736: INFO: Get container nvidia-gpu-device-plugin-wtxgm/nvidia-gpu-device-plugin usage on node e2e-30dd4105c2-8baae-minion-group-1qqc. CPUUsageInCores: 0.000170658, MemoryUsageInBytes: 9371648, MemoryWorkingSetInBytes: 6090752
I0214 14:22:09.619] Feb 14 14:22:05.397: INFO: Get container nvidia-gpu-device-plugin-5h2tf/nvidia-gpu-device-plugin usage on node e2e-30dd4105c2-8baae-minion-group-0lbs. CPUUsageInCores: 0.00026954, MemoryUsageInBytes: 8814592, MemoryWorkingSetInBytes: 5804032
I0214 14:22:09.619] Feb 14 14:22:05.397: INFO: Get container nvidia-driver-installer-mfp7r/pause usage on node e2e-30dd4105c2-8baae-minion-group-0lbs. CPUUsageInCores: 0, MemoryUsageInBytes: 2252800, MemoryWorkingSetInBytes: 2252800
I0214 14:22:09.620] Feb 14 14:22:05.521: INFO: Get container nvidia-gpu-device-plugin-rntgn/nvidia-gpu-device-plugin usage on node e2e-30dd4105c2-8baae-minion-group-jk7m. CPUUsageInCores: 0.000226303, MemoryUsageInBytes: 9490432, MemoryWorkingSetInBytes: 6189056
... skipping 7 lines ...
I0214 14:22:09.622] Feb 14 14:22:07.447: INFO: Get container nvidia-gpu-device-plugin-5h2tf/nvidia-gpu-device-plugin usage on node e2e-30dd4105c2-8baae-minion-group-0lbs. CPUUsageInCores: 0.00026954, MemoryUsageInBytes: 8814592, MemoryWorkingSetInBytes: 5804032
I0214 14:22:09.623] Feb 14 14:22:07.586: INFO: Get container nvidia-driver-installer-tsdps/pause usage on node e2e-30dd4105c2-8baae-minion-group-jk7m. CPUUsageInCores: 0, MemoryUsageInBytes: 2158592, MemoryWorkingSetInBytes: 2158592
I0214 14:22:09.623] Feb 14 14:22:07.586: INFO: Get container nvidia-gpu-device-plugin-rntgn/nvidia-gpu-device-plugin usage on node e2e-30dd4105c2-8baae-minion-group-jk7m. CPUUsageInCores: 0.000226303, MemoryUsageInBytes: 9490432, MemoryWorkingSetInBytes: 6189056
I0214 14:22:09.624] Feb 14 14:22:08.393: INFO: Get container nvidia-gpu-device-plugin-gjp6z/nvidia-gpu-device-plugin usage on node e2e-30dd4105c2-8baae-minion-group-jnp6. CPUUsageInCores: 0.000291957, MemoryUsageInBytes: 9449472, MemoryWorkingSetInBytes: 6201344
I0214 14:22:09.624] Feb 14 14:22:08.393: INFO: Get container nvidia-driver-installer-stbkq/pause usage on node e2e-30dd4105c2-8baae-minion-group-jnp6. CPUUsageInCores: 2.319e-06, MemoryUsageInBytes: 2166784, MemoryWorkingSetInBytes: 2166784
I0214 14:22:09.624] Feb 14 14:22:08.408: INFO: Pod "nvidia-gpu-b0d6f007-8b3d-46f8-a82a-abfc83e10975": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.106146821s
I0214 14:22:09.625] Feb 14 14:22:08.408: INFO: Pod "nvidia-gpu-b0d6f007-8b3d-46f8-a82a-abfc83e10975" satisfied condition "Succeeded or Failed"
I0214 14:22:09.625] Feb 14 14:22:08.452: INFO: Got container logs for vector-addition-cuda8:
I0214 14:22:09.625] [Vector addition of 50000 elements]
I0214 14:22:09.625]