PR: draveness: feat: use named array instead of array in normalizing score
Result: FAILURE
Tests: 0 failed / 730 succeeded
Started: 2019-08-14 06:30
Elapsed: 51m7s
Revision: aa5f9fda52d0171e45682254e0d37b16f58ae6fc
Refs: 80901

No Test Failures!


730 Passed Tests

4069 Skipped Tests

Error lines from build-log.txt

... skipping 142 lines ...
INFO: 5089 processes: 4986 remote cache hit, 29 processwrapper-sandbox, 74 remote.
INFO: Build completed successfully, 5182 total actions
INFO: Build completed successfully, 5182 total actions
make: Leaving directory '/home/prow/go/src/k8s.io/kubernetes'
2019/08/14 06:36:17 process.go:155: Step 'make -C /home/prow/go/src/k8s.io/kubernetes bazel-release' finished in 5m54.635870557s
2019/08/14 06:36:17 util.go:255: Flushing memory.
2019/08/14 06:36:24 util.go:265: flushMem error (page cache): exit status 1
2019/08/14 06:36:24 process.go:153: Running: /home/prow/go/src/k8s.io/release/push-build.sh --nomock --verbose --noupdatelatest --bucket=kubernetes-release-pull --ci --gcs-suffix=/pull-kubernetes-e2e-gce --allow-dup
push-build.sh: BEGIN main on 96ce2452-be5c-11e9-bd2d-f6f3c4187ecc Wed Aug 14 06:36:24 UTC 2019

$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
INFO: Invocation ID: dac4131a-fa8e-4c34-835c-c8265ed470fa
Loading: 
... skipping 850 lines ...
Trying to find master named 'e2e-711a4a969c-abe28-master'
Looking for address 'e2e-711a4a969c-abe28-master-ip'
Using master: e2e-711a4a969c-abe28-master (external IP: 35.247.117.216; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

............Kubernetes cluster created.
Cluster "k8s-jkns-gci-gce-slow-1-2_e2e-711a4a969c-abe28" set.
User "k8s-jkns-gci-gce-slow-1-2_e2e-711a4a969c-abe28" set.
Context "k8s-jkns-gci-gce-slow-1-2_e2e-711a4a969c-abe28" created.
Switched to context "k8s-jkns-gci-gce-slow-1-2_e2e-711a4a969c-abe28".
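
The excerpt above shows kube-up creating the cluster, writing the kubeconfig entries, and then polling until the Kubernetes API answers. A minimal client-go sketch of that kind of readiness poll, assuming the /workspace/.kube/config path that the e2e framework logs further down; the 5s interval is illustrative, and the 300s budget matches the message above:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig that kube-up populated (the path is an assumption here).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll the API server until it responds or the 300s budget is spent.
	deadline := time.Now().Add(300 * time.Second)
	for time.Now().Before(deadline) {
		if v, err := client.Discovery().ServerVersion(); err == nil {
			fmt.Printf("Kubernetes API reachable, server version %s\n", v.GitVersion)
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the API server")
}
```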
... skipping 3749 lines ...
Aug 14 06:48:03.799: INFO: Pod exec-volume-test-gcepd-preprovisionedpv-6p97 no longer exists
STEP: Deleting pod exec-volume-test-gcepd-preprovisionedpv-6p97
Aug 14 06:48:03.799: INFO: Deleting pod "exec-volume-test-gcepd-preprovisionedpv-6p97" in namespace "volume-4097"
STEP: Deleting pv and pvc
Aug 14 06:48:03.895: INFO: Deleting PersistentVolumeClaim "pvc-5df89"
Aug 14 06:48:03.976: INFO: Deleting PersistentVolume "gcepd-whtq7"
Aug 14 06:48:05.103: INFO: error deleting PD "e2e-711a4a969c-abe28-0aaa5b00-3bcb-4f7c-887f-6d6a86900db9": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-slow-1-2/zones/us-west1-b/disks/e2e-711a4a969c-abe28-0aaa5b00-3bcb-4f7c-887f-6d6a86900db9' is already being used by 'projects/k8s-jkns-gci-gce-slow-1-2/zones/us-west1-b/instances/e2e-711a4a969c-abe28-minion-group-rp1h', resourceInUseByAnotherResource
Aug 14 06:48:05.103: INFO: Couldn't delete PD "e2e-711a4a969c-abe28-0aaa5b00-3bcb-4f7c-887f-6d6a86900db9", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-slow-1-2/zones/us-west1-b/disks/e2e-711a4a969c-abe28-0aaa5b00-3bcb-4f7c-887f-6d6a86900db9' is already being used by 'projects/k8s-jkns-gci-gce-slow-1-2/zones/us-west1-b/instances/e2e-711a4a969c-abe28-minion-group-rp1h', resourceInUseByAnotherResource
Aug 14 06:48:11.682: INFO: error deleting PD "e2e-711a4a969c-abe28-0aaa5b00-3bcb-4f7c-887f-6d6a86900db9": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-slow-1-2/zones/us-west1-b/disks/e2e-711a4a969c-abe28-0aaa5b00-3bcb-4f7c-887f-6d6a86900db9' is already being used by 'projects/k8s-jkns-gci-gce-slow-1-2/zones/us-west1-b/instances/e2e-711a4a969c-abe28-minion-group-rp1h', resourceInUseByAnotherResource
Aug 14 06:48:11.682: INFO: Couldn't delete PD "e2e-711a4a969c-abe28-0aaa5b00-3bcb-4f7c-887f-6d6a86900db9", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-slow-1-2/zones/us-west1-b/disks/e2e-711a4a969c-abe28-0aaa5b00-3bcb-4f7c-887f-6d6a86900db9' is already being used by 'projects/k8s-jkns-gci-gce-slow-1-2/zones/us-west1-b/instances/e2e-711a4a969c-abe28-minion-group-rp1h', resourceInUseByAnotherResource
Aug 14 06:48:17.623: INFO: error deleting PD "e2e-711a4a969c-abe28-0aaa5b00-3bcb-4f7c-887f-6d6a86900db9": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-slow-1-2/zones/us-west1-b/disks/e2e-711a4a969c-abe28-0aaa5b00-3bcb-4f7c-887f-6d6a86900db9' is already being used by 'projects/k8s-jkns-gci-gce-slow-1-2/zones/us-west1-b/instances/e2e-711a4a969c-abe28-minion-group-rp1h', resourceInUseByAnotherResource
Aug 14 06:48:17.623: INFO: Couldn't delete PD "e2e-711a4a969c-abe28-0aaa5b00-3bcb-4f7c-887f-6d6a86900db9", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-slow-1-2/zones/us-west1-b/disks/e2e-711a4a969c-abe28-0aaa5b00-3bcb-4f7c-887f-6d6a86900db9' is already being used by 'projects/k8s-jkns-gci-gce-slow-1-2/zones/us-west1-b/instances/e2e-711a4a969c-abe28-minion-group-rp1h', resourceInUseByAnotherResource
Aug 14 06:48:25.142: INFO: Successfully deleted PD "e2e-711a4a969c-abe28-0aaa5b00-3bcb-4f7c-887f-6d6a86900db9".
Aug 14 06:48:25.142: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/framework/framework.go:153
Aug 14 06:48:25.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4097" for this suite.
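
The teardown above cannot delete the GCE PD while a node still has it attached, so it retries with a 5s sleep until the detach completes and the delete succeeds. A rough sketch of that retry pattern; deletePD is a hypothetical callback standing in for the real GCE disks.delete call, and the attempt cap is illustrative:

```go
package cleanup

import (
	"fmt"
	"time"
)

// deleteGCEPDWithRetry keeps retrying a disk deletion while the API reports
// the disk as in use. deletePD is a hypothetical helper wrapping the GCE
// disks.delete call; it returns nil once the disk is gone.
func deleteGCEPDWithRetry(deletePD func(name string) error, name string) error {
	var lastErr error
	for attempt := 0; attempt < 20; attempt++ {
		lastErr = deletePD(name)
		if lastErr == nil {
			// Corresponds to the "Successfully deleted PD" line above.
			return nil
		}
		// A 400 resourceInUseByAnotherResource error means a node still has
		// the disk attached; sleep 5s and try again, as the log does.
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("could not delete PD %q: %v", name, lastErr)
}
```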
... skipping 654 lines ...
Aug 14 06:48:20.463: INFO: rc: 1
STEP: cleaning the environment after flex
Aug 14 06:48:20.463: INFO: Deleting pod "flex-client" in namespace "flexvolume-7608"
Aug 14 06:48:20.516: INFO: Wait up to 5m0s for pod "flex-client" to be fully deleted
STEP: waiting for flex client pod to terminate
Aug 14 06:48:26.603: INFO: Waiting up to 5m0s for pod "flex-client" in namespace "flexvolume-7608" to be "terminated due to deadline exceeded"
Aug 14 06:48:26.644: INFO: Pod "flex-client" in namespace "flexvolume-7608" not found. Error: pods "flex-client" not found
STEP: uninstalling flexvolume dummy-attachable-flexvolume-7608 from node e2e-711a4a969c-abe28-minion-group-hccg
Aug 14 06:48:36.645: INFO: Getting external IP address for e2e-711a4a969c-abe28-minion-group-hccg
Aug 14 06:48:37.100: INFO: ssh prow@35.233.255.91:22: command:   sudo rm -r /home/kubernetes/flexvolume/k8s~dummy-attachable-flexvolume-7608
Aug 14 06:48:37.100: INFO: ssh prow@35.233.255.91:22: stdout:    ""
Aug 14 06:48:37.100: INFO: ssh prow@35.233.255.91:22: stderr:    ""
Aug 14 06:48:37.100: INFO: ssh prow@35.233.255.91:22: exit code: 0
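
The flexvolume teardown above removes the dummy driver from the node by running a command over SSH and logging stdout, stderr and the exit code. A minimal sketch of issuing such a remote command with golang.org/x/crypto/ssh; the key path is a placeholder, the user and address are taken from the log, and host-key verification is skipped only because the target is a throwaway test node:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path; the real job authenticates as "prow" against the
	// node's external IP.
	key, err := os.ReadFile(os.Getenv("HOME") + "/.ssh/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "prow",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test node
	}
	client, err := ssh.Dial("tcp", "35.233.255.91:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Remove the dummy flexvolume driver directory, as the log above does.
	out, err := session.CombinedOutput("sudo rm -r /home/kubernetes/flexvolume/k8s~dummy-attachable-flexvolume-7608")
	fmt.Printf("output: %q, err: %v\n", out, err)
}
```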
... skipping 459 lines ...
Aug 14 06:48:05.657: INFO: PersistentVolumeClaim pvc-82d8t found but phase is Pending instead of Bound.
Aug 14 06:48:07.707: INFO: PersistentVolumeClaim pvc-82d8t found and phase=Bound (2.106131203s)
STEP: checking for CSIInlineVolumes feature
STEP: Checking CSI driver logs
Aug 14 06:48:20.090: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5483","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5483","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5483","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-5483","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5483","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-150bcbce-942c-471b-a90d-0fc1c9a0dcbc","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-150bcbce-942c-471b-a90d-0fc1c9a0dcbc"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-5483","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-150bcbce-942c-471b-a90d-0fc1c9a0dcbc","storage.kubernetes.io/csiProvisionerIdentity":"1565765284406-8081-csi-mock-csi-mock-volumes-5483"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-150bcbce-942c-471b-a90d-0fc1c9a0dcbc/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-150bcbce-942c-471b-a90d-0fc1c9a0dcbc","storage.kubernetes.io/csiProvisionerIdentity":"1565765284406-8081-csi-mock-csi-mock-volumes-5483"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-150bcbce-942c-471b-a90d-0fc1c9a0dcbc/globalmount","target_path":"/var/lib/kubelet/pods/0e803222-79c4-4c49-a011-bb05dd483147/volumes/kubernetes.io~csi/pvc-150bcbce-942c-471b-a90d-0fc1c9a0dcbc/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/pod.name":"pvc-volume-tester-fdg8c","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-5483","csi.storage.k8s.io/pod.uid":"0e803222-79c4-4c49-a011-bb05dd483147","csi.storage.k8s.io/serviceAccount.name":"default","name":"pvc-150bcbce-942c-471b-a90d-0fc1c9a0dcbc","storage.kubernetes.io/csiProvisionerIdentity":"1565765284406-8081-csi-mock-csi-mock-volumes-5483"}},"Response":{},"Error":""}

Aug 14 06:48:20.090: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-fdg8c
Aug 14 06:48:20.090: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-5483
Aug 14 06:48:20.090: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 0e803222-79c4-4c49-a011-bb05dd483147
Aug 14 06:48:20.090: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
STEP: Deleting pod pvc-volume-tester-fdg8c
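
The "Checking CSI driver logs" step above works by scanning the mock driver's output for gRPCCall lines and reading the csi.storage.k8s.io/pod.* attributes out of the NodePublishVolume request. A small sketch of that kind of log parsing, with the struct trimmed to the fields the snippet needs and the driver log assumed on stdin:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// grpcCall mirrors the shape of the "gRPCCall:" lines emitted by the mock
// driver, keeping only the fields this sketch inspects.
type grpcCall struct {
	Method  string          `json:"Method"`
	Request json.RawMessage `json:"Request"`
}

func main() {
	scanner := bufio.NewScanner(os.Stdin) // feed the driver log on stdin
	for scanner.Scan() {
		line := scanner.Text()
		if !strings.HasPrefix(line, "gRPCCall: ") {
			continue
		}
		var call grpcCall
		if err := json.Unmarshal([]byte(strings.TrimPrefix(line, "gRPCCall: ")), &call); err != nil {
			continue
		}
		if call.Method != "/csi.v1.Node/NodePublishVolume" {
			continue
		}
		// volume_context carries the csi.storage.k8s.io/pod.* attributes that
		// the test asserts on ("Found volume attribute ..." above).
		var req struct {
			VolumeContext map[string]string `json:"volume_context"`
		}
		if err := json.Unmarshal(call.Request, &req); err != nil {
			continue
		}
		for k, v := range req.VolumeContext {
			fmt.Printf("Found volume attribute %s: %s\n", k, v)
		}
	}
}
```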
... skipping 794 lines ...
Aug 14 06:48:09.059: INFO: ssh prow@35.227.187.47:22: command:   sudo mkdir "/var/lib/kubelet/mount-propagation-293"/host; sudo mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-293"/host; echo host > "/var/lib/kubelet/mount-propagation-293"/host/file
Aug 14 06:48:09.059: INFO: ssh prow@35.227.187.47:22: stdout:    ""
Aug 14 06:48:09.059: INFO: ssh prow@35.227.187.47:22: stderr:    ""
Aug 14 06:48:09.059: INFO: ssh prow@35.227.187.47:22: exit code: 0
Aug 14 06:48:09.106: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-293 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 06:48:09.106: INFO: >>> kubeConfig: /workspace/.kube/config
Aug 14 06:48:11.080: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Aug 14 06:48:11.172: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-293 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 14 06:48:11.172: INFO: >>> kubeConfig: /workspace/.kube/config
Aug 14 06:48:12.211: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Aug 14 06:48:12.263: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-293 PodName:mast