Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-10-25 19:42
Elapsed: 46m18s
Revision:
Builder: 33735820-16fa-11eb-b256-6ee25ea2e440
infra-commit: d88010efd
repo: sigs.k8s.io/gcp-compute-persistent-disk-csi-driver
repo-commit: ff0b719f51348c8ea0d6f0d2807ee416bf5396aa
repos: {u'sigs.k8s.io/gcp-compute-persistent-disk-csi-driver': u'master'}

No Test Failures!


Error lines from build-log.txt

... skipping 520 lines ...
I1025 19:46:12.525] gcloud docker -- push gcr.io/k8s-jnks-gci-autoscaling/gcp-persistent-disk-csi-driver:11980eb2-b95f-4f15-a0c6-0d2bc68e8dc2
W1025 19:46:13.014] WARNING: `gcloud docker` will not be supported for Docker client versions above 18.03.
W1025 19:46:13.014] 
W1025 19:46:13.014] As an alternative, use `gcloud auth configure-docker` to configure `docker` to
W1025 19:46:13.015] use `gcloud` as a credential helper, then use `docker` as you would for non-GCR
W1025 19:46:13.015] registries, e.g. `docker pull gcr.io/project-id/my-image`. Add
W1025 19:46:13.015] `--verbosity=error` to silence this warning: `gcloud docker
W1025 19:46:13.015] --verbosity=error -- pull gcr.io/project-id/my-image`.
W1025 19:46:13.015] 
W1025 19:46:13.015] See: https://cloud.google.com/container-registry/docs/support/deprecation-notices#gcloud-docker
W1025 19:46:13.016] 
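The deprecation warning above describes its own replacement: configure `docker` to use `gcloud` as a credential helper once, then invoke `docker` directly instead of the `gcloud docker --` wrapper. A minimal sketch of that migration, using the registry path from this job (the tag shown is the one this run pushed; substitute your own image):

```shell
# One-time setup: register gcloud as a Docker credential helper for gcr.io
# (the alternative the warning recommends for Docker client > 18.03).
gcloud auth configure-docker

# Then push with plain docker instead of `gcloud docker -- push ...`:
docker push gcr.io/k8s-jnks-gci-autoscaling/gcp-persistent-disk-csi-driver:11980eb2-b95f-4f15-a0c6-0d2bc68e8dc2
```

This is a CLI sketch of the warning's suggested workflow, not part of the job's actual script; the job itself still used the deprecated wrapper, which is why the warning appears.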
I1025 19:46:13.411] The push refers to repository [gcr.io/k8s-jnks-gci-autoscaling/gcp-persistent-disk-csi-driver]
I1025 19:46:13.425] f4ac37ef7c80: Preparing
I1025 19:46:13.426] 826fceb729d2: Preparing
... skipping 218 lines ...
W1025 20:03:06.247] Trying to find master named 'e2e-test-prow-master'
W1025 20:03:06.247] Looking for address 'e2e-test-prow-master-ip'
W1025 20:03:07.226] Using master: e2e-test-prow-master (external IP: 34.68.50.173; internal IP: (not set))
I1025 20:03:07.326] Waiting up to 300 seconds for cluster initialization.
I1025 20:03:07.327] 
I1025 20:03:07.327]   This will continually check to see if the API for kubernetes is reachable.
I1025 20:03:07.327]   This may time out if there was some uncaught error during start up.
I1025 20:03:07.327] 
I1025 20:04:35.933] .....................Kubernetes cluster created.
I1025 20:04:36.094] Cluster "k8s-jnks-gci-autoscaling_e2e-test-prow" set.
I1025 20:04:36.250] User "k8s-jnks-gci-autoscaling_e2e-test-prow" set.
I1025 20:04:36.407] Context "k8s-jnks-gci-autoscaling_e2e-test-prow" created.
I1025 20:04:36.564] Switched to context "k8s-jnks-gci-autoscaling_e2e-test-prow".
... skipping 22 lines ...
I1025 20:05:30.456] e2e-test-prow-master              Ready,SchedulingDisabled   <none>   25s   v888.888.888-fake-testing-master.version
I1025 20:05:30.456] e2e-test-prow-minion-group-0p2r   Ready                      <none>   13s   v888.888.888-fake-testing-master.version
I1025 20:05:30.457] e2e-test-prow-minion-group-n7cn   Ready                      <none>   14s   v888.888.888-fake-testing-master.version
I1025 20:05:30.457] e2e-test-prow-minion-group-vd5r   Ready                      <none>   14s   v888.888.888-fake-testing-master.version
W1025 20:05:30.658] Warning: v1 ComponentStatus is deprecated in v1.19+
I1025 20:05:30.758] Validate output:
I1025 20:05:30.856] NAME                 STATUS    MESSAGE             ERROR
I1025 20:05:30.857] scheduler            Healthy   ok                  
I1025 20:05:30.857] controller-manager   Healthy   ok                  
I1025 20:05:30.857] etcd-1               Healthy   {"health":"true"}   
I1025 20:05:30.857] etcd-0               Healthy   {"health":"true"}   
I1025 20:05:30.863] Cluster validation succeeded
W1025 20:05:30.964] Warning: v1 ComponentStatus is deprecated in v1.19+
... skipping 113 lines ...
W1025 20:05:46.782]   "details": {
W1025 20:05:46.782]     "name": "gce-pd-csi-driver",
W1025 20:05:46.783]     "kind": "namespaces"
W1025 20:05:46.783]   },
W1025 20:05:46.783]   "code": 404
W1025 20:05:46.783] }]
W1025 20:05:46.783] F1025 20:05:46.781234   84293 helpers.go:115] Error from server (NotFound): namespaces "gce-pd-csi-driver" not found
W1025 20:05:46.783] goroutine 1 [running]:
W1025 20:05:46.783] k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc000b5c000, 0x75, 0xc6)
W1025 20:05:46.784] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
W1025 20:05:46.784] k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x2d1ee00, 0xc000000003, 0x0, 0x0, 0xc0008d2150, 0x2aff6a6, 0xa, 0x73, 0x40b200)
W1025 20:05:46.784] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191
W1025 20:05:46.784] k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x2d1ee00, 0x3, 0x0, 0x0, 0x2, 0xc0005c9ad0, 0x1, 0x1)
W1025 20:05:46.784] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:718 +0x165
W1025 20:05:46.785] k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
W1025 20:05:46.785] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1442
W1025 20:05:46.785] k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0003be820, 0x46, 0x1)
W1025 20:05:46.785] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x1f0
W1025 20:05:46.785] k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x1e73800, 0xc00013aec0, 0x1d1cbc8)
W1025 20:05:46.786] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x945
W1025 20:05:46.786] k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
W1025 20:05:46.786] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
W1025 20:05:46.786] k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc0004f18c0, 0xc0004503c0, 0x2, 0x3)
... skipping 108 lines ...
W1025 20:05:47.189]   "details": {
W1025 20:05:47.189]     "name": "cloud-sa",
W1025 20:05:47.189]     "kind": "secrets"
W1025 20:05:47.189]   },
W1025 20:05:47.189]   "code": 404
W1025 20:05:47.189] }]
W1025 20:05:47.190] F1025 20:05:47.184968   84400 helpers.go:115] Error from server (NotFound): secrets "cloud-sa" not found
W1025 20:05:47.190] goroutine 1 [running]:
W1025 20:05:47.190] k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000182001, 0xc000b18240, 0x69, 0xba)
W1025 20:05:47.190] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
W1025 20:05:47.190] k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x2d1ee00, 0xc000000003, 0x0, 0x0, 0xc00018cee0, 0x2aff6a6, 0xa, 0x73, 0x40b200)
W1025 20:05:47.191] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191
W1025 20:05:47.191] k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x2d1ee00, 0x3, 0x0, 0x0, 0x2, 0xc000a9bad0, 0x1, 0x1)
W1025 20:05:47.191] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:718 +0x165
W1025 20:05:47.191] k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
W1025 20:05:47.192] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1442
W1025 20:05:47.192] k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0003a2200, 0x3a, 0x1)
W1025 20:05:47.192] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x1f0
W1025 20:05:47.192] k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x1e73800, 0xc000a93300, 0x1d1cbc8)
W1025 20:05:47.193] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x945
W1025 20:05:47.193] k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
W1025 20:05:47.193] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
W1025 20:05:47.193] k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc000561b80, 0xc000098f50, 0x2, 0x5)
... skipping 99 lines ...
W1025 20:05:47.578]     "name": "cluster-admin-binding",
W1025 20:05:47.578]     "group": "rbac.authorization.k8s.io",
W1025 20:05:47.578]     "kind": "clusterrolebindings"
W1025 20:05:47.578]   },
W1025 20:05:47.578]   "code": 404
W1025 20:05:47.578] }]
W1025 20:05:47.579] F1025 20:05:47.543710   84507 helpers.go:115] Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "cluster-admin-binding" not found
W1025 20:05:47.579] goroutine 1 [running]:
W1025 20:05:47.579] k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc00033e0f0, 0x9c, 0xed)
W1025 20:05:47.579] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
W1025 20:05:47.579] k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x2d1ee00, 0xc000000003, 0x0, 0x0, 0xc0003bc150, 0x2aff6a6, 0xa, 0x73, 0x40b200)
W1025 20:05:47.579] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191
W1025 20:05:47.579] k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x2d1ee00, 0x3, 0x0, 0x0, 0x2, 0xc000763ad0, 0x1, 0x1)
W1025 20:05:47.580] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:718 +0x165
W1025 20:05:47.580] k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
W1025 20:05:47.580] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1442
W1025 20:05:47.580] k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0006f4070, 0x6d, 0x1)
W1025 20:05:47.580] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x1f0
W1025 20:05:47.580] k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x1e73800, 0xc0006b8b00, 0x1d1cbc8)
W1025 20:05:47.581] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x945
W1025 20:05:47.581] k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
W1025 20:05:47.581] 	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
W1025 20:05:47.581] k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc0003efb80, 0xc0003209c0, 0x2, 0x3)
... skipping 1687 lines ...
I1025 20:05:50.903] deployment.apps/csi-gce-pd-controller created
I1025 20:05:50.903] daemonset.apps/csi-gce-pd-node created
I1025 20:05:50.903] daemonset.apps/csi-gce-pd-node-win created
I1025 20:05:50.903] csidriver.storage.k8s.io/pd.csi.storage.gke.io created
I1025 20:05:50.904] Waiting for driver to start
I1025 20:05:50.904] [/go/src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver/deploy/kubernetes/wait-for-driver.sh]
W1025 20:06:20.932] error: timed out waiting for the condition on deployments/csi-gce-pd-controller
I1025 20:21:49.802] Timeout waiting for node daemonset csi-gce-pd-node
I1025 20:21:49.829] Deleting driver
I1025 20:21:49.830] [/go/src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver/deploy/kubernetes/delete-driver.sh]
I1025 20:21:49.834] PKGDIR is /go/src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver
I1025 20:21:49.840] installing kustomize
I1025 20:21:51.408] {Version:kustomize/v3.8.0 GitCommit:6a50372dd5686df22750b0c729adaf369fbf193c BuildDate:2020-07-05T14:08:42Z GoOs:linux GoArch:amd64}
... skipping 720 lines ...
W1025 20:28:49.653]   Associated tags:
W1025 20:28:49.653]  - 11980eb2-b95f-4f15-a0c6-0d2bc68e8dc2
W1025 20:28:49.653] Tags:
W1025 20:28:49.654] - gcr.io/k8s-jnks-gci-autoscaling/gcp-persistent-disk-csi-driver:11980eb2-b95f-4f15-a0c6-0d2bc68e8dc2
W1025 20:28:49.824] Deleted [gcr.io/k8s-jnks-gci-autoscaling/gcp-persistent-disk-csi-driver:11980eb2-b95f-4f15-a0c6-0d2bc68e8dc2].
W1025 20:28:50.578] Deleted [gcr.io/k8s-jnks-gci-autoscaling/gcp-persistent-disk-csi-driver@sha256:4645f996a6c70518abdc6c92b9e3f4c645119ad21fbac41d694044c5a44cfd12].
W1025 20:28:51.432] F1025 20:28:51.432551    2979 main.go:161] Failed to run integration test: failed to install CSI Driver: driver failed to come up: exit status 255
W1025 20:28:51.439] Traceback (most recent call last):
W1025 20:28:51.439]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 50, in <module>
W1025 20:28:51.439]     main(ARGS.env, ARGS.cmd + ARGS.args)
W1025 20:28:51.440]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 41, in main
W1025 20:28:51.440]     check(*cmd)
W1025 20:28:51.440]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 30, in check
W1025 20:28:51.440]     subprocess.check_call(cmd)
W1025 20:28:51.440]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W1025 20:28:51.440]     raise CalledProcessError(retcode, cmd)
W1025 20:28:51.441] subprocess.CalledProcessError: Command '('test/run-k8s-integration-migration.sh',)' returned non-zero exit status 255
E1025 20:28:51.448] Command failed
I1025 20:28:51.448] process 425 exited with code 1 after 46.1m
E1025 20:28:51.449] FAIL: ci-gcp-compute-persistent-disk-csi-driver-latest-k8s-master-migration
I1025 20:28:51.449] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1025 20:28:52.064] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I1025 20:28:52.161] process 88141 exited with code 0 after 0.0m
I1025 20:28:52.161] Call:  gcloud config get-value account
I1025 20:28:52.715] process 88154 exited with code 0 after 0.0m
I1025 20:28:52.716] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
... skipping 20 lines ...