Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-10-04 16:25
Elapsed: 10m27s
Revision: e44a6c8af0ed5e8784ce5da484f6777453843c88

No Test Failures! (no junit results were recorded for this run; the failure surfaced in make, so see the error lines from the build log below)


Error lines from build-log.txt

... skipping 336 lines ...

kubectl cluster-info --context kind-kind

Not sure what to do next? 😅  Check out https://kind.sigs.k8s.io/docs/user/quick-start/
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/secrets-store-csi-driver'
docker pull gcr.io/k8s-staging-csi-secrets-store/driver:v1.0.0-e2e-e44a6c8a || make e2e-container
Error response from daemon: manifest for gcr.io/k8s-staging-csi-secrets-store/driver:v1.0.0-e2e-e44a6c8a not found: manifest unknown: Failed to fetch "v1.0.0-e2e-e44a6c8a" from request "/v2/k8s-staging-csi-secrets-store/driver/manifests/v1.0.0-e2e-e44a6c8a".
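
The two lines above are the job's image-fallback pattern: it first tries to pull a staging image tagged with the commit SHA, and only when the registry answers "manifest unknown" does it fall back to building the e2e container locally. A minimal sketch of that pattern, using the image name, tag, and fallback target visible in this log:

# Prefer the pre-built staging image; build locally only if the tag is absent.
IMAGE=gcr.io/k8s-staging-csi-secrets-store/driver
TAG=v1.0.0-e2e-e44a6c8a
docker pull "${IMAGE}:${TAG}" || make e2e-container
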
make[1]: Entering directory '/home/prow/go/src/sigs.k8s.io/secrets-store-csi-driver'
make container
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/secrets-store-csi-driver'
rm -rf _output/crds/*
mkdir -p _output/crds
cp -R manifest_staging/charts/secrets-store-csi-driver/crds/ _output/crds/
... skipping 354 lines ...
client.go:128: [debug] creating 1 resource(s)
client.go:528: [debug] Watching for changes to Job secrets-store-csi-driver-upgrade-crds with timeout of 5m0s
I1004 16:32:01.806099   13137 reflector.go:203] Reflector from k8s.io/client-go@v0.22.1/tools/cache/reflector.go:167 configured with expectedType of *unstructured.Unstructured with empty GroupVersionKind.
I1004 16:32:01.806125   13137 reflector.go:219] Starting reflector *unstructured.Unstructured (0s) from k8s.io/client-go@v0.22.1/tools/cache/reflector.go:167
I1004 16:32:01.806133   13137 reflector.go:255] Listing and watching *unstructured.Unstructured from k8s.io/client-go@v0.22.1/tools/cache/reflector.go:167
client.go:556: [debug] Add/Modify event for secrets-store-csi-driver-upgrade-crds: ADDED
client.go:595: [debug] secrets-store-csi-driver-upgrade-crds: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
client.go:556: [debug] Add/Modify event for secrets-store-csi-driver-upgrade-crds: MODIFIED
client.go:595: [debug] secrets-store-csi-driver-upgrade-crds: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
client.go:556: [debug] Add/Modify event for secrets-store-csi-driver-upgrade-crds: MODIFIED
I1004 16:32:07.252667   13137 reflector.go:225] Stopping reflector *unstructured.Unstructured (0s) from k8s.io/client-go@v0.22.1/tools/cache/reflector.go:167
client.go:299: [debug] Starting delete for "csi-secrets-store-secrets-store-csi-driver-upgrade-crds" ServiceAccount
client.go:299: [debug] Starting delete for "csi-secrets-store-secrets-store-csi-driver-upgrade-crds" ClusterRole
client.go:299: [debug] Starting delete for "csi-secrets-store-secrets-store-csi-driver-upgrade-crds" ClusterRoleBinding
client.go:299: [debug] Starting delete for "secrets-store-csi-driver-upgrade-crds" Job
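
The client.go lines above are Helm waiting on the chart's CRD-upgrade hook: it creates the secrets-store-csi-driver-upgrade-crds Job, watches it with a 5m0s timeout until the Job reports success, then deletes the hook's ServiceAccount, ClusterRole, ClusterRoleBinding, and the Job itself. If a run stalls at this step, the hook can be inspected while Helm is still watching it; a debugging sketch (the namespace flag is an assumption, since the log does not show which namespace the release uses):

# Watch the CRD-upgrade hook Job and read its pod logs while Helm waits on it.
kubectl get job secrets-store-csi-driver-upgrade-crds -n <release-namespace> -w
kubectl logs job/secrets-store-csi-driver-upgrade-crds -n <release-namespace>
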
... skipping 926 lines ...
ok 9 CSI inline volume test with pod portability - unmount succeeds
ok 10 Sync with K8s secrets - create deployment
ok 11 Sync with K8s secrets - read secret from pod, read K8s secret, read env var, check secret ownerReferences with multiple owners
ok 12 Sync with K8s secrets - delete deployment, check secret is deleted
not ok 13 Test Namespaced scope SecretProviderClass - create deployment
# (in test file test/bats/vault.bats, line 239)
#   `kubectl wait --for=condition=Ready --timeout=90s pod -l app=busybox -n test-ns' failed
# namespace/test-ns created
# secretproviderclass.secrets-store.csi.x-k8s.io/vault-foo-sync created
# secretproviderclass.secrets-store.csi.x-k8s.io/vault-foo-sync created
# customresourcedefinition.apiextensions.k8s.io/secretproviderclasses.secrets-store.csi.x-k8s.io condition met
#       {"apiVersion":"secrets-store.csi.x-k8s.io/v1","kind":"SecretProviderClass","metadata":{"annotations":{},"name":"vault-foo-sync","namespace":"default"},"spec":{"parameters":{"objects":"- secretPath: \"secret/data/foo\"\n  objectName: \"bar\"\n  secretKey: \"bar\"\n- secretPath: \"secret/data/foo1\"\n  objectName: \"bar1\"\n  secretKey: \"bar1\"\n","roleName":"csi","vaultAddress":"http://vault.vault:8200"},"provider":"invalidprovider","secretObjects":[{"data":[{"key":"pwd","objectName":"bar"},{"key":"username","objectName":"bar1"}],"secretName":"foosecret","type":"Opaque"}]}}
#   name: vault-foo-sync
#     vaultAddress: http://vault.vault:8200
#       {"apiVersion":"secrets-store.csi.x-k8s.io/v1","kind":"SecretProviderClass","metadata":{"annotations":{},"name":"vault-foo-sync","namespace":"test-ns"},"spec":{"parameters":{"objects":"- secretPath: \"secret/data/foo\"\n  objectName: \"bar\"\n  secretKey: \"bar\"\n- secretPath: \"secret/data/foo1\"\n  objectName: \"bar1\"\n  secretKey: \"bar1\"\n","roleName":"csi","vaultAddress":"http://vault.vault:8200"},"provider":"vault","secretObjects":[{"data":[{"key":"pwd","objectName":"bar"},{"key":"username","objectName":"bar1"}],"secretName":"foosecret","type":"Opaque"}]}}
#   name: vault-foo-sync
#     vaultAddress: http://vault.vault:8200
#   provider: vault
# deployment.apps/busybox-deployment created
# error: no matching resources found
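
Test 13 fails because kubectl wait found no pods matching app=busybox in test-ns within 90s; the trailing "error: no matching resources found" means no matching pod ever existed, not that one existed and stayed unready. A triage sketch for that state (resource names and the label selector are taken from the log above):

# Did the deployment produce a replica set and pods at all?
kubectl get deploy,rs,pods -n test-ns -l app=busybox
kubectl describe deploy busybox-deployment -n test-ns
# Scheduling, image-pull, and CSI mount problems all surface as events.
kubectl get events -n test-ns --sort-by=.lastTimestamp
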
not ok 14 Test Namespaced scope SecretProviderClass - Sync with K8s secrets - read secret from pod, read K8s secret, read env var, check secret ownerReferences
# (in test file test/bats/vault.bats, line 244)
#   `result=$(kubectl exec -n test-ns $POD -- cat /mnt/secrets-store/bar)' failed
# error: unable to upgrade connection: container not found ("busybox")
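
Test 14's failure is almost certainly downstream of test 13: kubectl exec targets a pod whose busybox container never started, so the connection upgrade fails with "container not found". Checking the container state directly would confirm that; a sketch, where $POD is the variable already set by the bats test above:

# Why did the busybox container never come up?
kubectl get pod "$POD" -n test-ns -o jsonpath='{.status.containerStatuses[*].state}'
kubectl describe pod "$POD" -n test-ns
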
ok 15 Test Namespaced scope SecretProviderClass - Sync with K8s secrets - delete deployment, check secret deleted
ok 16 Test Namespaced scope SecretProviderClass - Should fail when no secret provider class in same namespace
ok 17 deploy multiple vault secretproviderclass crd
ok 18 deploy pod with multiple secret provider class
ok 19 CSI inline volume test with multiple secret provider class
make: *** [Makefile:474: e2e-vault] Error 1
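
To reproduce this outside CI, the same Makefile targets seen in this log can be run locally against a kind cluster. A sketch, assuming a checkout under GOPATH and the suite's prerequisites (kind, bats, and a Vault deployment) are in place:

cd "$(go env GOPATH)/src/sigs.k8s.io/secrets-store-csi-driver"
make e2e-container   # local image build, as the fallback above did
make e2e-vault       # the target that failed at Makefile:474 in this run
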
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
cd80a81dee56
... skipping 4 lines ...