I am looking for a way to connect securely from my Databricks workspace on Azure to our Google Cloud Storage buckets in a GCP project.
The current option is the following configuration via a keyfile:
spark.sparkContext._jsc.hadoopConfiguration().set(  # type: ignore
    "google.cloud.auth.service.account.json.keyfile", file_var_for_path
)
Other options I saw were the following:
spark.hadoop.google.cloud.auth.service.account.enable.<bucket-name> true
spark.hadoop.fs.gs.auth.service.account.email.<bucket-name> <client-email>
spark.hadoop.fs.gs.project.id.<bucket-name> <project-id>
spark.hadoop.fs.gs.auth.service.account.private.key.<bucket-name> {{secrets/scope/gsa_private_key}}
spark.hadoop.fs.gs.auth.service.account.private.key.id.<bucket-name> {{secrets/scope/gsa_private_key_id}}
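One keyfile-free variant of the per-bucket properties above is to build them programmatically and read the sensitive values from a Databricks secret scope at runtime, so nothing is stored on disk. A minimal sketch (the helper name, the secret scope, and the secret key names are hypothetical, not part of any official API):

```python
def gcs_auth_conf(
    bucket: str,
    client_email: str,
    project_id: str,
    private_key: str,
    private_key_id: str,
) -> dict[str, str]:
    """Build the per-bucket GCS connector properties as a dict.

    The keys mirror the spark.hadoop.* entries above, minus the
    'spark.hadoop.' prefix, so they can be set directly on the
    Hadoop configuration.
    """
    return {
        f"google.cloud.auth.service.account.enable.{bucket}": "true",
        f"fs.gs.auth.service.account.email.{bucket}": client_email,
        f"fs.gs.project.id.{bucket}": project_id,
        f"fs.gs.auth.service.account.private.key.{bucket}": private_key,
        f"fs.gs.auth.service.account.private.key.id.{bucket}": private_key_id,
    }

# On a Databricks cluster you would pull the values from a secret scope
# (scope and key names below are placeholders) and apply them, e.g.:
#
# conf = gcs_auth_conf(
#     "my-bucket",
#     client_email=dbutils.secrets.get("scope", "gsa_client_email"),
#     project_id="my-gcp-project",
#     private_key=dbutils.secrets.get("scope", "gsa_private_key"),
#     private_key_id=dbutils.secrets.get("scope", "gsa_private_key_id"),
# )
# for k, v in conf.items():
#     spark.sparkContext._jsc.hadoopConfiguration().set(k, v)
```

This keeps the configuration in code and avoids a keyfile, but still leaves several values to maintain per bucket.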
Is there a way to do this smoothly without multiple environment variables to maintain, and without a keyfile we would have to store somewhere, even temporarily? I would also rather not set up the bucket connection via the UI, because I want to keep everything as code.
I hope there is a clean and smooth solution for this.
Thanks in advance for the help.
PS: Hopefully this is the right place to ask.