Describe the issue
The CLI treats file:// URIs in job libraries as local paths and attempts to validate that they exist in the bundle directory, causing deployment to fail even though these URIs reference files on the cluster's runtime filesystem.
Configuration
resources:
  jobs:
    my_job:
      tasks:
        - task_key: my_task
          libraries:
            - jar: file:///opt/spark/jars/my-library.jar

Steps to reproduce the behavior
- Create a bundle with a job that references a file:// URI in libraries
- Run databricks bundle deploy
- See error: file doesn't exist file:///opt/spark/jars/my-library.jar
Expected Behavior
The CLI should pass file:// URIs through to the Jobs API without validation or upload. These URIs reference files already present on the cluster's filesystem (via init scripts, container images, or pre-installed dependencies), and the Jobs API supports this pattern.
Actual Behavior
The CLI validates file:// paths as if they were local files in the bundle directory and fails the deployment with a "file doesn't exist" error.
OS and CLI version
macOS, CLI version [run databricks --version to fill this in]
Is this a regression?
Unknown - this may never have worked.
Debug Logs
Error: file doesn't exist file:///opt/spark/jars/my-library.jar
at resources.jobs.my_job.tasks[0].libraries[0].jar
Note: The issue appears to be in bundle/libraries/local_path.go, where file:// URIs are incorrectly treated as local paths.
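For illustration, a minimal sketch of the kind of scheme check that would let the validator pass such URIs through. isRemotePath is a hypothetical helper written for this report, not the CLI's actual code in bundle/libraries/local_path.go; it assumes that any path carrying a URI scheme (dbfs:/, s3://, abfss://, and notably file://) should be forwarded to the Jobs API untouched, while plain paths continue to be resolved against the bundle directory.

package main

import (
	"fmt"
	"net/url"
)

// isRemotePath reports whether a library path should bypass local validation
// and be passed through to the Jobs API as-is. Hypothetical sketch, not the
// CLI's real implementation: anything with a URI scheme (dbfs:/, s3://,
// abfss://, and notably file://) is treated as remote.
func isRemotePath(path string) bool {
	u, err := url.Parse(path)
	if err != nil {
		// Unparseable values fall back to local-path handling.
		return false
	}
	// Require a scheme longer than one character so Windows drive letters
	// (e.g. "C:\jars\lib.jar") are not mistaken for URI schemes.
	return len(u.Scheme) > 1
}

func main() {
	for _, p := range []string{
		"file:///opt/spark/jars/my-library.jar",    // already on the cluster's filesystem
		"dbfs:/mnt/jars/other-library.jar",         // already remote
		"./dist/my_package-0.1.0-py3-none-any.whl", // local: should be validated and uploaded
	} {
		fmt.Printf("%-45s remote=%v\n", p, isRemotePath(p))
	}
}

With a check like this, file:///opt/spark/jars/my-library.jar would be classified as remote and deployed without validation, while relative paths in the bundle would keep the current validate-and-upload behavior.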