articles/synapse-analytics/spark/synapse-file-mount-api.md
+4 −4 (4 additions, 4 deletions)
@@ -83,9 +83,9 @@ mssparkutils.fs.mount(
 >from notebookutils import mssparkutils
 >```
 > Mount parameters:
->1. fileCacheTimeout: Blobs will be cached in the local temp folder for 120 seconds by default. During this time, blobfuse will not check whether the file is up to date or not. The parameter could be set to change the default timeout time. When multiple clients modify files at the same time, in order to avoid inconsistencies between local and remote files, It's recommended to shorten the cache time, or even change it to 0, and always get the latest files from the server.
->2. timeout: The mount operation timeout in 120 seconds by default. The parameter could be set to change the default timeout time. When there are too many executors or when mount times out, It's recommended to increase the value.
->3. scope: The scope parameter is used to specify the scope of the mount. The default value is "job". If the scope is set to "job", the mount will be visible only to the current cluster. If the scope is set to "workspace", the mount will be visible to all notebooks in current workspace, and the mount point will be automatically created if it doesn't exist, and you should add the same parameters to unmount api to unmount the mount point. The workspace level mount is only supported for linked service authentication.
+>- fileCacheTimeout: Blobs will be cached in the local temp folder for 120 seconds by default. During this time, blobfuse won't check whether the file is up to date or not. The parameter could be set to change the default timeout time. When multiple clients modify files at the same time, in order to avoid inconsistencies between local and remote files, we recommend shortening the cache time, or even changing it to 0, and always getting the latest files from the server.
+>- timeout: The mount operation timeout is 120 seconds by default. The parameter could be set to change the default timeout time. When there are too many executors or when the mount times out, we recommend increasing the value.
+>- scope: The scope parameter is used to specify the scope of the mount. The default value is "job." If the scope is set to "job," the mount is visible only to the current cluster. If the scope is set to "workspace," the mount is visible to all notebooks in the current workspace, and the mount point is automatically created if it doesn't exist. Add the same parameters to the unmount API to unmount the mount point. The workspace level mount is only supported for linked service authentication.
 >
 > You can use these parameters like this:
 >```python
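The hunk cuts off just before the article's own usage example, so here is a rough, non-authoritative sketch of how the parameters described above might be passed to `mssparkutils.fs.mount`. The storage account, container, linked service name, mount point, and the exact shape of the options dictionary are placeholder assumptions to be checked against the published article.

```python
from notebookutils import mssparkutils

# A sketch only. The account, container, linked service name, and mount point are
# placeholders; the option keys mirror the mount parameters described above.
mssparkutils.fs.mount(
    "abfss://mycontainer@<account-name>.dfs.core.windows.net",
    "/test",
    {
        "linkedService": "mygen2account",
        "fileCacheTimeout": 0,     # always fetch the latest blobs from the server
        "timeout": 300,            # raise the mount timeout when there are many executors
        "scope": "workspace"       # "job" (default) limits visibility to the current cluster
    }
)

# Per the revised text, a workspace-scoped mount point should be unmounted with the
# same parameters (signature inferred from that note, not confirmed here).
mssparkutils.fs.unmount("/test", {"scope": "workspace"})
```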
@@ -222,7 +222,7 @@ df.show()
 ```
 
 > [!NOTE]
-> When you mount the storage using linked service, you should always explicitly set spark linked service configuration before using synfs schema to access the data. Please refer to this link for details: [ADLS Gen2 storage with linked services](./apache-spark-secure-credentials-with-tokenlibrary.md#adls-gen2-storage-without-linked-services)
+> When you mount the storage using a linked service, you should always explicitly set spark linked service configuration before using synfs schema to access the data. Refer to [ADLS Gen2 storage with linked services](./apache-spark-secure-credentials-with-tokenlibrary.md#adls-gen2-storage-without-linked-services) for details.
 
 ### Read a file from a mounted Blob Storage account
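As a hedged illustration of the revised note, a mount created through a linked service might be read roughly as follows. The linked service name `mygen2account`, the job ID `49`, the file path, and the exact Spark configuration key are placeholder assumptions; verify them against the linked token-library article.

```python
# "spark" is the SparkSession pre-created in a Synapse notebook.
# Assumption: the linked service configuration key used by the token library is
# "spark.storage.synapse.linkedServiceName"; confirm it in the linked article.
spark.conf.set("spark.storage.synapse.linkedServiceName", "mygen2account")

# With the linked service configuration set, data under the mount can be read
# through the synfs scheme (path format assumed: synfs:/<job-id>/<mount-point>/<file>).
df = spark.read.load("synfs:/49/test/myFile.csv", format="csv")
df.show()
```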