
Commit 087611a

Documentation spelling/copy improvements (#2233)
## Changes

This PR fixes some spelling mistakes and improves the formatting/flow for some of our documentation.
1 parent d441ad8 · commit 087611a

1 file changed: +9 -9 lines changed

docs/assessment.md

Lines changed: 9 additions & 9 deletions
```diff
@@ -154,7 +154,7 @@ The Assessment Report (Main) is the output of the Databricks Labs UCX assessment
 [[back to top](#migration-assessment-report)]
 
 ## Readiness
-This is an overall summary of rediness detailed in the Readiness dashlet. This value is based on the ratio of findings divided by the total number of assets scanned.
+This is an overall summary of readiness detailed in the Readiness dashlet. This value is based on the ratio of findings divided by the total number of assets scanned.
 
 [[back to top](#migration-assessment-report)]
 
```
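For context on that ratio, a hedged illustration only: the dashlet's exact formula is not shown in this diff, and the counts below are hypothetical, assuming readiness is reported as the complement of findings over assets scanned.

```python
# Hedged sketch; hypothetical counts, and the dashlet's exact formula may differ.
findings, assets_scanned = 100, 1000
readiness = 1 - findings / assets_scanned  # 0.9, i.e. roughly 90% ready
```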
```diff
@@ -247,7 +247,7 @@ The next row contains the "Table Types" widget
 This widget is a detailed list of each table, it's format, storage type, location property and if a DBFS table approximate table size. Upgrade strategies include:
 - DEEP CLONE or CTAS for DBFS ROOT tables
 - SYNC for DELTA tables (managed or external) for tables stored on a non-DBFS root (Mount point or direct cloud storage path)
-- Managed non DELTA tables need to be upgraded to to Unity Catalog by either:
+- Managed non DELTA tables need to be upgraded to Unity Catalog by either:
   - Use CTAS to convert targeting the Unity Catalog catalog, schema and table name
   - Moved to an EXTERNAL LOCATION and create an EXTERNAL table in Unity Catalog.
 
```
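A minimal PySpark sketch of those three strategies, assuming a Databricks `spark` session and hypothetical `hive_metastore.sales.*` sources with a `main` Unity Catalog target (names are not taken from the UCX docs):

```python
# All catalog/schema/table names below are hypothetical.

# DBFS ROOT table: copy the data out with DEEP CLONE (CTAS works similarly).
spark.sql("CREATE TABLE IF NOT EXISTS main.sales.orders DEEP CLONE hive_metastore.sales.orders")

# DELTA table on non-DBFS storage (mount or direct cloud path): upgrade in place with SYNC.
spark.sql("SYNC TABLE main.sales.customers FROM hive_metastore.sales.customers")

# Managed non-DELTA table: rewrite into Unity Catalog with CTAS.
spark.sql("CREATE TABLE main.sales.legacy AS SELECT * FROM hive_metastore.sales.legacy")
```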
```diff
@@ -355,7 +355,7 @@ using UCX. As a transition strategy, "No Isolation Shared" clusters or "Assigned
 ### AF115 - Uses passthrough config: spark.databricks.passthrough.enabled.
 
 Passthrough security model is not supported by Unity Catalog. Passthrough mode relied upon file based authorization which is incompatible with Fine Grained Access Controls supported by Unity Catalog.
-Recommend mapping your Passthrough security model to a External Location/Volume/Table/View based security model compatible with Unity Catalog.
+Recommend mapping your Passthrough security model to an External Location/Volume/Table/View based security model compatible with Unity Catalog.
 
 [[back to top](#migration-assessment-report)]
 
```
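A hedged sketch of the Unity Catalog side of that mapping; the location, credential, group, and storage URL are all hypothetical, and a storage credential is assumed to exist already:

```python
# Hypothetical names and URL throughout.
spark.sql("""
    CREATE EXTERNAL LOCATION IF NOT EXISTS finance_raw
    URL 'abfss://raw@myaccount.dfs.core.windows.net/finance'
    WITH (STORAGE CREDENTIAL finance_cred)
""")
# Access is granted to identities, rather than flowing through each user's
# cloud token as passthrough did.
spark.sql("GRANT READ FILES ON EXTERNAL LOCATION finance_raw TO `data-readers`")
```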
```diff
@@ -524,20 +524,20 @@ Recommend upgrading your shared cluster DBR to 13.3 LTS or greater or using Assi
 The minimum DBR version to access Unity Catalog was not met. The recommendation is to upgrade to the latest Long Term Supported (LTS) version of the Databricks Runtime.
 
 ### AF300.4 - ML Runtime cpu
-The Databricks ML Runtime is not supported on Shared Compute mode clusters. Recommend migrating these workloads to Assigned clusters. Implement cluster policies and pools to even out startup time and limit upper cost boundry.
+The Databricks ML Runtime is not supported on Shared Compute mode clusters. Recommend migrating these workloads to Assigned clusters. Implement cluster policies and pools to even out startup time and limit upper cost boundary.
 
 ### AF300.5 - ML Runtime gpu
-The Databricks ML Runtime is not supported on Shared Compute mode clusters. Recommend migrating these workloads to Assigned clusters. Implement cluster policies and pools to even out startup time and limit upper cost boundry.
+The Databricks ML Runtime is not supported on Shared Compute mode clusters. Recommend migrating these workloads to Assigned clusters. Implement cluster policies and pools to even out startup time and limit upper cost boundary.
 
 ### AF301.1 - spark.catalog.x
 
-The `spark.catalog.` pattern was found. Commonly used functions in spark.catalog, such as tableExists, listTables, setDefault catalog are not allowed on shared clusters due to security reasons. `spark.sql("<sql command>)` may be a better alternative. DBR 14.1 and above have made these commands available. Upgrade your DBR version.
+The `spark.catalog.` pattern was found. Commonly used functions in `spark.catalog`, such as `tableExists`, `listTables`, `setCurrentCatalog` are not allowed on shared clusters due to security reasons. `spark.sql("<sql command>)` may be a better alternative. DBR 14.1 and above have made these commands available. Upgrade your DBR version.
 
 [[back to top](#migration-assessment-report)]
 
 ### AF301.2 - spark.catalog.x (spark._jsparkSession.catalog)
 
-The `spark._jsparkSession.catalog` pattern was found. Commonly used functions in spark.catalog, such as tableExists, listTables, setDefault catalog are not allowed on shared clusters due to security reasons. `spark.sql("<sql command>)` may be a better alternative. The corresponding `spark.catalog.x` methods may work on DBR 14.1 and above.
+The `spark._jsparkSession.catalog` pattern was found. Commonly used functions in `spark.catalog`, such as `tableExists`, `listTables`, `setCurrentCatalog` are not allowed on shared clusters due to security reasons. `spark.sql("<sql command>)` may be a better alternative. The corresponding `spark.catalog.x` methods may work on DBR 14.1 and above.
 
 [[back to top](#migration-assessment-report)]
 
```
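A short sketch of the `spark.sql` workaround the text suggests, for shared clusters below DBR 14.1; the catalog and table names are hypothetical:

```python
# Blocked on shared clusters before DBR 14.1:
#   spark.catalog.setCurrentCatalog("main")
#   spark.catalog.tableExists("main.sales.orders")

# SQL equivalents that stay within the shared-cluster security model:
spark.sql("USE CATALOG main")
orders_exists = spark.sql("SHOW TABLES IN main.sales LIKE 'orders'").count() > 0
```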
```diff
@@ -687,7 +687,7 @@ The `dbfs:/mnt` is used as a mount point. This is not supported by Unity Catalog
 
 ### AF311.6 - dbfs usage (`dbfs:/`)
 
-The `dbfs:/` pattern was found. DBFS is not supported by Unity Catalog. Use instead EXTERNAL LOCATIONS and VOLUMES. There may be false positives with this pattern because `dbfs:/Volumes/mycatalog/myschema/myvolume` is ligitamate usage.
+The `dbfs:/` pattern was found. DBFS is not supported by Unity Catalog. Use instead EXTERNAL LOCATIONS and VOLUMES. There may be false positives with this pattern because `dbfs:/Volumes/mycatalog/myschema/myvolume` is legitimate usage.
 
 Please Note: `dbfs:/Volumes/<catalog>/<schema>/<volume>` is a supported access pattern for spark.
 
```
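A hedged before/after for this finding; the paths are hypothetical:

```python
df = spark.read.parquet("dbfs:/mnt/raw/events")            # flagged: DBFS mount path
df = spark.read.parquet("/Volumes/main/raw/events_volume") # Unity Catalog volume path
```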
```diff
@@ -1100,7 +1100,7 @@ Is shortcut for CREATE TABLE DEEP CLONE <target table> <source table> which only
 [STORAGE CREDENTIAL]([url](https://docs.databricks.com/en/sql/language-manual/sql-ref-storage-credentials.html)https://docs.databricks.com/en/sql/language-manual/sql-ref-storage-credentials.html) are a UC object encapsulating the credentials necessary to access cloud storage.
 
 ## Assigned Clusters or Single User Clusters
-"Assigned Clusters" are Interactive clusters assigned to a single principal. Implicit in this term is that these clusters are enabled for Unity Catalog. Publically available today, "Assigned Clusters" can be assigned to a user and the user's identity is used to access data resources. The access to the cluster is restricted to that single user to ensure accountability and accuracy of the audit logs.
+"Assigned Clusters" are Interactive clusters assigned to a single principal. Implicit in this term is that these clusters are enabled for Unity Catalog. Publicly available today, "Assigned Clusters" can be assigned to a user and the user's identity is used to access data resources. The access to the cluster is restricted to that single user to ensure accountability and accuracy of the audit logs.
 
 "Single User Clusters" are Interactive clusters that name one specific user account as user.
 
```