docs/assessment.md (9 additions, 9 deletions)
@@ -154,7 +154,7 @@ The Assessment Report (Main) is the output of the Databricks Labs UCX assessment
[[back to top](#migration-assessment-report)]
## Readiness
-This is an overall summary of rediness detailed in the Readiness dashlet. This value is based on the ratio of findings divided by the total number of assets scanned.
+This is an overall summary of readiness detailed in the Readiness dashlet. This value is based on the ratio of findings divided by the total number of assets scanned.
[[back to top](#migration-assessment-report)]
@@ -247,7 +247,7 @@ The next row contains the "Table Types" widget
This widget is a detailed list of each table, its format, storage type, location property and, if a DBFS table, approximate table size. Upgrade strategies include:
- DEEP CLONE or CTAS for DBFS ROOT tables
- SYNC for DELTA tables (managed or external) for tables stored on a non-DBFS root (Mount point or direct cloud storage path)
-- Managed non DELTA tables need to be upgraded to to Unity Catalog by either:
+- Managed non DELTA tables need to be upgraded to Unity Catalog by either:
- Use CTAS to convert the table, targeting the Unity Catalog catalog, schema and table name
- Move to an EXTERNAL LOCATION and create an EXTERNAL table in Unity Catalog.
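The upgrade strategies above can be sketched as the SQL statements a migration script would emit. This is a minimal, hypothetical helper (table names and the function itself are placeholders, not part of UCX):

```python
# Hypothetical helper illustrating the table upgrade strategies listed above.
# All catalog/schema/table names are placeholders.

def upgrade_statement(src: str, dst: str, fmt: str, dbfs_root: bool) -> str:
    if fmt == "DELTA" and not dbfs_root:
        # SYNC upgrades managed or external Delta tables stored outside the DBFS root.
        return f"SYNC TABLE {dst} FROM {src}"
    if dbfs_root and fmt == "DELTA":
        # DBFS root Delta tables need their data copied into UC: DEEP CLONE.
        return f"CREATE TABLE {dst} DEEP CLONE {src}"
    # Managed non-Delta tables (and non-Delta DBFS root tables): CTAS into Unity Catalog.
    return f"CREATE TABLE {dst} AS SELECT * FROM {src}"

print(upgrade_statement("hive_metastore.db.t", "main.db.t", "DELTA", dbfs_root=False))
# SYNC TABLE main.db.t FROM hive_metastore.db.t
```

Each generated statement could then be run via `spark.sql(...)`.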
@@ -355,7 +355,7 @@ using UCX. As a transition strategy, "No Isolation Shared" clusters or "Assigned
The Passthrough security model is not supported by Unity Catalog. Passthrough mode relied upon file-based authorization, which is incompatible with the Fine Grained Access Controls supported by Unity Catalog.
-Recommend mapping your Passthrough security model to a External Location/Volume/Table/View based security model compatible with Unity Catalog.
+Recommend mapping your Passthrough security model to an External Location/Volume/Table/View based security model compatible with Unity Catalog.
[[back to top](#migration-assessment-report)]
@@ -524,20 +524,20 @@ Recommend upgrading your shared cluster DBR to 13.3 LTS or greater or using Assi
The minimum DBR version to access Unity Catalog was not met. The recommendation is to upgrade to the latest Long Term Supported (LTS) version of the Databricks Runtime.
### AF300.4 - ML Runtime cpu
-The Databricks ML Runtime is not supported on Shared Compute mode clusters. Recommend migrating these workloads to Assigned clusters. Implement cluster policies and pools to even out startup time and limit upper cost boundry.
+The Databricks ML Runtime is not supported on Shared Compute mode clusters. Recommend migrating these workloads to Assigned clusters. Implement cluster policies and pools to even out startup time and limit upper cost boundary.
### AF300.5 - ML Runtime gpu
-The Databricks ML Runtime is not supported on Shared Compute mode clusters. Recommend migrating these workloads to Assigned clusters. Implement cluster policies and pools to even out startup time and limit upper cost boundry.
+The Databricks ML Runtime is not supported on Shared Compute mode clusters. Recommend migrating these workloads to Assigned clusters. Implement cluster policies and pools to even out startup time and limit upper cost boundary.
### AF301.1 - spark.catalog.x
-The `spark.catalog.` pattern was found. Commonly used functions in spark.catalog, such as tableExists, listTables, setDefault catalog are not allowed on shared clusters due to security reasons. `spark.sql("<sql command>)` may be a better alternative. DBR 14.1 and above have made these commands available. Upgrade your DBR version.
+The `spark.catalog.` pattern was found. Commonly used functions in `spark.catalog`, such as `tableExists`, `listTables`, `setCurrentCatalog` are not allowed on shared clusters due to security reasons. `spark.sql("<sql command>")` may be a better alternative. DBR 14.1 and above have made these commands available. Upgrade your DBR version.
-The `spark._jsparkSession.catalog` pattern was found. Commonly used functions in spark.catalog, such as tableExists, listTables, setDefault catalog are not allowed on shared clusters due to security reasons. `spark.sql("<sql command>)` may be a better alternative. The corresponding `spark.catalog.x` methods may work on DBR 14.1 and above.
+The `spark._jsparkSession.catalog` pattern was found. Commonly used functions in `spark.catalog`, such as `tableExists`, `listTables`, `setCurrentCatalog` are not allowed on shared clusters due to security reasons. `spark.sql("<sql command>")` may be a better alternative. The corresponding `spark.catalog.x` methods may work on DBR 14.1 and above.
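As a hedged illustration of the `spark.sql(...)` alternative mentioned above, blocked `spark.catalog` calls can be rewritten as SQL text. The helper below is hypothetical (not part of UCX or Spark) and covers only three common calls:

```python
# Hypothetical translation of blocked spark.catalog calls into SQL text
# that can be passed to spark.sql(...) on shared clusters.

def catalog_call_to_sql(method: str, arg: str) -> str:
    if method == "setCurrentCatalog":
        return f"USE CATALOG {arg}"
    if method == "listTables":
        return f"SHOW TABLES IN {arg}"
    if method == "tableExists":
        # Caller checks whether the result set of this query is non-empty.
        schema, table = arg.rsplit(".", 1)
        return f"SHOW TABLES IN {schema} LIKE '{table}'"
    raise ValueError(f"no SQL mapping for {method}")

# e.g. spark.sql(catalog_call_to_sql("setCurrentCatalog", "main"))
print(catalog_call_to_sql("tableExists", "db.t"))
# SHOW TABLES IN db LIKE 't'
```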
[[back to top](#migration-assessment-report)]
@@ -687,7 +687,7 @@ The `dbfs:/mnt` is used as a mount point. This is not supported by Unity Catalog
### AF311.6 - dbfs usage (`dbfs:/`)
-The `dbfs:/` pattern was found. DBFS is not supported by Unity Catalog. Use instead EXTERNAL LOCATIONS and VOLUMES. There may be false positives with this pattern because `dbfs:/Volumes/mycatalog/myschema/myvolume` is ligitamate usage.
+The `dbfs:/` pattern was found. DBFS is not supported by Unity Catalog. Use EXTERNAL LOCATIONS and VOLUMES instead. There may be false positives with this pattern because `dbfs:/Volumes/mycatalog/myschema/myvolume` is legitimate usage.
Please Note: `dbfs:/Volumes/<catalog>/<schema>/<volume>` is a supported access pattern for Spark.
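The false-positive note above can be made concrete with a small check that flags `dbfs:/` usage while excluding the supported Volumes pattern. This sketch is illustrative only and is not the detection logic UCX actually uses:

```python
import re

# Flag dbfs:/ paths, except the supported
# dbfs:/Volumes/<catalog>/<schema>/<volume> access pattern noted above.
_VOLUMES = re.compile(r"^dbfs:/Volumes/[^/]+/[^/]+/[^/]+")

def is_unsupported_dbfs_path(path: str) -> bool:
    return path.startswith("dbfs:/") and not _VOLUMES.match(path)

print(is_unsupported_dbfs_path("dbfs:/mnt/data/file.csv"))  # True
print(is_unsupported_dbfs_path("dbfs:/Volumes/mycatalog/myschema/myvolume/file.csv"))  # False
```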
@@ -1100,7 +1100,7 @@ Is shortcut for CREATE TABLE DEEP CLONE <target table> <source table> which only
[STORAGE CREDENTIALS](https://docs.databricks.com/en/sql/language-manual/sql-ref-storage-credentials.html) are UC objects encapsulating the credentials necessary to access cloud storage.
## Assigned Clusters or Single User Clusters
-"Assigned Clusters" are Interactive clusters assigned to a single principal. Implicit in this term is that these clusters are enabled for Unity Catalog. Publically available today, "Assigned Clusters" can be assigned to a user and the user's identity is used to access data resources. The access to the cluster is restricted to that single user to ensure accountability and accuracy of the audit logs.
+"Assigned Clusters" are Interactive clusters assigned to a single principal. Implicit in this term is that these clusters are enabled for Unity Catalog. Publicly available today, "Assigned Clusters" can be assigned to a user and the user's identity is used to access data resources. The access to the cluster is restricted to that single user to ensure accountability and accuracy of the audit logs.
"Single User Clusters" are Interactive clusters that name one specific user account as the user.