
Commit 3535b8d

Merge pull request #49730 from roygara/adlsBugFix
Adls bug fix
2 parents 7a221bc + 7647a53 commit 3535b8d

File tree: 1 file changed (+4, −4 lines)


articles/storage/data-lake-storage/using-databricks-spark.md

Lines changed: 4 additions & 4 deletions
@@ -93,10 +93,10 @@ Re-open Databricks in your browser and execute the following steps:
  #mount Azure Blob Storage as an HDFS file system to your databricks cluster
  #you need to specify a storage account and container to connect to.
  #use a SAS token or an account key to connect to Blob Storage.
- accountname = "<insert account name>'
- accountkey = " <insert account key>'
- fullname = "fs.azure.account.key." +accountname+ ".blob.core.windows.net"
- accountsource = "abfs://dbricks@" +accountname+ ".blob.core.windows.net/folder1"
+ accountname = "<insert account name>"
+ accountkey = " <insert account key>"
+ fullname = "fs.azure.account.key." +accountname+ ".dfs.core.windows.net"
+ accountsource = "abfs://dbricks@" +accountname+ ".dfs.core.windows.net/folder1"
  #create a dataframe to read data
  flightDF = spark.read.format('csv').options(header='true', inferschema='true').load(accountsource + "/On_Time_On_Time*.csv")
  #read the all the airline csv files and write the output to parquet format for easy query
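The hunk only shows the lines that changed: the mismatched quotes are fixed and the endpoints move from `blob.core.windows.net` to `dfs.core.windows.net`, which is the correct endpoint for the `abfs://` scheme (ADLS Gen2). For context, here is a minimal sketch of how the corrected values could fit together in a Databricks (PySpark) notebook. The use of `spark.conf.set` is an assumption, since the hunk does not show how `fullname` and `accountkey` are consumed; the container name `dbricks` and path `folder1` are the placeholders from the snippet above.

```python
# Minimal sketch, not the article's full snippet: wiring the corrected
# dfs.core.windows.net endpoint and abfs:// URI together in a Databricks
# notebook. `spark` is the SparkSession Databricks predefines; applying
# the key via spark.conf.set is an assumption made for this sketch.

accountname = "<insert account name>"   # placeholder from the snippet
accountkey = "<insert account key>"     # placeholder from the snippet

# Register the storage account key so the ABFS driver can authenticate.
fullname = "fs.azure.account.key." + accountname + ".dfs.core.windows.net"
spark.conf.set(fullname, accountkey)

# ADLS Gen2 paths use the abfs:// scheme against the dfs endpoint,
# which is exactly what this commit corrects.
accountsource = "abfs://dbricks@" + accountname + ".dfs.core.windows.net/folder1"

# Read the airline CSV files into a DataFrame, as in the lines that follow the hunk.
flightDF = (spark.read.format('csv')
            .options(header='true', inferschema='true')
            .load(accountsource + "/On_Time_On_Time*.csv"))
```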

0 commit comments