
Commit 482c32b

committed: Update readme file
1 parent 7c6d1e7

1 file changed: +54 −3 lines


r/sparkr/README.md

Lines changed: 54 additions & 3 deletions
@@ -37,7 +37,7 @@ within a DSX Jupyter notebook, you can obtain your account credentials in the fo
 If your Object Storage was created with a Softlayer account, each part of the credentials will
 be found as text that you can copy and paste into the example code below.
 
-### Cloud Object Storage
+### Softlayer - IBM Cloud Object Storage
 library(ibmos2sparkR)
 configurationName = "bluemixO123"
 
@@ -60,8 +60,59 @@ be found as text that you can copy and paste into the example code below.
 header = "true")
 head(df.data.1)
 
+### Bluemix - IBM Cloud Object Storage
+The class CloudObjectStorage allows you to connect to an IBM Cloud Object Storage (COS) instance hosted on Bluemix. You can connect to
+a Bluemix COS instance using API keys as follows:
 
-### Bluemix / Data Science Experience
+library(ibmos2sparkR)
+configurationName = "bluemixO123"
+
+# In DSX notebooks, the "insert to code" will insert this credentials list for you
+credentials <- list(
+    apiKey = "XXX",
+    serviceId = "XXX",
+    endpoint = "https://s3-api.objectstorage.....net/"
+)
+
+cos <- CloudObjectStorage(sparkContext=sc, credentials=credentials, configurationName=configurationName, cosType="bluemix_cos")
+
+bucketName <- "bucketName"
+fileName <- "test.csv"
+url <- cos$url(bucketName, fileName)
+
+invisible(sparkR.session(appName = "SparkSession R"))
+
+df.data.1 <- read.df(url,
+    source = "org.apache.spark.sql.execution.datasources.csv.CSVFileFormat",
+    header = "true")
+head(df.data.1)
+
+Alternatively, you can connect to an IBM Bluemix COS instance using an IAM token. Example:
+
+library(ibmos2sparkR)
+configurationName = "bluemixO123"
+
+# In DSX notebooks, the "insert to code" will insert this credentials list for you
+credentials <- list(
+    iamToken = "XXXXXXXXX",
+    serviceId = "XXX",
+    endpoint = "https://s3-api.objectstorage.....net/"
+)
+
+cos <- CloudObjectStorage(sparkContext=sc, credentials=credentials, configurationName=configurationName, cosType="bluemix_cos", authMethod="iam_token")
+
+bucketName <- "bucketName"
+fileName <- "test.csv"
+url <- cos$url(bucketName, fileName)
+
+invisible(sparkR.session(appName = "SparkSession R"))
+
+df.data.1 <- read.df(url,
+    source = "org.apache.spark.sql.execution.datasources.csv.CSVFileFormat",
+    header = "true")
+head(df.data.1)
+
+### Bluemix Swift Object Storage / Data Science Experience
 
 library(ibmos2sparkR)
 configurationname = "bluemixOScon" # can be any name you like (allows for multiple configurations)
@@ -86,7 +137,7 @@ be found as text that you can copy and paste into the example code below.
 data = read.df(bmconfig$url(container, objectname), source="com.databricks.spark.csv", header="true")
 
 
-### Softlayer
+### Softlayer Swift Object Storage
 
 library(ibmos2sparkR)
 configurationname = "softlayerOScon" # can be any name you like (allows for multiple configurations)

0 commit comments
