scala/README.md (46 additions, 18 deletions)
@@ -1,22 +1,22 @@
# ibmos2spark

The package sets Spark Hadoop configurations for connecting to
IBM Bluemix Object Storage and Softlayer Account Object Storage instances. This package uses the new [stocator](https://github.com/SparkTC/stocator) driver, which implements the `swift2d` protocol, and is available
on the latest IBM Apache Spark Service instances (and through IBM Data Science Experience).

Using the `stocator` driver connects your Spark executor nodes directly
to your data in object storage.
This is an optimized, high-performance method to connect Spark to your data. All IBM Apache Spark kernels
are instantiated with the `stocator` driver in the Spark kernel's classpath.
You can also run this locally by installing the [stocator driver](https://github.com/SparkTC/stocator)
and adding it to your local Apache Spark kernel's classpath.
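Once this package has registered a Hadoop configuration, objects are addressed with `swift2d://` URLs in the form `container.configuration/object`. As a minimal sketch (the container, configuration, and object names below are illustrative placeholders, not values from this README):

```scala
// All names below are illustrative assumptions.
val container  = "myContainer" // object storage container
val configName = "myConfig"    // name the Hadoop configuration was registered under
val objectName = "data.txt"    // object (file) inside the container

// Assemble the swift2d URL understood by the stocator driver.
val url = s"swift2d://$container.$configName/$objectName"

// With a live SparkContext `sc`, the object could then be read with:
// val rdd = sc.textFile(url)
```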

## Installation

This library is cross-built on both Scala 2.10 (for Spark 1.6.0) and Scala 2.11 (for Spark 2.0.0 and greater)

### Releases

#### SBT library dependency
@@ -69,8 +69,8 @@ Data Science Experience](http://datascience.ibm.com), will install the package.
### Snapshots

From time to time, a snapshot version may be released if fixes or new features are added.
The following snippets show how to install snapshot releases.
Replace the version number (`0.0.7`) in the following examples with the version you desire.

##### SBT library dependency
@@ -138,24 +138,52 @@ Add SNAPSHOT repository to pom.xml
## Usage

The usage of this package depends on *from where* your Object Storage instance was created. This package
is intended to connect to IBM's Object Storage instances obtained from Bluemix or Data Science Experience
(DSX) or from a separate account on IBM Softlayer. It also supports IBM Cloud Object Storage (COS).
The instructions below show how to connect to either type of instance.

The connection setup is essentially the same; the difference for you is how you deliver the
credentials. If your Object Storage was created with Bluemix/DSX, you can obtain your account
credentials in the form of a HashMap object with a few clicks on the side-tab within a DSX Jupyter notebook.
If your Object Storage was created with a Softlayer account, each part of the credentials will
be found as text that you can copy and paste into the example code below.
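As a rough sketch of what such a credentials object looks like when assembled by hand from copied text (the key names and values here are illustrative assumptions, not a schema this excerpt confirms):

```scala
import scala.collection.mutable.HashMap

// Hypothetical Softlayer-style credentials, pasted piece by piece
// from the account page; every key and value below is an assumption.
val credentials = HashMap[String, String](
  "auth_url"   -> "https://identity.open.softlayer.com",
  "username"   -> "user",
  "password"   -> "secret",
  "project_id" -> "abc123"
)

// The HashMap is then handed to the connector class for your instance
// type together with the SparkContext, which sets the Hadoop
// configuration described above.
```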

### IBM Cloud Object Storage / Data Science Experience

```scala
import com.ibm.ibmos2spark.CloudObjectStorage

// The credentials HashMap may be created for you with the
```