Commit a8e6bb2

gadamc committed: Updates to README
1 parent 7e00c46 commit a8e6bb2

File tree

5 files changed: +99 -20 lines changed


README.md

Lines changed: 11 additions & 3 deletions
@@ -1,9 +1,17 @@
 # ibmos2spark

 The package sets Spark Hadoop configurations for connecting to
-IBM Bluemix Object Storage and Softlayer Account Object Storage instances
-with the swift protocol. The packages uses the new [swift2d/stocator](https://github.com/SparkTC/stocator) protocol, availble
-on the latest IBM Spark Service instances (and through IBM Data Science Experience).
+IBM Bluemix Object Storage and Softlayer Account Object Storage instances. This package uses the new [stocator](https://github.com/SparkTC/stocator) driver, which implements the `swift2d` protocol and is available
+on the latest IBM Apache Spark Service instances (and through IBM Data Science Experience).
+
+
+Using the `stocator` driver connects your Spark executor nodes directly
+to your data in object storage.
+This is an optimized, high-performance method to connect Spark to your data. All IBM Apache Spark kernels
+are instantiated with the `stocator` driver in the Spark kernel's classpath.
+You can also run this locally by installing the [stocator driver](https://github.com/SparkTC/stocator)
+and adding it to your local Apache Spark kernel's classpath.
+

 This repository contains separate packages for `python`, `R` and `scala`.
 You will find their documentation within the sub-folders.
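
A minimal sketch of the "run this locally" note above: start a local PySpark session, pull the stocator driver onto the classpath, and register it as the handler for `swift2d://` URLs. The Maven coordinates, version, and the `myobjectstore`/`myContainer` names are assumptions for illustration only; verify them against the stocator releases for your Spark build.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("stocator-local-sketch")
    # Fetch the stocator jar at startup; coordinates/version are assumptions.
    .config("spark.jars.packages", "com.ibm.stocator:stocator:1.0.24")
    # Register stocator as the Hadoop FileSystem behind the swift2d:// scheme.
    .config("spark.hadoop.fs.swift2d.impl",
            "com.ibm.stocator.fs.ObjectStoreFileSystem")
    .getOrCreate()
)

# Once the service credentials are configured (the ibmos2spark packages below
# do this for you), executors read objects directly from object storage, e.g.:
# df = spark.read.text("swift2d://myContainer.myobjectstore/my_data.txt")
```

On the IBM Apache Spark service and in DSX the driver is already on the kernel's classpath, so only the credential configuration handled by the packages below is needed.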

python/README.md

Lines changed: 22 additions & 4 deletions
@@ -1,9 +1,16 @@
 # ibmos2spark

 The package sets Spark Hadoop configurations for connecting to
-IBM Bluemix Object Storage and Softlayer Account Object Storage instances
-with the swift protocol. This packages uses the new [swift2d/stocator](https://github.com/SparkTC/stocator) protocol, availble
-on the latest IBM Spark Service instances (and through IBM Data Science Experience).
+IBM Bluemix Object Storage and Softlayer Account Object Storage instances. This package uses the new [stocator](https://github.com/SparkTC/stocator) driver, which implements the `swift2d` protocol and is available
+on the latest IBM Apache Spark Service instances (and through IBM Data Science Experience).
+
+
+Using the `stocator` driver connects your Spark executor nodes directly
+to your data in object storage.
+This is an optimized, high-performance method to connect Spark to your data. All IBM Apache Spark kernels
+are instantiated with the `stocator` driver in the Spark kernel's classpath.
+You can also run this locally by installing the [stocator driver](https://github.com/SparkTC/stocator)
+and adding it to your local Apache Spark kernel's classpath.

 ## Installation

@@ -13,7 +20,18 @@ pip install --user --upgrade ibmos2spark

 ## Usage

-### Bluemix
+The usage of this package depends on *where* your Object Storage instance was created. This package
+is intended to connect to IBM's Object Storage instances obtained from Bluemix or Data Science Experience
+(DSX), or from a separate account on IBM Softlayer. The instructions below show how to connect to
+either type of instance.
+
+The connection setup is essentially the same; the difference is how you deliver the
+credentials. If your Object Storage was created with Bluemix/DSX, you can obtain your account credentials
+in the form of a Python dictionary with a few clicks on the side tab within a DSX Jupyter notebook.
+If your Object Storage was created with a Softlayer account, each part of the credentials will
+be found as text that you can copy and paste into the example code below.
+
+### Bluemix / Data Science Experience

 ```python
 import ibmos2spark
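
To make the credentials-as-dictionary idea concrete, here is a hedged sketch of how such a connection could look. The `bluemix` helper name, its argument order, the credential keys, and the container/object names are assumptions based on the package's documented usage, not text from this diff.

```python
from pyspark import SparkContext
import ibmos2spark

# In a DSX notebook a SparkContext is already provided as `sc`;
# locally you can create one yourself.
sc = SparkContext.getOrCreate()

# Credentials dictionary as inserted by the DSX notebook side tab
# (keys shown here are assumptions; use the dictionary DSX gives you).
credentials = {
    "auth_url": "https://identity.open.softlayer.com",
    "project_id": "***",
    "region": "dallas",
    "user_id": "***",
    "username": "***",
    "password": "***",
}

configuration_name = "my_bluemix_os"  # any name; allows multiple configurations
bmos = ibmos2spark.bluemix(sc, credentials, configuration_name)  # assumed helper

# Spark reads the object directly over the swift2d protocol.
data = sc.textFile(bmos.url("myContainer", "my_data.csv"))
```

For an Object Storage instance created from a Softlayer account the flow is the same, except the credentials are supplied as the individual strings you copy and paste rather than a single dictionary.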

r/sparklyr/README.md

Lines changed: 22 additions & 5 deletions
@@ -1,10 +1,16 @@
 # ibmos2sparklyr

 The package sets Spark Hadoop configurations for connecting to
-IBM Bluemix Object Storage and Softlayer Account Object Storage instances
-with the swift protocol. This packages uses the new
-[swift2d/stocator](https://github.com/SparkTC/stocator) protocol, availble
-on the latest IBM Spark Service instances and through IBM Data Science Experience (DSX).
+IBM Bluemix Object Storage and Softlayer Account Object Storage instances. This package uses the new [stocator](https://github.com/SparkTC/stocator) driver, which implements the `swift2d` protocol and is available
+on the latest IBM Apache Spark Service instances (and through IBM Data Science Experience).
+
+Using the `stocator` driver connects your Spark executor nodes directly
+to your data in object storage.
+This is an optimized, high-performance method to connect Spark to your data. All IBM Apache Spark kernels
+are instantiated with the `stocator` driver in the Spark kernel's classpath.
+You can also run this locally by installing the [stocator driver](https://github.com/SparkTC/stocator)
+and adding it to your local Apache Spark kernel's classpath.
+

 This package expects a SparkContext instantiated by sparklyr. It has been tested
 to work with IBM RStudio from DSX, though it should work with other Spark
@@ -29,7 +35,18 @@ sparklyr package to the special DSX version.

 ## Usage

-### Bluemix
+The usage of this package depends on *where* your Object Storage instance was created. This package
+is intended to connect to IBM's Object Storage instances obtained from Bluemix or Data Science Experience
+(DSX), or from a separate account on IBM Softlayer. The instructions below show how to connect to
+either type of instance.
+
+The connection setup is essentially the same; the difference is how you deliver the
+credentials. If your Object Storage was created with Bluemix/DSX, you can obtain your account credentials
+in the form of a list with a few clicks on the side tab within a DSX Jupyter notebook.
+If your Object Storage was created with a Softlayer account, each part of the credentials will
+be found as text that you can copy and paste into the example code below.
+
+### Bluemix / Data Science Experience

 library(ibmos2sparklyr)
 configurationname = "bluemixOScon" # can be any name you like (allows for multiple configurations)

r/sparkr/README.md

Lines changed: 22 additions & 4 deletions
@@ -1,9 +1,16 @@
 # ibmos2sparkR

 The package sets Spark Hadoop configurations for connecting to
-IBM Bluemix Object Storage and Softlayer Account Object Storage instances
-with the swift protocol. This packages uses the new [swift2d/stocator](https://github.com/SparkTC/stocator) protocol, availble
-on the latest IBM Spark Service instances, and through IBM Data Science Experience.
+IBM Bluemix Object Storage and Softlayer Account Object Storage instances. This package uses the new [stocator](https://github.com/SparkTC/stocator) driver, which implements the `swift2d` protocol and is available
+on the latest IBM Apache Spark Service instances (and through IBM Data Science Experience).
+
+Using the `stocator` driver connects your Spark executor nodes directly
+to your data in object storage.
+This is an optimized, high-performance method to connect Spark to your data. All IBM Apache Spark kernels
+are instantiated with the `stocator` driver in the Spark kernel's classpath.
+You can also run this locally by installing the [stocator driver](https://github.com/SparkTC/stocator)
+and adding it to your local Apache Spark kernel's classpath.
+

 This package expects a SparkContext instantiated by SparkR. It has been tested to work with
 IBM Spark service in R notebooks on IBM DSX, though it should work with other Spark installations
@@ -19,7 +26,18 @@ where `version` should be a tagged release, such as `0.0.7`. (If you're daring,

 ## Usage

-### Bluemix
+The usage of this package depends on *where* your Object Storage instance was created. This package
+is intended to connect to IBM's Object Storage instances obtained from Bluemix or Data Science Experience
+(DSX), or from a separate account on IBM Softlayer. The instructions below show how to connect to
+either type of instance.
+
+The connection setup is essentially the same; the difference is how you deliver the
+credentials. If your Object Storage was created with Bluemix/DSX, you can obtain your account credentials
+in the form of a list with a few clicks on the side tab within a DSX Jupyter notebook.
+If your Object Storage was created with a Softlayer account, each part of the credentials will
+be found as text that you can copy and paste into the example code below.
+
+### Bluemix / Data Science Experience

 library(ibmos2sparkR)
 configurationname = "bluemixOScon" # can be any name you like (allows for multiple configurations)

scala/README.md

Lines changed: 22 additions & 4 deletions
@@ -1,9 +1,16 @@
 # ibmos2spark

 The package sets Spark Hadoop configurations for connecting to
-IBM Bluemix Object Storage and Softlayer Account Object Storage instances
-with the swift protocol. This packages uses the new [swift2d/stocator](https://github.com/SparkTC/stocator) protocol, availble
-on the latest IBM Spark Service instances (and through IBM Data Science Experience).
+IBM Bluemix Object Storage and Softlayer Account Object Storage instances. This package uses the new [stocator](https://github.com/SparkTC/stocator) driver, which implements the `swift2d` protocol and is available
+on the latest IBM Apache Spark Service instances (and through IBM Data Science Experience).
+
+Using the `stocator` driver connects your Spark executor nodes directly
+to your data in object storage.
+This is an optimized, high-performance method to connect Spark to your data. All IBM Apache Spark kernels
+are instantiated with the `stocator` driver in the Spark kernel's classpath.
+You can also run this locally by installing the [stocator driver](https://github.com/SparkTC/stocator)
+and adding it to your local Apache Spark kernel's classpath.
+

 ## Installation

@@ -130,7 +137,18 @@ Add SNAPSHOT repository to pom.xml

 ## Usage

-### Bluemix
+The usage of this package depends on *where* your Object Storage instance was created. This package
+is intended to connect to IBM's Object Storage instances obtained from Bluemix or Data Science Experience
+(DSX), or from a separate account on IBM Softlayer. The instructions below show how to connect to
+either type of instance.
+
+The connection setup is essentially the same; the difference is how you deliver the
+credentials. If your Object Storage was created with Bluemix/DSX, you can obtain your account credentials
+in the form of a HashMap object with a few clicks on the side tab within a DSX Jupyter notebook.
+If your Object Storage was created with a Softlayer account, each part of the credentials will
+be found as text that you can copy and paste into the example code below.
+
+### Bluemix / Data Science Experience


 ```scala
