
Commit da5d2d8: Update README.md (parent: 2694b2b)

1 file changed: README.md (+31, -14 lines)
````diff
@@ -124,24 +124,41 @@ and for Apache Flink add
 where `VERSION` is the released version you want to use.
 
 ## Usage
+Besides using the Inference API in your application code, we also provide a command line interface with various options that allow for a convenient way to use the core reasoning algorithms:
 ```
 RDFGraphMaterializer 0.1.0
 Usage: RDFGraphMaterializer [options]
-
-
-  -i <file> | --input <file>
-        the input file in N-Triple format
-  -o <directory> | --out <directory>
-        the output directory
-  --single-file
-        write the output to a single file in the output directory
-  --sorted
-        sorted output of the triples (per file)
-  -p {rdfs | owl-horst} | --profile {rdfs | owl-horst}
-        the reasoning profile
-  --help
-        prints this usage text
+
+  -i, --input <path1>,<path2>,...
+        path to file or directory that contains the input files (in N-Triples format)
+  -o, --out <directory>        the output directory
+  --properties <property1>,<property2>,...
+        list of properties for which the transitive closure will be computed (used only for profile 'transitive')
+  -p, --profile {rdfs | rdfs-simple | owl-horst | transitive}
+        the reasoning profile
+  --single-file                write the output to a single file in the output directory
+  --sorted                     sorted output of the triples (per file)
+  --parallelism <value>        the degree of parallelism, i.e. the number of Spark partitions used in the Spark operations
+  --help                       prints this usage text
+```
+This can easily be used when submitting the job to Spark (resp. Flink), e.g. for Spark
+
+```
+/PATH/TO/SPARK/spark-submit [spark-options] /PATH/TO/INFERENCE-SPARK-DISTRIBUTION/FILE.jar [inference-api-arguments]
+```
+
+and for Flink
+
+```
+/PATH/TO/FLINK/bin/flink run [flink-options] /PATH/TO/INFERENCE-FLINK-DISTRIBUTION/FILE.jar [inference-api-arguments]
+```
+
+In addition, we also provide shell scripts that wrap the Spark (resp. Flink) deployment and can be used with
+```
+/PATH/TO/INFERENCE-DISTRIBUTION/bin/cli [inference-api-arguments]
 ```
+(Note that setting Spark (resp. Flink) options isn't supported here and has to be done via the corresponding config files.)
+
 ### Example
 
 `RDFGraphMaterializer -i /PATH/TO/FILE/test.nt -o /PATH/TO/TEST_OUTPUT_DIRECTORY/ -p rdfs` will compute the RDFS materialization on the data contained in `test.nt` and write the inferred RDF graph to the given directory `TEST_OUTPUT_DIRECTORY`.
````
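
For illustration, the options introduced by this commit (multiple input paths, the `transitive` profile with `--properties`, and `--parallelism`) might be combined as follows; the file paths and the property IRI are invented for this sketch:

```
RDFGraphMaterializer -i /data/graph1.nt,/data/graph2.nt -o /data/out/ -p transitive --properties http://example.org/partOf --parallelism 4
```

According to the usage text, `--properties` is only taken into account when the `transitive` profile is selected.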

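As a concrete instance of the `spark-submit` template above, a local test run could look like the following; the Spark installation path, master URL, and jar file name are hypothetical placeholders:

```
/opt/spark/bin/spark-submit --master "local[4]" /opt/inference/inference-spark-dist.jar -i /data/test.nt -o /data/out/ -p rdfs
```

Everything before the jar path is consumed by `spark-submit` as `[spark-options]`, while everything after it is passed through to the Inference API as `[inference-api-arguments]`.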
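
The `flink run` template can be filled in the same way, again with hypothetical paths; note that the `-p` before the jar is Flink's own parallelism flag, while the `-p` after the jar is the materializer's profile option:

```
/opt/flink/bin/flink run -p 4 /opt/inference/inference-flink-dist.jar -i /data/test.nt -o /data/out/ -p owl-horst
```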