This repository was archived by the owner on Oct 8, 2020. It is now read-only.

Commit 6b06c38

Merge branch 'develop' of github.com:SANSA-Stack/SANSA-Inference into develop

2 parents: 7a6bca0 + 76a51d9

File tree: 1 file changed (+59 −15 lines)

README.md

Lines changed: 59 additions & 15 deletions
@@ -1,7 +1,30 @@
+
+
 # SANSA Inference Layer
 [![Maven Central](https://maven-badges.herokuapp.com/maven-central/net.sansa-stack/sansa-inference-parent_2.11/badge.svg)](https://maven-badges.herokuapp.com/maven-central/net.sansa-stack/sansa-inference-parent_2.11)
 [![Build Status](https://ci.aksw.org/jenkins/job/SANSA%20Inference%20Layer/job/develop/badge/icon)](https://ci.aksw.org/jenkins/job/SANSA%20Inference%20Layer/job/develop/)

+**Table of Contents**
+
+- [SANSA Inference Layer](#sansa-inference-layer)
+  - [Structure](#structure)
+    - [sansa-inference-common](#sansa-inference-common)
+    - [sansa-inference-spark](#sansa-inference-spark)
+    - [sansa-inference-flink](#sansa-inference-flink)
+    - [sansa-inference-tests](#sansa-inference-tests)
+  - [Setup](#setup)
+    - [Prerequisites](#prerequisites)
+    - [From source](#from-source)
+    - [Using Maven pre-build artifacts](#using-maven-pre-build-artifacts)
+    - [Using SBT](#using-sbt)
+  - [Usage](#usage)
+    - [Example](#example)
+  - [Supported Reasoning Profiles](#supported-reasoning-profiles)
+    - [RDFS](#rdfs)
+    - [RDFS Simple](#rdfs-simple)
+    - [OWL Horst](#owl-horst)
+
+
 ## Structure
 ### sansa-inference-common
 * common data structures
@@ -124,27 +147,48 @@ and for Apache Flink add
 where `VERSION` is the released version you want to use.

 ## Usage
+Besides using the Inference API in your application code, we also provide a command line interface with various options that provide a convenient way to use the core reasoning algorithms:
 ```
 RDFGraphMaterializer 0.1.0
 Usage: RDFGraphMaterializer [options]
-
-
-  -i <file> | --input <file>
-        the input file in N-Triple format
-  -o <directory> | --out <directory>
-        the output directory
-  --single-file
-        write the output to a single file in the output directory
-  --sorted
-        sorted output of the triples (per file)
-  -p {rdfs | owl-horst} | --profile {rdfs | owl-horst}
-        the reasoning profile
-  --help
-        prints this usage text
+
+  -i, --input <path1>,<path2>,...
+        path to a file or directory that contains the input files (in N-Triples format)
+  -o, --out <directory>    the output directory
+  --properties <property1>,<property2>,...
+        list of properties for which the transitive closure will be computed (used only for profile 'transitive')
+  -p, --profile {rdfs | rdfs-simple | owl-horst | transitive}
+        the reasoning profile
+  --single-file            write the output to a single file in the output directory
+  --sorted                 sorted output of the triples (per file)
+  --parallelism <value>    the degree of parallelism, i.e. the number of Spark partitions used in the Spark operations
+  --help                   prints this usage text
+```
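For illustration, a hypothetical invocation of the new `transitive` profile; the paths and the property URI `http://example.org/partOf` are placeholders:

```bash
# Hypothetical example: materialize only the transitive closure of ex:partOf,
# writing sorted output into a single file using 4 Spark partitions
RDFGraphMaterializer -i /PATH/TO/FILE/test.nt -o /PATH/TO/TEST_OUTPUT_DIRECTORY/ \
  -p transitive --properties http://example.org/partOf \
  --single-file --sorted --parallelism 4
```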
+This can easily be used when submitting the job to Spark (resp. Flink), e.g. for Spark
+
+```bash
+/PATH/TO/SPARK/bin/spark-submit [spark-options] /PATH/TO/INFERENCE-SPARK-DISTRIBUTION/FILE.jar [inference-api-arguments]
 ```
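Here `[spark-options]` stands for regular `spark-submit` flags; a minimal sketch, assuming a local Spark master (all paths remain placeholders):

```bash
# Hypothetical example: run locally on 4 cores with 4 GB of driver memory
/PATH/TO/SPARK/bin/spark-submit --master "local[4]" --driver-memory 4g \
  /PATH/TO/INFERENCE-SPARK-DISTRIBUTION/FILE.jar -i test.nt -o out/ -p rdfs
```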
+
+and for Flink
+
+```bash
+/PATH/TO/FLINK/bin/flink run [flink-options] /PATH/TO/INFERENCE-FLINK-DISTRIBUTION/FILE.jar [inference-api-arguments]
+```
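Analogously, `[flink-options]` could include the standard `flink run` parallelism flag; a sketch assuming a running local Flink cluster:

```bash
# Hypothetical example: Flink's -p (before the jar) sets the job parallelism,
# while the -p after the jar selects the reasoning profile
/PATH/TO/FLINK/bin/flink run -p 4 \
  /PATH/TO/INFERENCE-FLINK-DISTRIBUTION/FILE.jar -i test.nt -o out/ -p rdfs
```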
+
+In addition, we also provide shell scripts that wrap the Spark (resp. Flink) deployment and can be used by first
+setting the environment variable `SPARK_HOME` (resp. `FLINK_HOME`) and then calling
+```bash
+/PATH/TO/INFERENCE-DISTRIBUTION/bin/cli [inference-api-arguments]
+```
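A minimal sketch of such a session, assuming a Spark installation at the placeholder path `/PATH/TO/SPARK`:

```bash
# Hypothetical example: point the wrapper at the Spark installation, then invoke the CLI
export SPARK_HOME=/PATH/TO/SPARK
/PATH/TO/INFERENCE-DISTRIBUTION/bin/cli -i test.nt -o out/ -p rdfs
```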
+(Note that setting Spark (resp. Flink) options isn't supported here; it has to be done via the corresponding config files.)
+
 ### Example

-`RDFGraphMaterializer -i /PATH/TO/FILE/test.nt -o /PATH/TO/TEST_OUTPUT_DIRECTORY/ -p rdfs` will compute the RDFS materialization on the data contained in `test.nt` and write the inferred RDF graph to the given directory `TEST_OUTPUT_DIRECTORY`.
+```bash
+RDFGraphMaterializer -i /PATH/TO/FILE/test.nt -o /PATH/TO/TEST_OUTPUT_DIRECTORY/ -p rdfs
+```
+will compute the RDFS materialization on the data contained in `test.nt` and write the inferred RDF graph to the given directory `TEST_OUTPUT_DIRECTORY`.
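To give a sense of what RDFS materialization produces, here is a hypothetical toy input and one triple the RDFS rule `rdfs2` (domain entailment) would add; the `example.org` URIs are placeholders:

```
# input (test.nt)
<http://example.org/knows> <http://www.w3.org/2000/01/rdf-schema#domain> <http://example.org/Person> .
<http://example.org/alice> <http://example.org/knows> <http://example.org/bob> .

# inferred in addition (rule rdfs2: domain of ex:knows)
<http://example.org/alice> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://example.org/Person> .
```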
 
 ## Supported Reasoning Profiles
 