# SANSA Inference Layer
[](https://maven-badges.herokuapp.com/maven-central/net.sansa-stack/sansa-inference-parent_2.11)
[](https://ci.aksw.org/jenkins/job/SANSA%20Inference%20Layer/job/develop/)

**Table of Contents**

- [SANSA Inference Layer](#sansa-inference-layer)
  - [Structure](#structure)
    - [sansa-inference-common](#sansa-inference-common)
    - [sansa-inference-spark](#sansa-inference-spark)
    - [sansa-inference-flink](#sansa-inference-flink)
    - [sansa-inference-tests](#sansa-inference-tests)
  - [Setup](#setup)
    - [Prerequisites](#prerequisites)
    - [From source](#from-source)
    - [Using Maven pre-build artifacts](#using-maven-pre-build-artifacts)
    - [Using SBT](#using-sbt)
  - [Usage](#usage)
    - [Example](#example)
  - [Supported Reasoning Profiles](#supported-reasoning-profiles)
    - [RDFS](#rdfs)
    - [RDFS Simple](#rdfs-simple)
    - [OWL Horst](#owl-horst)

## Structure
### sansa-inference-common
* common data structures
where `VERSION` is the released version you want to use.

## Usage
Besides using the Inference API in your application code, we also provide a command line interface with various options that allows for a convenient way to use the core reasoning algorithms:
```
RDFGraphMaterializer 0.1.0
Usage: RDFGraphMaterializer [options]

  -i, --input <path1>,<path2>,...
                           path to file or directory that contains the input files (in N-Triples format)
  -o, --out <directory>    the output directory
  --properties <property1>,<property2>,...
                           list of properties for which the transitive closure will be computed (used only for profile 'transitive')
  -p, --profile {rdfs | rdfs-simple | owl-horst | transitive}
                           the reasoning profile
  --single-file            write the output to a single file in the output directory
  --sorted                 sorted output of the triples (per file)
  --parallelism <value>    the degree of parallelism, i.e. the number of Spark partitions used in the Spark operations
  --help                   prints this usage text
```
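To make the `transitive` profile concrete: it materializes the transitive closure of the listed properties, i.e. whenever `(a p b)` and `(b p c)` hold, `(a p c)` is derived, repeated until a fixpoint is reached. A toy in-memory sketch of that fixpoint computation (plain Scala over pairs; not SANSA's distributed implementation):

```scala
object TransitiveClosureSketch {
  // Naive fixpoint: repeatedly join the relation with itself
  // until no new (subject, object) pairs are produced.
  def closure(edges: Set[(String, String)]): Set[(String, String)] = {
    var tc = edges
    var changed = true
    while (changed) {
      val derived = for ((a, b) <- tc; (b2, c) <- tc if b == b2) yield (a, c)
      val next = tc ++ derived
      changed = next.size > tc.size
      tc = next
    }
    tc
  }

  def main(args: Array[String]): Unit = {
    // e.g. :a :partOf :b . :b :partOf :c .  -- derives (a, c) as well
    val pairs = Set(("a", "b"), ("b", "c"))
    println(closure(pairs))
  }
}
```

SANSA performs the same join-until-fixpoint idea as distributed Spark (resp. Flink) operations rather than an in-memory loop.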
This can easily be used when submitting the job to Spark (resp. Flink), e.g. for Spark

```bash
/PATH/TO/SPARK/bin/spark-submit [spark-options] /PATH/TO/INFERENCE-SPARK-DISTRIBUTION/FILE.jar [inference-api-arguments]
```

and for Flink

```bash
/PATH/TO/FLINK/bin/flink run [flink-options] /PATH/TO/INFERENCE-FLINK-DISTRIBUTION/FILE.jar [inference-api-arguments]
```

In addition, we also provide shell scripts that wrap the Spark (resp. Flink) deployment and can be used by first setting the environment variable `SPARK_HOME` (resp. `FLINK_HOME`) and then calling
```bash
/PATH/TO/INFERENCE-DISTRIBUTION/bin/cli [inference-api-arguments]
```
(Note that setting Spark (resp. Flink) options isn't supported here; it has to be done via the corresponding config files.)

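Putting this together, a full invocation via the wrapper script might look like the following (all paths are placeholders for your local setup; the flags are those from the usage text above):

```shell
# point SPARK_HOME at your local Spark installation
export SPARK_HOME=/usr/local/spark

# RDFS materialization over test.nt, written as a single sorted file
/PATH/TO/INFERENCE-DISTRIBUTION/bin/cli \
  -i /data/test.nt \
  -o /data/out/ \
  -p rdfs \
  --single-file --sorted
```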
### Example

```bash
RDFGraphMaterializer -i /PATH/TO/FILE/test.nt -o /PATH/TO/TEST_OUTPUT_DIRECTORY/ -p rdfs
```
will compute the RDFS materialization on the data contained in `test.nt` and write the inferred RDF graph to the given directory `TEST_OUTPUT_DIRECTORY`.

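As a rough intuition for what RDFS materialization derives, consider the subclass entailment rule (rdfs9): from `(s rdf:type C1)` and `(C1 rdfs:subClassOf C2)` it infers `(s rdf:type C2)`. A toy in-memory sketch of applying this single rule to a fixpoint (plain Scala; not SANSA's Spark-based reasoner, which implements the full rule set):

```scala
object Rdfs9Sketch {
  type Triple = (String, String, String)

  // Propagate rdf:type along rdfs:subClassOf until no new triples appear.
  def materialize(triples: Set[Triple]): Set[Triple] = {
    var graph = triples
    var changed = true
    while (changed) {
      // extract (s, C1) type assertions and (C1, C2) subclass axioms
      val types = graph.collect { case (s, "rdf:type", c) => (s, c) }
      val subs  = graph.collect { case (c1, "rdfs:subClassOf", c2) => (c1, c2) }
      val inferred = for ((s, c1) <- types; (c1b, c2) <- subs if c1 == c1b)
        yield (s, "rdf:type", c2)
      val next = graph ++ inferred
      changed = next.size > graph.size
      graph = next
    }
    graph
  }

  def main(args: Array[String]): Unit = {
    val input = Set(
      (":alice", "rdf:type", ":Student"),
      (":Student", "rdfs:subClassOf", ":Person"),
      (":Person", "rdfs:subClassOf", ":Agent")
    )
    materialize(input).foreach(println)
  }
}
```

Running this on the sample input additionally derives `:alice rdf:type :Person` and, via the fixpoint, `:alice rdf:type :Agent`.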
## Supported Reasoning Profiles