This repository was archived by the owner on Oct 8, 2020. It is now read-only.

Commit b9ee77f

Merge branch 'release/0.1.0'
2 parents 321f22a + adf5606

File tree

1,982 files changed: +71,045 additions, −0 deletions


.gitignore

Lines changed: 2 additions & 0 deletions
```diff
@@ -15,3 +15,5 @@ project/plugins/project/
 # Scala-IDE specific
 .scala_dependencies
 .worksheet
+*.iml
+.idea
```

README.md

Lines changed: 146 additions & 0 deletions
# SANSA Inference Layer
[![Build Status](https://ci.aksw.org/jenkins/job/SANSA%20Inference%20Layer/job/develop/badge/icon)](https://ci.aksw.org/jenkins/job/SANSA%20Inference%20Layer/job/develop/)

## Structure

### sansa-inference-common
* common data structures
* rule dependency analysis

### sansa-inference-spark
Contains the core Inference API based on Apache Spark.

### sansa-inference-flink
Contains the core Inference API based on Apache Flink.

### sansa-inference-tests
Contains common test classes and data.

## Setup

### Prerequisites
* Maven 3.x
* Java 8
* Scala 2.11 (support for Scala 2.10 is planned)
* Apache Spark 2.x
* Apache Flink 1.x

### From source

To install the SANSA Inference API from source, clone the repository via Git and build it with Maven:
```shell
git clone https://github.com/SANSA-Stack/SANSA-Inference.git
cd SANSA-Inference
mvn clean install
```
Afterwards, add the dependency to your `pom.xml`.

For Apache Spark:
```xml
<dependency>
  <groupId>net.sansa-stack</groupId>
  <artifactId>sansa-inference-spark_2.11</artifactId>
  <version>VERSION</version>
</dependency>
```
and for Apache Flink:
```xml
<dependency>
  <groupId>net.sansa-stack</groupId>
  <artifactId>sansa-inference-flink_2.11</artifactId>
  <version>VERSION</version>
</dependency>
```
with `VERSION` being the released version you want to use.

### Using Maven pre-built artifacts

1. Add the AKSW Maven repositories to your `pom.xml` (the artifacts will be added to Maven Central soon):
```xml
<repository>
  <id>maven.aksw.snapshots</id>
  <name>University Leipzig, AKSW Maven2 Repository</name>
  <url>http://maven.aksw.org/archiva/repository/snapshots</url>
  <releases>
    <enabled>false</enabled>
  </releases>
  <snapshots>
    <enabled>true</enabled>
  </snapshots>
</repository>

<repository>
  <id>maven.aksw.internal</id>
  <name>University Leipzig, AKSW Maven2 Internal Repository</name>
  <url>http://maven.aksw.org/archiva/repository/internal</url>
  <releases>
    <enabled>true</enabled>
  </releases>
  <snapshots>
    <enabled>false</enabled>
  </snapshots>
</repository>
```
2. Add the dependency to your `pom.xml`.

For Apache Spark:
```xml
<dependency>
  <groupId>net.sansa-stack</groupId>
  <artifactId>sansa-inference-spark_2.11</artifactId>
  <version>VERSION</version>
</dependency>
```
and for Apache Flink:
```xml
<dependency>
  <groupId>net.sansa-stack</groupId>
  <artifactId>sansa-inference-flink_2.11</artifactId>
  <version>VERSION</version>
</dependency>
```
with `VERSION` being the released version you want to use.

### Using SBT
The SANSA Inference API has not been published on Maven Central yet, so you have to add the AKSW repositories as additional resolvers:
```scala
resolvers ++= Seq(
  "AKSW Maven Releases" at "http://maven.aksw.org/archiva/repository/internal",
  "AKSW Maven Snapshots" at "http://maven.aksw.org/archiva/repository/snapshots"
)
```
Then add a dependency on either the Apache Spark or the Apache Flink module.

For Apache Spark add
```scala
"net.sansa-stack" % "sansa-inference-spark_2.11" % VERSION
```

and for Apache Flink add
```scala
"net.sansa-stack" % "sansa-inference-flink_2.11" % VERSION
```

where `VERSION` is the released version you want to use.
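
Putting the resolvers and the dependency together, a minimal `build.sbt` might look like the following sketch. The project name is a placeholder, and the version `0.1.0` and Scala version `2.11.8` are assumptions (the release this commit merges is 0.1.0); substitute the values you actually use:
```scala
// Minimal build.sbt sketch -- project name, version, and Scala version are assumptions.
name := "sansa-inference-example"

scalaVersion := "2.11.8" // any Scala 2.11.x should work; Scala 2.10 support is planned

// extra resolvers, since the artifacts are not on Maven Central yet
resolvers ++= Seq(
  "AKSW Maven Releases" at "http://maven.aksw.org/archiva/repository/internal",
  "AKSW Maven Snapshots" at "http://maven.aksw.org/archiva/repository/snapshots"
)

// pick the Spark module here; use sansa-inference-flink_2.11 for Flink instead
libraryDependencies += "net.sansa-stack" % "sansa-inference-spark_2.11" % "0.1.0"
```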

## Usage
```
RDFGraphMaterializer 0.1.0
Usage: RDFGraphMaterializer [options]

  -i <file> | --input <file>
        the input file in N-Triple format
  -o <directory> | --out <directory>
        the output directory
  --single-file
        write the output to a single file in the output directory
  --sorted
        sorted output of the triples (per file)
  -p {rdfs | owl-horst} | --profile {rdfs | owl-horst}
        the reasoning profile
  --help
        prints this usage text
```

### Example

`RDFGraphMaterializer -i /PATH/TO/FILE/test.nt -o /PATH/TO/TEST_OUTPUT_DIRECTORY/ -p rdfs` will compute the RDFS materialization on the data contained in `test.nt` and write the inferred RDF graph to the given directory `TEST_OUTPUT_DIRECTORY`.
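
The same materialization can in principle be run programmatically from Spark instead of through the CLI. The sketch below only illustrates the idea: the class names (`RDFGraphLoader`, `ForwardRuleReasonerRDFS`, `RDFGraphWriter`) and their signatures are assumptions inferred from the module layout above, not an API verified against this release:
```scala
// Hypothetical sketch of programmatic RDFS materialization with the Spark module.
// All SANSA class names and signatures below are ASSUMPTIONS, not a verified API.
import org.apache.spark.{SparkConf, SparkContext}
// import net.sansa_stack.inference.spark...   // hypothetical package path

object RDFSMaterializationExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("RDFS Materialization").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // load an RDF graph from an N-Triples file (hypothetical loader)
    val graph = RDFGraphLoader.loadFromFile("/PATH/TO/FILE/test.nt", sc)

    // forward-chaining reasoner for the RDFS profile (hypothetical class)
    val reasoner = new ForwardRuleReasonerRDFS(sc)
    val inferredGraph = reasoner.apply(graph)

    // write the materialized graph to the output directory (hypothetical writer)
    RDFGraphWriter.writeToFile(inferredGraph, "/PATH/TO/TEST_OUTPUT_DIRECTORY/")

    sc.stop()
  }
}
```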
