This repository was archived by the owner on Apr 22, 2022. It is now read-only.

Commit 125289f

Merge branch 'ontos-geoknow'

Conflicts:
    src/main/resources/framework-components.ttl
    src/main/webapp/js/workbench/linking-and-fusing/limes-controller.js

2 parents: 02e48b6 + e07c75b


45 files changed (+4063, -1410 lines)

README.md

Lines changed: 16 additions & 20 deletions
@@ -4,44 +4,40 @@ The GeoKnow Generator provides a workbench that integrates tools developed with
 
 ## Install
 
-* __From Debian package__: The GeoKnow Generator UI is available as a Debian package; to install it, follow [these](http://stack.linkeddata.org/documentation/installation-of-a-local-generator-demonstrator/) instructions.
-
-* __From source__: You can use `maven package` to package the sources in a war file and deploy it on a servlet container.
+* __From Debian package__: The GeoKnow Generator UI is available as a Debian package; to install it, follow [these](http://stack.linkeddata.org/documentation/installation-of-a-local-generator-demonstrator/) instructions. This is a preconfigured application that assumes a triple store is installed on localhost.
+* __From source__: Follow the configuration instructions below, then use `maven package` to package the sources into a war file and deploy it on a servlet container.
 
 These options will not install any integrated component from the stack; you need to install each one yourself. You can either use Debian packages again, following [these](http://stack.linkeddata.org/documentation/installation-of-a-local-generator-demonstrator/) instructions, or install manually following each developer's component installation guide.
 
 ## Configuration
 
-### Application configuration
-
-1. Make a copy of `src/main/resources/framework-configuration-template.ttl` to `src/main/resources/framework-configuration.ttl` and provide the ***REMOVED*** data.
+### Requirements
 
-2. Have a clean triple store and make sure a user is created in the lds:StorageService; the lds:SecuredSPARQLEndPointService user will be created automatically.
-
-3. Make a copy of `src/main/webapp/WEB-INF/web-template.xml` to `src/main/webapp/WEB-INF/web.xml` and provide the ***REMOVED*** data.
+* Have a clean triple store and make sure a superuser has been created.
 
+### Application configuration
 
-Note that the endpoint has to support the [UPDATE](http://www.w3.org/TR/2013/REC-sparql11-update-20130321/) service and the [Graph Store HTTP Protocol](http://www.w3.org/TR/2013/REC-sparql11-http-rdf-update-20130321/).
+1. Open `src/main/resources/framework-configuration.ttl` and provide the endpoint, user, and password information in the lds:StorageService element. The lds:SecuredSPARQLEndPointService user will be created automatically during setup.
 
-## Optional Extra Configuration
+2. Open `src/main/resources/framework-components.ttl` and make sure the URLs for all stack components are accurate.
 
-Depending on the components you use in the Generator, some extra configuration may be required.
+3. Edit `src/main/webapp/WEB-INF/web.xml` and provide the ***REMOVED*** data.
 
-### Using Virtuoso Endpoint
+Note that the endpoint has to support the [UPDATE](http://www.w3.org/TR/2013/REC-sparql11-update-20130321/) service and the [Graph Store HTTP Protocol](http://www.w3.org/TR/2013/REC-sparql11-http-rdf-update-20130321/).
 
-If you have a Virtuoso Endpoint, you can configure the following:
+### Application setup
 
-1. Enable SPARQL Update on a Virtuoso SPARQL endpoint by executing the following lines in the isql utility:
+To initialize the application, navigate to the URL where it has been deployed and a setup page will be shown. The application requires a system directory (e.g. `/etc/generator/`) with write privileges, in which it creates an init file used as a flag that the system has been set up.
 
-    $ isql-vt
-    GRANT SPARQL_UPDATE TO "SPARQL"
-    GRANT EXECUTE ON DB.DBA.L_O_LOOK TO "SPARQL"
+### Virtuoso Endpoint
 
-2. [Enable CORS for the Virtuoso](http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/VirtTipsAndTricksCORsEnableSPARQLURLs) SPARQL endpoint.
+Currently only the Virtuoso store is supported; it has been tested with version 7.1. The Generator uses Virtuoso user management to allow the creation of private graphs and to grant access to them to specific users.
 
-### Authentication configuration
+## Optional Extra Configuration
 
+Depending on the components you use in the Generator, some extra configuration may be required.
 
 ### Using OntoWiki-Virtuoso
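The endpoint capability note above is easy to verify up front: a SPARQL 1.1 Update request is an HTTP POST whose body uses the `application/sparql-update` media type. Below is a minimal probe sketch in Java; the endpoint URL, graph, and triple are placeholder assumptions (Virtuoso commonly serves SPARQL on port 8890), not values taken from this commit.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class EndpointProbe {
    public static void main(String[] args) throws Exception {
        // assumed endpoint; Virtuoso's default SPARQL endpoint is typically :8890/sparql
        URL url = new URL("http://localhost:8890/sparql");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        // SPARQL 1.1 Protocol: update requests are POSTed with this media type
        con.setRequestProperty("Content-Type", "application/sparql-update");
        String update = "INSERT DATA { GRAPH <http://example.org/probe> "
                + "{ <http://example.org/s> <http://example.org/p> \"ok\" } }";
        try (OutputStream os = con.getOutputStream()) {
            os.write(update.getBytes("UTF-8"));
        }
        // a 2xx response means the store accepted the update, i.e. UPDATE is enabled
        System.out.println("HTTP " + con.getResponseCode());
    }
}
```

If the probe returns 401 or 403, the SPARQL user lacks update rights; on Virtuoso that corresponds to the `GRANT SPARQL_UPDATE TO "SPARQL"` step this revision removes from the README.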

pom.xml

Lines changed: 49 additions & 32 deletions
@@ -1,11 +1,11 @@
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
   <modelVersion>4.0.0</modelVersion>
-  <groupId>eu.geoknow</groupId>
-  <artifactId>generator</artifactId>
-  <packaging>war</packaging>
-  <version>1.0.1</version>
-  <name>GeoKnow Generator</name>
+  <groupId>eu.geoknow</groupId>
+  <artifactId>generator</artifactId>
+  <packaging>war</packaging>
+  <version>1.0.1</version>
+  <name>GeoKnow Generator</name>
 
   <properties>
     <endorsed.dir>${project.build.directory}/endorsed</endorsed.dir>
@@ -27,10 +27,10 @@
       <scope>test</scope>
     </dependency>
     <dependency>
-      <groupId>javax.servlet</groupId>
-      <artifactId>servlet-api</artifactId>
-      <version>2.5</version>
-      <scope>provided</scope>
+      <groupId>javax.servlet</groupId>
+      <artifactId>servlet-api</artifactId>
+      <version>2.5</version>
+      <scope>provided</scope>
     </dependency>
     <dependency>
       <groupId>commons-fileupload</groupId>
@@ -53,22 +53,42 @@
       <type>pom</type>
       <version>2.10.1</version>
     </dependency>
-    <dependency>
-      <groupId>javax.mail</groupId>
-      <artifactId>mail</artifactId>
-      <version>1.4.7</version>
-    </dependency>
+    <dependency>
+      <groupId>javax.mail</groupId>
+      <artifactId>mail</artifactId>
+      <version>1.4.7</version>
+    </dependency>
+    <dependency>
+      <groupId>com.google.code.gson</groupId>
+      <artifactId>gson</artifactId>
+      <version>2.2.4</version>
+    </dependency>
 
-    <!-- for digest authentication -->
-    <dependency>
-      <groupId>commons-codec</groupId>
-      <artifactId>commons-codec</artifactId>
-      <version>1.9</version>
-    </dependency>
-  </dependencies>
+    <!-- for digest authentication -->
+    <dependency>
+      <groupId>commons-codec</groupId>
+      <artifactId>commons-codec</artifactId>
+      <version>1.9</version>
+    </dependency>
+
+    <!-- Logger -->
+    <dependency>
+      <groupId>log4j</groupId>
+      <artifactId>log4j</artifactId>
+      <version>1.2.16</version>
+    </dependency>
+
+    <!-- Javax web services with Jersey -->
+    <dependency>
+      <groupId>org.glassfish.jersey.containers</groupId>
+      <artifactId>jersey-container-servlet</artifactId>
+      <version>2.12</version>
+    </dependency>
+
+  </dependencies>
 
   <build>
-    <finalName>generator</finalName>
+    <finalName>${project.artifactId}</finalName>
     <plugins>
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
@@ -93,11 +113,11 @@
       <!-- Maven Tomcat Plugin -->
       <plugin>
         <version>1.1</version>
-        <groupId>org.codehaus.mojo</groupId>
-        <artifactId>tomcat-maven-plugin</artifactId>
-        <configuration>
-          <url>http://127.0.0.1:8080/manager/text</url>
-          <server>tomcat</server>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>tomcat-maven-plugin</artifactId>
+        <configuration>
+          <url>http://127.0.0.1:8080/manager/text</url>
+          <server>tomcat</server>
           <path>/${project.build.finalName}</path>
         </configuration>
       </plugin>
@@ -114,12 +134,9 @@
         <artifactId>maven-antrun-plugin</artifactId>
         <configuration>
           <tasks>
-            <echo
-              message="Creating debian package">
+            <echo message="Creating debian package">
             </echo>
-            <exec
-              dir="${basedir}/deb-package"
-              executable="${basedir}/deb-package/build_debpkg.sh"
+            <exec dir="${basedir}/deb-package" executable="${basedir}/deb-package/build_debpkg.sh"
               failonerror="true">
             </exec>
           </tasks>
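Two of the new dependencies hint at where the code is heading: `gson` for JSON (de)serialization and `jersey-container-servlet` for JAX-RS web services. As a rough illustration only, the sketch below shows the kind of annotated resource class the Jersey container can serve once it is wired into `web.xml`; the class name and path are invented for the example and are not part of this commit.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// hypothetical JAX-RS resource: Jersey discovers the annotations and maps
// GET <context>/status to this method, returning JSON
@Path("/status")
public class StatusResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String status() {
        // a real resource would build this with the newly added Gson dependency
        return "{\"status\":\"ok\"}";
    }
}
```

Also worth noting: with `<finalName>${project.artifactId}</finalName>` the build still produces `generator.war`, since the artifactId is `generator`; the change just removes the hard-coded duplicate.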

src/main/java/ImportRDFString.java

Lines changed: 201 additions & 0 deletions
@@ -0,0 +1,201 @@
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.HashMap;
import java.util.UUID;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.codehaus.jackson.map.ObjectMapper;

import rdf.RdfStoreManager;
import util.HttpUtils;
import util.JsonResponse;
import accounts.FrameworkUserManager;
import authentication.FrameworkConfiguration;

import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.RDFNode;
import com.hp.hpl.jena.rdf.model.Resource;
import com.hp.hpl.jena.rdf.model.Statement;
import com.hp.hpl.jena.rdf.model.StmtIterator;

public class ImportRDFString extends HttpServlet {

    private static final long serialVersionUID = 1L;

    private String endpoint;
    private static String uriBase;
    private String graph;
    private String saveString;
    private String username;
    private String token;

    // prefixes prepended to every submitted snippet before parsing
    private String prefixes = "@prefix lgdo: <http://linkedgeodata.org/ontology/> . "
            + "@prefix owl: <http://www.w3.org/2002/07/owl#> . ";

    public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        doPost(request, response);
    }

    public void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException {

        response.setContentType("application/json");

        PrintWriter out = response.getWriter();

        // get the path where files were uploaded
        String filePath = getServletContext().getRealPath(File.separator)
                + getServletContext().getInitParameter("file-upload").replaceFirst("/", "");
        filePath = "file:///" + filePath.replace("\\", "/");
        filePath = filePath.replace(" ", "%20");
        JsonResponse res = new JsonResponse();
        ObjectMapper mapper = new ObjectMapper();

        endpoint = request.getParameter("params[endpoint]");
        graph = request.getParameter("params[graph]");
        saveString = prefixes + request.getParameter("params[saveString]");
        username = request.getParameter("params[username]");
        token = HttpUtils.getCookieValue(request, "token");

        try {
            int inserted = stringImport(endpoint, graph, saveString);
            res.setStatus("SUCCESS");
            res.setMessage("Data Imported " + inserted + " triples");
        } catch (Exception e) {
            res.setStatus("FAIL");
            res.setMessage(e.getMessage());
            e.printStackTrace();
        }

        mapper.writeValue(out, res);
        out.close();
    }

    // parse the submitted string as N3 into a Jena model and push it to the store
    private int stringImport(String destEndpoint, String graph, String saveString)
            throws Exception {
        Model model = ModelFactory.createDefaultModel();
        model.read(new ByteArrayInputStream(saveString.getBytes()), null, "N3");
        int inserted = httpUpdate(destEndpoint, graph, model);
        return inserted;
    }

    private int httpUpdate(String endpoint, String graph, Model model) throws Exception {
        // use the authenticated user's store manager when a valid token cookie is present
        RdfStoreManager rdfStoreManager = null;
        if (username != null && !username.isEmpty() && token != null && !token.isEmpty()) {
            FrameworkUserManager frameworkUserManager = FrameworkConfiguration.getInstance(
                    getServletContext()).getFrameworkUserManager();
            if (frameworkUserManager.checkToken(username, token))
                rdfStoreManager = frameworkUserManager.getRdfStoreManager(username);
        }

        // generate queries of 100 lines each
        StmtIterator stmts = model.listStatements();
        int linesLimit = 100, linesCount = 0, total = 0;
        HashMap<String, String> blancNodes = new HashMap<String, String>();

        Model tmpModel = ModelFactory.createDefaultModel();

        while (stmts.hasNext()) {

            if (linesCount < linesLimit) {

                Statement stmt = stmts.next();
                Resource subject = null;
                RDFNode object = null;
                // find bnodes to skolemise them
                if (stmt.getSubject().isAnon()) {
                    String oldBN = stmt.getSubject().asNode().getBlankNodeLabel();
                    if (blancNodes.containsKey(oldBN)) {
                        subject = tmpModel.getResource(blancNodes.get(oldBN));
                    } else {
                        String newBN = uriBase + "bnode#" + UUID.randomUUID();
                        blancNodes.put(oldBN, newBN);
                        subject = tmpModel.createResource(newBN);
                    }
                } else
                    subject = stmt.getSubject();

                if (stmt.getObject().isAnon()) {
                    String oldBN = stmt.getObject().asNode().getBlankNodeLabel();
                    if (blancNodes.containsKey(oldBN)) {
                        object = tmpModel.getResource(blancNodes.get(oldBN));
                    } else {
                        String newBN = uriBase + "bnode#" + UUID.randomUUID();
                        blancNodes.put(oldBN, newBN);
                        object = tmpModel.createResource(newBN);
                    }
                } else
                    object = stmt.getObject();

                tmpModel.add(subject, stmt.getPredicate(), object);
                linesCount++;
            } else {
                // batch is full: serialize the buffered statements and send one INSERT
                ByteArrayOutputStream os = new ByteArrayOutputStream();
                tmpModel.write(os, "N-TRIPLES");

                if (rdfStoreManager != null) {
                    String queryString = "INSERT DATA { GRAPH <" + graph + "> { " + os.toString() + " } }";
                    os.close();
                    rdfStoreManager.execute(queryString, null);
                } else {
                    String queryString = "INSERT { " + os.toString() + "}";
                    os.close();

                    // fall back to a plain HTTP SPARQL update against the given endpoint
                    HttpSPARQLUpdate p = new HttpSPARQLUpdate();
                    p.setEndpoint(endpoint);
                    p.setGraph(graph);
                    p.setUpdateString(queryString);

                    if (!p.execute())
                        throw new Exception("UPDATE/SPARQL failed: " + queryString);
                }

                total += linesCount;
                linesCount = 0;
                tmpModel.removeAll();
            }

        }

        // flush the remaining statements of the last, partial batch
        if (!tmpModel.isEmpty()) {

            ByteArrayOutputStream os = new ByteArrayOutputStream();
            tmpModel.write(os, "N-TRIPLES");

            if (rdfStoreManager != null) {
                String queryString = "INSERT DATA { GRAPH <" + graph + "> { " + os.toString() + "} }";
                os.close();
                rdfStoreManager.execute(queryString, null);
            } else {
                String queryString = "INSERT { " + os.toString() + "}";
                os.close();

                HttpSPARQLUpdate p = new HttpSPARQLUpdate();
                p.setEndpoint(endpoint);
                p.setGraph(graph);
                p.setUpdateString(queryString);

                if (!p.execute())
                    throw new Exception("UPDATE/SPARQL failed: " + queryString);
            }

            total += linesCount;

        }

        return total;

    }

}
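The servlet reads its inputs from form parameters (`params[endpoint]`, `params[graph]`, `params[saveString]`, `params[username]`) and a `token` cookie; with a valid token it routes the insert through the user's `RdfStoreManager`, otherwise it falls back to `HttpSPARQLUpdate`. A client sketch follows; the servlet mapping is defined in `web.xml` rather than in this commit, so the `/ImportRDFString` path and all URLs here are assumptions.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class ImportRDFStringClient {
    public static void main(String[] args) throws Exception {
        // assumed deployment URL and servlet mapping
        URL url = new URL("http://localhost:8080/generator/ImportRDFString");
        // the owl: prefix is valid because the servlet prepends its prefix block
        String n3 = "<http://example.org/a> owl:sameAs <http://example.org/b> . ";
        // parameter names match the request.getParameter calls in doPost above
        String body = "params[endpoint]=" + URLEncoder.encode("http://localhost:8890/sparql", "UTF-8")
                + "&params[graph]=" + URLEncoder.encode("http://example.org/graph", "UTF-8")
                + "&params[saveString]=" + URLEncoder.encode(n3, "UTF-8")
                + "&params[username]=" + URLEncoder.encode("demo", "UTF-8");

        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        con.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream os = con.getOutputStream()) {
            os.write(body.getBytes("UTF-8"));
        }
        // the servlet answers with JSON, e.g. {"status":"SUCCESS","message":"Data Imported N triples"}
        System.out.println("HTTP " + con.getResponseCode());
    }
}
```

One subtlety visible in the code: `uriBase` is never assigned in this file, so skolemised blank-node URIs come out as `nullbnode#<uuid>` unless it is set elsewhere; callers are better off avoiding blank nodes in the submitted snippet.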

0 commit comments
