- 1 Licensing
- 2 Videos
- 3 Description
- 4 General Description of the GUI
- 5 Use
- 6 BBC Preset Examples
- 7 Using Local Files
- 8 Access with REST
- 9 Access with SOAP
- 10 Gephi: extensions in the SPARQL query
- 11 Pre and post processing using python
This plugin is made available through the CeCILL-B licence.
The main page for the following videos can be found at SemanticWebImport Plugin Videos.
The SemanticWebImport plugin is intended to allow the import of semantic data into Gephi. The imported data are obtained by processing a SPARQL request on the semantic data. The data can be accessed in one of three ways:
- by accessing local rdf, rdfs, and rul files, using the embedded Corese engine to apply the SPARQL request;
- by accessing a remote REST SPARQL endpoint. In that case, the SPARQL request is applied remotely and the graph is built locally by analyzing the result sent by the REST endpoint;
- by accessing a remote SOAP SPARQL endpoint. As for the REST endpoint, the resulting graph is built from the result returned by the endpoint.
We begin by showing how to run the preset examples that come with the plugin. Then we detail the three drivers used to import semantic data.
In all the following cases, a project must currently be open; otherwise the graph cannot be built.
General Description of the GUI
The plugin consists of four tabs:
How to access the data
This tab is used to select, among the available SPARQL drivers, how the semantic data are accessed. Currently, the choice is between (i) access to local data through the Corese engine; (ii) access to a remote REST SPARQL endpoint; (iii) access to a remote SOAP SPARQL endpoint.
Write the SPARQL query
The SPARQL editor is used to enter the SPARQL request that extracts the data used to build the graph. The request must be a construct request.
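As an illustration, a minimal construct request could look like the following (the FOAF vocabulary is used here purely as an example; any vocabulary matching your data works the same way):

```sparql
prefix foaf: <http://xmlns.com/foaf/0.1/>
construct { ?x foaf:knows ?y }
where     { ?x foaf:knows ?y }
```

Each ?x/?y pair matched by the where clause then produces two nodes and one edge in the resulting graph.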
This tab contains the log, i.e. the outputs of the plugin.
This tab manages the configurations: it contains the selector for preset examples and the Load button to activate them.
Launch the query
Note that the Run button used to launch the query is located at the bottom of the window.
To use the plugin, follow these steps:
- in tab 1, choose one of the three available drivers;
- parameterize the driver (tab 1, and see the following sections for more details);
- enter the SPARQL request, making sure it is a construct request. Each relation "?x ?r ?y" in the construct part creates the nodes ?x and ?y and an edge connecting them;
- In tab 4, choose:
- Whether the workspace has to be reset;
- If blank nodes must be ignored;
- Which level of "follow your nose" recursion you want.
- The Python pre- and post-processing scripts can be used via the Python scripting plugin (see http://wiki.gephi.org/index.php/Script_plugin for more details). The file /fr/inria/edelweiss/examples/autolayout.py comes as an example script.
- The processing of the query is launched by clicking on the Run button (at the bottom).
BBC Preset Examples
To run the BBC example, follow these steps:
- Create a new empty project;
- Select the "BBC" example in the Load Examples/Configurations selector (part 4 in the previous image);
- Click on Load. The GUI should then be similar to the corresponding screenshot;
- Launch the SPARQL driver by clicking on the start button (part 5). The BBC example connects to a SOAP SPARQL endpoint: the SPARQL request is processed remotely, then the result is returned to the plugin. The graph obtained should be similar to the corresponding screenshot.
Using Local Files
The Corese driver processes a SPARQL request locally on RDF files. The RDF files can be provided as:
- local files.
- files on the internet, i.e. URLs beginning with http://. For example, http://dbpedia.org/data/The_Beatles.rdf can be added and used as an input.
- resource files (i.e. RDF files coming from a jar file run by Gephi). Such a file name must begin with /. Three such files come embedded in the plugin: /fr/inria/edelweiss/examples/human_2007_09_11.rdf, /fr/inria/edelweiss/examples/human_2007_09_11.rdfs, and /fr/inria/edelweiss/examples/human_2007_09_11.rul.
The CoreseDriver is made of three parts:
- buttons to add a local file (+) or remove a resource (-);
- the list of resources;
- a text field and a button to add external resources, i.e. rdf files on the internet.
Access with REST
The REST SPARQL driver processes a SPARQL request on a remote SPARQL endpoint with a REST interface.
The Rest panel is made of the following parts:
- the URL for the endpoint;
- the name given to the query tag. Most often it is "query"; some endpoints use "q".
- some parameters to be added to the request. For example, "debug=on" can be provided by:
- Writing "debug" in place of "REST name" in part 3.
- Writing "on" in place of "REST value" in part 3.
- Clicking on +.
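With these settings, the driver issues an HTTP request whose shape is roughly the following (the endpoint URL is hypothetical, and the query is URL-encoded):

```
http://example.org/sparql?query=construct%20{...}%20where%20{...}&debug=on
```

If the endpoint expects "q" instead of "query", change the query tag name accordingly in part 2.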
Access with SOAP
The SOAP SPARQL driver has a single parameter, the URL of the endpoint.
Gephi: extensions in the SPARQL query
When building a query, some special keywords can be used to customize the results in Gephi. http://gephi.org/ is used as the namespace for this extension. It is recommended to add the line "namespace gephi: <http://gephi.org/>" at the beginning of the query.
- ?node gephi:label ?node_label fills the label of the node with the content of ?node_label;
- ?node gephi:size ?value sets the size of the node to the content of ?value;
- ?node gephi:color ?color_name sets the color of the node according to the content of ?color_name, which must be one of the known color names.
- ?node gephi:color_r ?value sets the red component of the color of the node to ?value. ?value must be between 0 and 255, inclusive;
- ?node gephi:color_g ?value sets the green component of the color of the node to ?value. ?value must be between 0 and 255, inclusive;
- ?node gephi:color_b ?value sets the blue component of the color of the node to ?value. ?value must be between 0 and 255, inclusive;
- ?node gephi:AttributeName ?anyValue creates a new attribute called "AttributeName" for all the nodes of the graph, and sets this attribute for the current node to the content of ?anyValue.
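For instance, a query combining these keywords could look like the sketch below (the FOAF vocabulary and the variable names are arbitrary illustrations; only the gephi: triples are specific to the plugin):

```sparql
namespace gephi: <http://gephi.org/>
namespace foaf:  <http://xmlns.com/foaf/0.1/>
construct {
  ?x foaf:knows  ?y .
  ?x gephi:label ?name .
  ?y gephi:label ?friendName
  # gephi:size, gephi:color, gephi:color_r, etc. follow the same pattern,
  # with a variable bound in the where part providing the value.
}
where {
  ?x foaf:knows ?y .
  ?x foaf:name  ?name .
  ?y foaf:name  ?friendName
}
```

Here the foaf:knows triples build the graph itself, while the gephi:label triples only decorate the nodes and do not create extra edges.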
Pre and post processing using python
As previously stated, the import of data can be pre- or post-processed with a script plugin. The example script below colors and sizes the nodes according to their degree, then applies a ForceAtlas2 layout.
```python
import org.openide.util.Lookup as Lookup
import org.gephi.ranking.api.RankingController as RankingController
import org.gephi.ranking.api.Ranking as Ranking
import org.gephi.ranking.api.Transformer as Transformer
import java.awt.Color as Color

rankingController = Lookup.getDefault().lookup(RankingController)

# Set the color as a function of the degree.
degreeRanking = rankingController.getModel().getRanking(Ranking.NODE_ELEMENT, Ranking.DEGREE_RANKING)
colorTransformer = rankingController.getModel().getTransformer(Ranking.NODE_ELEMENT, Transformer.RENDERABLE_COLOR)
colorTransformer.setColors([Color.BLUE, Color.YELLOW])
rankingController.transform(degreeRanking, colorTransformer)

# Set the size as a function of the degree of the nodes.
sizeTransformer = rankingController.getModel().getTransformer(Ranking.NODE_ELEMENT, Transformer.RENDERABLE_SIZE)
sizeTransformer.setMinSize(3)
sizeTransformer.setMaxSize(40)
rankingController.transform(degreeRanking, sizeTransformer)

### Layout of the graph
# Construction of a layout object.
import org.gephi.layout.plugin.forceAtlas2.ForceAtlas2Builder as ForceAtlas2Builder
import org.gephi.layout.plugin.forceAtlas2.ForceAtlas2 as ForceAtlas2
fa2builder = ForceAtlas2Builder()
fa2 = ForceAtlas2(fa2builder)

# Setting up the layout object.
import org.gephi.graph.api.GraphController as GraphController
graphModel = Lookup.getDefault().lookup(GraphController).getModel()
fa2.setGraphModel(graphModel)
fa2.setAdjustSizes(True)  # To prevent overlap.

print "executing layout"

# Run the layout.
fa2.initAlgo()
for i in range(5000):
    fa2.goAlgo()
```