
 Linkurious administration manual

Welcome to the Linkurious Enterprise administrator documentation. This documentation will help you install, run and customize Linkurious Enterprise.

Architecture overview

Linkurious Enterprise is a three-tier application.

The presentation layer is a Web application. It uses our graph visualization library, Ogma, to allow rich interactions with the graph. It also provides a user interface to enable data administration and collaboration among end users.

The presentation layer communicates with the logic layer via a JSON-based REST API. Custom presentation-layer applications can be developed on top of the logic layer.

The logic layer is a NodeJS-based server. It provides a unified REST API to read, write and search into graph databases from multiple vendors (Neo4j, JanusGraph and Cosmos DB). It also implements a security layer with modular authentication that enables role-based access control policies. It can be connected to multiple graph databases at the same time and offers high-level APIs for collaborative exploration of graphs: users can create, share and publish graph visualizations, and multiple users can edit graph data.

Administrators can control it from its REST API for easy automation and deployment.

Multiple external authentication providers are supported (LDAP, Microsoft Active Directory, Microsoft Azure Active Directory, Google Suite, OpenID Connect, SAML2 / ADFS).

The data layer supports several graph databases, as well as indexing engines.

 FAQ

Going to production

What should I do before going to production?

1. Ready your graph database for production

Consult with your vendor to make sure that your graph database is installed on appropriate hardware and configured for optimal performance:

Make sure that your graph database is secure:

2. Ready Elasticsearch for production

Keep in mind that Linkurious Enterprise can be used without Elasticsearch, see search options.

If you are using Linkurious Enterprise with Elasticsearch

3. Ready your user-data store for production

By default, SQLite is used for the user-data store. SQLite is not recommended for production environments: switch to MySQL/MariaDB/MSSQL instead.

Schedule regular backups of the user-data store:

Make sure your user-data-store database is secure

If you need high-availability, set up replication

4. Ready Linkurious Enterprise itself for production

How can Fault tolerance be achieved?

Linkurious Enterprise can be set up with a backup instance to allow for continuity of service when the main server crashes.

For this setup:

A reverse proxy is then configured to send requests to the backup server when the main server is down. If you are using nginx, this sample configuration can be used:

http {
    # define the "backend" upstream
    upstream backend {
        # main server
        server linkurious-main.example.com;
 
        # backup server
        server linkurious-backup.example.com backup;
    }
 
    # redirect all queries to the "backend" upstream
    server {
        location / {
            proxy_pass http://backend;
        }
    }
}

See nginx documentation for more details.

Fault-tolerance diagram

Security

Where is the user-data store located?

The user-data store database (containing visualizations, saved queries, users, groups, etc.) is stored in a SQL database.

By default, this database is an SQLite database (located at linkurious/data/database.sqlite). In production, the use of a MySQL/MariaDB/MSSQL database is recommended. These databases can be located on a remote server.

Is the user-data store encrypted?

The default user-data store (SQLite) is not encrypted.

Encryption is available with the following vendors:

Is it possible to delete the SQLite user-data store when using an external database?

Yes, when using an external user-data store (e.g. MariaDB, MySQL or MSSQL), the SQLite files can be deleted.

What kind of information is stored in the configuration file?

The configuration file contains all configurable options, as well as the configuration options of all configured data sources (e.g. User-Data Store host/port/username/encrypted password; Graph Database URL/username/encrypted password; Index Search URL/username/encrypted password, etc). All passwords/secrets in the configuration file are encrypted before storage.

The configuration file, like the rest of the data folder, should be considered private and not be readable by anyone other than the Linkurious Enterprise service account.

How are application secrets stored?

All application secrets stored by Linkurious Enterprise (Graph Database credentials, User-Data Store credentials, Index Search credentials, SSL certificate passphrase, etc.) are encrypted using the AES-256-CTR algorithm.

How are user credentials stored?

User passwords are strongly hashed before being stored in the database. Passwords for LDAP and other external authentication solutions are not stored at all.

Where is the audit trail stored?

The audit trail files are generated in linkurious/data/audit-trail by default. This path can be set in the audit trail configuration.

Does enabling the audit-trail require additional security measures?

The audit trail contains sensitive information and should be secured. It should be owned and readable only by the Linkurious Enterprise service account.

How can the data directory be secured?

The data directory contains logs, configuration files, and, if enabled, audit trails. This information is sensitive, and the directory should be owned and readable only by the Linkurious Enterprise service account.

What is a service account and why should I use one?

A service account is an operating system user account with restricted privileges that is used only to run a specific service and own the data related to that service. Service accounts are not intended to be used by people, except for performing administrative operations. Access to service accounts is usually tightly controlled using privileged access management solutions.

Service accounts prevent other users and services from reading or writing to sensitive files in the directories that they own, and are themselves prevented from reading and writing to other parts of the file system where they are not owners.

Can Kerberos be used for single sign-on?

We do not currently support Kerberos (but we do support many other third-party authentication services).

What do the log files contain?

Linkurious Enterprise creates three types of logs:

How can the communication with an LDAP server be secured?

If your LDAP server supports secure LDAP, use the "ldaps://" protocol in your LDAP configuration.
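As an illustration, the change amounts to using an ldaps:// URL in the LDAP settings. The sketch below is hypothetical: the key names, nesting, and host are placeholders, so refer to the authentication configuration documentation for the exact schema:

```json
{
  "access": {
    "ldap": {
      "enabled": true,
      "url": "ldaps://ldap.example.com:636"
    }
  }
}
```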

Can Elasticsearch be secured?

If you need authentication and transport layer security for Elasticsearch:

Can I customize the cryptographic ciphers used for TLS?

To customize supported TLS ciphers, in the general configuration, set tlsCipherList in the server section. Here is an example, based on Mozilla's recommended cipher list:

{
  "tlsCipherList": "TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:!eNULL:!aNULL"
}

Miscellaneous

What are PEM Certificates?

PEM (for Privacy-Enhanced Mail) is a file format for storing and sending cryptographic keys and certificates.

To verify if a certificate is PEM-encoded, open it with a text editor; it should look something like this:

-----BEGIN CERTIFICATE-----
MIICLDCCAdKgAwIBAgIBADAKBggqhkjOPQQDAjB9MQswCQYDVQQGEwJCRTEPMA0G
A1UEChMGR251VExTMSUwIwYDVQQLExxHbnVUTFMgY2VydGlmaWNhdGUgYXV0aG9y
DwEB/wQFAwMHBgAwHQYDVR0OBBYEFPC0gf6YEr+1KLlkQAPLzB9mTigDMAoGCCqG
SM49BAMCA0gAMEUCIDGuwD1KPyG+hRf88MeyMQcqOFZD0TbVleF+UsAGQ4enAiEA
l4wOuDwKQa+upc8GftXE2C//4mKANBC6It01gUaTIpo=
-----END CERTIFICATE-----

If you have a DER-encoded certificate (binary), it can be converted to PEM:
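For example, using OpenSSL (the input and output file names are placeholders):

```shell
# Convert a binary (DER) certificate to the PEM format
openssl x509 -inform der -in certificate.der -out certificate.pem
```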

Can I use Linkurious Enterprise without Elasticsearch?

Most graph vendors support search strategies other than Elasticsearch. See details on our search options page.

Can I use a custom tile server in geo-spatial mode?

Yes. See the geospatial configuration options for further details.

Can I use ESRI ArcGIS for geo-spatial mode?

Yes, you can configure ArcGIS as the tile-server for geo-spatial mode. The ArcGIS documentation describes the API endpoints that are compatible with Linkurious Enterprise:

For example:

What are the command-line utilities to administer Linkurious Enterprise?

Will enabling the audit trail impact performance?

Depending on the configuration options specified, enabling the audit trail can have an impact on performance. See the audit trail documentation for details.

 Getting started: Technical requirements

Linkurious Enterprise is a Web-application server. It needs to be installed on a server and can then be accessed by multiple users using their Web browser.

Linkurious Enterprise Server

Technical requirements for the machine used to install the Linkurious Enterprise Web-application server:

Hardware

Linkurious Enterprise hardware requirements change according to your needs and setup. Here are some scenarios with their suggested minimum hardware configurations.

Scenario 1

Projects with up to 20 users and a few alerts.

Using the embedded Elasticsearch¹:

Not using the embedded Elasticsearch:

Scenario 2

Projects with up to 100 users and tens of alerts.

Using the embedded Elasticsearch¹:

Not using the embedded Elasticsearch:

Scenario 3

Projects with more than 100 users and many alerts.

To maintain stable performance, it is necessary to move heavily loaded components to well-dimensioned dedicated servers/clusters:

Hardware requirements only for the Linkurious Enterprise server:

Extra information:

Linkurious Enterprise requires a 64-bit system to run.

¹The embedded Elasticsearch is not recommended when dealing with large amounts of data, see Elasticsearch documentation.

²Some extra space is required for the Elasticsearch full-text index. This space is proportional to the size of your graph database. A (very) rough estimate could be 50% of your graph database (it also depends on the actual data density).

³It is possible to configure Elasticsearch for higher memory usage; please contact us at support@linkurio.us.

⁴It is possible to configure Linkurious Enterprise for higher memory usage; please contact us at support@linkurio.us.

Please keep in mind that these technical requirements are for Linkurious Enterprise server only. For hardware requirements regarding your graph database, please refer to these guides:

Elasticsearch

Linkurious Enterprise includes an embedded Elasticsearch instance for search capabilities. Please keep in mind that this embedded instance will only work for smaller graphs (less than 50M nodes + edges). For larger graphs, you will need to deploy an Elasticsearch cluster. Please refer to Elasticsearch's hardware requirements guide for details.

Operating System

Linkurious Enterprise server can be deployed on the following platforms:

Java

The Linkurious Enterprise embedded Elasticsearch engine requires Java 7 or Java 8 from Oracle or OpenJDK.

If you do not intend to use the embedded Elasticsearch, change the configuration file to choose a different indexing strategy for all configured data-sources. With this configuration, the embedded Elasticsearch won't be started and the Java version won't be checked.

You can download the latest JRE from Oracle's website.

Installation instructions:

The JAVA_HOME environment variable should be set.
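For example, on a Linux system with OpenJDK 8 installed, it could be set as follows (the exact path depends on your distribution and Java package, so it is purely illustrative):

```shell
# Point JAVA_HOME at the Java installation directory (path is illustrative)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
```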

In order to use the embedded Elasticsearch on Windows 10, we strongly recommend using Java 8u211, as some other versions are incompatible with our embedded Elasticsearch.

SQLite and GLIBC 2.14

Linkurious Enterprise uses an embedded SQLite store for user-data persistence. This database requires GLIBC >= 2.14. Some older Linux distributions don't have this version of GLIBC available. You can check the version available on your system on http://distrowatch.com.
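Alternatively, on most Linux distributions, the GLIBC version can be checked directly on the machine:

```shell
# Print the GNU C Library version; the first line contains the version number
ldd --version | head -n 1
```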

If SQLite does not work on your system, please refer to the user-data store documentation section to learn how to use an alternative database.

Linkurious Enterprise Client

Technical requirements for users that access Linkurious Enterprise with their Web browser:

Hardware

Hardware requirements of the Linkurious Enterprise Web client vary with the size of the visualized graphs. For up to 500 nodes and edges in a single visualization, we recommend a machine with 4 GB RAM and 2 CPU cores @ 1.6 GHz.

The minimal screen resolution is 1024 x 768.

Web Browser

End-users will access Linkurious Enterprise through a Web browser. The following browsers are officially supported:

 Getting started: Downloading

Where to download from

The latest version of Linkurious Enterprise can be downloaded from https://get.linkurio.us/.

Log in with the username and password created during the purchase process, then go to the download section of the relevant license (in case you have several); from there, you can download the package for your platform.

Archive content

The ZIP file contains:

Staying up-to-date

Please see the Linkurious Enterprise version compatibility matrix and our documentation on how to update Linkurious Enterprise.

 Getting started: Installing

Security considerations

To work properly, Linkurious Enterprise only needs permissions (including write access) on its application directory; no administrative rights are needed.

The only exception may be operating system security policies that prevent standard users from binding applications to ports below 1024; see the web server configuration to learn more about this issue and how to avoid granting administrative rights.

As a best practice, it is advised to create a dedicated service account (e.g. linkurious) with the minimum level of permissions.

Linux systems

  1. Unzip Linkurious Enterprise archive: > unzip linkurious-linux-v2.10.18.zip
  2. Adjust file permissions to ensure access for the user executing the process
  3. Enter the Linkurious Enterprise folder: > cd linkurious-linux
  4. Check the configuration file at linkurious-linux/data/config/production.json (see how to configure a data-source)
  5. Make sure Java 7 or Java 8 is installed (type java -version in a terminal), if needed, install Java

See how to start Linkurious Enterprise on Linux.

Windows systems

  1. Remove any Windows security lock on the downloaded file (right-click the file, then "Properties" / "Unblock" / "OK")
  2. Unzip Linkurious Enterprise archive (right-click on the file, then "Extract all")
  3. Adjust file permissions to ensure access for the user executing the process
  4. Enter the linkurious-windows folder
  5. Check the configuration file at linkurious-windows/data/config/production.json (see how to configure a data-source)
  6. Make sure Java 7 or Java 8 is installed (see Java requirements)

See how to start Linkurious Enterprise on Windows.

Mac OS systems

  1. Unzip Linkurious Enterprise archive: > unzip linkurious-osx-v2.10.18.zip
  2. Adjust file permissions to ensure access for the user executing the process
  3. Enter the Linkurious Enterprise folder: > cd linkurious-osx
  4. Check the configuration file at linkurious-osx/data/config/production.json (see how to configure a data-source)
  5. Make sure Java 7 or Java 8 is installed (type java -version in a terminal), if needed, install Java

See how to start Linkurious Enterprise on Mac OS.

Docker Linux

  1. Load the docker image > docker load -i linkurious-docker-v2.10.18.tar.gz
  2. Port configuration

The Linkurious Enterprise docker image exposes ports 3000 and 3443 for HTTP and HTTPS connections respectively. These ports should be mapped on the host machine to allow user connections.

Please visit the docker documentation to learn how to publish the ports of a container.

  3. Volume configuration

Although not strictly necessary, the best practice is to define external named volumes to store application data outside the container.

The Linkurious Enterprise docker image doesn't declare any volumes; however, the folders below must be preserved when upgrading Linkurious Enterprise and should therefore be mapped to external volumes:

Please visit the docker documentation to learn how to configure volumes.

Here is an example to create named volumes (an arbitrary name can be chosen):

docker volume create lke-data
docker volume create lke-elasticsearch
  4. Install your Linkurious Enterprise license by following the manage your license docker steps.

See how to start Linkurious Enterprise with docker.

Install as a service

In order to run Linkurious Enterprise automatically when the operating system starts, it is possible to install it as a system service on Linux, Mac OS and Windows.

Open the administration menu by running menu.sh, menu.bat or menu.sh.command in the Linkurious Enterprise folder. Click on Install Linkurious as a system service (administrative rights may be needed to successfully complete the task).

Linkurious Enterprise automatically detects the owner of the folder and will use that user as the Process owner.

It is possible to use a different user by running the menu script with the option --user=USER (where USER is the desired Process owner with adequate permissions).

Uninstall from services

When Linkurious Enterprise is installed as a service, the administration menu (by running menu.sh, menu.bat or menu.sh.command in the Linkurious Enterprise folder) will show the current status of the service as well as a new entry to Uninstall Linkurious from system services.

 Getting started: Starting Linkurious

Linux systems

To start Linkurious Enterprise, run the start.sh script in the linkurious-linux directory.

Alternatively, run the menu.sh script and click Start Linkurious.

By default, the Linkurious Enterprise server will listen for connections on port 3000. However, some firewalls block network traffic on ports other than 80 (HTTP). See the Web server configuration documentation to learn how to make Linkurious Enterprise listen on port 80.

Windows systems

To start Linkurious Enterprise, run the start.bat script in the linkurious-windows directory.

Alternatively, run the menu.bat script and click Start Linkurious.

The firewall of Windows might ask you to authorize connections to Linkurious Enterprise. If so, click on Authorize access.

Content of the linkurious-windows directory:

Linkurious Enterprise starting up on Windows:

Mac OS systems

Mac OS prevents you from running applications downloaded from the internet.
To solve this problem, please run the following command before starting Linkurious Enterprise.
This will remove the attributes used by the operating system to identify Linkurious Enterprise files as untrusted.

xattr -rc <Linkurious_home_directory>

To start Linkurious Enterprise, run the start.sh.command script in the linkurious-osx directory.

Alternatively, run the menu.sh.command script and click Start Linkurious.

Docker Linux

To start a Linkurious Enterprise docker image, please use the docker run command. Here is an example:

 docker run -d --rm \
     -p 3000:3000 \
     --mount type=volume,src=lke-data,dst=/data \
     --mount type=volume,src=lke-elasticsearch,dst=/elasticsearch \
     linkurious:2.10.18

If you choose to mount a host machine folder as a volume, please make sure that the user within the container has read and write access to the volume folders. You can do that by adding a --user option to the docker run command. Please read the docker documentation to learn more.

Here is an example:

 docker run -d --rm \
     -p 3000:3000 \
     --mount type=bind,src=/path/to/my/data/folder,dst=/data \
     --mount type=bind,src=/path/to/my/elasticsearch/folder,dst=/elasticsearch \
     --user 1000:1000 \
     linkurious:2.10.18

Kubernetes

Please read the previous section on starting a Linkurious Enterprise instance using docker, and the section on fault tolerance.

A simple way to test out Linkurious Enterprise using Kubernetes is to create a simple deployment, using only one replica, and allocate a PersistentVolume for each of the volumes (lke-data, lke-elasticsearch) described above.

In production, however, you would want to follow the fault tolerance guide and use a StatefulSet with a main/failover strategy, and the appropriate strategy configured for your load-balancer or ingress.

 Getting started: Stopping Linkurious

Linux systems

Run the stop.sh script in the linkurious-linux directory.

Alternately, run menu.sh and click Stop Linkurious.

Windows systems

Run the stop.bat script in the linkurious-windows directory.

Alternately, run menu.bat and click Stop Linkurious.

Mac OS systems

Run the stop.sh.command script in the linkurious-osx directory.

Alternately, run menu.sh.command and click Stop Linkurious.

 Getting started: Configure Linkurious

To edit the Linkurious Enterprise configuration, you can either edit the configuration file located at linkurious/data/config/production.json or use the Web user-interface:

Using an administrator account, access the Admin > Configuration menu to edit the Linkurious Enterprise configuration:

Some configuration changes require a restart to be applied. Linkurious Enterprise will notify you and offer to restart from the Web UI, but only if you made the changes from the Web UI itself. If you modified the production.json file manually, changes won't be applied immediately and you will need to restart Linkurious Enterprise.

Configuration keys are divided by category.

Password fields will always be hidden from the Web UI, but they can be edited.

You can also pass variables to the configuration, which will in turn expand to their appropriate value from environment variables or files. For example, using "$ENV:NEO4J_PASSWORD" in the configuration will expand to the value of the environment variable "NEO4J_PASSWORD". Expandable variables are:

- "$ENV:VAR1": replaced with the value of the process' environment variable called "VAR1".
- "$ENV-NUMBER:VAR2": replaced with the value of the process' environment variable called "VAR2", parsed as a number.
- "$ENV-JSON:VAR3": replaced with the value of the process' environment variable called "VAR3", parsed as JSON.
- "$FILE:/path/to/file": replaced with the content of the file at "/path/to/file", parsed as a UTF-8 string.

In the configuration object, you can use the following syntax:
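For instance, a data-source password could be read from an environment variable instead of being stored in the file. The surrounding keys in this sketch follow the Neo4j data-source example used elsewhere in this manual:

```json
{
  "dataSources": [
    {
      "graphdb": {
        "vendor": "neo4j",
        "url": "http://127.0.0.1:7474/",
        "user": "myNeo4jUser",
        "password": "$ENV:NEO4J_PASSWORD"
      }
    }
  ]
}
```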

When you are finished making changes, click Save.

Limitation: there are some limitations with the "$ENV-JSON" expansion:

"$ENV-JSON" breaks if used at the root level or at the first level of the configuration (for example, for the whole "server.*" configuration key).

"$ENV-JSON" breaks if used at any level within the "dataSource.*" configuration key.

 Getting started: Release notes

Changelog for Linkurious Enterprise v2.10.18

Release Date: 2022-02-25

Bug Fixes (2)

 Getting started: Graph DB Feature Map

Features per graph database vendor

Feature \ Vendor Neo4j JanusGraph JanusGraph on IBM Compose Cosmos DB
Full-text search
Graph styles customization
Graph filtering
Graph editing
Access rights management
Custom graph queries
Custom query templates
Alerts
Shortest path analysis

Feature details

 Troubleshooting: Checking Linkurious status

Process monitoring

Linkurious Enterprise starts 3 separate processes when launched:

Check if these processes are alive by opening the menu from the Linkurious Enterprise directory (see how to open it on each operating system below):

Linux systems

Run menu.sh. Alternately, run menu.sh status.

Windows systems

Run menu.bat. Alternately, run menu.bat status.

Mac OS systems

Run menu.sh.command. Alternately, run menu.sh.command status.

API status

The status of the API can be retrieved using a browser or a command line HTTP client like cURL.

To retrieve the API status, send a GET request to http://127.0.0.1:3000/api/status (replace 127.0.0.1 and 3000 with the actual host and port of your server).

// example response 
{
  "status": {
    "code": 200,
    "name": "initialized",
    "message": "Linkurious ready to go :)",
    "uptime": 8633
  }
}

API version

To retrieve the API status, send a GET request to http://127.0.0.1:3000/api/version (replace 127.0.0.1 and 3000 with the actual host and port of your server).

// example response 
{
  "tag_name": "2.10.18",
  "name": "Brilliant Burrito",
  "prerelease": false,
  "enterprise": true
}

 Troubleshooting: Reading the logs

The logs of the application are located in linkurious/data/manager/logs folder:

For a structured version of the Linkurious Enterprise server logs in JSONL format, see the linkurious/data/logs folder:

 Troubleshooting: Getting support

If you need to get support regarding an issue or you have a question, please contact us at support@linkurio.us.

If the issue is technical, send us the application logs as described below.

Collect automatically generated report

You can download an automatically generated report from the Web user interface via the Admin > Global configuration menu:

At the end of the page, click Download Report:

Manual process to collect logs

It can happen that the system fails to start due to an error. Since it is then impossible to download the automatically generated report, the following files should be added manually to a compressed archive:

 Troubleshooting: Manage license

Install the license

To install the license, a license.key file needs to be added to the /data folder of Linkurious Enterprise.

In the case of Linux, Windows, or Mac OS systems, the license is generated, downloaded, and installed automatically with the zip file. No extra step is needed.

Docker

When installing with Docker, the process has to be done manually. To achieve this:

  1. If it doesn't already exist, create a folder with the same name as the already created data volume (lke-data in our documentation example).
  2. From get.linkurio.us, download the license.key file.
  3. Copy the license.key file inside the previously created folder (lke-data in our example).

Check if the license has expired

  1. Go to /data folder in Linkurious Enterprise
  2. Open the license.key file with a text editor
  3. Copy the base64 string in the license field, decode it (https://www.base64decode.org/)
  4. The endDate field is the expiration date in the form of a timestamp (https://www.epochconverter.com/)
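These steps can also be performed from a terminal. The sketch below assumes that the value of the license field was saved to a file named license.b64, and that endDate is a Unix timestamp in seconds (verify the unit against your own file):

```shell
# Decode the base64 license payload to inspect its fields
base64 -d license.b64

# Convert an endDate timestamp (value is illustrative) to a human-readable date
date -u -d @1675036800
```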

Update the license

If your license with Linkurious has expired, please contact the sales team of Linkurious.

In case your license has already been renewed, you can update the license. To achieve this, either:

Note: if using Docker, please follow the guidelines in the install the license section.

 Importing graph data

Before starting to explore the content of your graph database using Linkurious Enterprise, you need to import data into it.

The following sections will help you import data into your graph:

  1. Neo4j
  2. JanusGraph
  3. Cosmos DB

If you already have data in your graph database or you don't need to import anything, see how to configure your graph database with Linkurious Enterprise.

 Importing graph data: Neo4j

Linkurious relies on Neo4j to store the data; data import is thus not handled by our solution. Many options exist to import data into Neo4j:

Finally, if you want to quickly try Linkurious Enterprise with an example dataset, you can download a Neo4j-compatible example dataset.

 Importing graph data: JanusGraph

Please refer to the JanusGraph online documentation for details on how to load data into JanusGraph.

 Importing graph data: Cosmos DB

Please refer to the Cosmos DB online documentation for details on how to load data into Cosmos DB.

 Updating Linkurious

We release a new version of Linkurious Enterprise every couple of months on average. New releases include new features, improvements, and bug fixes.

The following pages will help you check for updates, back-up Linkurious Enterprise before updating, and update Linkurious Enterprise.

 Updating Linkurious: Checking for updates

About menu

Using an administrator account, access the Your Username > About menu to open the Linkurious Enterprise info:

Click Check for updates to see if you are using the latest version.

Public version API

Alternatively, you can check at http://linkurio.us/version/linkurious-enterprise.json

// example response 
{
  "tag_name": "v2.10.18", // latest version of Linkurious Enterprise 
  "message": null,
  "url": "https://get.linkurio.us/" // where to download the latest version from 
}

 Updating Linkurious: Backing up your data

Follow these steps to perform a backup of your Linkurious Enterprise data:

  1. Stop Linkurious Enterprise
  2. Back up the linkurious/data folder
  3. Back up the user-data store
  4. Start Linkurious Enterprise

Please note that this procedure does not back-up your graph data, but only your Linkurious Enterprise configuration and user-data (visualizations, users, etc.).
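On Linux, for instance, the data folder backup could be performed with a simple archive command while the server is stopped (the archive name is arbitrary):

```shell
# Archive the data folder; run this from the parent of the linkurious directory
tar -czf linkurious-data-backup-$(date +%F).tar.gz linkurious/data
```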

 Updating Linkurious: Update procedure

If you follow this procedure, you will be able to update Linkurious Enterprise to a newer version without losing your configuration and user-data store. Your data will be automatically migrated to the newer version.

Before updating, make sure that you have backed-up your data in case something unexpected happens.

During the update procedure, if Linkurious Enterprise is running, it will be stopped. You will need to re-start Linkurious Enterprise after the update procedure.

  1. Download the newer version you wish to install (see how to download Linkurious Enterprise)
  2. Copy linkurious-xxx-v2.10.18.zip into your current Linkurious Enterprise directory (alongside the start, stop and update scripts)
  3. Run the update script (Linux: update.sh, OSX: update.sh.command, Windows: update.bat)
  4. Done! You can restart Linkurious Enterprise.

Update Docker image

If you use the Linkurious Enterprise docker image, the only step to update Linkurious Enterprise is to use the new docker image with the existing Linkurious Enterprise data volume.

Troubleshooting

If the update script fails, please check the update log located at linkurious/data/update.log for details.

 Configuring data-sources

A data-source is a conceptual representation of a graph database within Linkurious Enterprise. Visualizations and other user data created within Linkurious Enterprise will be associated with their respective data-source.

Every data-source is uniquely identified with a sourceKey, a string computed from the url and the store id of the database.

Modifying the url or regenerating a database will result in Linkurious Enterprise generating a new sourceKey, consequently creating a new data-source with new information. If this behaviour is not desirable, consider using a stable sourceKey.

By default, Linkurious Enterprise is configured to connect to a Neo4j database at 127.0.0.1:7474.

Supported vendors

Linkurious Enterprise can connect to some of the most popular graph databases:

Multi-database support

Linkurious Enterprise is able to connect to several graph databases at the same time and lets you switch from one database to another seamlessly.

Edit the data-source configuration

You can configure your data-sources via the Web user interface or directly on the linkurious/data/config/production.json file.

Using the Web user interface

Using an administrator account, access the Admin > Data menu to edit the current data-source configuration:

Edit the data-source configuration to connect to your graph database:

Submit the changes by hitting the Save configuration button.

Using the configuration file

Edit the configuration file located at linkurious/data/config/production.json.

See details for each supported graph database vendor:

 Configuring data-sources: Neo4j

Neo4j v4 is partially supported in Linkurious Enterprise 2.10.18.

The following Neo4j v4 features are currently not supported:

Neo4j is supported since version 3.2.0.

Note that Neo4j 4.0.0 and 4.0.1 are not supported. Neo4j 4 is supported since version 4.0.2.

Configuration

To edit the Neo4j data-source configuration, you can either use the Web user-interface or edit the configuration file located at linkurious/data/config/production.json.

Example configuration:

{
  "dataSources": [
    {
      "graphdb": {
        "vendor": "neo4j",
        "url": "http://127.0.0.1:7474/",
        "user": "myNeo4jUser",
        "password": "nyNeo4jPassword"
      },
      "index": {
        "vendor": "neo4jSearch"
      }
    }
  ]
}

Linkurious can connect to Neo4j via the Bolt protocol. To do so, you need to enable the protocol in your Neo4j configuration file. If Linkurious is connected over HTTP/S, it will automatically try to upgrade the connection to Bolt. The HTTP/S protocol is still required to perform a small subset of operations.

Supported graphdb options with Neo4j:

Neo4j v4 Full Bolt support

For Neo4j v4 and later, Linkurious Enterprise provides full support for Neo4j connections over Bolt.

Linkurious Enterprise will use the Full Bolt connector when it is provided with any of the following URL prefixes: bolt://, neo4j:// or neo4j+s://.

It is recommended to use the Full Bolt connector for Neo4j v4 and later.
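For example, a data-source using the Full Bolt connector could be configured as in this sketch (host, port and credentials are placeholders; 7687 is the default Bolt port):

```json
{
  "dataSources": [
    {
      "graphdb": {
        "vendor": "neo4j",
        "url": "bolt://127.0.0.1:7687/",
        "user": "myNeo4jUser",
        "password": "myNeo4jPassword"
      },
      "index": {
        "vendor": "neo4jSearch"
      }
    }
  ]
}
```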

Neo4j Aura

Linkurious Enterprise allows using Neo4j instances running on Neo4j Aura as data-sources.

Only Neo4j Aura instances running Neo4j engine v4.0 or later are supported.

Search with Neo4j

In order to have full-text search, you can choose among the following options:

Neo4j credentials

If you just installed Neo4j, these steps will help you create credentials:

  1. Launch the Neo4j server
  2. Open your Web browser at http://127.0.0.1:7474
  3. Follow the instructions to create a new username and password

Alternatively, you can disable credentials in Neo4j by editing the configuration file at neo4j/conf/neo4j.conf and uncommenting the following line:

dbms.security.auth_enabled=false

Note that Linkurious requires a Neo4j user with the Neo4j built-in admin or architect role. This is needed so that Linkurious can use Neo4j features like Index creation and Query cancellation.

If you are connecting with an architect role to Neo4j 3.5, it is necessary to explicitly specify both the Bolt URL and the HTTP URL, via the url and httpUrl configuration keys respectively.
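In that case, the configuration could look like the following sketch (URLs and credentials are placeholders):

```json
{
  "dataSources": [
    {
      "graphdb": {
        "vendor": "neo4j",
        "url": "bolt://127.0.0.1:7687/",
        "httpUrl": "http://127.0.0.1:7474/",
        "user": "myNeo4jUser",
        "password": "myNeo4jPassword"
      },
      "index": {
        "vendor": "neo4jSearch"
      }
    }
  ]
}
```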

Configure Alternative Ids indices

Configuring alternative ids indices is only possible starting from Neo4j 3.5.1, and it is recommended to do so.

The first step is to identify the node labels, the edge types and the property names used as alternative node and edge ids.

Once we have this information, we can create the indices with the following Cypher queries:

call db.index.fulltext.createNodeIndex('myAlternativeNodeIdIndex', ['Company', 'Person', 'City'], ['myUniqueNodeId'], {analyzer: 'keyword'})

call db.index.fulltext.createRelationshipIndex('myAlternativeEdgeIdIndex', ['WORKS_FOR', 'LIVES_IN'], ['myUniqueEdgeId'], {analyzer: 'keyword'})

If new node labels or edge types are added to Neo4j, it's necessary to recreate these indices with the full list of categories.

Once the indices are created, we can configure them in Linkurious:

Example configuration:

{
  "dataSources": [
    {
      "graphdb": {
        "vendor": "neo4j",
        "url": "http://127.0.0.1:7474/",
        "user": "myNeo4jUser",
        "password": "nyNeo4jPassword",
        "alternativeNodeId": "myUniqueNodeId",
        "alternativeNodeIdIndex": "myAlternativeNodeIdIndex",
        "alternativeEdgeId": "myUniqueEdgeId",
        "alternativeEdgeIdIndex": "myAlternativeEdgeIdIndex"
      },
      "index": {
        "vendor": "neo4jSearch"
      }
    }
  ]
}

 Configuring data-sources: JanusGraph

JanusGraph is supported since version 0.1.1.

Configuration

To edit the JanusGraph data-source configuration, you can either use the Web user-interface or edit the configuration file located at linkurious/data/config/production.json.

Example configuration:

{
  "dataSources": [
    {
      "graphdb": {
        "vendor": "janusGraph",
        "url": "ws://127.0.0.1:8182/",
        "graphAlias": 'graph1',
        "traversalSourceAlias": "g1"
      },
      "index": {
        "vendor": "janusGraphSearch",
        "create": true
      }
    }
  ]
}

Supported graphdb options for JanusGraph:

There are 3 alternatives to configure JanusGraph:

Note that only one of these alternatives can be used.

Dynamic Graphs

You can access your dynamically created graphs using the graph and traversal binding references. Simply set graphAlias to <graph.graphname> and traversalSourceAlias to <graph.graphname>_traversal.

Visit the JanusGraph Documentation for more information on how to set up dynamic graphs.
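As an illustration, for a hypothetical dynamically created graph named mygraph, the aliases described above would be configured as in this sketch (URL is a placeholder):

```json
{
  "dataSources": [
    {
      "graphdb": {
        "vendor": "janusGraph",
        "url": "ws://127.0.0.1:8182/",
        "graphAlias": "graph.mygraph",
        "traversalSourceAlias": "graph.mygraph_traversal"
      },
      "index": {
        "vendor": "janusGraphSearch",
        "create": true
      }
    }
  ]
}
```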

Serializers Configuration

Linkurious DOES NOT support JanusGraph versions above 0.3.0 with the default configuration.

However, you can connect to a JanusGraph 0.3+ instance by modifying the default serializer configuration.

To do this, follow these steps:

  1. Locate the gremlin server configuration file:
    • Generally located at $INSTALL_DIR/conf/gremlin-server/gremlin-server.yaml for JanusGraph versions 0.4 and below.
    • Generally located at $INSTALL_DIR/conf/gremlin-server/gremlin-server-cql-es.yaml for JanusGraph versions 0.5 and above.
  2. Replace the entire serializers section with the following:
serializers:
    - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
    - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
  3. Restart JanusGraph.

Search with JanusGraph

In order to have full-text search, you can choose among the following options:

JanusGraph For Compose

Linkurious can connect to your JanusGraph instances deployed on Compose.

Configuration

Example configuration:

{
  "dataSources": [
    {
      "graphdb": {
        "vendor": "janusGraphForCompose",
        "url": "wss://xyz.composedb.com:16916",
        "user": "admin",
        "password": "XYZ",
        "graphName": "myGraph"
      },
      "index": {
        "vendor": "elasticSearch",
        "host": "127.0.0.1",
        "port": 9201
      }
    }
  ]
}

Supported graphdb options for JanusGraph For Compose:

 Configuring data-sources: Cosmos DB

Cosmos DB is supported by Linkurious.

Configuration

To edit the Cosmos DB data-source configuration, you can either use the Web user-interface or edit the configuration file located at linkurious/data/config/production.json.

Example configuration:

{
  "dataSources": [
    {
      "graphdb": {
        "vendor": "cosmosDb",
        "url": "https://your-service.gremlin.cosmosdb.azure.com:443/",
        ".NET SDK URI": "https://your-service.documents.azure.com:443/",
        "database": "your-graph-database",
        "collection": "your-collection",
        "primaryKey": "your-account-primary-key",
        "partitionKey": "your-collection-partition-key"
      },
      "index": {
        "vendor": "azureSearch",
        "url": "https://your-search-service.search.windows.net",
        "apiKey": "your-search-service-admin-api-key",
        "nodeIndexName": "your-node-index",
        "edgeIndexName": "your-edge-index"
      }
    }
  ]
}

Supported graphdb options for Cosmos DB:

Search with Cosmos DB

In order to have full-text search, you can choose among the following options:

 Configuring data-sources: Alternative IDs

When you save a visualization in Linkurious Enterprise, only the node and edge identifiers are persisted in the user-data store, along with position and style information. When a visualization is loaded, the node and edge identifiers are used to reload the actual node and edge data from the graph database.

If you need to re-generate your graph database from scratch, the graph database will probably generate new identifiers for all nodes and edges, breaking all references to nodes and edges in existing visualizations.

Using a property as a stable identifier

You can configure Linkurious Enterprise to use a node or edge property as a stable identifier. Once set up, Linkurious Enterprise will use the given property as identifier instead of the identifiers generated by the database.

Thanks to this strategy, visualizations will be robust to graph re-generation.

Notice that the properties used as identifiers should be indexed by the database to allow for a fast lookup by value.

Alternative identifiers configuration

To use alternative node and edge identifiers, edit your data-source database configuration in the configuration file (linkurious/data/config/production.json):

Example of alternative identifier configuration with Neo4j:

{
  "dataSources": [
    {
      "graphdb": {
        "vendor": "neo4j",
        "url": "http://127.0.0.1:7474/",
        "alternativeNodeId": "STABLE_NODE_PROPETY_NAME",
        "alternativeEdgeId": "STABLE_EDGE_PROPETY_NAME"
      }
      // [...] 
    }
  ]
}

Configuring alternative ids indices is only possible with Neo4j 3.5.1 and above. If you plan on using a compatible Neo4j version and alternative ids, we recommend configuring alternative ids indices.

To achieve better performance with alternative ids, it is recommended to configure indices for the alternative node and edge ids.

Refer to the documentation specific to Neo4j on how to configure these indices.

Stable sourceKey

Linkurious Enterprise generates a unique identifier for each data-source, based on internal information from the data-source. This data-source identifier (called the sourceKey) is used to identify all user data (visualizations, etc.) that belongs to a data-source.

When re-generating a Neo4j graph database, for example, the sourceKey will change. In order to avoid breaking all references to existing visualizations, it is possible to set the sourceKey of a data-source manually.

Before re-generating the graph database, go to the Admin > Sources menu:

In the sources administration panel, find the Key value for your data-source (e.g. 1c3490bd) and copy it.

Then, edit the configuration file (linkurious/data/config/production.json) and set the key manualSourceKey for your data-source:

{
  "dataSources": [
    {
      "manualSourceKey": "1c3490bd",
      "graphdb": {
        "vendor": "neo4j",
        "url": "http://127.0.0.1:7474/"
      }
      // [...] 
    }
  ]
}

 Configuring data-sources: Merging data-sources

We recommend using a manual sourceKey if you need to re-generate a database with existing user data (visualizations etc.), or when the hostname and port number of your data-source are likely to change after the first connection to Linkurious Enterprise.

Alternatively, you can merge the data-sources directly from the data-source management page, with the following steps:

1. Open the data-source management page.

From the dashboard, go to the Admin > Data-sources management menu

2. Select the old data-source to be merged.

The data-source to be merged should be the old data-source now marked as offline because it has been replaced with the freshly generated data-source.

3. Select the new data-source to be updated.

On the resulting modal, select the new data-source then click on the merge button.

The user can choose to perform a normal merge, or an overwrite merge.

1- Normal merge

As a result, the following objects from the old data-source will be merged in the new data-source:

The old data-source won't be deleted, in case the user decides to do an overwrite merge at a later stage.

2- Overwrite merge

To perform an overwrite merge simply select the overwrite check box in the merge modal.

As a result, the objects from the old data-source mentioned above will be merged in the new data-source. Additionally, the following objects will be replaced in the new data-source with the ones from the old data-source:

This action will irreversibly remove any data associated with the old data-source, and the data-source will be deleted.

 Configuring data-sources: Advanced settings

The following advanced data-source settings apply to all data-sources.

To change them, see how to configure Linkurious Enterprise.

General settings

Search engine settings

Graph exploration settings

Additional Certificate Authorities

If Linkurious Enterprise is installed as a service, the service needs to be uninstalled and re-installed for the change to be taken into account.

Password obfuscation

 Search index

Linkurious Enterprise allows you to search your graph using natural full-text search.

In order to offer the search feature out-of-the-box, Linkurious Enterprise ships with an embedded Elasticsearch server. This option allows for zero-configuration deployment in many cases.

Indexing your graph data

By default, Linkurious Enterprise uses Elasticsearch for search. This option requires Linkurious Enterprise to index your graph database, which means that Linkurious Enterprise will feed the whole content of the graph database to Elasticsearch to make it searchable. The time required to index the graph database increases with the size of the graph, and this solution has limits in its scalability.

Indexation typically happens at speeds between 2,000 and 20,000 nodes or edges per second, depending on the number of properties on nodes and edges, and on hardware performance.

Embedded Elasticsearch

By default, Linkurious Enterprise ships with an embedded Elasticsearch server (version 1.4.5). This server only listens for local connections on a non-default port (it binds to 127.0.0.1:9201), for security reasons and to avoid collisions with existing servers.

Use your own Elasticsearch

It is possible to use your own Elasticsearch cluster for performance reasons. Linkurious Enterprise supports Elasticsearch v1.x and v2.x. See details about Elasticsearch configuration options.

Search scalability and alternatives to Elasticsearch

Using Elasticsearch is convenient but may not fit cases where the graph database is big (more than a couple million nodes and edges) and is regularly modified from outside Linkurious Enterprise, which requires re-indexing the whole database.

In order to offer a scalable search feature on big graphs, Linkurious Enterprise offers alternative search solutions. See details about the different options.

Edit the search configuration

You can configure your search engines via the Web user interface or directly on the linkurious/data/config/production.json file.

Using the Web user interface

Using an administrator account, access the Admin > Data menu to edit the current data-source configuration:

Edit the search engine configuration to connect to your search engine:

Submit the changes by hitting the Save configuration button.

Using the configuration file

Edit the configuration file located at linkurious/data/config/production.json.

See details for each supported search connector.

 Search index: Search Option Comparison

Feature: Description
Onboarding: Does this search option require additional configuration, or can it be used out-of-the-box with the associated graph database?
Fast indexation: How fast is indexation? Note that this is a relative metric; while some search options may be faster than others, speed will depend on the complexity of your data model and your hardware limitations.
Automatic index sync: Are changes made to the graph database propagated to the index automatically?
Search scalability: How well do search queries perform for large graph databases? The actual upper limit on the performance of a given option will vary from vendor to vendor.
Advanced search: Are advanced search features available, such as numerical and date range search operators?

Neo4j

Option | Onboarding | Fast indexation | Automatic index sync | Search scalability | Advanced search
Embedded Elasticsearch | Plug-and-play | No | No | Won't scale beyond ~100M nodes | Yes (requires configuration)
External Elasticsearch (v2+) | Requires Elasticsearch installation and configuration | No | No | Yes (by adding hardware to the Elasticsearch cluster) | Yes (requires configuration)
Neo4j-to-Elasticsearch | Requires Elasticsearch installation and configuration + Neo4j plugin installation and configuration | Yes (up to 10,000 nodes/second) | Yes | Yes (by adding hardware to the Elasticsearch cluster) | Yes
Neo4j Search (v3.5.1+) | Requires configuration in Neo4j (properties to index must be specified) | Yes | Yes | Limited | No
Elasticsearch Incremental Indexing | Requires an external Elasticsearch (version compatible with LKE) and Neo4j v3.5.1 and above | Yes (except the first full indexation) | Yes (requires configuration) | Yes (by adding hardware to the Elasticsearch cluster) | Yes (requires configuration)

JanusGraph

Option | Onboarding | Fast indexation | Automatic index sync | Search scalability | Advanced search
Embedded Elasticsearch | Plug-and-play | No | No | Won't scale beyond ~100M nodes | Yes (requires configuration)
External Elasticsearch (v2+) | Requires Elasticsearch installation and configuration | No | No | Yes (by adding hardware to the Elasticsearch cluster) | Yes (requires configuration)

JanusGraph on Compose.com

Option | Onboarding | Fast indexation | Automatic index sync | Search scalability | Advanced search
Embedded Elasticsearch | Plug-and-play | No | No | Won't scale beyond ~100M nodes | Yes (requires configuration)
External Elasticsearch (v2+) | Requires Elasticsearch installation and configuration | No | No | Yes (by adding hardware to the Elasticsearch cluster) | Yes (requires configuration)

Cosmos DB

Option | Onboarding | Fast indexation | Automatic index sync | Search scalability | Advanced search
AzureSearch | Requires AzureSearch setup (easy) | Yes | Yes | Yes | No
Embedded Elasticsearch | Plug-and-play | No | No | Won't scale beyond ~1M nodes | Yes (requires configuration)
External Elasticsearch (v2+) | Requires Elasticsearch installation and configuration | No | No | Won't scale beyond ~1M nodes | Yes (requires configuration)

 Search index: Neo4j

The neo4jSearch connector is a solution for full-text search with Neo4j. neo4jSearch is supported since version 3.5.1 of Neo4j.

Neo4j search integration

Linkurious can use the built-in search indices managed by Neo4j itself. You can either use the Web user-interface or edit the configuration file located at linkurious/data/config/production.json to set the index.vendor property to the value neo4jSearch.

Configuration

To edit the Neo4j data-source configuration, you can either use the Web user-interface or edit the configuration file located at linkurious/data/config/production.json.

Example configuration:

{
  "dataSources": [
    {
      "graphdb": {
        "vendor": "neo4j",
        "url": "http://127.0.0.1:7474/",
        "user": "myNeo4jUser",
        "password": "nyNeo4jPassword"
      },
      "index": {
        "vendor": "neo4jSearch",
        "indexEdges": true
      }
    }
  ]
}

Supported index options with Neo4jSearch:

Note that, with Neo4jSearch, only fields stored in Neo4j as strings will be searchable. Numerical and date properties won't be searchable if stored in Neo4j as numbers or native dates.

 Search index: Neo4j to Elasticsearch

Neo4j-to-elasticsearch is a Neo4j plugin that enables automatic synchronization between Neo4j and Elasticsearch. This means that all changes to Neo4j are automatically propagated to Elasticsearch.

Neo4j-to-elasticsearch plugin is not compatible with Neo4j v4.x.

Install neo4j-to-elasticsearch

Follow these steps to install the plugin:

  1. Download the GraphAware framework JAR
    • Choose a version A.B.C.x where A.B.C matches your Neo4j version and x is 44 or later
  2. Download the neo4j-to-elasticsearch JAR
    • Choose a version A.B.C.x.y where A.B.C matches your Neo4j version and x.y is 44.8 or later
  3. Copy graphaware-server-community-all-A.B.C.x.jar and graphaware-neo4j-to-elasticsearch-A.B.C.x.y.jar to your neo4j/plugins directory
  4. Add the following lines to the beginning of your Neo4j configuration file (neo4j/conf/neo4j.conf):
    com.graphaware.runtime.enabled=true
    com.graphaware.module.ES.1=com.graphaware.module.es.ElasticSearchModuleBootstrapper
    com.graphaware.module.ES.uri=HOST_OF_YOUR_ELASTICSEARCH_SERVER
    com.graphaware.module.ES.port=PORT_OF_YOUR_ELASTICSEARCH_SERVER
    com.graphaware.module.ES.mapping=AdvancedMapping
    com.graphaware.module.ES.keyProperty=ID()
    com.graphaware.module.ES.retryOnError=true
    com.graphaware.module.ES.asyncIndexation=true
    com.graphaware.module.ES.initializeUntil=2000000000000
     
    # Set "relationship" to "(false)" to disable relationship (edge) indexation. 
    # Disabling relationship indexation is recommended if you have a lot of relationships and don't need to search them. 
    com.graphaware.module.ES.relationship=(true)
     
    com.graphaware.runtime.stats.disabled=true
    com.graphaware.server.stats.disabled=true
  5. Restart Neo4j
  6. Once Neo4j has finished indexing the data, remove the following line (and only this line) from neo4j/conf/neo4j.conf:
    com.graphaware.module.ES.initializeUntil=2000000000000

Please note that indexation could fail if your data uses different data types for the same property key, for example if a property representing a date uses ISO strings in some nodes and timestamps in others. If you encounter this issue, please get in touch.

Note regarding initializeUntil

The initializeUntil setting is used to trigger the indexation of existing Neo4j data in Elasticsearch. com.graphaware.module.ES.initializeUntil must be set to a number slightly higher than what a Java call to System.currentTimeMillis() would return when the module is started. Thus, the database will be (re-)indexed only once, and not with every subsequent restart.

In other words, re-indexing will happen if System.currentTimeMillis() < com.graphaware.module.ES.initializeUntil.
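The comparison above can be sketched in a few lines of Python (the plugin itself is Java; the constant below is the example value used in the configuration earlier):

```python
import time

# Example initializeUntil value from the neo4j.conf snippet above,
# in Unix milliseconds (a timestamp in the year 2033).
INITIALIZE_UNTIL_MS = 2000000000000

def should_reindex(initialize_until_ms: int) -> bool:
    """Re-indexing is triggered only while the current time in
    milliseconds is still below the configured initializeUntil."""
    now_ms = int(time.time() * 1000)  # equivalent of System.currentTimeMillis()
    return now_ms < initialize_until_ms

print(should_reindex(INITIALIZE_UNTIL_MS))  # True until the deadline passes
print(should_reindex(0))                    # False: deadline is in the past
```

This is why the documentation tells you to remove the initializeUntil line once the initial indexation is done: as long as the deadline lies in the future, every restart would trigger a full re-index.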

Integrate with Linkurious Enterprise

Once the neo4j-to-elasticsearch plugin is installed, you need to change the relevant data-source configuration to use neo2es as its search index vendor.

You can either use the Web user-interface or edit the configuration file located at linkurious/data/config/production.json to set the index.vendor property to the value neo2es.

Troubleshooting and Optimization

For smaller indexation tasks, neo4j-to-elasticsearch can be used straight out of the box. Larger indexes are trickier. If you find that indexation is failing to complete, or that search is unusually slow, you will need to settle for partial indexation in order to keep Elasticsearch usable.

Partial indexation can be configured in your Neo4j configuration file (neo4j/conf/neo4j.conf) by specifying a subset of your graph to index. You can do this by selecting which types of nodes and relationships to keep and which properties to index on these nodes.

(For a detailed list of configuration options, please consult Neo4j-to-elasticsearch's official documentation, available at Graphaware's Github repo. This guide will focus on a few use cases that should be adaptable to a wide variety of graph models.)

Configuration Options

Partial indexation is handled by four options which can be added to your Neo4j configuration file:

com.graphaware.module.ES.node=...
com.graphaware.module.ES.node.property=...
com.graphaware.module.ES.relationship=...
com.graphaware.module.ES.relationship.property=...

com.graphaware.module.ES.node and com.graphaware.module.ES.relationship control which nodes and relationships to index. com.graphaware.module.ES.node.property and com.graphaware.module.ES.relationship.property control which properties of these nodes and relationships to index.

Each of these lines is followed by one or more parameters. Parameters are boolean expressions. They can be chained together using standard logical operators && (AND) and || (OR). The most basic parameters are (true) and (false). They will tell Elasticsearch either to index everything (the default behavior) or to index nothing. Note that the parentheses are required in order to force Elasticsearch to ignore these nodes while loading your database into the index.

To reiterate, if we add the line

com.graphaware.module.ES.relationship=(false)

to our configuration file, Elasticsearch will not index any of the relationships in our database.

The next step up is parameter functions. These allow for more complex inclusion and exclusion rules. For both nodes and relationships, the following two functions are available:

Additionally, there are functions specific to nodes and to relationships. For nodes, those that are useful for partial indexation are:

And for relationships, they are:

(You can find a full list of functions in Graphaware's inclusion policies maintained on their Github repo.)

Example

Say we have a due diligence database containing people, banks, account numbers, addresses, telephone numbers, and email addresses. It's a large database -- several hundred million nodes -- so to fully index the graph would be a lengthy process, and may ultimately be unnecessary if we are only interested in full-text search on a restricted set of node and relationship properties.

The first question we should ask is which data we are interested in. Maybe, as part of our hypothesis about the data, we want to focus on a certain subset of connections that we believe form patterns of interest.

Let's say that we want to focus on identifying information only -- we think that there are cases where this information is shared by multiple individuals, for instance, and we're interested in analyzing them. The nodes of interest to us will therefore be those nodes which represent pure identifiers and not entities themselves -- addresses, telephone numbers, account numbers, and email addresses. We can tell Elasticsearch to index these and only these by adding the following line to our Neo4j configuration file:

com.graphaware.module.ES.node=hasLabel('Address') || hasLabel('Telephone') || hasLabel('Email') || hasLabel('Account')

(com.graphaware.module.ES.node=!hasLabel('Person') && !hasLabel('Bank') will also work.)

And since we aren't including people or banks, we also want to focus on the relationships relevant to our nodes of interest:

com.graphaware.module.ES.relationship=isType('HAS_ADDRESS') || isType('HAS_PHONE') || isType('HAS_EMAIL') || isType('HAS_ACCOUNT')

If we want to be even more specific, we can select only those properties which are relevant to our inquiry by adding com.graphaware.module.ES.node.property to our config. We follow this with a list of keys (node property names), separated by || (OR) statements, that we want to include in our index:

com.graphaware.module.ES.node.property=key == 'address1' || key == 'city' || key == 'state' || key == 'number' || key == 'email' || key == 'accountNumber'

And if we only want to index nodes and NOT relationships, we can disable relationship indexation completely:

com.graphaware.module.ES.relationship=(false)

Since relationships are often more numerous than nodes, excluding them from our index can significantly reduce its storage footprint.

What if we finish our initial analysis and conclude that we need to know more about the financial institutions in our graph? We want to add banks to our index, but we don't need to search everything that is stored on them. Furthermore, we're only interested in banks in a certain region -- Europe, say. We can add the right banks back by modifying our original directive to read:

com.graphaware.module.ES.node=hasLabel('Address') || hasLabel('Telephone') || hasLabel('Email') || hasLabel('Account') || (hasLabel('Bank') && getProperty('bankRegion', 'None') == 'Europe')

And we can add a few bank properties like this:

com.graphaware.module.ES.node.property=key == 'address1' || key == 'city' || key == 'state' || key == 'number' || key == 'email' || key == 'accountNumber' || key == 'bankName' || key == 'bankIdentifier' || key == 'bankRegion'

Keep in mind that these keys will be indexed for every node on which they appear. If banks have a state property, for example, it will be added to the index for state. It's worth remembering this when constructing your data model. Namespace collisions can prove computationally costly.

Summary

Neo4j-to-Elasticsearch, in combination with partial indexation strategies, can be a very efficient way of handling index synchronization for large graphs. Even if your graph is small enough to index in full, it's worth considering to what extent it may grow. By examining your problem space and choosing only a subset of the information available to you for your traversal needs, you may save yourself future headaches and maximize the efficiency of your graph.

 Search index: Azure search

Azure search is the recommended full text search solution for Cosmos DB.

Azure search integration

Linkurious requires an index on nodes to perform a search. If you do not have a configured index yet, you can create one via the Azure portal.

Additionally, you can create an index on edges if you want to search them as well with Linkurious.

Please review the description of each index attribute and make sure the label field is marked as filterable. Linkurious will not be able to use the index otherwise.

Configuration

To edit the AzureSearch data-source configuration, you can either use the Web user-interface or edit the configuration file located at linkurious/data/config/production.json.

Example configuration:

{
  "dataSources": [
    {
      "graphdb": {
        "vendor": "cosmosDb",
        "url": "https://your-service.gremlin.cosmosdb.azure.com:443/",
        "database": "your-graph-database",
        "collection":  "your-collection",
        "primaryKey": "your-account-primary-key"
      },
      "index": {
        "vendor": "azureSearch",
        "url": "https://your-search-service.search.windows.net",
        "apiKey": "your-search-service-admin-api-key",
        "nodeIndexName": "your-node-index-name",
        "edgeIndexName": "your-edge-index-name"
      }
    }
  ]
}

Supported index options with Azure search:

Please refer to the Azure search online documentation for details on how to load data into Azure search.

Note that today, in Azure Search, it's not possible to search on numbers or dates. If you are interested in this feature, please get in touch.

 Search index: Configuring Elasticsearch

Elasticsearch is supported from version 1.4.0 (using the elasticSearch connector) and for versions 2.x and 6.x (using the elasticSearch2 connector).

Elasticsearch 7 is not yet supported in Linkurious Enterprise 2.10.18.

Embedded Elasticsearch

Linkurious Enterprise ships with an embedded Elasticsearch server (version 1.4.5).

ATTENTION: The internal Elasticsearch is not intended to be used for graph databases > 50,000,000 nodes. Though indexation and search performance are ultimately dependent on hardware limitations, it has been configured to prevent horizontal scaling and so is not an efficient choice for large DBs. It is meant instead as a quick indexation strategy for POCs or small deployments.

To use the Linkurious Enterprise embedded Elasticsearch instance, set the following index configuration keys:

Example configuration:

{
  "dataSources": [
    {
      "graph": {
        "vendor": "neo4j"
        "url": "http://127.0.0.1:7474"
      },
      "index": {
        "vendor": "elasticSearch",
        "host": "127.0.0.1",
        "port": 9201
      }
    }
  ]
}

Configuring Elasticsearch v1.x

Search connector elasticSearch supports the following options:

Example configuration:

{
  "dataSources": [
    {
      "graph": {
        "vendor": "neo4j"
        "url": "http://127.0.0.1:7474"
      },
      "index": {
        "vendor": "elasticSearch",
        "host": "192.168.1.80",
        "port": 9200,
        "dynamicMapping": false
      }
    }
  ]
}

Configuring Elasticsearch v2.x

Search connector elasticSearch2 supports the following options:

Example configuration:

{
  "dataSources": [
    {
      "graph": {
        "vendor": "neo4j"
        "url": "http://127.0.0.1:7474"
      },
      "index": {
        "vendor": "elasticSearch2",
        "host": "192.168.1.122",
        "port": 9200,
        "skipEdgeIndexation": true
      }
    }
  ]
}

Enabling search on numerical and date properties

To enable search on numerical and date properties, it is necessary to set the dynamicMapping option described above to true. These properties must also be correctly declared as numbers or dates in the Linkurious Enterprise schema.

Note that only date properties stored in Neo4j as ISO dates are currently searchable.

If you have issues configuring search on numerical or date properties, please get in touch.
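For illustration, a configuration enabling dynamic mapping might look as follows (host and port are placeholder values, in the style of the elasticSearch2 example above):

```json
{
  "dataSources": [
    {
      "graph": {
        "vendor": "neo4j",
        "url": "http://127.0.0.1:7474"
      },
      "index": {
        "vendor": "elasticSearch2",
        "host": "192.168.1.122",
        "port": 9200,
        "dynamicMapping": true
      }
    }
  ]
}
```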

 Search index: Using Elasticsearch on AWS

By default, Linkurious Enterprise ships with an embedded Elasticsearch instance that works out of the box. The embedded instance works well for small to average database sizes, but for search-heavy use-cases or very large databases, configuring your own Elasticsearch cluster might be necessary.

An easy way to deploy a scalable Elasticsearch cluster yourself is to use Amazon Web Services (AWS).

Please follow these steps to create and configure your AWS Elasticsearch cluster with Linkurious Enterprise:

Create your AWS account

Visit the Amazon Web Services website and create your account (or log in if you already have one).

Create a new cluster

Visit the Amazon Elasticsearch Service page, log in and follow the steps to create an Elasticsearch cluster:

  1. Select Services > Elasticsearch Service

  2. Click Get started

  3. Name your cluster (1) and select the Elasticsearch version 2.x (2), click Next

  4. Select the instance type, the number of instances and the number of dedicated masters in your cluster (3), depending on your database's size

  5. Configure the access policy for your cluster. Use access from specific IP (4) and enter the public IP address of your Linkurious Enterprise server (5)

  6. Review your configuration and confirm the creation of the cluster

  7. Wait until the cluster is deployed (usually less than an hour)

  8. When your cluster is deployed, copy the Endpoint host name

  9. Configure Elasticsearch as explained here

 Search index: Incremental Indexing

Incremental indexing allows you to keep your Elasticsearch index in sync with your Neo4j graph database.

Linkurious Enterprise will index new and updated items from your database at a regular interval. This way, you avoid a complete reindex every time you update your data.

This is achieved by keeping track of a timestamp on every node and edge in the database, thereby allowing the indexer to only consider the items with newer timestamps and consequently reducing the indexing time.

You should consider this option if your database holds a significant number of items and needs to be updated frequently.

Requirements

1. APOC

Linkurious Enterprise relies on APOC triggers to ensure that every node and edge created or updated has a timestamp.

You need to make sure that you have installed APOC correctly and enabled APOC triggers. You can find all the information you need to install APOC from Neo4j documentation.

You can quickly verify that you have installed everything correctly by executing the following command from your Neo4j browser:

CALL apoc.trigger.list()

2. A property name

You need to carefully choose the property that will hold the timestamp on every node/edge of your database. Linkurious Enterprise will create triggers that store a timestamp in this property for all new and updated nodes/edges.

This means that any information stored in that property will be overwritten by the trigger.
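As a conceptual sketch (not the exact trigger Linkurious Enterprise installs), an APOC trigger stamping newly created nodes could look like the following, assuming lastEditedAt is the chosen property name:

```cypher
CALL apoc.trigger.add(
  'set-timestamp',
  'UNWIND $createdNodes AS n SET n.lastEditedAt = timestamp()',
  {phase: 'before'}
)
```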

Enabling incremental indexing with ElasticSearch

After you have installed and configured APOC, you can enable incremental indexing from the data-source configuration page or by editing the configuration file with the following options:

After you have enabled incremental indexing for the first time, Linkurious Enterprise requires a complete re-indexing of the data-source to ensure that every item has been indexed. You simply need to click on the "Start indexing" button to complete the configuration.

Linkurious Enterprise will index your data-source incrementally from that point forward using the timestamps generated by the APOC triggers.

Scheduling incremental indexing

Once you have set up incremental indexing, you can configure the frequency at which it is triggered. You can customize this schedule by adding a cron expression to your Elasticsearch configuration.

You can make this change in the configuration file located at linkurious/data/config/production.json, under dataSources.index for each data-source.

By default, incremental indexing is scheduled to run every Sunday at 12PM, but you can change it to the frequency that best suits your needs. We advise scheduling your increments to run after you have updated your database with new information. Here are some examples of cron expressions:
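For reference, a few common cron expressions (illustrative examples, not Linkurious-specific values):

```
0 12 * * 0    every Sunday at 12:00
0 2 * * *     every day at 02:00
0 */6 * * *   every 6 hours
0 0 1 * *     on the first day of every month at midnight
```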

If you need to index your data-source at a non-regular interval, you can trigger indexation manually with the "Start indexation" button in the data-source configuration page, or trigger it automatically with the start indexation API.

You can disable automatic indexing with the following configuration: "incrementalIndexationCron": "none"

Limitations

Here are some important considerations when choosing incremental indexing.

Incremental indexing is only available on Neo4j v3.5.1 and above.

 User-data store

Linkurious Enterprise uses an SQL database to store user-data. The database contains:

By default, the user-data store is a SQLite file-based database. This makes Linkurious Enterprise easy to deploy.

To see the list of database providers and versions supported by Linkurious Enterprise, please view the compatibility matrix.

For deployment at scale (more than a couple of users), we recommend switching to one of the supported server-based databases:

MySQL 8 is not yet supported in Linkurious Enterprise 2.10.18.

Getting started

In order to get started with Linkurious Enterprise, there are a few requirements for the user-data store to function properly.

Database configuration

Linkurious Enterprise provides many options to configure the user-data store connection. Below, you can find the configuration documentation and configuration examples for popular DBMS solutions.

In the Linkurious Enterprise configuration file, the database connection is configured under the db key.

Configure with SQLite

SQLite is the default user-data store of Linkurious Enterprise.

"db": {
    "name": "linkurious",
    "options": {
      "dialect": "sqlite",
      "storage": "server/database.sqlite"
    }
}

Configure with MySQL

"db": {
    "name": "linkurious",
    "username": "MYSQL_USER_NAME",
    "password": "MYSQL_PASSWORD",
    "options": {
      "dialect": "mysql",
      "host": "MYSQL_HOST",
      "port": 3306
    }
}

Configure with Microsoft SQL Server

"db": {
    "name": "linkurious",
    "username": "MSSQL_USER_NAME",
    "password": "MSSQL_PASSWORD",
    "options": {
      "dialect": "mssql",
      "host": "MSSQL_HOST",
      "port": 1433
    }
}

Configure with MariaDB

"db": {
    "name": "linkurious",
    "username": "MARIADB_USER_NAME",
    "password": "MARIADB_PASSWORD",
    "options": {
      "dialect": "mariadb",
      "host": "MARIADB_HOST",
      "port": 3306
    }
}

 User-data store: Migrating to an external database

The default storage system for Linkurious Enterprise is SQLite.

Configuring Linkurious Enterprise to work with MySQL is a straightforward procedure if you don't need to migrate data from SQLite.

Migrating data from SQLite to an external database is possible but it is a procedure we would recommend only if restarting from scratch with the new configuration is not a viable option.

Our public resources contain a dedicated tool to perform the migration from SQLite to one of the supported databases, as well as the detailed list of steps to use and, if needed, configure the tool.

If you need help with the procedure please contact us at support@linkurio.us.

 User-data store: Backing-up your store

If you are using the SQLite database (by default), you only need to follow the standard Linkurious Enterprise backup procedure.

If you are using another database to store the Linkurious Enterprise user-data, please refer to one of the following guides:

 Web server

The web server of Linkurious Enterprise delivers the application to end users through HTTP/S. It is configured in the server configuration key within the configuration file (linkurious/data/config/production.json):

General

Within the server key:

Some firewalls block network traffic on ports other than 80 (HTTP). Since only root users can listen on ports lower than 1024, you may want to reroute traffic from port 80 to port 3000 as follows:

sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000

If you use SSL, you can add a second rule to reroute traffic from port 443 to port 3443:

sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3443

Serving Linkurious Enterprise under a custom path

Within the server key:

In some cases, you may want to host Linkurious Enterprise on a path other than root for a particular domain. For example, if you want Linkurious Enterprise to be reachable at http(s)://HOST:PORT/linkurious you should set baseFolder equal to linkurious.
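Following the example above, the corresponding fragment of the server configuration would look like this (only the relevant key is shown):

```json
"server": {
  "baseFolder": "linkurious"
}
```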

Link generation

Within the server key:

In some cases, Linkurious Enterprise needs to generate links to itself (for example when generating a link to a widget). For that, the server needs to know its public domain and port to generate those links.

The public port can be different from the actual port if you use traffic rerouting (using a firewall or a reverse-proxy). In the example above (traffic rerouting), the actual HTTP port (listenPort) is 3000, but the public HTTP port (publicPortHttp) is 80.
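Continuing the rerouting example above, the relevant server keys might be set as follows. listenPort and publicPortHttp are named in the text; the domain key name and its value are assumptions to check against the server options table:

```json
"server": {
  "listenPort": 3000,
  "domain": "linkurious.example.com",
  "publicPortHttp": 80
}
```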

Cookies

Within the server key:

Cross-origin resource sharing (CORS)

Within the server key:

Allow to render Linkurious embedded in another web page

First, to embed Linkurious in an iframe, make sure that Linkurious and your main application URL have the same base domain and the same HTTP scheme. If not, the Linkurious cookie won't be sent in the HTTP request and the Linkurious interface will be unavailable.

For example, you can serve the main application containing the iframe under:

And Linkurious under:

Note the usage of https on the base domain as well. It is required if Linkurious is also served under https.

Then, in the configuration file of Linkurious, within the server key, update allowFraming to true:

By default, Linkurious doesn't allow framing by returning at each request the following HTTP header: X-Frame-Options:SAMEORIGIN.

Setting the configuration key to true will remove the X-Frame-Options HTTP header.
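For example, within the server key (a minimal fragment):

```json
"server": {
  "allowFraming": true
}
```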

Custom HTTP Headers

Within the server key:

Example:

"customHTTPHeaders": {
  "header1": "value1",
  "header2": "value2",
  ...
}

Note: Some header keys are reserved for LKE and will be overwritten by the server to default values.

Image cross-origin (client-side)

Within the ogma.settings.render key:

Disabling compression for dynamic content

It is possible to disable the gzip compression for dynamic content that is returned by the Linkurious server.

Within the server key:

SSL

Within the server key:

External communications with the Linkurious Enterprise server can be secured using SSL without installing third-party software.

If the Linkurious Enterprise server, graph database, and the search index are installed on different machines, we recommend using secure communication channels between these machines (e.g. HTTPS or WSS). Please refer to the data-source documentation and search index documentation to learn how to enable HTTPS.

To use custom Certificate Authorities (CA), please check how to use additional Certificate Authorities in Linkurious Enterprise.

The TLS protocols supported by the Linkurious Enterprise server are TLSv1.0, TLSv1.1 and TLSv1.2. If for compliance reasons you want to disable TLSv1.0 and TLSv1.1, open the file data/manager/manager.json, add "--tls-min-v1.2" on the line above "server/app.js", save the file and restart Linkurious Enterprise.
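The exact structure of data/manager/manager.json may differ between versions; as a sketch, assuming the process arguments are stored in a JSON array (the "args" key name is an assumption), the result of the edit would look like:

```json
{
  "args": [
    "--tls-min-v1.2",
    "server/app.js"
  ]
}
```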

 Authentication: Getting started

By default, user authentication is disabled and all actions are performed under the special account named Unique User.

The unique user has unrestricted access and does not require a password, so anyone can access the platform.

We strongly advise you to enable user authentication to secure the access to your data once Linkurious Enterprise is deployed on a server.

There are 2 possible authentication options:

Local authentication

Local authentication can be enabled from Linkurious Enterprise user interface.

Once local authentication is enabled, users need an account to access Linkurious Enterprise. Administrators can create accounts directly in Linkurious Enterprise (see how to create users).

To enable authentication use the Web user interface via the Admin > Users menu:

Enabling authentication, step 1

The following screen will be displayed if authentication is disabled. Click Enable Authentication.

Enabling authentication, step 2

Create an admin account and click Save and enable.

Enabling authentication, step 3

Password hashing

Passwords of local users are hashed with the PBKDF2 algorithm and the following parameters:
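To illustrate how PBKDF2 hashing works in general, here is a minimal sketch. The hash function, salt size, and iteration count below are example values, not the actual parameters used by Linkurious Enterprise:

```python
import hashlib
import os

def hash_password(password, salt=None, iterations=100_000):
    """Derive a key from a password with PBKDF2-HMAC.

    Illustrative parameters only (NOT the Linkurious Enterprise values):
    SHA-256 as the hash function, a 16-byte random salt, 100,000 iterations.
    Both the salt and the derived key must be stored to verify the password later.
    """
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt per password
    derived = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, derived

salt, derived = hash_password("correct horse battery staple")
print(len(derived))  # SHA-256 output: 32 bytes
```

Verification re-runs the derivation with the stored salt and compares the result, which is why the salt does not need to be secret.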

External authentication

When using an external source for authentication, users are automatically created in Linkurious Enterprise when they connect for the first time.

These shadow users allow Linkurious Enterprise to store user-specific data such as preferences, groups and visualizations.

Passwords of external users are never stored inside Linkurious Enterprise.

Authentication services

Linkurious Enterprise supports the following external authentication services:

If your company uses an authentication service that Linkurious Enterprise does not support yet, please get in touch.

If you enable a Single Sign-On (SSO) capable authentication service (OAuth/OpenID Connect or SAML2), your users won't need to log in directly in Linkurious Enterprise; instead, by clicking the SSO button they will be redirected to the identity provider for authentication.


Group mapping

If an external source already organizes users in groups, it's possible to use this information to automatically map external groups to Linkurious groups. To do so, set the access.externalUsersGroupMapping configuration key to an object with the external group IDs as keys and the internal group IDs, group names, or a combination of both as values (you can map both built-in and custom groups).

For example, if we want to provide group mapping for Microsoft Active Directory:

{ // under the access configuration key 
  // ... 
  "externalUsersGroupMapping": {
    "Administrators": 1, // any Active Directory admin is a Linkurious admin ("1" being the id of the admin built-in group) 
    "DataAnalysts": "analyst", // any Active Directory data analyst will be assigned to the "analyst" custom group of each data-source containing a group with that name 
    "ProductManagers": [3, "product manager"] // any Active Directory product manager will be assigned to the group with id "3" and to the "product manager" custom group of each data-source containing a group with that name 
  }
  // ... 
}

The built-in Linkurious group names that you can use here are the following:

Group names may or may not be case sensitive, depending on the user-data store vendor and collation.

For some identity providers the external group ID is an actual name; for others, it is an ID:

To prevent some groups of users from logging in to Linkurious, set up a list of authorized groups in the configuration key access.externalUsersAllowedGroups.

{ // under the access configuration key 
  // ... 
  "externalUsersAllowedGroups": [
    "CN=Administrators,CN=Users,DC=linkurious,DC=local",
    "CN=Analysts,CN=Users,DC=linkurious,DC=local"
  ]
  // ... 
}

By default, when an external user connects for the first time, their external groups are mapped once; any later change to the user's groups in the external source is not reflected in the LKE user. However, setting autoRefreshGroupMapping to true causes an external user's groups to be reset according to externalUsersGroupMapping each time the external user logs in.

{ // under the access configuration key 
  // ... 
  "autoRefreshGroupMapping": true,
  // ... 
}

Also, note that when autoRefreshGroupMapping is true, updating external users' groups from within LKE is not allowed.

When using SSO to log in to Linkurious Enterprise, if the identity provider groups are mapped to more than one built-in group, Linkurious Enterprise will choose the one with the highest permissions per data-source and ignore the other groups. If the admin built-in group is used, it will override the other groups.

 Authentication: LDAP / Active Directory

If Linkurious Enterprise is connected to an LDAP service, users will be authenticated using the external service at each log-in.

If you have a LDAP service running in your network, you can use it to authenticate users in Linkurious Enterprise.

Contact your network administrator to ensure that the machine where Linkurious Enterprise is installed can connect to the LDAP service.

OpenLDAP

For OpenLDAP compatible providers, add or edit the existing ldap section inside the access configuration.

Allowed options in access.ldap:

The bindDN and bindPassword are optional. If specified they will be used to bind to the LDAP server.

Example LDAP configuration:

"access": {
  // [...] 
  "ldap": {
    "enabled": true,
    "url": "ldap://ldap.forumsys.com:389",
    "bindDN": "cn=read-only-admin,dc=example,dc=com",
    "bindPassword": "password",
    "baseDN": ["dc=example,dc=com"],
    "usernameField": "uid",
    "emailField": "mail",
    "groupField": "group"
  }
}

Active Directory

For Microsoft Active Directory, add a msActiveDirectory section inside the access configuration.

Allowed options in access.msActiveDirectory:

Users can authenticate with their userPrincipalName or their sAMAccountName.

Use the domain configuration key so that your users do not need to specify the domain part of their userPrincipalName. Use the netbiosDomain configuration key so that your users do not need to specify the NetBIOS domain part of their sAMAccountName.

Example Active Directory configuration:

"access": {
  // [...] 
  "msActiveDirectory": {
    "enabled": true,
    "url": "ldaps://ldap.lks.com:636",
    "baseDN": "dc=ldap,dc=lks,dc=com",
    "domain": "ldap.lks.com",
    "netbiosDomain": "LINKURIO",
    "tls": {
      "rejectUnauthorized": true
    }
  }
}

Alternatively, it is possible to use your on-premises Active Directory in conjunction with Azure Active Directory to provide SSO to your users. Please refer to Prerequisites for Azure AD Connect for more information and to SSO with Azure AD to learn how to set up Azure AD as an identity provider.

 Authentication: SSO with Azure AD

Linkurious Enterprise supports Microsoft Azure Active Directory as an external authentication provider.

Configuration

To set up Linkurious Enterprise authentication with Microsoft Azure Active Directory, follow these steps:

  1. Create a new app called Linkurious in Azure Active Directory on Azure Portal
  2. Assign the Directory.Read.All access right to the new app (notice: an Azure admin's approval is needed)
  3. From the Azure Portal, obtain the following parameters:
    • authorizationURL, e.g. https://login.microsoftonline.com/60d78xxx-xxxx-xxxx-xxxx-xxxxxx9ca39b/oauth2/authorize
    • tokenURL, e.g. https://login.microsoftonline.com/60d78xxx-xxxx-xxxx-xxxx-xxxxxx9ca39b/oauth2/token
    • clientID, e.g. 91d426e2-xxx-xxxx-xxxx-989f89b6b2a2
    • clientSecret, e.g. gt7BHSnoIffbxxxxxxxxxxxxxxxxxxtyAG5xDotC8I=
    • tenantID, (optional, required only for group mapping) e.g. 60d78xxx-xxxx-xxxx-xxxx-xxxxxx9ca39b
  4. Add or edit the existing oauth2 section inside the access section in linkurious/data/config/production.json

Example access.oauth2 configuration with Microsoft Azure Active Directory:

"access": {
  // [...] 
  "oauth2": {
    "enabled": true,
    "provider": "azure",
    "authorizationURL": "https://login.microsoftonline.com/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/oauth2/authorize",
    "tokenURL": "https://login.microsoftonline.com/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/oauth2/token",
    "clientID": "XXXXXXXX-XXX-XXXX-XXXX-XXXXXXXXXXXX",
    "clientSecret": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
    "azure": {
      "tenantID": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
    }
  }
}

OAuth2 redirect URL

The OAuth2 redirect URL of Linkurious Enterprise is the following: http(s)://HOST:PORT/api/auth/sso/return.

 Authentication: SSO with Google

Linkurious Enterprise supports Google as an external authentication provider with Single Sign-On.

Since Google implements the OpenID Connect standard, it can be configured as an OpenID Connect provider in Linkurious Enterprise.

Configuration

To set up Linkurious Enterprise authentication with Google, follow these steps:

  1. Create the credentials on your Google Developers console. You may have to fill in the OAuth consent screen.
  2. From the portal, obtain the following parameters:
    • authorizationURL, e.g. https://accounts.google.com/o/oauth2/v2/auth
    • tokenURL, e.g. https://www.googleapis.com/oauth2/v4/token
    • clientID, e.g. 1718xxxxxx-xxxxxxxxxxxxxxxx.apps.googleusercontent.com
    • clientSecret, e.g. E09dQxxxxxxxxxxxxxxxxSN
  3. Add or edit the existing oauth2 section inside the access section in linkurious/data/config/production.json

To limit the access to the Google accounts from your domain, use the hd query parameter in the authorizationURL with your domain as value.

Example access.oauth2 configuration with Google:

"access": {
  // [...] 
  "oauth2": {
    "enabled": true,
    "provider": "openidconnect",
    "authorizationURL": "https://accounts.google.com/o/oauth2/v2/auth?hd=YOUR_DOMAIN",
    "tokenURL": "https://www.googleapis.com/oauth2/v4/token",
    "clientID": "XXXXXXXXXX-XXXXXXXXXXXXXXXX.apps.googleusercontent.com",
    "clientSecret": "XXXXXXXXXXXXXXXXXXXXXXX"
  }
}

OAuth2 redirect URL

The OAuth2 redirect URL of Linkurious Enterprise is the following: http(s)://HOST:PORT/api/auth/sso/return. This redirect URL needs to be added in the Authorized redirect URIs section of the credentials page on your Google Developers console.

google sso return url config

 Authentication: SSO with OpenID Connect

Linkurious Enterprise supports any OpenID Connect compatible provider as an external authentication provider.

What is OpenID Connect?

OpenID Connect is an identity layer on top of the OAuth2 protocol. It allows applications (like Linkurious Enterprise) to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable manner.

Configuration

To set up Linkurious Enterprise authentication with an OpenID Connect provider, you need to obtain the following parameters from the provider:

Example access.oauth2 configuration with any OpenID Connect provider:

"access": {
  // [...] 
  "oauth2": {
    "enabled": true,
    "provider": "openidconnect",
    "authorizationURL": "https://accounts.google.com/o/oauth2/v2/auth",
    "tokenURL": "https://www.googleapis.com/oauth2/v4/token",
    "clientID": "XXXXXXXXXX-XXXXXXXXXXXXXXXX.apps.googleusercontent.com",
    "clientSecret": "XXXXXXXXXXXXXXXXXXXXXXX"
  }
}

OAuth2 redirect URL

The OAuth2 redirect URL of Linkurious Enterprise is the following: http(s)://HOST:PORT/api/auth/sso/return.

Group mapping in OIDC

To set up group mapping in OpenID Connect, it is necessary to specify additional configuration keys:

For example if you want to set up OIDC with Okta:

"access": {
  // [...] 
  "oauth2": {
    "enabled": true,
    "provider": "openidconnect",
    "authorizationURL": "https://XXXXXXXXXX.oktapreview.com/oauth2/v1/authorize",
    "tokenURL": "https://XXXXXXXXXX.oktapreview.com/oauth2/v1/token",
    "clientID": "XXXXXXXXXXXXXXXXXXXXXXX",
    "clientSecret": "XXXXXXXXXXXXXXXXXXXXXXX",
    "openidconnect": {
      "userinfoURL": "https://XXXXXXXXXX.oktapreview.com/oauth2/v1/userinfo",
      "scope": "openid profile email groups",
      "groupClaim": "groups"
    }
}

 Authentication: SSO with SAML2 / ADFS

Linkurious Enterprise supports any SAML2 compatible provider as an external authentication provider.

Configuration

To set up Linkurious Enterprise authentication with a SAML2 provider, you need to obtain the following parameters from the provider:

groupAttribute is the attribute of the SAML response containing the array of groups a user belongs to.

emailAttribute is the attribute of the SAML response that should contain the email address if the NameID format of the SAML response is not already an email.

Example access.saml2 configuration with any SAML2 provider:

"access": {
  // [...] 
  "saml2": {
    "enabled": true,
    "url": "https://example.com/adfs/ls",
    "identityProviderCertificate": "/Users/example/linkurious/saml.pem",
    "groupAttribute": "Groups"
  },
}

Assertion consumer service

To complete the login process, you need to configure your identity provider to return the SAML response to Linkurious Enterprise at the following URL: http(s)://HOST:PORT/api/auth/sso/return.

ADFS Configuration

ADFS (Active Directory Federation Services) is a SAML2 provider that offers Single Sign-On towards an Active Directory service; see more in the Microsoft documentation.

To set up Linkurious Enterprise authentication with ADFS, Linkurious Enterprise has to be configured as a Relying Party Trust in ADFS (see how to configure ADFS in the Microsoft documentation).

To set up group mapping, the list of groups associated to a user should be passed in the SAML2 response. See how to configure a claim for the groups on the Microsoft documentation.

 Authentication: Configuration

The authentication is configured within the access configuration key in the configuration file (linkurious/data/config/production.json):

Local vs. external authentication

To access Linkurious Enterprise when authRequired is true, users need accounts in Linkurious Enterprise. Administrators can create accounts directly in Linkurious Enterprise (see how to create users) or rely on an external authentication service.

Linkurious Enterprise supports the following external authentication services:

If your company uses an authentication service that Linkurious Enterprise does not support yet, please get in touch.

If you enable an SSO capable authentication service (OAuth/OpenID Connect or SAML2), your users won't need to login directly in Linkurious Enterprise but, instead, by clicking the SSO button they will be redirected to the identity provider for authentication.

 Access control: Managing users

User management page

This page is accessed via Admin > Users & Groups in the main menu. The page lists all users, both local and external.

User management operations available to Administrators are the following:

Listing users

Assigning users to groups

In order to be able to access Linkurious Enterprise, users need to be assigned to at least 1 user group.

A user can be assigned to:

Group assignment can be done either while creating or editing a user account.

Editing a user

 Access control: Managing groups

Role-based access control

Linkurious Enterprise relies on a role-based access control model:

Group management page

This page is accessed via Admin > Users & Groups in the main menu.

This page lists:

Group management operations available to the Administrators are the following:

Listing groups

Creating a group

Creating a group is a 2-step process:

  1. "General & Admin rights": define access-rights on the features.
  2. "Access-rights": define access-rights on the data.

Editing a group

General & Admin rights

Features access-rights description

Queries access-rights

Custom actions access-rights

Alert access-rights

Admin access-rights

Access-rights with multiple groups

For users that belong to multiple groups, access-rights are cumulative. In other words, a user can do something if at least one of their groups allows them to do it.

For example, if a user belongs to 2 groups, one having No access and the other Process Alerts for the Alert rights, then they have the right to Process Alerts because one of their groups allows them to do so.

Access-rights on the data

There are 2 available options. You can read about them in their dedicated sections:

 Access control: Standard access-rights

Definition of Standard access rights

standard-access-rights-panel

Example

As a result: user Foo has EDIT access on node-category CONTRACT

 Access control: Property-key access-rights

Enabling Property-key access rights

property-key-access-rights-switch

Defining property-key access rights

To define property-key access rights, the administrator needs to create (or edit) a custom group: on the access-rights configuration page, with the property-key access rights feature switched on, a second panel will be displayed.

property-key-access-rights-panel

From here:

Example

As a result, user Foo has:

 Guest mode: Guest mode

What is the Guest mode

The Guest mode is a way to share graphs with people who do not have an account on Linkurious Enterprise.

Key characteristics:

Standard user interface: Standard workspace

Guest mode user interface: Guest user workspace

Enabling the Guest mode

By default, the Guest mode is disabled. Enabling the Guest mode is done on a per-source basis so that you can manage the access rights associated with the Guest mode at the source level.

Enabling it is a 3-step process:

  1. In the Configuration screen (Menu > Admin > Configuration), enable the Guest mode
  2. A new user is created in the user list with the email guest@linkurio.us. By default it is not associated with any user group. Assign the Guest user to the Read-Only group, or to any custom group if you need to restrict its rights (see the security warnings below).
  3. Repeat step 2 for each source on which you want to enable the Guest mode, as its access rights are managed on a per-source basis.

Once enabled, the Guest mode is available at http://your-domain.com/guest (replace your-domain.com with the actual host and port of your server).

Configuring the Guest mode

By default, guest users are not allowed to search, export, or use the design, filter and edge-grouping panels.

If you want to allow guest users to use these features, you can go to the Configuration screen (Menu > Admin > Configuration) and, under the Guest Mode configuration, change the following values:

Security warnings

All people who have access to /guest will be able to browse the whole database (in read-only mode), even if nobody has explicitly shared a specific visualization with them. You may want to check who has access to /guest before enabling the Guest mode.

If you have assigned the Guest user to the Read Only built-in user group, all node categories and edge types will be visible to anyone who can access /guest. If the database contains sensitive data, you should create a custom group with limited rights on the data and assign the Guest user to that group.

Even if the Guest user is assigned to a group that has Write or Delete permissions, Guest mode users will not be able to write or edit data.

When initialized with a visualization, the Guest mode allows the user to expand the graph beyond the content of that visualization. Consider restricting the Guest user access rights if that is an issue.

Populating the Guest mode workspace

Accessing /guest directly will return an empty workspace.

Using parameters in the URL you can populate the Guest mode workspace with:

The Guest user sees the current state of a visualization. Any change to the visualization by an authenticated user is automatically reflected in the public link shared with the Guest user.

 Data Schema

What is the schema?

Linkurious Enterprise uses a data schema to deliver some of its features. However, most graph databases are schema-less. As a consequence, Linkurious Enterprise automatically detects the data schema from each data-source. The automatically detected schema is:

The schema can be extended manually. An administrator can:

The schema has two modes:

The Partial mode is flexible and incremental, whereas the Strict mode is stable and normative. The Partial mode is therefore a better fit for early projects where the data schema is poised to change, while the Strict mode is a better fit for production-ready projects that require a more controlled environment.

IMPORTANT

Statements made in the schema DO NOT alter the data in the database:

Accessing the schema

The schema can be reached through the Admin menu.

If this is the first time you access the feature, you will be prompted to "sample" the database. Linkurious Enterprise will take from a few seconds to a few minutes to go through a sample of the database and build a more complete schema.

Overview

Structure

The schema is presented as two columns:

Switch search/visibility for categories, types and properties

The "eye" icon next to node categories, edge types and properties allows to switch the visibility to View & Search, Only view or No access. See the dedicated section for more details.

Create node-categories, edge-types and properties manually

The schema may be incomplete, or you may want to add a category, type or property that is not yet in the data. You can create them manually through the action links available at the bottom of the list.

Remove nodes or edges from search results

Some node-categories or edge-types are unlikely to be searched for. As a consequence they pollute the search results.

You can disable search by clicking on the "eye" icon and selecting the Only view option. The nodes or edges will then no longer be searchable, but will stay visible.

You can also remove them from search entirely by selecting the No access option. The nodes or edges will then be neither searchable nor visible.

The "eye" icon is updated to reflect the selected search/visibility option.

Switch to Strict mode

The button "Switch to strict mode" allows you to switch to a strict mode. See the dedicated section for more information.

Reset the schema

If the schema has changed a lot and you want to start fresh, it is possible to "Reset the schema": this will remove any schema declaration, and trigger a new sampling of the database.

Add more sampling

If the schema is too incomplete (due for example to a recent import in the graph database with lots of new node categories / edge types / properties) it's possible to launch manually another sampling round to detect the missing categories / types / properties, using the Re-launch detection option in the Options menu.

Sample size

The sampling of the database will scan only part of the database. The larger the sample, the more accurate the detection, but the longer it takes.

The default setting for the sampling size is 500 nodes per node category, 500 edges per edge type. You can change this by editing the value of sampledItemsPerType in the Advanced section of the configuration (via Admin > Global configuration).
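As an illustrative sketch, raising the sampling size in production.json could look like the following (the exact nesting of the advanced section may differ in your version; only the sampledItemsPerType key is documented above):

```json
{
  "advanced": {
    "sampledItemsPerType": 2000
  }
}
```

A larger value makes detection slower but more complete; the default of 500 is a reasonable starting point for most databases.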

Fix search inconsistencies

When Linkurious Enterprise indexes your data, it only indexes nodes and edges whose categories and properties are marked as searchable.

In the Linkurious Enterprise interface, to make a category or property searchable, select the View & search option. To make it non-searchable, select the View only option.

For example, if the schema declares:

you will be able to search by "last_name" but not by "first_name". To enable search by "first_name" as well, mark that property as searchable. If you do this after indexation has occurred, a warning on the schema configuration page will indicate that there are inconsistencies to resolve before those properties can be searched. A full indexation is required to fix the issue.

If a property has been changed from searchable to view-only, a different message is shown, indicating that the current search index is not optimized: more properties are indexed than are actually needed for search. Running a full indexation optimizes the search index.

 Data Schema: Property types

The type of a property can be set by clicking on the "Set a type" button next to a property's name. A popup allows you to choose a type.

Once a property type has been set, the type is displayed next to the property name, and it is possible to change it using the "Edit" link.

IMPORTANT

Setting a property type in Linkurious Enterprise will not change the type of the corresponding values in the graph database. The consequences are explained below for each property type.

Property types

String

Settings

Setting a property type as String does not require any additional information:

Consequences

Once a property type has been set as a String, all values for that property are processed as a String.

In the Workspace Filter panel:

Number

Settings

Setting a property type as Number does not require any additional information:

Consequences

Once a property type has been set as a Number, values for that property fall into 2 categories:

In the Workspace Property panel, invalid values are displayed as strings with a warning.

In the Workspace Filter panel:

When editing or creating a node/edge, properties set as Number can only accept a valid number.

Enum

Settings

Setting a property type as "Enum" requires to enter a list of authorized values:

Consequences

Once a property type has been set as Enum, values for that property fall into 2 categories:

In the Workspace Property panel, invalid values are displayed as strings with a warning.

In the Workspace Filter panel:

When editing or creating a node/edge, the user picks the value of an Enum property from the list of authorized values.

True/False

Settings

Setting a property type as True/false does not require any additional information:

Consequences

Once a property type has been set as a True/false, values for that property fall into 2 categories:

In the Workspace Property panel, invalid values are displayed as strings with a warning.

In the Workspace Filter panel:

When editing or creating a node/edge, the user picks the value of a True/false property from a list containing "true" and "false".

Date

Settings

Setting a property type as Date requires selecting the storage format of the date in the graph database:

Consequences

Once a property type has been set as Date, values for that property fall into 2 categories:

In the Workspace Property panel:

Anywhere a date value is displayed (including the Property panel):

In the Workspace Filter panel:

When editing or creating a node/edge, the user picks the value of a Date property from a Date picker.

Date-time

Settings

Setting a property type as Datetime requires selecting the storage format of the date in the graph database:

If the Date-time is stored as a Native type in Neo4j, there are 2 possible options:

Consequences

Once a property type has been set as Datetime, values for that property fall into 2 categories:

In the Workspace Property panel:

Anywhere a date value is displayed (including the Property panel):

In the Workspace Filter panel:

When editing or creating a node/edge, the user picks the value of a Datetime property from a Date picker.

Mandatory properties

Each property can be set as "Mandatory" by clicking the corresponding checkbox in the property type popup.

Consequences of making a property mandatory:

In the Workspace Property panel:

When editing or creating a node/edge, the value of a mandatory property cannot be empty.

Property type conflict

When nodes have multiple categories, the same property can be given two different types. For example:

When such conflicts arise:

 Data Schema: Hiding data

Switching search/visibility options for node-categories and edge-types

The schema allows you to switch the search/visibility options for node-categories and edge-types by clicking on the "eye" icon next to the category/type name.

The search option of node-categories and edge-types can be set to "Search & view", "View only" or "No access".

Switching off the visibility of a node-category (or an edge-type) can be useful when some data is stored in the database for technical reasons but is not useful to users.

Switching a node-category (or an edge-type) to "Only view" will make them not searchable, but they can still appear in the visualisation as result of a query or of a graph expansion.

Switching a node-category (or an edge-type) to "No access" is equivalent to setting it to the "No Access" access right for all existing and future User Groups. Nobody will have access to this category / type.

When switching a node-category (or an edge-type) to "No access", a warning is displayed, since it may be used in some existing visualizations, and may impact users.

Switching properties search options

Switching a property's search options can be done by clicking on the "eye" icon next to the property name.

Properties can be hidden from all users by switching them to "No access". A "No access" property:

Properties can be made visible but not searchable by switching them to "Only view". An "Only view" property:

 Data Schema: Strict mode

Strict editing / creation templates

Contrary to the "partial mode", when in "strict mode", you cannot deviate from the data schema when creating or editing a node or an edge.

More specifically, since all properties are typed, you cannot enter a value that is inconsistent with the schema or add arbitrary properties to nodes and edges.

A property must be declared in the schema first, before being available to users.

Switching to "strict mode"

The schema can be switched to "strict mode" by clicking on the "Switch to strict mode" button.

The "strict mode" requires to have types declared for all properties. As a consequence, when enabling "strict mode", all properties that do not have a type are switched to No access.

When switching to "strict mode", a warning is displayed with the list of properties that are being automatically switched to No access.

Changing the schema in "strict mode"

A schema in "strict mode" cannot evolve without an explicit action from the administrator. That means that any node-category, edge-type or property that would be detected after switching to "strict mode" would be added to the schema but be switched to No access by default.

A schema administrator can add new node-categories, edge-types or properties manually.

A schema in "strict mode" should be stable, so it is non-editable by default. To enable schema edition, the schema administrator needs to click on the corresponding toggle button.

 Alerts

Alerts are a way to watch your graph database for specific patterns and be notified when such a pattern appears in the data. A simple triage interface lets users navigate through pattern matches and flag them as confirmed or dismissed.

Alerts are currently only supported with Neo4j.

If you are interested in alerts from other graph vendors, please get in touch.

How alerts work

To create an alert, an administrator needs to:

Using the Run query button, you can preview results for the current query:

Additional columns can be configured using explicitly returned values, as done here with n.length.

Clicking Next, a secondary configuration page will ask for the name of the alert and the frequency at which the query will be run.

After configuring and enabling an alert, it can be accessed by users in the Alerts panel:

Opening an alert lists all recent matches, sortable by any column:

Clicking on a match opens it for details:

 Alerts: Configuration

To enable, disable and customize the alerts behavior see how to configure Linkurious Enterprise.

The following options are available under the alerts key:

Example of alerts configuration:

"alerts": {
  "enabled": true,
  "maxMatchTTL": 0,
  "maxMatchesLimit": 5000,
  "maxRuntimeLimit": 600000,
  "maxConcurrency": 1
}

 Visualizations appearance

Linkurious Enterprise allows you to customize the appearance of visualizations by modifying the ogma.options settings.

To edit the Ogma settings, you can either use the Web user-interface or edit the configuration file located at linkurious/data/config/production.json.

Example configuration:

{
  "renderer": "webgl",
  "options": {
    "styles": {
      "node": {
        "nodeRadius": 5,
        "shape": "circle",
        "text": {
          "minVisibleSize": 24,
          "maxLineLength": 35,
          "backgroundColor": null,
          "font": "roboto",
          "color": "#000",
          "size": 14,
          "maxTextLength": 60
        }
      },
      "edge": {
        "edgeWidth": 1,
        "shape": "arrow",
        "text": {
          "minVisibleSize": 4,
          "maxLineLength": 35,
          "backgroundColor": null,
          "font": "roboto",
          "color": "#000",
          "size": 14,
          "maxTextLength": 60
        }
      }
    },
    "interactions": {
      "zoom": {
        "modifier": 1.382
      },
      "pan": {
      },
      "rotation": {
        "enabled": false
      }
    },
    "backgroundColor": "rgba(240, 240, 240, 0)"
  }
}

Supported ogma.options settings are available here.

In addition to what you find in the Ogma documentation, Linkurious Enterprise allows the following extra configuration keys:

 Visualizations appearance: Default styles

In Linkurious Enterprise you can customize the default visual aspect of nodes, edges and edge groups for new visualizations; your users can then jump head first into the exploration of the data.

Styles can be configured individually by each user using the Design panel. Default values can be configured for all users by an administrator.

Styles belong to one particular data-source. To set the default styles of a data-source, you have two options:

By default, every node category has a pre-assigned color.

Inside Default Styles, the nodes, edges and edgeGroup sections define the default styles for nodes, edges and edge groups respectively.

A style rule has the following elements:

All grouped edges of a data-source are styled with the same configuration. If we want to customize their style, we need to specify the color, shape and width directly in the dedicated section (see the Style edge groups section for details). In this case the Selector is not needed.

The input field is an array containing a sequence of strings identifying the path to an item in the JSON representation of a visualization object (node or relationship). Supported paths are:

For example:

{
  "index": 3,
  "itemType": "COMPANY",
  "type": "is",
  "input": ["properties", "name"],
  "value": "linkurious",
  "style": {
    "color": "blue"
  }
}

The above rule will apply the style {"color": "blue"} to all nodes with category "COMPANY" where the name is "linkurious".

{
  "index": 4,
  "type": "range",
  "itemType": "COMPANY",
  "input": ["statistics", "degree"],
  "value": {
    ">": 20
  },
  "style": {
    "size": "150%"
  }
},
{
  "index": 5,
  "type": "novalue",
  "itemType": "COMPANY",
  "input": ["statistics", "degree"],
  "style": {
    "size": "200%"
  }
}

The above rule will apply the style {"size": "150%"} to all nodes with category "COMPANY" that are connected to more that 20 other nodes and the style {"size": "200%"} to all supernodes with category "COMPANY"

index has to be unique. It is currently required for technical reasons. This will be made more user-friendly in future releases.

Selectors

The selector specifies which items the style is applied to. For example, you can configure all the "COMPANY" nodes founded more than 12 years ago to have a particular style. To do so, we use a style rule with a range type and with value:

{
  ">": 12
}

The overall style rule will look like the following (assuming we want to color the nodes in red):

{
  "index": 3,
  "type": "range",
  "itemType": "COMPANY",
  "input": ["properties", "age"],
  "value": {
    ">": 12
  },
  "style": {
    "color": "red"
  }
}

For range queries you can use one or more among the following operators: >, <, >=, <=.
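Building on the rule above, a sketch of a range value combining two of these operators to target an interval could look like the following (the index, property name and style are illustrative):

```json
{
  "index": 6,
  "type": "range",
  "itemType": "COMPANY",
  "input": ["properties", "age"],
  "value": {
    ">=": 10,
    "<": 20
  },
  "style": {
    "color": "orange"
  }
}
```

When several operators are given, an item must satisfy all of them for the style to apply.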

Supported selectors

{
  "type": "is",
  "input": ["properties", "name"],
  "value": "linkurious",
  // .. 
}

In addition to type, input and value, you must always specify itemType to filter by node category or edge type, unless type is any.

Styles

Colors

Set under the style property key an object with one key, color, e.g:

"style": {
  "color": "blue" // or "#0000FF", "rgba(0, 0, 255, 1)" 
}
"style": {
  "color": {
    "type": "auto", 
    "input": ["properties", "propertyName"]
  }
}

The color style for nodes, edges and edge groups has the same format.

Sizes

For nodes, set under the style property key an object with one key, size, e.g:

"style": {
  "size": "220%"
}

For edges, it is quite similar: set under the style property key an object with one key, width, e.g:

"style": {
  "width": "220%"
}

Automatic Resizing

Similar to setting the size manually, it is also possible to set an automatic resizing rule with respect to user-defined property names, e.g:

"style": {
 "size": {
   "type": "autoRange",
   "input": ["properties", "age"]
  }
}

In the example above, all selected nodes and edges will be resized with respect to property age.

Edges and nodes will be resized linearly based on their property values.

Nodes will be resized in a range from 50% (the smallest) to 500% (the biggest).

Edges will be resized in a range from 50% (the smallest) to 200% (the biggest).

Shapes

Set under the style property key an object with one key, shape.

For nodes, set the shape of the node. Possible values are: "circle" (default), "cross", "diamond", "pentagon", "equilateral", "square" or "star".

"style": {
  "shape": "star" // "circle", "cross", "diamond", "pentagon", "equilateral", "square" or "star" 
}

For edges and edge groups, set the shape of the edge. Possible values are: "arrow" (default), "dashed", "dotted", "line" or "tapered".

"style": {
  "shape": "dotted" // "arrow", "dashed", "dotted", "line" or "tapered" 
}

Custom node icons

You can host your custom icons in Linkurious Enterprise itself by storing them in the folder located at linkurious/data/server/customFiles/icons.

Users will find them in the Design panel:

If you want to edit style rules manually, the style rules to access these images would look like:

"style": {
  "image": {
    "url": "/icons/company.png"
  }
}

Advanced custom node icons

Nodes can be filled with an image if one of their properties is a URL to an image. Supported image formats are PNG, JPG, GIF, or TIFF.

The following style will set an image:

Example:

"style": {
  "image": {
    "url": "http://example.com/img/company.png"
  }
}

To dynamically assign an image to a node, for example if the logo is stored in a node property called "logo_url", you just need to set the following style:

"style": {
  "image": {
    "url": {
      "type": "data",
      "path": [
       "properties",
       "logo_url" // change it to the property key where your image urls are stored 
      ]
    }
  }
}

If you want to resize the image inside a node, you can use the additional properties scale, fit and tile, e.g.:

"style": {
  "image": {
    "url": ... // one of the above 
    "scale": 0.8, // scale the image in the node 
    "fit": false, // if true, fill the node with the image 
    "tile": false // if true, repeat the image to fill the node 
  }
}

Style edge groups

Within the edgeGroup property key, we can directly set the color, shape and width that will apply to all grouped edges of a data-source.

"edgeGroup": {
  "color": "red",
  "shape": "dashed", // "arrow", "dashed", "dotted", "line" or "tapered" 
  "width": "320%"
 }

Editing the default styles in the data-source page automatically changes the styles applied to newly created visualizations for existing users. Existing visualizations are not touched.

 Visualizations appearance: Default captions

In Linkurious Enterprise, node and edge captions are the texts displayed next to a node or an edge.

Captions, like Styles, can be configured individually by each user using the Design panel.

As for Styles, an administrator can set the default values from their own workspace or by editing them in the data-source configuration page.

Inside Default Captions, the nodes and the edges sections define the default captions for nodes and edges respectively.

Each of these two sections is an object where the keys are node categories or edge types and the values are objects with the following keys:

Example:

"defaultCaptions": {
  "nodes": {
    "CITY": {
      "active": true,
      "displayName": true,
      "properties": ["name"]
    },
    "COMPANY": {
      "active": true,
      "displayName": false,
      "properties": ["name", "country"]
    }
  },
  "edges": {
    "INVESTED_IN": {
      "active": true,
      "displayName": true,
      "properties": ["funded_month"]
    }
  }
}

Editing the default captions in the data-source page automatically changes the captions of newly created visualizations for existing users. Existing visualizations are not touched.

 Visualizations appearance: Geographic tiles

Linkurious Enterprise supports displaying nodes with geographic coordinates (latitude and longitude) on a map.

Users can switch a visualization to geo mode when geographic coordinates are available on at least one node of the visualization. The map tiles layer used in geo mode can be customized by users.

By default, Linkurious Enterprise comes pre-configured with several geographical tile layers. Administrators can change the available geographical tile layers by editing the leaflet section in the configuration file (linkurious/data/config/production.json). The leaflet key is an array of geographical tile layer configurations. Each entry has the following attributes:

Geographical tile layers and overlay layers can be found at https://leaflet-extras.github.io/leaflet-providers/preview/.

Example configuration:

"leaflet": [
  {
    "overlay": true,
    "name": "Stamen Toner Lines",
    "thumbnail": "",
    "urlTemplate": "http://stamen-tiles-{s}.a.ssl.fastly.net/toner-lines/{z}/{x}/{y}.png",
    "attribution": "Map tiles by <a href="http://stamen.com">Stamen Design</a>, <a href="http://creativecommons.org/licenses/by/3.0">CC BY 3.0</a> &mdash; Map data &copy; <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a>", 
    "subdomains": "abcd",
    "id": null,
    "accessToken": null,
    "minZoom": 2,
    "maxZoom": 20
  },
  {
    "name": "MapBox Streets",
    "thumbnail": "/assets/img/MapBox_Streets.png",
    "urlTemplate": "https://api.tiles.mapbox.com/v4/{id}/{z}/{x}/{y}.png?access_token={accessToken}",
    "attribution": "Map data &copy; <a href="http://openstreetmap.org">OpenStreetMap</a>, <a href="http://creativecommons.org/licenses/by-sa/2.0/">CC-BY-SA</a>, Imagery &copy <a href="http://mapbox.com">Mapbox</a>", 
    "subdomains": null,
    "id": "mapbox.streets",
    "accessToken": "pk.eyJ1Ijoic2hleW1hbm4iLCJhIjoiY2lqNGZmanhpMDAxaHc4bTNhZGFrcHZleiJ9.VliJNQs7QBK5e5ZmYl9RTw",
    "minZoom": 2,
    "maxZoom": 20
  }
]

 Audit trail

Audit trails are detailed logs of the operations performed on your graph databases by your users. Because they can take up a substantial amount of disk space, depending on the number of users and the operations performed, they are disabled by default.

Configuration

Audit trails can be enabled and configured in linkurious/data/config/production.json. They are found under the auditTrail key, which contains the following options:
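As an illustrative sketch, a configuration that logs only write operations without their results could look like the following (only the enabled, mode and logResult keys are referenced elsewhere in this manual; other keys and exact defaults may differ in your version):

```json
"auditTrail": {
  "enabled": true,
  "mode": "w",
  "logResult": false
}
```

This combination is also the strategy suggested in the log-format section below for keeping log sizes down.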

Performance

Enabling the audit trail can negatively impact performance.

Here are options to consider to improve performance:

 Audit trail: Log format

The audit trail log files are in JSONL format: one JSON object per line.

You can easily bind a log-management system like Logstash to interpret them.

Each log line contains:

Lines are written to the log as follows:

{"mode":"WRITE","date":"2017-01-09T17:34:07.446Z","user":"simpleUser@example.com","sourceKey":"e8890b53","action":"createEdge","params":{"createInfo":{"source":4328,"target":4332,"type":"ACTED_IN","data":{"tata":"toto"}}},"result":{"edge":{"id":5958,"data":{"tata":"toto"},"type":"ACTED_IN","source":4328,"target":4332}}}
{"mode":"READ","date":"2017-01-09T17:34:07.478Z","user":"simpleUser@example.com","sourceKey":"e8890b53","action":"getNode","params":{"id":4330},"result":{"node":{"id":4330,"data":{"tagline":"Welcome to the Real World","title":"The Matrix","released":1999,"nodeNoIndexProp":"foo"},"categories":["Movie","TheMatrix"]}}}
{"mode":"READ","date":"2017-01-09T17:34:07.507Z","user":"simpleUser@example.com","sourceKey":"e8890b53","action":"getEdge","params":{"edgeId":5950},"result":{"edge":{"id":5950,"data":{"edgeNoIndexProp":"bar","roles":["Neo"]},"type":"ACTED_IN","source":4313,"target":4330}}}
{"mode":"READ WRITE","date":"2017-01-09T17:34:12.253Z","user":"user@linkurio.us","sourceKey":"e8890b53","action":"rawQuery","params":{"query":"MATCH (n:Person) RETURN n","dialect":"cypher"},"result":{"nodes":[{"id":4357,"data":{"born":1967,"name":"Andy Wachowski"},"categories":["Person"],"edges":[]},{"id":4359,"data":{"born":1967,"name":"Carrie-Anne Moss"},"categories":["Person"],"edges":[]},{"id":4360,"data":{"born":1954,"name":"James Cameron"},"categories":["Person"],"edges":[]},{"id":4361,"data":{"born":1964,"name":"Keanu Reeves"},"categories":["Person"],"edges":[]},{"id":4362,"data":{"born":1965,"name":"Lana Wachowski"},"categories":["Person"],"edges":[]},{"id":4364,"data":{"born":1901,"name":"Phillip Cameron"},"categories":["Person"],"edges":[]},{"id":4365,"data":{"born":1976,"name":"Sam Worthington"},"categories":["Person"],"edges":[]}]}}

Each line is a JSON object in the following format (with logResult set to true):

{
  "mode": "WRITE",
  "date": "2017-01-09T17:34:07.446Z",
  "user": "simpleUser@example.com",
  "sourceKey": "e8890b53",
  "action": "createEdge",
  "params": {"createInfo":{"source":4328,"target":4332,"type":"ACTED_IN","data":{"tata":"toto"}}},
  "result": {"edge":{"id":5958,"data":{"foo":"bar"},"type":"BAZ","source":4328,"target":4332}}
}

The params key contains the parameters of the operation being performed. In this case, the user has run a createEdge action; params then contains the source and target IDs of the edge, as well as the edge type and a data key which holds the properties set on that edge.

result contains a JSON representation of the edge produced by the action. As can be seen, there is a substantial amount of duplication between the information in params and in result. This may not be a concern for operations on single nodes, but with larger collections it can mean that the log size increases substantially. Consider the example below:

{
  "mode": "READ WRITE",
  "date": "2017-01-09T17:34:12.289Z",
  "user": "user@linkurio.us",
  "sourceKey": "e8890b53",
  "action": "rawQuery",
  "params": {
    "query": "MATCH (n1)-[r:DIRECTED]->(n2) RETURN n1, r",
    "dialect": "cypher"
  },
  "result":{"nodes":[{"id":4357,"data":{"born":1967,"name":"Andy Wachowski"},"categories":["Person"],"edges":[{"id":6009,"data":{},"type":"DIRECTED","source":4357,"target":4366},{"id":6010,"data":{},"type":"DIRECTED","source":4357,"target":4367},{"id":6011,"data":{},"type":"DIRECTED","source":4357,"target":4368}]},{"id":4358,"data":{"tagline":"Return to Pandora","title":"Avatar","released":1999},"categories":["Avatar","Movie"],"edges":[{"id":6034,"data":{},"type":"DIRECTED","source":4360,"target":4358}]},{"id":4360,"data":{"born":1954,"name":"James Cameron"},"categories":["Person"],"edges":[{"id":6034,"data":{},"type":"DIRECTED","source":4360,"target":4358}]},{"id":4362,"data":{"born":1965,"name":"Lana Wachowski"},"categories":["Person"],"edges":[{"id":6020,"data":{},"type":"DIRECTED","source":4362,"target":4366},{"id":6021,"data":{},"type":"DIRECTED","source":4362,"target":4367},{"id":6022,"data":{},"type":"DIRECTED","source":4362,"target":4368}]},{"id":4366,"data":{"tagline":"Welcome to the Real World","title":"The Matrix","released":1999,"nodeNoIndexProp":"foo"},"categories":["Movie","TheMatrix"],"edges":[{"id":6020,"data":{},"type":"DIRECTED","source":4362,"target":4366},{"id":6009,"data":{},"type":"DIRECTED","source":4357,"target":4366}]},{"id":4367,"data":{"tagline":"Free your mind","title":"The Matrix Reloaded","released":2003},"categories":["Movie","TheMatrixReloaded"],"edges":[{"id":6021,"data":{},"type":"DIRECTED","source":4362,"target":4367},{"id":6010,"data":{},"type":"DIRECTED","source":4357,"target":4367}]},{"id":4368,"data":{"tagline":"Everything that has a beginning has an end","title":"The Matrix Revolutions","released":2003},"categories":["Movie","TheMatrixRevolutions"],"edges":[{"id":6022,"data":{},"type":"DIRECTED","source":4362,"target":4368},{"id":6011,"data":{},"type":"DIRECTED","source":4357,"target":4368}]}]}
}

In this case, we've used a raw query to read nodes from the database without making any changes to them. The results of the query are returned to us in result, and include a substantial number of nodes and edges. To maximize the usefulness of audit trails and to minimize their footprint, it might be advisable to exclude unnecessary data, including passive queries such as these. Disabling logResult and setting mode to "w" (logging only operations which perform write operations to the database) is one strategy for accomplishing this.
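To illustrate the impact of that strategy: with logResult disabled, the createEdge line shown earlier would shrink to something like the following sketch (the result key is simply omitted; the exact shape may vary by version):

```json
{"mode":"WRITE","date":"2017-01-09T17:34:07.446Z","user":"simpleUser@example.com","sourceKey":"e8890b53","action":"createEdge","params":{"createInfo":{"source":4328,"target":4332,"type":"ACTED_IN","data":{"tata":"toto"}}}}
```

With mode set to "w", read-only operations such as the rawQuery above would not be logged at all.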

 Plugins

What is a plugin?

A plugin is software running on top of Linkurious Enterprise that extends its capabilities. For example, a plugin can add the ability to import data into Linkurious Enterprise via CSV files.

What is an official Linkurious Enterprise plugin?

Official plugins are plugins built, distributed, and maintained by the Linkurious team. Customers and 3rd-party developers can easily create their own plugins using our detailed Plugins Development Guide.

What are the currently available official Linkurious Enterprise Plugins?

What do I need to get started installing a plugin?

You need to be able to add files to the "data" folder of Linkurious Enterprise and have a basic understanding of JSON files. You also need admin access to Linkurious Enterprise to edit the configuration and start or restart plugins. Plugins are usually triggered by custom actions, so you also need to be able to create custom actions so that users can easily access the plugin.

How do I install plugins?

Each plugin comes with clear instructions on how to install it; here is a detailed step-by-step example.

How do I install official plugins while using Linkurious Enterprise under Docker?

If you are using Docker, plugins can be installed by setting the LKE_PLUGINS environment variable like this:

LKE_PLUGINS='["plugin-name"]'

You can choose from the following plugin names:

What do I need to be able to use the plugin?

Once installed, any user can use the plugin from Linkurious Enterprise through custom actions.

Are plugins compatible with all versions of Linkurious Enterprise?

Compatibility is mentioned on the download page of each plugin: look at the minimum and maximum supported Linkurious Enterprise versions.

What can I expect about maintenance and bug fixes?

If you are facing problems with a plugin, follow these steps:

If you are still facing problems, please get in touch with support while keeping this in mind: