+ + +Some TriplyDB instances expose a GraphQL endpoint. This endpoint uses information from user-provided SHACL shapes for the schema creation.
+The goal of this documentation is to inform users about Triply's implementation of the GraphQL endpoint. For more generic information about GraphQL, you can visit graphql.org or other resources. In order to understand this documentation, you have to be familiar with the SHACL language.
+Note: in order to avoid confusion, we will use the noun `object` as a synonym for `resource`, and `triple object` when referring to the third element of a triple.
A basic element of the schema is the object type, which represents the type of the resources that you can query.
+type Book {
+ id:ID!
+ title:[XsdString]!
+}
+
+This object type corresponds to the shape below:
+shp:Book
+ a sh:NodeShape;
+ sh:targetClass sdo:Book;
+ sh:property
+ [ sh:path dc:title;
+ sh:datatype xsd:string ].
+
+Fields in object types, such as `title`, represent properties of nodes. By default, fields return arrays of values. The only exception is when the property has `sh:maxCount 1`; then the field returns a single value.
+Thus, for the shape:
shp:Book
+ a sh:NodeShape;
+ sh:targetClass sdo:Book;
+ sh:property
+ [ sh:path dc:title;
+ sh:maxCount 1;
+ sh:datatype xsd:string ].
+
+The object type will be:
+type Book {
+ id:ID!
+ title:XsdString
+}
+
+Additionally, following the best practices, fields can give null results, except for:
+- IDs, which represent the IRI of the resource
+- Lists (but not their elements)
+- Properties that have `sh:minCount 1` and `sh:maxCount 1`
Thus, for this shape:
+shp:Book
+ a sh:NodeShape;
+ sh:targetClass sdo:Book;
+ sh:property
+ [ sh:path dc:title;
+ sh:maxCount 1;
+ sh:minCount 1;
+ sh:datatype xsd:string ].
+
+The corresponding object type is:
+type Book {
+ id:ID!
+ title:XsdString!
+}
+
+If the property shape includes an `sh:datatype`, the field returns values of a GraphQL scalar type (see example above). On the other hand, if the property shape has an `sh:class` pointing to a class that:
+- is the `sh:targetClass` of a node shape, the field returns values of the corresponding object type.
+- is not mentioned as a `sh:targetClass` in a node shape, then the type of the returned values is `ExternalIri`.
Therefore, the shapes:
+shp:Book
+ a sh:NodeShape;
+ sh:targetClass sdo:Book;
+ sh:property
+ [ sh:path sdo:author;
+ sh:class sdo:Person ],
+ [ sh:path sdo:audio;
+ sh:class sdo:AudioObject ].
+
+shp:Person
+ a sh:NodeShape;
+ sh:targetClass sdo:Person;
+ sh:property
+ [ sh:path sdo:name;
+ sh:datatype xsd:string ].
+
+correspond to the below graphql types:
+type Book {
+ id:ID!
+ author:[Person]!
+ audio:[ExternalIri]!
+}
+
+type Person {
+ id:ID!
+ name:[XsdString]!
+}
+
+The id field is of type ID, which represents the IRI of each resource. This ID is unique.
+For example:
+book:Odyssey
+ a sdo:Book;
+ dct:title "Odyssey".
+
+The id field of this resource would be `https://example.org/book/Odyssey`.
+You can read more information on the `ID` scalar on graphql.org. The use of the `id` field is also discussed later, in the section Object Global Identification.
+In order to name the GraphQL types in correspondence to shapes, we follow the conventions below:
+- For object types, we use the `sh:targetClass` of the node shape.
+- For object type fields, we use the `sh:path` of the property shape.
+More specifically, the name comes from the part of the IRI after the last `#`, or otherwise after the last `/`, converted from kebab-case to camelCase.
Notice that if the selected name is illegal or causes a name collision, we'll return an error informing the user about the problem and ignore this type or field.
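To make the convention concrete, the derivation can be sketched as follows (a hypothetical helper for illustration, not part of the TriplyDB endpoint):

```typescript
// Hypothetical sketch of the naming convention described above:
// take the part of the IRI after the last '#' (or otherwise the last '/'),
// then convert kebab-case to camelCase.
function graphqlNameFromIri(iri: string): string {
  const hash = iri.lastIndexOf('#');
  const cut = hash >= 0 ? hash : iri.lastIndexOf('/');
  const local = iri.slice(cut + 1);
  return local.replace(/-([a-z])/g, (_m: string, c: string) => c.toUpperCase());
}
```

For example, `https://example.org/vocab#audio-object` would yield the name `audioObject`.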
+Shape designers are able to use their own custom names by means of a special property: `<https://triplydb.com/Triply/GraphQL/def/graphqlName>`.
+More specifically, the designer has to add a triple with:
+- for object types, the class IRI
+- for fields, the IRI of the property shape
+as the subject, the above-mentioned predicate, and a string literal with the custom name as the triple object.
+If we wanted to rename using the first example of the section, we would do:
+shp:Book
+ a sh:NodeShape;
+ sh:targetClass sdo:Book;
+ sh:property
+ [ sh:path dc:title;
+ triply:graphqlName "name"; # Rename the object type field
+ sh:datatype xsd:string ].
+
+sdo:Book
+ triply:graphqlName "PieceOfArt". # Rename the object type.
+
+Then the corresponding object type would be:
+type PieceOfArt {
+ id:ID!
+ name:[XsdString]!
+}
+
+The user can query for objects using their unique ID. They can also query for objects of a specific type along with fields, and get nested information. Last, the user can get information by filtering results. Let's look at some important concepts.
+For reasons such as caching, the user should be able to query an object by its unique ID. This is possible through global object identification, using the `node(id:ID)` query.
An example:
+{
+ node(id: "https://example.org/book/Odyssey") {
+ id
+ }
+}
+
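Assuming this book exists in the dataset, the response would look similar to:

```json
{
  "data": {
    "node": {
      "id": "https://example.org/book/Odyssey"
    }
  }
}
```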
+For more information on global object identification, see the GraphQL specification.
+A simple query would be:
+{
+ BookConnection {
+ edges {
+ node {
+ id
+ title
+ }
+ }
+ }
+}
+
+The results would include the IRIs of books together with their titles and would be paginated.
+In order to paginate through a large number of results, our GraphQL implementation supports cursor-based pagination using connections. For more information, please visit the Relay project's cursor-based connection pagination specification.
+When you query for objects, you might want to get back resources based on specific values in certain fields. You can do this by filtering.
+For example, you can query for people with a specific id:
+{
+ PersonConnection(filter: {id: "https://example.org/person/Homer"}) {
+ edges {
+ node {
+ id
+ name
+ }
+ }
+ }
+}
+
+Another query would be to search for a person with a specific name:
+{
+ PersonConnection(filter: {name: {eq: "Homer"}}) {
+ edges {
+ node {
+ id
+ name
+ }
+ }
+ }
+}
+
+Notice that in the second example there is a new field for filtering, called `eq`. When we want to filter on a field that returns a scalar, meaning that its value is represented by a literal in linked data, we have to use comparison operators: `eq` and `in` for equality, and `notEq` and `notIn` for inequality. The operators `in` and `notIn` refer to lists.
+On the other hand, when we are filtering based on IDs (or, in linked data terms, based on the IRI), as in the first example, we don't use comparison operators.
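For example, a hypothetical query using the `in` operator with a list of names (the listed values are assumptions) would be:

```graphql
{
  PersonConnection(filter: {name: {in: ["Homer", "Hesiod"]}}) {
    edges {
      node {
        id
        name
      }
    }
  }
}
```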
+The only idiomatic case is the literal with a language tag and `rdf:langString` as its datatype. Such a literal is represented as `{ value: "example-string", language: "en" }`, and the corresponding scalar is `RdfsLangString`. This means that in order to filter using a value of this scalar type, you have to execute the query below:
{
+ PersonConnection(filter: {name: {eq: {value: "Odysseus", language: "en"}}}) {
+ edges {
+ node {
+ id
+ name
+ }
+ }
+ }
+}
+
+Additionally, there is support for filtering results based on the language tag.
+An example is:
+person:Odysseus
+ a sdo:Person;
+ sdo:name
+ "Odysseus"@en,
+ "Οδυσσεύς"@gr.
+
+shp:Person
+ a sh:NodeShape;
+ sh:targetClass sdo:Person;
+ sh:property
+ [ sh:path sdo:name;
+ sh:datatype rdf:langString ].
+
+{
+ PersonConnection {
+ edges {
+ node {
+ id
+ name(language:"gr")
+ }
+ }
+ }
+}
+
+{
+ "data": {
+ "PersonConnection": {
+ "edges": [
+ {
+ "node": {
+ "id": "https://example.org/person/Odysseus",
+ "name": [
+ {
+ "value": "Οδυσσεύς",
+ "language": "gr"
+ }
+ ]
+ }
+ }
+ ]
+ }
+ }
+}
+
+Our implementation supports the HTTP Accept-Language syntax for filtering based on a language tag.
+For example,
+{
+ PersonConnection {
+ edges {
+ node {
+ id
+ name(language:"gr, en;q=.5")
+ }
+ }
+ }
+}
+
+{
+ "data": {
+ "PersonConnection": {
+ "edges": [
+ {
+ "node": {
+ "id": "https://example.org/person/Odysseus",
+ "name": [
+ {
+ "value": "Οδυσσεύς",
+ "language": "gr"
+ },
+ {
+ "value": "Odysseus",
+ "language": "en"
+ },
+ ]
+ }
+ }
+ ]
+ }
+ }
+}
+
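To illustrate how an Accept-Language value such as `gr, en;q=.5` determines the preference order, here is a hypothetical parser sketch (this mirrors the header syntax; it is not TriplyDB's actual implementation):

```typescript
// Hypothetical sketch: parse an Accept-Language style value into
// language tags ordered by their quality ('q') weights.
function rankLanguages(accept: string): string[] {
  return accept
    .split(',')
    .map(part => {
      const [tag, ...params] = part.trim().split(';');
      const qParam = params.map(p => p.trim()).find(p => p.startsWith('q='));
      // A missing q parameter defaults to 1, per the header syntax.
      const q = qParam === undefined ? 1 : Number(qParam.slice(2));
      return { tag: tag.trim(), q };
    })
    .sort((a, b) => b.q - a.q)
    .map(x => x.tag);
}
```

`rankLanguages('gr, en;q=.5')` yields `['gr', 'en']`, matching the response above, where the Greek name is preferred and English is the fallback.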
+If the writer of the shapes includes the `sh:uniqueLang` constraint, then a single value is returned instead of an array. Thus, the example becomes:
+person:Odysseus
+ a sdo:Person;
+ sdo:name
+ "Odysseus"@en,
+ "Οδυσσεύς"@gr.
+
+shp:Person
+ a sh:NodeShape;
+ sh:targetClass sdo:Person;
+ sh:property
+ [ sh:path sdo:name;
+ sh:uniqueLang true;
+ sh:datatype rdf:langString ].
+
+{
+ PersonConnection {
+ edges {
+ node {
+ id
+ name(language:"gr, en;q=.5")
+ }
+ }
+ }
+}
+
+{
+ "data": {
+ "PersonConnection": {
+ "edges": [
+ {
+ "node": {
+ "id": "https://example.org/person/Odysseus",
+ "name": {
+ "value": "Οδυσσεύς",
+ "language": "gr"
+ }
+ }
+ }
+ ]
+ }
+ }
+}
+
+Furthermore, nested filtering is possible:
+{
+ BookConnection(
+ filter: {author: {name: {eq: "Homer"}}}
+ ) {
+ edges {
+ node {
+ id
+ }
+ }
+ }
+}
+
+and filters can be combined:
+{
+ BookConnection(
+ filter: {author: {name: {eq: "Homer"}}, name: {eq: "Odyssey"}}
+ ) {
+ edges {
+ node {
+ id
+ }
+ }
+ }
+}
+
+Note: combined filters are evaluated with AND logic.
+ +SPARQL Construct and SPARQL Describe queries can return results in the JSON-LD format. Here is an example:
+[
+ {
+ "@id": "john",
+ "livesIn": { "@id": "amsterdam" }
+ },
+ {
+ "@id": "jane",
+ "livesIn": { "@id": "berlin" }
+ },
+ {
+ "@id": "tim",
+ "livesIn": { "@id": "berlin" }
+ }
+]
+
+JSON-LD is one of the serialization formats for RDF, and encodes a graph structure. For example, the JSON-LD snippet above encodes the following graph:
+The triples in a graph do not have any specific order. In our graph picture, the triple about Tim is mentioned first, but this is arbitrary. A graph is a set of triples, so there is no 'first' or 'last' triple. Similarly, there is no 'primary' or 'secondary' element in a graph structure either. In our graph picture, persons occur on the left-hand side and cities occur on the right-hand side. In fact, the same information can be expressed with the following graph:
+Most RESTful APIs return data with a specific, often tree-shaped structure. For example:
+{
+ "amsterdam": {
+ "inhabitants": [
+ "john"
+ ]
+ },
+ "berlin": {
+ "inhabitants": [
+ "jane",
+ "tim"
+ ]
+ }
+}
+
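To make this concrete, here is a sketch that reshapes the flat JSON-LD array from the earlier example into this tree structure (the `groupByCity` helper is purely illustrative, not part of TriplyDB):

```typescript
// Illustrative sketch: group flat person->city statements (as in the
// JSON-LD example) into the tree shape a RESTful API would return.
type Person = { '@id': string; livesIn: { '@id': string } };

function groupByCity(people: Person[]): Record<string, { inhabitants: string[] }> {
  const tree: Record<string, { inhabitants: string[] }> = {};
  for (const person of people) {
    const city = person.livesIn['@id'];
    if (tree[city] === undefined) tree[city] = { inhabitants: [] };
    tree[city].inhabitants.push(person['@id']);
  }
  return tree;
}
```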
+JSON-LD Framing is a standard that is used to assign additional structure to JSON-LD. With JSON-LD Framing, we can configure the extra structure that is needed to create RESTful APIs over SPARQL queries.
+JSON-LD Framing is a deterministic translation from a graph, which is an unordered set of triples where no node is "first" or "special", into a tree, which has ordered branches and exactly one "root" node. In other words, JSON-LD Framing allows one to force a specific tree layout onto a JSON-LD document. This makes it possible to translate SPARQL queries into REST APIs.
+The TriplyDB API for saved queries has been equipped with a JSON-LD profiler, which can apply a JSON-LD profile to a JSON-LD result, transforming the plain JSON-LD into framed JSON. To do this you need two things: a SPARQL Construct query and a JSON-LD frame. When you have both of these, you can retrieve plain JSON from a SPARQL query. The cURL command when both the SPARQL query and the JSON-LD frame are available is:
+curl -X POST [SAVED-QUERY-URL] \
+ -H 'Accept: application/ld+json;profile=http://www.w3.org/ns/json-ld#framed' \
+ -H 'Authorization: Bearer [YOUR_TOKEN]' \
+ -H 'Content-type: application/json' \
+ -d '[YOUR_FRAME]'
+
+When sending a curl request, a few things are important. First, the request needs to be a `POST` request; only a `POST` request can accept a frame as a body. The `Accept` header needs to be set to a specific value: it needs to contain both the expected content type and the JSON-LD profile, e.g. `application/ld+json;profile=http://www.w3.org/ns/json-ld#framed`. When querying an internal or private query, you need to add an authorization token. Finally, it is important to set the `Content-Type` header: it refers to the content type of the request body and needs to be `application/json`, as the frame is of type `application/json`.
+Let's start with the SPARQL query. A JSON-LD frame query needs a SPARQL Construct query to create an RDF graph that is self-contained and populated with relevant vocabulary and data. The graph in JSON-LD is used as input for the RESTful API call. The SPARQL Construct query can be designed with API variables.
+Do note that API variables inside `OPTIONAL`s can sometimes behave a bit differently than regular API variables. This is due to how SPARQL interprets `OPTIONAL`s: if an API variable is used in an `OPTIONAL`, the query will return false positives, as the `OPTIONAL` does not filter out results matching the API variable.
+Also note that the use of `UNION`s can have unexpected effects on the SPARQL query. A union can split up the result set of the SPARQL query: the SPARQL engine first exhausts the first part of the `UNION` and then starts with the second part. This means that the first part of the result set can be disconnected from the second part. If the limit is set too small, the result set is split over two different JSON-LD documents, which can result in missing data in the response.
+Finally, please note that it can happen that you set a `pageSize` of `10` but the response contains fewer than 10 results, while the next page is not empty. This is possible because the limit is applied to the result set of the `WHERE` clause, not to the Construct clause: two rows of the resulting `WHERE` clause can be condensed into a single result in the Construct clause. Thus the response of the API can differ from the `pageSize`.
+The result is a set of triples according to the query. Saving the SPARQL query will result in a saved query. The saved query has an API URL that we can now use in our cURL command. The URL usually starts with `api` and ends with `run`.
The saved query url of an example query is:
+https://api.triplydb.com/queries/JD/JSON-LD-frame/run
+
+You can use API variables by appending a query string, e.g. `?[queryVariable]=[value]`.
+The SPARQL query alone is not enough to provide the RDF data in a JSON serialization format: it requires additional syntactic conformities that cannot be defined in a SPARQL query. Thus the SPARQL query needs a frame to restructure the JSON-LD objects into JSON. The JSON-LD 1.1 standard allows for restructuring JSON-LD objects with a frame.
+A JSON-LD frame consists of two parts: the `@context` of the response, and the structure of the response. The complete specification of JSON-LD frames can be found online.
+The `@context` is the translation from the linked data to the JSON naming. In the `@context`, all the IRIs that occur in the JSON-LD response are documented with key-value pairs, where the key corresponds to the name the IRI will take in the REST API response and the value corresponds to the IRI in the JSON-LD response. Most of the time the key-value pairs are one-to-one relations, where one key is mapped to a single string. Sometimes the value is an object. The object contains at least the `@id`, which is the IRI in the JSON-LD response. The object can also contain other modifiers that change the REST API response, for example `@type` to define the datatype of the object value, or `@container` to define the container that the value in the REST API response is stored in. The context can also hold references to vocabularies or prefixes.
+The second part of the JSON-LD frame is the structure of the data. The structure defines what the REST API response will look like. Most of the time the structure starts with `@type` to denote the type that the root node should have; setting the `@type` is the most straightforward way of selecting your root node. The structure is built outward from the root node. You can define a leaf node in the structure by adding an opening and closing bracket, as shown in the example. To define a nested node, you first define a key that is an object property in the JSON-LD response pointing to another IRI. From that IRI the nested node is then created, filling in the properties of that node.
{
+ "@context": {
+ "addresses": "ex:address",
+ "Address": "ex:Address",
+ "Object": "ex:Object",
+ "street": "ex:street",
+ "number": {
+ "@id": "ex:number",
+ "@type": "xsd:integer"
+ },
+ "labels": {
+ "@id": "ex:label",
+ "@container": "@set"
+ },
+ "ex": "https://triply.cc/example/",
+ "xsd": "http://www.w3.org/2001/XMLSchema#"
+ },
+ "@type": "Object",
+ "labels": {},
+ "addresses": {
+ "street": {},
+ "number": {}
+ }
+}
+
+The JSON-LD frame together with the SPARQL query will now result in a REST-API result:
+curl -X POST https://api.triplydb.com/queries/JD/JSON-LD-frame/run \
+ -H 'Accept: application/ld+json;profile=http://www.w3.org/ns/json-ld#framed' \
+ -H 'Content-type: application/json' \
+ -d '{
+ "@context": {
+ "addresses": "ex:address",
+ "Address": "ex:Address",
+ "Object": "ex:Object",
+ "street": "ex:street",
+ "number": {
+ "@id": "ex:number",
+ "@type": "xsd:integer"
+ },
+ "labels": {
+ "@id": "ex:label",
+ "@container": "@set"
+ },
+ "ex": "https://triply.cc/example/",
+ "xsd": "http://www.w3.org/2001/XMLSchema#"
+ },
+ "@type": "Object",
+ "labels": {},
+ "addresses": {
+ "street": {},
+ "number": {}
+ }
+ }'
+
+The JSON-LD frame turns the SPARQL results for the query in step 1 into a plain RESTful API response.
+Another way to create a frame is by using the SPARQL editor in TriplyDB.
+You can access the JSON-LD editor by clicking the three dots next to the SPARQL editor, and then selecting "To JSON-LD frame editor".
+Afterwards, the JSON script from above should be added to the JSON-LD frame editor.
+Running the script results in the following REST API result:
+This can also be accessed via the generated API link above the SPARQL editor. Copying and pasting the generated link will direct you to a page where you can view the script:
+Applications (see TriplyDB.js) and pipelines (see TriplyETL) often require access rights to interact with TriplyDB instances. Specifically, reading non-public data and writing any (public or non-public) data requires setting an API token. The token ensures that only users that are specifically authorized for certain datasets are able to access and/or modify those datasets.
+The following steps must be performed in order to create an API token:
+Many organizations use their own TriplyDB server. If your organization does not yet have a TriplyDB server, you can also create a free account over at TriplyDB.com.
+Go to your user settings page. This page is reached by clicking on the user menu in the top-right corner and choosing “User settings”.
+Go to the “API tokens” tab.
+Click on “Create token”.
+Enter a name that describes the purpose of the token. This can be the name of the application or pipeline for which the API token will be used.
+You can use the name to manage the token later. For example, you can remove tokens for applications that are no longer used later on. It is good practice to create different API tokens for different applications.
+Choose the permission level that is sufficient for what you want to do with the API token. Notice that “Management access” is often not needed. “Read access” is sufficient for read-only applications. “Write access” is sufficient for most pipelines and applications that require write access.
+Management access: if your application must create or change organization accounts in the TriplyDB server.
+Write access: if your application must write (meta)data in the TriplyDB server.
+Read access: if your application must read public and/or private data from the TriplyDB server.
+Click the “Create” button to create your token. The token (a long sequence of characters) will now appear in a dialog.
+For security reasons, the token will only be shown once. You can copy the token over to the application where you want to use it.
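After copying the token, it is good practice to store it in an environment variable instead of hard-coding it in application code; the TriplyDB.js snippets in this documentation read it from `process.env.TRIPLYDB_TOKEN`:

```shell
# Replace the placeholder with the token copied from the dialog.
export TRIPLYDB_TOKEN='<your-token>'
```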
+This page explains how to retrieve all results from a SPARQL query using pagination.
+Often SPARQL queries can return more than 10,000 results, but due to limitations the result set will only contain the first 10,000 results. To retrieve more than 10,000 results you can use pagination. TriplyDB supports two methods to retrieve all results from a SPARQL query: pagination with the saved query API, or pagination with TriplyDB.js.
+Each TriplyDB instance has a fully RESTful API. The TriplyDB RESTful API is extended for saved SPARQL queries: the API for saved queries accepts two additional arguments with which the query can process paginated result sets. The arguments are `page` and `pageSize`. An example of a paginated saved SPARQL query request would look like:
+https://api.triplydb.com/queries/academy/pokemon-color/run?page=3&pageSize=100
The example request argument `page` corresponds to the requested page. In the example request this would correspond to the third page of the paginated SPARQL query, according to the `pageSize`. There is no maximum `page` limit, as a SPARQL query could return an arbitrary number of results. When no results can be retrieved for the requested page, an empty page will be returned.
+The argument `pageSize` corresponds to how many results each page contains. The `pageSize` has a default of 100 returned results and a maximum limit of 10,000 returned results. The request will return an error when the `pageSize` is set higher than 10,000.
+The RESTful API for the saved SPARQL queries follows the RFC 8288 standard.
+The request will return a response body containing the result set and a response header. The response header contains a link header with the relative "next" request, the relative "prev" request, and the relative "first" request. By following the "next" link header request you can chain the pagination and retrieve all results.
+link:
+ <https://api.triplydb.com/queries/academy/pokemon-color/run?page=4&pageSize=100>; rel="next",
+ <https://api.triplydb.com/queries/academy/pokemon-color/run?page=2&pageSize=100>; rel="prev",
+ <https://api.triplydb.com/queries/academy/pokemon-color/run?page=1&pageSize=100>; rel="first"
+
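To chain pages programmatically, you can extract the `rel="next"` target from the link header; a minimal sketch, assuming the header value is available as a single string:

```typescript
// Minimal sketch: extract the rel="next" URL from an RFC 8288 Link header.
function nextLink(linkHeader: string): string | undefined {
  for (const part of linkHeader.split(',')) {
    const match = part.match(/<([^>]+)>;\s*rel="next"/);
    if (match) return match[1];
  }
  return undefined; // no "next" relation: the last page was reached
}
```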
+TriplyDB.js is the official programming library for interacting with TriplyDB. TriplyDB.js allows the user to connect to a TriplyDB instance via the TypeScript language. TriplyDB.js has the advantage that it can handle pagination internally so it can reliably retrieve a large number of results.
+To get the output for a `construct` or `select` query, follow these steps:
+1. Import the TriplyDB.js library and set your parameters regarding the TriplyDB instance, the account in which you have saved the query, and the name of the query. Do not forget that we perform TriplyDB.js requests within an async context.
+import Client from '@triply/triplydb'
+async function run() {
+ // Your code goes here.
+ const client = Client.get({token: process.env.TRIPLYDB_TOKEN})
+ const account = await client.getAccount('account-name')
+ const query = await account.getQuery('name-of-some-query')
+}
+run()
+
+2. Get the results of a query by setting a `results` variable. More specifically, for construct queries you use the `statements()` call:
const query = await account.getQuery('name-of-some-query')
+const results = query.results().statements()
+
+For select queries you use the `bindings()` call:
const query = await account.getQuery('name-of-some-query')
+const results = query.results().bindings()
+
+Additionally, saved queries can have 'API variables' that allow you to specify variables that are used in the query. Thus, if you have query parameters, pass their values as the first argument to `results`, as follows:
// For SPARQL construct queries.
+const results = query.results({
+ someVariable: 'value of someVariable',
+ anotherVariable: 'value of anotherVariable'
+}).statements()
+// For SPARQL select queries.
+const results = query.results({
+ someVariable: 'value of someVariable',
+ anotherVariable: 'value of anotherVariable'
+}).bindings()
+
+3. To iterate over the results of your SPARQL query, you have three options:
+3.1. Iterate through the results per row in a `for`-loop:
// Iterating over the results.
+for await (const row of results) {
+ // execute something
+}
+
+Note: for select queries the `for`-loop iterates over the rows of the result set; for construct queries it iterates over the statements in the result set.
+3.2. Save the results to a file. This is only supported for SPARQL `construct` queries:
// Saving the results of a SPARQL construct query to a file.
+await results.toFile('my-file.nt')
+
+3.3. Load all results into memory as an array. Note that this is almost never used: if you want to process results, use option 3.1; if you want to persist results, option 3.2 suits better.
+// Loading results for a SPARQL construct or SPARQL select query into memory.
+const array = await results.toArray()
+
+
+ What can we help you with?
+TriplyDB is a state-of-the-art linked database / triple store that is used by organizations of any size: from start-ups to orgs with 10K+ employees.
+Learn more about how to use TriplyDB
+Use TriplyETL to quickly connect your data sources to your linked database / triple store. TriplyETL can extract, transform, enrich, validate, and load linked data.
+Learn more about how to use TriplyETL
+Didn't find what you were looking for? Contact us via our form or by e-mailing to info@triply.cc.