Copyright © 2010-2018 the Contributors to the JSON-LD 1.1 Processing Algorithms and API Specification, published by the JSON for Linking Data W3C Community Group under the W3C Community Final Specification Agreement (FSA). A human-readable summary is available.
This specification defines a set of algorithms for programmatic transformations of JSON-LD documents. Restructuring data according to the defined transformations often dramatically simplifies its usage. Furthermore, this document proposes an Application Programming Interface (API) for developers implementing the specified algorithms.
This specification was published by the JSON for Linking Data W3C Community Group. It is not a W3C Standard nor is it on the W3C Standards Track. Please note that under the W3C Community Final Specification Agreement (FSA) other conditions apply. Learn more about W3C Community and Business Groups.
This document has been developed by the JSON for Linking Data W3C Community Group as an update to the 1.0 recommendation [JSON-LD-API] developed by the RDF Working Group. The specification has undergone significant development, review, and changes during the course of several years.
There are several independent interoperable implementations of this specification, a test suite [JSON-LD-TESTS] and a live JSON-LD playground that is capable of demonstrating the features described in this document.
If you wish to make comments regarding this document, please send them to public-linked-json@w3.org (subscribe, archives).
This document is one of three JSON-LD 1.1 Recommendations produced by the JSON for Linking Data W3C Community Group:
This section is non-normative.
This document is a detailed specification of the JSON-LD processing algorithms. The document is primarily intended for the following audiences:
To understand the basics in this specification you must first be familiar with JSON, which is detailed in [RFC7159]. You must also understand the JSON-LD syntax defined in the JSON-LD 1.1 Syntax specification [JSON-LD11CG], which is the base syntax used by all of the algorithms in this document. To understand the API and how it is intended to operate in a programming environment, it is useful to have working knowledge of the JavaScript programming language [ECMASCRIPT-6.0] and WebIDL [WEBIDL]. To understand how JSON-LD maps to RDF, it is helpful to be familiar with the basic RDF concepts [RDF11-CONCEPTS].
There are a number of ways that one may participate in the development of this specification:
This document uses the following terms as defined in JSON [RFC7159]. Refer to the JSON Grammar section in [RFC7159] for formal definitions.
null: A key-value pair in the @context where the value, or the @id of the value, is null explicitly decouples a term's association with an IRI. A key-value pair in the body of a JSON-LD document whose value is null has the same meaning as if the key-value pair was not defined. If @value, @list, or @set is set to null in expanded form, then the entire JSON object is ignored.

Furthermore, the following terminology is used throughout this document:
blank node: A node in a graph that is neither an IRI, nor a JSON-LD value, nor a list; a blank node is assigned an identifier starting with the prefix _:.
blank node identifier: A string that can be used as an identifier for a blank node within the scope of a JSON-LD document; blank node identifiers begin with _:.
default language: The default language is set in the context using an @language key whose value MUST be a string representing a [BCP47] language code or null.
graph object: When expanded, a graph object MUST have an @graph member, and may also have @id and @index members. A simple graph object is a graph object which does not have an @id member. Note that node objects may have a @graph member, but are not considered graph objects if they include any other properties. A top-level object consisting of @graph is also not a graph object.
id map: A value of a term defined with @container set to @id, whose keys are interpreted as IRIs representing the @id of the associated node object; the value MUST be a node object. If the value contains a property expanding to @id, its value MUST be equivalent to the referencing key.
index map: A value of a term defined with @container set to @index, whose values MUST be any of the following types: string, number, true, false, null, node object, value object, list object, set object, or an array of zero or more of the above possibilities.
language map: A value of a term defined with @container set to @language, whose keys MUST be strings representing [BCP47] language codes and whose values MUST be any of the following types: null, string, or an array of zero or more of the above possibilities.
list object: A JSON object that has an @list member.
local context: A context that is specified within a JSON object using the @context keyword.
node object: A JSON object that does not contain the @value, @list, or @set keywords, or that is not the top-most JSON object in the JSON-LD document consisting of no members other than @graph and @context.
processing mode: Defines how a JSON-LD document is processed. By defining a different version using the @version member in a context, or via explicit API option, other processing modes can be accessed. This specification defines extensions for the json-ld-1.1 processing mode.
relative IRI: Properties and values of @type, and values of terms defined to be vocabulary relative, are resolved relative to the vocabulary mapping, not the base IRI.
set object: A JSON object that has an @set member.
type map: A value of a term defined with @container set to @type, whose keys are interpreted as IRIs representing the @type of the associated node object; the value MUST be a node object, or an array of node objects. If the value contains a property expanding to @type, its values are merged with the map value when expanding.
value object: A JSON object that has an @value member.
vocabulary mapping: Set in the context using the @vocab key whose value MUST be an absolute IRI or null.
The following terms are used within specific algorithms.
The following typographic conventions are used in this specification:
markup
markup definition reference
markup external definition reference
Notes are in light green boxes with a green left border and with a "Note" header in green. Notes are normative or informative depending on whether they are in a normative or informative section, respectively.
Examples are in light khaki boxes, with khaki left border, and with a
numbered "Example" header in khaki. Examples are always informative.
The content of the example is in monospace font and may be syntax colored.
Note that in the examples used in this document, output is of necessity shown in serialized form as JSON. While the algorithms describe operations on the JSON-LD internal representation, when they are displayed as examples, the JSON serialization is used. In particular, the internal representation's use of dictionaries is shown using JSON objects.
{ "@context": { "name": "http://xmlns.com/foaf/0.1/name", "knows": "http://xmlns.com/foaf/0.1/knows" }, "@id": "http://me.markus-lanthaler.com/", "name": "Markus Lanthaler", "knows": [ { "name": "Dave Longley" } ] }
In the internal representation, the example above would be of a
dictionary containing @context
, @id
, name
, and knows
keys,
with either dictionaries, strings, or arrays of dictionaries or strings as values. In the JSON serialization, JSON objects are used
for dictionaries, while arrays and strings are serialized using a
convention common to many programming languages.
This section is non-normative.
The JSON-LD 1.1 Syntax specification [JSON-LD11CG] defines a syntax to express Linked Data in JSON. Because there is more than one way to express Linked Data using this syntax, it is often useful to be able to transform JSON-LD documents so that they may be more easily consumed by specific applications.
To allow these algorithms to be adapted for syntaxes other than JSON, the algorithms operate on the JSON-LD internal representation, which uses the generic concepts of arrays, dictionaries, strings, numbers, booleans, and null to describe the data represented by a JSON document. Algorithms act on this internal representation with API entry points responsible for transforming between the concrete and internal representations.
JSON-LD uses contexts to allow Linked Data to be expressed in a way that is specifically tailored to a particular person or application. By providing a context, JSON data can be expressed in a way that is a natural fit for a particular person or application whilst also indicating how the data should be understood at a global scale. In order for people or applications to share data that was created using a context that is different from their own, a JSON-LD processor must be able to transform a document from one context to another. Instead of requiring JSON-LD processors to write specific code for every imaginable context switching scenario, it is much easier to specify a single algorithm that can remove any context. Similarly, another algorithm can be specified to subsequently apply any context. These two algorithms represent the most basic transformations of JSON-LD documents. They are referred to as expansion and compaction, respectively.
JSON-LD 1.1 introduces new features that are compatible with JSON-LD 1.0 [JSON-LD], but which, if processed by a JSON-LD 1.0 processor, may produce different results. In order to detect this, JSON-LD 1.1 requires that the processing mode be explicitly set to json-ld-1.1, either through the processingMode API option or using the @version member within a context.
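The following non-normative sketch illustrates the in-document mechanism; the term and value are borrowed from the earlier examples, and the comment notes the API alternative.

// A context that opts in to JSON-LD 1.1 processing via the @version member;
// its value is the number 1.1, not a string. The alternative is to set the
// processingMode API option (defined later in this specification) to
// "json-ld-1.1".
const context11 = {
  "@context": {
    "@version": 1.1,
    "name": "http://xmlns.com/foaf/0.1/name"
  },
  "name": "Markus Lanthaler"
};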
There are four major types of transformation that are discussed in this document: expansion, compaction, flattening, and RDF serialization/deserialization.
This section is non-normative.
The algorithm that removes context is called expansion. Before performing any other transformations on a JSON-LD document, it is easiest to remove any context from it and to make data structures more regular.
To get an idea of how context and data structuring affects the same data, here is an example of JSON-LD that uses only terms and is fairly compact:
{ "@context": { "name": "http://xmlns.com/foaf/0.1/name", "homepage": { "@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id" } }, "@id": "http://me.markus-lanthaler.com/", "name": "Markus Lanthaler", "homepage": "http://www.markus-lanthaler.com/" }
The next input example uses one IRI to express a property and an array to encapsulate another, but leaves the rest of the information untouched.
{ "@context": { "website": "http://xmlns.com/foaf/0.1/homepage" }, "@id": "http://me.markus-lanthaler.com/", "http://xmlns.com/foaf/0.1/name": "Markus Lanthaler", "website": { "@id": "http://www.markus-lanthaler.com/" } }
Note that both inputs are valid JSON-LD and both represent the same information. The difference is in their context information and in the data structures used. A JSON-LD processor can remove context and ensure that the data is more regular by employing expansion.
Expansion has two important goals: removing any contextual
information from the document, and ensuring all values are represented
in a regular form. These goals are accomplished by expanding all properties
to absolute IRIs and by expressing all
values in arrays in
expanded form. Expanded form is the most verbose
and regular way of expressing of values in JSON-LD; all contextual
information from the document is instead stored locally with each value.
Running the Expansion algorithm (expand operation) against the above examples results in the following output:
[ { "@id": "http://me.markus-lanthaler.com/", "http://xmlns.com/foaf/0.1/name": [ { "@value": "Markus Lanthaler" } ], "http://xmlns.com/foaf/0.1/homepage": [ { "@id": "http://www.markus-lanthaler.com/" } ] } ]
The example above is the JSON-LD serialization of the output of the expansion algorithm, where the algorithm's use of dictionaries is replaced with JSON objects.
Note that in the output above all context definitions have been removed, all terms and compact IRIs have been expanded to absolute IRIs, and all JSON-LD values are expressed in arrays in expanded form. While the output is more verbose and difficult for a human to read, it establishes a baseline that makes JSON-LD processing easier because of its very regular structure.
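For readers who want to reproduce this output programmatically, the following non-normative sketch invokes the expand operation through the jsonld.js implementation; the promise-based call shape is that library's, assumed here rather than mandated by this specification.

import * as jsonld from 'jsonld';

const compactDoc = {
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": { "@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id" }
  },
  "@id": "http://me.markus-lanthaler.com/",
  "name": "Markus Lanthaler",
  "homepage": "http://www.markus-lanthaler.com/"
};

async function showExpanded() {
  // Yields the expanded array shown above: absolute IRIs as keys and
  // every value represented in expanded form inside an array.
  const expanded = await jsonld.expand(compactDoc);
  console.log(JSON.stringify(expanded, null, 2));
}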
This section is non-normative.
While expansion removes context from a given input, compaction's primary function is to perform the opposite operation: to express a given input according to a particular context. Compaction applies a context that specifically tailors the way information is expressed for a particular person or application. This simplifies applications that consume JSON or JSON-LD by expressing the data in application-specific terms, and it makes the data easier to read by humans.
Compaction uses a developer-supplied context to shorten IRIs to terms or compact IRIs and JSON-LD values expressed in expanded form to simple values such as strings or numbers.
For example, assume the following expanded JSON-LD input document:
[ { "@id": "http://me.markus-lanthaler.com/", "http://xmlns.com/foaf/0.1/name": [ { "@value": "Markus Lanthaler" } ], "http://xmlns.com/foaf/0.1/homepage": [ { "@id": "http://www.markus-lanthaler.com/" } ] } ]
Additionally, assume the following developer-supplied JSON-LD context:
{ "@context": { "name": "http://xmlns.com/foaf/0.1/name", "homepage": { "@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id" } } }
Running the Compaction Algorithm (compact operation), given the context supplied above, against the JSON-LD input document provided above results in the following output:
{ "@context": { "name": "http://xmlns.com/foaf/0.1/name", "homepage": { "@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id" } }, "@id": "http://me.markus-lanthaler.com/", "name": "Markus Lanthaler", "homepage": "http://www.markus-lanthaler.com/" }
The example above is the JSON-LD serialization of the output of the compaction algorithm, where the algorithm's use of dictionaries is replaced with JSON objects.
Note that all IRIs have been compacted to
terms as specified in the context,
which has been injected into the output. While compacted output is
useful to humans, it is also used to generate structures that are easy to
program against. Compaction enables developers to map any expanded document
into an application-specific compacted document. While the context provided
above mapped http://xmlns.com/foaf/0.1/name
to name
, it
could also have been mapped to any other term provided by the developer.
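As a non-normative companion to this example, the sketch below applies the compact operation via jsonld.js, passing the expanded document and the developer-supplied context from above; the call shape is that library's and is an assumption here.

import * as jsonld from 'jsonld';

const expandedDoc = [{
  "@id": "http://me.markus-lanthaler.com/",
  "http://xmlns.com/foaf/0.1/name": [{ "@value": "Markus Lanthaler" }],
  "http://xmlns.com/foaf/0.1/homepage": [{ "@id": "http://www.markus-lanthaler.com/" }]
}];

const suppliedContext = {
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": { "@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id" }
  }
};

async function showCompacted() {
  // Produces the compacted document shown above, with the supplied
  // context injected into the output under @context.
  const compacted = await jsonld.compact(expandedDoc, suppliedContext);
  console.log(JSON.stringify(compacted, null, 2));
}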
This section is non-normative.
While expansion ensures that a document is in a uniform structure, flattening goes a step further to ensure that the shape of the data is deterministic. In expanded documents, the properties of a single node may be spread across a number of different dictionaries. By flattening a document, all properties of a node are collected in a single dictionary and all blank nodes are labeled with a blank node identifier. This may drastically simplify the code required to process JSON-LD data in certain applications.
For example, assume the following JSON-LD input document:
{ "@context": { "name": "http://xmlns.com/foaf/0.1/name", "knows": "http://xmlns.com/foaf/0.1/knows" }, "@id": "http://me.markus-lanthaler.com/", "name": "Markus Lanthaler", "knows": [ { "name": "Dave Longley" } ] }
Running the Flattening Algorithm (flatten operation) with a context set to null to prevent compaction returns the following document:
[ { "@id": "_:t0", "http://xmlns.com/foaf/0.1/name": [ { "@value": "Dave Longley" } ] }, { "@id": "http://me.markus-lanthaler.com/", "http://xmlns.com/foaf/0.1/name": [ { "@value": "Markus Lanthaler" } ], "http://xmlns.com/foaf/0.1/knows": [ { "@id": "_:t0" } ] } ]
The example above is the JSON-LD serialization of the output of the flattening algorithm, where the algorithm's use of dictionaries is replaced with JSON objects.
Note how in the output above all properties of a node are collected in a
single dictionary and how the blank node representing
"Dave Longley" has been assigned the blank node identifier
_:t0
.
To make it easier for humans to read or for certain applications to process it, a flattened document can be compacted by passing a context. Using the same context as the input document, the flattened and compacted document looks as follows:
{ "@context": { "name": "http://xmlns.com/foaf/0.1/name", "knows": "http://xmlns.com/foaf/0.1/knows" }, "@graph": [ { "@id": "_:t0", "name": "Dave Longley" }, { "@id": "http://me.markus-lanthaler.com/", "name": "Markus Lanthaler", "knows": { "@id": "_:t0" } } ] }
Please note that the result of flattening and compacting a document is always a dictionary (represented as a JSON object when serialized), which contains an @graph member that represents the default graph.
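The following non-normative sketch performs both steps with jsonld.js (the flatten signature is an assumption about that library): passing null as the context yields the expanded, flattened array, while passing the original context additionally compacts the result into a dictionary with an @graph member.

import * as jsonld from 'jsonld';

const inputDoc = {
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "knows": "http://xmlns.com/foaf/0.1/knows"
  },
  "@id": "http://me.markus-lanthaler.com/",
  "name": "Markus Lanthaler",
  "knows": [{ "name": "Dave Longley" }]
};

async function showFlattened() {
  // Flatten without compaction: every node gets its own dictionary and
  // blank nodes receive identifiers such as _:t0 (actual labels may differ).
  const flattened = await jsonld.flatten(inputDoc, null);
  // Flatten and compact with the same context as the input, producing a
  // dictionary whose @graph member holds the default graph.
  const flattenedCompacted =
    await jsonld.flatten(inputDoc, { "@context": inputDoc["@context"] });
  console.log(JSON.stringify(flattened, null, 2));
  console.log(JSON.stringify(flattenedCompacted, null, 2));
}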
This section is non-normative.
JSON-LD can be used to serialize RDF data as described in [RDF11-CONCEPTS]. This ensures that data can be round-tripped to and from any RDF syntax without any loss in fidelity.
For example, assume the following RDF input serialized in Turtle [TURTLE]:
<http://me.markus-lanthaler.com/> <http://xmlns.com/foaf/0.1/name> "Markus Lanthaler" . <http://me.markus-lanthaler.com/> <http://xmlns.com/foaf/0.1/homepage> <http://www.markus-lanthaler.com/> .
Using the Serialize RDF as JSON-LD algorithm a developer could transform this document into expanded JSON-LD:
[ { "@id": "http://me.markus-lanthaler.com/", "http://xmlns.com/foaf/0.1/name": [ { "@value": "Markus Lanthaler" } ], "http://xmlns.com/foaf/0.1/homepage": [ { "@id": "http://www.markus-lanthaler.com/" } ] } ]
The example above is the JSON-LD serialization of the output of the Serialize RDF as JSON-LD algorithm, where the algorithm's use of dictionaries is replaced with JSON objects.
Note that the output above could easily be compacted using the technique outlined in the previous section. It is also possible to deserialize the JSON-LD document back to RDF using the Deserialize JSON-LD to RDF algorithm.
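A non-normative sketch of the round trip using jsonld.js, whose toRDF and fromRDF entry points correspond to the Deserialize JSON-LD to RDF and Serialize RDF as JSON-LD algorithms; the N-Quads format identifier, the call shape, and the simplified typings are assumptions about that library.

import * as jsonld from 'jsonld';

const nodeDoc = {
  "@id": "http://me.markus-lanthaler.com/",
  "http://xmlns.com/foaf/0.1/name": "Markus Lanthaler",
  "http://xmlns.com/foaf/0.1/homepage": { "@id": "http://www.markus-lanthaler.com/" }
};

async function roundTrip() {
  // JSON-LD -> RDF, serialized as N-Quads (equivalent to the Turtle above).
  const nquads = (await jsonld.toRDF(nodeDoc, {
    format: 'application/n-quads'
  })) as unknown as string;
  // RDF -> expanded JSON-LD, recovering the document shown above.
  const backAgain = await jsonld.fromRDF(nquads, { format: 'application/n-quads' });
  console.log(nquads);
  console.log(JSON.stringify(backAgain, null, 2));
}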
As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.
The key words MAY, MUST, and MUST NOT are to be interpreted as described in [RFC2119].
There are two classes of products that can claim conformance to this specification: JSON-LD Processors, and RDF Serializers/Deserializers.
A conforming JSON-LD Processor is a system which can perform the Expansion, Compaction, and Flattening operations in a manner consistent with the algorithms defined in this specification.
JSON-LD Processors MUST NOT attempt to correct malformed IRIs or language tags; however, they MAY issue validation warnings. IRIs are not modified other than conversion between relative and absolute IRIs.
A conforming RDF Serializer/Deserializer is a system that can deserialize JSON-LD to RDF and serialize RDF as JSON-LD as defined in this specification.
The algorithms in this specification are generally written with more concern for clarity than efficiency. Thus, JSON-LD Processors may implement the algorithms given in this specification in any way desired, so long as the end result is indistinguishable from the result that would be obtained by the specification's algorithms.
In algorithm steps that describe operations on keywords, those steps also apply to keyword aliases.
Implementers can partially check their level of conformance to this specification by successfully passing the test cases of the JSON-LD test suite [JSON-LD-TESTS]. Note, however, that passing all the tests in the test suite does not imply complete conformance to this specification. It only implies that the implementation conforms to aspects tested by the test suite.
When processing a JSON-LD data structure, each processing rule is applied using information provided by the active context. This section describes how to produce an active context.
The active context contains the active term definitions which specify how properties and values have to be interpreted as well as the current base IRI, the vocabulary mapping and the default language. Each term definition consists of an IRI mapping, a boolean flag reverse property, an optional type mapping or language mapping, an optional context, an optional nest value, an optional prefix flag, and an optional container mapping. A term definition can not only be used to map a term to an IRI, but also to map a term to a keyword, in which case it is referred to as a keyword alias.
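The records below are a non-normative illustration of the state described in this paragraph; the field names are descriptive choices made for this sketch and do not appear in the specification, and real processors may organize this state differently.

interface TermDefinition {
  iriMapping: string | null;          // IRI or keyword the term maps to
  reverseProperty: boolean;           // the reverse property flag
  typeMapping?: string;               // e.g. "@id", "@vocab", or a datatype IRI
  languageMapping?: string | null;    // optional language mapping
  localContext?: unknown;             // optional (scoped) context
  nestValue?: string;                 // optional nest value
  prefixFlag?: boolean;               // optional prefix flag
  containerMapping?: string[];        // optional container mapping, e.g. ["@index", "@set"]
}

interface ActiveContext {
  termDefinitions: Map<string, TermDefinition | null>;
  baseIri?: string;                   // current base IRI
  vocabularyMapping?: string;         // vocabulary mapping
  defaultLanguage?: string;           // default language
}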
When processing, active context is initialized without any term definitions, vocabulary mapping, or default language. If a local context is encountered during processing, a new active context is created by cloning the existing active context. Then the information from the local context is merged into the new active context. Given that local contexts may contain references to remote contexts, this includes their retrieval.
This section is non-normative.
First we prepare a new active context result by cloning the current active context. Then we normalize the form of the original local context to an array. Local contexts may be in the form of a dictionary, a string, or an array containing a combination of the two. Finally we process each context contained in the local context array as follows.
Unless specified using the
processingMode
API option,
the processing mode is set using the @version
member
in a local context and
affects the behavior of algorithms including expansion and compaction.
If context is a string, it represents a reference to
a remote context. We dereference the remote context and replace context
with the value of the @context
key of the top-level object in the
retrieved JSON-LD document. If there's no such key, an
invalid remote context
has been detected. Otherwise, we process context by recursively using
this algorithm ensuring that there is no cyclical reference.
If context is a dictionary, we first update the
base IRI, the vocabulary mapping, processing mode, and the
default language by processing four specific keywords:
@base
, @vocab
, @version
, and @language
.
These are handled before any other keys in the local context because
they affect how the other keys are processed. Please note that @base
is
ignored when processing remote contexts.
Then, for every other key in local context, we update the term definition in result. Since term definitions in a local context may themselves contain terms or compact IRIs, we may need to recurse. When doing so, we must ensure that there is no cyclical dependency, which is an error. After we have processed any term definition dependencies, we update the current term definition, which may be a keyword alias.
Finally, we return result as the new active context.
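The skeleton below restates the steps just described as a non-normative TypeScript sketch; error handling, remote dereferencing, keyword processing, and term definition creation are all elided, and the types are simplified stand-ins rather than anything defined by this specification.

type LocalContextEntry = string | Record<string, unknown> | null;

interface SimpleActiveContext {
  termDefinitions: Map<string, unknown>;
  baseIri?: string;
  vocabularyMapping?: string;
  defaultLanguage?: string;
}

function processContext(active: SimpleActiveContext,
                        local: LocalContextEntry | LocalContextEntry[]): SimpleActiveContext {
  // 1) result is a clone of the current active context.
  let result: SimpleActiveContext = { ...active, termDefinitions: new Map(active.termDefinitions) };
  // 2) Normalize the local context to an array.
  const contexts = Array.isArray(local) ? local : [local];
  // 3) Process each context in order.
  for (const ctx of contexts) {
    if (ctx === null) {
      // A null context resets result to a newly initialized active context.
      result = { termDefinitions: new Map() };
    } else if (typeof ctx === 'string') {
      // A string is a reference to a remote context: dereference it, take the
      // value of its @context key, and recurse (omitted in this sketch).
    } else {
      // A dictionary: handle @base, @vocab, @version, and @language first,
      // then create a term definition for every remaining key (omitted).
    }
  }
  return result;
}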
This algorithm specifies how a new active context is updated with a local context. The algorithm takes three input variables: an active context, a local context, and an array remote contexts which is used to detect cyclical context inclusions. If remote contexts is not passed, it is initialized to an empty array.
null
, set result to a
newly-initialized active context and continue with the
next context.
In JSON-LD 1.0, the base IRI was given
a default value here; this is now described conditionally
in section 9. The Application Programming Interface.recursive context inclusion
error has been detected and processing is aborted;
otherwise, add context to remote contexts.loading remote context failed
error has been detected and processing is aborted. If the dereferenced document has no
top-level dictionary with an @context
member, an
invalid remote context
has been detected and processing is aborted; otherwise,
set context to the value of that member.invalid local context
error has been detected and processing is aborted.@base
key and remote contexts is empty, i.e., the context currently being processed is not a remote context:
@base
key.null
, remove the
base IRI of result.null
,
set the base IRI of result to the result of
resolving value against the current base IRI
of result.invalid base IRI
error has been detected and processing is aborted.@version
key:
1.1
,
an invalid @version value
has been detected, and processing is aborted.json-ld-1.0
,
a processing mode conflict
error has been detected and processing is aborted.json-ld-1.1
, if not already set.@vocab
key:
@vocab
key.""
),
the effective value is the current base IRI.invalid vocab mapping
error has been detected and processing is aborted.@language
key:
@language
key.null
, remove
any default language from result.invalid default language
error has been detected and processing is aborted.@base
, @vocab
, or
@language
, invoke the
Create Term Definition algorithm,
passing result for active context,
context for local context, key,
and defined.

This algorithm is called from the Context Processing algorithm to create a term definition in the active context for a term being processed in a local context.
This section is non-normative.
Term definitions are created by parsing the information in the given local context for the given term. If the given term is a compact IRI, it may omit an IRI mapping by depending on its prefix having its own term definition. If the prefix is a key in the local context, then its term definition must first be created, through recursion, before continuing. Because a term definition can depend on other term definitions, a mechanism must be used to detect cyclical dependencies. The solution employed here uses a map, defined, that keeps track of whether or not a term has been defined or is currently in the process of being defined. This map is checked before any recursion is attempted.
After all dependencies for a term have been defined, the rest of the information in the local context for the given term is taken into account, creating the appropriate IRI mapping, container mapping, and type mapping or language mapping for the term.
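A small, non-normative example of the dependency handling described above: creating the term definition for the compact IRI "foaf:name" requires that the prefix "foaf" be defined first, so the algorithm recurses through the defined map; a context whose terms depend on each other cyclically is an error.

// Defining "foaf:name" below forces the term definition for "foaf" to be
// created first (via recursion guarded by the defined map), after which the
// compact IRI expands to http://xmlns.com/foaf/0.1/name.
const localContextExample = {
  "@context": {
    "foaf": "http://xmlns.com/foaf/0.1/",
    "foaf:name": { "@container": "@set" }
  }
};

// By contrast, a context such as {"a": "b:x", "b": "a:y"} makes each term
// depend on the other; the defined map detects this and a
// cyclic IRI mapping error is raised.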
The algorithm has four required inputs which are: an active context, a local context, a term, and a map defined.
true
(indicating that the
term definition has already been created), return. Otherwise,
if the value is false
, a
cyclic IRI mapping
error has been detected and processing is aborted.false
. This indicates that the term definition
is now being created but is not yet complete.keyword redefinition
error has been detected and processing is aborted.null
or value
is a dictionary containing the key-value pair
@id
-null
, set the
term definition in active context to
null
, set the value associated with defined's
key term to true
, and return.@id
and whose value is value.
Set simple term to true
.invalid term definition
error has been detected and processing is aborted.
Set simple term to false
.@type
:
@type
key, which must be a string. Otherwise, an
invalid type mapping
error has been detected and processing is aborted.true
for vocab,
local context, and defined. If the expanded type is
neither @id
, nor @vocab
, nor an absolute IRI, an
invalid type mapping
error has been detected and processing is aborted.@reverse
:
@id
or @nest
members, an
invalid reverse property
error has been detected and processing is aborted.@reverse
key
is not a string, an
invalid IRI mapping
error has been detected and processing is aborted.@reverse
key for value, true
for vocab,
local context, and defined. If the result
is neither an absolute IRI nor a blank node identifier,
i.e., it contains no colon (:
), an
invalid IRI mapping
error has been detected and processing is aborted.@container
member,
set the container mapping of definition
to its value; if its value is neither @set
, nor
@index
, nor null
, an
invalid reverse property
error has been detected (reverse properties only support set- and
index-containers) and processing is aborted.true
.true
and return.false
.@id
and its value
does not equal term:
@id
key is not a string, an
invalid IRI mapping
error has been detected and processing is aborted.@id
key for
value, true
for vocab,
local context, and defined. If the resulting
IRI mapping is neither a keyword, nor an
absolute IRI, nor a blank node identifier, an
invalid IRI mapping
error has been detected and processing is aborted; if it equals @context
, an
invalid keyword alias
error has been detected and processing is aborted.:
),
simple term is true
, and the
IRI mapping of definition ends with a URI
gen-delim character,
set the prefix flag in definition to true
.:
):
invalid IRI mapping
error has been detected and processing is aborted.@container
:
@container
key, which must be either
@graph
,
@id
,
@index
,
@language
,
@list
,
@set
, or
@type
.
or an array containing exactly one of those
keywords, an array containing @graph
and
either @id
or @index
optionally
including @set
, or an array containing a
combination of @set
and any of
@index
, @id
, @type
,
@language
in any order
.
Otherwise, an
invalid container mapping
has been detected and processing is aborted.processingMode
is json-ld-1.0
and the container value
is @graph
, @id
, or @type
, or is otherwise not a string, an
invalid container mapping
has been detected and processing is aborted.@context
:
processingMode
is json-ld-1.0
, an
invalid term definition
has been detected and processing is aborted.@context
key, which is treated as a local context.invalid scoped context
error
has been detected and processing is aborted.@language
and
does not contain the key @type
:
@language
key, which must be either null
or a string. Otherwise, an
invalid language mapping
error has been detected and processing is aborted.@nest
:
processingMode
is json-ld-1.0
, an
invalid term definition
has been detected and processing is aborted.@nest
key, which must be a string and
must not be a keyword other than @nest
. Otherwise, an
invalid @nest value
error has been detected and processing is aborted.@prefix
:
processingMode
is json-ld-1.0
, or if
term contains a colon (:
), an
invalid term definition
has been detected and processing is aborted.@prefix
key, which must be a boolean. Otherwise, an
invalid @prefix value
error has been detected and processing is aborted.@id
,
@reverse
, @container
,
@context
, @nest
,
@prefix
, or @type
, an
invalid term definition
error has
been detected and processing is aborted.true
.

In JSON-LD documents, some keys and values may represent IRIs. This section defines an algorithm for transforming a string that represents an IRI into an absolute IRI or blank node identifier. It also covers transforming keyword aliases into keywords.
IRI expansion may occur during context processing or during any of the other JSON-LD algorithms. If IRI expansion occurs during context processing, then the local context and its related defined map from the Context Processing algorithm are passed to this algorithm. This allows for term definition dependencies to be processed via the Create Term Definition algorithm.
This section is non-normative.
In order to expand value to an absolute IRI, we must
first determine if it is null
, a term, a
keyword alias, or some form of IRI. Based on what
we find, we handle the specific kind of expansion; for example, we expand
a keyword alias to a keyword and a term
to an absolute IRI according to its IRI mapping
in the active context. While inspecting value we
may also find that we need to create term definition
dependencies because we're running this algorithm during context processing.
We can tell whether or not we're running during context processing by
checking local context against null
.
We know we need to create a term definition in the
active context when value is
a key in the local context and the defined map
does not have a key for value with an associated value of
true
. The defined map is used during
Context Processing to keep track of
which terms have already been defined or are
in the process of being defined. We create a
term definition by using the
Create Term Definition algorithm.
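The sketch below is a non-normative illustration of the cases distinguished above, using a small hand-written context; the expansions on the right are what the algorithm returns for each input string when vocab is true.

// Active context used for the illustration (terms, a prefix, and a keyword alias).
const illustrativeContext = {
  "name": "http://xmlns.com/foaf/0.1/name",   // term with an IRI mapping
  "foaf": "http://xmlns.com/foaf/0.1/",       // term usable as a compact IRI prefix
  "id": "@id"                                  // keyword alias
};

// Input string -> result of IRI expansion (non-normative).
const expansions: Record<string, string> = {
  "name":        "http://xmlns.com/foaf/0.1/name",   // term -> its IRI mapping
  "foaf:knows":  "http://xmlns.com/foaf/0.1/knows",  // compact IRI -> prefix IRI + suffix
  "id":          "@id",                              // keyword alias -> keyword
  "_:b0":        "_:b0"                              // blank node identifier, returned as is
};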
The algorithm takes two required and four optional input variables. The
required inputs are an active context and a value
to be expanded. The optional inputs are two flags,
document relative and vocab, that specify
whether value can be interpreted as a relative IRI
against the document's base IRI or the
active context's
vocabulary mapping, respectively, and
a local context and a map defined to be used when
this algorithm is used during Context Processing.
If not passed, the two flags are set to false
and
local context and defined are initialized to null
.
null
,
return value as is.null
, it contains
a key that equals value, and the value associated with the key
that equals value in defined is not true
,
invoke the Create Term Definition algorithm,
passing active context, local context,
value as term, and defined. This will ensure that
a term definition is created for value in
active context during Context Processing.
true
and the
active context has a term definition for
value, return the associated IRI mapping.:
), it is either
an absolute IRI, a compact IRI, or a
blank node identifier:
:
)._
)
or suffix begins with double-forward-slash
(//
), return value as it is already an
absolute IRI or a blank node identifier.null
, it
contains a key that equals prefix, and the value
associated with the key that equals prefix in defined
is not true
, invoke the
Create Term Definition algorithm,
passing active context,
local context, prefix as term,
and defined. This will ensure that a
term definition is created for prefix
in active context during
Context Processing.true
, and
active context has a vocabulary mapping,
return the result of concatenating the vocabulary mapping
with value.true
set value to the result of resolving value against
the base IRI. Only the basic algorithm in
section 5.2
of [RFC3986] is used; neither
Syntax-Based Normalization nor
Scheme-Based Normalization
are performed. Characters additionally allowed in IRI references are treated
in the same way that unreserved characters are treated in URI references, per
section 6.5
of [RFC3987].

This algorithm expands a JSON-LD document, such that all context definitions are removed, all terms and compact IRIs are expanded to absolute IRIs, blank node identifiers, or keywords, and all JSON-LD values are expressed in arrays in expanded form.
This section is non-normative.
Starting with its root element, we can process the JSON-LD document recursively, until we have a fully expanded result. When expanding an element, we can treat each one differently according to its type, in order to break down the problem:
null
, there is nothing
to expand.

Finally, after ensuring result is in an array, we return result.
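The recursion just outlined can be pictured with the following non-normative skeleton; the real algorithm also threads the active context, active property, and frame expansion flag through every call, drops null items, and flattens nested arrays, none of which is shown here.

type JsonLdElement =
  | null | boolean | number | string
  | JsonLdElement[]
  | { [key: string]: JsonLdElement };

function expandElement(element: JsonLdElement): JsonLdElement {
  if (element === null) {
    return null;                          // nothing to expand
  }
  if (Array.isArray(element)) {
    return element.map(expandElement);    // expand each item (flattening elided)
  }
  if (typeof element === 'object') {
    // A dictionary: process @context first, then expand each key and value,
    // handling keywords, nested node objects, and value objects (elided).
    return element;
  }
  // A scalar: expanded using the Value Expansion algorithm together with
  // the active property's term definition (elided).
  return element;
}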
The algorithm takes three required and one optional input variables.
The required inputs are an active context,
an active property, and an element to be expanded.
The optional input is the frame expansion flag, which allows
special forms of input used for frame expansion.
To begin, the active property is set to null
,
and element is set to the JSON-LD input.
If not passed, the frame expansion flag is set to false
.
The algorithm also performs processing steps specific to expanding
a JSON-LD Frame. For a frame, the @id
and
@type
properties can accept an array of IRIs or
an empty dictionary. The properties of a value object can also
accept an array of strings, or an empty dictionary.
Framing also uses additional keyword properties:
(@explicit, @default, @embed, @omitDefault, or @requireAll), which are preserved through expansion.
Special processing for a JSON-LD Frame is invoked when the
frame expansion flag is set to true
.
null
, return null
.@default
,
set the frame expansion flag to false
.null
or @graph
,
drop the free-floating scalar by returning null
.@list
or its
container mapping includes @list
, the
expanded item must not be an array or a
list object, otherwise a
list of lists
error has been detected and processing is aborted.@context
, set
active context to the result of the
Context Processing algorithm,
passing active context and the value of the
@context
key as local context.@type
using the
IRI Expansion algorithm,
passing active context, key for
value, and true
for vocab:
@context
, continue to
the next key.true
for vocab.null
or it neither
contains a colon (:
) nor it is a keyword,
drop key by continuing to the next key.@reverse
, an
invalid reverse property map
error has been detected and processing is aborted.colliding keywords
error has been detected and processing is aborted.@id
and
value is not a string, an
invalid @id value
error has been detected and processing is aborted. Otherwise,
set expanded value to the result of using the
IRI Expansion algorithm,
passing active context, value, and true
for document relative.
When the frame expansion flag is set, value
may be an empty dictionary, or an array of one
or more strings. expanded value will be
an array of one or more of these, with string
values expanded using the IRI Expansion Algorithm.@type
and value
is neither a string nor an array of
strings, an
invalid type value
error has been detected and processing is aborted. Otherwise,
set expanded value to the result of using the
IRI Expansion algorithm, passing
active context, true
for vocab,
and true
for document relative to expand the value
or each of its items.
When the frame expansion flag is set, value
may also be an empty dictionary.@graph
, set
expanded value to the result of using this algorithm
recursively passing active context, @graph
for active property, value for element,
and the frame expansion flag,
ensuring that expanded value is an array of one or more dictionaries.@value
and
value is not a scalar or null
, an
invalid value object value
error has been detected and processing is aborted. Otherwise,
set expanded value to value. If expanded value
is null
, set the @value
member of result to null
and continue with the
next key from element. Null values need to be preserved
in this case as the meaning of an @type
member depends
on the existence of an @value
member.
When the frame expansion flag is set, value
may also be an empty dictionary or an array of
scalar values. expanded value will be null, or an
array of one or more scalar values.@language
and
value is not a string, an
invalid language-tagged string
error has been detected and processing is aborted.
Otherwise, set expanded value to lowercased value.
When the frame expansion flag is set, value
may also be an empty dictionary or an array of zero or more
strings. expanded value will be an
array of one or more string values converted to lower case.@index
and
value is not a string, an
invalid @index value
error has been detected and processing is aborted. Otherwise,
set expanded value to value.@list
:
null
or
@graph
, continue with the next key
from element to remove the free-floating list.list of lists
error has been detected and processing is aborted.@set
, set
expanded value to the result of using this algorithm
recursively, passing active context,
active property, value for element,
and the frame expansion flag.@reverse
and
value is not a dictionary, an
invalid @reverse value
error has been detected and processing is aborted. Otherwise, set expanded value to the result of using this algorithm recursively, passing active context, @reverse as active property, value as element, and the frame expansion flag.
member,
i.e., properties that are reversed twice, execute for each of its
property and item the following steps:
@reverse
:
@reverse
member, create
one and set its value to an empty dictionary.@reverse
member in result
using the variable reverse map.@reverse
:
invalid reverse property value
has been detected and processing is aborted.@nest
,
add key to nests, initializing it to an empty array,
if necessary.
Continue with the next key from element.@explicit
, @default
,
@embed
, @explicit
, @omitDefault
, or
@requireAll
),
set expanded value to the result of performing the
Expansion Algorithm
recursively, passing active context,
active property, value for element,
and the frame expansion flag.null
, set
the expanded property member of result to
expanded value.@language
and
value is a dictionary then value
is expanded from a language map
as follows:
null
,
otherwise an
invalid language map value
error has been detected and processing is aborted.@value
-item)
and (@language
-lowercased
language),
unless item is null
.
If language is @none
,
or expands to @none
, do not set the @language
member.
@index
,
@type
, or @id
and
value is a dictionary then value
is expanded from a map as follows:
@type
,
and index's term definition in
term context has a local context, set
map context to the result of the Context Processing
algorithm, passing term context as active context and the
value of the index's local context as
local context. Otherwise, set map context
to term context.true
for vocab.@graph
and if item is not a
graph object, set item to a new
dictionary containing the key-value pair
@graph
-item, ensuring that the
value is represented using an array.@index
and item does not have the key
@index
and expanded index is not @none
,
add the key-value pair
(@index
-index) to item.@id
and item does not have the key
@id
, add the key-value pair
(@id
-expanded index) to
item, where expanded index is set to the result of
using the
IRI Expansion algorithm,
passing active context, index, and true
for document relative, unless expanded index
is already set to @none
.@type
set types to the concatenation of
expanded index with any existing values of
@type
in item.
If expanded index is @none
,
do not concatenate expanded index to types.
Add the key-value pair
(@type
-types) to
item.null
, ignore key
by continuing to the next key from element.@list
and
expanded value is not already a list object,
convert expanded value to a list object
by first setting it to an array containing only
expanded value if it is not already an array,
and then by setting it to a dictionary containing
the key-value pair @list
-expanded value.@graph
, convert expanded value into an array, if necessary,
then convert each value ev in expanded value into a
graph object:
@graph
-ev
where ev is represented as an array.@reverse
member, create
one and initialize its value to an empty dictionary.@reverse
member in result
using the variable reverse map.invalid reverse property value
has been detected and processing is aborted.@value
, an
invalid @nest value
error
has been detected and processing is aborted.@value
:
@value
, @language
, @type
,
and @index
. It must not contain both the
@language
key and the @type
key.
Otherwise, an
invalid value object
error has been detected and processing is aborted.@value
key is
null
, then set result to null
.@value
member
is not a string and result contains the key
@language
, an
invalid language-tagged value
error has been detected (only strings
can be language-tagged) and processing is aborted.@type
member
and its value is not an IRI, an
invalid typed value
error has been detected and processing is aborted.@type
and its associated value is not an array, set it to
an array containing only the associated value.@set
or @list
:
@index
. Otherwise, an
invalid set or list object
error has been detected and processing is aborted.@set
, then
set result to the key's associated value.@language
, set result to null
.null
or @graph
,
drop free-floating values as follows:
@value
or @list
, set result to
null
.@id
, set result to null
.
When the frame expansion flag is set, a dictionary
containing only the @id
key is retained.If, after the above algorithm is run, the result is a
dictionary that contains only an @graph
key, set the
result to the value of the @graph key. Otherwise, if the result
is null
, set it to an empty array. Finally, if
the result is not an array, then set the result to an
array containing only the result.
Some values in JSON-LD can be expressed in a compact form. These values are required to be expanded at times when processing JSON-LD documents. A value is said to be in expanded form after the application of this algorithm.
This section is non-normative.
If active property has a type mapping in the
active context set to @id
or @vocab
,
and the value is a string,
a dictionary with a single member @id
whose
value is the result of using the
IRI Expansion algorithm on value
is returned.
Otherwise, the result will be a dictionary containing
an @value
member whose value is the passed value.
Additionally, an @type
member will be included if there is a
type mapping associated with the active property
or an @language
member if value is a
string and there is language mapping associated
with the active property.
Note that values interpreted as IRIs fall into two categories:
those that are document relative, and those that are
vocabulary relative. Properties and values of @type
,
along with terms marked as "@type": "@vocab"
are vocabulary relative, meaning that they need to be either
a defined term, a compact IRI
where the prefix is a term,
or a string which is turned into an absolute IRI using
the vocabulary mapping.
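The values below are non-normative examples of what this algorithm returns for a scalar input, depending on the mappings carried by the active property's term definition; the sample strings and datatype IRI are illustrative only.

const expandedValues = [
  // Active property has a type mapping of @id (or @vocab): the string is
  // interpreted as an IRI.
  { "@id": "http://www.markus-lanthaler.com/" },

  // Active property has a datatype type mapping: an @type member is added.
  { "@value": "2018-01-01", "@type": "http://www.w3.org/2001/XMLSchema#date" },

  // Active property has a language mapping of "en".
  { "@value": "chat", "@language": "en" },

  // No type or language mapping (and no default language): a bare value object.
  { "@value": 42 }
];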
The algorithm takes three required inputs: an active context, an active property, and a value to expand.
@id
,
and the value is a string,
return a new
dictionary containing a single key-value pair where the
key is @id
and the value is the result of using the
IRI Expansion algorithm, passing
active context, value, and true
for
document relative.@vocab
,
and the value is a string,
return a new
dictionary containing a single key-value pair where the
key is @id
and the value is the result of using the
IRI Expansion algorithm, passing
active context, value, true
for
vocab, and true
for
document relative.@value
member whose value is set to
value.@id
or @vocab
,
add an @type
member to
result and set its value to the value associated with the
type mapping.@language
to result and set its
value to the language code associated with the
language mapping; unless the
language mapping is set to null
in
which case no member is added.@language
to result and set its value to the
default language.This algorithm compacts a JSON-LD document, such that the given context is applied. This must result in shortening any applicable IRIs to terms or compact IRIs, any applicable keywords to keyword aliases, and any applicable JSON-LD values expressed in expanded form to simple values such as strings or numbers.
This section is non-normative.
Starting with its root element, we can process the JSON-LD document recursively, until we have a fully compacted result. When compacting an element, we can treat each one differently according to its type, in order to break down the problem:
@index
or @language
maps.The final output is a dictionary with an @context
key, if a non-empty context was given, where the dictionary
is either result or a wrapper for it where result appears
as the value of an (aliased) @graph
key because result
contained two or more items in an array.
The algorithm takes five required input variables: an active context,
an inverse context, an active property, an
element to be compacted, and a flag
compactArrays.
To begin, the active context is set to the result of
performing Context Processing
on the passed context, the inverse context is
set to the result of performing the
Inverse Context Creation algorithm
on active context, the active property is
set to null
, element is set to the result of
performing the Expansion algorithm
on the JSON-LD input, and, if not passed,
compactArrays
is set to true
.
null
, then append
it to result.1
),
active property is not @graph
or @set
,
or the container mapping for active property in
active context does not include @list
or @set
,
and compactArrays
is true
, set result to its only item.@value
or @id
member and the result of using the
Value Compaction algorithm,
passing active context, inverse context,
active property, and element as value is
a scalar, return that result.true
if
active property equals @reverse
,
otherwise to false
.@type
member,
create a new array compacted types initialized
by transforming each expanded type of that member
into its compacted form using the IRI Compaction algorithm,
passing active context, inverse context,
expanded type for var, and
true
for vocab. Then, for each term
in compacted types ordered lexicographically:
@id
or
@type
:
true
for vocab if
expanded property is @type
,
false
otherwise.@type
array:
true
for vocab.1
), then
set compacted value to its only item.true
for vocab.@reverse
:
@reverse
for
active property, and expanded value
for element.@set
or
compactArrays
is false
, and value is not an
array, set value to a new
array containing only value.@reverse
for var,
and true
for vocab.@preserve
then:
@preserve
in result unless expanded value is an empty array.@index
and
active property has a container mapping
in active context that includes @index
,
then the compacted result will be inside of an @index
container, drop the @index
property by continuing
to the next expanded property.@index
,
@value
, or @language
:
true
for vocab.true
for vocab, and
inside reverse.@nest
, or a term in the
active context that expands to @nest
,
otherwise an invalid @nest
value error has been detected, and processing is aborted.
If result does not have the key that equals nest
term, initialize it to an empty JSON object (nest
object). If nest object does not have the key
that equals item active property, set this key's
value in nest object to an empty
array. Otherwise, if the key's value is not an
array, then set it to one containing only the
value.true
for vocab, and
inside reverse.@nest
, or a term in the
active context that expands to @nest
,
otherwise an invalid @nest
value error has been detected, and processing is aborted.
Set nest result to the value of nest term in result,
initializing it to a new dictionary, if necessary; otherwise
set nest result to result.null
. If there
is a container mapping for
item active property in active context,
set container to the first
such value other than @set
.true
or false
depending on if the container mapping for
item active property in active context
includes @set
or if item active property
is @graph
or @list
.@list
and is not a graph object containing @list
,
otherwise pass the key's associated value for element.@list
:
@list
for var, and compacted item
for value and the value is the original compacted item.@index
, then add a key-value pair
to compacted item where the key is the
result of the IRI Compaction algorithm,
passing active context, inverse context,
@index
as var, and the value associated with the
@index
key in expanded item as value.compaction to list of lists
error has been detected and processing is aborted.@graph
and @id
:
@id
in expanded item
or @none
if no such value exists as var, with vocab set to true
if there is no @id
member in expanded item.true
,
set compacted item to an array containing that value.@graph
and @index
and expanded item is a simple graph object:
@index
in
expanded item or @none
, if no such
value exists.true
,
set compacted item to an array containing that value.@graph
and expanded item is a simple graph
object, the value cannot be represented as a map
object. If compacted item is not an array
and as array is true
, set
compacted item to an array containing
that value. If the value associated with the key that
equals item active property in
nest result is not an array,
set it to a new array containing only the value.
Then append compacted item to the value if
compacted item is not an array,
otherwise, concatenate it.
@graph
or otherwise does not match one of the previous cases, redo compacted item.
@graph
as
var, and true
for
vocab using the original
compacted item as a value.@id
,
add the key resulting from calling the IRI Compaction algorithm
passing active context, @id
as
var, and true
for
vocab using the value resulting from calling the IRI Compaction algorithm
passing active context, the value of @id
in expanded item as
var.@index
,
add the key resulting from calling the IRI Compaction algorithm
passing active context, @index
as
var, and true
for
vocab using the value of @index
in expanded item.true
,
set compacted item to an array
containing that value.@language
,
@index
, @id
,
or @type
and container does not include @graph
:
@language
, @index
, @id
, or @type
based on the contents of container, as var, and true
for vocab.@language
and
expanded item contains the key
@value
, then set compacted item
to the value associated with its @value
key.
Set map key to the value of @language
in expanded item, if any.@index
set map key to the value of @index
in expanded item, if any,
and remove container key from compacted item.@id
, set
map key to the value of container key in
compacted item and remove container key from compacted item.@type
,
set map key to the first value of container key in compacted item, if any.
If there are remaining values in compacted item
for compacted container, set the value of
compacted container in compacted value
to those remaining values. Otherwise, remove that
key-value pair from compacted item.true
,
set compacted item to an array containing that value.null
, set it to the result of calling the
IRI Compaction algorithm
passing active context, @none
as
var, and true
for
vocab.compactArrays
is false
, as array is true
and
compacted item is not an array,
set it to a new array
containing only compacted item.If, after the algorithm outlined above is run, result
is an empty array, replace it with a new dictionary.
Otherwise, if result is an array, replace it with a new
dictionary with a single member whose key is the result
of using the IRI Compaction algorithm,
passing active context, inverse context, and
@graph
as var and whose value is the array
result.
Finally, if a non-empty context has been passed,
add an @context
member to result and set its value
to the passed context.
When there is more than one term that could be chosen to compact an IRI, it has to be ensured that the term selection is both deterministic and represents the most context-appropriate choice whilst taking into consideration algorithmic complexity.
In order to make term selections, the concept of an inverse context is introduced. An inverse context is essentially a reverse lookup table that maps container mappings, type mappings, and language mappings to a simple term for a given active context. An inverse context only needs to be generated for an active context if it is being used for compaction.
To make use of an inverse context, a list of preferred container mappings and the type mapping or language mapping are gathered for a particular value associated with an IRI. These parameters are then fed to the Term Selection algorithm, which will find the term that most appropriately matches the value's mappings.
This section is non-normative.
To create an inverse context for a given
active context, each term in the
active context is visited, ordered by length, shortest
first (ties are broken by choosing the lexicographically least
term). For each term, an entry is added to
the inverse context for each possible combination of
container mapping and type mapping
or language mapping that would legally match the
term. Illegal matches include differences between a
value's type mapping or language mapping and
that of the term. If a term has no
container mapping, type mapping, or
language mapping (or some combination of these), then it
will have an entry in the inverse context using the special
key @none
. This allows the
Term Selection algorithm to fall back
to choosing more generic terms when a more
specifically-matching term is not available for a particular
IRI and value combination.
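As a non-normative illustration of the structure this algorithm builds, consider an active context containing the single term definition { "label": { "@id": "http://example.com/label", "@language": "en" } }; the IRI and term are invented for the example, and the resulting inverse context is sketched below.

// inverse context: IRI -> container key -> { "@language", "@type", "@any" } -> term
const inverseContextSketch = {
  "http://example.com/label": {
    "@none": {                             // container key: term has no container mapping
      "@language": { "en": "label" },      // selected when a value's language is "en"
      "@type": {},                         // no type-based entries for this term
      "@any": { "@none": "label" }         // generic fallback entry
    }
  }
};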
The algorithm takes one required input: the active context that the inverse context is being created for.
@none
. If the
active context has a default language,
set default language to it.null
,
term cannot be selected during compaction,
so continue to the next term.@none
.
If the container mapping is not empty, set container
to the concatenation of all values of the container mapping
in lexicographical order
.@language
and its value is a new empty
dictionary, the second member is @type
and its value is a new empty dictionary,
and the third member is @any
and its value is a new dictionary with the member
@none
set to the term being processed.@type
member in type/language map using the variable
type map.@reverse
member, create one and set its value to the term
being processed.@type
member in type/language map using the variable
type map.null
):
@language
member in type/language map using the variable
language map.null
,
set language to @null
; otherwise set it
to the language code in language mapping.@language
member in type/language map using the variable
language map.@none
member, create one and set its value to the term
being processed.@type
member in type/language map using the variable
type map.@none
member, create one and set its value to the term
being processed.This algorithm compacts an IRI to a term or compact IRI, or a keyword to a keyword alias. A value that is associated with the IRI may be passed in order to assist in selecting the most context-appropriate term.
This section is non-normative.
If the passed IRI is null
, we simply
return null
. Otherwise, we first try to find a term
that the IRI or keyword can be compacted to if
it is relative to active context's
vocabulary mapping. In order to select the most appropriate
term, we may have to collect information about the passed
value. This information includes which
container mapping
would be preferred for expressing the value, and what its
type mapping or language mapping is. For
JSON-LD lists, the type mapping
or language mapping will be chosen based on the most
specific values that work for all items in the list. Once this
information is gathered, it is passed to the
Term Selection algorithm, which will
return the most appropriate term to use.
If no term was found that could be used to compact the
IRI, an attempt is made to compact the IRI using the
active context's vocabulary mapping,
if there is one. If the IRI could not be compacted, an
attempt is made to find a compact IRI.
A term will be used to create a compact IRI
only if the term definition contains the prefix flag
with the value true
.
If there is no appropriate compact IRI,
and the compactToRelative
option is true
,
the IRI is
transformed to a relative IRI using the document's
base IRI. Finally, if the IRI or
keyword still could not be compacted, it is returned
as is.
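As an illustration of the compact IRI fallback just described, the following sketch scans the term definitions of a simplified active context (with illustrative iri and prefix members) and keeps the shortest, lexicographically least candidate; it is not a full implementation of this algorithm.
function createCompactIri(activeContext, iri) {
  let compactIri = null;
  for (const [term, definition] of Object.entries(activeContext.terms)) {
    // only terms flagged as prefixes whose IRI mapping is a proper prefix of iri qualify
    if (!definition || !definition.prefix || definition.iri === iri || !iri.startsWith(definition.iri)) continue;
    const candidate = term + ':' + iri.slice(definition.iri.length);
    if (compactIri === null ||
        candidate.length < compactIri.length ||
        (candidate.length === compactIri.length && candidate < compactIri)) {
      // a full processor additionally checks that candidate is not itself a defined term
      compactIri = candidate;
    }
  }
  return compactIri;                      // null if no appropriate compact IRI exists
}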
This algorithm takes three required inputs and three optional inputs.
The required inputs are an active context, an inverse context,
and the var to be compacted. The optional inputs are a value associated
with the var, a vocab flag which specifies whether the
passed var should be compacted using the
active context's
vocabulary mapping, and a reverse flag which specifies whether
a reverse property is being compacted. If not passed, value is set to
null
and vocab and reverse are both set to
false
.
null
, return null
.true
and var is a
key in inverse context:
@none
.@preserve
, use the first
element from the value of @preserve
as value.@language
,
and type/language value to @null
. These two
variables will keep track of the preferred
type mapping or language mapping for
a term, based on what is compatible with value.@index
,
and value is not a graph object
then append the values @index
and @index@set
to containers.true
, set type/language
to @type
, type/language value to
@reverse
, and append @set
to containers.@index
is not a key in value, then
append @list
to containers.@list
in value.null
. If
list is empty, set common language to
default language.@none
and
item type to @none
.@value
:
@language
,
then set item language to its associated
value.@type
, set item type to its
associated value.@null
.@id
.null
, set it
to item language.@value
, then set common language
to @none
because list items have conflicting
languages.null
, set it
to item type.@none
because list items have conflicting
types.@none
and
common type is @none
, then
stop processing items in the list because it has been
detected that there is no common language or type amongst
the items.null
, set it to
@none
.null
, set it to
@none
.@none
then set
type/language to @type
and
type/language value to common type.@index
,
append the values @graph@index
and @graph@index@set
to containers.@id
,
append the values @graph@id
and @graph@id@set
to containers.@graph
@graph@set
,
and @set
to containers.@index
,
append the values @graph@index
and @graph@index@set
to containers.@id
,
append the values @graph@id
and @graph@id@set
to containers.@index
and @index@set
to containers.@language
and does not contain the key @index
,
then set type/language value to its associated
value and, append @language
and @language@set
to
containers.@type
, then set type/language value to
its associated value and set type/language to
@type
.@type
and set type/language value to @id
,
and append @id
, @id@set
,
@type
, and @set@type
,
to containers.@set
to containers.@none
to containers. This represents
the non-existence of a container mapping, and it will
be the last container mapping value to be checked as it
is the most generic.json-ld-1.1
and value does not contain the key @index
, append
@index
and @index@set
to containers.
json-ld-1.1
and value contains only the key @value
, append
@language
and @language@set
to containers.
null
, set it to
@null
. This is the key under which null
values
are stored in the inverse context entry.@reverse
, append
@reverse
to preferred values.@id
or @reverse
and value has an @id
member:
@id
key in value for
var, and true
for vocab has a
term definition in the active context
with an IRI mapping that equals the value associated
with the @id
key in value,
then append @vocab
, @id
, and
@none
, in that order, to preferred values.@id
, @vocab
, and
@none
, in that order, to preferred values.@none
, in
that order, to preferred values.
If value is an empty list object,
set type/language to @any
.null
, return term.true
and
active context has a vocabulary mapping:
null
. This variable will be used to
store the created compact IRI, if any.
,
its IRI mapping equals var, its
IRI mapping is not a substring at the beginning of
var,
or the term definition does not contain
the prefix flag having a value of true
,
the term cannot be used as a prefix.
Continue with the next term.:
), and the substring of var
that follows after the value of the
term definition's
IRI mapping.null
, candidate is
shorter or the same length but lexicographically less than
compact IRI and candidate does not have a
term definition in active context, or if the
term definition has an IRI mapping
that equals var and value is null
,
set compact IRI to candidate.null
, return compact IRI.false
,
transform var to a relative IRI using
the base IRI from active context, if it exists.
This algorithm, invoked via the IRI Compaction algorithm, makes use of an active context's inverse context to find the term that is best used to compact an IRI. Other information about a value associated with the IRI is given, including which container mapping and which type mapping or language mapping would be best used to express the value.
This section is non-normative.
The inverse context's entry for the IRI will first be searched according to the preferred container mappings, in the order that they are given. Amongst terms with a matching container mapping, preference will be given to those with a matching type mapping or language mapping, over those without a type mapping or language mapping. If there is no term with a matching container mapping, then the term without a container mapping that matches the given type mapping or language mapping is selected. If there is still no selected term, then a term with no type mapping or language mapping will be selected if available. No term will be selected that has a conflicting type mapping or language mapping. Ties between terms that have the same mappings are resolved by first choosing the shortest terms, and then by choosing the lexicographically least term. Note that these ties are resolved automatically because they were previously resolved when the Inverse Context Creation algorithm was used to create the inverse context.
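For illustration, the selection just described can be sketched as follows in JavaScript, assuming an inverse context shaped like the examples shown later in this section (entries keyed by IRI, then container, then @language, @type, or @any); the names are illustrative only.
function selectTerm(inverseContext, iri, containers, typeOrLanguage, preferredValues) {
  const containerMap = inverseContext[iri];          // assumes iri is a key in the inverse context,
                                                     // as guaranteed by the IRI Compaction algorithm
  for (const container of containers) {              // preferred container mappings, in order
    const typeLanguageMap = containerMap[container];
    if (typeLanguageMap === undefined) continue;     // no term with this container mapping
    const valueMap = typeLanguageMap[typeOrLanguage]; // '@language', '@type', or '@any'
    for (const item of preferredValues) {            // preferred type/language values, in order
      if (item in valueMap) return valueMap[item];
    }
  }
  return null;                                       // no term matches
}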
This algorithm has five required inputs. They are: an inverse context, a keyword or IRI var, an array containers that represents an ordered list of preferred container mapping, a string type/language that indicates whether to look for a term with a matching type mapping or language mapping, and an array representing an ordered list of preferred values for the type mapping or language mapping to look for.
null
.
This section is non-normative.
The following examples are intended to illustrate how the term selection algorithm behaves for different term definitions and values. It is not comprehensive, but intended to illustrate different parts of the algorithm.
If the term definition has "@container": "@language"
, it will only match a
value object having no @type
.
{
"@context": {"t": {"@id": "http://example/t", "@container": "@language"}}
}
The inverse context will contain the following:
{
"@language": {
"@language": {"@none": "t"},
"@type": {"@none": "t"},
"@any": {"@none": "t"}
}
}
If the term definition has a datatype, it will only match a value object having a matching datatype.
{
"@context": {"t": {"@id": "http://example/t", "@type": "http:/example/type"}}
}
The inverse context will contain the following:
{
"@none": {
"@language": {},
"@type": {"http:/example/type": "t"},
"@any": {"@none": "t"}
}
}
Expansion transforms all values into expanded form in JSON-LD. This algorithm performs the opposite operation, transforming a value into compacted form. This algorithm compacts a value according to the term definition in the given active context that is associated with the value's associated active property.
This section is non-normative.
The value to compact has either an @id
or an
@value
member.
For the former case, if the type mapping of
active property is set to @id
or @vocab
and value consists of only an @id
member and, if
the container mapping of active property
includes @index
, an @index
member, value
can be compacted to a string by returning the result of
using the IRI Compaction algorithm
to compact the value associated with the @id
member.
Otherwise, value cannot be compacted and is returned as is.
For the latter case, it might be possible to compact value
just into the value associated with the @value
member.
This can be done if the active property has a matching
type mapping or language mapping and there
is either no @index
member or the container mapping
of active property includes @index
. It can
also be done if @value
is the only member in value
(apart from an @index
member in case the container mapping
of active property includes @index
) and
either its associated value is not a string, there is
no default language, or there is an explicit
null
language mapping for the
active property.
This algorithm has four required inputs: an active context, an inverse context, an active property, and a value to be compacted.
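A minimal JavaScript sketch of these rules is shown below. It assumes a simplified active context whose term definitions expose containerMapping, typeMapping, and languageMapping, plus an iriCompaction() helper standing in for the IRI Compaction algorithm; all names are illustrative and not part of the API defined in this document.
function compactValue(activeContext, activeProperty, value) {
  const definition = activeContext.terms[activeProperty] || {};
  const containers = definition.containerMapping || [];
  let numberMembers = Object.keys(value).length;
  if ('@index' in value && containers.includes('@index')) numberMembers -= 1;
  if (numberMembers > 2) return value;                       // cannot be compacted
  if ('@id' in value) {
    if (numberMembers !== 1) return value;
    if (definition.typeMapping === '@id') return iriCompaction(activeContext, value['@id']);
    if (definition.typeMapping === '@vocab') return iriCompaction(activeContext, value['@id'], true);
    return value;
  }
  if ('@type' in value && value['@type'] === definition.typeMapping) return value['@value'];
  if ('@language' in value && value['@language'] === definition.languageMapping) return value['@value'];
  if (numberMembers === 1 &&
      (typeof value['@value'] !== 'string' ||
       !activeContext.defaultLanguage ||
       definition.languageMapping === null)) {
    return value['@value'];
  }
  return value;                                              // keep the value object as is
}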
@index
member and the
container mapping associated to active property
includes @index
, decrease number members by
1
.2
, return
value as it cannot be compacted.@id
member:
1
and
the type mapping of active property
is set to @id
, return the result of using the
IRI compaction algorithm,
passing active context, inverse context,
and the value of the @id
member for var.1
and
the type mapping of active property
is set to @vocab
, return the result of using the
IRI compaction algorithm,
passing active context, inverse context,
the value of the @id
member for var, and
true
for vocab.@type
member whose
value matches the type mapping of active property,
return the value associated with the @value
member
of value.@language
member whose
value matches the language mapping of
active property, return the value associated with the
@value
member of value.1
and either
the value of the @value
member is not a string,
or the active context has no default language,
or the language mapping of active property
is set to null
, return the value associated with the
@value
member.
This algorithm flattens an expanded JSON-LD document by collecting all properties of a node in a single dictionary and labeling all blank nodes with blank node identifiers. The resulting uniform shape of the document may drastically simplify the code required to process JSON-LD data in certain applications.
This section is non-normative.
First, a node map is generated using the Node Map Generation algorithm which collects all properties of a node in a single dictionary. In the next step, the node map is converted to a JSON-LD document in flattened document form. Finally, if a context has been passed, the flattened document is compacted using the Compaction algorithm before being returned.
The algorithm takes two input variables, an element to flatten and
an optional context used to compact the flattened document. If not
passed, context is set to null
.
This algorithm generates new blank node identifiers
and relabels existing blank node identifiers.
The Generate Blank Node Identifier algorithm
keeps an identifier map and a counter to ensure consistent
relabeling and avoid collisions. Thus, before this algorithm is run,
the identifier map is reset and the counter is initialized
to 0
.
@default
and whose value is
an empty dictionary.@default
member of node map, which is a dictionary representing
the default graph.@default
, perform the following steps:
@id
member whose value is set to graph name.@graph
member to entry and set it to an
empty array.@graph
member of entry,
unless the only member of node is @id
.@id
.null
, return flattened.@graph
keyword (or its alias)
at the top-level other than @context
, even if the context is empty or if there is only one element to
put in the @graph
array. This ensures that the returned
document has a deterministic structure.
This algorithm creates a dictionary node map holding an indexed
representation of the graphs and nodes
represented in the passed expanded document. All nodes that are not
uniquely identified by an IRI get assigned a (new) blank node identifier.
The resulting node map will have a member for every graph in the document whose
value is another object with a member for every node represented in the document.
The default graph is stored under the @default
member, all other graphs are
stored under their graph name.
This section is non-normative.
The algorithm recursively runs over an expanded JSON-LD document to
collect all properties of a node
in a single dictionary. The algorithm constructs a
dictionary node map whose keys represent the
graph names used in the document
(the default graph is stored under the key @default
)
and whose associated values are dictionaries
which index the nodes in the
graph. If a
property's value is a node object,
it is replaced by a node object consisting of only an
@id
member. If a node object has no @id
member or it is identified by a blank node identifier,
a new blank node identifier is generated. This relabeling
of blank node identifiers is
also done for properties and values of
@type
.
The algorithm takes as input an expanded JSON-LD document element and a reference to
a dictionary node map. Furthermore it has the optional parameters
active graph (which defaults to @default
), an active subject,
active property, and a reference to a dictionary list. If
not passed, active subject, active property, and list are
set to null
.
null
, set node to null
otherwise reference the active subject member of graph using the
variable node.@type
member, perform for each
item the following steps:
@value
member, perform the following steps:
null
:
@list
member of list.@list
member, perform
the following steps:
@list
whose value is initialized to an empty array.@list
member for element, active graph,
active subject, active property, and
result for list.@id
member, set id
to its value and remove the member from element. If id
is a blank node identifier, replace it with a newly
generated blank node identifier
passing id for identifier.null
for identifier.@id
whose
value is id.null
, perform the following steps:
@id
whose value is id.null
:
@list
member of list.@type
key, append
each item of its associated array to the
array associated with the @type
key of
node unless it is already in that array. Finally
remove the @type
member from element.@index
member, set the @index
member of node to its value. If node has already an
@index
member with a different value, a
conflicting indexes
error has been detected and processing is aborted. Otherwise, continue by
removing the @index
member from element.@reverse
member:
@id
whose
value is id.@reverse
member of
element.@reverse
member from element.@graph
member, recursively invoke this
algorithm passing the value of the @graph
member for element,
node map, and id for active graph before removing
the @graph
member from element.
This algorithm is used to generate new blank node identifiers or to relabel an existing blank node identifier to avoid collision by the introduction of new ones.
This section is non-normative.
The simplest case is if there exists already a blank node identifier
in the identifier map for the passed identifier, in which
case it is simply returned. Otherwise, a new blank node identifier
is generated by concatenating the string _:b
and the
counter. If the passed identifier is not null
,
an entry is created in the identifier map associating the
identifier with the blank node identifier. Finally,
the counter is increased by one and the new
blank node identifier is returned.
The algorithm takes a single input variable identifier which may
be null
. Between its executions, the algorithm needs to
keep an identifier map to relabel existing
blank node identifiers
consistently and a counter to generate new
blank node identifiers. The
counter is initialized to 0
by default.
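For illustration, a direct JavaScript transcription of these steps might look like the following; the identifier map and counter persist between calls, and the names are illustrative only.
const identifierMap = new Map();
let counter = 0;

function generateBlankNodeIdentifier(identifier = null) {
  if (identifier !== null && identifierMap.has(identifier)) {
    return identifierMap.get(identifier);          // already relabeled: return the mapped identifier
  }
  const blankNodeId = '_:b' + counter;             // new identifier built from the counter
  counter += 1;
  if (identifier !== null) {
    identifierMap.set(identifier, blankNodeId);    // remember the relabeling for consistency
  }
  return blankNodeId;
}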
null
and has an entry in the
identifier map, return the mapped identifier._:b
and counter.1
.null
, create a new entry
for identifier in identifier map and set its value
to the new blank node identifier.
This algorithm creates a new map of subjects to nodes using all graphs contained in the graph map created using the Node Map Generation algorithm to create merged node objects containing information defined for a given subject in each graph contained in the node map.
@id
whose value is id, if it does not exist.
This section describes algorithms to deserialize a JSON-LD document to an RDF dataset and vice versa. The algorithms are designed for in-memory implementations with random access to dictionary elements.
Throughout this section, the following vocabulary prefixes are used in compact IRIs:
Prefix | IRI
--- | ---
rdf | http://www.w3.org/1999/02/22-rdf-syntax-ns#
rdfs | http://www.w3.org/2000/01/rdf-schema#
xsd | http://www.w3.org/2001/XMLSchema#
This algorithm deserializes a JSON-LD document to an RDF dataset. Please note that RDF does not allow a blank node to be used as a property, while JSON-LD does. Therefore, by default RDF triples that would have contained blank nodes as properties are discarded when interpreting JSON-LD as RDF.
This section is non-normative.
The JSON-LD document is expanded and converted to a node map using the
Node Map Generation algorithm.
This allows each graph represented within the document to be
extracted and flattened, making it easier to process each
node object. Each graph from the node map
is processed to extract RDF triples,
to which any (non-default) graph name is applied to create an
RDF dataset. Each node object in the
node map has an @id
member which corresponds to the
RDF subject, the other members
represent RDF predicates. Each
member value is either an IRI or
blank node identifier or can be transformed to an
RDF literal
to generate an RDF triple. Lists
are transformed into an
RDF collection
using the List to RDF Conversion algorithm.
The algorithm takes a JSON-LD document element and returns an
RDF dataset. Unless the produceGeneralizedRdf
option
is set to true
, RDF triples
containing a blank node predicate
are excluded from output.
This algorithm generates new blank node identifiers
and relabels existing blank node identifiers.
The Generate Blank Node Identifier algorithm
keeps an identifier map and a counter to ensure consistent
relabeling and avoid collisions. Thus, before this algorithm is run,
the identifier map is reset and the counter is initialized
to 0
.
@type
, then for each
type in values, append a triple
composed of subject, rdf:type
,
and type to triples.produceGeneralizedRdf
option is not true
,
continue with the next property-values pair.@list
key from
item and list triples. Append first a
triple composed of subject,
property, and list head to triples and
finally append all triples from
list triples to triples.null
, indicating a relative IRI that has
to be ignored.@default
, add
triples to the default graph in dataset.
This algorithm takes a node object or value object
and transforms it into an
RDF resource
to be used as the object of an RDF triple. If a
node object containing a relative IRI is passed to
the algorithm, null
is returned which then causes the resulting
RDF triple to be ignored.
This section is non-normative.
Value objects are transformed to
RDF literals as described in
section 8.6 Data Round Tripping
whereas node objects are transformed
to IRIs,
blank node identifiers,
or null
.
The algorithm takes as its sole argument item which MUST be either a value object or node object.
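The value object case can be sketched in JavaScript as follows, using the canonical lexical forms from section 8.6 Data Round Tripping; the { value, datatype, language } literal representation and the function name are illustrative and not mandated by this document.
const XSD = 'http://www.w3.org/2001/XMLSchema#';
const RDF_LANGSTRING = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#langString';

function valueObjectToLiteral(item) {
  let value = item['@value'];
  let datatype = item['@type'] || null;
  if (value === true || value === false) {
    value = value.toString();                                       // canonical "true" / "false"
    if (datatype === null) datatype = XSD + 'boolean';
  } else if (typeof value === 'number' && (value % 1 !== 0 || datatype === XSD + 'double')) {
    value = value.toExponential(15).replace(/(\d)0*e\+?/, '$1E');   // canonical xsd:double form
    if (datatype === null) datatype = XSD + 'double';
  } else if (typeof value === 'number') {
    value = value.toFixed(0);                                       // canonical xsd:integer form
    if (datatype === null) datatype = XSD + 'integer';
  } else if (datatype === null) {
    datatype = '@language' in item ? RDF_LANGSTRING : XSD + 'string';
  }
  return { value, datatype, language: item['@language'] };
}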
@id
member is a relative IRI, return
null
.@id
member.@value
member in item.
@type
member of item or null
if
item does not have such a member.true
or
false
, set value to the string
true
or false
which is the
canonical lexical form as described in
section 8.6 Data Round Tripping
If datatype is null
, set it to
xsd:boolean
.xsd:double
, convert value to a
string in canonical lexical form of
an xsd:double
as defined in [XMLSCHEMA11-2]
and described in
section 8.6 Data Round Tripping.
If datatype is null
, set it to
xsd:double
.xsd:integer
, convert value to a
string in canonical lexical form of
an xsd:integer
as defined in [XMLSCHEMA11-2]
and described in
section 8.6 Data Round Tripping.
If datatype is null
, set it to
xsd:integer
.null
, set it to
xsd:string
or rdf:langString
, depending on if
item has an @language
member.@language
member, add the value associated with the
@language
key as the language tag of literal.
List Conversion is the process of taking a list object and transforming it into an RDF collection as defined in RDF Semantics [RDF11-MT].
This section is non-normative.
For each element of the list a new blank node identifier
is allocated which is used to generate rdf:first
and
rdf:rest
triples. The
algorithm returns the list head, which is either the first allocated
blank node identifier or rdf:nil
if the
list is empty. If a list element represents a relative IRI,
the corresponding rdf:first
triple is omitted.
The algorithm takes two inputs: an array list and an empty array list triples used for returning the generated triples.
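For illustration, these steps can be sketched in JavaScript as follows, reusing generateBlankNodeIdentifier() from the Generate Blank Node Identifier algorithm and assuming an objectToRdf() helper corresponding to the Object to RDF Conversion algorithm; the triple representation is illustrative only.
const RDF = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#';

function listToRdf(list, listTriples) {
  const bnodes = list.map(() => generateBlankNodeIdentifier());  // one blank node per list item
  for (let i = 0; i < list.length; i++) {
    const subject = bnodes[i];
    const object = objectToRdf(list[i]);
    if (object !== null) {                                       // relative IRIs yield null and are skipped
      listTriples.push({ subject, predicate: RDF + 'first', object });
    }
    const rest = (i + 1 < bnodes.length) ? bnodes[i + 1] : RDF + 'nil';
    listTriples.push({ subject, predicate: RDF + 'rest', object: rest });
  }
  return bnodes.length > 0 ? bnodes[0] : RDF + 'nil';            // the list head
}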
rdf:nil
.null
, append a triple
composed of subject, rdf:first
, and object.rdf:nil
. Append a
triple composed of subject,
rdf:rest
, and rest to list triples.rdf:nil
if bnodes is empty.
This algorithm serializes an RDF dataset consisting of a default graph and zero or more named graphs into a JSON-LD document.
In the RDF abstract syntax, RDF literals have a lexical form, as defined in [RDF11-CONCEPTS]. The form of these literals is used when creating JSON-LD values based on these literals.
This section is non-normative.
Iterate through each graph in the dataset, converting each
RDF collection into a list
and generating a JSON-LD document in expanded form for all
RDF literals, IRIs
and blank node identifiers.
If the use native types flag is set to true
,
RDF literals with a
datatype IRI
that equals xsd:integer
or xsd:double
are converted
to JSON numbers and RDF literals
with a datatype IRI
that equals xsd:boolean
are converted to true
or
false
based on their
lexical form
as described in
section 8.6 Data Round Tripping.
Unless the use rdf:type
flag is set to true, rdf:type
predicates will be serialized as @type
as long as the associated object is
either an IRI or blank node identifier.
The algorithm takes one required and two optional inputs: an RDF dataset dataset
and the two flags use native types and use rdf:type
that both default to false
.
@default
whose value references
default graph.@default
, otherwise to the
graph name associated with graph.@id
whose value is name.@id
whose value is
set to subject.@id
whose value is
set to object.rdf:type
, the
use rdf:type
flag is not true
, and object
is an IRI or blank node identifier,
append object to the value of the @type
member of node; unless such an item already exists.
If no such member exists, create one
and initialize it to an array whose only item is
object. Finally, continue to the next
RDF triple.@id
member of node
to
the object member of node usage map.usages
member, create one and initialize it to
an empty array.usages
member of the object
member of node map using the variable usages.node
, property
, and value
to the usages array. The node
member
is set to a reference to node, property
to predicate,
and value
to a reference to value.rdf:nil
member, continue
with the next name-graph object pair as the graph does
not contain any lists that need to be converted.rdf:nil
member
of graph object.usages
member of
nil, perform the following steps:
node
member of usage, property to
the value of the property
member of usage,
and head to the value of the value
member
of usage.rdf:rest
,
the value of the @id
member
of node is a blank node identifier,
the array value of the member of node usage map associated with the @id
member of node
has only one member,
the value associated to the usages
member of node has
exactly 1 entry,
node has a rdf:first
and rdf:rest
property,
both of which have as value an array consisting of a single element,
and node has no other members apart from an optional @type
member whose value is an array with a single item equal to
rdf:List
,
node represents a well-formed list node.
Perform the following steps to traverse the list backwards towards its head:
rdf:first
member of
node to the list array.@id
member of
node to the list nodes array.usages
member of node.node
member
of node usage, property to the value of the
property
member of node usage, and
head to the value of the value
member
of node usage.@id
member of node is an
IRI instead of a blank node identifier,
exit the while loop.rdf:first
, i.e., the
detected list is nested inside another list
@id
of node equals
rdf:nil
, i.e., the detected list is empty,
continue with the next usage item. The
rdf:nil
node cannot be converted to a
list object as it would result in a list of
lists, which isn't supported.@id
member of head.rdf:rest
member of head.@id
member from head.@list
member to head and initialize
its value to the list array.@graph
member to node and initialize
its value to an empty array.@graph
member of node after
removing its usages
member, unless the only
remaining member of n is @id
.usages
member, unless the only remaining member of
node is @id
.
This algorithm transforms an RDF literal to a JSON-LD value object and an RDF blank node or IRI to a JSON-LD node object.
This section is non-normative.
RDF literals are transformed to
value objects whereas IRIs and
blank node identifiers are
transformed to node objects.
If the use native types flag is set to true
,
RDF literals with a
datatype IRI
that equals xsd:integer
or xsd:double
are converted
to JSON numbers and RDF literals
with a datatype IRI
that equals xsd:boolean
are converted to true
or
false
based on their
lexical form
as described in
section 8.6 Data Round Tripping.
This algorithm takes two required inputs: a value to be converted to a dictionary and a flag use native types.
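For illustration, the literal case might be sketched in JavaScript as follows, again using an illustrative { value, datatype, language } literal representation; it is a simplification, not a full implementation of this algorithm.
const XSD = 'http://www.w3.org/2001/XMLSchema#';

function literalToValueObject(literal, useNativeTypes) {
  const result = {};
  let convertedValue = literal.value;                 // the lexical form by default
  let type = null;
  if (useNativeTypes) {
    if (literal.datatype === XSD + 'boolean') {
      if (literal.value === 'true') convertedValue = true;
      else if (literal.value === 'false') convertedValue = false;
      else type = XSD + 'boolean';                    // non-canonical lexical form: keep the string
    } else if (literal.datatype === XSD + 'integer' || literal.datatype === XSD + 'double') {
      // a full processor first validates the lexical form against [XMLSCHEMA11-2]
      convertedValue = Number(literal.value);
    }
  } else if (literal.language) {
    result['@language'] = literal.language;           // language-tagged string
  } else if (literal.datatype !== XSD + 'string') {
    type = literal.datatype;                          // keep any other datatype; xsd:string is ignored
  }
  result['@value'] = convertedValue;
  if (type !== null) result['@type'] = type;
  return result;
}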
@id
whose value is set to
value.null
true
xsd:string
, set
converted value to the
lexical form
of value.xsd:boolean
, set
converted value to true
if the
lexical form
of value matches true
, or false
if it matches false
. If it matches neither,
set type to xsd:boolean
.xsd:integer
or
xsd:double
and its
lexical form
is a valid xsd:integer
or xsd:double
according [XMLSCHEMA11-2], set converted value
to the result of converting the
lexical form
to a JSON number.@language
to result and set its value to the
language tag of value.xsd:string
which is ignored.@value
to result whose value
is set to converted value.null
, add a member @type
to result whose value is set to type.
When deserializing JSON-LD to RDF,
JSON-native numbers are automatically
type-coerced to xsd:integer
or xsd:double
depending on whether the number has a non-zero fractional part
or not (the result of a modulo‑1 operation), the boolean values
true
and false
are coerced to xsd:boolean
,
and strings are coerced to xsd:string
.
The numeric or boolean values themselves are converted to
canonical lexical form, i.e., a deterministic string
representation as defined in [XMLSCHEMA11-2].
The canonical lexical form of an integer, i.e., a
number with no non-zero fractional part or a number
coerced to xsd:integer
, is a finite-length sequence of decimal
digits (0-9
) with an optional leading minus sign; leading
zeros are prohibited. In JavaScript, implementers can use the following
snippet of code to convert an integer to
canonical lexical form:
(value).toFixed(0).toString()
The canonical lexical form of a double, i.e., a
number with a non-zero fractional part or a number
coerced to xsd:double
, consists of a mantissa followed by the
character E
, followed by an exponent. The mantissa is a
decimal number and the exponent is an integer. Leading zeros and a
preceding plus sign (+
) are prohibited in the exponent.
If the exponent is zero, it is indicated by E0
. For the
mantissa, the preceding optional plus sign is prohibited and the
decimal point is required. Leading and trailing zeros are prohibited
subject to the following: number representations must be normalized
such that there is a single digit which is non-zero to the left of
the decimal point and at least a single digit to the right of the
decimal point unless the value being represented is zero. The
canonical representation for zero is 0.0E0
.
xsd:double
's value space is defined by the IEEE
double-precision 64-bit floating point type [IEEE-754-2008] whereas
the value space of JSON numbers is not
specified; when deserializing JSON-LD to RDF the mantissa is rounded to
15 digits after the decimal point. In JavaScript, implementers
can use the following snippet of code to convert a double to
canonical lexical form:
(value).toExponential(15).replace(/(\d)0*e\+?/,'$1E')
The canonical lexical form of the boolean
values true
and false
are the strings
true
and false
.
When JSON-native numbers are deserialized
to RDF, lossless data round-tripping cannot be guaranteed, as rounding
errors might occur. When
serializing RDF as JSON-LD,
similar rounding errors might occur. Furthermore, the datatype or the lexical
representation might be lost. An xsd:double
with a value
of 2.0
will, e.g., result in an xsd:integer
with a value of 2
in canonical lexical form
when converted from RDF to JSON-LD and back to RDF. It is important
to highlight that in practice it might be impossible to losslessly
convert an xsd:integer
to a number because
its value space is not limited. While the JSON specification [RFC7159]
does not limit the value space of numbers
either, concrete implementations typically do have a limited value
space.
To ensure lossless round-tripping the
Serialize RDF as JSON-LD algorithm
specifies a use native types flag which controls whether
RDF literals
with a datatype IRI
equal to xsd:integer
, xsd:double
, or
xsd:boolean
are converted to their JSON-native
counterparts. If the use native types flag is set to
false
, all literals remain in their original string
representation.
Some JSON serializers, such as PHP's native implementation in some versions,
backslash-escape the forward slash character. For example, the value
http://example.com/
would be serialized as http:\/\/example.com\/
.
This is problematic as other JSON parsers might not understand those escaping characters.
There is no need to backslash-escape forward slashes in JSON-LD. To aid
interoperability between JSON-LD processors, forward slashes MUST NOT be
backslash-escaped.
This API provides a clean mechanism that enables developers to convert JSON-LD data into a variety of output formats that are often easier to work with.
The JSON-LD API uses Promises to represent the result of the various asynchronous operations. Promises are defined in [ECMASCRIPT-6.0]. General use within specifications can be found in [promises-guide].
JsonLdProcessor
Interface §The JsonLdProcessor
interface is the high-level programming structure
that developers use to access the JSON-LD transformation methods.
It is important to highlight that implementations do not modify the input parameters.
If an error is detected, the Promise is
rejected passing a JsonLdError
with the corresponding error
code
and processing is stopped.
If the documentLoader
option is specified, it is used to dereference remote documents and contexts.
The documentUrl
in the returned RemoteDocument
is used as base IRI and the
contextUrl
is used instead of looking at the HTTP Link Header directly. For the sake of simplicity, none of the algorithms
in this document mention this directly.
[Constructor]
interface JsonLdProcessor {
  static Promise<JsonLdDictionary> compact(JsonLdInput input, JsonLdContext context, optional JsonLdOptions? options);
  static Promise<sequence<JsonLdDictionary>> expand(JsonLdInput input, optional JsonLdOptions? options);
  static Promise<JsonLdDictionary> flatten(JsonLdInput input, optional JsonLdContext? context, optional JsonLdOptions? options);
};
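The following example illustrates how the interface might be used from JavaScript; the document and context shown are examples only, and any conforming implementation of JsonLdProcessor is assumed to be in scope.
const doc = { "http://schema.org/name": "Jane Doe" };
const context = { "name": "http://schema.org/name" };

// compact() returns a Promise that resolves to the compacted document
JsonLdProcessor.compact(doc, context, { compactArrays: true })
  .then(compacted => console.log(JSON.stringify(compacted, null, 2)))
  .catch(error => console.error(error.code, error.message));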
compact
Compacts the given input using the context according to the steps in the Compaction algorithm:
expand
method using input and options.
@context
member, set
context to that member's value, otherwise to context.base
option from
options, if set;
otherwise, if the
compactToRelative
option is
true, to the IRI of the currently being processed
document, if available; otherwise to null
.null
as property,
expanded input as element, and if passed, the
compactArrays
flag in options.input
;
it can be specified by using a dictionary, an
IRI, or an array consisting of
dictionaries and IRIs.expand
Expands the given input according to the steps in the Expansion algorithm:
application/json
,
nor application/ld+json
, nor any other media type using a
+json
suffix as defined in [RFC6839], reject the promise passing a
loading document failed
error.null
. If set, the
base
option from options overrides the base IRI.expandContext
option
has been passed, update the active context using the
Context Processing algorithm, passing the
expandContext
as local context. If
expandContext
is a dictionary having an @context
member, pass that member's value instead.http://www.w3.org/ns/json-ld#context
link relation
and a content type of application/json
or any media type
with a +json
suffix as defined in [RFC6839] except
application/ld+json
, update the active context using the
Context Processing algorithm, passing the
context referenced in the HTTP Link Header as local context. The
HTTP Link Header is ignored for documents served as application/ld+json
. If
multiple HTTP Link Headers using the http://www.w3.org/ns/json-ld#context
link relation are found, the promise is rejected with a JsonLdError
whose code is set to
multiple context link headers
and processing is terminated.loading document failed
error.frameExpansion
option is set, pass the frame expansion flag as true
.
flatten
Flattens the given input and compacts it using the passed context according to the steps in the Flattening algorithm:
expand
method using input and options.
@context
member, set
context to that member's value, otherwise to context.base
option from
options, if set;
otherwise, if the
compactToRelative
option is
true, to the IRI of the currently being processed
document, if available; otherwise to null
.0
)
to be used by the
Generate Blank Node Identifier algorithm.compactArrays
flag in options
(which is internally passed to the
Compaction algorithm).null
is passed, the result will not be compacted
but kept in expanded form.dictionary JsonLdDictionary
{
};
The JsonLdDictionary
is the definition of a dictionary
used to contain arbitrary key/value pairs which are the result of
parsing a JSON Object.
typedef (JsonLdDictionary or sequence<JsonLdDictionary> or USVString) JsonLdInput;
The JsonLdInput
type is used to refer to an input value that
may be a dictionary, an array of dictionaries, or a string representing an
IRI which can be dereferenced to retrieve a valid JSON document.
typedef (JsonLdDictionary or USVString or sequence<(JsonLdDictionary or USVString)>) JsonLdContext;
The JsonLdContext
type is used to refer to a value that
may be a dictionary, a string representing an
IRI, or an array of dictionaries
and strings.
The JsonLdOptions
type is used to pass various options to the
JsonLdProcessor
methods.
dictionary JsonLdOptions {
  USVString? base;
  boolean compactArrays = true;
  boolean compactToRelative = true;
  LoadDocumentCallback documentLoader = null;
  (JsonLdDictionary? or USVString) expandContext = null;
  boolean frameExpansion = false;
  USVString processingMode = null;
  boolean produceGeneralizedRdf = true;
};
base
compactArrays
true
, the JSON-LD processor replaces arrays with just
one element with that element during compaction. If set to false
,
all arrays will remain arrays even if they have just one element.
compactToRelative
base
option or document location when compacting.documentLoader
expandContext
frameExpansion
processingMode
json-ld-1.0
or json-ld-1.1
, the
implementation must produce exactly the same results as the algorithms
defined in this specification.
If set to another value, the JSON-LD processor is allowed to extend
or modify the algorithms defined in this specification to enable
application-specific optimizations. The definition of such
optimizations is beyond the scope of this specification and thus
not defined. Consequently, different implementations may implement
different optimizations. Developers must not define modes beginning
with json-ld
as they are reserved for future versions
of this specification.produceGeneralizedRdf
true
, the JSON-LD processor may emit blank nodes for
triple predicates, otherwise they will be omitted.
Users of an API implementation can utilize a callback to control how remote documents and contexts are retrieved. This section details the parameters of that callback and the data structure used to return the retrieved context.
The LoadDocumentCallback
defines a callback that custom document loaders
have to implement to be used to retrieve remote documents and contexts.
callback LoadDocumentCallback = Promise<RemoteDocument> (USVString url);
All errors result in the Promise being rejected with
a JsonLdError
whose code is set to
loading document failed
or multiple context link headers
as described in the next section.
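For illustration, a custom document loader might be written as follows in JavaScript, assuming a fetch-capable environment; handling of the http://www.w3.org/ns/json-ld#context HTTP Link Header is omitted for brevity, and all names other than the RemoteDocument members are illustrative.
const documentLoader = async (url) => {
  const response = await fetch(url, { headers: { Accept: 'application/ld+json, application/json' } });
  if (!response.ok) {
    // rejecting with an object shaped like a JsonLdError
    throw { code: 'loading document failed', message: 'Could not retrieve ' + url };
  }
  return {
    contextUrl: null,                 // would be taken from a Link header when applicable
    documentUrl: response.url,        // the final URL, used as the base IRI
    document: await response.json()   // the parsed JSON document
  };
};

// The loader is then passed to a processor via the documentLoader option, e.g.:
// JsonLdProcessor.expand('https://example.org/data.jsonld', { documentLoader });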
The RemoteDocument
type is used by a LoadDocumentCallback
to return information about a remote document or context.
dictionary RemoteDocument {
  USVString contextUrl = null;
  USVString documentUrl;
  any document;
};
contextUrl
http://www.w3.org/ns/json-ld#context
link relation in the
response. If the response's content type is application/ld+json
,
the HTTP Link Header is ignored. If multiple HTTP Link Headers using
the http://www.w3.org/ns/json-ld#context
link relation are found,
the Promise of the LoadDocumentCallback
is rejected with
a JsonLdError
whose code is set to
multiple context link headers
.documentUrl
document
This section describes the datatype definitions used within the JSON-LD API for error handling.
The JsonLdError
type is used to report processing errors.
dictionary JsonLdError {
  JsonLdErrorCode code;
  USVString? message = null;
};
code
message
The JsonLdErrorCode
represents the collection of valid JSON-LD error
codes.
enum JsonLdErrorCode
{
"colliding keywords",
"compaction to list of lists",
"conflicting indexes",
"cyclic IRI mapping",
"invalid @id value",
"invalid @index value",
"invalid @nest value",
"invalid @prefix value",
"invalid @reverse value",
"invalid @version value",
"invalid base IRI",
"invalid container mapping",
"invalid default language",
"invalid IRI mapping",
"invalid keyword alias",
"invalid language map value",
"invalid language mapping",
"invalid language-tagged string",
"invalid language-tagged value",
"invalid local context",
"invalid remote context",
"invalid reverse property",
"invalid reverse property map",
"invalid reverse property value",
"invalid scoped context",
"invalid set or list object",
"invalid term definition",
"invalid type mapping",
"invalid type value",
"invalid typed value",
"invalid value object",
"invalid value object value",
"invalid vocab mapping",
"keyword redefinition",
"list of lists",
"loading document failed",
"loading remote context failed",
"multiple context link headers",
"processing mode conflict",
"recursive context inclusion"
};
colliding keywords
compaction to list of lists
conflicting indexes
cyclic IRI mapping
invalid @id value
@id
member was encountered whose value was not a
string.invalid @index value
@index
member was encountered whose value was
not a string.invalid @nest value
@nest
has been found.invalid @prefix value
@prefix
has been found.invalid @reverse value
@reverse
member has been detected,
i.e., the value was not a dictionary.invalid @version value
@version
key was used in a context with
an out of range value.invalid base IRI
null
.invalid container mapping
@container
member was encountered whose value was
not one of the following strings:
@list
, @set
, or @index
.invalid default language
null
and thus invalid.invalid IRI mapping
invalid keyword alias
invalid language map value
invalid language mapping
@language
member in a term definition
was encountered whose value was neither a string nor
null
and thus invalid.invalid language-tagged string
invalid language-tagged value
true
, or false
with an
associated language tag was detected.invalid local context
invalid remote context
invalid reverse property
invalid reverse property map
@context
are allowed in reverse property maps.invalid reverse property value
invalid scoped context
invalid set or list object
invalid term definition
invalid type mapping
@type
member in a term definition
was encountered whose value could not be expanded to an
absolute IRI.invalid type value
@type
member has been detected,
i.e., the value was neither a string nor an array
of strings.invalid typed value
invalid value object
invalid value object value
@value
member of a
value object has been detected, i.e., it is neither
a scalar nor null
.invalid vocab mapping
null
.keyword redefinition
list of lists
loading document failed
loading remote context failed
multiple context link headers
http://www.w3.org/ns/json-ld#context
link relation
have been detected.processing mode conflict
recursive context inclusion
This section is non-normative.
[Constructor]
interface JsonLdProcessor {
  static Promise<JsonLdDictionary> compact(JsonLdInput input, JsonLdContext context, optional JsonLdOptions? options);
  static Promise<sequence<JsonLdDictionary>> expand(JsonLdInput input, optional JsonLdOptions? options);
  static Promise<JsonLdDictionary> flatten(JsonLdInput input, optional JsonLdContext? context, optional JsonLdOptions? options);
};

dictionary JsonLdDictionary {
};

typedef (JsonLdDictionary or sequence<JsonLdDictionary> or USVString) JsonLdInput;

typedef (JsonLdDictionary or USVString or sequence<(JsonLdDictionary or USVString)>) JsonLdContext;

dictionary JsonLdOptions {
  USVString? base;
  boolean compactArrays = true;
  boolean compactToRelative = true;
  LoadDocumentCallback documentLoader = null;
  (JsonLdDictionary? or USVString) expandContext = null;
  boolean frameExpansion = false;
  USVString processingMode = null;
  boolean produceGeneralizedRdf = true;
};

callback LoadDocumentCallback = Promise<RemoteDocument> (USVString url);

dictionary RemoteDocument {
  USVString contextUrl = null;
  USVString documentUrl;
  any document;
};

dictionary JsonLdError {
  JsonLdErrorCode code;
  USVString? message = null;
};

enum JsonLdErrorCode {
  "colliding keywords", "compaction to list of lists", "conflicting indexes", "cyclic IRI mapping",
  "invalid @id value", "invalid @index value", "invalid @nest value", "invalid @prefix value",
  "invalid @reverse value", "invalid @version value", "invalid base IRI", "invalid container mapping",
  "invalid default language", "invalid IRI mapping", "invalid keyword alias", "invalid language map value",
  "invalid language mapping", "invalid language-tagged string", "invalid language-tagged value",
  "invalid local context", "invalid remote context", "invalid reverse property", "invalid reverse property map",
  "invalid reverse property value", "invalid scoped context", "invalid set or list object",
  "invalid term definition", "invalid type mapping", "invalid type value", "invalid typed value",
  "invalid value object", "invalid value object value", "invalid vocab mapping", "keyword redefinition",
  "list of lists", "loading document failed", "loading remote context failed", "multiple context link headers",
  "processing mode conflict", "recursive context inclusion"
};
Consider requirements from Self-Review Questionnaire: Security and Privacy.
This section is non-normative.
@context
property, which defines a context used for values of
a property identified with such a term. This context is used
in both the Expansion Algorithm and
Compaction Algorithm.@nest
property, which identifies a term expanding to
@nest
which is used for containing properties using the same
@nest
mapping. When expanding, the values of a property
expanding to @nest
are treated as if they were contained
within the enclosing node object directly.@container
values within an expanded term definition may now
include @id
and @type
, corresponding to id maps and type maps.@none
value, but
JSON-LD 1.0 only allowed string values. This has been updated
to allow (and ignore) @none
values.@container
in an expanded term definition
can also be an array containing any appropriate container
keyword along with @set
(other than @list
).
This allows a way to ensure that such property values will always
be expressed in array form.compactToRelative
option to allow IRI compaction (section 6.3 IRI Compaction)
to document relative IRIs to be disabled.@prefix
member with the value true. The 1.0 algorithm has
been updated to only consider terms that map to a value that ends with a URI
gen-delim character.@container
to include @graph
,
along with @id
, @index
and @set
.
In the Expansion Algorithm, this is
used to create a named graph from either a node object, or
objects which are values of keys in an id map or index map.
the Compaction Algorithm allows
specific forms of graph objects to be compacted back to a set of node
objects, or maps of node objects.@none
keyword, or an alias, for
values of maps for which there is no natural index. The Expansion Algorithm removes this indexing
transparently.""
) has been added as a possible value for @vocab
in
a context. When this is set, vocabulary-relative IRIs, such as the
keys of node objects, are expanded or compacted relative
to the base IRI using string concatenation.
This section is non-normative.
The following is a list of issues open at the time of publication.
Thanks for the great work with JSON-LD! However, when trying to use JSON-LD to present data in the company I'm working in, I noticed the following missing feature:
FEATURE PROPOSAL: ABILITY TO DEFINE ANY KEY AS AN INDEX KEY
In addition to JSON-LD's existing index container structure, I propose that any key under a JSON-LD node could be defined as an index key.
This would help clustering data under a node into coder friendly logical groups without messing up the Linked Data interpretation with e.g. blank nodes. I encountered the need for this feature at our company where our problem is that the amount of attributes a single JSON-LD node can have can potentially be quite many, say, tens or hundreds of attributes.
As far as I know, this can not be currently done with JSON-LD without 1) ending up with blank nodes or 2) the need to create a deeper JSON structure by using a separate index term (using "@container":"@index") which then contains the data underneath.
In addition, if a single key could be defined as an index term, this would make it more flexible to attach the JSON-LD Linked Data interpretation to even a wider amount of existing JSON data, without having to change the structure of such data (and without ending up with e.g. lots of blank nodes).
DEFINING AN INDIVIDUAL INDEX KEY IN @context
The "@context" definition could be done e.g. using the existing reserved keyword "@index" in the following way:
"indexkey":"@index"
which should be interpreted in the following way: 1) the "indexkey" is an index key and should be skipped when traversing the JSON tree while doing the JSON-LD to RDF interpretation, 2) any data directly under the "indexkey" should be interpreted as data directly attached to the node of the indexkey (same RDF subject).
EXAMPLE
To give a full example, in the following a single key "labels" is defined as an index key to help group the data into coder-friendly logical groups without messing up the Linked Data interpretation:
{
"@context": {
"labels":"@index",
"main_label":"http://example.org/my-schema#main_label",
"other_label":"http://example.org/my-schema#other_label",
"homepage":{ "@id":"http://example.org/my-schema#homepage", "@type":"@id"}
},
"@id":"http://example.org/myresource",
"homepage": "http://example.org",
"labels": {
"main_label": "This is the main label for my resource",
"other_label": "This is the other label"
}
}
This example JSON-LD should generate the following RDF triples:
<http://example.org/myresource> <http://example.org/my-schema#homepage> <http://example.org>.
<http://example.org/myresource> <http://example.org/my-schema#main_label> "This is the main label for my resource".
<http://example.org/myresource> <http://example.org/my-schema#other_label> "This is the other label".
This has already been discussed several times using various terms. The most recent request has come from David Janes on the mailing list. The basic idea is to support JSON values/subtrees that aren't mapped to an IRI in the context. They should survive algorithmic transformations (basically without being touched at all).
_This was raised by Fabian Steeg:_
The JSON-LD API document states: "Expansion has two important goals: removing any contextual information from the document, and ensuring all values are represented in a regular form."
Is there a way to achieve only the second goal, the regular form, but with compact terms? Using compaction with compactArrays=false is pretty close, but there is still at least one thing that is irregular and causing issues for me.
Given this input:
{ "http://example.com/foo": "foo-value", "http://example.com/bar": { "@value": "bar-value", "@language": "en" }, "@context": { "foo": "http://example.com/foo", "bar": "http://example.com/bar" } }
I get this from compaction with compactArrays=false:
{ "@graph": [{ "foo": ["foo-value"], <-- foo: array of strings "bar": [{ <-- bar: array of objects "@language": "en", "@value": "bar-value" }] }], "@context": { "foo": "http://example.com/foo", "bar": "http://example.com/bar" } }
But I'd like to get this (which is what expansion does to the values):
{ "@graph": [{ "foo": [{ <-- both foo and bar: "@value" : "foo-value" array of objects }], "bar": [{ "@language": "en", "@value": "bar-value" }] }], "@context": { "foo": "http://example.com/foo", "bar": "http://example.com/bar" } }
So I guess I'm looking for something like a compactValues=false option.
Is there some way to get this output?
We are encountering an issue when converting RDF Datasets to JSON-LD.
The problem is with blank nodes that are shared between graphs and lists.
In TriG (yes, this is a synthetic reduced test case that captures a
smaller example that might appear for real):
# Bnode references across graph and lists
PREFIX : <http://www.example.com/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
:G {
# Written in short form it would be:
# :z :q ("cell-A" "cell-B")
# but we want to share the tail ("cell-B")
:z :q _:z0 .
_:z0 rdf:first "cell-A" .
_:z0 rdf:rest _:z1 .
_:z1 rdf:first "cell-B" .
_:z1 rdf:rest rdf:nil .
}
:G1 {
# This references the tail ("cell-B")
:x :p _:z1 .
}
The triple in :G1 references into the list in :G.
But as we understand the conversion algorithm, section 4 only considers
each graph in turn and so does not see the cross graph sharing.
Is this a correct reading of the spec text?
Part 4 of the conversion algorithm has
"For each name and graph object in graph map: "
so 4.3.3.* walks back up the list in one graph only.
(Conversion generated by jsonld-java : it does not matter if compaction
is applied or not):
{
"@graph" : [ {
"@graph" : [ {
"@id" : ":z",
":q" : {
"@list" : [ "cell-A", "cell-B" ]
}
} ],
"@id" : ":G"
}, {
"@graph" : [ {
"@id" : ":x",
":p" : {
"@id" : "_:b1"
}
} ],
"@id" : ":G1"
} ],
"@context" : {
"@base" : "http://www.example.com/",
"" : "http://www.example.com/",
"rdf" : "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
}
}
There is no _:b1 in :G to refer to because the algorithm-generated @list
and its implicit bNodes don't have labels.
This is a different dataset with no shared bNode.
If it is all the same graph (s/:G1/:G/), the RDF dataset structure is
correctly serialized.
Andy
See: digitalbazaar/jsonld.js#72
It would be helpful to have the ability to use @language within an object as a shorthand for "@context": {"@language": "..."} ... for instance... make:
{
"@language": "en",
"displayName": "foo"
}
equivalent to:
{
"@context": {"@language": "en"},
"displayName": "foo"
}
In the spirit of "Labeling Everything" (http://patterns.dataincubator.org/book/label-everything.html) ... it would be worthwhile, IMO, for JSON-LD to provide a basic @Label keyword for use both in @context and nodes. It's largely syntactic sugar but would be useful.
For example:
{
"@context": {
"@label": "An Example Context",
"displayName": "@label",
},
"displayName": "A Simple Label"
}
Which would expand to:
_:c14n0 <http://www.w3.org/2000/01/rdf-schema#label> "A Simple Label" .
Many JSON specs existed before JSON-LD. A couple of these specs may not be compatible with JSON-LD as they contain multidimensional containers, such as GeoJSON.
Example of a multidimensional array:
[ [3.1,51.06,30],
[3.1,51.06,20] ]
This issue is a result from the discussion on the GeoJSON-LD repository: geojson/geojson-ld#32. If this issue will not get resolved, the GeoJSON-LD community would suggest creating custom JSON-LD parsers for JSON-LD dialects. This situation would be far from desirable.
Introduce a new @values
keyword, which can be used to describe the values of a @set
or a @list
container in more detail.
When an array is given in the @values
, then the precise amount of objects within this array corresponds with the array in the graph in this order.
When an object is given in the @values
, each value of the array in the graph is mapped according to this template.
{
"@context": {
"coordinates": {
"@id": "geojson:coordinates",
"@container" : "@list",
"@values" : {
"@type" : "geojson:Coordinate",
"@container" : "@set",
"@values" : [
{"@type" : "xsd:double", "@id":"geo:longitude"},
{"@type" : "xsd:double", "@id":"geo:latitude"}
]
}
}
},
"@graph" : [{
"@id" : "ex:LineString1",
"coordinates" : [
[
3.1057405471801753,
51.064216229943476
],
[
3.1056976318359375,
51.063434090307574
]
]
}]
}
Would transform to (and vice versa):
ex:LineString1 geojson:coordinates _:b0 .
_:b0 rdf:first _:b1 .
_:b1 a geojson:Coordinate ;
geo:longitude "3.105740547180175E0"^^xsd:double ;
geo:latitude "5.106421622994348E1"^^xsd:double .
_:b0 rdf:rest _:b2 .
_:b2 rdf:first a geojson:Coordinate ;
geo:longitude "3.1056976318359375"^^xsd:double ;
geo:latitude "51.063434090307574"^^xsd:double .
_:b2 rdf:rest rdf:nil .
I want the following:
{
"@context": {
"type": "@type",
"profile": "@type"
},
"type": "cov:Coverage",
"profile": "cov:GridCoverage"
}
However this is not allowed. The playground says "Invalid JSON-LD syntax; colliding keywords detected".
However, this one works:
{
"@context": {
"type": {"@id": "rdf:type", "@type": "@id" },
"profile": {"@id": "rdf:type", "@type": "@id" }
},
"type": "cov:Coverage",
"profile": "cov:GridCoverage"
}
I understand that this restriction probably makes sense for other keywords, but could it do any harm for @type
?
There have been some discussions on what it would take to be able to do a streaming parse of JSON-LD into Quads, and similarly to generate compliant JSON-LD from a stream of quads. Describing these as some kind of a profile would be useful for implementations that expect to work in a streaming environment, when it's not feasible to work on an entire document basis.
As currently stated, the JSON-LD to RDF algorithm requires expanding the document and creating a node map. A profile of JSON-LD which used a flattened array of node objects, where each node object could be independently expanded and no flattening is required could facilitate deserializing an arbitrarily long JSON-LD source to Quads. (Some simplifying restrictions on shared lists may be necessary). Outer document is an object, containing @context
and @graph
only; obviously, this only will work for systems that can access key/values in order, and for systems that ensure that @context
comes lexically before @graph
in the output. Obviously, only implementations that can read and write JSON objects with key ordering intact will be able to take advantage of such streaming capability.
For serializing RDF to JSON-LD, expectations on the grouping of quads with the same graph name and subject are necessary to reduce serialization cost, and marshaling components of RDF Lists is likely not feasible. Even if graph name/subject grouping is not maintained in the input, the resulting output will still represent a valid JSON-LD document, although it may require flattening for further processing. (Many triple stores will, in fact, generate statements/quads properly grouped, so this is likely not an issue in real world applications.)
Hi there,
I was looking for a way to access properties in a JSON-LD document based on triples (to patch the document). This would mean having a view which creates a dictionary for a given document. The term normalisation is already used, but this approach would be close to the way https://github.com/paularmstrong/normalizr works. D3 uses https://github.com/d3/d3-hierarchy/blob/master/README.md#stratify in a slightly different way but with the same general intent.
The goal would be to be able to address document values with the syntax stratified_doc[triple.subject][triple.predicate], or even better stratified[triple.graph][triple.subject][triple.predicate]. This could also be a @stratified parameter for expansion.
For a document:
{
  "@context": {
    "dc": "http://purl.org/dc/elements/1.1/",
    "ex": "http://example.org/vocab#",
    "xsd": "http://www.w3.org/2001/XMLSchema#",
    "ex:contains": {
      "@type": "@id"
    }
  },
  "@id": "http://example.org/graph/0",
  "dc:creator": "Jane Doe",
  "@graph": [
    {
      "@id": "http://example.org/library",
      "@type": "ex:Library",
      "ex:contains": "http://example.org/library/the-republic"
    }
  ]
}
Such a stratified view would therefore look like:
{
  "http://example.org/graph/0": {
    "http://example.org/library": {
      "@type": "http://example.org/vocab#Library",
      "http://example.org/vocab#contains": {
        "@id": "http://example.org/library/the-republic"
      }
    },
    "http://example.org/library/the-republic": {}
  },
  "@graph": {
    "http://example.org/graph/0": {
      "http://purl.org/dc/elements/1.1/creator": "Jane Doe"
    }
  }
}
This would therefore allow one to do the following:
// Access a triple from the default graph
var creator = stratified['@graph']['http://example.org/graph/0']['http://purl.org/dc/elements/1.1/creator'];
// "Jane Doe"
// Access a triple in a named graph
var type = stratified['http://example.org/graph/0']['http://example.org/library']['@type'];
// "http://example.org/vocab#Library"
// Before submitting a document, mutate a property
stratified['http://example.org/graph/0']['http://example.org/library/the-republic']['@type'] = 'http://example.org/vocab#Book';
// Or using an immutable spread syntax approach
var new_doc = {
  ...stratified,
  'http://example.org/graph/0': {
    ...stratified['http://example.org/graph/0'],
    'http://example.org/library/the-republic': {
      ...stratified['http://example.org/graph/0']['http://example.org/library/the-republic'],
      '@type': 'http://example.org/vocab#Book'
    }
  }
}
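Nothing like this exists in the JSON-LD API today; as a rough, non-normative sketch of the idea, such a view could be built from an already expanded and flattened document along the following lines (the stratify helper name is invented, lists and other edge cases are ignored, and property values are left in their expanded form):
// Rough sketch only (not part of the JSON-LD API): build a stratified index
// from an expanded, flattened document. The "stratify" name is hypothetical.
// Lists, @index containers and similar edge cases are not handled, and
// property values are left exactly as they appear in expanded form.
function stratify(flattened) {
  const view = { '@graph': {} };
  for (const node of flattened) {
    // Properties asserted directly on the node live in the default graph.
    const props = {};
    for (const [key, value] of Object.entries(node)) {
      if (key !== '@id' && key !== '@graph') props[key] = value;
    }
    view['@graph'][node['@id']] = props;
    // If the node names a graph, index that graph's contents under its @id.
    if (node['@graph']) {
      view[node['@id']] = {};
      for (const subject of node['@graph']) {
        const sprops = {};
        for (const [key, value] of Object.entries(subject)) {
          if (key !== '@id') sprops[key] = value;
        }
        view[node['@id']][subject['@id']] = sprops;
      }
    }
  }
  return view;
}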
Issue: The compaction algorithm prefers the most compact format, which for resources without relationships is a string containing the URI. This causes problems in systems that cannot handle arrays of mixed data types (for example ElasticSearch) when there are also resources that have relationships, resulting in both objects and strings in the same array.
For example:
"seeAlso": [
"http://example.org/reference1",
{"id": "http://example.org/reference2", "format": "text/html"}
]
would raise an error in Elastic.
Proposed solution: Provide a flag to the compaction algorithm to signal that the resulting JSON should always create objects for resources, even if there is only the URI available. This would instead render the example above as an array of objects:
"seeAlso": [
{"id": "http://example.org/reference1"},
{"id": "http://example.org/reference2", "format": "text/html"}
]
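No such flag exists in the JSON-LD API or in current processors; if one were added, usage might look roughly like the following sketch (the objectifyNodeReferences option name is invented for illustration, and the promise-based jsonld.js API is assumed):
// Hypothetical sketch: "objectifyNodeReferences" is an invented option name,
// not part of the JSON-LD API or of jsonld.js. It illustrates a flag that
// would stop node references from being compacted down to bare IRI strings.
const jsonld = require('jsonld');

async function compactForElasticsearch(doc, context) {
  // With the hypothetical flag, "seeAlso" values would always be objects
  // such as {"id": "http://example.org/reference1"}, never plain strings.
  return jsonld.compact(doc, context, { objectifyNodeReferences: true });
}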
The purpose of the @container:@set functionality (AFAIU) is to ensure that the output is consistent in shape: if there can ever be multiple values, the structure is always an array.
There are two situations in which this functionality could be desirable but is currently not possible:
@type: as it is a keyword, we can only alias it (e.g. as type) but not define it to have @container:@set functionality, meaning there is a gotcha waiting to happen for ontologies that require or use multiple classes for a single resource instance (see the sketch after this list). See playground.
@context: less useful, but @context will also compact to a single string/object when there might be multiple contexts. See playground.
A @context modifying itself seems particularly strange, but the inconsistency for @type seems solvable if the restrictions in its definition were loosened?
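For illustration only, the term definition being asked for would look roughly like the following (the schema names are placeholders; current processors do not accept @container on a keyword alias):
{
  "@context": {
    "schema": "http://schema.org/",
    "type": { "@id": "@type", "@container": "@set" }
  },
  "type": ["schema:CreativeWork", "schema:Book"]
}
With such a definition, type would always compact to an array, even when a node has only a single class.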
This is related to #235: When I have the following document:
{
"@context": {
"@vocab" : "http://vocab.getty.edu/",
"a" : "http://vocab.getty.edu/aaaaaaaaaat/"
},
"@id" : "http://vocab.getty.edu/aaaaaaaaaat/5001065997",
"@type": "http://vocab.getty.edu/aaaaaaaaaat/datatype"
}
By point 3 of the spec, because http://vocab.getty.edu/aaaaaaaaaat/5001065997 contains the value of @vocab, it gets compacted as aaaaaaaaaat/5001065997 without even looking at the prefixes. I think this is not reasonable; in this case a:5001065997 would look much nicer IMO.
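For illustration, the compaction preferred here would use the declared prefix throughout, roughly:
{
  "@context": {
    "@vocab": "http://vocab.getty.edu/",
    "a": "http://vocab.getty.edu/aaaaaaaaaat/"
  },
  "@id": "a:5001065997",
  "@type": "a:datatype"
}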
We have web application that needs to be able to modify RDF lists from a triple store and propagate the changes back. To do this, we are utilizing jsonld-java to serialize the RDF into JSON-LD, modifying it in the web app, and then sending it back to be deserialized and stored in the triple store. Originally, we were using blank nodes like the ones shown in Turtle below.
<http://example.com> <http://example.com/property> _:a .
_:a a <http://www.w3.org/1999/02/22-rdf-syntax-ns#List> ;
<http://www.w3.org/1999/02/22-rdf-syntax-ns#first> "a" ;
<http://www.w3.org/1999/02/22-rdf-syntax-ns#rest> _:b .
_:b a <http://www.w3.org/1999/02/22-rdf-syntax-ns#List> ;
<http://www.w3.org/1999/02/22-rdf-syntax-ns#first> "b" ;
<http://www.w3.org/1999/02/22-rdf-syntax-ns#rest> _:c .
_:c a <http://www.w3.org/1999/02/22-rdf-syntax-ns#List> ;
<http://www.w3.org/1999/02/22-rdf-syntax-ns#first> "c" ;
<http://www.w3.org/1999/02/22-rdf-syntax-ns#rest> <http://www.w3.org/1999/02/22-rdf-syntax-ns#nil> .
However, we discovered that blank node lists are collapsed during serialization thus losing all the blank node IDs.
[ {
"@id" : "http://example.com",
"http://example.com/property" : [ {
"@list" : [ {
"@value" : "a"
}, {
"@value" : "b"
}, {
"@value" : "c"
} ]
} ]
} ]
With blank node IDs removed, we are no longer able to reference the existing RDF in the triple store to perform updates when the lists are modified in the web-app without much more complex logic. To avoid this, we skolemized the blank node IDs into IRIs.
<http://example.com> <http://example.com/property> <urn:a> .
<urn:a> a <http://www.w3.org/1999/02/22-rdf-syntax-ns#List> ;
<http://www.w3.org/1999/02/22-rdf-syntax-ns#first> "a" ;
<http://www.w3.org/1999/02/22-rdf-syntax-ns#rest> <urn:b> .
<urn:b> a <http://www.w3.org/1999/02/22-rdf-syntax-ns#List> ;
<http://www.w3.org/1999/02/22-rdf-syntax-ns#first> "b" ;
<http://www.w3.org/1999/02/22-rdf-syntax-ns#rest> <urn:c> .
<urn:c> a <http://www.w3.org/1999/02/22-rdf-syntax-ns#List> ;
<http://www.w3.org/1999/02/22-rdf-syntax-ns#first> "c" ;
<http://www.w3.org/1999/02/22-rdf-syntax-ns#rest> <http://www.w3.org/1999/02/22-rdf-syntax-ns#nil> .
However, when serializing the skolemized triples, the IRI for the last element in the RDF list is hidden, in this case urn:c. This leads to the same problem we were having when using blank node IDs.
[ {
"@id" : "http://example.com",
"http://example.com/property" : [ {
"@id" : "urn:a"
} ]
}, {
"@id" : "urn:a",
"@type" : [ "http://www.w3.org/1999/02/22-rdf-syntax-ns#List" ],
"http://www.w3.org/1999/02/22-rdf-syntax-ns#first" : [ {
"@value" : "a"
} ],
"http://www.w3.org/1999/02/22-rdf-syntax-ns#rest" : [ {
"@id" : "urn:b"
} ]
}, {
"@id" : "urn:b",
"@type" : [ "http://www.w3.org/1999/02/22-rdf-syntax-ns#List" ],
"http://www.w3.org/1999/02/22-rdf-syntax-ns#first" : [ {
"@value" : "b"
} ],
"http://www.w3.org/1999/02/22-rdf-syntax-ns#rest" : [ {
"@list" : [ {
"@value" : "c"
} ]
} ]
} ]
Issue #277 seems to be the point where the implementation changed from serializing lists in the manner we expect to this new compact way. Is there any way we can get around this so that the last blank node of a list is not collapsed?
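The serialization "in the manner we expect" would presumably keep each skolemized list node as its own node object, so that the tail of the list remains addressable, roughly:
{
  "@id" : "urn:c",
  "@type" : [ "http://www.w3.org/1999/02/22-rdf-syntax-ns#List" ],
  "http://www.w3.org/1999/02/22-rdf-syntax-ns#first" : [ {
    "@value" : "c"
  } ],
  "http://www.w3.org/1999/02/22-rdf-syntax-ns#rest" : [ {
    "@id" : "http://www.w3.org/1999/02/22-rdf-syntax-ns#nil"
  } ]
}
with urn:b's rdf:rest entry pointing to { "@id" : "urn:c" } instead of embedding an inline @list.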
Currently it appears that properties are sorted into alphabetical order after any JSON-LD operation (compaction, framing).
In the context of framing, this is sometimes a nice feature, since it means that after framing multiple input JSON files, the JSON data is at least in a consistent order.
I understand that ordering is semantically meaningless, but as framing exists to turn the graph (which could correspond to multiple different trees) into a predictable JSON tree as a convenience for developers, it seems natural to me that if an explicit ordering is given in the frame, that the algorithm would respect that order rather than alphabetize. For example, if my data is:
{
"@context": "http://schema.org/",
"@id": "document",
"b": "text",
"a": "more text"
}
Under the frame:
{
"@context": "http://schema.org/",
"@id": "document",
"b": {},
"a": {}
}
the returned document reverses the order of b and a (to be alphabetical), and not the order given in the frame. Framing is a really elegant way to specify the nesting order, but it would be nice for framing to also be able to dictate the ordering, so that the output data file really follows the exact structure of the given frame.
Related issue: there is no way to indicate that referenced nodes should occur before they are referenced (excluding circular references), which can be useful in streaming files. Having control of the node order via the frame would also give a mechanism to address that.
Hope this makes sense and apologies if I'm missing something fundamental here that makes alphabetizing the node order the only logical thing to do; or if I've misunderstood the expected behavior.
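For illustration, current processors return the alphabetized document below; the request is for the output to instead follow the frame's order (b before a, as in the original data):
{
  "@context": "http://schema.org/",
  "@id": "document",
  "a": "more text",
  "b": "text"
}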
Comments at TPAC suggested that as our work is a breaking change (causing 1.0 processors that are not 1.1 compatible to intentionally break when they see "@version": 1.1), semantic versioning would suggest that we use a major release number, rather than a minor number.
This could impact a potential WG, which may want to make further changes and then be in the position of using either 2.1 or 3.0, which is odd given that the previous recommendation is 1.0.
In some situations it is important/necessary to include the base direction of a text, alongside its language; see the “Requirements for Language and Direction Metadata in Data Formats” for further details. In practice, in vanilla JSON, it would require something like:
"title": [ { "value": "Moby Dick", "lang": "en" },
{ "value": "موبي ديك", "lang": "ar" "dir": "rtl"}
]
(the example comes from that document).
At this moment, I believe the only way you can reasonably express that in JSON-LD is by cheating a bit:
"title": [ { "@value": "Moby Dick", "@language": "en" },
{ "@value": "موبي ديك", "@language": "ar" "dir": "rtl"}
]
and making sure that the dir term is not defined in the relevant @context so that, when generating the RDF output, that term is simply ignored. But that also means that there is no round-tripping: the term will disappear after expansion.
The difficulty lies in the RDF layer, in fact; RDF does not have any means (alas!) to express text direction. On the other hand, this missing feature is a general I18N problem whenever JSON-LD is used (there were issues when developing the Web Annotation Model, these issues are popping up in the Web Publication work, etc.).
Here is what I would propose as a non-complete solution:
Define a @dir term, alongside @language. This means the term can be used in place of dir above, i.e., it is a bona fide part of a string representation, would therefore be kept in the compaction/expansion steps, and can also be used for framing.
For now, @dir is ignored when transforming into RDF; i.e., only the language tag would be used.
If RDF ever provides something like [] ex:title "موبي ديك"^^rdf:internationalText(ar,rtl) ; then the @dir value can be properly mapped onto an RDF representation of the right choices (if such choices are worked out).
Cc: @BigBlueHat @r12a
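Under the first point of this proposal, the example could then be written natively (illustrative only; @dir is not currently part of JSON-LD):
"title": [ { "@value": "Moby Dick", "@language": "en" },
           { "@value": "موبي ديك", "@language": "ar", "@dir": "rtl"}
]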
Per a suggestion by @danbri, we may want to add a container type, similar to @list, for encoding schema:ItemList serializations, when the values are schema:ListItem and order is set through schema:position. ItemList can be used with text values as well, but this is already reasonably supported natively.
Markup might look like the following:
{
  "@context": {
    "@vocab": "http://schema.org/",
    "itemListElement": {"@container": "@listItem"}
  },
  "@type": "ItemList",
  "@id": "http://en.wikipedia.org/wiki/Billboard_200",
  "name": "Top music artists",
  "description": "The artists with the most cumulative weeks at number one according to Billboard 200",
  "itemListElement": [
    {"@type": "MusicGroup", "name": "Beatles"},
    {"@type": "MusicGroup", "name": "Elvis Presley"},
    {"@type": "MusicGroup", "name": "Michael Jackson"},
    {"@type": "MusicGroup", "name": "Garth Brooks"}
  ]
}
This would expand to the following:
[
  {
    "@id": "http://en.wikipedia.org/wiki/Billboard_200",
    "@type": ["http://schema.org/ItemList"],
    "http://schema.org/description": [{
      "@value": "The artists with the most cumulative weeks at number one according to Billboard 200"
    }],
    "http://schema.org/itemListElement": [{
      "@type": ["http://schema.org/ListItem"],
      "http://schema.org/item": [{
        "@type": ["http://schema.org/MusicGroup"],
        "http://schema.org/name": [{"@value": "Beatles"}]
      }],
      "http://schema.org/position": [{"@value": 1}]
    }, {
      "@type": ["http://schema.org/ListItem"],
      "http://schema.org/item": [{
        "@type": ["http://schema.org/MusicGroup"],
        "http://schema.org/name": [{"@value": "Elvis Presley"}]
      }],
      "http://schema.org/position": [{"@value": 2}]
    }, {
      "@type": ["http://schema.org/ListItem"],
      "http://schema.org/item": [{
        "@type": ["http://schema.org/MusicGroup"],
        "http://schema.org/name": [{"@value": "Michael Jackson"}]
      }],
      "http://schema.org/position": [{"@value": 3}]
    }, {
      "@type": ["http://schema.org/ListItem"],
      "http://schema.org/item": [{
        "@type": ["http://schema.org/MusicGroup"],
        "http://schema.org/name": [{"@value": "Garth Brooks"}]
      }],
      "http://schema.org/position": [{"@value": 4}]
    }],
    "http://schema.org/name": [{"@value": "Top music artists"}]
  }
]
Otherwise, it works like @list.
When compacting, the processor will re-order items based on position, and ignore any nextItem or previousItem entries.
Expansion shows 1-based positions, but could be 0-based as well. Note that specific position values are lost when compacting, and duplicate values may lead to undefined relative ordering.
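As a non-normative sketch of the compaction-side re-ordering just described, written in JavaScript against the expanded ListItem structure shown above (the orderListItems helper is invented for illustration):
// Sketch only: order expanded schema:ListItem objects by schema:position and
// keep their schema:item values, as a @listItem-aware compaction might do.
// Duplicate or missing positions would make the relative order undefined.
function orderListItems(listItems) {
  return listItems
    .slice()
    .sort(function (a, b) {
      return a['http://schema.org/position'][0]['@value'] -
             b['http://schema.org/position'][0]['@value'];
    })
    .map(function (item) {
      return item['http://schema.org/item'][0];
    });
}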
During the last meeting it was resolved to have one playground for 1.0 and 1.1 processing. Some notes on that related to jsonld.js:
@graph handling and a bit of @version handling
From an ease of site development viewpoint, I think we should just put the most recent jsonld.js on the playground and add a UI control to pick the processingMode API option. Due to practicalities of jsonld.js not having a fully correct 1.0-only lib, it seems not worth the effort to try to deal with this any other way. There are edge cases where a 1.1 lib in 1.0 mode will produce different results than a 1.0 lib. My guess is that in practice this really doesn't matter, or in any case is not worth handling on the playground.
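For reference, a minimal sketch of what such a playground control would drive, assuming the promise-based jsonld.js API and its processingMode option:
// Sketch: pass the mode selected in the playground UI straight through to
// jsonld.js; processingMode is expected to be "json-ld-1.0" or "json-ld-1.1".
const jsonld = require('jsonld');

function expandWithMode(doc, mode) {
  return jsonld.expand(doc, { processingMode: mode });
}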
This section is non-normative.
A large amount of thanks goes out to the JSON-LD Community Group participants who worked through many of the technical issues on the mailing list and the weekly telecons - of special mention are Niklas Lindström, François Daoust, Lin Clark, and Zdenko 'Denny' Vrandečić. The editors would like to thank Mark Birbeck, who provided a great deal of the initial push behind the JSON-LD work via his work on RDFj. The work of Dave Lehn and Mike Johnson is appreciated for reviewing, and performing several implementations of the specification. Ian Davis is thanked for his work on RDF/JSON. Thanks also to Nathan Rixham, Bradley P. Allen, Kingsley Idehen, Glenn McDonald, Alexandre Passant, Danny Ayers, Ted Thibodeau Jr., Olivier Grisel, Josh Mandel, Eric Prud'hommeaux, David Wood, Guus Schreiber, Pat Hayes, Sandro Hawke, and Richard Cyganiak for their input on the specification.