This document is licensed under a Creative Commons Attribution 3.0 License .
JSON [ RFC4627 ] has proven to be a highly useful object serialization and messaging format. In an attempt to harmonize the representation of Linked Data in JSON, this specification outlines a common JSON representation format for expressing directed graphs, mixing both Linked Data and non-Linked Data in a single document.
This document is merely a public working draft of a potential specification. It has no official standing of any kind and does not represent the support or consensus of any standards organisation.
This document is an experimental work in progress.
JSON, as specified in [ RFC4627 ], is a simple language for representing data on the Web. Linked Data is a technique for creating a graph of interlinked data across different documents or Web sites. Data entities are described using IRIs, which are typically dereferenceable and thus may be used to find more information about an entity, creating a "Web of Knowledge". JSON-LD is intended to be a simple publishing method for expressing not only Linked Data in JSON, but also for adding semantics to existing JSON.
JSON-LD is designed as a light-weight syntax that can be used to express Linked Data. It is primarily intended to be a way to use Linked Data in JavaScript and other Web-based programming environments. It is also useful when building interoperable Web services and when storing Linked Data in JSON-based document storage engines. It is practical and designed to be as simple as possible, utilizing the large number of JSON parsers and libraries available today. It is designed to be able to express key-value pairs, RDF data, RDFa [ RDFA-CORE ] data, Microformats [ MICROFORMATS ] data, and Microdata [ MICRODATA ]. That is, it supports every major Web-based structured data model in use today.
The syntax does not necessarily require applications to change their JSON, but allows meaning to be easily added by including a context, either in-band or out-of-band. The syntax is designed to not disturb already deployed systems running on JSON, but to provide a smooth upgrade path from JSON to JSON with added semantics. Finally, the format is intended to be easy to parse, efficient to generate, convertible to RDF in one pass, compatible with both stream-based and document-based processing, and to require a very small memory footprint in order to operate.
This document is a detailed specification for a serialization of Linked Data in JSON. The document is primarily intended for the following audiences:
To understand the basics in this specification you must first be familiar with JSON, which is detailed in [ RFC4627 ]. To understand the API and how it is intended to operate in a programming environment, it is useful to have working knowledge of the JavaScript programming language [ ECMA-262 ] and WebIDL [ WEBIDL ]. To understand how JSON-LD maps to RDF, it is helpful to be familiar with the basic RDF concepts [ RDF-CONCEPTS ].
Examples may contain references to existing vocabularies and use prefixes to refer to Web Vocabularies. The following is a list of all vocabularies and their prefix abbreviations, as used in this document:
- dc (e.g., dc:title)
- foaf (e.g., foaf:knows)
- rdf (e.g., rdf:type)
- xsd (e.g., xsd:integer)
JSON [ RFC4627 ] defines several terms which are used throughout this document:
There are a number of ways that one may participate in the development of this specification:
The following section outlines the design goals and rationale behind the JSON-LD markup language.
A number of design considerations were explored during the creation of this markup language:
The following definition for Linked Data is the one that will be used for this specification.
Note that the definition for Linked Data above is silent on the topic of unlabeled nodes. Unlabeled nodes are not considered Linked Data. However, this specification allows for the expression of unlabeled nodes, as most graph-based data sets on the Web contain a number of associated nodes that are not named and thus are not directly de-referenceable.
An Internationalized Resource Identifier (IRI), as described in [ RFC3987 ], is a mechanism for representing unique identifiers on the Web. In Linked Data, an IRI is commonly used for expressing a subject, a property, or an object.
JSON-LD defines a mechanism to map JSON terms, i.e., keys and values, to IRIs. This does not mean that JSON-LD requires every key or value to be an IRI, but rather ensures that keys and values can be mapped to IRIs if the developer desires to transform their data into Linked Data. There are a few techniques that can ensure that developers will generate good Linked Data for the Web. JSON-LD formalizes those techniques.
We will be using the following JSON markup as the example for the rest of this section:
{
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/",
  "avatar": "http://twitter.com/account/profile_image/manusporny"
}
In JSON-LD, a context is used to map terms, i.e., keys and values in a JSON document, to IRIs. A term is a short word that may be expanded to an IRI. The Web uses IRIs for unambiguous identification. The idea is that these terms mean something that may be of use to other developers and that it is useful to give them an unambiguous identifier. That is, it is useful for terms to expand to IRIs so that developers don't accidentally step on each other's Web Vocabulary terms. For example, the term name may map directly to the IRI http://xmlns.com/foaf/0.1/name. This allows JSON-LD documents to be constructed using the common JSON practice of simple name/value pairs while ensuring that the data is useful outside of the page, API, or database in which it resides.
These Linked Data terms are typically collected in a context document that would look something like this:
{
  "name": "http://xmlns.com/foaf/0.1/name",
  "homepage": "http://xmlns.com/foaf/0.1/homepage",
  "avatar": "http://xmlns.com/foaf/0.1/avatar"
}
This context document can then be used in a JSON-LD document by adding a single line. The JSON markup as shown in the previous section could be changed as follows to link to the context document:
{
"@context": "http://example.org/json-ld-contexts/person",
"name": "Manu Sporny",
"homepage": "http://manu.sporny.org/",
"avatar": "http://twitter.com/account/profile_image/manusporny"
}
The addition above transforms the previous JSON document into a JSON document with added semantics because the @context specifies how the name, homepage, and avatar terms map to IRIs. Mapping those keys to IRIs gives the data global context. If two developers use the same IRI to describe a property, they are more than likely expressing the same concept. This allows both developers to re-use each other's data without having to agree on how their data will inter-operate on a site-by-site basis. Contexts may also contain datatype information for certain terms as well as other processing instructions for the JSON-LD processor.
Contexts may be specified in-line. This ensures that JSON-LD documents can be processed when a JSON-LD processor does not have access to the Web.
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": "http://xmlns.com/foaf/0.1/homepage",
    "avatar": "http://xmlns.com/foaf/0.1/avatar"
  },
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/",
  "avatar": "http://twitter.com/account/profile_image/manusporny"
}
JSON-LD strives to ensure that developers don't have to change the JSON that is going into and being returned from their Web APIs. This means that developers can also specify a context for JSON data in an out-of-band fashion. This is described later in this document.
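For example, the plain JSON returned by an existing Web API can be left untouched and interpreted with a context supplied by the application itself. The sketch below assumes a JavaScript processor object exposing the expand() method of the API defined later in this document; the variable names are illustrative only.

// Unchanged JSON as returned by an existing Web API
var apiResult = {
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/"
};

// A context supplied out-of-band by the application rather than by the API
var externalContext = {
  "name": "http://xmlns.com/foaf/0.1/name",
  "homepage": "http://xmlns.com/foaf/0.1/homepage"
};

// Interpret the unchanged JSON using the out-of-band context
var expanded = processor.expand(apiResult, externalContext);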
JSON-LD uses a special type of machine-readable document called a Web Vocabulary to define terms that are then used to describe concepts and "things" in the world. Typically, these Web Vocabulary documents have prefixes associated with them and contain a number of term declarations. A prefix, like a term, is a short word that expands to a Web Vocabulary base IRI. Prefixes are helpful when a developer wants to mix multiple vocabularies together in a context, but does not want to go to the trouble of defining every single term in every single vocabulary. Some Web Vocabularies may have dozens of terms defined. If a developer wants to use 3-4 different vocabularies, the number of terms that would have to be declared in a single context could become quite large. To reduce the number of different terms that must be defined, JSON-LD also allows prefixes to be used to compact IRIs.

For example, the IRI http://xmlns.com/foaf/0.1/ specifies a Web Vocabulary which may be represented using the foaf prefix. The foaf Web Vocabulary contains a term called name. If you join the foaf prefix with the name suffix, you can build a compact IRI that will expand out into an absolute IRI for the http://xmlns.com/foaf/0.1/name vocabulary term. That is, the compact IRI, or short-form, is foaf:name and the expanded-form is http://xmlns.com/foaf/0.1/name. This vocabulary term is used to specify a person's name.
Developers, and machines, are able to use this IRI (plugging it directly into a web browser, for instance) to go to the term and get a definition of what the term means, much like we can use WordNet today to look up the definition of words in the English language. Developers and machines need the same sort of definition of terms. IRIs provide a way to ensure that these terms are unambiguous.
The context provides a collection of vocabulary terms and prefixes that can be used to expand JSON keys and values into IRIs.
If a set of terms such as name, homepage, and avatar are defined in a context, and that context is used to resolve the names in JSON objects, machines are able to automatically expand the terms to something meaningful and unambiguous, like this:
{
  "http://xmlns.com/foaf/0.1/name": "Manu Sporny",
  "http://xmlns.com/foaf/0.1/homepage": "http://manu.sporny.org",
  "http://rdfs.org/sioc/ns#avatar": "http://twitter.com/account/profile_image/manusporny"
}
Doing this allows JSON to be unambiguously machine-readable without requiring developers to drastically change their workflow.
Please note that this JSON-LD document doesn't define the subject and will thus result in an unlabeled or blank node.
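If an author wanted the node to be externally referenceable, a subject could be added using the @subject key described later in this document; the IRI used below is purely illustrative:

{
  "@subject": "http://example.org/people#manu",
  "http://xmlns.com/foaf/0.1/name": "Manu Sporny",
  "http://xmlns.com/foaf/0.1/homepage": "http://manu.sporny.org",
  "http://rdfs.org/sioc/ns#avatar": "http://twitter.com/account/profile_image/manusporny"
}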
JSON-LD is designed to ensure that Linked Data concepts can be marked up in a way that is simple to understand and author by Web developers. In many cases, regular JSON markup can become Linked Data with the simple addition of a context. As more JSON-LD features are used, more semantics are added to the JSON markup.
IRIs are fundamental to Linked Data, as that is how most subjects and many objects are named. IRIs can be expressed in a variety of different ways in JSON-LD.
An IRI is generated in the following situations:

- for a key, except for @context and other keys that begin with the @ character;
- for the value of @subject, if it is a string;
- for the value of @type;
- for a value associated with the @iri keyword;
- for the value of a key for which @coerce rules naming @iri are in effect.
IRIs can be expressed directly in the key position like so:
{
  ...
  "http://xmlns.com/foaf/0.1/name": "Manu Sporny",
  ...
}
In the example above, the key http://xmlns.com/foaf/0.1/name is interpreted as an IRI, as opposed to being interpreted as a string.
Term expansion occurs for IRIs if a term is defined within the active context :
{
  "@context": { "name": "http://xmlns.com/foaf/0.1/name" },
  ...
  "name": "Manu Sporny",
  ...
}
Prefixes are expanded when used in keys:
{
  "@context": { "foaf": "http://xmlns.com/foaf/0.1/" },
  ...
  "foaf:name": "Manu Sporny",
  ...
}
foaf:name above will automatically expand out to the IRI http://xmlns.com/foaf/0.1/name.

An IRI is generated when a value is associated with a key using the @iri keyword:
{
  ...
  "homepage": { "@iri": "http://manu.sporny.org" }
  ...
}
If type coercion rules are specified in the @context for a particular vocabulary term, an IRI is generated:
{
  "@context": {
    ...
    "@coerce": {
      "@iri": "homepage"
    }
  },
  ...
  "homepage": "http://manu.sporny.org/",
  ...
}
Even though the value http://manu.sporny.org/ is a string, the type coercion rules will transform the value into an IRI when processed by a JSON-LD Processor.
To be able to externally reference nodes, it is important that each node has an unambiguous identifier. IRIs are a fundamental concept of Linked Data, and nodes should have a de-referenceable identifier used to name and locate them. For nodes to be truly linked, de-referencing the identifier should result in a representation of that node. Associating an IRI with a node tells an application that the returned document contains a description of the node requested.
JSON-LD documents may also contain descriptions of other nodes, so it is necessary to be able to uniquely identify each node which may be externally referenced.
A subject of an object in JSON is declared using the @subject key. The subject is the first piece of information needed by the JSON-LD processor in order to create the (subject, property, object) tuple, also known as a triple.
{
  ...
  "@subject": "http://example.org/people#joebob",
  ...
}
The example above would set the subject to the IRI http://example.org/people#joebob.

The type of a particular subject can be specified using the @type key. Specifying the type in this way will generate a triple of the form (subject, type, type-iri).
To be Linked Data, types must be uniquely identified by an IRI .
{
  ...
  "@subject": "http://example.org/people#joebob",
  "@type": "http://xmlns.com/foaf/0.1/Person",
  ...
}
The example above would generate the following triple if the JSON-LD document is mapped to RDF (in N-Triples notation):
<http://example.org/people#joebob> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> .
Regular text strings, also referred to as plain literals, are easily expressed using regular JSON strings.
{
  ...
  "name": "Mark Birbeck",
  ...
}
JSON-LD makes an assumption that strings with associated language encoding information are not very common when used in JavaScript and Web Services. Thus, it takes a little more effort to express strings with associated language information.
{
  ...
  "name": {
    "@literal": "花澄",
    "@language": "ja"
  }
  ...
}
The example above would generate a plain literal for 花澄 and associate the ja language code with the triple that is generated. Languages must be expressed in [ BCP47 ] format.
A value with an associated datatype, also known as a typed literal, is indicated by associating a literal with an IRI which indicates the literal's datatype. Typed literals may be expressed in JSON-LD in three ways:

- by utilizing the @coerce keyword,
- by utilizing the expanded form for specifying objects, or
- by using a native JSON type such as a number.

The first example uses the @coerce keyword to express a typed literal:
{
  "@context": {
    "modified": "http://purl.org/dc/terms/modified",
    "dateTime": "http://www.w3.org/2001/XMLSchema#dateTime",
    "@coerce": {
      "dateTime": "modified"
    }
  },
  ...
  "modified": "2010-05-29T14:17:39+02:00",
  ...
}
The second example uses the expanded form for specifying objects:
{
  ...
  "modified": {
    "@literal": "2010-05-29T14:17:39+02:00",
    "@datatype": "dateTime"
  }
  ...
}
Both examples above would generate an object with the literal value of 2010-05-29T14:17:39+02:00 and the datatype of http://www.w3.org/2001/XMLSchema#dateTime.
The third example uses a built-in native JSON type, a number , to express a datatype:
{
  ...
  "@subject": "http://example.org/people#joebob",
  "age": 31
  ...
}
The example above would generate the following triple:
<http://example.org/people#joebob> <http://xmlns.com/foaf/0.1/age> "31"^^<http://www.w3.org/2001/XMLSchema#integer> .
A JSON-LD author can express multiple triples in a compact way by using arrays. If a subject has multiple values for the same property, the author may express each property as an array.
In JSON-LD, multiple objects on a property are not ordered. This is because typically graphs are not inherently ordered data structures. To see more on creating ordered collections in JSON-LD, see Lists .
{
  ...
  "@subject": "http://example.org/people#joebob",
  "nick": ["joe", "bob", "jaybee"],
  ...
}
The markup shown above would generate the following triples:
<http://example.org/people#joebob> <http://xmlns.com/foaf/0.1/nick> "joe" .
<http://example.org/people#joebob> <http://xmlns.com/foaf/0.1/nick> "bob" .
<http://example.org/people#joebob> <http://xmlns.com/foaf/0.1/nick> "jaybee" .
Multiple typed literals may also be expressed using the expanded form for objects:
{
  ...
  "@subject": "http://example.org/articles/8",
  "modified": [
    {
      "@literal": "2010-05-29T14:17:39+02:00",
      "@datatype": "dateTime"
    },
    {
      "@literal": "2010-05-30T09:21:28-04:00",
      "@datatype": "dateTime"
    }
  ]
  ...
}
The markup shown above would generate the following triples:
<http://example.org/articles/8> <http://purl.org/dc/terms/modified> "2010-05-29T14:17:39+02:00"^^<http://www.w3.org/2001/XMLSchema#dateTime> .
<http://example.org/articles/8> <http://purl.org/dc/terms/modified> "2010-05-30T09:21:28-04:00"^^<http://www.w3.org/2001/XMLSchema#dateTime> .
Expansion is the process of taking a JSON-LD document and applying a context such that all IRIs, datatypes, and literal values are expanded so that the context is no longer necessary. JSON-LD document expansion is typically used as a part of Framing or Normalization.
For example, assume the following JSON-LD input document:
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": "http://xmlns.com/foaf/0.1/homepage",
    "@coerce": {
      "@iri": "homepage"
    }
  },
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/"
}
Running the JSON-LD Expansion algorithm against the JSON-LD input document provided above would result in the following output:
{
  "http://xmlns.com/foaf/0.1/name": "Manu Sporny",
  "http://xmlns.com/foaf/0.1/homepage": {
    "@iri": "http://manu.sporny.org/"
  }
}
Compaction is the process of taking a JSON-LD document and applying a context such that the most compact form of the document is generated. JSON is typically expressed in a very compact, key-value format. That is, full IRIs are rarely used as keys. At times, a JSON-LD document may be received that is not in its most compact form. JSON-LD, via the API, provides a way to compact a JSON-LD document.
For example, assume the following JSON-LD input document:
{
  "http://xmlns.com/foaf/0.1/name": "Manu Sporny",
  "http://xmlns.com/foaf/0.1/homepage": {
    "@iri": "http://manu.sporny.org/"
  }
}
Additionally, assume the following developer-supplied JSON-LD context:
{
  "name": "http://xmlns.com/foaf/0.1/name",
  "homepage": "http://xmlns.com/foaf/0.1/homepage",
  "@coerce": {
    "@iri": "homepage"
  }
}
Running the JSON-LD Compaction algorithm given the context supplied above against the JSON-LD input document provided above would result in the following output:
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": "http://xmlns.com/foaf/0.1/homepage",
    "@coerce": {
      "@iri": "homepage"
    }
  },
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/"
}
The compaction algorithm also enables the developer to map any expanded format into an application-specific compacted format. While the context provided above mapped http://xmlns.com/foaf/0.1/name to name, it could have also mapped it to any arbitrary string provided by the developer.
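For example, a developer could compact the same expanded document with a context that maps the FOAF name IRI to an application-specific key; the fullName term below is an arbitrary, hypothetical choice:

{
  "@context": {
    "fullName": "http://xmlns.com/foaf/0.1/name"
  },
  "fullName": "Manu Sporny"
}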
A JSON-LD document is a representation of a directed graph. A single directed graph can have many different serializations, each expressing exactly the same information. Developers typically work with trees, represented as JSON objects. While mapping a graph to a tree can be done, the layout of the end result must be specified in advance. A Frame can be used by a developer on a JSON-LD document to specify a deterministic layout for a graph.
Framing is the process of taking a JSON-LD document, which expresses a graph of information, and applying a specific graph layout (called a Frame ).
The JSON-LD document below expresses a library, a book and a chapter:
{
  "@context": {
    "Book": "http://example.org/vocab#Book",
    "Chapter": "http://example.org/vocab#Chapter",
    "contains": "http://example.org/vocab#contains",
    "creator": "http://purl.org/dc/terms/creator",
    "description": "http://purl.org/dc/terms/description",
    "Library": "http://example.org/vocab#Library",
    "title": "http://purl.org/dc/terms/title",
    "@coerce": {
      "@iri": "contains"
    }
  },
  "@subject": [{
    "@subject": "http://example.com/library",
    "@type": "Library",
    "contains": "http://example.org/library/the-republic"
  }, {
    "@subject": "http://example.org/library/the-republic",
    "@type": "Book",
    "creator": "Plato",
    "title": "The Republic",
    "contains": "http://example.org/library/the-republic#introduction"
  }, {
    "@subject": "http://example.org/library/the-republic#introduction",
    "@type": "Chapter",
    "description": "An introductory chapter on The Republic.",
    "title": "The Introduction"
  }]
}
Developers typically like to operate on items in a hierarchical, tree-based fashion. Ideally, a developer would want the data above sorted into top-level libraries, then the books that are contained in each library, and then the chapters contained in each book. To achieve that layout, the developer can define the following frame :
{
  "@context": {
    "Book": "http://example.org/vocab#Book",
    "Chapter": "http://example.org/vocab#Chapter",
    "contains": "http://example.org/vocab#contains",
    "creator": "http://purl.org/dc/terms/creator",
    "description": "http://purl.org/dc/terms/description",
    "Library": "http://example.org/vocab#Library",
    "title": "http://purl.org/dc/terms/title"
  },
  "@type": "Library",
  "contains": {
    "@type": "Book",
    "contains": {
      "@type": "Chapter"
    }
  }
}
When the framing algorithm is run against the previously defined JSON-LD document, paired with the frame above, the following JSON-LD document is the end result:
{
  "@context": {
    "Book": "http://example.org/vocab#Book",
    "Chapter": "http://example.org/vocab#Chapter",
    "contains": "http://example.org/vocab#contains",
    "creator": "http://purl.org/dc/terms/creator",
    "description": "http://purl.org/dc/terms/description",
    "Library": "http://example.org/vocab#Library",
    "title": "http://purl.org/dc/terms/title"
  },
  "@subject": "http://example.org/library",
  "@type": "Library",
  "contains": {
    "@subject": "http://example.org/library/the-republic",
    "@type": "Book",
    "creator": "Plato",
    "title": "The Republic",
    "contains": {
      "@subject": "http://example.org/library/the-republic#introduction",
      "@type": "Chapter",
      "description": "An introductory chapter on The Republic.",
      "title": "The Introduction"
    }
  }
}
The JSON-LD framing algorithm allows developers to query by example and force a specific tree layout to a JSON-LD document.
JSON-LD has a number of features that provide functionality above and beyond the core functionality described above. The following sections outline the features that are specific to JSON-LD.
Vocabulary terms in Linked Data documents may draw from a number of different Web vocabularies. At times, declaring every single term that a document uses can require the developer to declare tens, if not hundreds, of potential vocabulary terms that may be used across an application. This is a concern for at least three reasons: the first is the cognitive load on the developer, the second is the serialized size of the context, and the third is future-proofing application contexts. In order to address these issues, the concept of a prefix mechanism is introduced.
A prefix is a compact way of expressing a base IRI to a Web Vocabulary. Generally, these prefixes are used by concatenating the prefix and a term, separated by a colon (:). The prefix is a short string that identifies a particular Web vocabulary. For example, the prefix foaf may be used as a short-hand for the Friend-of-a-Friend Web Vocabulary, which is identified using the IRI http://xmlns.com/foaf/0.1/. A developer may append any of the FOAF Vocabulary terms to the end of the prefix to specify a short-hand version of the full IRI for the vocabulary term. For example, foaf:name would be expanded out to the IRI http://xmlns.com/foaf/0.1/name. Instead of having to remember and type out the entire IRI, the developer can instead use the prefix in their JSON-LD markup.

The ability to use prefixes reduces the need for developers to declare every vocabulary term that they intend to use in the JSON-LD context. This reduces document serialization size because every vocabulary term need not be declared in the context. Prefixes also reduce the cognitive load on the developer. It is far easier to remember foaf:name than it is to remember http://xmlns.com/foaf/0.1/name. The use of prefixes also ensures that a context document does not have to be updated in lock-step with an externally defined Web Vocabulary. Without prefixes, a developer would need to keep their application context terms in lock-step with the externally defined Web Vocabulary. Rather, by just declaring the Web Vocabulary prefix, one can use new terms as they're declared without having to update the application's JSON-LD context.
Consider the following example:
{
  "@context": {
    "dc": "http://purl.org/dc/elements/1.1/",
    "ex": "http://example.org/vocab#"
  },
  "@subject": "http://example.org/library",
  "@type": "ex:Library",
  "ex:contains": {
    "@subject": "http://example.org/library/the-republic",
    "@type": "ex:Book",
    "dc:creator": "Plato",
    "dc:title": "The Republic",
    "ex:contains": {
      "@subject": "http://example.org/library/the-republic#introduction",
      "@type": "ex:Chapter",
      "dc:description": "An introductory chapter on The Republic.",
      "dc:title": "The Introduction"
    }
  }
}
In this example, two different vocabularies are referred to using prefixes. Those prefixes are then used as type and property values using the prefix:term notation. Prefixes, also known as CURIEs, are defined more formally in RDFa Core 1.1, Section 6 "CURIE Syntax Definition" [ RDFA-CORE ]. JSON-LD does not support the square-bracketed CURIE syntax, as the mechanism is not required to disambiguate IRIs in a JSON-LD document like it is in HTML documents.
JSON is capable of expressing typed information such as doubles, integers, and boolean values. As demonstrated below, JSON-LD utilizes that information to create typed literals:
{
  ...
  // The following two values are automatically converted to a type of xsd:double
  // and both values are equivalent to each other.
  "measure:cups": 5.3,
  "measure:cups": 5.3e0,
  // The following value is automatically converted to a type of xsd:double as well
  "space:astronomicUnits": 6.5e73,
  // The following value should never be converted to a language-native type
  "measure:stones": { "@literal": "4.8", "@datatype": "xsd:decimal" },
  // This value is automatically converted to having a type of xsd:integer
  "chem:protons": 12,
  // This value is automatically converted to having a type of xsd:boolean
  "sensor:active": true,
  ...
}
When dealing with a number of modern programming languages, including JavaScript ECMA-262, there is no distinction between xsd:decimal and xsd:double values. That is, the number 5.3 and the number 5.3e0 are treated as if they were the same. When converting from JSON-LD to a language-native format and back, datatype information is lost in a number of these languages. Thus, one could say that 5.3 is an xsd:decimal and 5.3e0 is an xsd:double in JSON-LD, but when both values are converted to a language-native format the datatype difference between the two is lost because the machine-level representation will almost always be a double. Implementers should be aware of this potential round-tripping issue between xsd:decimal and xsd:double. Specifically, objects with a datatype of xsd:decimal must not be converted to a language-native type.
JSON-LD supports the coercion of values to particular data types. Type coercion allows someone deploying JSON-LD to coerce the incoming or outgoing types to the proper data type based on a mapping of data type IRIs to property types. Using type coercion, one may convert simple JSON data to properly typed RDF data.
The example below demonstrates how a JSON-LD author can coerce values to plain literals, typed literals, and IRIs.
{
  "@context": {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "xsd": "http://www.w3.org/2001/XMLSchema#",
    "name": "http://xmlns.com/foaf/0.1/name",
    "age": "http://xmlns.com/foaf/0.1/age",
    "homepage": "http://xmlns.com/foaf/0.1/homepage",
    "@coerce": {
      "xsd:integer": "age",
      "@iri": "homepage"
    }
  },
  "name": "John Smith",
  "age": "41",
  "homepage": "http://example.org/home/"
}
The example above would generate the following triples:
_:bnode1 <http://xmlns.com/foaf/0.1/name> "John Smith" .
_:bnode1 <http://xmlns.com/foaf/0.1/age> "41"^^<http://www.w3.org/2001/XMLSchema#integer> .
_:bnode1 <http://xmlns.com/foaf/0.1/homepage> <http://example.org/home/> .
Object chaining is a JSON-LD feature that allows an author to use the definition of JSON-LD objects as property values. This is a commonly used mechanism for creating a parent-child relationship between two subject s.
The example shows two subjects related by a property from the first subject:
{
  ...
  "name": "Manu Sporny",
  "knows": {
    "@type": "Person",
    "name": "Gregg Kellogg"
  }
  ...
}
An object definition, like the one used above, may be used as a JSON value at any point in JSON-LD.
At times, it becomes necessary to be able to express information without being able to specify the subject. Typically, this type of node is called an unlabeled node or a blank node. In JSON-LD, unlabeled node identifiers are automatically created if a subject is not specified using the @subject keyword. However, authors may provide identifiers for unlabeled nodes by using the special _ (underscore) prefix. This allows the node to be referenced locally within the document, but not in an external document.
{
  ...
  "@subject": "_:foo",
  ...
}
The example above would set the subject to _:foo, which can then be used later on in the JSON-LD markup to refer back to the unlabeled node. This practice, however, is usually frowned upon when generating Linked Data. If a developer finds that they refer to the unlabeled node more than once, they should consider naming the node using a resolvable IRI.
JSON-LD allows all of the syntax keywords, except for @context, to be aliased. This feature allows more legacy JSON content to be supported by JSON-LD. It also allows developers to design domain-specific implementations using only the JSON-LD context.
{
  "@context": {
    "url": "@subject",
    "a": "@type",
    "name": "http://schema.org/name"
  },
  "url": "http://example.com/about#gregg",
  "a": "http://schema.org/Person",
  "name": "Gregg Kellogg"
}
In the example above, the @subject and @type keywords have been given the aliases url and a, respectively.
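Because the aliases are declared entirely within the context, the markup above carries the same meaning as the following sketch, which uses the @subject and @type keywords directly (shown here only to illustrate the equivalence):

{
  "@context": {
    "name": "http://schema.org/name"
  },
  "@subject": "http://example.com/about#gregg",
  "@type": "http://schema.org/Person",
  "name": "Gregg Kellogg"
}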
Normalization is the process of taking JSON-LD input and performing a deterministic transformation on that input that results in a JSON-LD output that any conforming JSON-LD processor would have generated given the same input. The problem is a fairly difficult technical problem to solve because it requires a directed graph to be ordered into a set of nodes and edges in a deterministic way. This is easy to do when all of the nodes have unique names, but very difficult to do when some of the nodes are not labeled.
Normalization is useful when comparing two graphs against one another, when generating a detailed list of differences between two graphs, and when generating a cryptographic digital signature for information contained in a graph or when generating a hash of the information contained in a graph.
The example below is an un-normalized JSON-LD document:
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": "http://xmlns.com/foaf/0.1/homepage",
    "xsd": "http://www.w3.org/2001/XMLSchema#",
    "@coerce": {
      "@iri": ["homepage"]
    }
  },
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/"
}
The example below is the normalized form of the JSON-LD document above:
Whitespace is used below to aid readability. The normalization algorithm for JSON-LD removes all unnecessary whitespace in the fully normalized form.
[{
  "@subject": {
    "@iri": "_:c14n0"
  },
  "http://xmlns.com/foaf/0.1/homepage": {
    "@iri": "http://manu.sporny.org/"
  },
  "http://xmlns.com/foaf/0.1/name": "Manu Sporny"
}]
Notice how all of the terms have been expanded and sorted in alphabetical order. Also, notice how the subject has been labeled with a blank node identifier. Normalization ensures that any arbitrary graph containing exactly the same information would be normalized to exactly the same form shown above.
This API provides a clean mechanism that enables developers to convert JSON-LD data into a variety of output formats that are easier to work with in various programming languages. If a JSON-LD API is provided in a programming environment, the entirety of the following API must be implemented.
[NoInterfaceObject]
interface JsonLdProcessor {
    object expand (object input, optional object? context) raises (InvalidContext);
    object compact (object input, optional object? context) raises (InvalidContext, ProcessingError);
    object frame (object input, object frame, object options) raises (InvalidFrame);
    object normalize (object input, optional object? context) raises (InvalidContext);
    object triples (object input, JsonLdTripleCallback tripleCallback, optional object? context) raises (InvalidContext);
};
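The following sketch illustrates how the interface above might be invoked from JavaScript. How a processor instance is obtained is implementation-specific; the processor variable and the example context below are assumptions made purely for illustration, not part of this specification.

// A small JSON-LD document used as input
var doc = {
  "@context": { "name": "http://xmlns.com/foaf/0.1/name" },
  "name": "Manu Sporny"
};

try {
  // Expand the document so that all terms become full IRIs
  var expanded = processor.expand(doc);

  // Compact the expanded document again using a developer-supplied context
  var compacted = processor.compact(expanded, {
    "name": "http://xmlns.com/foaf/0.1/name"
  });
} catch (e) {
  // InvalidContext or ProcessingError exceptions surface here
  console.log("JSON-LD processing failed: " + e);
}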
compact
Compacts the given input according to the steps in the Compaction Algorithm. The input must be copied, compacted and returned if there are no errors. If the compaction fails, an appropriate exception must be thrown.
Parameter | Type | Nullable | Optional | Description
---|---|---|---|---
input | object | ✘ | ✘ | The JSON-LD object to perform compaction on.
context | object | ✔ | ✔ | The base context to use when compacting the input.
Exception | Description
---|---
InvalidContext |
ProcessingError |

Return type: object
expand
Expands the given input according to the steps in the Expansion Algorithm. The input must be copied, expanded and returned if there are no errors. If the expansion fails, an appropriate exception must be thrown.
Parameter | Type | Nullable | Optional | Description
---|---|---|---|---
input | object | ✘ | ✘ | The JSON-LD object to copy and perform the expansion upon.
context | object | ✔ | ✔ | An external context to use additionally to the context embedded in input when expanding the input.
Exception | Description
---|---
InvalidContext |

Return type: object
frame
Frames the given input using the frame according to the steps in the Framing Algorithm. The input is used to build the framed output and is returned if there are no errors. If there are no matches for the frame, null must be returned. Exceptions must be thrown if there are errors.
Parameter | Type | Nullable | Optional | Description
---|---|---|---|---
input | object | ✘ | ✘ | The JSON-LD object to perform framing on.
frame | object | ✘ | ✘ | The frame to use when re-arranging the data.
options | object | ✘ | ✘ | A set of options that will affect the framing algorithm.
Exception | Description
---|---
InvalidFrame |

Return type: object
normalize
Normalizes the given input according to the steps in the Normalization Algorithm. The input must be copied, normalized and returned if there are no errors. If the normalization fails, null must be returned.
Parameter | Type | Nullable | Optional | Description
---|---|---|---|---
input | object | ✘ | ✘ | The JSON-LD object to perform normalization upon.
context | object | ✔ | ✔ | An external context to use additionally to the context embedded in input when expanding the input.
Exception | Description
---|---
InvalidContext |

Return type: object
triples
Processes the given input according to the RDF Conversion Algorithm, calling the provided tripleCallback for each triple generated.
Parameter | Type | Nullable | Optional | Description
---|---|---|---|---
input | object | ✘ | ✘ | The JSON-LD object to process when outputting triples.
tripleCallback | JsonLdTripleCallback | ✘ | ✘ | A callback that is called whenever a processing error occurs on the given input. This callback should be aligned with the RDF API.
context | object | ✔ | ✔ | An external context to use additionally to the context embedded in input when expanding the input.
Exception | Description
---|---
InvalidContext |

Return type: object
The JsonLdTripleCallback is called whenever the processor generates a triple during the triples() call.
[NoInterfaceObject Callback]
interface JsonLdTripleCallback {
void triple (DOMString subject, DOMString property, DOMString objectType, DOMString object, DOMString? datatype, DOMString? language);
};
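A minimal JavaScript sketch of such a callback is shown below. How the callback is registered is implementation-specific; the processor and doc variables are assumptions made for illustration, and triples() is the JsonLdProcessor method defined above.

// A simple JsonLdTripleCallback implementation that prints each generated
// triple in an N-Triples-like form.
var callback = {
  triple: function (subject, property, objectType, object, datatype, language) {
    if (objectType === "IRI") {
      console.log("<" + subject + "> <" + property + "> <" + object + "> .");
    } else {
      // literal objects may carry an optional datatype or language tag
      console.log("<" + subject + "> <" + property + "> \"" + object + "\"" +
        (datatype ? "^^<" + datatype + ">" : "") +
        (language ? "@" + language : "") + " .");
    }
  }
};

processor.triples(doc, callback);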
triple
Parameter | Type | Nullable | Optional | Description
---|---|---|---|---
subject | DOMString | ✘ | ✘ | The subject IRI that is associated with the triple.
property | DOMString | ✘ | ✘ | The property IRI that is associated with the triple.
objectType | DOMString | ✘ | ✘ | The type of object that is associated with the triple. Valid values are IRI and literal.
object | DOMString | ✘ | ✘ | The object value associated with the subject and the property.
datatype | DOMString | ✔ | ✘ | The datatype associated with the object.
language | DOMString | ✔ | ✘ | The language associated with the object in BCP47 format.

Return type: void
All algorithms described in this section are intended to operate on language-native data structures. That is, the serialization to a text-based JSON document isn't required as input or output to any of these algorithms and language-native data structures must be used where applicable.
JSON-LD specifies a number of syntax tokens and keywords that are used in all algorithms described in this section: @context, @base, @vocab, @coerce, @literal, @iri, @language, @datatype, : (the separator between a prefix and a suffix), @subject, and @type. A local context is specified within a JSON document using the @context keyword.
Processing of a JSON-LD data structure is managed recursively. During processing, each rule is applied using information provided by the active context. Processing begins by pushing a new processor state onto the processor state stack and initializing the active context with the initial context. If a local context is encountered, information from the local context is merged into the active context.
The active context is used for expanding keys and values of a JSON object (or elements of a list (see List Processing )).
A local context is identified within a JSON object having a key of @context with a string or JSON object value. When processing a local context, special processing rules apply:
- If the local context has a @base key, it must have a value of a simple string with the lexical form of an absolute IRI. Add the base mapping to the local context. (Note: Turtle allows @base to be relative. If we did this, we would have to add IRI Expansion.)
- If the local context has a @vocab key, it must have a value of a simple string with the lexical form of an absolute IRI. Add the vocabulary mapping to the local context after performing IRI Expansion on the associated value.
- If the local context has a @coerce key, it must have a value of a JSON object. Add the @coerce mapping to the local context, performing IRI Expansion on the associated value(s). Merge the local context's @coerce mapping into the active context's @coerce mapping as described below.
Map each key-value pair in the local context's @coerce mapping into the active context's @coerce mapping, overwriting any duplicate values in the active context's @coerce mapping. The @coerce mapping has either a single prefix:term value, a single term value, or an array of prefix:term or term values. When merging with an existing mapping in the active context, map all prefix and term values to array form and replace with the union of the value from the local context and the value of the active context. If the result is an array with a single value, the processor may represent this as a string value.
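A minimal JavaScript sketch of this @coerce merging behavior is shown below; it assumes, purely for illustration, that both mappings are plain objects whose values are strings or arrays of strings.

// Merge a local context's @coerce mapping into the active context's mapping.
function mergeCoerce(activeCoerce, localCoerce) {
  var toArray = function (value) {
    return Array.isArray(value) ? value : [value];
  };
  for (var datatype in localCoerce) {
    var merged = toArray(localCoerce[datatype]);
    if (activeCoerce[datatype] !== undefined) {
      // form the union of the existing values and the local context's values
      toArray(activeCoerce[datatype]).forEach(function (term) {
        if (merged.indexOf(term) === -1) {
          merged.push(term);
        }
      });
    }
    // a single-valued result may be represented as a plain string
    activeCoerce[datatype] = merged.length === 1 ? merged[0] : merged;
  }
  return activeCoerce;
}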
The initial context is initialized as follows:
- @base is set using the location of the document being processed (document-location below).
- @coerce is set with a single mapping from @iri to @type.
{
  "@base": document-location,
  "@coerce": {
    "@iri": "@type"
  }
}
Keys and some values are evaluated to produce an IRI. This section defines an algorithm for transforming a value representing an IRI into an actual IRI.
IRIs may be represented as an absolute IRI, a term, a prefix:term construct, or as a value relative to @base or @vocab.
The algorithm for generating an IRI is:
- If the value being processed is a property (i.e., a key, or a value in the @coerce mapping) and the active context has a @vocab mapping, join the mapped value to the suffix using textual concatenation.
- If the active context has a @base mapping, join the mapped value to the suffix using the method described in [ RFC3986 ].
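A simplified JavaScript sketch of this expansion behavior follows. It assumes, for illustration only, a flat context object in which term and prefix mappings, @vocab, and @base are all plain string entries; the real algorithm operates on the richer active context and omits several corner cases handled there.

// Simplified sketch of IRI expansion.
function expandIri(value, context, isProperty) {
  if (context[value]) {
    return context[value];               // a term defined in the context
  }
  var idx = value.indexOf(":");
  if (idx !== -1) {
    var prefix = value.substr(0, idx);
    var suffix = value.substr(idx + 1);
    if (prefix === "_") {
      return value;                      // an unlabeled (blank) node identifier
    }
    if (context[prefix]) {
      return context[prefix] + suffix;   // prefix:suffix concatenation
    }
    return value;                        // already an absolute IRI
  }
  // relative values: properties resolve against @vocab, others against @base
  if (isProperty && context["@vocab"]) {
    return context["@vocab"] + value;
  }
  if (!isProperty && context["@base"]) {
    return context["@base"] + value;     // the full algorithm uses RFC 3986 joining
  }
  return value;
}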
Some keys and values are expressed using IRIs. This section defines an algorithm for transforming an IRI to a compact IRI using the terms and prefixes specified in the local context.
The algorithm for generating a compacted IRI is:
Some values in JSON-LD can be expressed in a compact form. These values are required to be expanded at times when processing JSON-LD documents.
The algorithm for expanding a value is:

- If the associated coercion rule is @iri, expand the value by adding a new key-value pair where the key is @iri and the value is the expanded IRI according to the IRI Expansion rules.
- If the associated coercion rule specifies a datatype, expand the value by adding two new key-value pairs. The first key-value pair will be @literal and the unexpanded value. The second key-value pair will be @datatype and the associated coercion datatype, expanded according to the IRI Expansion rules.
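For example, assuming a context that coerces homepage to @iri and modified to xsd:dateTime (reusing keys from earlier examples), the two rules above would expand the compact values as follows:

// in compact form, with @coerce rules in effect
"homepage": "http://manu.sporny.org/",
"modified": "2010-05-29T14:17:39+02:00"

// after value expansion
"homepage": { "@iri": "http://manu.sporny.org/" },
"modified": {
  "@literal": "2010-05-29T14:17:39+02:00",
  "@datatype": "http://www.w3.org/2001/XMLSchema#dateTime"
}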
Some values, such as IRIs and typed literals, may be expressed in an expanded form in JSON-LD. These values are required to be compacted at times when processing JSON-LD documents.
The algorithm for compacting a value is:

- If the expanded value contains an @iri key, the compacted value is the value associated with the @iri key, processed according to the IRI Compaction steps.
- If the expanded value is a typed literal whose datatype is coerced for the associated key, the compacted value is the value associated with the @literal key.
This algorithm is a work in progress, do not implement it.
As stated previously, expansion is the process of taking a JSON-LD input and expanding all IRIs and typed literals to their fully-expanded form. The output will not contain a single context declaration and will have all IRIs and typed literals fully expanded.
This algorithm is a work in progress, do not implement it.
As stated previously, compaction is the process of taking a JSON-LD input and compacting all IRIs using a given context. The output will contain a single top-level context declaration and will only use term s and prefix es and will ensure that all typed literals are fully compacted.
This algorithm is a work in progress, do not implement it.
A JSON-LD document is a representation of a directed graph. A single directed graph can have many different serializations, each expressing exactly the same information. Developers typically don't work directly with graphs, but rather, prefer trees when dealing with JSON. While mapping a graph to a tree can be done, the layout of the end result must be specified in advance. This section defines an algorithm for mapping a graph to a tree given a frame .
The framing algorithm takes JSON-LD input that has been normalized according to the Normalization Algorithm ( normalized input ), an input frame that has been expanded according to the Expansion Algorithm ( expanded frame ), and a number of options and produces JSON-LD output . The following series of steps is the recursive portion of the framing algorithm:
- If the expanded frame is empty or otherwise malformed, return null or throw an Invalid Frame Format exception, as appropriate.
- Add each matching item from the normalized input to the matches array and decrement the match limit by 1 if:
  - the expanded frame has an rdf:type that exists in the item's list of rdf:types (note: the rdf:type can be an array, but only one value needs to be in common between the item and the expanded frame for a match), or
  - the expanded frame does not have an rdf:type property, but every property in the expanded frame exists in the item.
- If the match frame contains an @embed keyword, set the object embed flag to its value. If the match frame contains an @explicit keyword, set the explicit inclusion flag to its value. Note: if the keyword exists, but the value is neither true nor false, set the associated flag to true.
- For each matching item:
  - If the object embed flag is cleared and the item has a @subject property, replace the item with the value of the @subject property.
  - If the object embed flag is set, the item has a @subject property, and its IRI is in the map of embedded subjects, throw a Duplicate Embed exception.
  - If the object embed flag is set, the item has a @subject property, and its IRI is not in the map of embedded subjects:
    - add the item to the map of embedded subjects, keyed by its @subject;
    - for each key-value pair in the item other than rdf:type:
      - if the value is an object with an @iri value that exists in the normalized input, replace the object in the recursion input list with a new object containing the @subject key where the value is the value of the @iri, and all of the other key-value pairs for that subject. Set the recursion match frame to the value associated with the match frame's key. Replace the value associated with the key by recursively calling this algorithm using the recursion input list and the recursion match frame as input, or null otherwise;
      - if the resulting value is null, process the omit missing properties flag:
        - if the match frame contains an @omitDefault keyword, set the omit missing properties flag to its value. Note: if the keyword exists, but the value is neither true nor false, set the associated flag to true;
        - if the @default keyword is set in the property frame, set the item's value to the value of @default.
  - If the JSON-LD output is null, set it to the item; otherwise, append the item to the JSON-LD output.

This algorithm is a work in progress, do not implement it.
Normalization is the process of taking JSON-LD input and performing a deterministic transformation on that input that results in all aspects of the graph being fully expanded and named in the JSON-LD output . The normalized output is generated in such a way that any conforming JSON-LD processor will generate identical output given the same input. The problem is a fairly difficult technical problem to solve because it requires a directed graph to be ordered into a set of nodes and edges in a deterministic way. This is easy to do when all of the nodes have unique names, but very difficult to do when some of the nodes are not labeled.
In time, there may be more than one normalization algorithm that will need to be identified. For identification purposes, this algorithm is named UGNA2011 .
- A node is considered labeled if it contains a key of @subject and the value is a string that is an IRI or a JSON object containing the key @iri and a value that is a string that is an IRI.
- Serialization labels begin with the letter s or c.

When performing the steps required by the normalization algorithm, it is helpful to track the many pieces of information in a data structure called the normalization state. Many of these pieces simply provide indexes into the graph. The information contained in the normalization state is described below.
- a list of all nodes whose label begins with _: and that have a path, via properties, that starts with the node reference.
- a list of all nodes whose label begins with _: and that have a path, via properties, that ends with the node reference.
- a labeling prefix: a string that begins with _:, is not used by any other node's label in the JSON-LD input, and does not start with the characters _:c14n. The prefix has two uses. First, it is used to temporarily name nodes during the normalization algorithm in a way that doesn't collide with the names that already exist as well as the names that will be generated by the normalization algorithm. Second, it will eventually be set to _:c14n to generate the final, deterministic labels for nodes in the graph. This prefix will be concatenated with the labeling counter to produce a node label. For example, _:j8r3k is a proper initial value for the labeling prefix.
- a labeling counter: a counter that is concatenated with the labeling prefix to produce node labels; it is initialized to 1.

The normalization algorithm expands the JSON-LD input, flattens the data structure, and creates an initial set of names for all nodes in the graph. The flattened data structure is then processed by a node labeling algorithm in order to get a fully expanded and named list of nodes, which is then sorted. The result is a deterministically named and ordered list of graph nodes.
- If a node does not have a label, add a key-value pair where the key is @subject and the value is the concatenation of the labeling prefix and the string value of the labeling counter. Increment the labeling counter.
- Replace each embedded node with a node reference: a JSON object with a single key @iri whose value is the value of the @subject key in the node.
- If a node's label does not already start with _:c14n, relabel the node using the Node Relabeling Algorithm.
- Label each node that has a @subject key associated with a value starting with _: according to the steps in the Deterministic Labeling Algorithm.

This algorithm renames a node by generating a unique new label and updating all references to that label in the node state map. The old label and the normalization state must be given as an input to the algorithm. The old label is the current label of the node that is to be relabeled.
The node relabeling algorithm is as follows:
- If the labeling prefix is _:c14n and the old label begins with _:c14n, then return, as the node has already been renamed.
The deterministic labeling algorithm takes the normalization state and produces a list of finished nodes that is sorted and contains deterministically named and expanded nodes from the graph.
- Set the labeling prefix to _:c14n, the labeling counter to 1, and the list of finished nodes to an empty array, and create an empty array, the list of unfinished nodes.
- If a node reference's label does not start with _:, then put the node reference in the list of finished nodes.
- If a node reference's label starts with _:, then put the node reference in the list of unfinished nodes.
- Remove each node labeled with _:c14n from the list of unfinished nodes and add it to the list of finished nodes.

The shallow comparison algorithm takes two unlabeled nodes, alpha and beta, as input and determines which one should come first in a sorted list. The following algorithm determines the steps that are executed in order to determine the node that should come first in a list:
- The node whose label does not start with _: is first.
- If both labels start with _:, then the node associated with the lexicographically lesser label is first.
- The node whose label starts with _:c14n is first.
The object comparison algorithm is designed to compare two graph node property values, alpha and beta , against the other. The algorithm is useful when sorting two lists of graph node properties.
- The value associated with a @literal is first.
- The value associated with a @datatype is first.
- The value associated with a @language is first.
- The value associated with an @iri is first.
The deep comparison algorithm is used to compare the difference between two nodes, alpha and beta . A deep comparison takes the incoming and outgoing node edges in a graph into account if the number of properties and value of those properties are identical. The algorithm is helpful when sorting a list of nodes and will return whichever node should be placed first in a list if the two nodes are not truly equivalent.
When performing the steps required by the deep comparison algorithm, it is helpful to track state information about mappings. The information contained in a mapping state is described below.
- the mapping count, which is set to 1.
- the current serialization label, which is set to s1, and its index, which is set to 0.

The deep comparison algorithm is as follows:
- Serialize alpha, passing the outgoing direction to the algorithm as an input.
- Serialize beta, passing the outgoing direction to the algorithm as an input.
- Serialize alpha, passing the incoming direction to the algorithm as an input.
- Serialize beta, passing the incoming direction to the algorithm as an input.
The node serialization algorithm takes a node state, a mapping state, and a direction (either outgoing direction or incoming direction) as inputs and generates a deterministic serialization for the node reference.
- Set the processed flag to true.
- Use the outgoing list if the direction is the outgoing direction, and the incoming list otherwise; if the label starts with _:, it is the target node label:
- Set the maximum serialization combinations to 1 or the length of the adjacent unserialized labels list, whichever is greater.
- While the maximum serialization combinations is greater than 0, perform the Combinatorial Serialization Algorithm, passing the node state, the mapping state for the first iteration and a copy of it for each subsequent iteration, the generated serialization label, the direction, the adjacent serialized labels map, and the adjacent unserialized labels list. Decrement the maximum serialization combinations by 1 for each iteration.
The algorithm generates a serialization label given a label and a mapping state and returns the serialization label .
- If the label starts with _:c14n, the serialization label is the letter c followed by the number that follows _:c14n in the label.
- Otherwise, the serialization label is the letter s followed by the string value of the mapping count. Increment the mapping count by 1.

The combinatorial serialization algorithm takes a node state, a mapping state, a serialization label, a direction, an adjacent serialized labels map, and an adjacent unserialized labels list as inputs and generates the lexicographically least serialization of nodes relating to the node reference.
- Set the remaining combinations to 1 or the length of the adjacent unserialized labels list, whichever is greater.
- While the remaining combinations are greater than 0, decrementing by 1 for each iteration:
- If the direction is the outgoing direction, then the directed serialization refers to the outgoing serialization and the directed serialization map refers to the outgoing serialization map; otherwise, it refers to the incoming serialization and the directed serialization map refers to the incoming serialization map. Compare the serialization string to the directed serialization according to the Serialization Comparison Algorithm, and continue if the serialization string is less than or equal to the directed serialization.

The serialization comparison algorithm takes two serializations, alpha and beta, and returns either which of the two is less than the other or that they are equal.
The mapping serialization algorithm incrementally updates the serialization string in a mapping state .
- Append the _ character and the serialization key to the serialization string.
- Set the processed flag to true.
- Push 0 onto the key stack.

The label serialization algorithm serializes information about a node that has been assigned a particular serialization label.
[
character
to
the
label
serialization
.
@subject
property.
The
keys
should
be
processed
in
lexicographical
order
and
their
associated
values
should
be
processed
in
the
order
produced
by
the
Object
Comparison
Algorithm
:<
KEY
>
where
KEY
is
the
current
key.
Append
string
to
the
label
serialization
.
If the object contains an @iri key with a value that starts with _:, set the object string to the value _:. If the value does not start with _:, build the object string using the pattern <IRI> where IRI is the value associated with the @iri key.
If the object contains a @literal key and a @datatype key, build the object string using the pattern "LITERAL"^^<DATATYPE> where LITERAL is the value associated with the @literal key and DATATYPE is the value associated with the @datatype key.
If the object contains a @literal key and a @language key, build the object string using the pattern "LITERAL"@LANGUAGE where LITERAL is the value associated with the @literal key and LANGUAGE is the value associated with the @language key.
"
LITERAL
"
where
LITERAL
is
the
value
associated
with
the
current
key.
Append the | separator character to the label serialization.
Append the ] character to the label serialization.
Append the [ character to the label serialization.
Build a string using the pattern <PROPERTY> <REFERER> where PROPERTY is the property associated with the incoming reference and REFERER is either the subject of the node referring to the label in the incoming reference, or _: if that subject begins with _:.
Append the | separator character to the label serialization.
Append the ] character to the label serialization.
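The object string patterns described above can be illustrated with a small JavaScript helper. This is a non-normative sketch; the function name is an assumption, and the input is assumed to be an already-expanded JSON-LD value (an object using @iri, @literal, @datatype, or @language keys, or a plain string).

// Non-normative sketch of building the object string patterns described above.
function buildObjectString(value) {
  if (typeof value === 'object' && value !== null) {
    if ('@iri' in value) {
      // blank node identifiers collapse to "_:", other IRIs use <IRI>
      return value['@iri'].indexOf('_:') === 0 ? '_:' : '<' + value['@iri'] + '>';
    }
    if ('@literal' in value) {
      if ('@datatype' in value) {
        return '"' + value['@literal'] + '"^^<' + value['@datatype'] + '>';
      }
      if ('@language' in value) {
        return '"' + value['@literal'] + '"@' + value['@language'];
      }
      return '"' + value['@literal'] + '"';
    }
  }
  return '"' + value + '"';
}

console.log(buildObjectString({ "@iri": "_:bnode1" }));            // _:
console.log(buildObjectString({ "@iri": "http://example.com/" })); // <http://example.com/>
console.log(buildObjectString({ "@literal": "5.3",
  "@datatype": "http://www.w3.org/2001/XMLSchema#double" }));      // "5.3"^^<http://www.w3.org/2001/XMLSchema#double>
console.log(buildObjectString("joe"));                              // "joe"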
When normalizing xsd:double values, implementers must ensure that the normalized value is a string. In order to generate the string from a double value, output equivalent to the printf("%1.6e", value) function in C must be used, where "%1.6e" is the string formatter and value is the value to be converted.
To convert a double value in JavaScript, implementers can use the following snippet of code:
// the variable 'value' below is the JavaScript native double value that is to be converted
(value).toExponential(6).replace(/(e(?:\+|-))([0-9])$/, '$10$2')
When data needs to be normalized, JSON-LD authors should not use values that are going to undergo automatic conversion. This is due to the lossy nature of xsd:double values.
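For example, wrapping the conversion shown above in a small helper and applying it to a few native numbers yields the following canonical strings (illustrative output; the helper name is not part of this specification):

// Illustrative use of the conversion shown above.
function canonicalDouble(value) {
  return value.toExponential(6).replace(/(e(?:\+|-))([0-9])$/, '$10$2');
}

console.log(canonicalDouble(2.3));       // "2.300000e+00"
console.log(canonicalDouble(12345.678)); // "1.234568e+04"
console.log(canonicalDouble(0.0000123)); // "1.230000e-05"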
Some JSON serializers, such as PHP's native implementation, backslash-escape the forward slash character. For example, the value http://example.com/ would be serialized as http:\/\/example.com\/ in some versions of PHP. This is problematic when generating a byte stream for processes such as normalization. There is no need to backslash-escape forward slashes in JSON-LD. To aid interoperability between JSON-LD processors, a JSON-LD serializer must not backslash-escape forward slashes.
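For instance, JavaScript's built-in JSON serializer already produces the desired output; the check below merely illustrates the requirement and is not a normative test:

// Illustration: forward slashes must not be backslash-escaped.
var serialized = JSON.stringify({ "@iri": "http://example.com/" });
console.log(serialized);                       // {"@iri":"http://example.com/"}
console.log(serialized.indexOf('\\/') === -1); // true: no escaped forward slashes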
Round-tripping data can be problematic if we mix and match @coerce rules with JSON-native datatypes, like integers. Consider the following code example:
var myObj = {
  "@context": {
    "number": "http://example.com/vocab#number",
    "@coerce": { "xsd:nonNegativeInteger": "number" }
  },
  "number": 42
};

// Map the language-native object to JSON-LD
var jsonldText = jsonld.normalize(myObj);

// Convert the normalized object back to a JavaScript object
var myObj2 = jsonld.parse(jsonldText);
At this point, myObj2 and myObj will have different values for "number": myObj will hold the number 42, while myObj2 will hold the string "42". This type of data round-tripping error can bite developers. We are currently wondering whether a "coerce validation" step during parsing and normalization would be a good idea; it would prevent data round-tripping issues like the one mentioned above.
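One possible shape of such a check is sketched below. This is purely illustrative: the validateCoercion name and the small set of datatype checks are assumptions, not part of this specification.

// Non-normative sketch of a possible "coerce validation" step.
function validateCoercion(obj) {
  var coerce = (obj["@context"] && obj["@context"]["@coerce"]) || {};
  var errors = [];
  Object.keys(coerce).forEach(function (datatype) {
    [].concat(coerce[datatype]).forEach(function (term) {
      var value = obj[term];
      if (value === undefined) {
        return;
      }
      if (datatype === "xsd:nonNegativeInteger" &&
          !(typeof value === "number" && value >= 0 && value % 1 === 0)) {
        errors.push(term + " is not a non-negative integer: " + value);
      }
      // checks for other datatypes would be added here
    });
  });
  return errors;
}

var example = {
  "@context": { "@coerce": { "xsd:nonNegativeInteger": "number" } },
  "number": 42
};
console.log(validateCoercion(example)); // []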
A JSON-LD document may be converted to any other RDF-compatible document format using the algorithm specified in this section.
The JSON-LD Processing Model describes processing rules for extracting RDF from a JSON-LD document. Note that many uses of JSON-LD may not require generation of RDF.
The processing algorithm described in this section is provided in order to demonstrate how one might implement a JSON-LD to RDF processor. Conformant implementations are only required to produce the same type and number of triples during the output process and are not required to implement the algorithm exactly as described.
The RDF Conversion Algorithm is a work in progress.
This section is non-normative.
JSON-LD is intended to have an easy to parse grammar that closely models existing practice in using JSON for describing object representations. This allows the use of existing libraries for parsing JSON in a document-oriented fashion, or can allow for stream-based parsing similar to SAX.
As with other grammars used for describing Linked Data , a key concept is that of a resource . Resources may be of three basic types: IRI s, for describing externally named entities, BNodes , resources for which an external name does not exist, or is not known, and Literals, which describe terminal entities such as strings, dates and other representations having a lexical representation possibly including an explicit language or datatype.
Data described with JSON-LD may be considered to be the representation of a graph made up of subject and object resources related via a property resource. However, specific implementations may choose to operate on the document as a normal JSON description of objects having attributes.
The algorithm below is designed for in-memory implementations with random access to JSON object elements.
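Before the normative steps below, the overall shape of such a processor can be illustrated with a highly simplified, non-normative JavaScript sketch. It assumes an already-expanded JSON-LD object and handles only @subject, @type, plain values, and nested objects; context processing, coercion, typed literals, and lists are deliberately omitted.

// Highly simplified, non-normative sketch of triple generation.
var blankNodeCounter = 0;

function emitTriples(node, triples) {
  var subject = node["@subject"] || ("_:b" + blankNodeCounter++);
  Object.keys(node).forEach(function (key) {
    if (key === "@subject" || key === "@context") {
      return;
    }
    var property = (key === "@type")
      ? "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
      : key;
    [].concat(node[key]).forEach(function (value) {
      if (typeof value === "object" && value !== null) {
        // a nested node: recurse, then link to its subject
        triples.push([subject, property, emitTriples(value, triples)]);
      } else {
        triples.push([subject, property, value]);
      }
    });
  });
  return subject;
}

var triples = [];
emitTriples({
  "@subject": "http://example.org/people#john",
  "@type": "http://xmlns.com/foaf/0.1/Person",
  "http://xmlns.com/foaf/0.1/name": "John"
}, triples);
console.log(triples.length); // 2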
A conforming JSON-LD processor implementing RDF conversion must implement a processing algorithm that results in the same default graph that the following algorithm generates:
If the JSON object has a @context key, process the local context as described in Context.
If the JSON object has an @iri key, set the active object by performing IRI Expansion on the associated value. Generate a triple representing the active subject, the active property, and the active object. Return the active object to the calling location.
@iri really just behaves the same as @subject; consider consolidating them.
If the JSON object has a @literal key, set the active object to a literal value as follows:
as a typed literal if the object also contains a @datatype key, after performing IRI Expansion on the specified @datatype;
otherwise, as a plain literal. If the object contains a @language key, use its value to set the language of the plain literal.
If the JSON object has a @subject key, set the active object to the result of performing IRI Expansion on the associated value. Generate a triple representing the active subject, the active property, and the active object. Then set the active subject to the active object.
If the JSON object does not have a @subject key, set the active object to a newly generated blank node identifier. Generate a triple representing the active subject, the active property, and the active object. Set the active subject to the active object.
If the key is @type, set the active property to rdf:type.
If the value is a string and the associated property is the target of @iri coercion, set the active object by performing IRI Expansion on the string.
If the value is a number, generate a typed literal using a string representation of the value with datatype xsd:integer or xsd:double, depending on whether the value contains a fractional and/or an exponential component. Generate a triple using the active subject, the active property, and the generated typed literal.
If the value is true or false, generate a typed literal from the string representation of the value with datatype xsd:boolean, and generate a triple using the active subject, the active property, and the typed literal.
There are a few advanced concepts where it is not clear whether or not the JSON-LD specification is going to support the complexity necessary to support each concept. The entire section on Advanced Concepts should be considered as a set of discussion points; it is merely a list of possibilities where all of the benefits and drawbacks have not been explored.
When serializing an RDF graph that contains two or more sections of the graph which are entirely disjoint, one must use an array to express the graph as two graphs. This may not be acceptable to some authors, who would rather express the information as one graph. Since, by definition, disjoint graphs require there to be two top-level objects, JSON-LD utilizes a mechanism that allows disjoint graphs to be expressed using a single graph.
Assume the following RDF graph:
<http://example.org/people#john> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> .
<http://example.org/people#jane> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> .
Since the two subjects are entirely disjoint with one another, it is impossible to express the RDF graph above using a single JSON object .
In JSON-LD, one can use the @subject keyword to express disjoint graphs as a single graph:
{ "@context": { "Person": "http://xmlns.com/foaf/0.1/Person" }, "@subject": [ { "@subject": "http://example.org/people#john", "@type": "Person" }, { "@subject": "http://example.org/people#jane", "@type": "Person" } ] }
A disjoint graph could also be expressed like so:
[ { "@subject": "http://example.org/people#john", "@type": "http://xmlns.com/foaf/0.1/Person" }, { "@subject": "http://example.org/people#jane", "@type": "http://xmlns.com/foaf/0.1/Person" } ]
Warning: Using this serialisation format it is impossible to include @context given that the document's data structure is an array and not an object.
Because graphs do not describe ordering for links between nodes, in contrast to plain JSON, multi-valued properties in JSON-LD do not provide an ordering of the listed objects. For example, consider the following simple document:
{
...
"@subject": "http://example.org/people#joebob",
"nick": ["joe", "bob", "jaybee"],
...
}
This results in three triples being generated, each relating the subject to an individual object, with no inherent order.
To preserve the order of the objects, RDF-based languages such as [ TURTLE ] use the concept of an rdf:List (as described in [ RDF-SCHEMA ]). This uses a sequence of unlabeled nodes, each with a property describing a value and a next property pointing to the rest of the sequence, terminated by a nil value. Without specific syntactical support, this could be represented in JSON-LD as follows:
{ ... "@subject": "http://example.org/people#joebob", "nick": {, "@first": "joe", "@rest": { "@first": "bob", "@rest": { "@first": "jaybee", "@rest": "@nil" } } } }, ... }
As this notation is rather unwieldy and the notion of ordered collections is rather important in data modeling, it is useful to have specific language support. In JSON-LD, a list may be represented using the @list keyword as follows:
{
...
"@subject": "http://example.org/people#joebob",
"foaf:nick": {"@list": ["joe", "bob", "jaybee"]},
...
}
This describes the use of this array as being ordered, and order is maintained through normalization and RDF conversion. If every use of a given multi-valued property is a list, this may be abbreviated by adding an @coerce term:
{ "@context": { ... "@coerce": { "@list": ["foaf:nick"] } }, ... "@subject": "http://example.org/people#joebob", "foaf:nick": ["joe", "bob", "jaybee"], ... }
There is an ongoing discussion about this issue. One of the proposed solutions is to allow changing the default behaviour so that arrays are considered ordered lists by default.
TBD.
TBD.
To support RDF conversion of lists, the RDF Conversion Algorithm is updated as follows:
If the JSON object has a @list key and the value is an array, process the value as a list starting at Step 3a.
If the property is the target of @list coercion and the value is an array, process the value as a list starting at Step 3a.
List conversion generates an RDF Collection by linking each element of the list using rdf:first and rdf:rest, terminating the list with rdf:nil, using the following sequence:
If the list has no elements, generate a triple using the active subject, the active property, and rdf:nil.
Otherwise, process each element of the list using rdf:first as the active property.
Unless the element is the last element in the list, generate a new rest blank node identifier; otherwise use rdf:nil.
Generate a triple linking the current list node to the next using rdf:rest and the rest blank node identifier.
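The list conversion described above can be illustrated with a small, non-normative JavaScript sketch. The function name, the triple representation as arrays, and the blank node naming scheme are illustrative assumptions.

// Non-normative sketch of converting a JSON array into an RDF Collection
// using rdf:first, rdf:rest and rdf:nil, as described above.
var RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#";
var listBlankNodeCounter = 0;

function listToTriples(subject, property, elements, triples) {
  if (elements.length === 0) {
    triples.push([subject, property, RDF + "nil"]);
    return;
  }
  var current = "_:list" + listBlankNodeCounter++;
  triples.push([subject, property, current]);
  elements.forEach(function (element, i) {
    triples.push([current, RDF + "first", element]);
    var rest = (i === elements.length - 1)
      ? RDF + "nil"
      : "_:list" + listBlankNodeCounter++;
    triples.push([current, RDF + "rest", rest]);
    current = rest;
  });
}

var listTriples = [];
listToTriples("http://example.org/people#joebob",
              "http://xmlns.com/foaf/0.1/nick",
              ["joe", "bob", "jaybee"], listTriples);
console.log(listTriples.length); // 7 triples: one link plus first/rest for each element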
The JSON-LD markup examples below demonstrate how JSON-LD can be used to express semantic data marked up in other languages such as RDFa, Microformats, and Microdata. These sections are provided to show that JSON-LD is flexible in what it can express across different Linked Data approaches.
The following example describes three people with their respective names and homepages.
<div prefix="foaf: http://xmlns.com/foaf/0.1/">
   <ul>
      <li typeof="foaf:Person">
        <a rel="foaf:homepage" href="http://example.com/bob/" property="foaf:name">Bob</a>
      </li>
      <li typeof="foaf:Person">
        <a rel="foaf:homepage" href="http://example.com/eve/" property="foaf:name">Eve</a>
      </li>
      <li typeof="foaf:Person">
        <a rel="foaf:homepage" href="http://example.com/manu/" property="foaf:name">Manu</a>
      </li>
   </ul>
</div>
An example JSON-LD implementation is described below, however, there are other ways to mark-up this information such that the context is not repeated.
{ "@context": { "foaf": "http://xmlns.com/foaf/0.1/"}, "@subject": [ {"@subject": "_:bnode1", "@type": "foaf:Person", "foaf:homepage": "http://example.com/bob/", "foaf:name": "Bob""@subject": "_:bnode1", "@type": "foaf:Person", "foaf:homepage": "http://example.com/bob/", "foaf:name": "Bob" }, {"@subject": "_:bnode2", "@type": "foaf:Person", "foaf:homepage": "http://example.com/eve/", "foaf:name": "Eve""@subject": "_:bnode2", "@type": "foaf:Person", "foaf:homepage": "http://example.com/eve/", "foaf:name": "Eve" }, {"@subject": "_:bnode3", "@type": "foaf:Person", "foaf:homepage": "http://example.com/manu/", "foaf:name": "Manu""@subject": "_:bnode3", "@type": "foaf:Person", "foaf:homepage": "http://example.com/manu/", "foaf:name": "Manu" } ] }
The following example uses a simple Microformats hCard example to express how the Microformat is represented in JSON-LD.
<div class="vcard"> <a class="url fn" href="http://tantek.com/">Tantek Çelik</a> </div>
The representation of the hCard expresses the Microformat terms in the context and uses them directly for the url and fn properties. Also note that the Microformat to JSON-LD processor has generated the proper URL type for http://tantek.com.
{ "@context": {"vcard": "http://microformats.org/profile/hcard#vcard", "url": "http://microformats.org/profile/hcard#url", "fn": "http://microformats.org/profile/hcard#fn", "@coerce": { "@iri": "url" }"vcard": "http://microformats.org/profile/hcard#vcard", "url": "http://microformats.org/profile/hcard#url", "fn": "http://microformats.org/profile/hcard#fn", "@coerce": { "@iri": "url" } },"@subject": "_:bnode1", "@type": "vcard", "url": "http://tantek.com/", "fn": "Tantek Çelik""@subject": "_:bnode1", "@type": "vcard", "url": "http://tantek.com/", "fn": "Tantek Çelik" }
The Microdata example below expresses book information as a Microdata Work item.
<dl itemscope
    itemtype="http://purl.org/vocab/frbr/core#Work"
    itemid="http://purl.oreilly.com/works/45U8QJGZSQKDH8N">
 <dt>Title</dt>
 <dd><cite itemprop="http://purl.org/dc/terms/title">Just a Geek</cite></dd>
 <dt>By</dt>
 <dd><span itemprop="http://purl.org/dc/terms/creator">Wil Wheaton</span></dd>
 <dt>Format</dt>
 <dd itemprop="http://purl.org/vocab/frbr/core#realization"
     itemscope
     itemtype="http://purl.org/vocab/frbr/core#Expression"
     itemid="http://purl.oreilly.com/products/9780596007683.BOOK">
  <link itemprop="http://purl.org/dc/terms/type" href="http://purl.oreilly.com/product-types/BOOK">
  Print
 </dd>
 <dd itemprop="http://purl.org/vocab/frbr/core#realization"
     itemscope
     itemtype="http://purl.org/vocab/frbr/core#Expression"
     itemid="http://purl.oreilly.com/products/9780596802189.EBOOK">
  <link itemprop="http://purl.org/dc/terms/type" href="http://purl.oreilly.com/product-types/EBOOK">
  Ebook
 </dd>
</dl>
Note that the JSON-LD representation of the Microdata information stays true to the desires of the Microdata community to avoid contexts and instead refer to items by their full IRI.
[ {"@subject": "http://purl.oreilly.com/works/45U8QJGZSQKDH8N", "@type": "http://purl.org/vocab/frbr/core#Work", "http://purl.org/dc/terms/title": "Just a Geek", "http://purl.org/dc/terms/creator": "Whil Wheaton", "http://purl.org/vocab/frbr/core#realization": ["http://purl.oreilly.com/products/9780596007683.BOOK", "http://purl.oreilly.com/products/9780596802189.EBOOK"]"@subject": "http://purl.oreilly.com/works/45U8QJGZSQKDH8N", "@type": "http://purl.org/vocab/frbr/core#Work", "http://purl.org/dc/terms/title": "Just a Geek", "http://purl.org/dc/terms/creator": "Whil Wheaton", "http://purl.org/vocab/frbr/core#realization": ["http://purl.oreilly.com/products/9780596007683.BOOK", "http://purl.oreilly.com/products/9780596802189.EBOOK"] }, {"@subject": "http://purl.oreilly.com/products/9780596007683.BOOK", "@type": "http://purl.org/vocab/frbr/core#Expression", "http://purl.org/dc/terms/type": "http://purl.oreilly.com/product-types/BOOK""@subject": "http://purl.oreilly.com/products/9780596007683.BOOK", "@type": "http://purl.org/vocab/frbr/core#Expression", "http://purl.org/dc/terms/type": "http://purl.oreilly.com/product-types/BOOK" }, {"@subject": "http://purl.oreilly.com/products/9780596802189.EBOOK", "@type": "http://purl.org/vocab/frbr/core#Expression", "http://purl.org/dc/terms/type": "http://purl.oreilly.com/product-types/EBOOK""@subject": "http://purl.oreilly.com/products/9780596802189.EBOOK", "@type": "http://purl.org/vocab/frbr/core#Expression", "http://purl.org/dc/terms/type": "http://purl.oreilly.com/product-types/EBOOK" } ]
Developers would also benefit from being able to use other vocabularies automatically with their JSON APIs. There are over 200 Web Vocabulary Documents that are available for use on the Web today. Some of these vocabularies are:
You can use these vocabularies in combination, like so:
{ "@type": "foaf:Person", "foaf:name": "Manu Sporny", "foaf:homepage": "http://manu.sporny.org/", "sioc:avatar": "http://twitter.com/account/profile_image/manusporny" }
Developers can also specify their own Vocabulary documents by modifying the active context in-line using the @context keyword, like so:
{ "@context": { "myvocab": "http://example.org/myvocab#" }, "@type": "foaf:Person", "foaf:name": "Manu Sporny", "foaf:homepage": "http://manu.sporny.org/", "sioc:avatar": "http://twitter.com/account/profile_image/manusporny", "myvocab:personality": "friendly" }
The @context keyword is used to change how the JSON-LD processor evaluates key-value pairs. In this case, it was used to map one string ('myvocab') to another string, which is interpreted as an IRI. In the example above, the myvocab string is replaced with "http://example.org/myvocab#" when it is detected, so "myvocab:personality" would expand to "http://example.org/myvocab#personality".
This mechanism is a short-hand, called a Web Vocabulary prefix , and provides developers an unambiguous way to map any JSON value to RDF.
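The expansion can be illustrated with a small, non-normative JavaScript sketch; the function name is an assumption, and only the prefix-to-IRI substitution behaviour is taken from the text above.

// Non-normative sketch of Web Vocabulary prefix expansion.
function expandTerm(term, context) {
  var colon = term.indexOf(':');
  if (colon === -1) return term;
  var prefix = term.substring(0, colon);
  if (context[prefix] === undefined) return term;
  return context[prefix] + term.substring(colon + 1);
}

var context = { "myvocab": "http://example.org/myvocab#" };
console.log(expandTerm("myvocab:personality", context));
// "http://example.org/myvocab#personality"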
This section is included merely for standards community review and will be submitted to the Internet Engineering Steering Group if this specification becomes a W3C Recommendation.
The optional form parameter determines the serialization form of the JSON-LD document. Valid values are compacted, expanded, framed, and normalized. Other values are allowed, but must be pre-pended with an x- string until they are clearly defined by a stable specification. If no form is specified in an HTTP request header to a responding application, such as a Web server, the application may choose any form. If no form is specified for a receiving application, the document must not be assumed to take any particular form.
JSON-LD shares security issues common to the application/json MIME media type. In particular, a JSON-LD document should never be evaluated using a language-native eval() function. It is recommended that a conforming parser does not attempt to directly evaluate the JSON-LD serialization and instead purely parse the input into a language-native data structure.
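In JavaScript, this means using JSON.parse rather than eval(); the document text below is only an illustration:

// Parse JSON-LD with a real JSON parser rather than eval().
var text = '{ "@context": { "foaf": "http://xmlns.com/foaf/0.1/" }, "foaf:name": "Manu Sporny" }';

// Unsafe: executes arbitrary code if the input is malicious.
// var doc = eval('(' + text + ')');

// Safe: parses the input into a language-native data structure only.
var doc = JSON.parse(text);
console.log(doc["foaf:name"]); // "Manu Sporny"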
The editors would like to thank Mark Birbeck, who provided a great deal of the initial push behind the JSON-LD work via his work on RDFj, Dave Longley, Dave Lehn and Mike Johnson who reviewed, provided feedback, and performed several implementations of the specification, and Ian Davis, who created RDF/JSON. Thanks also to Nathan Rixham, Bradley P. Allen, Kingsley Idehen, Glenn McDonald, Alexandre Passant, Danny Ayers, Ted Thibodeau Jr., Olivier Grisel, Niklas Lindström, Markus Lanthaler, and Richard Cyganiak for their input on the specification. Another huge thank you goes out to Dave Longley who designed many of the algorithms used in this specification, including the normalization algorithm which was a monumentally difficult design challenge.