JSON-LD 1.0

A Context-based JSON Serialization for Linking Data

Unofficial Draft 17 August 2011

Editors:
Manu Sporny , Digital Bazaar
Gregg Kellogg , Kellogg Associates
Dave Longley , Digital Bazaar
Authors:
Manu Sporny , Digital Bazaar
Gregg Kellogg , Kellogg Associates
Dave Longley , Digital Bazaar
Mark Birbeck , Backplane Ltd.

This document is also available in this non-normative format: diff to previous version .


Abstract

JSON [ RFC4627 ] has proven to be a highly useful object serialization and messaging format. In an attempt to harmonize the representation of Linked Data in JSON, this specification outlines a common JSON representation format for expressing directed graphs; mixing both Linked Data and non-Linked Data in a single document.

Status of This Document

This document is merely a public working draft of a potential specification. It has no official standing of any kind and does not represent the support or consensus of any standards organisation.

This document is an experimental work in progress.

Table of Contents

1. Introduction

JSON, as specified in [ RFC4627 ], is a simple language for representing data on the Web. Linked Data is a technique for describing content across different documents or Web sites. Web resources are described using IRI s, and typically are dereferencable entities that may be used to find more information, creating a "Web of Knowledge". JSON-LD is intended to be a simple publishing method for expressing not only Linked Data in JSON, but for adding semantics to existing JSON.

JSON-LD is designed as a light-weight syntax that can be used to express Linked Data. It is primarily intended to be a way to express Linked Data in Javascript and other Web-based programming environments. It is also useful when building interoperable Web Services and when storing Linked Data in JSON-based document storage engines. It is practical and designed to be as simple as possible, utilizing the large number of JSON parsers and existing code that is in use today. It is designed to be able to express key-value pairs, RDF data, RDFa [ RDFA-CORE ] data, Microformats [ MICROFORMATS ] data, and Microdata [ MICRODATA ]. That is, it supports every major Web-based structured data model in use today.

The syntax does not require many applications to change their JSON, but allows them to easily add meaning by adding a context in a way that is either in-band or out-of-band. The syntax is designed to not disturb already deployed systems running on JSON, but provide a smooth migration path from JSON to JSON with added semantics. Finally, the format is intended to be fast to parse, fast to generate, stream-based and document-based processing compatible, and require a very small memory footprint in order to operate.

1.1 How to Read this Document

This document is a detailed specification for a serialization of JSON for Linked Data. The document is primarily intended for the following audiences:

To understand the basics in this specification you must first be familiar with JSON, which is detailed in [ RFC4627 ]. To understand the API and how it is intended to operate in a programming environment, it is useful to have working knowledge of the JavaScript programming language [ ECMA-262 ] and WebIDL [ WEBIDL ]. To understand how JSON-LD maps to RDF, it is helpful to be familiar with the basic RDF concepts [ RDF-CONCEPTS ].

Examples may contain references to existing vocabularies and use abbreviations in CURIE s and source code. The following is a list of all vocabularies and their abbreviations, as used in this document:

JSON [ RFC4627 ] defines several terms which are used throughout this document:

JSON Object
An object structure is represented as a pair of curly brackets surrounding zero or more name/value pairs (or members). A name is a string . A single colon comes after each name, separating the name from the value. A single comma separates a value from a following name. The names within an object should be unique.
array
An array is an ordered collection of values. An array begins with [ (left bracket) and ends with ] (right bracket). Values are separated by , (comma). Within JSON-LD, array order is not preserved unless specific markup is provided (see Lists ). This is because the basic data model of JSON-LD is a linked data graph , which is inherently unordered.
string
A string is a sequence of zero or more Unicode characters, wrapped in double quotes, using backslash escapes. A character is represented as a single character string. A string is very much like a C or Java string.
number
A number is very much like a C or Java number, except that the octal and hexadecimal formats are not used.
true and false
Boolean values.
null
The use of the null value is undefined within JSON-LD.

1.2 Contributing

There are a number of ways that one may participate in the development of this specification:

2. Design

The following section outlines the design goals and rationale behind the JSON-LD markup language.

2.1 Goals and Rationale

A number of design considerations were explored during the creation of this markup language:

Simplicity
Developers need only know JSON and three keywords to use the basic functionality in JSON-LD. No extra processors or software libraries are necessary to use JSON-LD in its most basic form. The language attempts to ensure that developers have an easy learning curve.
Compatibility
The JSON-LD markup must be 100% compatible with JSON. This ensures that all of the standard JSON libraries work seamlessly with JSON-LD documents.
Expressiveness
The syntax must be able to express directed graphs, which have been proven to be able to simply express almost every real world data model.
Terseness
The JSON-LD syntax must be very terse and human readable, requiring as little as possible from the developer.
Pragmatism
Mixing the expression of pure Linked Data with data that is not linked was an approach that was driven by pragmatism. JSON-LD attempts to be more practical than theoretical in its approach to Linked Data.
Zero Edits, most of the time
JSON-LD provides a mechanism that allows developers to specify context in a way that is out-of-band. This allows organizations that have already deployed large JSON-based infrastructure to add meaning to their JSON in a way that is not disruptive to their day-to-day operations and is transparent to their current customers. At times, mapping JSON to a graph representation can become difficult. In these instances, rather than having JSON-LD support esoteric markup, we chose not to support the use case and support a simplified syntax instead. So, while Zero Edits was a goal, it was not always possible without adding great complexity to the language.
Streaming
The format supports both document-based and stream-based processing.

2.2 Linked Data

The following definition for Linked Data is the one that will be used for this specification.

  1. Linked Data is a set of documents, each containing a representation of a linked data graph.
  2. A linked data graph is an unordered labeled directed graph, where nodes are subject s or object s, and edges are properties.
  3. A subject is any node in a linked data graph with at least one outgoing edge.
  4. A subject should be labeled with an IRI.
  5. A property is an edge of the linked data graph .
  6. A property should be labeled with an IRI.
  7. An object is a node in a linked data graph with at least one incoming edge.
  8. An object may be labeled with an IRI.
  9. An IRI that is a label in a linked data graph should be dereferencable to a Linked Data document describing the labeled subject , object or property .
  10. A literal is an object with a label that is not an IRI

Note that the definition for Linked Data above is silent on the topic of unlabeled nodes. Unlabeled nodes are not considered Linked Data . However, this specification allows for the expression of unlabeled nodes, as most graph-based data sets on the Web contain a number of associated nodes that are not named and thus are not directly de-referenceable.

2.3 Linking Data

An Internationalized Resource Identifier ( IRI ), as described in [ RFC3987 ], is a mechanism for representing unique identifiers on the web. In Linked Data , an IRI is commonly used for expressing a subject , a property or an object .

JSON-LD defines a mechanism to map JSON values to IRIs. This does not mean that JSON-LD requires every key or value to be an IRI, but rather ensures that keys and values can be mapped to IRIs if the developer so desires to transform their data into Linked Data. There are a few techniques that can ensure that developers will generate good Linked Data for the Web. JSON-LD formalizes those techniques.

We will be using the following JSON markup as the example for the rest of this section:

{
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/",
  "avatar": "http://twitter.com/account/profile_image/manusporny"
}

2.4 The Context

In JSON-LD, a context is used to allow developers to map term s to IRI s. A term is a short word that may be expanded to an IRI . The semantic web, just like the document-based web, uses IRIs for unambiguous identification. The idea is that these term s mean something that may be of use to other developers. For example, the term name may map directly to the IRI http://xmlns.com/foaf/0.1/name . This allows JSON-LD documents to be constructed using the common JSON practice of simple name/value pairs while ensuring that the data is useful outside of the database or page in which it resides.

These Linked Data term s are typically collected in a context and then used by adding a single line to the JSON markup above:

{
  "@context": "http://example.org/json-ld-contexts/person",
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/",
  "avatar": "http://twitter.com/account/profile_image/manusporny"
}

The addition above transforms the previous JSON document into a JSON document with added semantics because the @context specifies how the name , homepage , and avatar terms map to IRIs. Mapping those keys to IRIs gives the data global context. If two developers use the same IRI to describe a property, they are more than likely expressing the same concept. This allows both developers to re-use each other's data without having to agree to how their data will inter-operate on a site-by-site basis.

The semantic web uses a special type of document called a Web Vocabulary to define term s. A context is a type of Web vocabulary. Typically, these Web Vocabulary documents have prefix es associated with them and contain a number of term declarations. A prefix , like a term , is a short word that expands to a Web Vocabulary IRI. Prefix es are helpful when a developer wants to mix multiple vocabularies together in a context, but does not want to go to the trouble of defining every single term in every single vocabulary. Some Web Vocabularies may have 10-20 terms defined. If a developer wants to use 3-4 different vocabularies, the number of terms that would have to be declared in a single context would become quite large. To reduce the number of different terms that must be defined, JSON-LD also allows prefixes to be used to compact IRIs.

For example, the IRI http://xmlns.com/foaf/0.1/ specifies a Web Vocabulary which may be represented using the foaf prefix . The foaf Web Vocabulary contains a term called name . If you join the foaf prefix with the name suffix, you can build a compact IRI that will expand out into an absolute IRI for the http://xmlns.com/foaf/0.1/name vocabulary term. That is, the compact IRI, or short-form, is foaf:name and the expanded-form is http://xmlns.com/foaf/0.1/name . This vocabulary term is used to specify a person's name.
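For example, a context that declares the foaf prefix allows the compact IRI foaf:name to be used directly as a key (a minimal sketch; the same pattern appears in the IRI examples later in this document):

{
  "@context": { "foaf": "http://xmlns.com/foaf/0.1/" },
  "foaf:name": "Manu Sporny"
}
// "foaf:name" is the short-form; a JSON-LD processor expands it to the
// expanded-form http://xmlns.com/foaf/0.1/name described above.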

Developers, and machines, are able to use this IRI (plugging it directly into a web browser, for instance) to go to the term and get a definition of what the term means, much like we can use WordNet today to see the definition of words in the English language. Developers and machines need the same sort of dictionary of terms. IRIs provide a way to ensure that these terms are unambiguous.

The context provides a collection of vocabulary term s and prefix es that can be used to expand JSON keys and values into IRI s.

2.4.1 Inside a Context

In the previous section, the developer used the @context keyword to pull in an external context. That context document, if de-referenced, would look something like this:

{
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": "http://xmlns.com/foaf/0.1/homepage",
    "avatar": "http://xmlns.com/foaf/0.1/avatar"
}

A JSON-LD context document is a simple mapping from term s and prefix es to expanded values such as IRIs or keywords. Contexts may also contain datatype information for certain term s as well as other processing instructions for the JSON-LD processor.

Contexts may be specified in-line. This ensures that JSON-LD documents can be processed when a JSON-LD processor does not have access to the Web.
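For example, the person document from the previous section could embed the context shown above directly, rather than referencing it by IRI:

{
  "@context": 
  {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": "http://xmlns.com/foaf/0.1/homepage",
    "avatar": "http://xmlns.com/foaf/0.1/avatar"
  },
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/",
  "avatar": "http://twitter.com/account/profile_image/manusporny"
}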

JSON-LD strives to ensure that developers don't have to change the JSON that is going into and being returned from their Web applications. This means that developers can also specify a context for JSON data in an out-of-band fashion via the API. The API is described later in this document. A JSON-LD aware Web Service may also define a context that will be pre-loaded for all calls to the service. This allows services that have previously been publishing and receiving JSON data to accept JSON-LD data without requiring client software to change.

2.5 From JSON to JSON-LD

If a set of terms such as name , homepage , and avatar are defined in a context, and that context is used to resolve the names in JSON objects, machines are able to automatically expand the terms to something meaningful and unambiguous, like this:

{
  "http://xmlns.com/foaf/0.1/name": "Manu Sporny",
  "http://xmlns.com/foaf/0.1/homepage": "http://manu.sporny.org"
  "http://rdfs.org/sioc/ns#avatar": "http://twitter.com/account/profile_image/manusporny"
}

Doing this allows JSON to be unambiguously machine-readable without requiring developers that use JSON to drastically change their workflow.

3. Basic Concepts

JSON-LD is designed to ensure that Linked Data concepts can be marked up in a way that is simple to understand and author by Web developers. In many cases, regular JSON markup can become Linked Data with the simple addition of a context. As more JSON-LD features are used, more semantics are added to the JSON markup.

3.1 IRIs

Expressing IRIs is fundamental to Linked Data, as that is how most subject s and many object s are named. IRIs can be expressed in a variety of different ways in JSON-LD.

  1. In general, term s in the key position in a JSON object that have a mapping to an IRI or another key in the context are expanded to an IRI by JSON-LD processors. There are special rules for processing keys in @context and when dealing with keys that start with the @ character.
  2. An IRI is generated for the value specified using @subject , if it is a string .
  3. An IRI is generated for the value specified using @type .
  4. An IRI is generated for the value specified using the @iri keyword.
  5. An IRI is generated when there are @coerce rules in effect for a key named @iri .

IRIs can be expressed directly in the key position like so:

{
...
  "http://xmlns.com/foaf/0.1/name": "Manu Sporny",
...
}

In the example above, the key http://xmlns.com/foaf/0.1/name is interpreted as an IRI, as opposed to being interpreted as a string.

Term expansion occurs for IRIs if a term is defined within the active context :

{
  "@context": {"name": "http://xmlns.com/foaf/0.1/name"},
...
  "name": "Manu Sporny",
...
}

Prefix es are expanded when used in keys:

{
  "@context": {"foaf": "http://xmlns.com/foaf/0.1/"},
...
  "foaf:name": "Manu Sporny",
...
}

foaf:name above will automatically expand out to the IRI http://xmlns.com/foaf/0.1/name .

An IRI is generated when a value is associated with a key using the @iri keyword:

{
...
  "foaf:homepage": { "": "http://manu.sporny.org" }

  "homepage": { "@iri": "http://manu.sporny.org" }

...
}

If type coercion rules are specified in the @context for a particular vocabulary term, an IRI is generated:

{
  "@context": 
  {
    ...
    "@coerce": 
    {
      "@iri": "foaf:homepage"

      "@iri": "homepage"

    }
  }
...
  "foaf:homepage": "http://manu.sporny.org/",

  "homepage": "http://manu.sporny.org/",

...
}

Even though the value http://manu.sporny.org/ is a string , the type coercion rules will transform the value into an IRI when processed by a JSON-LD Processor.

3.2 Identifying the Subject

IRI s are a fundamental concept of Linked Data, and nodes should have a de-referencable identifier used to name and locate them. For nodes to be truly linked, de-referencing the identifier should result in a representation of that node. Associating an IRI with a node tells an application that the returned document contains a description of the identifier requested.

JSON-LD documents may also contain descriptions of other nodes, so it is necessary to be able to uniquely identify each node which may be externally referenced.

A subject of a node is declared using the @subject key. The subject is the first piece of information needed by the JSON-LD processor in order to create the (subject, property, object) tuple, also known as a triple.

{
...
  "@subject": "http://example.org/people#joebob",
...
}

The example above would set the subject to the IRI http://example.org/people#joebob .

3.3 Specifying the Type

The type of a particular subject can be specified using the @type key. Specifying the type in this way will generate a triple of the form (subject, type, type-uri).

To be Linked Data, types should be uniquely identified by an IRI .

{
...
  "@subject": "http://example.org/people#joebob",
  "@type": "http://xmlns.com/foaf/0.1/Person",
...
}

The example above would generate the following triple if the JSON-LD document is mapped to RDF (in N-Triples notation):

<http://example.org/people#joebob> 
   <http://www.w3.org/1999/02/22-rdf-syntax-ns#type>
<http://xmlns.com/foaf/0.1/Person>
.

3.4 Strings

Regular text strings, also referred to as plain literal s, are easily expressed using regular JSON string s.

{
...
  "foaf:name": "",

  "name": "Mark Birbeck",

...
}

3.5 String Internationalization

JSON-LD makes an assumption that strings with associated language encoding information are not very common when used in JavaScript and Web Services. Thus, it takes a little more effort to express strings with associated language information.

{
...
  "foaf:name": 

  "name": 

  {
    "@literal": "花澄",
    "@language": "ja"
  }
...
}

The example above would generate a plain literal for 花澄 and associate the ja language code with the triple that is generated. Languages must be expressed in [ BCP47 ] format.

3.6 Datatypes

A value with an associated datatype, also known as a typed literal , is indicated by associating a literal with an IRI which indicates the typed literal's datatype. Typed literals may be expressed in JSON-LD in three ways:

  1. By utilizing the @coerce keyword.
  2. By utilizing the expanded form for specifying objects.
  3. By using a native JSON datatype.

The first example uses the @coerce keyword to express a typed literal:

{
  "@context": 
  {
    "dc":  "http://purl.org/dc/terms/",
    "xsd": "http://www.w3.org/2001/XMLSchema#"

    "modified":  "http://purl.org/dc/terms/modified",
    "dateTime": "http://www.w3.org/2001/XMLSchema#dateTime"

    "@coerce": 
    {
      "xsd:dateTime": "dc:modified"

      "dateTime": "modified"

    }
  }
...
  "dc:modified": "2010-05-29T14:17:39+02:00",

  "modified": "2010-05-29T14:17:39+02:00",

...
}

The second example uses the expanded form for specifying objects:

{
...
  "dc:modified": 

  "modified": 

  {
    "@literal": "2010-05-29T14:17:39+02:00",
    "@datatype": "xsd:dateTime"

    "@datatype": "dateTime"

  }
...
}

Both examples above would generate an object with the literal value of 2010-05-29T14:17:39+02:00 and the datatype of http://www.w3.org/2001/XMLSchema#dateTime .

The third example uses a built-in native JSON type, a number , to express a datatype:

{
...
  "@subject": "http://example.org/people#joebob",
  "foaf:age": 

  "age": 31

...
}

The example above would generate the following triple:

<http://example.org/people#joebob> 
   <http://xmlns.com/foaf/0.1/age> 
"31"^^<http://www.w3.org/2001/XMLSchema#integer>
.

3.7 Multiple Objects for a Single Property

A JSON-LD author can express multiple triples in a compact way by using array s. If a subject has multiple values for the same property, the author may express the values as an array .

In JSON-LD, multiple objects on a property are not ordered. This is because graphs are inherently unordered data structures. To see more on creating ordered collections in JSON-LD, see Lists .

{
...
  "@subject": "http://example.org/people#joebob",
  "foaf:nick": ,

  "nick": ["joe", "bob", "jaybee"],

...
}

The markup shown above would generate the following triples:

<http://example.org/people#joebob> 
   <http://xmlns.com/foaf/0.1/nick>
      "joe" .
<http://example.org/people#joebob> 
   <http://xmlns.com/foaf/0.1/nick>
      "bob" .
<http://example.org/people#joebob> 
   <http://xmlns.com/foaf/0.1/nick>
"jaybee"
.

3.8 Multiple Typed Literals for a Single Property

Multiple typed literal s may also be expressed using the expanded form for objects:

{
...
  "@subject": "http://example.org/articles/8",
  "dcterms:modified": 

  "modified": 

  [
    {
      "@literal": "2010-05-29T14:17:39+02:00",
      "@datatype": "xsd:dateTime"

      "@datatype": "dateTime"

    },
    {
      "@literal": "2010-05-30T09:21:28-04:00",
      "@datatype": "xsd:dateTime"

      "@datatype": "dateTime"

    }
  ]
...
}

The markup shown above would generate the following triples:

<http://example.org/articles/8> 
   <http://purl.org/dc/terms/modified>
      "2010-05-29T14:17:39+02:00"^^http://www.w3.org/2001/XMLSchema#dateTime .
<http://example.org/articles/8> 
   <http://purl.org/dc/terms/modified>
"2010-05-30T09:21:28-04:00"^^http://www.w3.org/2001/XMLSchema#dateTime
.

3.9 Expansion

Expansion is the process of taking a JSON-LD document and applying a context such that all IRIs, datatypes, and literal values are expanded so that the context is no longer necessary. JSON-LD document expansion is typically used when re-mapping JSON-LD documents to application-specific JSON documents or as a part of the Normalization process.

For example, assume the following JSON-LD input document:

{
   "name": "Manu Sporny",
   "homepage": "http://manu.sporny.org/",
   "@context": 
   {
      "name": "http://xmlns.com/foaf/0.1/name",
      "homepage": "http://xmlns.com/foaf/0.1/homepage",
      "@coerce": 
      {
         "@iri": "homepage"
      }
   }
}

Running the JSON-LD Expansion algorithm against the JSON-LD input document provided above would result in the following output:

{
   "http://xmlns.com/foaf/0.1/name": "Manu Sporny",
   "http://xmlns.com/foaf/0.1/homepage": 
   {
      "@iri": "http://manu.sporny.org/"
   }
}

3.10 Compaction

Compaction is the process of taking a JSON-LD document and applying a context such that the most compact form of the document is generated. JSON is typically expressed in a very compact, key-value format. That is, full IRIs are rarely used as keys. At times, a JSON-LD document may be received that is not in its most compact form. JSON-LD, via the API, provides a way to compact a JSON-LD document.

For example, assume the following JSON-LD input document:

{
   "http://xmlns.com/foaf/0.1/name": "Manu Sporny",
   "http://xmlns.com/foaf/0.1/homepage": 
   {
      "@iri": "http://manu.sporny.org/"
   }
}

Additionally, assume the following developer-supplied JSON-LD context:

{
   "name": "http://xmlns.com/foaf/0.1/name",
   "homepage": "http://xmlns.com/foaf/0.1/homepage",
   "@coerce": 
   {
      "@iri": ["homepage"]
   }
}

Running the JSON-LD Compaction algorithm given the context supplied above against the JSON-LD input document provided above would result in the following output:

{
   "name": "Manu Sporny",
   "homepage": "http://manu.sporny.org/",
   "@context": 
   {
      "name": "http://xmlns.com/foaf/0.1/name",
      "homepage": "http://xmlns.com/foaf/0.1/homepage",
      "@coerce": 
      {
         "@iri": "homepage"
      }
   }
}

The compaction algorithm also enables the developer to map any expanded format into an application-specific compacted format. While the context provided above mapped http://xmlns.com/foaf/0.1/name to name , it could have also mapped it to any arbitrary string provided by the developer.

3.11 Framing

A JSON-LD document is a representation of a directed graph. A single directed graph can have many different serializations, each expressing exactly the same information. Developers typically work with trees, represented as JSON object s, when dealing with JSON. While mapping a graph to a tree can be done, the layout of the end result must be specified in advance. A Frame can be used by a developer on a JSON-LD document to specify a deterministic layout for a graph.

Framing is the process of taking a JSON-LD document, which expresses a graph of information, and applying a specific graph layout (called a Frame ).

The JSON-LD document below expresses a library, a book and a chapter:

{
   "@coerce": {
    "dc":  "http://purl.org/dc/terms/",
    "ex":  "http://example.org/"
   },
   "@subject": 
   [{
      "@subject": "http://example.org/library",
      "@type": "ex:Library",
      "ex:contains": "http://example.org/library/the-republic"
   }, 
   {
      "@subject": "http://example.org/library/the-republic",
      "@type": "ex:Book",
      "dc:creator": "Plato",
      "dc:title": "The Republic",
      "ex:contains": "http://example.org/library/the-republic#introduction"
   }, 
   {
      "@subject": "http://example.org/library/the-republic#introduction",
      "@type": "ex:Chapter",
      "dc:description": "An introductory chapter on The Republic.",
      "dc:title": "The Introduction"
   }],
   "@context": 
   {
      "@coerce": 
      {
         "@iri": "ex:contains"
      },
      "dc": "http://purl.org/dc/elements/1.1/",
      "ex": "http://example.org/vocab#"
   }

  "@coerce": {
    "Book":         "http://example.org/vocab#Book",
    "Chapter":      "http://example.org/vocab#Chapter",
    "contains":     "http://example.org/vocab#contains",
    "creator":      "http://purl.org/dc/terms/creator"
    "description":  "http://purl.org/dc/terms/description"
    "Library":      "http://example.org/vocab#Library",
    "title":        "http://purl.org/dc/terms/title",
    "@coerce": 
    {
      "@iri": "ex:contains"
    },
  },
  "@subject": 
  [{
    "@subject": "http://example.com/library",
    "@type": "Library",
    "contains": "http://example.org/library/the-republic"
  }, 
  {
    "@subject": "http://example.org/library/the-republic",
    "@type": "Book",
    "creator": "Plato",
    "title": "The Republic",
    "contains": "http://example.org/library/the-republic#introduction"
  }, 
  {
    "@subject": "http://example.org/library/the-republic#introduction",
    "@type": "Chapter",
    "description": "An introductory chapter on The Republic.",
    "title": "The Introduction"
  }]

}

Developers typically like to operate on items in a hierarchical, tree-based fashion. Ideally, a developer would want the data above sorted into top-level libraries, then the books that are contained in each library, and then the chapters contained in each book. To achieve that layout, the developer can define the following frame :

{
   "@context": {
      "dc": "http://purl.org/dc/elements/1.1/",
      "ex": "http://example.org/vocab#"
   },
   "@type": "ex:Library",
   "ex:contains": {
      "@type": "ex:Book",
      "ex:contains": {
         "@type": "ex:Chapter"
      }
   }

  "@context": {
    "Book":         "http://example.org/vocab#Book",
    "Chapter":      "http://example.org/vocab#Chapter",
    "contains":     "http://example.org/vocab#contains",
    "creator":      "http://purl.org/dc/terms/creator"
    "description":  "http://purl.org/dc/terms/description"
    "Library":      "http://example.org/vocab#Library",
    "title":        "http://purl.org/dc/terms/title"
  },
  "@type": "Library",
  "contains": {
    "@type": "Book",
    "contains": {
      "@type": "Chapter"
    }
  }

}

When the framing algorithm is run against the previously defined JSON-LD document, paired with the frame above, the following JSON-LD document is the end result:

{
   "@context": 
   {
      "ex": "http://example.org/vocab#",
      "dc":  "http://purl.org/dc/terms/",
   }
   "@subject": "http://example.org/library",
   "@type": "ex:Library",
   "ex:contains": 
   {
      "@subject": "http://example.org/library/the-republic",
      "@type": "ex:Book",
      "dc:creator": "Plato",
      "dc:title": "The Republic",
      "ex:contains": 
      {
         "@subject": "http://example.org/library/the-republic#introduction",
         "@type": "ex:Chapter",
         "dc:description": "An introductory chapter on The Republic.",
         "dc:title": "The Introduction"
      },
   },

  "@context": {
    "Book":         "http://example.org/vocab#Book",
    "Chapter":      "http://example.org/vocab#Chapter",
    "contains":     "http://example.org/vocab#contains",
    "creator":      "http://purl.org/dc/terms/creator"
    "description":  "http://purl.org/dc/terms/description"
    "Library":      "http://example.org/vocab#Library",
    "title":        "http://purl.org/dc/terms/title"
  },
  "@subject": "http://example.org/library",
  "@type": "Library",
  "contains": {
    "@subject": "http://example.org/library/the-republic",
    "@type": "Book",
    "creator": "Plato",    "title": "The Republic",
    "contains": {
      "@subject": "http://example.org/library/the-republic#introduction",
      "@type": "Chapter",
      "description": "An introductory chapter on The Republic.",      "title": "The Introduction"
    },
  },

}

The JSON-LD framing algorithm allows developers to query by example and force a specific tree layout to a JSON-LD document.

4. Advanced Concepts

JSON-LD has a number of features that provide functionality above and beyond the core functionality described above. The following sections outline the features that are specific to JSON-LD.

4.1 CURIEs

Concepts in Linked Data documents may draw on a number of different vocabularies. The @vocab mechanism is useful to easily associate types and properties with a specific vocabulary, but when many vocabularies are used, this becomes difficult. Consider the following example:


{
  "@context": {
    "dc": "http://purl.org/dc/elements/1.1/",
    "ex": "http://example.org/vocab#"
  },
  "@subject": "http://example.org/library",
  "@type": "ex:Library",
  "ex:contains": {
    "@subject": "http://example.org/library/the-republic",
    "@type": "ex:Book",
    "dc:creator": "Plato",
    "dc:title": "The Republic",
    "ex:contains": {
      "@subject": "http://example.org/library/the-republic#introduction",
      "@type": "ex:Chapter",
      "dc:description": "An introductory chapter on The Republic.",
      "dc:title": "The Introduction"
    }
  }
}

In this example, two different vocabularies are identified with prefixes, and used as type and property values using the CURIE notation.

A CURIE is a compact way of describing an IRI . The term actually comes from Compact URI. Generally, a CURIE is composed of a prefix and a suffix separated by a ':'. In JSON-LD, the prefix may be the empty string, denoting the default prefix .

CURIEs are defined more formally in [ RDFA-CORE ] section 6 "CURIE Syntax Definition" .

4.2 Automatic Typing

JSON is capable of expressing typed information such as doubles, integers, and boolean values. As demonstrated below, JSON-LD utilizes that information to create typed literal s:

{
...
  // The following two values are automatically converted to a type of xsd:double
  // and both values are equivalent to each other.
  "measure:cups": 5.3,
  "measure:cups": 5.3e0,
  // The following value is automatically converted to a type of xsd:double as well
  "space:astronomicUnits": 6.5e73,
  // The following value should never be converted to a language-native type
  "measure:stones": { "@literal": "4.8", "@datatype": "xsd:decimal" },
  // This value is automatically converted to having a type of xsd:integer
  "chem:protons": 12,
  // This value is automatically converted to having a type of xsd:boolean
  "sensor:active": true,
...
}

When dealing with a number of modern programming languages, including JavaScript (ECMA-262), there is no distinction between xsd:decimal and xsd:double values. That is, the number 5.3 and the number 5.3e0 are treated as if they were the same. When converting from JSON-LD to a language-native format and back, datatype information is lost in a number of these languages. Thus, one could say that 5.3 is an xsd:decimal and 5.3e0 is an xsd:double in JSON-LD, but when both values are converted to a language-native format the datatype difference between the two is lost because the machine-level representation will almost always be a double . Implementers should be aware of this potential round-tripping issue between xsd:decimal and xsd:double . Specifically, objects with a datatype of xsd:decimal must not be converted to a language-native type.

4.3 Type Coercion

JSON-LD supports the coercion of values to particular data types. Type coercion allows someone deploying JSON-LD to coerce the incoming or outgoing types to the proper data type based on a mapping of data type IRIs to property types. Using type coercion, one may convert simple JSON data to properly typed RDF data.

The example below demonstrates how a JSON-LD author can coerce values to plain literal s, typed literal s and IRIs.

{
  "@context": 
  {  
     "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
     "xsd": "http://www.w3.org/2001/XMLSchema#",
     "name": "http://xmlns.com/foaf/0.1/name",
     "age": "http://xmlns.com/foaf/0.1/age",
     "homepage": "http://xmlns.com/foaf/0.1/homepage",
     "@coerce":
     {
        "xsd:integer": "age",
        "@iri": "homepage"
     }
  },
  "name": "John Smith",
  "age": "41",
  "homepage": "http://example.org/home/"
}

The example above would generate the following triples:

_:bnode1
   <http://xmlns.com/foaf/0.1/name>
      "John Smith" .
_:bnode1
   <http://xmlns.com/foaf/0.1/age>
      "41"^^http://www.w3.org/2001/XMLSchema#integer .
_:bnode1
   <http://xmlns.com/foaf/0.1/homepage>
<http://example.org/home/>
.

4.4 Chaining

Object chaining is a JSON-LD feature that allows an author to use the definition of JSON-LD objects as property values. This is a commonly used mechanism for creating a parent-child relationship between two subject s.

The example below shows two subjects related by a property on the first subject:

{
...
  "foaf:name": "Manu Sporny",
  "": {
    "",
    "",

  "name": "Manu Sporny",
  "knows": {
    "@type": "Person",
    "name": "Gregg Kellogg",

  }
...
}

An object definition, like the one used above, may be used as a JSON value at any point in JSON-LD.

4.5 Identifying Unlabeled Nodes

At times, it becomes necessary to be able to express information without being able to specify the subject. Typically, this type of node is called an unlabeled node or a blank node. In JSON-LD, unlabeled node identifiers are automatically created if a subject is not specified using the @subject keyword. However, authors may provide identifiers for unlabeled nodes by using the special _ (underscore) CURIE prefix.

{
...
  "@subject": "_:foo",
...
}

The example above would set the subject to _:foo , which can then be used later on in the JSON-LD markup to refer back to the unlabeled node. This practice, however, is usually frowned upon when generating Linked Data. If a developer finds that they refer to the unlabeled node more than once, they should consider naming the node using a resolvable IRI.
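The sketch below illustrates both uses of the identifier; the knows term and its @coerce rule are hypothetical additions to the context used for this example:

{
  "@context": 
  {
    "name": "http://xmlns.com/foaf/0.1/name",
    "knows": "http://xmlns.com/foaf/0.1/knows",
    "@coerce": { "@iri": "knows" }
  },
  "@subject": 
  [{
    "@subject": "_:foo",
    "name": "Dave Longley"
  }, 
  {
    "@subject": "http://example.org/people#joebob",
    "knows": "_:foo"
  }]
}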

4.6 Overriding Keywords

JSON-LD allows all of the syntax keywords, except for @context , to be overridden. This feature allows more legacy JSON content to be supported by JSON-LD. It also allows developers to design domain-specific implementations using only the JSON-LD context.

{
  "@context": 
  {  
     "url": "@subject",
     "a": "@type",
     "name": "http://schema.org/name"
  },
  "url": "http://example.com/about#gregg",
  "a": "http://schema.org/Person",
  "name": "Gregg Kellogg"
}

In the example above, the @subject and @type keywords have been overridden by url and a , respectively.

4.7 Normalization

Normalization is the process of taking JSON-LD input and performing a deterministic transformation on that input that results in a JSON-LD output that any conforming JSON-LD processor would have generated given the same input. The problem is a fairly difficult technical problem to solve because it requires a directed graph to be ordered into a set of nodes and edges in a deterministic way. This is easy to do when all of the nodes have unique names, but very difficult to do when some of the nodes are not labeled.

Normalization is useful when comparing two graphs against one another, when generating a detailed list of differences between two graphs, and when generating a cryptographic digital signature for information contained in a graph or when generating a hash of the information contained in a graph.

The example below is an un-normalized JSON-LD document:

{
   "name": "Manu Sporny",
   "homepage": "http://manu.sporny.org/",
   "@context": 
   {
      "name": "http://xmlns.com/foaf/0.1/name",
      "homepage": "http://xmlns.com/foaf/0.1/homepage",
      "xsd": "http://www.w3.org/2001/XMLSchema#",
      "@coerce": 
      {
         "@iri": ["homepage"]
      }
   }
}

The example below is the normalized form of the JSON-LD document above:

Whitespace is used below to aid readability. The normalization algorithm for JSON-LD removes all unnecessary whitespace in the fully normalized form.

[{
    "@subject": 
    {
        "@iri": "_:c14n0"
    },
    "http://xmlns.com/foaf/0.1/homepage": 
    {
        "@iri": "http://manu.sporny.org/"
    },
    "http://xmlns.com/foaf/0.1/name": "Manu Sporny"
}]

Notice how all of the term s have been expanded and sorted in alphabetical order. Also, notice how the subject has been labeled with a blank node identifier . Normalization ensures that any arbitrary graph containing exactly the same information would be normalized to exactly the same form shown above.

5. The Application Programming Interface

This API provides a clean mechanism that enables developers to convert JSON-LD data into a variety of output formats that are easier to work with in various programming languages. If an API is provided in a programming environment, the entire API must be implemented.

5.1 JSONLDProcessor

[NoInterfaceObject]
interface JSONLDProcessor {
    object expand (in object input, in optional JSONLDProcessorCallback? callback);
    object compact (in object input, in object context, in optional JSONLDProcessorCallback? callback);
    object frame (in object input, in object frame, in object options, in optional JSONLDProcessorCallback? callback);
    object normalize (in object input, in optional JSONLDProcessorCallback? callback);
    object triples (in object input, in JSONLDTripleCallback tripleCallback, in optional JSONLDProcessorCallback? parserCallback);
};

5.1.1 Methods

compact
Compacts the given input according to the steps in the Compaction Algorithm . The input must be copied, compacted and returned if there are no errors. If the compaction fails, null must be returned.
Parameter Type Nullable Optional Description
input object The JSON-LD object to perform compaction on.
context object The base context to use when compacting the input .
callback JSONLDProcessorCallback A callback that is called whenever a processing error occurs on the given input .
No exceptions.
Return type: object
expand
Expands the given input according to the steps in the Expansion Algorithm . The input must be copied, expanded and returned if there are no errors. If the expansion fails, null must be returned.
How do we generate warning messages during this process? For example, what happens when a key that doesn't have a mapping is discovered?
Parameter Type Nullable Optional Description
input object The JSON-LD object to copy and perform the expansion upon.
callback JSONLDProcessorCallback A callback that is called whenever a processing error occurs on the input .
No exceptions.
Return type: object
frame
Frames the given input using the frame according to the steps in the Framing Algorithm . The input is used to build the framed output and is returned if there are no errors. Exceptions are thrown if there are errors.
Define what the exceptions are. We need to specify whether or not we want exceptions thrown, or errors returned to the error callback?
Parameter Type Nullable Optional Description
input object The JSON-LD object to perform framing on.
frame object The frame to use when re-arranging the data.
options object A set of options that will affect the framing algorithm.
callback JSONLDProcessorCallback A callback that is called whenever a processing error occurs on the given input .
No exceptions.
Return type: object
normalize
Normalizes the given input according to the steps in the Normalization Algorithm . The input must be copied, normalized and returned if there are no errors. If the normalization fails, null must be returned.
Parameter Type Nullable Optional Description
input object The JSON-LD object to perform normalization upon.
callback JSONLDProcessorCallback A callback that is called whenever a processing error occurs on the given input .
No exceptions.
Return type: object
triples
Processes the input according to the RDF Conversion Algorithm , calling the provided tripleCallback for each triple generated.
Parameter Type Nullable Optional Description
input object The JSON-LD object to process when outputting triples.
tripleCallback JSONLDTripleCallback A callback that is called whenever a triple is generated by the processor.
This callback should be aligned with the RDF API.
parserCallback JSONLDProcessorCallback A callback that is called whenever a processing error occurs on the given input .
No exceptions.
Return type: object

5.2 JSONLDProcessorCallback

The JSONLDProcessorCallback is called whenever a processing error occurs while processing the JSON-LD input .

[NoInterfaceObject Callback]
interface JSONLDProcessorCallback {
    void error (in DOMString error);
};

5.2.1 Methods

error
This callback is invoked whenever an error occurs during processing.
Parameter Type Nullable Optional Description
error DOMString A descriptive error string returned by the processor.
No exceptions.
Return type: void

5.3 JSONLDTripleCallback

The JSONLDTripleCallback is called whenever the processor generates a triple during the triples() call.

[NoInterfaceObject Callback]
interface JSONLDTripleCallback {
    void triple (in DOMString subject, in DOMString property, in DOMString objectType, in DOMString object, in DOMString? datatype, in DOMString? language);
};

5.3.1 Methods

triple
This callback is invoked whenever a triple is generated by the processor.
Parameter Type Nullable Optional Description
subject DOMString The subject IRI that is associated with the triple.
property DOMString The property IRI that is associated with the triple.
objectType DOMString The type of object that is associated with the triple. Valid values are IRI and literal .
object DOMString The object value associated with the subject and the property.
datatype DOMString The datatype associated with the object.
language DOMString The language associated with the object in BCP47 format.
No exceptions.
Return type: void

6. Algorithms

All algorithms described in this section are intended to operate on language-native data structures. That is, the serialization to a text-based JSON document isn't required as input or output to any of these algorithms and language-native data structures must be used where applicable.

6.1 Syntax Tokens and Keywords

JSON-LD specifies a number of syntax tokens and keywords that are used in all algorithms described in this section:

@context
Used to set the local context .
@base
Used to set the base IRI for all object IRIs affected by the active context .
@vocab
Used to set the base IRI for all property IRIs affected by the active context .
@coerce
Used to specify type coercion rules.
@literal
Used to specify a literal value.
@iri
Used to specify an IRI value.
@language
Used to specify the language for a literal.
@datatype
Used to specify the datatype for a literal.
:
The separator for CURIE s when used in JSON keys or JSON values.
@subject
Sets the active subjects.
@type
Used to set the type of the active subjects.

6.2 Algorithm Terms

initial context
a context that is specified to the algorithm before processing begins.
active subject
the currently active subject that the processor should use when processing.
active property
the currently active property that the processor should use when processing.
active object
the currently active object that the processor should use when processing.
active context
a context that is used to resolve CURIE s while the processing algorithm is running. The active context is the context contained within the processor state .
local context
a context that is specified within a JSON object , specified via the @context keyword.
processor state
the processor state , which includes the active context , current subject , and current property . The processor state is managed as a stack with elements from the previous processor state copied into a new processor state when entering a new JSON object .
JSON-LD input
The JSON-LD data structure that is provided as input to the algorithm.
JSON-LD output
The JSON-LD data structure that is produced as output by the algorithm.

6.3 Context

Processing of a JSON-LD data structure is managed recursively. During processing, each rule is applied using information provided by the active context . Processing begins by pushing a new processor state onto the processor state stack and initializing the active context with the initial context . If a local context is encountered, information from the local context is merged into the active context .

The active context is used for expanding keys and values of a JSON object (or elements of a list (see List Processing )).

A local context is identified within a JSON object having a key of @context with a string or JSON object value. When processing a local context , special processing rules apply (a short example follows the list):

  1. Create a new, empty local context .
  2. If the value is a simple string , it must have the lexical form of an IRI and is used to initialize a new JSON document which replaces the value for subsequent processing.
  3. If the value is a JSON object , perform the following steps:
    1. If the JSON object has a @base key, it must have a value of a simple string with the lexical form of an absolute IRI. Add the base mapping to the local context .

      Turtle allows @base to be relative. If we did this, we would have to add IRI Expansion .

    2. If the JSON object has a @vocab key, it must have a value of a simple string with the lexical form of an absolute IRI. Add the vocabulary mapping to the local context after performing IRI Expansion on the associated value.
    3. If the JSON object has a @coerce key, it must have a value of a JSON object . Add the @coerce mapping to the local context , performing IRI Expansion on the associated value(s).
    4. Otherwise, the key must have the lexical form of NCName and must have the value of a simple string with the lexical form of IRI. Merge the key-value pair into the local context .
  4. Merge the local context 's @coerce mapping into the active context 's @coerce mapping as described below .
  5. Merge all entries other than the @coerce mapping from the local context to the active context overwriting any duplicate values.
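The local context below is a minimal sketch (the IRIs are illustrative) that exercises each of the rules above: a @base mapping, a @vocab mapping, a @coerce mapping, and a plain term declaration:

{
  "@context": 
  {
    "@base": "http://example.org/people/",
    "@vocab": "http://xmlns.com/foaf/0.1/",
    "name": "http://xmlns.com/foaf/0.1/name",
    "@coerce": { "@iri": "homepage" }
  },
  ...
}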

6.3.1 Coerce

Map each key-value pair in the local context 's @coerce mapping into the active context 's @coerce mapping, overwriting any duplicate values in the active context 's @coerce mapping. The @coerce mapping has either a single CURIE or an array of CURIEs. When merging with an existing mapping in the active context , map all CURIE values to array form and replace with the union of the value from the local context and the value of the active context . If the result is an array with a single CURIE, the processor may represent this as a string value.
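For instance, if the active context 's @coerce mapping already contains "@iri": "homepage" and a local context supplies "@iri": "depiction" (a hypothetical term), the merged mapping is the union of the two values in array form:

{
  "@coerce": 
  {
    "@iri": ["homepage", "depiction"]
  }
}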

6.3.2 Initial Context

The initial context is initialized as follows:

  • @base is set using section 5.1 Establishing a Base URI of [ RFC3986 ]. Processors may provide a means of setting the base IRI programmatically.
  • @coerce is set with a single mapping from @iri to @type .
{
    "@base": document-location,
    "@context": {
      "@iri": "@type"
    }
}

6.4 IRI Expansion

Keys and some values are evaluated to produce an IRI. This section defines an algorithm for transforming a value representing an IRI into an actual IRI.

IRIs may be represented as an explicit string, as a CURIE , or as a value relative to @base or @vocab .

CURIEs are defined more formally in [ RDFA-CORE ] section 6 "CURIE Syntax Definition" . Generally, a CURIE is composed of a prefix and a suffix separated by a ':'. In JSON-LD, the prefix may be the empty string, denoting the default prefix . The algorithm for generating an IRI is (a worked example follows these steps):

  1. Split the value into a prefix and suffix from the first occurrence of ':'.
  2. If the prefix is a '_' (underscore), the IRI is unchanged.
  3. If the active context contains a mapping for prefix , generate an IRI by prepending the mapped prefix to the (possibly empty) suffix using textual concatenation. Note that an empty suffix and no suffix (meaning the value contains no ':' string at all) are treated equivalently.
  4. If the IRI being processed is for a property (i.e., a key in a JSON object , or a value in a @coerce mapping) and the active context has a @vocab mapping, join the mapped value to the suffix using textual concatenation.
  5. If the IRI being processed is for a subject or object (i.e., not a property) and the active context has a @base mapping, join the mapped value to the suffix using the method described in [ RFC3986 ].
  6. Otherwise, use the value directly as an IRI.
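The sketch below illustrates the intended result of each step, assuming the active context was created from the following hypothetical local context. The expansions are shown as // comments, in the style of the Automatic Typing example:

{
  "@base": "http://example.org/library/",
  "@vocab": "http://example.org/vocab#",
  "foaf": "http://xmlns.com/foaf/0.1/"
}
// "_:foo"        => "_:foo"                                   (step 2: unchanged)
// "foaf:name"    => "http://xmlns.com/foaf/0.1/name"          (step 3: prefix mapping)
// "contains"     => "http://example.org/vocab#contains"       (step 4: property, @vocab)
// "the-republic" => "http://example.org/library/the-republic" (step 5: subject/object, @base)
// "http://example.org/other" => unchanged                     (step 6: already an IRI)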

6.5 IRI Compaction

Some keys and values are expressed using IRIs. This section defines an algorithm for transforming an IRI to a compact IRI using the term s and prefix es specified in the local context .

The algorithm for generating a compacted IRI is (an example follows these steps):

  1. Search every key-value pair in the active context for a term that is a complete match against the IRI. If a complete match is found, the resulting compacted IRI is the term associated with the IRI in the active context .
  2. If a complete match is not found, search for a partial match from the beginning of the IRI. For all matches that are found, the resulting compacted IRI is the prefix associated with the partially matched IRI in the active context concatenated with a colon (:) character and the unmatched part of the string. If there is more than one compacted IRI produced, the final value is the lexicographically least value of the entire set of compacted IRIs.
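For example, given a hypothetical active context that maps the term name and the prefix foaf as shown below, compaction would proceed as follows:

{
  "name": "http://xmlns.com/foaf/0.1/name",
  "foaf": "http://xmlns.com/foaf/0.1/"
}
// "http://xmlns.com/foaf/0.1/name"     => "name"          (complete term match, step 1)
// "http://xmlns.com/foaf/0.1/homepage" => "foaf:homepage" (partial prefix match, step 2)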

6.6 Value Expansion

Some values in JSON-LD can be expressed in a compact form. These values are required to be expanded at times when processing JSON-LD documents.

The algorithm for expanding a value is (an example follows these steps):

  1. If the key that is associated with the value has an associated coercion entry in the local context , the resulting expansion is an object populated according to the following steps:
    1. If the coercion target is @iri , expand the value by adding a new key-value pair where the key is @iri and the value is the expanded IRI according to the IRI Expansion rules.
    2. If the coercion target is a typed literal, expand the value by adding two new key-value pairs. The first key-value pair will be @literal and the unexpanded value. The second key-value pair will be @datatype and the associated coercion datatype expanded according to the IRI Expansion rules.
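For example, assuming the coercion rules used earlier in this document ( "@iri": "homepage" and "dateTime": "modified" ), value expansion would produce the following expanded objects:

// "homepage": "http://manu.sporny.org/" expands to:
{ "@iri": "http://manu.sporny.org/" }
// "modified": "2010-05-29T14:17:39+02:00" expands to:
{ "@literal": "2010-05-29T14:17:39+02:00", "@datatype": "http://www.w3.org/2001/XMLSchema#dateTime" }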

6.7 Value Compaction

Some values, such as IRIs and typed literals, may be expressed in an expanded form in JSON-LD. These values are required to be compacted at times when processing JSON-LD documents.

The algorithm for compacting a value is (an example follows these steps):

  1. If the local context contains a coercion target for the key that is associated with the value, compact the value using the following steps:
    1. If the coercion target is an @iri , the compacted value is the value associated with the @iri key, processed according to the IRI Compaction steps.
    2. If the coercion target is a typed literal, the compacted value is the value associated with the @literal key.
    3. Otherwise, the value is not modified.
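Mirroring the expansion example above, and assuming the same coercion targets in the local context , value compaction reverses the process:

// { "@iri": "http://manu.sporny.org/" } compacts to:
"http://manu.sporny.org/"
// { "@literal": "2010-05-29T14:17:39+02:00", "@datatype": "http://www.w3.org/2001/XMLSchema#dateTime" } compacts to:
"2010-05-29T14:17:39+02:00"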

6.8 Expansion

This algorithm is a work in progress, do not implement it.

As stated previously, expansion is the process of taking a JSON-LD input and expanding all IRIs and typed literals to their fully-expanded form. The output will not contain a single context declaration and will have all IRIs and typed literals fully expanded.

6.8.1 Expansion Algorithm

  1. If the top-level item in the JSON-LD input is an array , process each item in the array recursively using this algorithm.
  2. If the top-level item in the JSON-LD input is an object, update the local context according to the steps outlined in the context section. Process each key, expanding the key according to the IRI Expansion rules.
    1. Process each value associated with each key
      1. If the value is an array , process each item in the array recursively using this algorithm.
      2. If the value is an object, process the object recursively using this algorithm.
      3. Otherwise, check to see if the associated key has an associated coercion rule. If the value should be coerced, expand the value according to the Value Expansion rules. If the value does not need to be coerced, leave the value as-is.
    2. Remove the context from the object

6.9 Compaction

This algorithm is a work in progress, do not implement it.

As stated previously, compaction is the process of taking a JSON-LD input and compacting all IRIs using a given context. The output will contain a single top-level context declaration and will only use term s and prefix es and will ensure that all typed literals are fully compacted.

6.9.1 Compaction Algorithm

  1. Perform the Expansion Algorithm on the JSON-LD input .
  2. If the top-level item is an array , process each item in the array recursively, starting at this step.
  3. If the top-level item is an object, compress each key using the steps defined in IRI Compaction and compress each value using the steps defined in Value Compaction

6.10 Framing

This algorithm is a work in progress, do not implement it.

A JSON-LD document is a representation of a directed graph. A single directed graph can have many different serializations, each expressing exactly the same information. Developers typically don't work directly with graphs, but rather, prefer trees when dealing with JSON. While mapping a graph to a tree can be done, the layout of the end result must be specified in advance. This section defines an algorithm for mapping a graph to a tree given a frame .

6.10.1 Framing Algorithm Terms

input frame
the initial frame provided to the framing algorithm.
framing context
a context containing the object embed flag , the explicit inclusion flag and the omit missing properties flag .
object embed flag
a flag specifying that objects should be directly embedded in the output, instead of being referred to by their IRI.
explicit inclusion flag
a flag specifying that for properties to be included in the output, they must be explicitly declared in the framing context .
omit missing properties flag
a flag specifying that properties that are missing from the JSON-LD input should be omitted from the output.
match limit
A value specifying the maximum number of matches to accept when building arrays of values during the framing algorithm. A value of -1 specifies that there is no match limit.
map of embedded subjects
A map that tracks if a subject has been embedded in the output of the Framing Algorithm .

6.10.2 Framing Algorithm

The framing algorithm takes JSON-LD input that has been normalized according to the Normalization Algorithm ( normalized input ), an input frame that has been expanded according to the Expansion Algorithm ( expanded frame ), and a number of options and produces JSON-LD output . The following series of steps is the recursive portion of the framing algorithm:

  1. Initialize the framing context by setting the object embed flag , clearing the explicit inclusion flag , and clearing the omit missing properties flag . Override these values based on input options provided to the algorithm by the application.
  2. Generate a list of frames by processing the expanded frame :
    1. If the expanded frame is not an array, set match limit to 1, place the expanded frame into the list of frames , and set the JSON-LD output to null .
    2. If the expanded frame is an empty array, place an empty object into the list of frames , set the JSON-LD output to an array, and set match limit to -1.
    3. If the expanded frame is a non-empty array, add each item in the expanded frame into the list of frames , set the JSON-LD output to an array, and set match limit to -1.
  3. Create a match array for each expanded frame in the list of frames , halting when either the match limit is zero or the end of the list of frames is reached. If an expanded frame is not an object, the processor must throw an Invalid Frame Format exception. Add each matching item from the normalized input to the match array and decrement the match limit by 1 if:
    1. The expanded frame has an rdf:type that exists in the item's list of rdf:type s. Note: the rdf:type can be an array, but only one value needs to be in common between the item and the expanded frame for a match.
    2. The expanded frame does not have an rdf:type property, but every property in the expanded frame exists in the item.
  4. Process each item in the match array with its associated match frame :
    1. If the match frame contains an @embed keyword, set the object embed flag to its value. If the match frame contains an @explicit keyword, set the explicit inclusion flag to its value. Note: if the keyword exists, but the value is neither true nor false , set the associated flag to true .
    2. If the object embed flag is cleared and the item has the @subject property, replace the item with the value of the @subject property.
    3. If the object embed flag is set and the item has the @subject property, and its IRI is in the map of embedded subjects , throw a Duplicate Embed exception.
    4. If the object embed flag is set and the item has the @subject property and its IRI is not in the map of embedded subjects :
      1. If the explicit inclusion flag is set, then delete any key from the item that does not exist in the match frame , except @subject .
      2. For each key in the match frame , except for keywords and rdf:type :
        1. If the key is in the item, then build a new recursion input list using the object or objects associated with the key. If any object contains an @iri value that exists in the normalized input , replace the object in the recursion input list with a new object containing the @subject key where the value is the value of the @iri , and all of the other key-value pairs for that subject. Set the recursion match frame to the value associated with the match frame 's key. Replace the value associated with the key by recursively calling this algorithm using the recursion input list and recursion match frame as input.
        2. If the key is not in the item, add the key to the item and set the associated value to an empty array if the match frame key's value is an array or null otherwise.
        3. If the value associated with the item's key is null , process the omit missing properties flag :
          1. If the value associated with the key in the match frame is an array, use the first frame from the array as the property frame , otherwise set the property frame to an empty object.
          2. If the property frame contains an @omitDefault keyword, set the omit missing properties flag to its value. Note: if the keyword exists, but the value is neither true nor false , set the associated flag to true .
          3. If the omit missing properties flag is set, delete the key in the item. Otherwise, if the @default keyword is set in the property frame , set the item's value to the value of @default .
    5. If the JSON-LD output is null set it to the item, otherwise, append the item to the JSON-LD output .
  5. Return the JSON-LD output .
The final, non-recursive step of the framing algorithm requires the JSON-LD output to be compacted according to the Compaction Algorithm by using the context provided in the input frame . The resulting value is the final output of the framing algorithm and is what should be returned to the application.

6.11 Normalization

This algorithm is a work in progress, do not implement it.

Normalization is the process of taking JSON-LD input and performing a deterministic transformation on that input that results in all aspects of the graph being fully expanded and named in the JSON-LD output . The normalized output is generated in such a way that any conforming JSON-LD processor will generate identical output given the same input. This is a fairly difficult technical problem to solve because it requires a directed graph to be ordered into a set of nodes and edges in a deterministic way. This is easy to do when all of the nodes have unique names, but very difficult when some of the nodes are not labeled.

In time, there may be more than one normalization algorithm that will need to be identified. For identification purposes, this algorithm is named UGNA2011 .

6.11.1 Normalization Algorithm Terms

label
The subject IRI associated with a graph node. The subject IRI is expressed using a key-value pair in a JSON object where the key is @subject and the value is a string that is an IRI or a JSON object containing the key @iri and a value that is a string that is an IRI.
list of expanded nodes
A list of all nodes in the JSON-LD input graph containing no embedded objects and having all keys and values expanded according to the steps in the Expansion Algorithm .
alpha and beta values
The words alpha and beta refer to the first and second nodes or values being examined in an algorithm. The names are merely used to refer to each input value to a comparison algorithm.
renaming counter
A counter that is used during the Node Relabeling Algorithm . The counter typically starts at one (1) and counts up for every node that is relabeled. There will be two such renaming counters in an implementation of the normalization algorithm. The first is the labeling counter and the second is the deterministic labeling counter .
serialization label
An identifier that is created to aid in the normalization process in the Deep Comparison Algorithm . The value typically takes the form of s or c followed by a number.

6.11.2 Normalization State

When performing the steps required by the normalization algorithm, it is helpful to track the many pieces of information in a data structure called the normalization state . Many of these pieces simply provide indexes into the graph. The information contained in the normalization state is described below.

node naming state
Each node in the graph will be assigned a node state . This state contains the information necessary to deterministically label all nodes in the graph. A node state includes:
node reference
A node reference is a reference to a node in the graph. For a given node state , its node reference refers to the node that the state is for. When a node state is created, its node reference should be to the node it is created for.
outgoing list
Lists the label s for all nodes that are properties of the node reference . This list should be initialized by iterating over every object associated with a property in the node reference adding its label if it is another node.
incoming list
Lists the label s for all nodes in the graph for which the node reference is a property. This list is initialized to an empty list.
outgoing serialization map
Maps node label s to serialization label s. This map is initialized to an empty map. When this map is populated, it will be filled with keys that are the label s of every node in the graph with a label that begins with _: and that has a path, via properties, that starts with the node reference .
outgoing serialization
A string that can be lexicographically compared to the outgoing serialization s of other node state s. It is a representation of the outgoing serialization map and other related information. This string is initialized to an empty string.
incoming serialization map
Maps node label s to serialization label s. This map is initialized to an empty map. When this map is populated, it will be filled with keys that are the label s of every node in the graph with a label that begins with _: and that has a path, via properties, that ends with the node reference .
incoming serialization
A string that can be lexicographically compared to the outgoing serialization s of other node state s. It is a representation of the incoming serialization map and other related information. This string is initialized to an empty string.
node state map
A mapping from a node's label to a node state . It is initialized to an empty map.
labeling prefix
The labeling prefix is a string that is used as the beginning of a node label . It should be initialized to a random base string that starts with the characters _: , is not used by any other node's label in the JSON-LD input , and does not start with the characters _:c14n . The prefix has two uses. First, it is used to temporarily name nodes during the normalization algorithm in a way that doesn't collide with the names that already exist as well as the names that will be generated by the normalization algorithm. Second, it will eventually be set to _:c14n to generate the final, deterministic labels for nodes in the graph. This prefix will be concatenated with the labeling counter to produce a node label . For example, _:j8r3k is a proper initial value for the labeling prefix .
labeling counter
A counter that is used to label nodes. It is appended to the labeling prefix to create a node label . It is initialized to 1 .
map of flattened nodes
A map containing a representation of all nodes in the graph where the key is a node label and the value is a single JSON object that has no nested sub-objects and has had all properties for the same node merged into a single JSON object .
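
A non-normative sketch of how the normalization state and an individual node state might be represented in JavaScript is shown below; the property names are illustrative only.

// Illustrative shape of the normalization state
var normalizationState = {
  nodeStateMap: {},           // node label -> node state
  labelingPrefix: '_:j8r3k',  // random prefix not present in the input
  labelingCounter: 1,
  mapOfFlattenedNodes: {}     // node label -> flattened JSON object
};

// Illustrative shape of a single node state
var nodeState = {
  nodeReference: null,          // the node this state describes
  outgoingList: [],             // labels of nodes this node points to
  incomingList: [],             // labels of nodes that point to this node
  outgoingSerializationMap: {},
  outgoingSerialization: '',
  incomingSerializationMap: {},
  incomingSerialization: ''
};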

6.11.3 Normalization Algorithm

The normalization algorithm expands the JSON-LD input , flattens the data structure, and creates an initial set of names for all nodes in the graph. The flattened data structure is then processed by a node labeling algorithm in order to get a fully expanded and named list of nodes, which is then sorted. The result is a deterministically named and ordered list of graph nodes.

  1. Expand the JSON-LD input according to the steps in the Expansion Algorithm and store the result as the expanded input .
  2. Create a normalization state .
  3. Initialize the map of flattened nodes by recursively processing every expanded node in the expanded input that is not an expanded IRI, typed literal or language literal, in depth-first order:
    1. If the expanded node does not contain a @subject key, add a new key-value pair to the expanded node where the key is @subject and the value is the concatenation of the labeling prefix and the string value of the labeling counter . Increment the labeling counter .
    2. Add the expanded node to the map of flattened nodes :
      1. If the expanded node 's label is already in the map of flattened nodes , merge all properties from the entry in the map of flattened nodes into the expanded node .
      2. Go through every property associated with an array in the expanded node and remove any duplicate IRI entries from the array. If the resulting array only has one IRI entry, change it from an array to an object.
      3. Set the entry for the expanded node 's label in the map of flattened nodes to the expanded node .
    3. After exiting the recursive step, replace the reference to the expanded node with an object containing a single key-value pair where the key is @iri and the value is the value of the @subject key in the node.
  4. For every entry in the map of flattened nodes , insert a key-value pair into the node state map where the key is the key from the map of flattened nodes and the value is a node state whose node reference refers to the value from the map of flattened nodes .
  5. Populate the incoming list for each node state by iterating over every node in the graph and adding its label to the incoming list associated with each node found in its properties.
  6. For every entry in the node state map that has a label that begins with _:c14n , relabel the node using the Node Relabeling Algorithm .
  7. Label all of the nodes that contain a @subject key associated with a value starting with _: according to the steps in the Deterministic Labeling Algorithm .

6.11.4 Node Relabeling Algorithm

This algorithm renames a node by generating a unique new label and updating all references to that label in the node state map . The old label and the normalization state must be given as an input to the algorithm. The old label is the current label of the node that is to be relabeled.

The node relabeling algorithm is as follows:

  1. If the labeling prefix is _:c14n and the old label begins with _:c14n then return as the node has already been renamed.
  2. Generate the new label by concatenating the labeling prefix with the string value of the labeling counter . Increment the labeling counter .
  3. For the node state associated with the old label , update every node in the incoming list by changing all the properties that reference the old label to the new label .
  4. Change the old label key in the node state map to the new label and set the associated node reference 's label to the new label .
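
A non-normative JavaScript sketch of these relabeling steps is shown below. It assumes the normalization state sketched earlier and that nodes refer to one another with objects of the form { "@iri": ... }, with @subject holding such an object; a real implementation must also handle plain string labels.

function relabelNode(state, oldLabel) {
  // step 1: the node already carries a canonical label
  if (state.labelingPrefix === '_:c14n' && oldLabel.indexOf('_:c14n') === 0) {
    return;
  }
  // step 2: generate the new label and increment the labeling counter
  var newLabel = state.labelingPrefix + state.labelingCounter;
  state.labelingCounter += 1;
  var nodeState = state.nodeStateMap[oldLabel];
  // step 3: update every node in the incoming list that references the old label
  nodeState.incomingList.forEach(function (referrerLabel) {
    var node = state.nodeStateMap[referrerLabel].nodeReference;
    Object.keys(node).forEach(function (property) {
      if (property === '@subject') { return; }
      var values = Array.isArray(node[property]) ? node[property] : [node[property]];
      values.forEach(function (v) {
        if (v && v['@iri'] === oldLabel) { v['@iri'] = newLabel; }
      });
    });
  });
  // step 4: move the node state to its new key and rename the node itself
  delete state.nodeStateMap[oldLabel];
  state.nodeStateMap[newLabel] = nodeState;
  nodeState.nodeReference['@subject'] = { '@iri': newLabel };
}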

6.11.5 Deterministic Labeling Algorithm

The deterministic labeling algorithm takes the normalization state and produces a list of finished nodes that is sorted and contains deterministically named and expanded nodes from the graph.

  1. Set the labeling prefix to _:c14n , the labeling counter to 1 , the list of finished nodes to an empty array, and the list of unfinished nodes to an empty array.
  2. For each node reference in the node state map :
    1. If the node's label does not start with _: then put the node reference in the list of finished nodes .
    2. If the node's label does start with _: then put the node reference in the list of unfinished nodes .
  3. Append to the list of finished nodes by processing the remainder of the list of unfinished nodes until it is empty:
    1. Sort the list of unfinished nodes in descending order according to the Deep Comparison Algorithm to determine the sort order.
    2. Create a list of labels and initialize it to an empty array.
    3. For the first node from the list of unfinished nodes :
      1. Add its label to the list of labels .
      2. For each key-value pair from its associated outgoing serialization map , add the key to a list and then sort the list according to the lexicographical order of the keys' associated values. Append the list to the list of labels .
      3. For each key-value pair from its associated incoming serialization map , add the key to a list and then sort the list according to the lexicographical order of the keys' associated values. Append the list to the list of labels .
    4. For each label in the list of labels , relabel the associated node according to the Node Relabeling Algorithm . If any outgoing serialization map contains a key that matches the label , clear the map and set the associated outgoing serialization to an empty string. If any incoming serialization map contains a key that matches the label , clear the map and set the associated incoming serialization to an empty string.
    5. Remove each node with a label that starts with _:c14n from the list of unfinished nodes and add it to the list of finished nodes .
  4. Sort the list of finished nodes in descending order according to the Deep Comparison Algorithm to determine the sort order.

6.11.6 Shallow Comparison Algorithm

The shallow comparison algorithm takes two unlabeled nodes, alpha and beta , as input and determines which one should come first in a sorted list. The following steps are executed to determine which node should come first:

  1. Compare the total number of node properties. The node with fewer properties is first.
  2. Lexicographically sort the property IRIs for each node and compare the sorted lists. If an IRI is found to be lexicographically smaller, the node containing that IRI is first.
  3. Compare the property values against one another:
    1. Create an alpha list by adding all values associated with the alpha property that are not unlabeled nodes. Track the number of unlabeled nodes not added to the list using an alpha unlabeled counter .
    2. Create a beta list by adding all values associated with the beta property that are not unlabeled nodes. Track the number of unlabeled nodes not added to the list using a beta unlabeled counter .
    3. Compare the length of alpha list and beta list . The node associated with the list containing the lesser number of items is first.
    4. Compare the alpha unlabeled counter to the beta unlabeled counter , the node associated with the lesser value is first.
    5. Sort alpha list and beta list according to the Object Comparison Algorithm as the sorting comparator. For each offset into the alpha list , compare the item at the offset against the item at the same offset in the beta list according to the Object Comparison Algorithm . The node associated with the lesser item is first.
  4. Process the incoming list s associated with each node to determine order:
    1. The node with the shortest incoming list is first.
    2. Sort the incoming list s according to incoming property and then incoming label .
    3. The node associated with the incoming list containing the fewest incoming unlabeled nodes is first.
    4. For each offset into the incoming list s, compare the associated properties and label s. The node associated with the lexicographically lesser associated property is first. The node associated with the lexicographically lesser label is first.
  5. Otherwise, the nodes are equivalent.
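
The following non-normative sketch covers only steps 1 and 2 of the shallow comparison (property counts and sorted property IRIs); the remaining steps require the value and incoming list comparisons described above.

// Steps 1 and 2 only: fewer properties first, then lexicographically
// smaller sorted property IRIs first. Returns a negative number if alpha
// is first, a positive number if beta is first, and 0 if undecided.
function shallowCompareStart(alpha, beta) {
  var alphaKeys = Object.keys(alpha).sort();
  var betaKeys = Object.keys(beta).sort();
  if (alphaKeys.length !== betaKeys.length) {
    return alphaKeys.length - betaKeys.length;
  }
  for (var i = 0; i < alphaKeys.length; i += 1) {
    if (alphaKeys[i] !== betaKeys[i]) {
      return alphaKeys[i] < betaKeys[i] ? -1 : 1;
    }
  }
  return 0; // undecided: continue with the remaining steps above
}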

6.11.7 Object Comparison Algorithm

The object comparison algorithm is designed to compare two graph node property values, alpha and beta , against one another. The algorithm is useful when sorting two lists of graph node properties.

  1. If one of the values is a string and the other is not, the value that is a string is first.
  2. If both values are strings, the lexicographically lesser string is first.
  3. If one of the values is a literal and the other is not, the value that is a literal is first.
  4. If both values are literals:
    1. The lexicographically lesser string associated with @literal is first.
    2. The lexicographically lesser string associated with @datatype is first.
    3. The lexicographically lesser string associated with @language is first.
  5. If both values are expanded IRIs, the lexicographically lesser string associated with @iri is first.
  6. Otherwise, the two values are equivalent.
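
A non-normative JavaScript sketch of the object comparison is shown below; values are assumed to be either plain strings or expanded objects carrying @literal, @datatype, @language, or @iri keys.

function compareObjects(alpha, beta) {
  var aIsString = typeof alpha === 'string';
  var bIsString = typeof beta === 'string';
  if (aIsString && !bIsString) { return -1; }     // strings come first
  if (!aIsString && bIsString) { return 1; }
  if (aIsString && bIsString) {
    return alpha < beta ? -1 : (alpha > beta ? 1 : 0);
  }
  var aIsLiteral = '@literal' in alpha;
  var bIsLiteral = '@literal' in beta;
  if (aIsLiteral && !bIsLiteral) { return -1; }   // literals before expanded IRIs
  if (!aIsLiteral && bIsLiteral) { return 1; }
  // compare @literal/@datatype/@language for literals, @iri for expanded IRIs
  var keys = aIsLiteral ? ['@literal', '@datatype', '@language'] : ['@iri'];
  for (var i = 0; i < keys.length; i += 1) {
    var a = alpha[keys[i]] || '';
    var b = beta[keys[i]] || '';
    if (a !== b) { return a < b ? -1 : 1; }
  }
  return 0; // equivalent
}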

6.11.8 Deep Comparison Algorithm

The deep comparison algorithm is used to compare the difference between two nodes, alpha and beta . A deep comparison takes the incoming and outgoing node edges in a graph into account if the number of properties and values of those properties are identical. The algorithm is helpful when sorting a list of nodes and will return whichever node should be placed first in a list if the two nodes are not truly equivalent.

When performing the steps required by the deep comparison algorithm, it is helpful to track state information about mappings. The information contained in a mapping state is described below.

mapping state
mapping counter
Keeps track of the number of nodes that have been mapped to serialization labels . It is initialized to 1 .
processed labels map
Keeps track of the label s of nodes that have already been assigned serialization label s. It is initialized to an empty map.
serialized labels map
Maps a node label to its associated serialization label . It is initialized to an empty map.
adjacent info map
Maps a serialization label to the node label associated with it, the list of sorted serialization label s for adjacent nodes, and the map of adjacent node serialization label s to their associated node label s. It is initialized to an empty map.
key stack
A stack where each element contains an array of adjacent serialization label s and an index into that array. It is initialized to a stack containing a single element where its array contains a single string element s1 and its index is set to 0 .
serialized keys
Keeps track of which serialization label s have already been written at least once to the serialization string . It is initialized to an empty map.
serialization string
A string that is incrementally updated as a serialization is built. It is initialized to an empty string.

The deep comparison algorithm is as follows:

  1. Perform a comparison between alpha and beta according to the Shallow Comparison Algorithm . If the result does not show that the two nodes are equivalent, return the result.
  2. Compare outgoing and then incoming edges for each node, updating their associated node state as each node is processed:
    1. If the outgoing serialization map for alpha is empty, generate the serialization according to the Node Serialization Algorithm . Provide alpha 's node state , a new mapping state , and outgoing direction to the algorithm as inputs.
    2. If the outgoing serialization map for beta is empty, generate the serialization according to the Node Serialization Algorithm . Provide beta 's node state , a new mapping state , and outgoing direction to the algorithm as inputs.
    3. If alpha 's outgoing serialization is lexicographically less than beta 's, then alpha is first. If it is greater, then beta is first.
    4. If the incoming serialization map for alpha is empty, generate the serialization according to the Node Serialization Algorithm . Provide alpha 's node state , a new mapping state with its serialized labels map set to a copy of alpha 's outgoing serialization map , and incoming direction to the algorithm as inputs.
    5. If the incoming serialization map for beta is empty, generate the serialization according to the Node Serialization Algorithm . Provide beta 's node state , a new mapping state with its serialized labels map set to a copy of beta 's outgoing serialization map , and incoming direction to the algorithm as inputs.
    6. If alpha 's incoming serialization is lexicographically less than beta 's, then alpha is first. If it is greater, then beta is first.

6.11.9 Node Serialization Algorithm

The node serialization algorithm takes a node state , a mapping state , and a direction (either outgoing direction or incoming direction ) as inputs and generates a deterministic serialization for the node reference .

  1. If the label already exists in the processed labels map , terminate the algorithm as the serialization label has already been created.
  2. Set the value associated with the label in the processed labels map to true .
  3. Generate the serialization label for the label according to the Serialization Label Generation Algorithm .
  4. Create an empty array called the list of unserialized labels .
  5. For every label in a list, where the list is the outgoing list if the direction is outgoing direction and the incoming list otherwise, if the label starts with _: , it is the target node label :
    1. Look up the target node label in the processed labels map and, if an entry exists, update the serialized labels map where the key is the value in the serialization map and the value is the target node label .
    2. Otherwise, add the target node label to the list of unserialized labels .
  6. Set the maximum serialization combinations to 1 or the length of the list of unserialized labels , whichever is greater.
  7. While the maximum serialization combinations is greater than 0 , perform the Combinatorial Serialization Algorithm and decrement the maximum serialization combinations by 1 for each iteration.

6.11.10 Serialization Label Generation Algorithm

The algorithm generates a serialization label given a label and a mapping count .

    1. If the label starts with the string _:c14n , the serialization label is the letter c followed by the number that follows _:c14n in the label .
    2. Otherwise, the serialization label is the letter s followed by the string value of mapping count . Increment the mapping count by 1, ensuring that the value persists across multiple invocations of this algorithm.
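
A non-normative sketch of label generation, assuming the mapping counter is held in a mapping state object as described in the Deep Comparison Algorithm:

function generateSerializationLabel(label, mappingState) {
  if (label.indexOf('_:c14n') === 0) {
    // canonical labels reuse their existing number, e.g. "_:c14n3" -> "c3"
    return 'c' + label.substr('_:c14n'.length);
  }
  var serializationLabel = 's' + mappingState.mappingCounter;
  mappingState.mappingCounter += 1;   // the counter persists across invocations
  return serializationLabel;
}
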
6.11.11 Combinatorial Serialization Algorithm

    The combinatorial serialization algorithm takes a label , a serialization map , a serialization label , a processed labels map , a serialized labels map , and a list of unserialized labels as inputs and generates deterministic serializations for all possible combinations of graphs.

    1. If the list of unserialized labels is not empty:
      1. Copy the serialization map to the serialization map copy .
      2. Remove the first unserialized label from the list of unserialized labels and create a new serialization label according to the Serialization Label Generation Algorithm , passing the unserialized label and the mapping counter as parameters.
      3. Create a new key-value mapping in the serialization map copy where the key is the new serialization label and the value is the unserialized label .
      4. Set the maximum serialization rotations to 1 or the length of the list of unserialized labels , whichever is greater.
      5. While the maximum serialization rotations is greater than 0 :
        1. If this is the first iteration in the loop, perform the Combinatorial Serialization Algorithm passing in the label , the serialization map copy , the serialization label , the processed labels map , serialized labels map , and the list of unserialized labels .
        2. If this is not the first iteration in the loop, perform the Combinatorial Serialization Algorithm passing in the label , the serialization map copy , the serialization label , and temporary copies of the processed labels map , serialized labels map , and the list of unserialized labels .
        3. Decrement the maximum serialization rotations by 1 for each iteration, rotating the list of unserialized labels on each iteration.
    2. If the list of unserialized labels is empty:
      1. ???Save an entry mapping from the node's serialization label to the serialized labels map and its sorted keys, then perform the Mapping Serialization Algorithm:
        1. ???If the serialization is lexicographically less than the current serialization or the current serialization is null, then iterate over the sorted keys, get the adjacent node for each key, and recursively perform the Node Serialization Algorithm on each iteration.
        2. ???Do the Mapping Serialization Algorithm, then if the serialization is lexicographically less than the current serialization or the current serialization is null, set it as the least serialization for the node in the given edge direction ( outgoing direction or incoming direction ).

    6.11.12 Mapping Serialization Algorithm

    This algorithm makes use of the following state: the map of all labels , the map of all properties , the key stack , and the serialization string .

    The mapping serialization algorithm incrementally updates the relation serialization for a mapping .

    1. If the key stack is not empty:
      1. Pop the list of serialization keys off of the key stack .
      2. For each serialization key in the list of serialization keys :
        1. If the serialization key is not in the ???list of adjacent nodes???, push the list of serialization keys onto the key stack and exit from this loop.
        2. If the serialization key is a key in the completed serialization key map , a cycle has been detected. Append the concatenation of the _ character and the serialization key to the serialization string .
        3. Otherwise, serialize all outgoing and incoming edges in the graph by performing the following steps:
          1. Mark the serialization key as being processed by adding a new key-value pair to the completed serialization key map where the key is the serialization key and the value is true .
          2. Set the serialization fragment to the value of the serialization key .
          3. Set the list of adjacent node keys by using the serialization key to look up the list in the adjacent node keys map .
          4. Set the adjacent node label ???somehow???.
          5. If a mapping for the adjacent node label exists in the map of all labels :
            1. Append the result of the Label Serialization Algorithm to the serialization fragment .

    6.11.13 Label Serialization Algorithm

    This algorithm makes use of the following state: the map of properties , the label serialization , the label , the incoming map , the adjacent node labels , and the key stack .

    1. Initialize the label serialization to an empty string.
    2. Append the [ character to the label serialization .
    3. Append all properties to the label serialization by processing each key-value pair in the map of properties , excluding the @subject property ???do the map keys need to be sorted???:
      1. Build a string using the pattern < KEY > where KEY is the current key. Append the string to the label serialization .
      2. The value may be a single object or an array of objects. Process all of the objects associated with the key, building an object string for each item:
        1. If the object contains an @iri key with a value that starts with _: , set the object string to the value _: . If the value does not start with _: , build the object string using the pattern < IRI > where IRI is the value associated with the @iri key.
        2. If the object contains a @literal key and a @datatype key, build the object string using the pattern " LITERAL "^^ < DATATYPE > where LITERAL is the value associated with the @literal key and DATATYPE is the value associated with the @datatype key.
        3. If the object contains a @literal key and a @language key, build the object string using the pattern " LITERAL "@ LANGUAGE where LITERAL is the value associated with the @literal key and LANGUAGE is the value associated with the @language key.
        4. Otherwise, the value is a string. Build the object string using the pattern " LITERAL " where LITERAL is the value associated with the current key.
        5. If this is the second iteration of the loop, append a | separator character to the label serialization .
        6. Append the object string to the label serialization .
    4. Append the ] character to the label serialization .
    5. Append the [ character to the label serialization .
    6. Append all incoming references for the current label to the label serialization by processing all of the items associated with the label in the incoming map :
      1. Build a reference string using the pattern < PROPERTY > < REFERER > where PROPERTY is the property associated with the incoming reference and REFERER is the subject of the node making the incoming reference, or _: if that subject begins with _: .
      2. If this is the second iteration of the loop, append a | separator character to the label serialization .
      3. Append the reference string to the label serialization .
    7. Append the ] character to the label serialization .
    8. Append all adjacent node labels to the label serialization by concatenating their string values, one after the other, to the label serialization .
    9. Push the adjacent node labels onto the key stack and append the result of the Mapping Serialization Algorithm to the label serialization .

    6.12 Data Round Tripping

    When normalizing xsd:double values, implementers must ensure that the normalized value is a string. In order to generate the string from a double value, output equivalent to the printf("%1.6e", value) function in C must be used where "%1.6e" is the string formatter and value is the value to be converted.

    To convert a double value in JavaScript, implementers can use the following snippet of code:

    // the variable 'value' below is the JavaScript native double value that is to be converted
    (value).toExponential(6).replace(/(e(?:\+|-))([0-9])$/, '$10$2');
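
    As a non-normative illustration, applying the snippet above to the JavaScript number 2.3 produces the canonical form required by the formatter:

    // (2.3).toExponential(6) yields "2.300000e+0"; the replace() call pads the
    // single-digit exponent to two digits.
    (2.3).toExponential(6).replace(/(e(?:\+|-))([0-9])$/, '$10$2');  // "2.300000e+00"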
    

    When data needs to be normalized, JSON-LD authors should not use values that are going to undergo automatic conversion. This is due to the lossy nature of xsd:double values.

    Round-tripping data can be problematic if we mix and match @coerce rules with JSON-native datatypes, like integers. Consider the following code example:

    var myObj = { "@context" : { 
                    "number" : "http://example.com/vocab#number",
                    "@coerce": {
                       "xsd:nonNegativeInteger": "number"
                    }
                  },
                  "number" : 42 };
    // Map the language-native object to JSON-LD
    var jsonldText = jsonld.normalize(myObj);
    // Convert the normalized object back to a JavaScript object
    var myObj2 = jsonld.parse(jsonldText);
    

    At this point, myObj2 and myObj will have different values for the "number" value. myObj will be the number 42, while myObj2 will be the string "42". This type of data round-tripping error can bite developers. We are currently wondering if having a "coerce validation" phase in the parsing/normalization phases would be a good idea. It would prevent data round-tripping issues like the one mentioned above.

    6.13 RDF Conversion

    A JSON-LD document may be converted to any other RDF-compatible document format using the algorithm specified in this section.

    The JSON-LD Processing Model describes processing rules for extracting RDF from a JSON-LD document. Note that many uses of JSON-LD may not require generation of RDF.

    The processing algorithm described in this section is provided in order to demonstrate how one might implement a JSON-LD to RDF processor. Conformant implementations are only required to produce the same type and number of triples during the output process and are not required to implement the algorithm exactly as described.

    The RDF Conversion Algorithm is a work in progress.

    6.13.1 Overview

    This section is non-normative.

    JSON-LD is intended to have an easy to parse grammar that closely models existing practice in using JSON for describing object representations. This allows the use of existing libraries for parsing JSON in a document-oriented fashion, or can allow for stream-based parsing similar to SAX.

    As with other grammars used for describing Linked Data , a key concept is that of a resource . Resources may be of three basic types: IRI s, for describing externally named entities; BNodes , resources for which an external name does not exist or is not known; and Literals, which describe terminal entities such as strings, dates and other representations having a lexical representation, possibly including an explicit language or datatype.

    Data described with JSON-LD may be considered to be the representation of a graph made up of subject and object resources related via a property resource. However, specific implementations may choose to operate on the document as a normal JSON description of objects having attributes.

    6.13.2 RDF Conversion Algorithm Terms

    default graph
    the destination graph for all triples generated by JSON-LD markup.

    6.13.3 RDF Conversion Algorithm

    The algorithm below is designed for in-memory implementations with random access to JSON object elements.

    A conforming JSON-LD processor implementing RDF conversion must implement a processing algorithm that results in the same default graph that the following algorithm generates:

    1. Create a new processor state with the active context set to the initial context and active subject and active property initialized to NULL.
    2. If a JSON object is detected, perform the following steps:
      1. If the JSON object has a @context key, process the local context as described in Context .
      2. Create a new JSON object by mapping the keys from the current JSON object using the active context to new keys using the associated value from the current JSON object . Repeat the mapping until no entry is found within the active context for the key. Use the new JSON object in subsequent steps.
      3. If the JSON object has an @iri key, set the active object by performing IRI Expansion on the associated value. Generate a triple representing the active subject , the active property and the active object . Return the active object to the calling location.

        @iri really just behaves the same as @subject , consider consolidating them.

      4. If the JSON object has a @literal key, set the active object to a literal value as follows:
        1. as a typed literal if the JSON object contains a @datatype key, after performing IRI Expansion on the specified @datatype .
        2. otherwise, as a plain literal . If the JSON object contains a @language key, use its value to set the language of the plain literal.
        3. Generate a triple representing the active subject , the active property and the active object . Return the active object to the calling location.
      5. If the JSON object has a @subject key:
        1. If the value is a string, set the active object to the result of performing IRI Expansion . Generate a triple representing the active subject , the active property and the active object . Set the active subject to the active object .
        2. Create a new processor state using copies of the active context , active subject and active property and process the value starting at Step 2 , set the active subject to the result and proceed using the previous processor state .
      6. If the JSON object does not have a @subject key, set the active object to a newly generated blank node identifier . Generate a triple representing the active subject , the active property and the active object . Set the active subject to the active object .
      7. For each key in the JSON object that has not already been processed, perform the following steps:
        1. If the key is @type , set the active property to rdf:type .
        2. Otherwise, set the active property to the result of performing IRI Expansion on the key.
        3. Create a new processor state using copies of the active context , active subject and active property and process the value starting at Step 2 and proceed using the previous processor state .
      8. Return the active object to the calling location.
    3. If a regular array is detected, process each value in the array, returning the result of processing the last value in the array:
      1. Create a new processor state using copies of the active context , active subject and active property and process the value starting at Step 2 then proceed using the previous processor state .
    4. If a string is detected:
      1. If the active property is the target of a @iri coercion, set the active object by performing IRI Expansion on the string.
      2. Otherwise, if the active property is the target of coercion, set the active object by creating a typed literal using the string and the coercion key as the datatype IRI.
      3. Otherwise, set the active object to a plain literal value created from the string.
      Generate a triple representing the active subject , the active property and the active object .
    5. If a number is detected, generate a typed literal using a string representation of the value with datatype set to either xsd:integer or xsd:double , depending on whether the value contains a fractional and/or an exponential component. Generate a triple using the active subject , active property and the generated typed literal.
    6. Otherwise, if true or false is detected, generate a triple using the active subject , active property and a typed literal value created from the string representation of the value with datatype set to xsd:boolean .
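
    A non-normative sketch of how steps 5 and 6 might be implemented is shown below; emitTriple is an assumed callback that records a generated triple.

    // Steps 5 and 6, sketched: JSON-native numbers and booleans become typed literals.
    function convertNativeValue(activeSubject, activeProperty, value, emitTriple) {
      if (typeof value === 'number') {
        // a fractional and/or exponential component selects xsd:double
        var isDouble = value % 1 !== 0 || /e/i.test(String(value));
        emitTriple(activeSubject, activeProperty, {
          '@literal': String(value),
          '@datatype': isDouble ? 'xsd:double' : 'xsd:integer'
        });
      } else if (typeof value === 'boolean') {
        emitTriple(activeSubject, activeProperty, {
          '@literal': String(value),
          '@datatype': 'xsd:boolean'
        });
      }
    }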

    7. Experimental Concepts

    There are a few advanced concepts where it is not clear whether or not the JSON-LD specification is going to support the complexity necessary to support each concept. This entire section should be considered a set of discussion points; it is merely a list of possibilities whose benefits and drawbacks have not been fully explored.

    7.1 Disjoint Graphs

    When serializing an RDF graph that contains two or more sections of the graph which are entirely disjoint, one must use an array to express the graph as two graphs. This may not be acceptable to some authors, who would rather express the information as one graph. Since, by definition, disjoint graphs require there to be two top-level objects, JSON-LD utilizes a mechanism that allows disjoint graphs to be expressed using a single graph.

    Assume the following RDF graph:

    <http://example.org/people#john> 
       <http://www.w3.org/1999/02/22-rdf-syntax-ns#type>
          <http://xmlns.com/foaf/0.1/Person> .
    <http://example.org/people#jane> 
       <http://www.w3.org/1999/02/22-rdf-syntax-ns#type>
          <http://xmlns.com/foaf/0.1/Person> .
    

    Since the two subjects are entirely disjoint with one another, it is impossible to express the RDF graph above using a single JSON object .

    In JSON-LD, one can use the subject to express disjoint graphs as a single graph:

    {
      "@context": {
        "Person": "http://xmlns.com/foaf/0.1/Person"
      },
      "@subject": [
        {
          "@subject": "http://example.org/people#john",
          "@type": "Person"
        },
        {
          "@subject": "http://example.org/people#jane",
          "@type": "Person"
        }
      ]
    }
    

    A disjoint graph could also be expressed like so:

    [
      {
        "@subject": "http://example.org/people#john",
        "@type": "http://xmlns.com/foaf/0.1/Person"
      },
      {
        "@subject": "http://example.org/people#jane",
        "@type": "http://xmlns.com/foaf/0.1/Person"
      }
    ]
    

    7.2 Lists

    Because graphs do not describe ordering for links between nodes, multi-valued properties in JSON do not provide an ordering of the listed objects. For example, consider the following simple document:

    {
    ...
      "@subject": "http://example.org/people#joebob",
      "nick": ["joe", "bob", "jaybee"],
    ...
    }
    

    This results in three triples being generated, each relating the subject to an individual object, with no inherent order. To address this issue, RDF-based languages, such as [ TURTLE ], use the concept of an rdf:List (as described in [ RDF-SCHEMA ]). This uses a sequence of unlabeled nodes, each carrying a property that describes a value and a next property, with the final next property terminated by a nil value. Without specific syntactical support, this could be represented in JSON-LD as follows:

    {
    ...
      "@subject": "http://example.org/people#joebob",
      "nick": {
        "@first": "joe",
        "@rest": {
          "@first": "bob",
          "@rest": {
            "@first": "jaybee",
            "@rest": "@nil"
          }
        }
      },
    ...
    }
    

    As this notation is rather unwieldy and the notion of ordered collections is rather important in data modeling, it is useful to have specific language support. In JSON-LD, a list may be represented using the @list keyword as follows:

    {
    ...
      "@subject": "http://example.org/people#joebob",
      "foaf:nick": {"@list": ["joe", "bob", "jaybee"]},
    ...
    }
    

    This describes the use of this array as being ordered, and order is maintained through normalization and RDF conversion. If every use of a given multi-valued property is a list, this may be abbreviated by adding an @coerce term:

    {
      "@context": {
        ...
        "@context": {
          "@list": ["foaf:nick"]
        }
      },
    ...
      "@subject": "http://example.org/people#joebob",
      "foaf:nick": ["joe", "bob", "jaybee"],
    ...
    }
    

    7.2.1 Expansion

    TBD.

    7.2.2 Normalization

    TBD.

    7.2.3 RDF Conversion

    To support RDF Conversion of lists, RDF Conversion Algorithm is updated as follows:

    1. 2.4a. If the JSON object has a @list key and the value is an array, process the value as a list starting at Step 3a .
    2. 2.7.3. Create a new processor state using copies of the active context , active subject and active property .
      1. If the active property is the target of a @list coercion, and the value is an array, process the value as a list starting at Step 3a .
      2. Otherwise, process the value starting at Step 2 .
      3. Proceed using the previous processor state .
    3. 3a. Generate an RDF List by linking each element of the list using rdf:first and rdf:rest , terminating the list with rdf:nil using the following sequence:
      1. If the list has no element, generate a triple using the active subject , active property and rdf:nil .
      2. Otherwise, generate a triple using the active subject , active property and a newly generated BNode identified as first blank node identifier .
      3. For each element other than the last element in the list:
        1. Create a processor state using the active context, first blank node identifier as the active subject , and rdf:first as the active property .
        2. Unless this is the last element in the list, generate a new BNode identified as rest blank node identifier , otherwise use rdf:nil .
        3. Generate a new triple using first blank node identifier , rdf:rest and rest blank node identifier .
        4. Set first blank node identifier to rest blank node identifier .
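
    A non-normative sketch of step 3a is shown below; emitTriple and newBlankNode are assumed helpers, and list elements are assumed to have already been converted to objects or IRIs.

    // Step 3a, sketched: link list elements with rdf:first/rdf:rest and
    // terminate with rdf:nil.
    function convertList(activeSubject, activeProperty, list, emitTriple, newBlankNode) {
      if (list.length === 0) {
        emitTriple(activeSubject, activeProperty, 'rdf:nil');
        return;
      }
      var current = newBlankNode();                 // first blank node identifier
      emitTriple(activeSubject, activeProperty, current);
      list.forEach(function (element, i) {
        emitTriple(current, 'rdf:first', element);
        var rest = (i === list.length - 1) ? 'rdf:nil' : newBlankNode();
        emitTriple(current, 'rdf:rest', rest);
        current = rest;
      });
    }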

    A. Markup Examples

    The JSON-LD markup examples below demonstrate how JSON-LD can be used to express semantic data marked up in other languages such as RDFa, Microformats, and Microdata. These sections are merely provided as proof that JSON-LD is very flexible in what it can express across different Linked Data approaches.

    A.1 RDFa

    The following example describes three people with their respective names and homepages.

    <div prefix="foaf: http://xmlns.com/foaf/0.1/">
       <ul>
          <li typeof="foaf:Person">
            <a rel="foaf:homepage" href="http://example.com/bob/" property="foaf:name" >Bob</a>
          </li>
          <li typeof="foaf:Person">
            <a rel="foaf:homepage" href="http://example.com/eve/" property="foaf:name" >Eve</a>
          </li>
          <li typeof="foaf:Person">
            <a rel="foaf:homepage" href="http://example.com/manu/" property="foaf:name" >Manu</a>
          </li>
       </ul>
    </div>
    

    An example JSON-LD representation is described below; however, there are other ways to mark up this information such that the context is not repeated.

    {
      "@context": { "foaf": "http://xmlns.com/foaf/0.1/"},
      "@subject": [
       {
         "@subject": "_:bnode1",
         "@type": "foaf:Person",
         "foaf:homepage": "http://example.com/bob/",
         "foaf:name": "Bob"
       },
       {
         "@subject": "_:bnode2",
         "@type": "foaf:Person",
         "foaf:homepage": "http://example.com/eve/",
         "foaf:name": "Eve"
       },
       {
         "@subject": "_:bnode3",
         "@type": "foaf:Person",
         "foaf:homepage": "http://example.com/manu/",
         "foaf:name": "Manu"
       }
      ]
    }
    

    A.2 Microformats

    The following example uses a simple Microformats hCard example to express how the Microformat is represented in JSON-LD.

    <div class="vcard">
     <a class="url fn" href="http://tantek.com/">Tantek Çelik</a>
    </div>
    

    The representation of the hCard expresses the Microformat terms in the context and uses them directly for the url and fn properties. Also note that the Microformat to JSON-LD processor has generated the proper URL type for http://tantek.com/ .

    {
      "@context": 
      {
        "vcard": "http://microformats.org/profile/hcard#vcard",
        "url": "http://microformats.org/profile/hcard#url",
        "fn": "http://microformats.org/profile/hcard#fn",
        "@coerce": { "xsd:anyURI": "url" }
      },
      "@subject": "_:bnode1",
      "@type": "vcard",
      "url": "http://tantek.com/",
      "fn": "Tantek Çelik"
    }
    

    A.3 Microdata

    The Microdata example below expresses book information as a Microdata Work item.

    <dl itemscope
        itemtype="http://purl.org/vocab/frbr/core#Work"
        itemid="http://purl.oreilly.com/works/45U8QJGZSQKDH8N">
     <dt>Title</dt>
     <dd><cite itemprop="http://purl.org/dc/terms/title">Just a Geek</cite></dd>
     <dt>By</dt>
     <dd><span itemprop="http://purl.org/dc/terms/creator">Wil Wheaton</span></dd>
     <dt>Format</dt>
     <dd itemprop="http://purl.org/vocab/frbr/core#realization"
         itemscope
         itemtype="http://purl.org/vocab/frbr/core#Expression"
         itemid="http://purl.oreilly.com/products/9780596007683.BOOK">
      <link itemprop="http://purl.org/dc/terms/type" href="http://purl.oreilly.com/product-types/BOOK">
      Print
     </dd>
     <dd itemprop="http://purl.org/vocab/frbr/core#realization"
         itemscope
         itemtype="http://purl.org/vocab/frbr/core#Expression"
         itemid="http://purl.oreilly.com/products/9780596802189.EBOOK">
      <link itemprop="http://purl.org/dc/terms/type" href="http://purl.oreilly.com/product-types/EBOOK">
      Ebook
     </dd>
    </dl>
    

    Note that the JSON-LD representation of the Microdata information stays true to the desires of the Microdata community to avoid contexts and instead refer to items by their full IRI.

    [
      {
        "@subject": "http://purl.oreilly.com/works/45U8QJGZSQKDH8N",
        "@type": "http://purl.org/vocab/frbr/core#Work",
        "http://purl.org/dc/terms/title": "Just a Geek",
        "http://purl.org/dc/terms/creator": "Whil Wheaton",
        "http://purl.org/vocab/frbr/core#realization": 
          ["http://purl.oreilly.com/products/9780596007683.BOOK", "http://purl.oreilly.com/products/9780596802189.EBOOK"]
      },
      {
        "@subject": "http://purl.oreilly.com/products/9780596007683.BOOK",
        "@type": "http://purl.org/vocab/frbr/core#Expression",
        "http://purl.org/dc/terms/type": "http://purl.oreilly.com/product-types/BOOK"
      },
      {
        "@subject": "http://purl.oreilly.com/products/9780596802189.EBOOK",
        "@type": "http://purl.org/vocab/frbr/core#Expression",
        "http://purl.org/dc/terms/type": "http://purl.oreilly.com/product-types/EBOOK"
      }
    ]
    

    A.4 Mashing Up Vocabularies

    Developers would also benefit by allowing other vocabularies to be used automatically with their JSON API. There are over 200 Vocabulary Documents that are available for use on the Web today. Some of these vocabularies are:

    You can use these vocabularies in combination, like so:

    {
      "@type": "foaf:Person",
      "foaf:name": "Manu Sporny",
      "foaf:homepage": "http://manu.sporny.org/",
      "sioc:avatar": "http://twitter.com/account/profile_image/manusporny"
    }
    

    Developers can also specify their own Vocabulary documents by modifying the active context in-line using the @context keyword, like so:

    {
      "@context": { "myvocab": "http://example.org/myvocab#" },
      "@type": "foaf:Person",
      "foaf:name": "Manu Sporny",
      "foaf:homepage": "http://manu.sporny.org/",
      "sioc:avatar": "http://twitter.com/account/profile_image/manusporny",
      "myvocab:personality": "friendly"
    }
    

    The @context keyword is used to change how the JSON-LD processor evaluates key-value pairs. In this case, it is used to map one string ("myvocab") to another string, which is interpreted as an IRI. In the example above, the myvocab prefix is replaced with "http://example.org/myvocab#" wherever it is detected, so "myvocab:personality" expands to "http://example.org/myvocab#personality".

    This mechanism is a short-hand borrowed from RDF, called a CURIE, and it provides developers with an unambiguous way to map any JSON value to RDF.
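
    As a non-normative illustration, the following JavaScript sketch shows how a processor might expand a CURIE against the active context; the expandCurie function name is an assumption made for this example and is not defined by this specification.

    // Non-normative sketch: expand a CURIE such as "myvocab:personality"
    // against the active context, yielding a full IRI.
    var activeContext = { "myvocab": "http://example.org/myvocab#" };

    function expandCurie(curie, context) {
      var colon = curie.indexOf(":");
      if (colon === -1) {
        return curie;                     // not a CURIE; leave untouched
      }
      var prefix = curie.substring(0, colon);
      var reference = curie.substring(colon + 1);
      return (prefix in context)
        ? context[prefix] + reference     // "http://example.org/myvocab#personality"
        : curie;                          // unknown prefix; leave as-is
    }

    expandCurie("myvocab:personality", activeContext);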

    A.5 Acknowledgements

    The editors would like to thank Mark Birbeck, who provided a great deal of the initial push behind the JSON-LD work via his work on RDFj; Dave Longley, Dave Lehn, and Mike Johnson, who reviewed, provided feedback on, and produced several implementations of the specification; and Ian Davis, who created RDF/JSON. Thanks also to Nathan Rixham, Bradley P. Allen, Kingsley Idehen, Glenn McDonald, Alexandre Passant, Danny Ayers, Ted Thibodeau Jr., Olivier Grisel, Niklas Lindström, Markus Lanthaler, and Richard Cyganiak for their input on the specification. Another huge thank you goes out to Dave Longley, who designed many of the algorithms used in this specification, including the normalization algorithm, which was a monumentally difficult design challenge.

    B. References

    B.1 Normative references

    [BCP47]
    A. Phillips; M. Davis. Tags for Identifying Languages. September 2009. IETF Best Current Practice. URL: http://tools.ietf.org/rfc/bcp/bcp47.txt
    [RDF-CONCEPTS]
    Graham Klyne; Jeremy J. Carroll. Resource Description Framework (RDF): Concepts and Abstract Syntax. 10 February 2004. W3C Recommendation. URL: http://www.w3.org/TR/2004/REC-rdf-concepts-20040210
    [RFC3986]
    T. Berners-Lee; R. Fielding; L. Masinter. Uniform Resource Identifier (URI): Generic Syntax. January 2005. Internet RFC 3986. URL: http://www.ietf.org/rfc/rfc3986.txt
    [RFC3987]
    M. Dürst; M. Suignard. Internationalized Resource Identifiers (IRIs). January 2005. Internet RFC 3987. URL: http://www.ietf.org/rfc/rfc3987.txt
    [RFC4627]
    D. Crockford. The application/json Media Type for JavaScript Object Notation (JSON). July 2006. Internet RFC 4627. URL: http://www.ietf.org/rfc/rfc4627.txt
    [WEBIDL]
    Cameron McCormack. Web IDL. 19 December 2008. W3C Working Draft. (Work in progress.) URL: http://www.w3.org/TR/2008/WD-WebIDL-20081219

    B.2 Informative references

    [ECMA-262]
    ECMAScript Language Specification, Third Edition. December 1999. URL: http://www.ecma-international.org/publications/standards/Ecma-262.htm
    [MICRODATA]
    Ian Hickson; et al. Microdata. 04 March 2010. W3C Working Draft. URL: http://www.w3.org/TR/microdata/
    [MICROFORMATS]
    Microformats. URL: http://microformats.org
    [RDF-SCHEMA]
    Dan Brickley; Ramanathan V. Guha. RDF Vocabulary Description Language 1.0: RDF Schema. 10 February 2004. W3C Recommendation. URL: http://www.w3.org/TR/2004/REC-rdf-schema-20040210
    [RDFA-CORE]
    Shane McCarron; et al. RDFa Core 1.1: Syntax and processing rules for embedding RDF through attributes. 31 March 2011. W3C Working Draft. URL: http://www.w3.org/TR/2011/WD-rdfa-core-20110331
    [TURTLE]
    David Beckett; Tim Berners-Lee. Turtle: Terse RDF Triple Language. January 2008. W3C Team Submission. URL: http://www.w3.org/TeamSubmission/turtle/