The JSON-LD API 1.0

An Application Programming Interface for the JSON-LD Syntax

Unofficial Draft 12 January 2012

Editors:
Manu Sporny , Digital Bazaar
Gregg Kellogg , Kellogg Associates
Dave Longley , Digital Bazaar
Authors:
Dave Longley , Digital Bazaar
Manu Sporny , Digital Bazaar
Gregg Kellogg , Kellogg Associates

This document is also available in this non-normative format: diff to previous version .


Abstract

JSON [ RFC4627 ] has proven to be a highly useful object serialization and messaging format. JSON-LD [ JSON-LD ] harmonizes the representation of Linked Data in JSON by outlining a common JSON representation format for expressing directed graphs; mixing both Linked Data and non-Linked Data in a single document. This document outlines an Application Programming Interface and a set of algorithms for programmatically transforming JSON-LD documents.

Status of This Document

This document is merely a public working draft of a potential specification. It has no official standing of any kind and does not represent the support or consensus of any standards organisation.

This document is an experimental work in progress.

Table of Contents

1. Introduction

JSON, as specified in [ RFC4627 ], is a simple language for representing data on the Web. Linked Data is a technique for creating a graph of interlinked data across different documents or Web sites. Data entities are described using IRI s, which are typically dereferenceable and thus may be used to find more information about an entity, creating a "Web of Knowledge". JSON-LD is intended to be a simple publishing method for expressing not only Linked Data in JSON, but also for adding semantics to existing JSON.

JSON-LD is designed as a light-weight syntax that can be used to express Linked Data. It is primarily intended to be a way to use Linked Data in Javascript and other Web-based programming environments. It is also useful when building interoperable Web services and when storing Linked Data in JSON-based document storage engines. It is practical and designed to be as simple as possible, utilizing the large number of JSON parsers and libraries available today. It is designed to be able to express key-value pairs, RDF data, RDFa [ RDFA-CORE ] data, Microformats [ MICROFORMATS ] data, and Microdata [ MICRODATA ]. That is, it supports every major Web-based structured data model in use today.

The syntax does not necessarily require applications to change their JSON, but allows meaning to be added easily by providing context in a way that is either in-band or out-of-band. The syntax is designed not to disturb already deployed systems running on JSON, but to provide a smooth upgrade path from JSON to JSON with added semantics. Finally, the format is intended to be easy to parse, efficient to generate, convertible to RDF in one pass, and to require a very small memory footprint in order to operate.

1.1 How to Read this Document

This document is a detailed specification for a serialization of Linked Data in JSON. The document is primarily intended for the following audiences:

To understand the basics in this specification you must first be familiar with JSON, which is detailed in [ RFC4627 ]. You must also understand the JSON-LD Syntax [ JSON-LD ], which is the base syntax used by all of the algorithms in this document. To understand the API and how it is intended to operate in a programming environment, it is useful to have working knowledge of the JavaScript programming language [ ECMA-262 ] and WebIDL [ WEBIDL ]. To understand how JSON-LD maps to RDF, it is helpful to be familiar with the basic RDF concepts [ RDF-CONCEPTS ].

Examples may contain references to existing vocabularies and use prefixes to refer to Web Vocabularies. The following is a list of all vocabularies and their prefix abbreviations, as used in this document:

JSON [ RFC4627 ] defines several terms which are used throughout this document:

JSON Object
An object structure is represented as a pair of curly brackets surrounding zero or more name/value pairs (or members). A name is a string . A single colon comes after each name, separating the name from the value. A single comma separates a value from a following name. The names within an object should be unique.
array
An array is an ordered collection of values. An array structure is represented as square brackets surrounding zero or more values (or elements). Elements are separated by commas. Within JSON-LD, array order is not preserved by default, unless specific markup is provided (see Lists ). This is because the basic data model of JSON-LD is a linked data graph , which is inherently unordered.
string
A string is a sequence of zero or more Unicode characters, wrapped in double quotes, using backslash escapes. A character is represented as a single character string.
number
A number is similar to that used in most programming languages, except that the octal and hexadecimal formats are not used and that leading zeros are not allowed.
true and false
Boolean values.
null
The use of the null value is undefined within JSON-LD.
Supporting null in JSON-LD might have a number of advantages and should be evaluated. This is currently an open issue .

1.2 Linked Data

The following definition for Linked Data is the one that will be used for this specification.

  1. Linked Data is a set of documents, each containing a representation of a linked data graph.
  2. A linked data graph is an unordered labeled directed graph, where nodes are subject s or object s, and edges are properties.
  3. A subject is any node in a linked data graph with at least one outgoing edge.
  4. A subject should be labeled with an IRI (an Internationalized Resource Identifier as described in [ RFC3987 ]).
  5. An object is a node in a linked data graph with at least one incoming edge.
  6. An object may be labeled with an IRI .
  7. An object may be a subject and object at the same time.
  8. A property is an edge of the linked data graph .
  9. A property should be labeled with an IRI .
  10. An IRI that is a label in a linked data graph should be dereferencable to a Linked Data document describing the labeled subject , object or property .
  11. A literal is an object with a label that is not an IRI .

Note that the definition for Linked Data above is silent on the topic of unlabeled nodes. Unlabeled nodes are not considered Linked Data . However, this specification allows for the expression of unlabeled nodes, as most graph-based data sets on the Web contain a number of associated nodes that are not named and thus are not directly de-referenceable.

1.3 Contributing

There are a number of ways that one may participate in the development of this specification:

2. The Application Programming Interface

This API provides a clean mechanism that enables developers to convert JSON-LD data into a variety of output formats that are easier to work with in various programming languages. If a JSON-LD API is provided in a programming environment, the entirety of the following API must be implemented.

2.1 JsonLdProcessor

[NoInterfaceObject]
interface JsonLdProcessor {
    object    expand (object input, optional object? context) raises (InvalidContext);
    object    compact (object input, optional object? context) raises (InvalidContext, ProcessingError);
    object    frame (object input, object frame, object options) raises (InvalidFrame);
    DOMString normalize (object input, optional object? context) raises (InvalidContext);
    void      triples (object input, JsonLdTripleCallback tripleCallback, optional object? context) raises (InvalidContext);

};

2.1.1 Methods

compact
Compacts the given input according to the steps in the Compaction Algorithm . The input must be copied, compacted and returned if there are no errors. If the compaction fails, an appropriate exception must be thrown.
Parameter Type Nullable Optional Description
input object The JSON-LD object to perform compaction on.
context object The base context to use when compacting the input .
Exception Description
InvalidContext
INVALID_SYNTAX A general syntax error was detected in the @context . For example, if a @type key maps to anything other than @id or an absolute IRI , this exception would be raised.
LOAD_ERROR There was a problem encountered loading a remote context.
ProcessingError
LOSSY_COMPACTION The compaction would lead to a loss of information, such as a @language value.
CONFLICTING_DATATYPES The target datatype specified in the coercion rule and the datatype for the typed literal do not match.
Return type: object
expand
Expands the given input according to the steps in the Expansion Algorithm . The input must be copied, expanded and returned if there are no errors. If the expansion fails, an appropriate exception must be thrown.
Parameter Type Nullable Optional Description
input object The JSON-LD object to copy and perform the expansion upon.
context object An external context to use, in addition to the context embedded in input , when expanding the input .
Exception Description
InvalidContext
INVALID_SYNTAX A general syntax error was detected in the @context . For example, if a @type key maps to anything other than @id or an absolute IRI , this exception would be raised.
LOAD_ERROR There was a problem encountered loading a remote context.
Return type: object
frame
Frames the given input using the frame according to the steps in the Framing Algorithm . The input is used to build the framed output and is returned if there are no errors. If there are no matches for the frame, null must be returned. Exceptions must be thrown if there are errors.
Parameter Type Nullable Optional Description
input object The JSON-LD object to perform framing on.
frame object The frame to use when re-arranging the data.
options object A set of options that will affect the framing algorithm.
Exception Description
InvalidFrame
INVALID_SYNTAX A frame must be either an object or an array of objects; if the frame is neither of these types, this exception is thrown.
MULTIPLE_EMBEDS A subject IRI was specified in more than one place in the input frame. More than one embed of a given subject IRI is not allowed, and if requested, must result in this exception.
Return type: object
normalize
Normalizes the given input according to the steps in the Normalization Algorithm . The input must be copied, normalized and returned if there are no errors. If the normalization fails, null must be returned. The output is the serialized representation returned from the Normalization Algorithm . It is still an open question whether the result is a DOMString representing the serialized graph in JSON-LD, or an array representation which is in normalized form.
Parameter Type Nullable Optional Description
input object The JSON-LD object to perform normalization upon.
context object An external context to use, in addition to the context embedded in input , when expanding the input .
Exception Description
InvalidContext
INVALID_SYNTAX A general syntax error was detected in the @context . For example, if a @type key maps to anything other than @id or an absolute IRI , this exception would be raised.
LOAD_ERROR There was a problem encountered loading a remote context.
Return type: DOMString
triples
Processes the input according to the RDF Conversion Algorithm , calling the provided tripleCallback for each triple generated.
Parameter Type Nullable Optional Description
input object The JSON-LD object to process when outputting triples.
tripleCallback JsonLdTripleCallback A callback that is called each time a triple is generated from the given input .
This callback should be aligned with the RDF API.
context object An external context to use, in addition to the context embedded in input , when expanding the input .
Exception Description
InvalidContext
INVALID_SYNTAX A general syntax error was detected in the @context . For example, if a @type key maps to anything other than @id or an absolute IRI , this exception would be raised.
LOAD_ERROR There was a problem encountered loading a remote context.
Return type: void
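
The WebIDL above is the normative definition of the interface. The following is a non-normative sketch of how the methods might be invoked from JavaScript; the processor variable is assumed to be an implementation of JsonLdProcessor supplied by the environment and is not defined by this specification.

// Non-normative sketch: invoking a hypothetical JsonLdProcessor implementation.
// "processor" is assumed to be provided by the environment.
var doc = {
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": { "@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id" }
  },
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/"
};

try {
  // Expand the document so that no context is required to interpret it.
  var expanded = processor.expand(doc);

  // Compact the expanded document using an application-supplied context.
  var compacted = processor.compact(expanded, doc["@context"]);
} catch (e) {
  // InvalidContext or ProcessingError exceptions surface here; handling is
  // application-specific.
}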

2.2 JsonLdTripleCallback

The JsonLdTripleCallback is called whenever the processor generates a triple during the triples() call.

[NoInterfaceObject Callback]
interface JsonLdTripleCallback {
    void triple (DOMString subject, DOMString property, DOMString objectType, DOMString object, DOMString? datatype, DOMString? language);

};

2.2.1 Methods

triple
This callback is invoked whenever a triple is generated by the processor.
Parameter Type Nullable Optional Description
subject DOMString The subject IRI that is associated with the triple.
property DOMString The property IRI that is associated with the triple.
objectType DOMString The type of object that is associated with the triple. Valid values are IRI and literal .
object DOMString The object value associated with the subject and the property.
datatype DOMString The datatype associated with the object.
language DOMString The language associated with the object in BCP47 format.
No exceptions.
Return type: void
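
As a non-normative illustration, a callback conforming to the triple() signature above might collect the generated triples into an array; the processor variable is again assumed to implement JsonLdProcessor, and the collection strategy is an application choice rather than part of this specification.

// Non-normative sketch: a JsonLdTripleCallback that collects generated triples.
var doc = {
  "@context": { "name": "http://xmlns.com/foaf/0.1/name" },
  "name": "Manu Sporny"
};
var triples = [];

function collectTriple(subject, property, objectType, object, datatype, language) {
  triples.push({
    subject: subject,       // subject IRI
    property: property,     // property IRI
    objectType: objectType, // "IRI" or "literal"
    object: object,         // object value
    datatype: datatype,     // may be null
    language: language      // may be null, a BCP47 language tag otherwise
  });
}

// Each triple generated from the input invokes the callback.
processor.triples(doc, collectTriple);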

3. Algorithms

All algorithms described in this section are intended to operate on language-native data structures. That is, the serialization to a text-based JSON document isn't required as input or output to any of these algorithms and language-native data structures must be used where applicable.

3.1 Syntax Tokens and Keywords

JSON-LD specifies a number of syntax tokens and keywords that are used in all algorithms described in this section:

@context
Used to set the local context .
@id
Sets the active subject .
@language
Used to specify the language for a literal .
@type
Used to set the type of the active subject or the datatype of a literal .
@value
Used to specify the value of a literal .
:
The separator for JSON keys and values that use the prefix mechanism.

All JSON-LD tokens and keywords are case-sensitive.

3.2 Algorithm Terms

initial context
a context that is specified to the algorithm before processing begins. The contents of the initial context are defined in Appendix B .
active subject
the currently active subject that the processor should use when processing.
active property
the currently active property that the processor should use when processing.
active object
the currently active object that the processor should use when processing.
active context
a context that is used to resolve term s while the processing algorithm is running. The active context is the context contained within the processor state .
blank node
a blank node is a resource which is neither an IRI nor a literal . Blank nodes may be named or unnamed and often take on the role of a variable that may represent either an IRI or a literal .
local context
a context that is specified within a JSON object , specified via the @context keyword .
processor state
the processor state , which includes the active context , active subject , and active property . The processor state is managed as a stack with elements from the previous processor state copied into a new processor state when entering a new JSON object .
JSON-LD input
The JSON-LD data structure that is provided as input to the algorithm.
JSON-LD output
The JSON-LD data structure that is produced as output by the algorithm.
term
A term is a short word defined within a context that may be expanded to an IRI .
prefix
A prefix is a term that expands to a Web Vocabulary base IRI . It is typically used along with a suffix to create an IRI within a Web Vocabulary.
plain literal
A plain literal is a literal without a datatype, possibly including a language.
typed literal
A typed literal is a literal with an associated IRI which indicates the literal's datatype.

3.3 Context

Processing of a JSON-LD data structure is managed recursively. During processing, each rule is applied using information provided by the active context . Processing begins by pushing a new processor state onto the processor state stack and initializing the active context with the initial context . If a local context is encountered, information from the local context is merged into the active context .

The active context is used for expanding keys and values of a JSON object (or elements of a list (see List Processing )) using a term mapping . It is also used to maintain coercion mapping s from IRIs associated with terms to datatypes, and list mapping s for IRIs associated with terms.

A local context is identified within a JSON object having a key of @context with string , array or a JSON object value. When processing a local context , special processing rules apply:

  1. Create a new, empty local context .
  2. Let value be the value of @context
    1. If value is an array , process each element as value , in order using Step 2 .
    2. If value is a simple string , it must have the lexical form of an IRI. Set value to the result of performing IRI Expansion on value .
      1. If value is not an absolute IRI, abort this processing step. Otherwise, dereference value .
      2. If the resulting document is a JSON document, extract the top-level @context element using the JSON Pointer "/@context" as described in [ JSON-POINTER ]. Set value to the extracted content, or an empty JSON object if no value exists.
      3. Merge the local context into the active context .
  3. If value is a JSON object , perform the following steps:
    1. If value has a @language key, it must have a value that is a simple string or null . Add the language to the local context .
    2. Otherwise, for each key in value :
      1. If the key's value is a simple string , the value must have the lexical form of a term , prefix :suffix, or absolute IRI . Determine the IRI mapping by performing IRI Expansion on the associated value. If the result of the IRI mapping is an absolute IRI , merge the key-value pair into the local context term mapping .
      2. Otherwise, the key's value must be a JSON object .
        1. If the value has an @id key with a string value, the value must have the form of a term , prefix :suffix, or absolute IRI . Determine the IRI mapping by performing IRI Expansion on the associated value. If the result of the IRI mapping is an absolute IRI , merge the key-value pair into the local context term mapping .
        2. If the value has a @type key, the value must have the form of a term , prefix :suffix, absolute IRI , or the keyword @id . Determine the IRI by performing IRI Expansion on the associated value. If the result of the IRI mapping is an absolute IRI or @id , merge the key-value pair into the local context coercion mapping .
        3. If the value has a @list key, the value must be true or false . Merge the key-value pair into the local context list mapping .
      3. Merge the local context into the active context .
      4. Repeat Step 3.2 until no entries are added to the local context .

A term may appear to be an absolute IRI , as a prefix may seem to be a valid IRI scheme. When performing repeated IRI expansion, a single term used as a prefix may not have a valid mapping due to dependencies in resolving term definitions. By continuing Step 3.2 until no changes are made, mappings to IRIs created using an undefined term prefix will eventually resolve to absolute IRIs.

{ "@base": , "@coerce": { "@iri": "@type" } } Issue 43 concerns performing IRI expansion in the key position of a context definition.

3.4 IRI Expansion

Keys and some values are evaluated to produce an IRI . This section defines an algorithm for transforming a value representing an IRI into an actual IRI .

IRIs may be represented as an absolute IRI , a term , or a prefix :suffix construct.

The algorithm for generating an IRI is:

  1. Split the value into a prefix and suffix from the first occurrence of ':'.
  2. If the prefix is a '_' (underscore), the value represents a named blank node .
  3. If the active context contains a term mapping for prefix , using a case-sensitive comparison, generate an IRI by prepending the mapped prefix to the (possibly empty) suffix using textual concatenation. Note that an empty suffix and no suffix (meaning the value contains no ':' string at all) are treated equivalently.
  4. Otherwise, use the value directly as an IRI .

Previous versions of this specification used @base and @vocab to define IRI prefixes used to resolve relative IRIs. It was determined that this added too much complexity, but the issue can be re-examined in the future based on community input.
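
The following non-normative sketch illustrates the steps above in JavaScript; the termMappings object stands in for the active context's term mapping and is an assumption of this sketch rather than a structure defined by this specification.

// Non-normative sketch of IRI Expansion against a term mapping.
function expandIri(value, termMappings) {
  // Step 1: split the value into a prefix and suffix at the first ':'.
  var colon = value.indexOf(':');
  var prefix = (colon === -1) ? value : value.substring(0, colon);
  var suffix = (colon === -1) ? '' : value.substring(colon + 1);

  // Step 2: a '_' prefix denotes a named blank node; leave the value unchanged.
  if (prefix === '_') {
    return value;
  }

  // Step 3: a case-sensitive term or prefix match expands by concatenation.
  if (termMappings.hasOwnProperty(prefix)) {
    return termMappings[prefix] + suffix;
  }

  // Step 4: otherwise, use the value directly as an IRI.
  return value;
}

// For example, given {"foaf": "http://xmlns.com/foaf/0.1/"}, the value
// "foaf:name" expands to "http://xmlns.com/foaf/0.1/name".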

3.5 IRI Compaction

Some keys and values are expressed using IRIs. This section defines an algorithm for transforming an IRI to a compact IRI using the term s specified in the local context .

The algorithm for generating a compacted IRI is:

  1. Search every key-value pair in the active context for a term that is a complete match against the IRI . If a complete match is found, the resulting compacted IRI is the term associated with the IRI in the active context .
  2. If a complete match is not found, search for a partial match from the beginning of the IRI . For all matches that are found, the resulting compacted IRI is the term associated with the partially matched IRI in the active context concatenated with a colon (:) character and the unmatched part of the string. If there is more than one compacted IRI produced, the final value is the shortest and lexicographically least value of the entire set of compacted IRIs.
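
A non-normative sketch of these two steps follows; as before, termMappings stands in for the active context and is an assumption of this sketch. The behaviour when no term matches at all is not specified above, so the sketch simply returns the IRI unchanged in that case.

// Non-normative sketch of IRI Compaction against a term mapping.
function compactIri(iri, termMappings) {
  var candidates = [];
  for (var term in termMappings) {
    if (!termMappings.hasOwnProperty(term)) { continue; }
    var mapped = termMappings[term];
    // Step 1: a complete match compacts to the term itself.
    if (mapped === iri) {
      return term;
    }
    // Step 2: a partial match from the beginning of the IRI produces
    // term + ':' + the unmatched remainder of the IRI.
    if (iri.indexOf(mapped) === 0) {
      candidates.push(term + ':' + iri.substring(mapped.length));
    }
  }
  if (candidates.length === 0) {
    return iri; // assumption: leave the IRI as-is when nothing matches
  }
  // Choose the shortest, lexicographically least compacted IRI.
  candidates.sort(function (a, b) {
    return (a.length - b.length) || (a < b ? -1 : (a > b ? 1 : 0));
  });
  return candidates[0];
}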

3.6 Value Expansion

Some values in JSON-LD can be expressed in a compact form. These values are required to be expanded at times when processing JSON-LD documents.

The algorithm for expanding a value takes an active property and an active context . It is implemented as follows:

  1. If value is true , false , or a number , expand the value by adding two new key-value pairs. The first key-value pair will be @value and the string representation of value . The second key-value pair will be @type and the expanded version of xsd:boolean , xsd:integer , or xsd:double , depending on value .
  2. Otherwise, if active property is the target of an @id coercion, expand the value by adding a new key-value pair where the key is @id and the value is the expanded IRI according to the IRI Expansion rules.
  3. Otherwise, if active property is the target of typed literal coercion, expand the value by adding two new key-value pairs. The first key-value pair will be @value and the unexpanded value. The second key-value pair will be @type and the associated coercion datatype, expanded according to the IRI Expansion rules.
  4. Otherwise, if the active context has a @language , expand value by adding two new key-value pairs. The first key-value pair will be @value and the unexpanded value. The second key-value pair will be @language and the value of @language from the active context .
  5. Otherwise, value is already expanded.
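
As a non-normative illustration of these rules, assume an active context in which the active property homepage carries an @id coercion. The string value "http://manu.sporny.org/" then expands to:

{
  "@id": "http://manu.sporny.org/"
}

and the native number 5.3 , regardless of the active property, expands to:

{
  "@value": "5.3",
  "@type": "http://www.w3.org/2001/XMLSchema#double"
}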

3.7 Value Compaction

Some values, such as IRIs and typed literals, may be expressed in an expanded form in JSON-LD. These values are required to be compacted at times when processing JSON-LD documents.

The algorithm for compacting an expanded value takes an active property and an active context . It is implemented as follows:

  1. If the value may be expressed as true , false , or a number , the compacted value is the native representation of the @value value.
  2. Otherwise, if the active context contains a coercion target for the key that matches the expression of the value, compact the value using the following steps:
    1. If the coercion target is @id , the compacted value is the value associated with the @id key, processed according to the IRI Compaction steps.
    2. If the coercion target is a typed literal, the compacted value is the value associated with the @value key.
  3. Otherwise, if value contains an @id key, the compacted value is value with the value of @id processed according to the IRI Compaction steps.
  4. Otherwise, if the active context contains a @language , which matches the @language of the value, or the value has only a @value key, the compacted value is the value associated with the @value key.
  5. Otherwise, if the value contains a @type key, the compacted value is value with the @type value processed according to the IRI Compaction steps.
  6. Otherwise, the value is not modified.
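
Value compaction reverses the non-normative illustration given for Value Expansion : with the same active context, the expanded value

{
  "@id": "http://manu.sporny.org/"
}

used as the value of homepage compacts back to the string "http://manu.sporny.org/" , and an expanded object carrying only a @value key compacts to the value associated with that key.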

3.8 Expansion

Expansion is the process of taking a JSON-LD document and applying a context such that all IRIs, datatypes, and literal values are expanded so that the context is no longer necessary. JSON-LD document expansion is typically used as a part of Framing or Normalization .

For example, assume the following JSON-LD input document:

{ "@context": { "name": "http://xmlns.com/foaf/0.1/name", "homepage": "http://xmlns.com/foaf/0.1/homepage", "@coerce": { "@iri": "homepage" } }, "name": "Manu Sporny", "homepage": "http://manu.sporny.org/"
{
   "@context":
   {
      "name": "http://xmlns.com/foaf/0.1/name",
      "homepage": {
        "@id": "http://xmlns.com/foaf/0.1/homepage",
        "@type", "@id"
      }
   },
   "name": "Manu Sporny",
   "homepage": "http://manu.sporny.org/"

}

Running the JSON-LD Expansion algorithm against the JSON-LD input document provided above would result in the following output:

{ "http://xmlns.com/foaf/0.1/name": "Manu Sporny", "http://xmlns.com/foaf/0.1/homepage": { "@iri": "http://manu.sporny.org/" }
{
   "http://xmlns.com/foaf/0.1/name": "Manu Sporny",
   "http://xmlns.com/foaf/0.1/homepage": {
      "@id": "http://manu.sporny.org/"
   }

}

3.8.1 Expansion Algorithm

The algorithm takes three input variables: an active context , an active property , and a value to be expanded. To begin, the active context is set to the initial context , active property is set to nil, and value is set to the JSON-LD input .

  1. If value is an array , process each item in value recursively using this algorithm, passing copies of the active context and active property .
  2. Otherwise, if value is an object
    1. Update the active context according to the steps outlined in the Context section.
    2. For each key and value in value :
      1. If the key is @id or @type and the value is a string , expand the value according to IRI Expansion .
      2. Otherwise, if the key is @value , the value must be a string and is not subject to further expansion.
      3. Otherwise, if the key is not a keyword , expand the key according to the IRI Expansion rules and set it as the active property .
      4. If the value is an array and the active property is subject to @list expansion, replace the value with a new key-value pair where the key is @list and the value is set to the current value.
      5. If the value is an array , process each item in the array recursively using this algorithm, passing copies of the active context and active property .
      6. If the value is an object, process the object recursively using this algorithm, passing copies of the active context and active property .
      7. Otherwise, expand the value according to the Value Expansion rules, passing the active property .
    3. Remove the context from the object.
  3. Otherwise, expand value according to the Value Expansion rules, passing active property .
What are the implications for expanding lists?

3.9 Compaction

Compaction is the process of taking a JSON-LD document and applying a context such that the most compact form of the document is generated. JSON is typically expressed in a very compact, key-value format. That is, full IRIs are rarely used as keys. At times, a JSON-LD document may be received that is not in its most compact form. JSON-LD, via the API, provides a way to compact a JSON-LD document.

For example, assume the following JSON-LD input document:

{ "http://xmlns.com/foaf/0.1/name": "Manu Sporny", "http://xmlns.com/foaf/0.1/homepage": { "@iri": "http://manu.sporny.org/" }
{
  "http://xmlns.com/foaf/0.1/name": "Manu Sporny",
  "http://xmlns.com/foaf/0.1/homepage": {
    "@id": "http://manu.sporny.org/"
  }

}

Additionally, assume the following developer-supplied JSON-LD context:

{ "name": "http://xmlns.com/foaf/0.1/name", "homepage": "http://xmlns.com/foaf/0.1/homepage", "@coerce": { "@iri": "homepage" }
{
  "name": "http://xmlns.com/foaf/0.1/name",
  "homepage": {
    "@id": "http://xmlns.com/foaf/0.1/homepage",
    "@type": "@id"
  }

}

Running the JSON-LD Compaction algorithm given the context supplied above against the JSON-LD input document provided above would result in the following output:

{ "@context": { "name": "http://xmlns.com/foaf/0.1/name", "homepage": "http://xmlns.com/foaf/0.1/homepage", "@coerce": { "@iri": "homepage" } }, "name": "Manu Sporny", "homepage": "http://manu.sporny.org/"
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {
      "@id": "http://xmlns.com/foaf/0.1/homepage",
      "@type": "@id"
    }
  },
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/"

}

The compaction algorithm also enables the developer to map any expanded format into an application-specific compacted format. While the context provided above mapped http://xmlns.com/foaf/0.1/name to name , it could have also mapped it to any arbitrary string provided by the developer.

3.9.1 Compaction Algorithm

The algorithm takes two input variables: an active property and a value to be compacted. To begin, the active property is set to nil, and value is set to the result of performing the Expansion Algorithm on the JSON-LD input . This removes any existing context to allow the given context to be cleanly applied. The active context is set to the given context.

  1. If value is an array , process each item in value recursively using this algorithm, passing a copy of the active property .
  2. Otherwise, if value is an object, for each key and value in value :
    1. If the key is @id or @type
      1. If the value of the key is a string , the compacted value is the result of performing IRI Compaction on the value.
      2. Otherwise, the compacted value is the result of performing this algorithm on the value with the current active property .
    2. Otherwise:
      1. If the key is not a keyword , set it as the active property and compact the key according to IRI Compaction .
      2. If the value is an object
        1. If the value contains only an @id key or the value contains a @value key, the compacted value is the result of performing Value Compaction on the value.
        2. Otherwise, if the value contains only a @list key, and the active property is subject to list coercion, the compacted value is the result of performing this algorithm on that value.
        3. Otherwise, the compacted value is the result of performing this algorithm on the value.
      3. Otherwise, if the value is an array , the compacted value is the result of performing this algorithm on the value.
      4. Otherwise, the value is already compacted.
  3. Otherwise, the compacted value is the value .

What are the implications for compacting lists?

3.10 Framing

JSON-LD Framing allows developers to query by example and force a specific tree layout to a JSON-LD document.

A JSON-LD document is a representation of a directed graph. A single directed graph can have many different serializations, each expressing exactly the same information. Developers typically work with trees, represented as JSON object s. While mapping a graph to a tree can be done, the layout of the end result must be specified in advance. A Frame can be used by a developer on a JSON-LD document to specify a deterministic layout for a graph.

Framing is the process of taking a JSON-LD document, which expresses a graph of information, and applying a specific graph layout (called a Frame ).

The JSON-LD document below expresses a library, a book and a chapter:

{ "@context": { "Book": "http://example.org/vocab#Book", "Chapter": "http://example.org/vocab#Chapter", "contains": "http://example.org/vocab#contains", "creator": "http://purl.org/dc/terms/creator" "description": "http://purl.org/dc/terms/description" "Library": "http://example.org/vocab#Library", "title": "http://purl.org/dc/terms/title", "@coerce": { "@iri": "contains" }, }, "@subject": [{ "@subject": "http://example.com/library", "@type": "Library", "contains": "http://example.org/library/the-republic" }, { "@subject": "http://example.org/library/the-republic", "@type": "Book", "creator": "Plato", "title": "The Republic", "contains": "http://example.org/library/the-republic#introduction" }, { "@subject": "http://example.org/library/the-republic#introduction", "@type": "Chapter", "description": "An introductory chapter on The Republic.", "title": "The Introduction" }]
{
  "@context": {
    "Book":         "http://example.org/vocab#Book",
    "Chapter":      "http://example.org/vocab#Chapter",
    "contains":     {
      "@id": "http://example.org/vocab#contains",
      "@type": "@id"
    },
    "creator":      "http://purl.org/dc/terms/creator",
    "description":  "http://purl.org/dc/terms/description",
    "Library":      "http://example.org/vocab#Library",
    "title":        "http://purl.org/dc/terms/title"
  },
  "@id":
  [{
    "@id": "http://example.com/library",
    "@type": "Library",
    "contains": "http://example.org/library/the-republic"
  },
  {
    "@id": "http://example.org/library/the-republic",
    "@type": "Book",
    "creator": "Plato",
    "title": "The Republic",
    "contains": "http://example.org/library/the-republic#introduction"
  },
  {
    "@id": "http://example.org/library/the-republic#introduction",
    "@type": "Chapter",
    "description": "An introductory chapter on The Republic.",
    "title": "The Introduction"
  }]

}

Developers typically like to operate on items in a hierarchical, tree-based fashion. Ideally, a developer would want the data above sorted into top-level libraries, then the books that are contained in each library, and then the chapters contained in each book. To achieve that layout, the developer can define the following frame :

{ "@context": { "Book": "http://example.org/vocab#Book", "Chapter": "http://example.org/vocab#Chapter", "contains": "http://example.org/vocab#contains", "creator": "http://purl.org/dc/terms/creator" "description": "http://purl.org/dc/terms/description" "Library": "http://example.org/vocab#Library", "title": "http://purl.org/dc/terms/title" }, "@type": "Library", "contains": { "@type": "Book", "contains": { "@type": "Chapter" } }
{
  "@context": {
    "Book":         "http://example.org/vocab#Book",
    "Chapter":      "http://example.org/vocab#Chapter",
    "contains":     "http://example.org/vocab#contains",
    "creator":      "http://purl.org/dc/terms/creator"
    "description":  "http://purl.org/dc/terms/description"
    "Library":      "http://example.org/vocab#Library",
    "title":        "http://purl.org/dc/terms/title"
  },
  "@type": "Library",
  "contains": {
    "@type": "Book",
    "contains": {
      "@type": "Chapter"
    }
  }

}

When the framing algorithm is run against the previously defined JSON-LD document, paired with the frame above, the following JSON-LD document is the end result:

{ "@context": { "Book": "http://example.org/vocab#Book", "Chapter": "http://example.org/vocab#Chapter", "contains": "http://example.org/vocab#contains", "creator": "http://purl.org/dc/terms/creator" "description": "http://purl.org/dc/terms/description" "Library": "http://example.org/vocab#Library", "title": "http://purl.org/dc/terms/title" }, "@subject": "http://example.org/library", "@type": "Library", "contains": { "@type": "Book", "contains": { "@type": "Chapter", }, },
{
  "@context": {
    "Book":         "http://example.org/vocab#Book",
    "Chapter":      "http://example.org/vocab#Chapter",
    "contains":     "http://example.org/vocab#contains",
    "creator":      "http://purl.org/dc/terms/creator"
    "description":  "http://purl.org/dc/terms/description"
    "Library":      "http://example.org/vocab#Library",
    "title":        "http://purl.org/dc/terms/title"
  },
  "@id": "http://example.org/library",
  "@type": "Library",
  "contains": {
    "@id": "http://example.org/library/the-republic",

    "@type": "Book",
    "creator": "Plato",
    "title": "The Republic",

    "contains": {
      "@id": "http://example.org/library/the-republic#introduction",

      "@type": "Chapter",
      "description": "An introductory chapter on The Republic.",
      "title": "The Introduction"

    }
  }

}

3.10.1 Framing Algorithm Terms

This algorithm is a work in progress, do not implement it. There was also a recent update to the algorithm in order to auto-embed frame-unspecified data (if the explicit inclusion flag is not set) in order to preserve graph information. This change is particularly important for comparing subgraphs (or verifying digital signatures on subgraphs). This change is not yet reflected in the algorithm below.

input frame
the initial frame provided to the framing algorithm.
framing context
a context containing the object embed flag , the explicit inclusion flag and the omit default flag .
object embed flag
a flag specifying that objects should be directly embedded in the output, instead of being referred to by their IRI .
explicit inclusion flag
a flag specifying that for properties to be included in the output, they must be explicitly declared in the framing context .
omit missing properties flag
a flag specifying that properties that are missing from the JSON-LD input should be omitted from the output.
omit default flag
Referenced from framing context , but not defined
match limit
A value specifying the maximum number of matches to accept when building arrays of values during the framing algorithm. A value of -1 specifies that there is no match limit.
map of embedded subjects
A map that tracks if a subject has been embedded in the output of the Framing Algorithm .

3.10.2 Framing Algorithm

The framing algorithm takes JSON-LD input that has been normalized according to the Normalization Algorithm ( normalized input ), an input frame that has been expanded according to the Expansion Algorithm ( expanded frame ), and a number of options and produces JSON-LD output . The following series of steps is the recursive portion of the framing algorithm:

  1. Initialize the framing context by setting the object embed flag , clearing the explicit inclusion flag , and clearing the omit missing properties flag . Override these values based on input options provided to the algorithm by the application.
  2. Generate a list of frames by processing the expanded frame :
    1. If the expanded frame is not an array , set match limit to 1, place the expanded frame into the list of frames , and set the JSON-LD output to null .
    2. If the expanded frame is an empty array , place an empty object into the list of frames , set the JSON-LD output to an array , and set match limit to -1.
    3. If the expanded frame is a non-empty array , add each item in the expanded frame into the list of frames , set the JSON-LD output to an array , and set match limit to -1.
  3. Create a match array for each expanded frame in the list of frames halting when either the match limit is zero or the end of the list of frames is reached. If an expanded frame is not an object, the processor must throw an Invalid Frame Format exception. Add each matching item from the normalized input to the matches array and decrement the match limit by 1 if:
    1. The expanded frame has an rdf:type that exists in the item's list of rdf:type s. Note: the rdf:type can be an array , but only one value needs to be in common between the item and the expanded frame for a match.
    2. The expanded frame does not have an rdf:type property, but every property in the expanded frame exists in the item.

    matches array not defined anywhere.

  4. Process each item in the match array with its associated match frame :
    1. If the match frame contains an @embed keyword , set the object embed flag to its value. If the match frame contains an @explicit keyword , set the explicit inclusion flag to its value. Note: if the keyword exists, but the value is neither true nor false , set the associated flag to true .
    2. If the object embed flag is cleared and the item has the @id property, replace the item with the value of the @id property.
    3. If the object embed flag is set and the item has the @id property, and its IRI is in the map of embedded subjects , throw a Duplicate Embed exception.
    4. If the object embed flag is set and the item has the @id property and its IRI is not in the map of embedded subjects :
      1. If the explicit inclusion flag is set, then delete any key from the item that does not exist in the match frame , except @id .
      2. For each key in the match frame , except for keywords and rdf:type :
        1. If the key is in the item, then build a new recursion input list using the object or objects associated with the key. If any object contains an @id value that exists in the normalized input , replace the object in the recursion input list with a new object containing the @id key where the value is the value of the @id , and all of the other key-value pairs for that subject. Set the recursion match frame to the value associated with the match frame 's key. Replace the value associated with the key by recursively calling this algorithm using recursion input list , recursion match frame as input.
        2. If the key is not in the item, add the key to the item and set the associated value to an empty array if the match frame key's value is an array or null otherwise.
        3. If the value associated with the item's key is null , process the omit missing properties flag :
          1. If the value associated with the key in the match frame is an array, use the first frame from the array as the property frame , otherwise set the property frame to an empty object.
          2. If the property frame contains an @omitDefault keyword , set the omit missing properties flag to its value. Note: if the keyword exists, but the value is neither true nor false , set the associated flag to true .
          3. If the omit missing properties flag is set, delete the key in the item. Otherwise, if the @default keyword is set in the property frame , set the item's value to the value of @default .
    5. If the JSON-LD output is null set it to the item, otherwise, append the item to the JSON-LD output .
  5. Return the JSON-LD output .

The final, non-recursive step of the framing algorithm requires the JSON-LD output to be compacted according to the Compaction Algorithm by using the context provided in the input frame . The resulting value is the final output of the compaction algorithm and is what should be returned to the application.

What are the implications for framing lists?
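
The options parameter of the frame() method carries the flags described above. The sketch below is non-normative; the option names embed, explicit and omitDefault, as well as the variables processor, libraryDocument and libraryFrame (assumed to hold the library document and frame shown earlier in this section), are assumptions of this sketch and are not normatively defined by this document.

// Non-normative sketch: invoking frame() with options that set the framing
// flags. The option names used here are illustrative assumptions only.
var options = {
  embed: true,        // object embed flag
  explicit: false,    // explicit inclusion flag
  omitDefault: false  // omit default flag
};

try {
  var framed = processor.frame(libraryDocument, libraryFrame, options);
  if (framed === null) {
    // No matches were found for the frame.
  }
} catch (e) {
  // InvalidFrame exceptions surface here.
}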

3.11 Normalization

This algorithm is a work in progress, do not implement it.

Normalization is the process of taking JSON-LD input and performing a deterministic transformation on that input that results in a normalized representation.

Normalization is achieved by transforming the JSON-LD input to RDF, as described in RDF Conversion , invoking the normalization procedure described in [ RDF-NORMALIZATION ], and returning the serialized results.

There is an open issue ( ISSUE-53 ) on the purpose and results of performing normalization. Previous versions of this specification generated JSON-LD as the result of the normalization algorithm; however, a normalized representation is required across different linked data serializations. To be useful, an identical normalized representation is needed that is independent of the data format originally used for markup, or of the way in which language features or publisher preferences create differences in the markup of identical graphs. It may be that the need is for either or both of a flattening algorithm or a means to retrieve a cryptographic signature.

Normalization is useful when comparing two graphs against one another, when generating a detailed list of differences between two graphs, and when generating a cryptographic digital signature for information contained in a graph or when generating a hash of the information contained in a graph.
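
As a non-normative illustration of the comparison use case, two documents can be normalized and their serialized forms compared directly; the processor variable is assumed to implement JsonLdProcessor, and normalize() is assumed to return the serialized normalized form as described above.

// Non-normative sketch: using normalize() to test whether two JSON-LD
// documents express the same graph.
function sameGraph(docA, docB) {
  var normA = processor.normalize(docA);
  var normB = processor.normalize(docB);
  return normA === normB;
}

// A cryptographic digest of the normalized form can likewise serve as a
// stable hash of the information contained in the graph.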

The example below is an un-normalized JSON-LD document:

{ "@context": { "name": "http://xmlns.com/foaf/0.1/name", "homepage": "http://xmlns.com/foaf/0.1/homepage", "xsd": "http://www.w3.org/2001/XMLSchema#", "@coerce": { "@iri": ["homepage"] } }, "name": "Manu Sporny", "homepage": "http://manu.sporny.org/"
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {
      "@id": "http://xmlns.com/foaf/0.1/homepage",
      "@type": "@id"
    },
    "xsd": "http://www.w3.org/2001/XMLSchema#"
  },
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/"

}

The example below is the normalized form of the JSON-LD document above:

Whitespace is used below to aid readability.

[{ "@subject": { "@iri": "_:c14n0" }, "http://xmlns.com/foaf/0.1/homepage": { "@iri": "http://manu.sporny.org/" }, "http://xmlns.com/foaf/0.1/name": "Manu Sporny"

It is not clear that whitespace must be normalized, as the JSON-LD representation can't be used directly to create a signature, but would be based on the serialized result of [ RDF-NORMALIZATION ].

[{

  "@id": "_:c14n0",
  "http://xmlns.com/foaf/0.1/homepage": {
    "@id": "http://manu.sporny.org/"
  },
  "http://xmlns.com/foaf/0.1/name": "Manu Sporny"

}]

Notice how all of the terms have been expanded and sorted in alphabetical order. Also, notice how the subject has been labeled with a named blank node identifier . Normalization ensures that any arbitrary graph containing exactly the same information would be normalized to exactly the same form shown above.

In time, there may be more than one normalization algorithm that will need to be identified. For identification purposes, this algorithm is named "Universal Graph Normalization Algorithm 2011" ( UGNA2011 ).

3.11.1 Normalization Algorithm

The normalization algorithm transforms the JSON-LD input to RDF, normalizes it according to [ RDF-NORMALIZATION ], and then transforms the result back into JSON-LD. The result is an object that deterministically represents a single JSON object .

  1. Transform the JSON-LD input to RDF according to the steps in the RDF Conversion Algorithm .
  2. Perform [ RDF-NORMALIZATION ] on that RDF to create a serialized N-Triples representation of the RDF graph.
  3. Construct a JSON array to serve as the output object.
  4. For every triple in the N-Triples document having subject , predicate , and object :
    1. If the predicate is http://www.w3.org/1999/02/22-rdf-syntax-ns#first , let object representation be object represented in expanded form as described in Value Expansion .
      1. Set value as the last entry in the array .
      2. If the node's label does start with _: then put the node reference last entry in the list of unfinished nodes . Append to the list of finished nodes by processing the remainder of the list of unfinished nodes until it value is empty: Sort the list of unfinished nodes in descending order according to the Deep Comparison Algorithm to determine the sort order. Create a list of labels and initialize subject , replace it to an empty array. For the first node from with the list of unfinished nodes : Add its label a JSON object to the list of labels . For each key-value pair from its associated outgoing serialization map , add the key to having a list and then sort the list according to the lexicographical order of the keys' associated values. Append the list to the list of nodes to label . list of nodes to label not defined. For each key-value key/value pair from its associated incoming serialization map , add the key to a list and then sort the list according to the lexicographical order of the keys' associated values. Append the list to the list of nodes to label . For each label in the list of labels , relabel the associated node according to the Node Relabeling Algorithm . If any outgoing serialization map contains a key that matches the label , clear the map and set the associated outgoing serialization to an empty string. If any incoming serialization map contains a key that matches the label , clear the map and set the associated incoming serialization to an empty string. Remove each node with a label that starts with _:c14n @list from the list of unfinished nodes and add it to the list of finished nodes . Sort the list of finished nodes in descending order according to the Deep Comparison Algorithm to determine the sort order. 3.8.6 Shallow Comparison Algorithm The shallow comparison algorithm takes two unlabeled nodes, alpha and beta , as input and determines which one should come first in a sorted list. The following algorithm determines the steps that are executed in order to determine the node that should come first in a list: Compare the total number of node properties. The node with fewer properties is first. Lexicographically sort the property IRI s for each node and compare the sorted lists. If an IRI is found to be lexicographically smaller, the node containing that IRI is first. Compare the values of each property against one another: The node associated with fewer property values is first. Create an alpha list by adding all values associated with the alpha property that are not unlabeled nodes. Create a beta list by adding all values associated with the beta property that is not an unlabeled node. Compare the length of alpha list array and beta list . The node associated with the list containing the fewer number of items is first. object representation .
      3. Sort alpha list and beta list according to the Object Comparison Algorithm . For each offset into the alpha list , compare the item at the offset against the item at Otherwise, the same offset last key/value entry in the beta list according to the Object Comparison Algorithm . The node associated with the lesser item is first. Process the incoming list s associated with each node to determine order: The node with the shortest incoming list is first. Sort the incoming list s according to incoming property and then incoming label . The node associated with the fewest number of incoming nodes is first. For each offset into the incoming list s, compare the associated properties and label s: The node associated with value must be a label that does not begin with _: is first. If the nodes' label s do not begin with _: , then the node associated with the lexicographically lesser label is first. The node associated with the lexicographically lesser associated property is first. The node with the label that does not begin with _:c14n is first. The node with the lexicographically lesser label is first. Otherwise, the nodes are equivalent. 3.8.7 Object Comparison Algorithm The JSON object comparison algorithm is designed to compare two graph node property values, alpha and beta , against the other. The algorithm is useful when sorting two lists of graph node properties. If one of the values is a string and the other is not, the value that is having a string is first. If both values are string s, the lexicographically lesser string is first. If one single key of the values is @list with a literal and the other is not, the value that is a literal is first. If both values are literals: The lexicographically lesser string associated with @literal is first. The lexicographically lesser string associated with @datatype is first. The lexicographically lesser string associated with @language is first. an array . Append object representation .
    2. If both values are expanded IRI s, the lexicographically lesser string associated with @iri Otherwise, if predicate is first. http://www.w3.org/1999/02/22-rdf-syntax-ns#rest , ignore this triple.
    3. Otherwise, the two values are equivalent. 3.8.8 Deep Comparison Algorithm The deep comparison algorithm is used to compare the difference between two nodes, alpha and beta . A deep comparison takes the incoming and outgoing node edges in a graph into account if the number of properties and value of those properties are identical. The algorithm is helpful when sorting a list of nodes and will return whichever node should be placed first in a list if the two nodes are not truly equivalent. When performing the steps required by the deep comparison algorithm, it is helpful to track state information about mappings. The information contained last entry in a mapping state is described below. mapping state mapping counter Keeps track of the number of nodes that have been mapped to serialization label s. It is initialized to 1 . processed labels map Keeps track of the label s of nodes that have already been assigned serialization label s. It is initialized to an empty map. serialized labels map Maps a node label to its associated serialization label . It is initialized to an empty map. adjacent info map Maps a serialization label to the node label associated with it, the list of sorted serialization label s for adjacent nodes, and the map of adjacent node serialization label s to their associated node label s. It is initialized to an empty map. key stack A stack where each element contains an array of adjacent serialization label s and an index into that array. It is initialized to a stack containing a single element where its array contains a single string element s1 and its index is set to 0 . serialized keys Keeps track of which serialization label s have already been written at least once to the serialization string . It is initialized to an empty map. serialization string A string that is incrementally updated as a serialization is built. It is initialized to an empty string. The deep comparison algorithm is as follows: Perform a comparison between alpha and beta according to the Shallow Comparison Algorithm . If the result does not show that the two nodes are equivalent, return the result. Compare incoming and outgoing edges for each node, updating their associated node state as each node is processed: If the outgoing serialization map for alpha is empty, generate the serialization according to the Node Serialization Algorithm . Provide alpha 's node state , a new mapping state , outgoing direction to the algorithm as inputs. If the outgoing serialization map for beta is empty, generate the serialization according to the Node Serialization Algorithm . Provide beta JSON Object 's node state , a new mapping state , and with an outgoing direction @id to the algorithm as inputs. If alpha 's outgoing serialization is lexicographically less than beta 's, then alpha is first. If it is greater, then beta is first. If the incoming serialization map for alpha is empty, generate the serialization according to the Node Serialization Algorithm . Provide alpha 's node state , a new mapping state with its serialized labels map set to having a copy value of alpha 's outgoing serialization map , and incoming direction to the algorithm as inputs. subject :
      1. If the incoming serialization map for beta is empty, generate the serialization according to the Node Serialization Algorithm . Provide beta 's node state , Create a new mapping state JSON Object with its serialized labels map set to a copy key/value pair of beta 's outgoing serialization map , and incoming direction @id to the algorithm as inputs. If alpha 's incoming serialization is lexicographically less than beta 's, then alpha is first. If it is greater, then beta is first. 3.8.9 Node Serialization Algorithm The node serialization algorithm takes a node state , a mapping state , and a direction (either outgoing direction or incoming direction ) as inputs string representation of subject and generates a deterministic serialization for the node reference . If the label exists in the processed labels map , terminate the algorithm use as the serialization label has already been created. Set the value associated with the label in the processed labels map to true . .
      2. Generate the next serialization label for the label according Otherwise, set value to the Serialization Label Generation Algorithm . Create an empty map called the adjacent serialized labels map that will store mappings from serialization label s to adjacent node label s. value.
      3. Create an empty array called the adjacent unserialized labels list that will store label s of adjacent nodes that haven't been assigned serialization label s yet.
    4. For every label in a list, where the list the outgoing list if the direction is If outgoing direction predicate and the incoming list otherwise, if the label starts with _: , it is the target node label : http://www.w3.org/1999/02/22-rdf-syntax-ns#type :
      1. Look up the target node label in the processed labels map and if a mapping exists, update the adjacent serialized labels map where the key is the value in the serialization map ( serialization map is used, but should it be directed serialization map , outgoing serialization map or incoming serialization map ? ) and the If value is the target node label . Otherwise, add the target node label to the adjacent unserialized labels list . Set the maximum serialization combinations to 1 or the length of the adjacent unserialized labels list , whichever is greater. While the maximum serialization combinations is greater than 0 , perform the Combinatorial Serialization Algorithm passing the node state , the mapping state for the first iteration and a copy has an key/value pair of it for each subsequent iteration, the generated serialization label , the direction , the adjacent serialized labels map , and the adjacent unserialized labels list . Decrement the maximum serialization combinations by 1 @type for each iteration. 3.8.10 Serialization Label Generation Algorithm The algorithm generates a serialization label given a label and a mapping state and returns the serialization label . If the label is already in the serialization labels map , return its associated value. ( serialization labels map is used, but should it be directed serialization map an array , outgoing serialization map or incoming serialization map ? ) If the label starts with append the string _:c14n , the serialization label representation of object is the letter c followed by the number to that follows _:c14n in the label . array.
      2. Otherwise, the serialization label is the letter s followed by the string if value has an key of mapping counter . Increment the mapping counter by 1 . Create @type , replace that value with a new key-value pair in the serialization labels map where the key is the label and array containing the existing value is the generated serialization label . 3.8.11 Combinatorial Serialization Algorithm The combinatorial serialization algorithm takes a node state , a mapping state , a serialization label , a direction , a adjacent serialized labels map , and a adjacent unserialized labels list as inputs and generates the lexicographically least serialization string representation of nodes relating to the node reference . If the adjacent unserialized labels list is not empty: Copy the adjacent serialized labels map to the adjacent serialized labels map copy . object .
      3. Remove the first unserialized label from the adjacent unserialized labels list and Otherwise, create a new new serialization label according to the Serialization Label Generation Algorithm . Create a new key-value mapping entry in the adjacent serialized labels map copy where the key is the new serialization label and the value is the unserialized label . Set the maximum serialization rotations to 1 or the length of the adjacent unserialized labels list , whichever is greater. While the maximum serialization rotations is greater than 0 : Recursively perform the Combinatorial Serialization Algorithm passing the mapping state for the first iteration of the loop, and a copy of it for each subsequent iteration. Rotate the elements in the adjacent unserialized labels list by shifting each of them once to the right, moving the element at the end of the list to the beginning of the list. Decrement the maximum serialization rotations by 1 for each iteration. If the adjacent unserialized labels list is empty: Create a list of keys from the keys in the adjacent serialized labels map and sort it lexicographically. Add with a key-value pair to the adjacent info map where the key is the serialization label and the value is an object containing the node reference 's label, the list of keys and the adjacent serialized labels map . Update the serialization string according to the Mapping Serialization Algorithm . If the direction is outgoing direction @type then directed serialization refers to the outgoing serialization and the directed serialization map refers to the outgoing serialization map , otherwise it refers to the incoming serialization and the directed serialization map refers to the incoming serialization map . Compare the serialization string to the directed serialization according to the Serialization Comparison Algorithm . If the serialization string is less than or equal to the directed serialization : For each value in the list of keys , run the Node Serialization Algorithm . Update the serialization string according to the Mapping Serialization Algorithm . Compare the serialization string to the directed serialization again and if it is less than or equal and the length of the serialization string is greater than or equal to the length of the directed serialization , then set the directed serialization to the serialization string and set the directed serialization map to the serialized labels map . 3.8.12 Serialization Comparison Algorithm The serialization comparison algorithm takes two serializations, alpha and beta and returns either which of the two is less than the other or that they are equal. Whichever serialization is an empty string is greater. If they are both empty strings, they are equal. Return the result of being a lexicographical comparison of alpha and beta up to the number of characters in the shortest of the two serializations. 3.8.13 Mapping Serialization Algorithm The mapping serialization algorithm incrementally updates the serialization string in a mapping state . If the key stack is not empty: Pop the serialization key info off representation of the key stack . object .
      4. For each serialization key in the serialization key info array, starting at the serialization key index from the serialization key info : If the serialization key is not in the adjacent info map , push the serialization key info onto the key stack and exit from this loop.
    5. If the serialization key is a key in serialized keys , a cycle has been detected. Append the concatenation of the _ character and the serialization Otherwise, let key to by the serialization string . Otherwise, serialize all outgoing representation of predicate and incoming edges let object representation be object represented in the related node by performing the following steps: Mark the serialization key expanded form as having been processed by adding a new key-value pair to serialized keys where the key is the serialization key and the value is true . Set the serialization fragment to the value of the serialization key . Set the adjacent info to the value of the serialization key described in the adjacent info map . Set the adjacent node label to the node label from the adjacent info map Value Expansion .
    6. If a mapping for the adjacent node label exists in the map of all labels : Append the result of the Label Serialization Algorithm to the serialization fragment . map of all labels referenced but not defined. Append all of the keys in the adjacent info map to the serialization fragment . Append the serialization fragment to the serialization string . Push a new key info object containing the keys from the adjacent info map and value has an index key/value pair of 0 onto the key stack . Recursively update the serialization string according to the Mapping Serialization Algorithm . 3.8.14 Label Serialization Algorithm The label serialization algorithm serializes information about a node that has been assigned a particular serialization label . Initialize the label serialization to and an empty string. Append the [ character to the label serialization . Append all properties to the label serialization by processing each key-value pair in the node reference array , excluding the @subject property. The keys should be processed in lexicographical order and their associated values should be processed in the order produced by the Object Comparison Algorithm : Build a string using the pattern < KEY > where KEY is the current key. Append string append object representation to the label serialization . that array.
    7. The Otherwise, if value may be a single object or has an array of objects. Process all key of the objects that are associated with the key, building an object string for each item: If the object contains an @iri key with a value , replace that starts with _: , set the object string to the value _: . If the value does not start with _: , build the object string using the pattern < IRI > where IRI is the value associated with the @iri key. If the object contains a @literal key and a @datatype key, build the object string using the pattern " LITERAL "^^< DATATYPE > where LITERAL is the value associated with the @literal key and DATATYPE is new array containing the existing value associated with the @datatype key. If the object contains a @literal key and a @language key, build the object string using the pattern " LITERAL "@ LANGUAGE where LITERAL is the value associated with the @literal key and LANGUAGE is the value associated with the @language key. representation .
    8. Otherwise, the value is create a string. Build the object string using the pattern " LITERAL " where LITERAL is the new entry in value associated with the current key. If this is the second iteration of the loop, append a | separator character to the label serialization . Append the object string to the label serialization . Append the ] character to the label serialization . Append the [ character to the label serialization . Append all incoming references for the current label to the label serialization by processing all key of the items associated with the incoming list : Build a reference string using the pattern < PROPERTY > < REFERER > where PROPERTY is the property associated with the incoming reference key and REFERER is either the subject of the node referring to the label in the incoming reference or _: if REFERER begins with _: . If this is the second iteration of the loop, append a | separator character to the label serialization . Append the reference string to the label serialization . object representation .
  5. Append the ] character to the label serialization . Append all adjacent node labels to the label serialization by concatenating the string value for all of them, one after the other, to the label serialization . adjacent node labels referenced but not defined.
  6. Push the adjacent node labels onto the key stack and append the result of the Mapping Serialization Algorithm to Return array as the label serialization . normalized graph representation.
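The following non-normative sketch illustrates the shape of the output; the blank node label, the node IRI, and the vocabulary IRI http://example.com/vocab# are hypothetical and not taken from this specification.

// Given the (hypothetical) normalized N-Triples:
//   _:c14n0 <http://example.com/vocab#knows> <http://example.com/people#bob> .
//   _:c14n0 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://example.com/vocab#Person> .
// the steps above would return an array similar to:
var normalized = [
  {
    "@id": "_:c14n0",
    "http://example.com/vocab#knows": { "@id": "http://example.com/people#bob" },
    "@type": "http://example.com/vocab#Person"
  }
];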

3.12 Data Round Tripping

When normalizing xsd:double values, implementers must ensure that the normalized value is a string. In order to generate the string from a double value, output equivalent to the printf("%1.6e", value) function in C must be used where "%1.6e" is the string formatter and value is the value to be converted.

To convert a double value in JavaScript, implementers can use the following snippet of code:

// the variable 'value' below is the JavaScript native double value that is to be converted
(value).toExponential(6).replace(/(e(?:\+|-))([0-9])$/, '$10$2')
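For example, applying the snippet to the value 2.3 pads the single-digit exponent produced by toExponential() to the two digits emitted by the printf format above (illustrative only):

(2.3).toExponential(6).replace(/(e(?:\+|-))([0-9])$/, '$10$2')  // "2.300000e+00"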

When data needs to be normalized, JSON-LD authors should not use values that are going to undergo automatic conversion. This is due to the lossy nature of xsd:double values.
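For example (an illustrative value, not taken from this specification), an integer larger than 2^53 cannot survive conversion to xsd:double :

// 9007199254740993 (2^53 + 1) cannot be represented exactly as an IEEE-754 double,
// so the nearest representable double is used and the original value is lost:
(9007199254740993).toExponential(6)  // "9.007199e+15"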

Some JSON serializers, such as PHP's native implementation, backslash-escape the forward slash character. For example, the value http://example.com/ would be serialized as http:\/\/example.com\/ in some versions of PHP. This is problematic when generating a byte stream for processes such as normalization. There is no need to backslash-escape forward slashes in JSON-LD. To aid interoperability between JSON-LD processors, a JSON-LD serializer must not backslash-escape forward slashes.
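For example, JavaScript's built-in JSON serializer already conforms to this requirement:

// JSON.stringify() does not backslash-escape forward slashes
JSON.stringify({ "@id": "http://example.com/" })  // '{"@id":"http://example.com/"}'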

Round-tripping data can be problematic if we mix and match coercion rules with JSON-native datatypes, like integers. Consider the following code example:

var myObj = { "@context" : { "number" : "http://example.com/vocab#number", "@coerce": { "xsd:nonNegativeInteger": "number" } }, "number" : 42 }; // Map the language-native object to JSON-LD var jsonldText = jsonld.normalize(myObj); // Convert the normalized object back to a JavaScript object
var myObj = { "@context" : {
                "number" : {
                  "@id": "http://example.com/vocab#number",
                  "@type": "xsd:nonNegativeInteger"
                }
              },
              "number" : 42 };

// Map the language-native object to JSON-LD
var jsonldText = jsonld.normalize(myObj);

// Convert the normalized object back to a JavaScript object

var myObj2 = jsonld.parse(jsonldText);

At this point, myObj2 and myObj will have different values for the "number" value. myObj will be the number 42, while myObj2 will be the string "42". This type of data round-tripping error can bite developers. We are currently wondering if having a "coercion validation" phase in the parsing/normalization phases would be a good idea. It would prevent data round-tripping issues like the one mentioned above.
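A minimal, purely hypothetical sketch of what such a validation step could look like; the helper name and the warning behavior below are assumptions and are not part of this API:

// Hypothetical sketch: warn when a JSON-native number is coerced to a datatype that
// will not convert back to a JSON-native number after normalization and re-parsing.
function warnOnLossyCoercion(value, coercedType) {
  if (typeof value === "number" &&
      coercedType !== "xsd:integer" && coercedType !== "xsd:double") {
    console.warn("Value " + value + " coerced to " + coercedType +
                 " will round-trip as a string");
  }
}
warnOnLossyCoercion(42, "xsd:nonNegativeInteger"); // logs the warning for the example above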

3.13 RDF Conversion

A JSON-LD document may be converted to any other RDF-compatible document format using the algorithm specified in this section.

The JSON-LD Processing Model describes processing rules for extracting RDF from a JSON-LD document. Note that many uses of JSON-LD may not require generation of RDF.

The processing algorithm described in this section is provided in order to demonstrate how one might implement a JSON-LD to RDF processor. Conformant implementations are only required to produce the same type and number of triples during the output process and are not required to implement the algorithm exactly as described.

The RDF Conversion Algorithm is a work in progress.

3.13.1 Overview

This section is non-normative.

JSON-LD is intended to have an easy to parse grammar that closely models existing practice in using JSON for describing object representations. This allows the use of existing libraries for parsing JSON.

As with other grammars used for describing Linked Data , a key concept is that of a resource . Resources may be of three basic types: IRI s, for describing externally named entities, BNodes , resources for which an external name does not exist, or is not known, and Literals, which describe terminal entities such as strings, dates and other representations having a lexical representation possibly including an explicit language or datatype.

An Internationalized Resource Identifier ( IRI ), as described in [ RFC3987 ], is a mechanism for representing unique identifiers on the web. In Linked Data , an IRI is commonly used for expressing a subject , a property or an object .

Data described with JSON-LD may be considered to be the representation of a graph made up of subject and object resource s related via a property resource . However, specific implementations may choose to operate on the document as a normal JSON description of objects having attributes.

3.13.2 RDF Conversion Algorithm Terms

default graph
the destination graph for all triples generated by JSON-LD markup.

3.13.3 RDF Conversion Algorithm

The algorithm below is designed for in-memory implementations with random access to JSON object elements.

A conforming JSON-LD processor implementing RDF conversion must implement a processing algorithm that results in the same default graph that the following algorithm generates:

  1. Create a new processor state with the active context set to the initial context and active subject and active property initialized to NULL.
  2. If a JSON object is detected, perform the following steps:
    1. If the JSON object has a @context key, process the local context as described in Context .
    2. Create a copy of the current JSON object, replacing keys that map to JSON-LD keyword s with those keyword s. Use the new JSON object in subsequent steps.
    3. If the JSON object has a @value key, set the active object to a literal value as follows:
      1. as a typed literal if the JSON object contains a @type key after performing IRI Expansion on the specified @type .
      2. otherwise, as a plain literal . If the JSON object contains a @language key, use its value to set the language of the plain literal.
      3. If neither the active subject nor the active property is NULL, generate a triple representing the active subject , the active property and the active object .
      4. Return the active object to the calling location.
    4. If the JSON object has a @list key and the value is an array , process the value as a list as described in List Conversion .
    5. If the JSON object has an @id key:
      1. If the value is a string , set the active object to the result of performing IRI Expansion . Generate a triple representing the active subject , the active property and the active object . Set the active subject to the active object .
      2. Create a new processor state using copies of the active context , active subject and active property .
        1. Process the value starting at Step 2 .
        2. Proceed using the previous processor state .
    6. If the JSON object does not have an @id key, set the active object to a newly generated blank node identifier . Generate a triple representing the active subject , the active property and the active object . Set the active subject to the active object .
    7. For each key in the JSON object that has not already been processed, perform the following steps:
      1. If the key is @type , set the active property to rdf:type .
      2. Otherwise, set the active property to the result of performing IRI Expansion on the key.
      3. If the active property is the target of a @list coercion, and the value is an array , process the value as a list as described in List Conversion .
      4. Otherwise, create a new processor state using copies of the active context , active subject and active property and process the value starting at Step 2 and proceed using the previous processor state .
    8. Return the active object to the calling location.
  3. If a regular array is detected, process each value in the array, returning the result of processing the last value in the array :
    1. Create a new processor state using copies of the active context , active subject and active property and process the value starting at Step 2 then proceed using the previous processor state .
  4. If a string is detected:
    1. If the active property is the target of an @id coercion, set the active object by performing IRI Expansion on the string.
    2. Otherwise, if the active property is the target of coercion, set the active object by creating a typed literal using the string and the coercion key as the datatype IRI .
    3. Otherwise, set the active object to a plain literal value created from the string. If the active context contains a language key with a non- null value, use its value to set the language of the plain literal .
    Generate a triple representing the active subject , the active property and the active object .
  5. If a number is detected, generate a typed literal using a string representation of the value with datatype set to either xsd:integer or xsd:double , depending on whether the value contains a fractional and/or an exponential component. Generate a triple using the active subject , active property and the generated typed literal.
  6. Otherwise, if true or false is detected, generate a triple using the active subject , active property and a typed literal value created from the string representation of the value with datatype set to xsd:boolean .
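The following non-normative example illustrates the algorithm above; the node IRI and the vocabulary IRI http://example.com/vocab# are hypothetical. Processing the document below generates the two triples shown in the comments, with the @type value treated as an IRI because of the @id coercion in the initial context.

var doc = {
  "@context": { "name": "http://example.com/vocab#name" },
  "@id": "http://example.com/people#alice",
  "@type": "http://example.com/vocab#Person",
  "name": "Alice"
};
// <http://example.com/people#alice> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://example.com/vocab#Person> .
// <http://example.com/people#alice> <http://example.com/vocab#name> "Alice" .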

3.13.4 List Conversion

List Conversion is the process of taking an array of values and adding them to a newly created RDF Collection (see [ RDF-SCHEMA ]) by linking each element of the list using rdf:first and rdf:rest , terminating the list with rdf:nil using the following sequence:

The algorithm is invoked with an array array , the active property , and the active context and returns a value to be used as an active object .

This algorithm does not support lists containing lists.
  1. If array is empty return rdf:nil .
  2. Otherwise, generate a triple using the active subject , active property and a newly generated BNode identified as first blank node .
  3. For each element in array :
    1. Create a processor state using the active context , first blank node as the active subject , and rdf:first as the active property .
      1. Process the value starting at Step 2 .
      2. Proceed using the previous processor state .
    2. Unless this is the last element in array , generate a new BNode identified as rest blank node , otherwise use rdf:nil .
    3. Generate a new triple using first blank node , rdf:rest and rest blank node .
    4. Set first blank node to rest blank node .
  4. Return first blank node .
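A non-normative illustration, assuming a hypothetical active subject http://example.com/people#alice and active property http://example.com/vocab#knows : converting the array below as a list produces the triples shown in the comments (the blank node labels are illustrative only).

var list = ["Bob", "Carol"];
// <http://example.com/people#alice> <http://example.com/vocab#knows> _:b0 .
// _:b0 <http://www.w3.org/1999/02/22-rdf-syntax-ns#first> "Bob" .
// _:b0 <http://www.w3.org/1999/02/22-rdf-syntax-ns#rest> _:b1 .
// _:b1 <http://www.w3.org/1999/02/22-rdf-syntax-ns#first> "Carol" .
// _:b1 <http://www.w3.org/1999/02/22-rdf-syntax-ns#rest> <http://www.w3.org/1999/02/22-rdf-syntax-ns#nil> .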

A. Acknowledgements

The editors would like to thank Mark Birbeck, who provided a great deal of the initial push behind the JSON-LD work via his work on RDFj, Dave Longley, Dave Lehn and Mike Johnson who reviewed, provided feedback, and performed several implementations of the specification, and Ian Davis, who created RDF/JSON. Thanks also to Nathan Rixham, Bradley P. Allen, Kingsley Idehen, Glenn McDonald, Alexandre Passant, Danny Ayers, Ted Thibodeau Jr., Olivier Grisel, Niklas Lindström, Markus Lanthaler, and Richard Cyganiak for their input on the specification. Another huge thank you goes out to Dave Longley who designed many of the algorithms used in this specification, including the normalization algorithm which was a monumentally difficult design challenge.

B. Initial Context

The initial context is defined with the following default entries:

{

  "@context": {
    "http://www.w3.org/1999/02/22-rdf-syntax-ns#type": { "@type": "@id"}
  }
}

Processors must act as if the initial context is defined in the outer-most level when processing JSON-LD documents.

Should we define other default prefixes?

C. References

C.1 Normative references

[JSON-LD]
Manu Sporny, Gregg Kellogg. The JSON-LD Syntax Latest. W3C Editor's Draft. URL: http://json-ld.org/spec/latest/json-ld-syntax/
[JSON-POINTER]
P. Bryan, Ed. JSON Pointer Latest. IETF Draft. URL: http://www.ietf.org/id/draft-pbryan-zyp-json-pointer-01.txt
[RDF-CONCEPTS]
Graham Klyne; Jeremy J. Carroll. Resource Description Framework (RDF): Concepts and Abstract Syntax. 10 February 2004. W3C Recommendation. URL: http://www.w3.org/TR/2004/REC-rdf-concepts-20040210
[RDF-NORMALIZATION]
Manu Sporny, Dave Longley. RDF Graph Normalization Latest. W3C Editor's Draft. URL: http://json-ld.org/spec/latest/rdf-graph-normalization/
[RDF-SCHEMA]
Dan Brickley; Ramanathan V. Guha. RDF Vocabulary Description Language 1.0: RDF Schema. 10 February 2004. W3C Recommendation. URL: http://www.w3.org/TR/2004/REC-rdf-schema-20040210
[RFC3987]
M. Dürst; M. Suignard. Internationalized Resource Identifiers (IRIs). January 2005. Internet RFC 3987. URL: http://www.ietf.org/rfc/rfc3987.txt
[RFC4627]
D. Crockford. The application/json Media Type for JavaScript Object Notation (JSON) July 2006. Internet RFC 4627. URL: http://www.ietf.org/rfc/rfc4627.txt
[WEBIDL]
Cameron McCormack. Web IDL. 27 September 2011. W3C Working Draft. (Work in progress.) URL: http://www.w3.org/TR/2011/WD-WebIDL-20110927/

C.2 Informative references

[ECMA-262]
ECMAScript Language Specification. December 1999. URL: http://www.ecma-international.org/publications/standards/Ecma-262.htm
[MICRODATA]
Ian Hickson; et al. Microdata 04 March 2010. W3C Working Draft. URL: http://www.w3.org/TR/microdata/
[MICROFORMATS]
Microformats . URL: http://microformats.org
[RDFA-CORE]
Shane McCarron; et al. RDFa Core 1.1: Syntax and processing rules for embedding RDF through attributes. 15 December 2011. W3C Working Draft. URL: http://www.w3.org/TR/2011/WD-rdfa-core-20111215
[XML-NAMES]
Richard Tobin; et al. Namespaces in XML 1.0 (Third Edition). 8 December 2009. W3C Recommendation. URL: http://www.w3.org/TR/2009/REC-xml-names-20091208/