... Previously container was limited to 4 values. Now we have a new value space for container.
... so for compatibility, a processor that sees it while running in 1.0 mode will complain
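A sketch of the mechanism being discussed (context mappings illustrative): a JSON-LD 1.1 context can declare its version with the @version entry, which a 1.0-mode processor is expected to reject rather than silently misinterpret the document.

```json
{
  "@context": {
    "@version": 1.1,
    "schema": "http://schema.org/",
    "name": "schema:name"
  },
  "name": "Manu"
}
```

A processor in 1.0 mode errors on the unrecognized @version entry; a 1.1 processor uses it to select 1.1 behavior.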
David Wood: Versioned MIME 1.0 -- intention to later come up with 1.1. Unexpectedly, the process of versioning caused MIME to never increase its version ✪
... built into so many tools
... With schema.org and other usage, what barriers are there practically for adoption?
... brought up a couple already, are there others?
Alexandre Bertails: Depending on what could be broken when 1.0 processors read @version 1.1, maybe it would make sense to use 2.0 instead of 1.1 ✪
Gregg Kellogg: New keys can confuse 1.0 and 1.1 implementations. If this property was used, 1.1 would interpret the data differently ✪
... need to face what that means as a community. There are far fewer implementations of the processor than uses of the language; we could choose to simply say we're cutting off the 1.0 features or restrictions
Ivan Herman: Far fewer code implementations, but no idea how often it's used. Forcing an upgrade ✪
Manu Sporny: Kingsley sent me an email 2 days ago, just tried your 1.1 context and it blew up on me. ... what are we doing? ✪
... that was a good thing as it was a 1.0 processor
... Good for it to blow up as there are features we absolutely need
... His digital signature would be just wrong
Gregg Kellogg: How does a CG manage it? Need a working group. ✪
Ivan Herman: If its successful, take it to WG and the version will be decided ✪
... agree with alexandre that 2.0 sends a new message
Benjamin Young: +1 To calling it 2.0 (or just 2) ✪
... RDF 1.1 is incremental, nothing radical
Fabien Gandon: It is OK to do 1.1 or 2.0 in a CG or WG ✪
Cwebber2: Agree with Manu that it should have blown up in that circumstance ✪
... don't want someone who is using schema.org in production to have it randomly blow up
... learned a lesson in ActivityPub/ActivityStreams: if you add a new term to a context and consumers had a cached copy of the context, the data wouldn't validate
... doesn't change this, but might be worth encouraging. Good to version contexts... schema.org + some version identifier. Can wait to update to that version when your code is ready
... should start pushing more
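The practice being encouraged can be sketched as follows (the URLs are hypothetical, not actual schema.org practice): publish each revision of a context at its own versioned URL, so consumers opt in to a new version only when their code is ready.

```json
{
  "@context": "https://example.org/contexts/1.1/schema.jsonld",
  "name": "Manu"
}
```

When the context publisher releases a new revision at a new URL, documents pinned to the old URL keep their meaning even with cached contexts.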
Kim Hamilton Duffy: Abstracted from just JSON -- does that change RDF canonicalization ✪
Gregg Kellogg: Canonicalization happens at the RDF abstract model level ✪
... if your data is in YAML, you might e.g. have whitespace issues ... but also in RDF/XML
... no different barriers
... turning JSON-LD into RDF for normalization goes through the algorithms that parse the syntax into an intermediate form, which you then run algorithms on
... just a different surface syntax. Can be used where the target model can go to that structure
Benjamin Young: But not round trip everything like index ✪
... (round tripping meaning)
Gregg Kellogg: Definitely need to leave it as an issue. Good discussion here, in issue, and hopefully in a WG ✪
Reto Gmür: One issue was a huge file that couldn't be parsed. Worked with a developer; JSON-LD looks a bit different, and then people have a problem. Google structured data -- put examples in and there are errors ✪
... if you look at it like JSON, you can be surprised when things that mean the same thing look different
Ivan Herman: Two JSON-LD documents are equivalent if the RDF is equivalent, but the files can be very different ✪
... hard to understand if you don't have the background
David Wood: JSON and JSON-LD have different data models -- LD is explicit, JSON is implicit ✪
Topic: ID Maps
Gregg Kellogg: (Slides) ID maps -- maps whose keys are URIs that reference the description of the identified resource ✪
... these are equivalent to an array of two objects with @id in the objects
... round-trippable. If you compact with a context whose term definition doesn't have "@container": "@id", you get the array; if you add it, you get the object
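A sketch of the equivalence being described (IRIs and terms illustrative): with "@container": "@id" on the term definition, the values of "knows" form an ID map whose keys are the node identifiers.

```json
{
  "@context": {
    "@version": 1.1,
    "schema": "http://schema.org/",
    "knows": { "@id": "schema:knows", "@container": "@id" }
  },
  "@id": "http://example.org/manu",
  "knows": {
    "http://example.org/gregg": { "schema:name": "Gregg" },
    "http://example.org/ivan": { "schema:name": "Ivan" }
  }
}
```

Without the "@container": "@id" entry, compaction yields the equivalent array form: "knows": [{"@id": "http://example.org/gregg", ...}, {"@id": "http://example.org/ivan", ...}].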
Gregg Kellogg: So the context comes into play for values of the term. Name is Manu at the top level, then an interest which has two keys name and topic ✪
... Now name and topic are evaluated according to foaf not schema
... gets around the framing problem of putting contexts lower down in the frame
... desire in the community, or at least a misunderstanding, about what contexts are for
... thought that if you added data to a term definition in the context, it would be added to the data. That's not the case
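A reconstruction of the slide example as a property-scoped context (exact IRIs illustrative): the term "interest" carries its own @context, so "name" and "topic" inside it expand to foaf terms even though "name" at the top level is a schema.org term.

```json
{
  "@context": {
    "@version": 1.1,
    "name": "http://schema.org/name",
    "interest": {
      "@id": "http://xmlns.com/foaf/0.1/interest",
      "@context": {
        "name": "http://xmlns.com/foaf/0.1/name",
        "topic": "http://xmlns.com/foaf/0.1/topic"
      }
    }
  },
  "name": "Manu",
  "interest": {
    "name": "JSON-LD",
    "topic": "Linked Data"
  }
}
```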
Benjamin Young: Fixes the confusion that you're defining a term globally, where name now has one meaning throughout. Fixes the problem of "found JSON" ... other JSON I want to be JSON-LD ✪
... you get some snowflake JSON and want to make it LD, and they used the same key differently
Alexandre Bertails: /me notes that we needed something like this in schema.org because some terms point to GoodRelations entities ✪
... scoped context based on type. Key off of @type to assign the context.
... So the term definition is based on the @type of the resource, not the key that is used to refer to it
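A sketch of the type-scoped variant (terms illustrative): the context attached to the type definition applies to any node whose @type is Person, no matter which key refers to that node.

```json
{
  "@context": {
    "@version": 1.1,
    "schema": "http://schema.org/",
    "Person": {
      "@id": "schema:Person",
      "@context": {
        "name": "http://xmlns.com/foaf/0.1/name"
      }
    }
  },
  "@type": "Person",
  "name": "Manu"
}
```

Here "name" expands to the foaf IRI because the node's @type is Person, not because of the key used to reach the node.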
David I. Lehn: Didn't you throw the parser writers under the bus? ✪
Gregg Kellogg: It requires injecting code at the beginning of expansion where we look at type. Have to look at key for expansion, then also pass the context in. ✪
... appends the context to be used. Doesn't require two passes
Alexandre Bertails: Does order of properties matter? ✪
Gregg Kellogg: Does it have a key that expands to @type? ✪
Alexandre Bertails: Can't stream the JSON though ✪
Manu Sporny: Wanted streaming processors to work, but couldn't do it. If you want streaming, then you do need to order the keys in the right way as the publisher ✪
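The practical consequence for publishers who do want streaming consumers can be sketched as (keys and URL illustrative): emit @context and @type before the other keys, so a processor can pick the right term definitions -- including any type-scoped context -- without buffering the whole object.

```json
{
  "@context": "https://example.org/context.jsonld",
  "@type": "Person",
  "name": "Manu"
}
```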