On Dec 10, 2013, at 3:08 PM, Bjoern Hoehrmann wrote:
> * Allen Wirfs-Brock wrote:
>> On Dec 9, 2013, at 5:40 PM, Bjoern Hoehrmann wrote:
>>> If TC39 said ECMA-404 is going to be replaced by a verbatim copy of the
>>> ABNF grammar in draft-ietf-json-rfc4627bis-08 with pretty much no other
>>> discussion of JSON and a clear indication that future editions will not
> add such discussion, and will not change the grammar without IETF
> consensus, I would be willing to entertain the idea of making ECMA-404 a
>>> normative reference.
>> The second paragraph is speaking about the language described by the
>> grammar, not the actual formalism used to express the grammar. I'm quite
>> sure that there is no interest at all within TC39 to ever change the
>> actual JSON language. If you are looking for some sort of contractual
>> commitment from ECMA, I suspect you are wasting your time. Does the IETF
>> make such commitments?
> As you know, the charter of the JSON Working Group says
> The resulting document will be jointly published as an RFC and by
> ECMA. ECMA participants will be participating in the working group
> editing through the normal process of working group participation.
> The responsible AD will coordinate the approval process with ECMA so
> that the versions of the document that are approved by each body are
> the same.
> If things had gone according to plan, it seems likely that Ecma would
> have requested the IANA registration for application/json jointly lists
> the IETF and Ecma International as holding Change Control over it, and
> it seems unlikely there would have been much disagreement about that.
> It is normal to award change control to other organisations, for
> instance, RFC 3023 gives change control for the XML media types to the
> W3C. I can look up examples for jointly held change control if that
> would help.
> And no, I am not looking for an enforceable contract, just a clear
> formal decision and statement.
Obviously, the originally envisioned process broke down, but I don't think we need to discuss that right here, right now.
TC39's concern seems to be both narrower (just the JSON syntax and static semantics, not wire encodings) and wider (implementations that aren't tied to the application/json media type) than the JSON WG's. I know that the TC39 consensus is that ECMA-404 (probably with some revision) should be serviceable as a foundation for other specs that address other issues.
>> This doesn't mean that TC39 would necessarily agree to eliminate the
>> Syntax Diagrams, or that we wouldn't carefully audit any grammar
>> contribution to make sure that it is describing the same language.
>> There may also be minor issues that need to be resolved. But we seem to
>> agree that we already are both accurately describing the same language
>> so this is really about notational agreement.
> Having non-normative syntax diagrams in addition to the ABNF grammar
> would be fine if they can automatically be generated from the ABNF.
> I was talking about removing most of the prose, leaving only
> boilerplate, a very short introduction, and references. Then it would be a
> specification of only the syntax and most technical concerns would be
> addressed on both sides. If you see this as a viable way forward, then
> I think the JSON WG should explore this option further.
I agree, this sounds plausible to me.
>> As a baseline, ECMA-404 was created in less than a week. It takes a
>> couple months to push through a letter ballot to approve a revised edition.
> The RFC4627bis draft could be approved and be held for normative
> references to materialise; this is not uncommon for IETF standards. It
> usually takes a couple of months for the RFC editor to process the
> document anyway, so personally a couple of months of waiting for a
> revised edition of ECMA-404 would be okay with me.
I don't see why we shouldn't be able to mutually resolve this.
For example, the ECMA spec needs the order of key/value pairs in an object to be significant.
I don't think ECMA-404 has such a requirement. In fact, it says otherwise in at least two places.
Earlier in this discussion I believe AWB said he thought that ECMA-404 shouldn't say so.
No, currently ECMA-404 intentionally does not say that key/value pairs are unordered, because there is clearly an ordering to them in a JSON text. It doesn't say one way or the other whether any downstream semantics are derived from that ordering.
There is a de facto standard for the ordering of the most common cases. At some point this ordering will probably be included in ECMA-262.
The ES6 draft currently depends on ECMAScript evaluation semantics to determine the value represented by a JSON text.
At this point in time, this is just for economy of specification text, as we know that for syntactically valid JSON input, object literal evaluation produces the same result that JSON.parse needs to produce. In both cases, source code key/value pairs are processed in left-to-right source order, constructing an object whose property enumeration order is derived from the processing order.
So assuming TC39 fixes ECMAScript at some point to match the reality on the Web, JSON.parse will be specified to preserve order. I believe current implementations of JSON.parse already do so.
Yes, and even if we changed how we expressed the specification for JSON.parse, it would still have to preserve the current de facto ordering.
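To illustrate the point above, here is a small sketch of the behavior being described, runnable in Node or a browser console. The key names are purely illustrative; the observed order is what current major engines do for string-named properties, not something ECMA-404 itself mandates.

```javascript
// Sketch: current engines preserve the textual (left-to-right) order of
// members when JSON.parse builds an object with string-named properties.
const parsed = JSON.parse('{"zebra": 1, "apple": 2, "mango": 3}');

// Enumeration reflects the order the members appeared in the JSON text.
console.log(Object.keys(parsed)); // ["zebra", "apple", "mango"]

// An object literal with the same members evaluates to the same result,
// which is why the ES6 draft can lean on literal-evaluation semantics
// as an economical way to specify JSON.parse.
const literal = { zebra: 1, apple: 2, mango: 3 };
console.log(JSON.stringify(parsed) === JSON.stringify(literal)); // true
```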
Maybe I am worrying unnecessarily about this. It would certainly make life simpler to be able to say JSON objects are unordered collections without qualification. But I suspect that this doesn't reflect reality on the Web.
And you'd be correct. I think you could get away with recommending that applications not depend upon the ordering, but I doubt that web developers would pay attention. ECMAScript started out trying not to define a specific enumeration order for object properties. But developers still wrote code that depended upon the ordering they observed in the wild, and interoperability pressures have generally forced convergence upon the ordering used by the browsers with the most market share.
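The convergent ordering described above has one wrinkle worth noting: the de facto rule (later codified in ES2015) puts integer-like keys first, in ascending numeric order, ahead of the remaining keys in insertion order. A quick sketch, with illustrative key names:

```javascript
// Sketch of the de facto own-property order that engines converged on:
// integer-like string keys first, ascending numerically, then all other
// string keys in insertion (here, JSON-text) order.
const obj = JSON.parse('{"b": 1, "10": 2, "a": 3, "2": 4}');
console.log(Object.keys(obj)); // ["2", "10", "b", "a"]
```

So "JSON.parse preserves order" holds for typical non-numeric member names, but not unconditionally.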
The way to deal with this is by careful spec layering.
An alternative way to deal with this is to write less in the IETF spec, and use a normative reference to ECMA-404.
That is exactly the kind of layering I think we need.
It's probably obvious that I'd also agree with that.
On Dec 10, 2013, at 3:08 PM, Bjoern Hoehrmann wrote:
... If things had gone according to plan, it seems likely that Ecma would have requested the IANA registration for application/json jointly lists the IETF and Ecma International as holding Change Control over it, and it seems unlikely there would have been much disagreement about that.
It is normal to award change control to other organisations, for instance, RFC 3023 gives change control for the XML media types to the W3C. I can look up examples for jointly held change control if that would help.
Perhaps this is another area that needs coordination.
> And I'm pretty sure that you don't want to prevent a streaming parser from presenting object members in the order in which they appear in the JSON text.
Of course not. I’d recommend adding all the whitespace and a bit for each character indicating whether it was escaped or not (don’t laugh, there are JSON extensions that use this information).
I think what many people here are saying is a point about interoperability, not about what a specific system “can” do. If you want to benefit from the wide availability of JSON implementations, you *cannot* make the object member order (or whitespace or escaping) significant in your application. The other side may not have a way to generate a specific member order (or form of whitespace or pattern of escaping).
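The interoperability point can be made concrete with a small sketch (illustrative field names, runnable in Node): two wire texts that differ only in member order and whitespace carry the same data, yet a receiver re-serializing with a generic encoder has no way to reproduce the sender's original order or layout.

```javascript
// Sketch: two JSON texts differing only in member order and whitespace.
const a = JSON.parse('{"id": 1, "name": "x"}');
const b = JSON.parse('{ "name":"x",\n  "id":1 }');

// Member-for-member, the parsed values are the same data...
const sameData = a.id === b.id && a.name === b.name;
console.log(sameData); // true

// ...but a generic serializer cannot recover the other side's original
// member order or whitespace, so an application that made those
// significant would not interoperate.
console.log(JSON.stringify(a) === JSON.stringify(b)); // false
```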
Of course, consenting applications can *extend* JSON to make all these things significant (and an implementation may want to be prepared for these extensions by, say, preserving character escaping information). However, saying that these applications are “using JSON” to communicate is inaccurate. They are using a JSON extension. It may not be as obviously an extension as, say, YAML is (it is still using the same grammar), but the fact that some aspects of the serial representation have been assigned a meaning they don’t have in JSON still makes it an extension.
The fact that a widely used set of JSON implementations allows easy access to a JSON extension does not make that particular extension part of JSON any more than the convention of escaping the first character in a string to make it meta is a part of JSON. (Within a monoculture, it is always tempting to consider the features of that monoculture to be universal. Don’t yield to that.)
Documenting the syntax of JSON without the attendant semantics is an exercise in leading developers on a path where they suddenly think they have to preserve all the fluff in the syntax just in case a JSON extension might use it. That’s not what JSON is about; JSON is about enabling parsers to aggressively eliminate the fluff, and enabling encoders to generate the fluff in a way that is convenient to them on their specific platform.
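A one-line sketch of that aggressive fluff elimination: a round trip through a standard parser and encoder discards both the insignificant whitespace and the sender's choice of character escapes, so no conforming application can rely on either surviving.

```javascript
// Sketch: whitespace and the \u0041 escape (for "A") are "fluff" that a
// standard round trip discards and regenerates as the encoder sees fit.
const wire = '{ "a" :\t"\\u0041BC" }';
const roundTripped = JSON.stringify(JSON.parse(wire));
console.log(roundTripped); // {"a":"ABC"}
```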