Re: [Json] Response to Statement from W3C TAG


Re: [Json] Response to Statement from W3C TAG

Allen Wirfs-Brock

On Dec 7, 2013, at 11:21 PM, Nico Williams wrote:

> On Sun, Dec 08, 2013 at 04:09:04PM +0900, "Martin J. Dürst" wrote:
>> On 2013/12/08 8:05, Allen Wirfs-Brock wrote:
>>> JSON is derived from JavaScript (whose standard is ECMA-262) and since
>>> 2009, ECMA-262 (and its clone ISO/IEC-16262) has included a normative
>>> specification for parsing JSON text that includes an ordering
>>> semantics for object members.
>>
>> RFC 4627 was published in July 2006, so the ECMA-262 version of 2009
>> may not be very relevant.
>
> ECMA-262 may well have been a codification of older behavior -- there's no
> winning this sort of argument.  If there really is an irreconcilable
> divergence as-deployed, then we ought to document that.  A simple
> addition to the last paragraph (or a new paragraph after it) of section
> 4 of RFC4627bis-08 should suffice.

I agree.  Fundamentally, I have been asking about the technical meaning of the statement "an object is an unordered collection" in section 1 (Introduction) of the current draft for 4627bis.  No one has yet responded to my question about whether or not statements in the Introduction are considered normative.

Section 4, which presumably supplies the normative specification of a JSON object, currently says nothing about member ordering.

If the intent is for the statement in the introduction to have some normative meaning, then please make that meaning technically clear in section 4.

Allen

_______________________________________________
es-discuss mailing list
[hidden email]
https://mail.mozilla.org/listinfo/es-discuss

Re: [Json] Response to Statement from W3C TAG

Carsten Bormann
In reply to this post by Allen Wirfs-Brock
On 08 Dec 2013, at 19:57, Allen Wirfs-Brock <[hidden email]> wrote:

> {
> "allenwb":  "there is an objectively observable order to the members of a JSON object",
> "JSON WG participant 1":  "It would be insane to depend upon that ordering",
> "allenwb":  "not if there is agreement between a producer and consumer on the meaning of the ordering",
> "JSON WG participant 2":  "But JSON.parse and similar language bindings don't preserve order",
> "allenwb":  "A streaming JSON parser would naturally preserve member order",
> "JSON WG participant 2": "I didn't think there are any such parsers",
> "allenwb": "But someone might decide to create one, and if they do it will expose object members, in order",
> "allenwb": "Plus, in this particular case the schema is so simple the application developer might well design to write a custom, schema specific streaming parser."
> }

Which at least one JSON decoder*) decodes as:

---
allenwb: Plus, in this particular case the schema is so simple the application developer
  might well design to write a custom, schema specific streaming parser.
JSON WG participant 1: It would be insane to depend upon that ordering
JSON WG participant 2: I didn't think there are any such parsers

(For readability, this one is encoded in YAML, another JSON extension.)

Nice demonstration of the point here.

Grüße, Carsten

*) ruby -rjson -ryaml -e 'puts JSON.parse(File.read("allen.json")).to_yaml' >allen.yaml
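[Editorial sketch, not part of the original exchange: ECMAScript's JSON.parse applies the same last-member-wins rule as the Ruby decoder above, which can be checked in Node.js.]

```javascript
// Duplicate member names: JSON.parse keeps only the last value for a
// repeated key, silently discarding the earlier members.
const text = '{"allenwb": "first statement", "allenwb": "second statement"}';
const obj = JSON.parse(text);
console.log(obj.allenwb); // "second statement"
```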


Re: [Json] Response to Statement from W3C TAG

Nick Niemeir
In reply to this post by Allen Wirfs-Brock

On Sun, Dec 8, 2013 at 10:57 AM, Allen Wirfs-Brock <[hidden email]> wrote:

However, that would not necessarily be the case for an application that is using a streaming JSON parser. 


------------start JSON text-------------
{
"allenwb":  "there is an objectively observable order to the members of a JSON object",
"JSON WG participant 1":  "It would be insane to depend upon that ordering",
"allenwb":  "not if there is agreement between a producer and consumer on the meaning of the ordering",
"JSON WG participant 2":  "But JSON.parse and similar language bindings don't preserve order",
"allenwb":  "A streaming JSON parser would naturally preserve member order",
"JSON WG participant 2": "I didn't think there are any such parsers",
"allenwb": "But someone might decide to create one, and if they do it will expose object members, in order",
"allenwb": "Plus, in this particular case the schema is so simple the application developer might well design to write a custom, schema specific streaming parser."
}
-----------end JSON text-------


One good example of a streaming parser is the npm package JSONStream.
If you wanted to accept conversations on standard input and emit allenwb's statements on standard output, you could use this Node program:

```javascript
var JSONStream = require('JSONStream')

process.stdin
  .pipe(JSONStream.parse('allenwb'))
  .pipe(process.stdout)
```

With this JSON text, the output is, as expected:
```
there is an objectively observable order to the members of a JSON objectnot if there is agreement between a producer and consumer on the meaning of the orderingA streaming JSON parser would naturally preserve member orderBut someone might decide to create one, and if they do it will expose object members, in orderPlus, in this particular case the schema is so simple the application developer might well design to write a custom, schema specific streaming parser.
```

--nick



Re: [Json] Response to Statement from W3C TAG

Bjoern Hoehrmann
In reply to this post by Allen Wirfs-Brock
* Allen Wirfs-Brock wrote:

>------------start JSON text-------------
>{
>"allenwb":  "there is an objectively observable order to the members of a JSON object",
>"JSON WG participant 1":  "It would be insane to depend upon that ordering",
>"allenwb":  "not if there is agreement between a producer and consumer on the meaning of the ordering",
>"JSON WG participant 2":  "But JSON.parse and similar language bindings don't preserve order",
>"allenwb":  "A streaming JSON parser would naturally preserve member order",
>"JSON WG participant 2": "I didn't think there are any such parsers",
>"allenwb": "But someone might decide to create one, and if they do it will expose object members, in order",
>"allenwb": "Plus, in this particular case the schema is so simple the application developer might well design to write a custom, schema specific streaming parser."
>}
>-----------end JSON text-------

There is observable white space outside strings in JSON texts. It would
be insane to depend on the placement of white space outside strings. Not
if there is agreement on the meaning of that white space. Most parsers
do not preserve such white space. A generic ABNF parser would naturally
preserve it...

It is quite possible that there are steganographic or cryptographic
protocols that use insignificant white space in JSON texts as a subtle
form of communication or for integrity protection, just like they might
use the order of object members for the same purpose.

However, what we are discussing here is what people should assume when
we say "We use JSON!" so there do not have to be detailed negotiations
to establish agreements, i.e., a Standard. And people should very much
assume that the ordering of object members is as insignificant as the
placement of white space outside strings.
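[Editorial sketch, not part of the original message: the whitespace parallel is easy to check against a deployed parser in Node.js.]

```javascript
// Insignificant whitespace outside strings is discarded by JSON.parse,
// just as most language bindings discard member order.
const compact = JSON.parse('{"a":1,"b":[2,3]}');
const spaced  = JSON.parse('{ "a" : 1 ,\n\t"b" : [ 2 , 3 ] }');
console.log(JSON.stringify(compact) === JSON.stringify(spaced)); // true
```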
--
Björn Höhrmann · mailto:[hidden email] · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 

Re: [Json] Response to Statement from W3C TAG

Carsten Bormann
In reply to this post by Bjoern Hoehrmann
On 09 Dec 2013, at 05:10, Martin J. Dürst <[hidden email]> wrote:

> In the original text, neither are these two usages disambiguated, nor is there any explanation about where the "10" is coming from or how it has to be used.

I think this is symptomatic of a larger problem that we occasionally fall for when writing specs.

ECMA-404 appears to be a textbook example of a “trapdoor spec” — if you already know what it is supposed to say, then it reads fine, but if you approach it as a fresh spec, it is undecipherable, as it relies on tacit knowledge to connect the dots.

Now in this case that may not be as big a problem because everybody already does know what JSON is*).
I’m still not thrilled to use it as a normative reference.

More importantly, reducing JSON to its surface syntax, and removing a few points about the data model (even though much of it remains in the form of allusions) opens the door to forking the data model.
This will allow all kinds of cool things to be done by repurposing the JSON syntax, but will damage the JSON ecosystem that is built around that data model.

One wonders whether that is the point.

Grüße, Carsten

*) Here specifically, we all know how to write numbers in programming languages, and (as long as you don’t address the hard problems like exactness) the idiosyncratic syntax details (decimal only, no leading zeroes on mantissa, no plus, but leading zeroes or plus are allowed on the exponent, E can be upper or lower case) are all that is needed to detail this spec, even though there is much more to actual interoperability.  Few implementers will get the semantics wrong from that skimpy spec.
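[Editorial sketch, not part of the original message: the idiosyncratic number-syntax details listed in the footnote can be checked against JSON.parse as one concrete implementation of the grammar.]

```javascript
// Probe the JSON number grammar: which lexemes does a deployed parser accept?
const accepts = (s) => {
  try { JSON.parse(s); return true; } catch (e) { return false; }
};

console.log(accepts('10'));    // true
console.log(accepts('1e+10')); // true  -- plus is allowed on the exponent
console.log(accepts('1E010')); // true  -- upper-case E, leading zero on exponent
console.log(accepts('01'));    // false -- no leading zeroes on the mantissa
console.log(accepts('+1'));    // false -- no leading plus on the mantissa
console.log(accepts('0x10'));  // false -- decimal only
```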


Re: [Json] Response to Statement from W3C TAG

Carsten Bormann
In reply to this post by Carsten Bormann
> So what's the reason you talk about two levels?

If you interpret the first three racetracks as generating a sequence of characters, or the last two as generating a sequence of tokens, you get the wrong result.

>> RFC 4627 does that implicitly by saying "The representation of numbers is similar to that used in most programming languages.".)
>
> That's not very precise either, but it's at least telling the reader where to look further if s/he doesn't understand what's intended.

Actually, to the extent that RFC 4627 does define JSON's data model, the result of this simple statement is surprisingly precise.
It only stops helping you much when you reach the limits of precision or range (e.g., what to do with 1e400.)

> Another problem is that it's not scalable, in the sense that it won't work anymore if everybody would do it.

Right.  But then, section 11.8.3.1 of the ES6 draft is an example for why it is tedious to do this.
(It is also, I believe, a nice example how easy it would be to get this wrong and that nobody would actually notice a mistake buried in there, unless they do the work to systematically check every detail or to translate it into a machine-checkable form.  Fortunately, our number system is relatively stable; I’d hate to maintain a spec that has this level of tedium on something that actually evolves.  For added fun, compare with 7.1.3.1.1, which is mostly saying the same thing, but does it in a subtly different way.  That’s why ES6 is 531 pages...)

> I'm not planning to do any work. I was just trying to point out that the technical work is not that difficult (after some leaps of faith to take the 'most obvious' interpretation of racetracks,…).

Yep.  But if nobody does that work (or, more precisely, admits to having done that work), we simply don’t know whether the statement that triggered this little subthread is true or not.  I have made too many stupid mistakes in seemingly simple specs that became obvious only as soon as I used a tool to check the spec.

Grüße, Carsten


Re: [Json] Response to Statement from W3C TAG

Allen Wirfs-Brock

On Dec 9, 2013, at 3:53 AM, Carsten Bormann wrote:

> > So what's the reason you talk about two levels?
> 
> If you interpret the first three racetracks as generating a sequence of characters, or the last two as generating a sequence of tokens, you get the wrong result.
> 
> > > RFC 4627 does that implicitly by saying "The representation of numbers is similar to that used in most programming languages.".)
> > 
> > That's not very precise either, but it's at least telling the reader where to look further if s/he doesn't understand what's intended.
> 
> Actually, to the extent that RFC 4627 does define JSON's data model, the result of this simple statement is surprisingly precise.
> It only stops helping you much when you reach the limits of precision or range (e.g., what to do with 1e400.)
> 
> > Another problem is that it's not scalable, in the sense that it won't work anymore if everybody would do it.
> 
> Right.  But then, section 11.8.3.1 of the ES6 draft is an example for why it is tedious to do this.
> (It is also, I believe, a nice example how easy it would be to get this wrong and that nobody would actually notice a mistake buried in there, unless they do the work to systematically check every detail or to translate it into a machine-checkable form.  Fortunately, our number system is relatively stable; I’d hate to maintain a spec that has this level of tedium on something that actually evolves.  For added fun, compare with 7.1.3.1.1, which is mostly saying the same thing, but does it in a subtly different way.  That’s why ES6 is 531 pages...)
> 
> > I'm not planning to do any work. I was just trying to point out that the technical work is not that difficult (after some leaps of faith to take the 'most obvious' interpretation of racetracks,…).
> 
> Yep.  But if nobody does that work (or, more precisely, admits to having done that work), we simply don’t know whether the statement that triggered this little subthread is true or not.  I have made too many stupid mistakes in seemingly simple specs that became obvious only as soon as I used a tool to check the spec.
> 
> Grüße, Carsten

I want to address a few points brought up in this subthread, primarily between Carsten and Martin.

First, Syntax Diagrams (aka Railroad Diagrams, called "racetracks" in this thread) are a well-known formalism for expressing a context-free grammar; for example, see http://en.wikipedia.org/wiki/Syntax_diagram  Any competent software engineer should be able to recognize and read a syntax diagram of this sort. There is no mystery about them. Any grammar that can be expressed using BNF can also be expressed using a syntax diagram, although I think most would agree that BNF is a better alternative for large grammars.

This whole issue of the use of Syntax Diagrams rather than BNF is a stylistic debate that is hard to take seriously. If TC39 informed you that we are converting the notation used in ECMA-404 to a BNF formalism, would that end the objections to normatively referencing ECMA-404 from 4627bis?  Unfortunately, I'm pretty sure it wouldn't.

Regarding the use of a multi-level definition within ECMA-404: that is standard practice in language specifications, where the "tokens" of a language are often described using an FSM-level formalism and the syntactic structure is described using a PDA-level formalism.  However, there is nothing that prevents a PDA-level abstraction such as a BNF from being used to describe "tokens", even when the full power of a PDA isn't needed.  The ECMA-262 specification is an example of a language specification that uses a BNF to describe both its lexical and syntactic structure.

In the case of ECMA-404, clause 4 is clearly defining the lexical level of the language (it is talking about "tokens"), and it clearly states that numbers and strings are tokens. Hence there is no ambiguity about how to interpret the syntax diagrams for number and string in clauses 8 and 9.  None of the subelements of those diagrams are "tokens", so there is no plausible way they could be misconstrued as generating or recognizing a sequence of tokens.

The only normative purpose of the first paragraph in clause 8 (Numbers) is to identify the code points that are symbolically referenced by the syntax diagram. Everything else in that paragraph is either redundant (described by the diagram) or pseudo-semantics that are outside the scope of what ECMA-404 defines.

This is a common problem seen in many specifications that try to clarify a formalism with supplementary prose and instead end up sowing confusion.  If a bug is filed against this for ECMA-404, it will probably be cleaned up in the next edition. Note that the current 4627bis draft is very similar in this regard.  It talks about an "exponent part" without defining that term (it doesn't appear in the grammar).  It doesn't specify how to actually interpret a number token as a mathematical value or how to generate one from a mathematical value.  It only says that JSON numbers are similar to those in most programming languages (which includes a very wide range of possibilities).

Specs can have both technical and editorial bugs.  If you think there are bugs in ECMA-404, the best thing to do is to submit a bug ticket at bugs.ecmascript.org. If there is a critical bug that you think prevents 4627bis from normatively referencing ECMA-404, say so and assign the bug a high priority in the initial ticket.  But please, start with actual errors, ambiguities, inconsistencies, or similar substantive issues.  Stylistic issues won't be ignored, but they are less important and harder to reach agreement on.

Allen



Re: [Json] Response to Statement from W3C TAG

Bjoern Hoehrmann
* Allen Wirfs-Brock wrote:
>This whole issue of the use of Syntax Diagrams rather than BNF is a
>stylistic debate that is hard to take seriously. If TC39 informed you that
>we are converting the notation used in ECMA-404 to a BNF formalism would
>that end the objections  to normatively referencing  ECMA-404 from
>4627bis?  Unfortunately, I'm pretty sure it wouldn't.

If TC39 said ECMA-404 is going to be replaced by a verbatim copy of the
ABNF grammar in draft-ietf-json-rfc4627bis-08 with pretty much no other
discussion of JSON and a clear indication that future editions will not
add such discussion, and will not change the grammar without IETF
consensus, I would be willing to entertain the idea of making ECMA-404 a
normative reference.

How soon would TC39 be able to make such a decision and publish a
revised edition of ECMA-404 as described above?
--
Björn Höhrmann · mailto:[hidden email] · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 

Re: [Json] Response to Statement from W3C TAG

Brendan Eich-3
http://bugs.ecmascript.org/ -- please use it, you will be amazed at how
quickly the bug is resolved. Thanks,

/be

Bjoern Hoehrmann wrote:
> If TC39 said ECMA-404 is going to be replaced by a verbatim copy of the
> ABNF grammar in draft-ietf-json-rfc4627bis-08 with pretty much no other
> discussion of JSON and a clear indication that future editions will not
> add such discussion, and will not change the grammar without IETF
> consensus, I would be willing to entertain the idea of making ECMA-404 a
> normative reference.
>
> How soon would TC39 be able to make such a decision and publish a
> revised edition of ECMA-404 as described above?

Re: [Json] Response to Statement from W3C TAG

Allen Wirfs-Brock
In reply to this post by Bjoern Hoehrmann

On Dec 9, 2013, at 5:40 PM, Bjoern Hoehrmann wrote:

> * Allen Wirfs-Brock wrote:
> > This whole issue of the use of Syntax Diagrams rather than BNF is a
> > stylistic debate that is hard to take seriously. If TC39 informed you that
> > we are converting the notation used in ECMA-404 to a BNF formalism would
> > that end the objections to normatively referencing ECMA-404 from
> > 4627bis?  Unfortunately, I'm pretty sure it wouldn't.
> 
> If TC39 said ECMA-404 is going to be replaced by a verbatim copy of the
> ABNF grammar in draft-ietf-json-rfc4627bis-08 with pretty much no other
> discussion of JSON and a clear indication that future editions will not
> add such discussion, and will not change the grammar without IETF
> consensus, I would be willing to entertain the idea of making ECMA-404 a
> normative reference.

Note that ECMA-404 already says (in the introduction):

"It is expected that other standards will refer to this one, strictly adhering to the JSON text format, while imposing restrictions on various encoding details. Such standards may require specific behaviours. JSON itself specifies no behaviour.

Because it is so simple, it is not expected that the JSON grammar will ever change. This gives JSON, as a foundational notation, tremendous stability."

The second paragraph is speaking about the language described by the grammar, not the actual formalism used to express the grammar. I'm quite sure that there is no interest at all within TC39 to ever change the actual JSON language.  If you are looking for some sort of contractual commitment from ECMA, I suspect you are wasting your time. Does the IETF make such commitments?

TC39 is a consensus based organization so I can't make commitments for it or the ECMA-404 project editor. But,  let me quote two previous statements I've made on this thread concerning the grammar notation:

"It's silly to be squabbling over such notational issues, and counter-productive if such squabbles result in multiple different normative standards for the same language/format. TC39 would likely be receptive to a request to add to ECMA-404 an informative annex with a BNF grammar for JSON (even ABNF, even though it isn't TC39's normal BNF conventions). Asking is likely to produce better results than throwing stones."

"The position stated by TC39 is that ECMA-404 already exists as a normative specification of the JSON syntax; we have requested that RFC4627bis normatively reference it as such and that any restatement of ECMA-404 subject matter be marked as informative.  We think that dueling normative specifications would be a bad thing. Seeing that the form of expression used by ECMA-404 seems to be an issue for some JSON WG participants, I have suggested that TC39 could probably be convinced to revise ECMA-404 to include a BNF-style formalism for the syntax.  If there is interest in this alternative I'd be happy to champion it within TC39."

This doesn't mean that TC39 would necessarily agree to eliminate the Syntax Diagrams, or that we wouldn't carefully audit any grammar contribution to make sure that it describes the same language.  There may also be minor issues that need to be resolved. But we seem to agree that we are both accurately describing the same language, so this is really about notational agreement.


> How soon would TC39 be able to make such a decision and publish a
> revised edition of ECMA-404 as described above?

As a baseline, ECMA-404 was created in less than a week.  It takes a couple of months to push through a letter ballot to approve a revised standard.

Allen


Re: [Json] Response to Statement from W3C TAG

James Clark-8
In reply to this post by Allen Wirfs-Brock
On Fri, Dec 6, 2013 at 2:51 AM, Allen Wirfs-Brock <[hidden email]> wrote:

The static semantics of a language are a set of rules that further restrict  which sequences of symbols form valid statements within the language.  For example, a rule that the 'member' names must be disjoint within an 'object' production could be a static semantic rule (however, there is intentionally no such rule in ECMA-404).

The line between syntax and static semantics can be fuzzy.  Static semantic rules are typically used to express rules that cannot be technically expressed using the chosen syntactic formalism or rules which are simply inconvenient to express using that formalism.  For example, the editor of ECMA-404 chose to simplify the RR track expression of the JSON syntax by using static semantic rules for whitespace rather than incorporating them into RR diagrams. 

Another form of static semantic rule is equivalences that state when two or more different sequences of symbols must be considered equivalent.  For example, the rules that state equivalences between escape sequences and individual code points within a JSON 'string'.  Such equivalences are not strictly necessary at this level, but it simplifies the specification of higher-level semantics if equivalent symbol sequences can be normalized at this level of specification.
  
When we talk about the "semantics" of a language (rather than "static semantics") we are talking about attributing meaning (in some domain and context) to well-formed (as specified via syntax and static semantics) statements expressed in that language. 
... 
What we can do is draw a bright line just above the level of static semantics. This is what ECMA-404 attempts to do. 

I don't see how you can accommodate the second kind of static semantic rule within the definition of conformance that you have chosen for ECMA-404. Section 2 defines conformance in terms of whether a sequence of Unicode code points conforms to the grammar.  This doesn't even accommodate the first kind of static semantic rule, but it is obviously easy to extend it so that it does.  However, to accommodate the second kind of static semantic rule, you would need a notion of conformance that deals with how conforming parsers interpret a valid sequence of code points.

I think it is coherent to draw a bright-line just above the first level of static semantics.  If you did that, then most of the prose of section 9 (on Strings) would have to be removed; but this would be rather inconvenient, because most specifications of higher-level semantics would end up having to specify it themselves.

However, I find it hard to see any bright-line above the second level of static semantics and below semantics generally.  Let's consider section 9. I would argue that this section should define a "semantics" for string tokens, by defining a mapping from sequences of code points matching the production _string_ (what I would call the "lexical space") into arbitrary sequences of code points (what I would call the "value space"). The spec sometimes seems to be doing this and sometimes seems to be doing something more like your second kind of static semantics. Sometimes it uses the term "code point" or "character" to refer to code points in the lexical space ("A string is a sequence of Unicode code points wrapped with quotation marks"), and sometimes it uses those terms to refer to code points in the value space ("Any code point may be represented as a hexadecimal number").   You could redraft so that it was expressed purely in terms of code points in the lexical space, but that would be awkward and unnatural: for example, an hexadecimal escape would represent either one or two code points in the lexical space.  Furthermore I don't see what you would gain by this.  Once you talk about equivalences between sequences, you are into semantics and you need a richer notion of conformance.

So back to "semantics" and why ECMA-404 tries (perhaps imperfectly) to avoid describing JSON beyond the level of static semantics. 

ECMA-404 sees JSON as "a text format that facilitates structured data interchange between all programming languages. JSON is syntax of braces, brackets, colons, and commas that is useful in many contexts, profiles, and applications".

There are many possible semantics and categories of semantics that can be applied to well-formed statements expressed using the JSON syntax.
...

The problem with trying to standardize JSON semantics is that the various semantics that can usefully be imposed upon JSON are often mutually incompatible with each other. At a trivial level, we see this with issues like the size of numbers or duplicate object member keys.  It is very hard to decide whose semantics are acceptable and whose are not.

I would argue that ECMA-404 should define the least restrictive reasonable semantics: the semantics should not treat as identical any values that higher layers might reasonably want to treat as distinct.  This is not the one, true JSON semantics: it is merely a semantic layer on which other higher-level semantic layers can in turn be built.  I don't think it's so hard to define this:

1. a value is an object, array, number, string, boolean or null.
2. an object is an ordered sequence of <string, value> pairs
3. an array is an ordered sequence of values
4. a string is an ordered sequence of Unicode code points

Item 2 may be surprising to some people, but there's not really much choice given that JS preserves the order of object keys.  The difficult case is number. But even with number, I would argue that there are clearly some lexical values that can uncontroversially be specified to be equivalent (for example, 1e1 with 1E1, or 1e1 with 1e+1).  A set of decisions on lexical equivalence effectively determines a value space for numbers.  For example, you might reasonably decide that two values are equivalent if they represent real numbers with the same mathematical value.

If ECMA-404 doesn't provide such a semantic layer, it becomes quite challenging for higher-level language bindings to specify their semantics in a truly rigorous way.  Take strings, for example.  I think by far the cleanest way to rigorously define a mapping from string tokens to sequences of code points is to have a BNF and a syntax-directed mapping, as the ECMAScript spec does very nicely in 7.8.4 (http://www.ecma-international.org/ecma-262/5.1/#sec-7.8.4).  If ECMA-404 provides merely a syntax and a specification of string equivalence, it becomes quite a challenge to draft a specification that somehow expresses the mapping while still normatively relying on the ECMA-404 spec for the syntax. What will happen in practice is that these higher-level mappings will not be specified rigorously.
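[Editorial sketch, not part of the original message: the lexical-space/value-space distinction for strings can be illustrated with JSON.parse, whose behaviour follows the syntax-directed mapping in ECMA-262 7.8.4.]

```javascript
// Two distinct lexical-space spellings that map to one value-space string.
const viaEscape  = JSON.parse('"\\u0041\\u002F"'); // hex escapes for 'A' and '/'
const viaLiteral = JSON.parse('"A\\/"');           // literal char + optional escape
console.log(viaEscape === viaLiteral); // true -- both denote the value "A/"
```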

I think ECMA-404 would be significantly more useful for its intended purpose if it provided the kind of semantics I am suggesting.

I know XML is not very fashionable these days, but we have a couple of decades of experience with XML and SGML which I think do have some relevance to a specification of "structured data interchange between programming languages".   One conclusion I would draw from this experience is that the concept of an XML Infoset or something like it is very useful.  Most users of XML deal with higher-level semantic abstractions rather than directly with the XML Infoset, but it has proven very useful to be able to specify these higher-level semantic abstractions in terms of the XML Infoset rather than having to specify them directly in terms of the XML syntax.  Another conclusion I would draw is that it would have worked much better to integrate the XML Infoset specification into the main XML specification.  The approach of having a separate XML Infoset specification has meant that there is no proper rigorous specification of how to map from the XML syntax to the XML Infoset (it seems to be assumed to be so obvious that it does not need stating).  I tried an integrated approach of specifying the syntax and data model together in the MicroXML spec (https://dvcs.w3.org/hg/microxml/raw-file/tip/spec/microxml.html), and I think it works much better. The current approach of ECMA-404 is a bit like that of the XML Recommendation: it pretends at times to be just specifying when a sequence of code points is valid, and yet the specification contains a fairly random selection of statements of how a valid sequence should be interpreted. 

James


_______________________________________________
es-discuss mailing list
[hidden email]
https://mail.mozilla.org/listinfo/es-discuss

Re: [Json] Response to Statement from W3C TAG

Carsten Bormann
In reply to this post by Allen Wirfs-Brock
On 10 Dec 2013, at 01:32, Allen Wirfs-Brock <[hidden email]> wrote:

> Stylistic issues

Well, for 4627bis, we have tools that allowed us to fuzz the ABNF against a set of existing JSON implementations.
This is the kind of care I expect from spec writers.
Nobody has fessed up to having done equivalent work for ECMA-404.
Matter of style?  Yes, but in quite another sense.

Grüße, Carsten


Re: [Json] Response to Statement from W3C TAG

Carsten Bormann
In reply to this post by James Clark-8
On 10 Dec 2013, at 07:52, James Clark <[hidden email]> wrote:

> Infoset

Not a bad idea to lead us out of this quagmire.

So a JSON infoset would capture a processed AST, but not yet the transformation to the data model level.

JSON implementations would create the JSON data model from that infoset (typically without actually reifying the latter as an AST), and JSON extensions like ECMAScript's would be free to do whatever they want.
It is just important to distinguish the two, so people don’t confuse the data model with the infoset, or think that a JSON implementation needs to provide access to the infoset.

Re the infoset for JSON numbers:  That is clearly a rational, expressed as a pair of integers: a numerator and a (power-of-ten) denominator.  (JSON cannot express any other rationals, or any irrationals for that matter.)

1.23 is [123, 100]
1.5 is [15, 10]
1e4 is [10000, 1]
1e-4 is [1, 10000]

Now one could argue whether the infoset should distinguish 1 and 1.0.
Naively, that would be
1 is [1, 1]
1.0 is [10, 10]
I’d argue that you want to reduce toward the denominator being the minimal power of ten, i.e.
1 is [1, 1]
1.0 is [1, 1]
1.5 is [15, 10]
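The reduction described above can be sketched in JavaScript (illustrative only; the function name `numberInfoset` is invented for this sketch, and the pair is represented with BigInts so no precision is lost):

```javascript
// Sketch: map a JSON number token to the [numerator, power-of-ten
// denominator] pair described above, reduced so the denominator is the
// minimal power of ten (so "1.0" and "1" both yield [1, 1]).
function numberInfoset(token) {
  const m = /^(-?)(\d+)(?:\.(\d+))?(?:[eE]([+-]?\d+))?$/.exec(token);
  if (!m) throw new SyntaxError("not a JSON number: " + token);
  const sign = m[1] === "-" ? -1n : 1n;
  const frac = m[3] || "";
  // Token value = digits-without-the-point * 10^(exponent - fraction length).
  let num = sign * BigInt(m[2] + frac);
  const scale = Number(m[4] || "0") - frac.length;
  let den = 1n;
  if (scale >= 0) num *= 10n ** BigInt(scale);
  else den = 10n ** BigInt(-scale);
  // Reduce: strip common trailing factors of ten.
  while (den > 1n && num % 10n === 0n) { num /= 10n; den /= 10n; }
  return [num, den];
}

console.log(numberInfoset("1.23")); // [ 123n, 100n ]
console.log(numberInfoset("1e-4")); // [ 1n, 10000n ]
console.log(numberInfoset("1.0"));  // [ 1n, 1n ]
```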

Grüße, Carsten


Re: [Json] Response to Statement from W3C TAG

James Clark-8
On Tue, Dec 10, 2013 at 4:12 PM, Carsten Bormann <[hidden email]> wrote:

So a JSON infoset would capture a processed AST, but not yet the transformation to the data model level.

JSON implementations would create the JSON data model from that infoset (typically without actually reifying the latter as an AST), and JSON extensions like ECMAScript's would be free to do whatever they want.
It is just important to distinguish the two, so people don’t confuse the data model with the infoset, or think that a JSON implementation needs to provide access to the infoset.

I agree it would reduce confusion to use a different term for the infoset versus the data model. "Infoset"/"data model" is one possible choice of terms, though I wonder whether the XML heritage of "infoset" might be off-putting to many.  Another possibility would be "abstract data model"/"concrete data model".
 
I’d argue that you want to reduce toward the denominator being the minimal power of ten, i.e.
1 is [1, 1]
1.0 is [1, 1]
1.5 is [15, 10]

That would be my preference too.

The only thing that makes me hesitate is that I could imagine implementations that distinguish integers and floats, and use C-style rules to distinguish the two. For example, 1 is an integer but 1.0 or 1e0 is a float. I don't know whether any such implementations exist.
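The C-style rule described above can be stated as a one-line classifier (hypothetical; not taken from any particular implementation):

```javascript
// Hypothetical C-style classifier: a JSON number token counts as an
// "integer" only if it has neither a fraction part nor an exponent.
function cStyleKind(token) {
  return /[.eE]/.test(token) ? "float" : "integer";
}

console.log(cStyleKind("1"));   // integer
console.log(cStyleKind("1.0")); // float
console.log(cStyleKind("1e0")); // float
```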

James


 


Re: [Json] Response to Statement from W3C TAG

Carsten Bormann
In reply to this post by James Clark-8
On 10 Dec 2013, at 07:52, James Clark <[hidden email]> wrote:

> Most users of XML deal with higher-level semantic abstractions rather than directly with the XML Infoset, but it has proven very useful to be able to specify these higher-level semantic abstractions in terms of the XML Infoset rather than having to specify them directly in terms of the XML syntax.

The XML infoset is very much tied to the needs (and idiosyncrasies) of the serialization that XML uses.  There are many ways this infoset is mapped into the data model used by an XML-based application.

The main innovation of JSON was to actually supply such a data model as part of the format.
I would argue that this property was what made JSON “win” over XML.

Turning back the clock and trying to use JSON as a conveyor of an infoset instead of using it with its data model could be considered unproductive.  On the other hand, some people want to do alternative data models with the JSON syntax, so maybe standardization has to cater for that.

One of the reasons many people react so violently to such a proposal is that it is bound to cause confusion that these alternative data models are now also “JSON data models”, reducing the value of the JSON data model as the linchpin of interoperability.

I don’t know how to counteract that confusion while also enabling the use of alternative data models by the definition of the infoset.  But maybe we can find a way.

Grüße, Carsten


Re: [Json] Response to Statement from W3C TAG

Carsten Bormann
In reply to this post by James Clark-8
On 10 Dec 2013, at 12:39, James Clark <[hidden email]> wrote:

> The only thing that makes me hesitate is that I could imagine implementations that distinguish integers and floats, and use C-style rules to distinguish the two. For example, 1 is an integer but 1.0 or 1e0 is a float. I don't know whether any such implementations exist.

Absolutely, they do, and they all differ in how exactly they do the distinction.
http://www.ietf.org/mail-archive/web/json/current/msg01523.html

This is a cause of real interoperability problems.

The question is how to find a way out of that maze of different interpretations.
There is no way this can be done so that none of them “breaks”.

It may seem natural to stick to the way numbers are interpreted in many programming languages that distinguish floating-point values from integer values.  However, JavaScript doesn't, so it can't supply guidance.  And that leads to exactly the problem documented in
https://jira.talendforge.org/browse/TDI-26517 — interoperability breaks when a non-distinguishing sender accidentally chooses the representation that triggers the wrong behavior at the receiver.

It is probably better to suggest handling 1.0 as 1.

When we are done with that, there is still negative zero.
http://www.ietf.org/mail-archive/web/json/current/msg01661.html
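For what it's worth, ECMAScript's own JSON built-ins already illustrate both points: 1.0 collapses to 1 on parsing, and negative zero survives parsing but not serialization.

```javascript
// JSON.parse treats 1.0 and 1 as the same value...
console.log(JSON.parse("1.0") === JSON.parse("1")); // true (both are 1)

// ...while -0 is preserved by the parser but dropped by the serializer.
const parsed = JSON.parse("-0");
console.log(Object.is(parsed, -0)); // true
console.log(JSON.stringify(-0));    // "0"
```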

Grüße, Carsten


Re: [Json] Response to Statement from W3C TAG

Bjoern Hoehrmann
In reply to this post by Allen Wirfs-Brock
* Allen Wirfs-Brock wrote:
>On Dec 9, 2013, at 5:40 PM, Bjoern Hoehrmann wrote:
>> If TC39 said ECMA-404 is going to be replaced by a verbatim copy of the
>> ABNF grammar in draft-ietf-json-rfc4627bis-08 with pretty much no other
>> discussion of JSON and a clear indication that future editions will not
>> add such discussion, and will not change the grammar without IETF con-
>> sensus, I would be willing to entertain the idea of making ECMA-404 a
>> normative reference.

>The second paragraph is speaking about the language described by the
>grammar, not the actual formalism used to express the grammar. I'm quite
>sure that there is no interest at all within TC39 to ever change the
>actual JSON language.  If you are looking for some sort of contractual
>commitment from ECMA, I suspect you are wasting your time. Does the IETF
>make such commitments?

As you know, the charter of the JSON Working Group says

  The resulting document will be jointly published as an RFC and by
  ECMA. ECMA participants will be participating in the working group
  editing through the normal process of working group participation.  
  The responsible AD will coordinate the approval process with ECMA so
  that the versions of the document that are approved by each body are
  the same.

If things had gone according to plan, it seems likely that Ecma would
have requested that the IANA registration for application/json jointly
list the IETF and Ecma International as holding Change Control over it,
and it seems unlikely there would have been much disagreement about that.

It is normal to award change control to other organisations, for
instance, RFC 3023 gives change control for the XML media types to the
W3C. I can look up examples for jointly held change control if that
would help.

And no, I am not looking for an enforceable contract, just a clear
formal decision and statement.

>This doesn't mean that TC39 would necessarily agree to eliminate the
>Syntax Diagrams,  or that we wouldn't carefully audit any grammar
>contribution to make sure that it is describing the same language.  
>There may also be minor issues that need to be resolved. But we seem to
>agree that we already are both accurately describing the same language
>so this is really about notational agreement.

Having non-normative syntax diagrams in addition to the ABNF grammar
would be fine if they can automatically be generated from the ABNF.

I was talking about removing most of the prose, leaving only
boilerplate, a very short introduction, and references. Then it would be a
specification of only the syntax and most technical concerns would be
addressed on both sides. If you see this as a viable way forward, then
I think the JSON WG should explore this option further.

>As a base line, ECMA-404 was created in less than a week.  It takes a
>couple of months to push through a letter ballot to approve a revised
>standard.

The RFC4627bis draft could be approved and be held for normative
references to materialise; this is not uncommon for IETF standards. It
usually takes a couple of months for the RFC editor to process the
document anyway, so personally a couple of months of waiting for a
revised edition of ECMA-404 would be okay with me.
--
Björn Höhrmann · mailto:[hidden email] · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 

Re: [Json] Response to Statement from W3C TAG

Allen Wirfs-Brock
In reply to this post by James Clark-8

On Dec 9, 2013, at 10:52 PM, James Clark wrote:

On Fri, Dec 6, 2013 at 2:51 AM, Allen Wirfs-Brock <[hidden email]> wrote:

The static semantics of a language are a set of rules that further restrict  which sequences of symbols form valid statements within the language.  For example, a rule that the 'member' names must be disjoint within an 'object' production could be a static semantic rule (however, there is intentionally no such rule in ECMA-404).

The line between syntax and static semantics can be fuzzy.  Static semantic rules are typically used to express rules that cannot be technically expressed using the chosen syntactic formalism or rules which are simply inconvenient to express using that formalism.  For example, the editor of ECMA-404 chose to simplify the railroad-track (RR) expression of the JSON syntax by using static semantic rules for whitespace rather than incorporating them into the RR diagrams.

Another form of static semantic rules are equivalences that state when two or more different sequences of symbols must be considered as equivalent.  For example, the rules that state equivalences between escape sequences and individual code points within a JSON 'string'.  Such equivalences are not strictly necessary at this level, but it simplifies the specification of higher-level semantics if equivalent symbol sequences can be normalized at this level of specification.
  
When we talk about the "semantics" of a language (rather than "static semantics") we are talking about attributing meaning (in some domain and context) to well-formed (as specified via syntax and static semantics) statements expressed in that language. 
... 
What we can do, is draw a bright-line just above the level of static semantics. This is what ECMA-404 attempts to do.

I don't see how you can accommodate the second kind of static semantic rule within the definition of conformance that you have chosen for ECMA-404. Section 2 defines conformance in terms of whether a sequence of Unicode code points conforms to the grammar.  This doesn't even accommodate the first kind of static semantic rule, but it is obviously easy to extend it so that it does.  However, to accommodate the second kind of static semantic rule, you would need a notion of conformance that deals with how conforming parsers interpret a valid sequence of code points.

Well, it certainly is a nit to pick, but in context I interpret the term "grammar" as used in clause 2 (and also the Introduction) as meaning the full normative content of clauses 4 to 9. This includes the actual CFG specification and the associated static semantic rules.

The notion of a conforming parser could be added; I'm less sure that it is really necessary.  We don't even need to consider string escapes to get into the issue of equivalent JSON texts, as it also exists because of optional white space.


I think it is coherent to draw a bright-line just above the first level of static semantics.  If you did that, then most of the prose of section 9 (on Strings) would have to be removed; but this would be rather inconvenient, because most specifications of higher-level semantics would end up having to specify it themselves.

I generally agree with this, including the convenience perspective.  It essentially also applies to the decimal interpretation of numbers.  There is an argument to be made that both should just be discussed informatively, leaving it to higher-level semantic specs to make those interpretations normative.


However, I find it hard to see any bright-line above the second level of static semantics and below semantics generally.  Let's consider section 9. I would argue that this section should define a "semantics" for string tokens, by defining a mapping from sequences of code points matching the production _string_ (what I would call the "lexical space") into arbitrary sequences of code points (what I would call the "value space"). The spec sometimes seems to be doing this and sometimes seems to be doing something more like your second kind of static semantics. Sometimes it uses the term "code point" or "character" to refer to code points in the lexical space ("A string is a sequence of Unicode code points wrapped with quotation marks"), and sometimes it uses those terms to refer to code points in the value space ("Any code point may be represented as a hexadecimal number").  You could redraft so that it was expressed purely in terms of code points in the lexical space, but that would be awkward and unnatural: for example, a hexadecimal escape would represent either one or two code points in the lexical space.  Furthermore, I don't see what you would gain by this.  Once you talk about equivalences between sequences, you are into semantics and you need a richer notion of conformance.

Generally agree. We are probably seeing some editorial confusion as feedback (including mine) was integrated into the editor's initial draft. This can all be improved in a subsequent edition.


So back to "semantics" and why ECMA-404 tries (perhaps imperfectly) to avoid describing JSON beyond the level of static semantics. 

ECMA-404 sees JSON as "a text format that facilitates structured data interchange between all programming languages. JSON
is syntax of braces, brackets, colons, and commas that is useful in many contexts, profiles, and applications".

There are many possible semantics and categories of semantics that can be applied to well-formed statements expressed using the JSON syntax.
...

The problem with trying to standardize JSON semantics is that the various semantics that can usefully be imposed upon JSON are often mutually incompatible with each other. At a trivial level, we see this with issues like the size of numbers or duplicate object member keys.  It is very hard to decide whose semantics are acceptable and whose are not.

I would argue that ECMA-404 should define the least restrictive reasonable semantics: the semantics should not treat as identical any values that higher layers might reasonably want to treat as distinct.  This is not the one, true JSON semantics: it is merely a semantic layer on which other higher-level semantic layers can in turn be built.  I don't think it's so hard to define this:

1. a value is an object, array, number, string, boolean or null.
2. an object is an ordered sequence of <string, value> pairs
3. an array is an ordered sequence of values
4. a string is an ordered sequence of Unicode code points

Indeed, this aligns very well with my perspective.


Item 2 may be surprising to some people, but there's not really much choice given that JS preserves the order of object keys.  The difficult case is number. But even with number, I would argue that there are clearly some lexical values that can uncontroversially be specified to be equivalent (for example, 1e1 with 1E1 or 1e1 with 1e+1).  A set of decisions on lexical equivalence effectively determines a value space for numbers.  For example, you might reasonably decide that two values are equivalent if they represent real numbers with the same mathematical value.
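As a quick check in ECMAScript, JSON.parse already treats these lexically distinct tokens as equivalent:

```javascript
// All three lexically distinct number tokens parse to the same value, 10.
console.log(JSON.parse("1e1"));  // 10
console.log(JSON.parse("1E1"));  // 10
console.log(JSON.parse("1e+1")); // 10
```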

If ECMA-404 doesn't provide such a semantic layer, it becomes quite challenging for higher-level language bindings to specify their semantics in a truly rigorous way.  Take strings, for example.  I think by far the cleanest way to rigorously define a mapping from string tokens to sequences of code points is to have a BNF and a syntax-directed mapping, as the ECMAScript spec does very nicely in 7.8.4 (http://www.ecma-international.org/ecma-262/5.1/#sec-7.8.4).  If ECMA-404 provides merely a syntax and a specification of string equivalence, it becomes quite a challenge to draft a specification that somehow expresses the mapping while still normatively relying on the ECMA-404 spec for the syntax. What will happen in practice is that these higher-level mappings will not be specified rigorously.

I think ECMA-404 would be significantly more useful for its intended purpose if it provided the kind of semantics I am suggesting.

I know XML is not very fashionable these days but we have a couple of decades of experience with XML and SGML which I think do have some relevance to a specification of "structured data interchange between programming languages".   One conclusion I would draw from this experience is that the concept of an XML Infoset or something like it is very useful.  Most users of XML deal with higher-level semantic abstractions rather than directly with the XML Infoset, but it has proven very useful to be able to specify these higher-level semantic abstractions in terms of the XML Infoset rather than having to specify them directly in terms of the XML syntax.  Another conclusion I would draw is that it would have worked much better to integrate the XML Infoset specification into the main XML specification.  The approach of having a separate XML Infoset specification has meant that there is no proper rigorous specification of how to map from the XML syntax to the XML Infoset (it seems to be assumed to be so obvious that it does not need stating).  I tried an integrated approach of specifying the syntax and data model together in the MicroXML spec (https://dvcs.w3.org/hg/microxml/raw-file/tip/spec/microxml.html), and I think it works much better. The current approach of ECMA-404 is a bit like that of the XML Recommendation: it pretends at times to be just specifying when a sequence of code points is valid, and yet the specification contains a fairly random selection of statements of how a valid sequence should be interpreted.

Thank you, this is very useful feedback.  Would you mind submitting this as a bug report against ECMA-404 at bugs.ecmascript.org?  I can do it, but community feedback is important and I'd like to be on the CC list for the bug.

Allen



James




Re: [Json] Response to Statement from W3C TAG

Allen Wirfs-Brock
In reply to this post by Allen Wirfs-Brock

On Dec 10, 2013, at 2:08 AM, Martin J. Dürst wrote:

> On 2013/12/10 9:32, Allen Wirfs-Brock wrote:
>
>> ...
>
>> Specs. can have both technical and editorial bugs.  If you think there are bugs in ECMA-404 the best thing to do is to submit a bug ticket at bugs.ecmascript.org. If there is a critical bug that you think prevents 4627bis from normatively referencing ECMA-404, say so and assign the bug a high priority in the initial ticket.  But please, start with actual errors, ambiguities, inconsistencies, or similar substantive issues.  Stylistic issues won't be ignored but they are less important and harder to reach agreement on.
>
> I'll submit some. What about the ECMA people submitting some bug reports on 4627bis in return?

Is there a bug tracking system, or are perceived bugs simply submitted to the mailing list?

I'm sure some of the friction around here is simply a matter of poorly understood processes and differing social conventions.

Allen


Re: [Json] Response to Statement from W3C TAG

Allen Wirfs-Brock
In reply to this post by Bjoern Hoehrmann

On Dec 10, 2013, at 3:08 PM, Bjoern Hoehrmann wrote:

> * Allen Wirfs-Brock wrote:
>> On Dec 9, 2013, at 5:40 PM, Bjoern Hoehrmann wrote:
>>> If TC39 said ECMA-404 is going to be replaced by a verbatim copy of the
>>> ABNF grammar in draft-ietf-json-rfc4627bis-08 with pretty much no other
>>> discussion of JSON and a clear indication that future editions will not
>>> add such discussion, and will not change the grammar without IETF con-
>>> sensus, I would be willing to entertain the idea of making ECMA-404 a
>>> normative reference.
>
>> The second paragraph is speaking about the language described by the
>> grammar, not the actual formalism used to express the grammar. I'm quite
>> sure that there is no interest at all within TC39 to ever change the
>> actual JSON language.  If you are looking for some sort of contractual
>> commitment from ECMA, I suspect you are wasting your time. Does the IETF
>> make such commitments?
>
> As you know, the charter of the JSON Working Group says
>
>  The resulting document will be jointly published as an RFC and by
>  ECMA. ECMA participants will be participating in the working group
>  editing through the normal process of working group participation.  
>  The responsible AD will coordinate the approval process with ECMA so
>  that the versions of the document that are approved by each body are
>  the same.
>
> If things had gone according to plan, it seems likely that Ecma would
> have requested that the IANA registration for application/json jointly
> list the IETF and Ecma International as holding Change Control over it,
> and it seems unlikely there would have been much disagreement about that.
>
> It is normal to award change control to other organisations, for
> instance, RFC 3023 gives change control for the XML media types to the
> W3C. I can look up examples for jointly held change control if that
> would help.
>
> And no, I am not looking for an enforceable contract, just a clear
> formal decision and statement.

Obviously, the originally envisioned process broke down, but I don't think we need to discuss that right here, right now.

It isn't clear to me that TC39 is particularly interested in holding change control for the application/json media type, just as it apparently doesn't have change control for application/ecmascript or application/javascript. In practice those registrations simply have not been of particular concern.  Maybe they should be.  Does anybody who actually looks up the application/javascript media type think that the relevant reference is still Netscape Communications Corp., "Core JavaScript Reference 1.5", September 2000?

TC39's concern seems to be both narrower (just the JSON syntax and static semantics, not wire encodings) and wider (implementations that aren't tied to the application/json media type) than the JSON WG's. I know that the TC39 consensus is that ECMA-404 (probably with some revision)  should be serviceable as a foundation for other specs that address other issues.

>
>> This doesn't mean that TC39 would necessarily agree to eliminate the
>> Syntax Diagrams,  or that we wouldn't carefully audit any grammar
>> contribution to make sure that it is describing the same language.  
>> There may also be minor issues that need to be resolved. But we seem to
>> agree that we already are both accurately describing the same language
>> so this is really about notational agreement.
>
> Having non-normative syntax diagrams in addition to the ABNF grammar
> would be fine if they can automatically be generated from the ABNF.
>
> I was talking about removing most of the prose, leaving only
> boilerplate, a very short introduction, and references. Then it would be a
> specification of only the syntax and most technical concerns would be
> addressed on both sides. If you see this as a viable way forward, then
> I think the JSON WG should explore this option further.

I agree, this sounds plausible to me.

>
>> As a base line, ECMA-404 was created in less than a week.  It takes a
>> couple of months to push through a letter ballot to approve a revised
>> standard.
>
> The RFC4627bis draft could be approved and be held for normative
> references to materialise; this is not uncommon for IETF standards. It
> usually takes a couple of months for the RFC editor to process the
> document anyway, so personally a couple of months of waiting for a
> revised edition of ECMA-404 would be okay with me.

I don't see why we shouldn't be able to mutually resolve this.

Allen


