14:08
<Vladyslav Dalechyn>

Hi folks! Great seeing TC39 being an open and collaborative space!
I joined here to raise some problems, and possible solutions, around JSON serialization of the BigInt type.

I originally come from Web3, where we deal with uint256 – 32-byte unsigned integer values – and folks in the Web3 space used to rely on third-party libraries (https://github.com/GoogleChromeLabs/jsbi) to support numbers that big until official BigInt support shipped in 2020!
There have been many editions of ECMA-262 published since 1999, and in my view it needs an upgrade to handle the BigInt type.

I'll be honest: I haven't spent that much time researching prior efforts here, but I've seen many discussions revolving around somehow "smartly" detecting whether a number value should be parsed as a BigInt or a Number. I don't believe those approaches are correct or backwards compatible.

What if JSON as a whole received an upgrade to support BigInt literals ending with "n", as JS has now?

{
  "value": 1337n
}

I'd like to hear your thoughts on whether you think this is possible and, if so, what kind of backwards-incompatibility issues it might have!
Regarding the latter, I don't think this will introduce any issues, as the addition doesn't change the serialization behavior of fields of other types – it introduces a new one instead.
I understand that an addition like this would introduce a chain of changes to JSON parsing in different programming languages, but I believe it deserves a review.
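For reference, a quick sketch of how current engines treat such a payload today – existing parsers reject the proposed literal, and serialization of BigInt is not supported at all:

```javascript
// The current JSON grammar has no BigInt literal, so JSON.parse rejects it:
let parseError, stringifyError;
try {
  JSON.parse('{"value": 1337n}');
} catch (e) {
  parseError = e;
}
// ...and JSON.stringify refuses to serialize a BigInt rather than guessing a format:
try {
  JSON.stringify({ value: 1337n });
} catch (e) {
  stringifyError = e;
}
console.log(parseError instanceof SyntaxError);   // true
console.log(stringifyError instanceof TypeError); // true
```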

14:11
<Michael Ficarra>

> Hi folks! Great seeing TC39 being an open and collaborative space! […]

https://github.com/tc39/faq#can-we-change-json
14:27
<Vladyslav Dalechyn>
> https://github.com/tc39/faq#can-we-change-json

Hi! Thank you for referencing this FAQ.

Can you please explain how adding a fully backwards-compatible feature that introduces a new JSON field type can break computing environments?
Can't other environments just follow up and add support for BigInts (or similar) too?
I can see this becoming a "temporary" blocker for some environments – consider a C++ environment exchanging JSON with a Rust environment, where one supports BigInts and the other doesn't. However, if the sending side controls the process of building such JSON, it most likely won't send unsupported BigInt fields.

14:38
<littledan>
Honestly we should expand on this FAQ a bit. It reads a bit flippant right now. And I hear this is coming up in an IETF RFC.
14:47
<Chris de Almeida>

> Hi! Thank you for referencing this FAQ. […]

what you described is not backward compatible
15:23
<Aapo Alasuutari>
You'd need to know which JSON version your "opponent" supports, and to know that you'd either need to control both sides or have some version-negotiation mechanism.
15:26
<bakkot>
differences between different JSON implementations cause enough pain in the world already, and that's just with only a single nominal version of JSON https://seriot.ch/projects/parsing_json.html
15:27
<bakkot>
introducing new features only supported by some implementations would make that 100 times worse
15:27
<bakkot>
the only reasonable route here is to make a different thing, and not call it JSON, and give it whatever new features you want
15:32
<Aapo Alasuutari>
littledan: That IETF RFC sounds very interesting. Got any link or more info in general?
16:14
<Aapo Alasuutari>
Hmm, maybe this: https://datatracker.ietf.org/doc/html/rfc8785#name-dealing-with-big-numbers ?
16:49
<Vladyslav Dalechyn>

it just doesn't seem to be a huge change at all from the lexical standpoint.

Aapo Alasuutari, I totally get the concern about the version-negotiation dilemma.

At the same time, the way BigInts are handled now is via specific serializers other than native JSON (e.g. superjson).

Say you have a backend and provide API services to your clients: an "updated" JSON would most likely be served via a different endpoint.

I agree that updating JSON is not the best choice.
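For context, the usual workaround today keeps the wire format plain JSON by tagging BigInts as strings. A minimal sketch of that convention (illustrative only – this is not superjson's actual wire format):

```javascript
// Encode BigInts as tagged strings so the payload stays valid, plain JSON.
const replacer = (key, value) =>
  typeof value === "bigint" ? value.toString() + "n" : value;

// Decode any string matching the tag back into a BigInt. Caveat: a genuine
// string like "1337n" would be misinterpreted, so real schemes need a less
// ambiguous tag or out-of-band schema knowledge.
const reviver = (key, value) =>
  typeof value === "string" && /^\d+n$/.test(value)
    ? BigInt(value.slice(0, -1))
    : value;

const payload = JSON.stringify({ value: 1337n }, replacer);
console.log(payload); // {"value":"1337n"}

const roundTripped = JSON.parse(payload, reviver);
console.log(roundTripped.value === 1337n); // true
```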

16:51
<Vladyslav Dalechyn>
How can we make this better? What do you think of introducing JSON/2 with support for BigInts and other "major" differences? HTTP went through this with HTTP/2 and HTTP/3; I believe there is a solution for the future of JSON too.
19:06
<Meghan Denny>

> Hi folks! Great seeing TC39 being an open and collaborative space! […]

JSON would not need to change as a format to support BigInt; the root issue is that JSON.parse always parses numbers as Number (i.e. IEEE 754 doubles). perhaps you could work around this by passing a reviver function
19:15
<Meghan Denny>
i do this in a json parsing library of my own: i give the user the string value back (while noting it's meant to be interpreted as a number) and let them pick which type to parse it as
19:15
<bakkot>
you can't currently because the reviver is only passed the parsed value, which already loses precision, but you could with https://github.com/tc39/proposal-json-parse-with-source
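A minimal sketch of the precision loss described above, runnable in any current JS engine – the reviver only ever sees the already-converted Number, so the original digits are gone before user code runs:

```javascript
// JSON.parse converts every number to an IEEE 754 double before the
// reviver is called, so integers beyond 2^53 are silently rounded.
const text = '{"value": 12345678901234567890}'; // > Number.MAX_SAFE_INTEGER

const parsed = JSON.parse(text, (key, value) => {
  // `value` is already a Number here; the source digits are unrecoverable.
  return value;
});

console.log(Number.isSafeInteger(parsed.value));              // false
console.log(String(parsed.value) === "12345678901234567890"); // false: rounded
```

The linked proposal addresses exactly this by giving the reviver access to the raw source text of the number.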
19:15
<Meghan Denny>
ref: https://github.com/nektro/zig-json/blob/2958707/json.zig#L586-L609