00:18
<Anthony Bullard>

shu: the JS0/JSSugar presentation is already making some waves for sure. This video and its comments might be useful to ingest for feedback:

https://youtu.be/onCHSujPlfg?si=76ar9kWMy-GmI-JJ

00:19
<Anthony Bullard>
There seems to be some confusion from the slides on what JS0 is, with some suggesting it’s a new bytecode format for engines. 😂
01:57
<shu>
sigh
01:57
<shu>
it's in the slides...
01:58
<shu>
I feel like I don't quite understand the proposed separation as it relates specifically to browsers. when I added nullish things to V8, the changes were purely in the parser and bytecode generator. I had a separate change later to the lower down parts to add a new "is null or undefined" op. isn't this already basically jssugar/js0? why does it need to be externalized to tooling rather than a separation within browser internals?
those components can have bugs, and those bugs can lead to RCE or data corruption or whatever
01:58
<Anthony Bullard>
I know. I was actually surprised at how people have reacted to it
01:59
<shu>
it's hard to have any actual discussion if people are riding the outrage wave
02:00
<shu>
Anthony Bullard: if you've watched that video, can you summarize if there're any useful signals? i'd rather save myself the stress
02:01
<Anthony Bullard>
To be clear, the creator here was going off an unsympathetic article discussing it and the slides with no talk track. And he is generally ambivalent towards JS, particularly the tooling - which this proposal would lean into
02:02
<Anthony Bullard>
My main takeaway: people are worried about two main things - a further increase in the role of tooling in the JS ecosystem (an already complex landscape), and the impact that pushing desugaring into the packaging layer will have on bundle sizes.
02:04
<Anthony Bullard>
Also debugging is always a talking point with this (compilation/transpilation) as many source map tools in practice are found wanting.
02:06
<Anthony Bullard>
Lastly, there is for many a philosophical objection to an official separation of the language into a superset for users willing/able to leverage these tools and those people who just want to write JS and run it. Relatedly, some disgruntlement about how all this is necessary for a nominally interpreted language.
02:06
<shu>
thanks for your thoughts
02:07
<Anthony Bullard>
I sympathize with some of these, but understand the broader context that you are approaching this from.
02:08
<shu>
there is definitely a disconnect between JS-that-i-grew-up-with and JS-in-the-world-today, and i also feel bad that JS-that-i-grew-up-with, and how many people learned it, is no longer the use case being solved for
02:08
<Anthony Bullard>
I’ve been trying to convince some delegates to be proactive and discuss these proposals (at the appropriate time) with these massively influential creators and have a discussion early
02:08
<shu>
oh i don't know if that's a good idea
02:08
<shu>
you think we should actively engage influencers?
02:09
<Anthony Bullard>
Absolutely. If we don’t explain and contextualize early, they and their audience can take the narrative in any direction
02:10
<Anthony Bullard>
And poison the well so to speak
02:10
<shu>
but we aren't a direct democracy
02:10
<Anthony Bullard>
They don’t have decision making power
02:11
<Anthony Bullard>
We should just say “this is what we are actually talking about doing and why”
02:11
<shu>
good luck getting people to also volunteer time for PR, i guess
02:11
<Anthony Bullard>
But I can understand why that may not be desirable for some delegates and in some cases
02:11
<Anthony Bullard>
For sure, you are way too busy for that
02:12
<Anthony Bullard>
Maybe having a comms role within the committee, or at least an official protocol around it, is a good idea?
04:12
<kriskowal>
Axel Rauschmeyer has been running TC39’s entire PR division solo pro bono for a while. A useful email address to have.
05:46
<snek>
those components can have bugs, and those bugs can lead to RCE or data corruption or whatever
i mean the parsing and bytecode generation step is theoretically pure, no? like if you could generate an RCE with a change to it, you could also generate an RCE with a specially crafted input without that change.
13:07
<littledan>
good luck getting people to also volunteer time for PR, i guess
TC39 and web standards are all about broad consensus-building. We don't really have a private back room where deals can be made that don't align with the rest of the JavaScript community, because we are all part of it and listening to each other. If one person doesn't have time for some parts of this consensus-building, then they can work with other allies/co-champions to help explain things broadly. I haven't watched any of the videos but I agree with a lot of the points of criticism that have been raised, including Anthony's summary above.
13:09
<littledan>
I think we need to examine some more concrete proposals and how they are affected by the concerns raised in the JSSugar presentation. I don't think we actually have a lot of non-trivial sugar proposals coming up [if the trivial ones like void aren't considered to have a lot of cost]--mostly it's just pattern matching and extractors. So let's go deeper into examining that one and see how these concerns are related.
13:10
<littledan>
The biggest problem I heard with extractors was that array destructuring is slow. What if we can change the semantics of array destructuring to always do the original iteration semantics if it's given an actual Array? That would make it much easier to optimize.
13:10
<littledan>
(tbd if that is web-compatible)
13:12
<littledan>
we also need to strengthen critical analysis of proposals in general, if people feel like we're letting things through which aren't worth it. Such proposals probably shouldn't be included in sugar either. I hope TG5 can help, though past efforts have held themselves to a very high level of scientific rigor, preventing them from drawing conclusions.
13:16
<rbuckton>
The biggest problem I heard with extractors was that array destructuring is slow. What if we can change the semantics of array destructuring to always do the original iteration semantics if it's given an actual Array? That would make it much easier to optimize.
What does "original iteration semantics" mean here?
13:19
<littledan>
What does "original iteration semantics" mean here?
I mean, the behavior of the original Array.prototype[Symbol.iterator] and %ArrayIterator%.prototype.next
13:19
<littledan>
i.e. the ES5 downleveling semantics
13:19
<littledan>
so it'd check IsArray and branch to that, otherwise eagerly convert it to an array (via the actual iteration protocol) and do the same
13:20
<littledan>
(we'd only do this for destructuring, and leave for-of loops intact)
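[A minimal JavaScript sketch of the branch littledan is describing; the `toIndexed` helper name is illustrative, not spec text:]

```javascript
// Sketch of the proposed destructuring semantics: if the RHS is a real
// Array, destructure via plain indexed access; otherwise eagerly drain it
// through the actual iteration protocol first. Helper name is hypothetical.
function toIndexed(value) {
  if (Array.isArray(value)) {
    return value; // fast path: arr[i] access, no iterator objects created
  }
  return Array.from(value); // slow path: real iteration protocol, run eagerly
}

// `const [a, b] = expr` would then behave as if written against toIndexed(expr):
const src = toIndexed(new Set(["x", "y"])); // not an Array -> slow path
const a = src[0], b = src[1];
console.log(a, b); // x y
```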
13:21
<littledan>
so the hard part will be checking web compatibility. (I hope we can avoid getting stuck like the recent Mozilla species investigation, finding a non-answer and not feeling like it's worth it to allocate resources to keep going.)
13:26
<littledan>
any thoughts on that idea?
13:58
<bakkot>
not sure why we'd special-case destructuring and not all other uses of the iteration protocol
13:59
<bakkot>
but I would be happy to make array iteration special, in whatever way
14:00
<littledan>
well, this is a simpler use case of it, with no early return, and especially easy to implement an eager conversion function for the slow path without changing semantics. But yeah I'd be happy to generalize it to all iteration if we find that that's web-compatible
14:00
<littledan>
arguably we already special-cased it for the default subclass constructor, so...
14:15
<littledan>
not sure why we'd special-case destructuring and not all other uses of the iteration protocol
how do you feel about special-casing random cases of iteration when it's especially easy/useful/proven to be web-compatible? does it seem too unprincipled?
14:16
<bakkot>
eh... it's not that I'm opposed on principle, but in practice, I would be kinda surprised if those random cases are web-compat and other cases aren't, and if we're going to run the risk of trying it seems a shame not to get all the wins we can
14:17
<littledan>
fair, yeah let's see if we can do it
14:48
<Mathieu Hofman>
I mean, the behavior of the original Array.prototype[Symbol.iterator] and %ArrayIterator%.prototype.next
I don't understand why this can't be a transparent optimization when the [Symbol.iterator] function === %Array.prototype.values% ?
14:51
<Mathieu Hofman>
In general, if we know that the iterator making function is an intrinsic, we know what behavior the iterator will have and it doesn't need to actually be created if the spec itself is doing the iterating, no?
14:52
<Mathieu Hofman>
I mean sure we can spell it out in the spec, but that wouldn't be observable, and usually we don't spell out things that are unobservable optimizations
15:34
<bakkot>
the proposal would be to change the spec so that implementations don't have to do the quite complicated bookkeeping to ensure that all the array iterator machinery is intact before they can do that optimization
15:35
<bakkot>
"it is ok for this to be uselessly complicated because a sufficiently smart engine could make it fast anyway" is not that compelling of an argument
15:36
<bakkot>
if you're just writing an interpreter that sort of optimization is fairly easy but once you're trying to do codegen it gets pretty complicated
15:38
<littledan>
also this optimization has the cool property that you can also implement it in a desugaring (with a small support library). And you have to desugar destructuring if you want to, e.g., implement extractors in a transpiler.
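[One shape the "small support library" could take; the `__toArray` helper and the desugared output are hypothetical, not from any actual transpiler:]

```javascript
// Hypothetical support-library helper a transpiler could emit so that
// downleveled destructuring takes the indexed fast path for real arrays
// and falls back to the iteration protocol for everything else.
function __toArray(value) {
  if (Array.isArray(value)) return value; // fast path
  const result = [];
  for (const item of value) result.push(item); // actual iteration protocol
  return result;
}

// `const [first, second] = expr;` then desugars roughly to:
const _tmp = __toArray("ab"); // strings are iterable, so slow path
const first = _tmp[0], second = _tmp[1];
console.log(first, second); // a b
```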
15:39
<littledan>
and you have to check for two things: whether Array.prototype[Symbol.iterator] is the original value, and whether %ArrayIteratorPrototype%.next is the original value
15:39
<littledan>
it's not free to maintain a bit in advance for this check
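[The two checks, written out as a userland approximation; an engine would track this with internal flags invalidated on mutation rather than re-reading the properties each time:]

```javascript
// Capture the originals once, up front.
const originalArrayIterator = Array.prototype[Symbol.iterator];
const ArrayIteratorPrototype = Object.getPrototypeOf([][Symbol.iterator]());
const originalNext = ArrayIteratorPrototype.next;

// Both guards must hold before the iterator machinery can be skipped.
function arrayIterationIsPristine() {
  return Array.prototype[Symbol.iterator] === originalArrayIterator &&
         ArrayIteratorPrototype.next === originalNext;
}

console.log(arrayIterationIsPristine()); // true in a fresh realm

ArrayIteratorPrototype.next = function () { return { done: true }; };
console.log(arrayIterationIsPristine()); // false once next is patched
ArrayIteratorPrototype.next = originalNext; // restore
```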
16:28
<shu>
i strongly disagree that broad consensus building at the level of committee delegates itself (on the order of 40 people) is somehow the same kind of work as broadcasting with influencers on social media. is that really what you're saying?
16:31
<shu>
and we definitionally have a back room. people on twitter and youtube comments can't come and break consensus in tc39!
16:50
<littledan>
they can't break consensus, but it's all the same broader conversation, and points of view from outside of committee can influence committee members. We purposely operate in the open.
16:52
<littledan>
"broad consensus-building" doesn't mean everyone in that broad group vetoes things. As Michael Saboff has pointed out, consensus doesn't always mean this veto-driven thing that we do.
16:59
<shu>
i'm still unclear what you're getting at
16:59
<shu>
we're not running public campaigns for proposals
17:01
<shu>
i'm assuming you don't think private fields were a giant mistake given the negative public reaction
17:02
<littledan>
yeah that's a good example of where we definitely did have a public conversation with mixed views from outside and we drew a conclusion in committee. I'm not talking about external groups having a veto, but I did make a specific effort to explain outwardly what it was about, and work with non-committee-members and people doing public-facing communication/education.
17:03
<littledan>
I don't think all proposal champions need to do this, but it's good to have people in the coalition working on a proposal who want to interact with people outside of committee.
17:04
<littledan>
it's a normal activity for TC39. But there are lots of normal activities that only a subset of us do, because there's just so much to do.
17:05
<littledan>
private fields took longer because we made an honest attempt to understand and engage with this external feedback. It was frustrating for me at the time, but I think we should continue to do that (though maybe somehow timeboxed where it makes sense)
17:08
<shu>
i think we drew pretty different conclusions from that episode
17:09
<shu>
and i disagree that it should be a normal activity for tc39
17:09
<shu>
there's the abstract problem of "developer signal" which should be a normal activity for tc39
17:09
<shu>
but this particular form of it, most definitely not
17:11
<littledan>
Yeah I think we can leave this as, we agree that we want to get developer signals. I didn’t mean to make a bigger point than that.
18:59
<Mathieu Hofman>
and you have to check for two things: whether Array.prototype[Symbol.iterator] is the original value, and whether %ArrayIteratorPrototype%.next is the original value
If people mess with %ArrayIteratorPrototype%.next I'd say they immediately lose rights to any optimization
19:07
<littledan>
If people mess with %ArrayIteratorPrototype%.next I'd say they immediately lose rights to any optimization
but... do they retain rights to have that messing-with be respected by iteration? I'd say no
19:07
<littledan>
the extra work that engines would have to do would be to retain those rights. and that's not trivial.
19:11
<rbuckton>
I wonder if we could freeze array iterator instances and make %AIP%.next non-writable, non-configurable
19:17
<Mathieu Hofman>
but... do they retain rights to have that messing-with be respected by iteration? I'd say no
I mean we'd be changing the semantics. If someone overwrites %ArrayIteratorPrototype%.next to observe every array iteration, I think it's their expectation it'd work given how we currently specified it. Would that break someone is another question. So yeah I suppose that is where you need a normative / observable change if that's something you want to stop supporting.
19:18
<Mathieu Hofman>
Of course frozen intrinsics would solve all this mess ;) Once the environment is locked down, your optimizer would know what it can permanently rely on or not.
19:20
<littledan>
I mean we'd be changing the semantics. If someone overwrites %ArrayIteratorPrototype%.next to observe every array iteration, I think it's their expectation it'd work given how we currently specified it. Would that break someone is another question. So yeah I suppose that is where you need a normative / observable change if that's something you want to stop supporting.
yeah, this is an observable change, the question is whether this would break any actual reasonable and widely-deployed program. We don't give people the ability to observe object destructuring, and we shouldn't add that; this is more of an accidental capability IMO.
19:21
<littledan>
Of course frozen intrinsics would solve all this mess ;) Once the environment is locked down, your optimizer would know what it can permanently rely on or not.
we are not operating in a frozen intrinsics world; even if we're there sometimes, we need extractors to be reasonably efficient outside of that.
19:23
<Mathieu Hofman>
But it'd be such a nice carrot for the environment to move to frozen intrinsics if there was a perf benefit to gain from it.
19:24
<littledan>
that doesn't seem like a great reason by itself to not do this sort of integrity enhancement for array iteration (or just destructuring) if it's otherwise web-compatible.
20:42
<Mathieu Hofman>
Oh sure one doesn't prevent the other. But in general, I would be hopeful that frozen objects and frozen intrinsics should allow engines to do some optimizations instead of making things slower (as it seems to be the case currently). We hear stories of people liking hardened environments for the integrity guarantees, but being unhappy with the performance impact, which is a blocker for adoption in production in some cases.
22:56
<Justin Ridgewell>
well, this is a simpler use case of it, with no early return, and especially easy to implement an eager conversion function for the slow path without changing semantics. But yeah I'd be happy to generalize it to all iteration if we find that that's web-compatible
I mentioned this in Tokyo, I want to spec the fast path like we did with await. Just turn it into arr[i] access internally
23:02
<littledan>
I mentioned this in Tokyo, I want to spec the fast path like we did with await. Just turn it into arr[i] access internally
Yeah that’s what I’m saying