00:01
<hax (HE Shi-Jun)>
I think we just need a "use semicolons" parser switch that turns off ASI honestly.
Anti-proposal: a "no ASI hazards" parser switch that fixes ASI hazards according to developers' expectations. (Yeah, it's very, very impossible, especially since the committee introduced very serious new hazards in ES2020.)
00:54
<ljharb>
i doubt anything that even tacitly encourages omission of semicolons will achieve consensus :-)
01:01
<rbuckton>
I've updated https://github.com/tc39/proposal-throw-expressions/pull/18 to use Static Semantics rules to avoid ASI for +, -, /, and /=, in keeping with how we also use Static Semantics rules to avoid ASI for optional chains followed by a template literal on a following line. I don't believe the end result is as complex as was suggested, but I'd appreciate if anyone with time could take a look over it as I am still short a reviewer.
01:02
<ljharb>
i was going to say "tcq needs updating" but i refreshed and it's not loading at all for me now, just hanging
01:02
<Chris de Almeida>
TCQ down for everyone or just me?
01:02
<nicolo-ribaudo>
tcqdownforeveryone.status
01:03
<Chris de Almeida>
this would never happen if it was using IBM infra! /s
01:03
<Chris de Almeida>
TCQ appears to be operational once again
01:04
<Chris de Almeida>
🤞
01:04
<hax (HE Shi-Jun)>
i doubt anything that even tacitly encourages omission of semicolons will achieve consensus :-)
yeah, we lost the feasibility forever after ES2020.
01:05
<ljharb>
not because of that, but because some number of us (at least one 😄) feel that omitting semicolons ~isn't a good~ is a harmful style to encourage.
01:09
<rkirsling>
we should make a t-shirt
01:09
<rkirsling>
"eat your vegetables and use your semicolons"
01:09
<shu>
you should use your whole colon tbh
01:10
<Christian Ulbrich>
omitting semicolons is eval()!
01:16
<Michael Ficarra>
I never know how to understand the feedback "we would like to see this done but are unwilling to dedicate resources to doing it ourselves"
01:17
<Michael Ficarra>
is there not an inherent conflict there?
01:17
<shu>
Michael Ficarra: i think you know in your heart how to understand that feedback
01:17
<hax (HE Shi-Jun)>
While I understand there are always people who don't like omitting semicolons, there are also always people who don't like being forced to add semicolons. After all, I believe the original design of the language (by BE) encouraged omitting semicolons, just like the most recent new application programming languages (Swift, Kotlin, Scala 3, etc.). Unfortunately BE made some small mistakes and introduced ASI hazards (though most ASI hazard patterns were not used by anyone in the early days), but it was still fixable IMO, until ES2020 (the best time to fix ASI hazards was ES5 strict mode, of course).
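For anyone following along, a minimal sketch of the classic hazard being discussed, assuming a codebase that omits semicolons:

    const a = getValue()
    [1, 2, 3].forEach(log)
    // No semicolon is inserted after getValue(), so this parses as
    // getValue()[(1, 2, 3)].forEach(log), i.e. indexing the result of
    // getValue() with a comma expression (getValue()[3]).

A following line that begins with ( behaves the same way, turning the previous expression into a call.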
01:21
<Christian Ulbrich>
We should probably take this to TDZ;
01:22
<ljharb>
this still assumes that removing hazards from something hazardous is a "fix"
01:23
<snek>
this is a cool and relevant pr https://github.com/nodejs/node/pull/48528
01:25
<snek>
we actually see some huge performance boosts here
01:25
<Justin Ridgewell>
Yah, this will be considerably faster than AsyncLocalStorage in Node.js
01:28
<leobalter>
Can you help me understand this better?
01:33
<Chengzhong Wu>
And it is using mechanisms that are already available in V8
01:35
<Michael Ficarra>
I was hoping for the same
01:40
<shu>
V8 is a mutable codebase
01:41
<shu>
i'd not consider the state things are today to be immutable -- e.g., don't assume ContinuationPreservedEmbedderData and its current performance characteristics to be like, unchanging facts of the world forever and forever
01:42
<Jack Works>
I'm still not convinced by the motivation of get-intrinsics. It looks tooooo specialized to me; only a spec lawyer will use it
01:42
<snek>
i'd not consider the state things are today to be immutable -- e.g., don't assume ContinuationPreservedEmbedderData and its current performance characteristics to be like, unchanging facts of the world forever and forever
yeah that's why it hasn't merged yet
01:43
<snek>
upstream discussion
01:43
<littledan>
sometimes in readmes, I link to issues for open questions. Or at least I try not to claim that they are already solved.
01:43
<shu>
yeah that's why it hasn't merged yet
yet champions keep repeating the argument that "no performance problem because V8 already ships this"
01:43
<leobalter>

I can tell from my perspective how I want to dedicate resources to a proposal I'm championing when it goes through a part that depends on dedicated expertise outside my domain. This requires cross-collaboration and contracting, which is what I'm doing.

Yet I'm not sure if you're referring to me on ShadowRealm, so I don't know if that's the clarification you're looking for, as you are also not asking me directly or in a way that makes it seem you're actually trying to understand.

With that said, it took us time, but we are happy to work again with people who can provide the expertise to complete the tasks needed.

I'm also glad that TC39 is composed of a diverse set of expertise and backgrounds. We all must aim for the goal of interoperability, which is the best value of JS.

01:44
<littledan>
i'd not consider the state things are today to be immutable -- e.g., don't assume ContinuationPreservedEmbedderData and its current performance characteristics to be like, unchanging facts of the world forever and forever
It's more like an existence proof that this technique isn't way too slow--I would've assumed the opposite before this had shipped.
01:44
<Jack Works>
It's more like an existence proof that this technique isn't way too slow--I would've assumed the opposite before this had shipped.
or it just isn't used very much
01:44
<Jack Works>
like with.
01:44
<littledan>
the copy is being done unconditionally
01:45
<shu>
i understand the state of the world today
01:45
<littledan>
yeah that's why it hasn't merged yet
What is it that hasn't merged?
01:45
<Jack Works>
(but actually I found with is not that bad for performance, at least in JSC)
01:45
<snek>
the pr i linked
01:45
<shu>
i am saying, V8 retains the optionality to become unhappy with CPED
01:45
<snek>
because like shu said, google isn't sure that CPED will continue to exist
01:46
<Jack Works>
what is CPED, Carnegie Project on the Education Doctorate?
01:46
<snek>
ContinuationPreservedEmbedderData
01:47
<hax (HE Shi-Jun)>
ContinuationPreservedEmbedderData
Still don't understand what it is 🤣
01:48
<snek>
Still don't understand what it is 🤣
its sort of like [[AsyncContextSnapshot]] in the async context proposal
01:48
<snek>
though it doesn't map exactly
01:48
<Christian Ulbrich>
every variadic function makes TS team sweat...
01:49
<Jack Works>
every variadic function makes TS team sweat...
<T>(a: T): [T]
<T, T1>(a: T, b: T1): [T, T1]
<T, T1, T2>(a: T, b: T1, c: T2): [T, T1, T2]
...
01:49
<Jack Works>
it happens in many languages
01:52
<snek>
i don't think "i don't have an iterable" is a problem in practice
01:52
<snek>
if you have more than 0 things, there is generally a way to turn at least one of them into the head of the iterator
01:52
<snek>
[].values, Iterator.from, etc etc
01:57
<rbuckton>
Iterator.once(value) is potentially an option, assuming it were lighter weight than just using a single-element array.
01:57
<bakkot>
Iterator.of, surely?
01:58
<Justin Ridgewell>
i'd not consider the state things are today to be immutable -- e.g., don't assume ContinuationPreservedEmbedderData and its current performance characteristics to be like, unchanging facts of the world forever and forever
My design for AsyncContext was independent of CPED; it just happens that CPED is the exact same thing that's needed (it's based on the poorlyfill I implemented for Vercel). The priorities are that today's code is unaffected by the proposal (tasks only need to store an additional pointer), that resuming a task is unaffected (only a pointer swap), and that we can figure out how to optimize creating new contexts afterwards. I found CPED while digging through the V8 codebase's host hooks afterwards, and it did exactly what I wanted.
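A rough userland sketch of the pointer-swap model described above; this is illustrative only, not the CPED API, and all names are made up:

    let currentSnapshot = new Map();   // the single "global pointer"

    function wrapTask(fn) {
      const saved = currentSnapshot;   // each scheduled task stores one extra reference
      return (...args) => {
        const prev = currentSnapshot;
        currentSnapshot = saved;       // resuming a task is just a pointer swap
        try { return fn(...args); }
        finally { currentSnapshot = prev; }
      };
    }

Creating a new context (a run()-style API) would copy the map, which is the part mentioned above as something to optimize later.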
01:58
<rbuckton>
But generally, I'm fine with concat/append. I use them in my own packages.
01:59
<snek>
there is iter::once(value) in rust but i have basically never seen any code that uses it
01:59
<snek>
i'm sure it exists
01:59
<snek>
but its far more uncommon than chaining so i wouldn't want to over-index on that
01:59
<shu>
Justin Ridgewell: i am responding to the argument that's repeatedly been put forth that this will not have performance problems because CPED exists in V8
01:59
<Christian Ulbrich>
When I saw Iterator.of I immediately thought of RxJS's .of(), though it has been a while since I did reactive streams; .of() looks like a good choice...
01:59
<bakkot>
Also Array.of
01:59
<shu>
Justin Ridgewell: i just want to make the point that that argument is not strictly true, since V8 retains the optionality to become unhappy with the performance of CPED as it stands today
02:00
<snek>
all this being said, i don't know why v8 doesn't like CPED
02:00
<snek>
i just know that every time it comes up someone from v8 makes sure to say that
02:00
<rbuckton>
I'll also add that C# has Concat, Append, and Prepend as well.
02:00
<shu>
we don't like it because it unconditionally incurs memory cost on every callback
02:01
<Justin Ridgewell>
Justin Ridgewell: i just want to make the point that that argument is not strictly true, since V8 retains the optionality to become unhappy with the performance of CPED as it stands today
If CPED is changed, that's fine. I think we're abusing it because it exists. AsyncContext requires something with similar functionality (global pointer, store a pointer on tasks, restore that pointer to resume).
02:01
<shu>
sounds good
02:01
<shu>
that matches my understanding
02:02
<Justin Ridgewell>
Justin Ridgewell: i am responding to the argument that's repeatedly been put forth that this will not have performance problems because CPED exists in V8
It just proves that it can be implemented in a way that gives you the same performance as today's V8, which as Dan said, was unexpected for everyone.
02:02
<littledan>
maybe in a few more years, we'll catch up with Python Itertools as of 2007!
02:02
<littledan>
great work Michael Ficarra
02:02
<snek>
it is funny how much effort it takes to add a suite of functionality to js
02:02
<Michael Ficarra>
thanks littledan 🙂
02:03
<snek>
watching how rust has evolved. stuff like "chain" barely gets any attention by anyone, whereas we have an entire proposal dedicated just to it
02:03
<Michael Ficarra>
snek: I actually appreciate that because I didn't realise how many possible shapes of solutions there were to these problem spaces
02:03
<Chris de Almeida>
need conclusion+summary for iterator sequencing 🙏
02:03
<snek>
oh yeah its definitely important to consider these things
02:03
<Michael Ficarra>
I am confident that our process will produce a great solution here
02:04
<rkirsling>
I think I'm still confused by the use of .values()
02:04
<snek>
can we advance the queue
02:04
<Michael Ficarra>
rkirsling: don't worry about it, you will never need to use that
02:05
<littledan>
I am confident that our process will produce a great solution here
It's just to get an iterator from which you can call flatMap, which will then do the concat
02:05
<Michael Ficarra>
rkirsling: ^
02:05
<Chris de Almeida>
TCQ has returned
02:05
<snek>
high level the pattern is just flatMap(iterator of iterators)
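A sketch of that pattern, assuming the Iterator Helpers flatMap:

    const first = [1, 2].values();
    const second = [3, 4].values();

    // .values() gives an iterator over the two inner iterators,
    // and flatMap(x => x) yields their elements in sequence:
    const combined = [first, second].values().flatMap(x => x);
    [...combined];   // [1, 2, 3, 4]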
02:05
<Michael Ficarra>
oh thank god
02:06
<bakkot>
petition to add iter as an alias for symbol.iterator on all built-in iterators
02:06
<snek>
only if we can use ::
02:06
<bakkot>
:(
02:06
<snek>
petition to add iter as an alias for symbol.iterator on all built-in iterators
i think i'd be OK with this
02:08
<Michael Ficarra>
so from the discussion today, I think I will lean toward Iterator.of/Iterator.prototype.flat for iterator sequencing stage 2
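Hedged sketch of the two shapes being debated; neither API exists yet, and the names come from the proposal discussion:

    // static factory plus flatten:
    const seq1 = Iterator.of(iterA, iterB).flat();

    // method-style concatenation:
    const seq2 = iterA.concat(iterB);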
02:08
<bakkot>
:(
02:08
<Michael Ficarra>
and as a bonus, those methods are useful for other things
02:08
<bakkot>
I do not think people will reach for that
02:08
<bakkot>
I would really like a method named something obvious
02:08
<bakkot>
that makes it clear you are doing concatenation
02:09
<snek>
i think a static method is a non-starter
02:09
<snek>
for this problem space
02:09
<rbuckton>
I'd be perfectly happy with .concat, .append, and .prepend
02:09
<bakkot>
also flat is problematic in ways that concat is not, wrt handling of non-iterable values
02:10
<ljharb>
i don't think "i don't have an iterable" is a problem in practice
it is very common in practice to have something that can either be one thing, or a list of things, and [].concat(x) is incredibly useful and commonplace to ensure one always has an array without having to care
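For reference, Array.prototype.concat spreads array arguments and keeps scalars, which is what makes that normalization work:

    [].concat(42);       // [42]
    [].concat([1, 2]);   // [1, 2]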
02:10
<snek>
i don't think i've ever seen code that has [].concat(x)
02:10
<bakkot>
I have literally never seen that pattern in any code not written by you
02:10
<snek>
i believe you that it exists
02:10
<ljharb>
lol
02:10
<Jesse (TC39)>
OH cons
02:10
<snek>
but like, i wrote code with chain literally today (in rust) at work
02:11
<ljharb>
none the less, i didn't make it up, it's common in lots of codebases i've worked in, which is why i use it in all of mine
02:11
<rbuckton>
I have literally never seen that pattern in any code not written by you
It definitely exists in transpiled emit, not sure how often it is hand-written
02:11
<ljharb>
i agree concatSpreadable is awful, but not because "concat takes an array or a scalar" is awful.
02:12
<Michael Ficarra>
two things can be awful at the same time
02:12
<rbuckton>
"preferred" is also a bit strong.
02:13
<ljharb>
sure. i just don't agree the second one is awful, because i don't think in terms of haskelly types
02:13
<Michael Ficarra>
I mean even TypeScript-y types don't unify those things
02:14
<Michael Ficarra>
you don't need some advanced type system to be told that that is inappropriate
02:14
<rbuckton>
waldemar: You don't get a prototype, at least as far as we've been discussing.
02:14
<rbuckton>
As in, [[Prototype]] is null
02:14
<bakkot>
also flat is problematic in ways that concat is not, wrt handling of non-iterable values
to elaborate on this: either it throws on non-iterable values, or it doesn't; both are problematic. throwing on non-iterable values means you can't use it like Array.prototype.flat to flatten a tree. not throwing on non-iterable values means that ever making anything iterable is a breaking change.
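A small illustration of the tradeoff, using a hypothetical Iterator.prototype.flat:

    // Array.prototype.flat keeps non-array values, so mixed trees flatten fine:
    [[1, 2], 3, [4, [5]]].flat(Infinity);   // [1, 2, 3, 4, 5]

    // An iterator flat() that throws on non-iterables couldn't do that (it
    // would throw on reaching 3). One that passes non-iterables through
    // instead means that later making any value iterable (adding
    // Symbol.iterator) silently changes what flat() does with it.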
02:14
<snek>
imo having a variable number of things is what vectors are for, not vector or item
02:14
<ljharb>
typescript certainly can't handle [].concat() at all, it's been a long standing open issue on the repo. that doesn't mean it's bad, it's just another gap in TS's ability to describe JS.
02:16
<hax (HE Shi-Jun)>
I remember the original draft didn't allow prototypes on shared struct values; now it's allowed but just omitted when transferring? Is my understanding correct?
02:17
<snek>
i'm curious why prototypes are a thing at all
02:17
<snek>
on shared structs
02:17
<rbuckton>
I remember the original draft didn't allow prototypes on shared struct values; now it's allowed but just omitted when transferring? Is my understanding correct?
Not quite. If you want thread-local behavior, you must explicitly opt in and supply that behavior in each thread.
02:17
<snek>
i'd expect them to be an underlying implementation detail of some higher level thing
02:17
<ljharb>
i'm also confused why a struct would have behavior (functions) attached
02:17
<rkirsling>
me hearing "just an ephemeron" just now felt like like most people probably feel hearing "just a monoid object in the category of endofunctors on C"
02:17
<Michael Ficarra>
I don't understand this. Why is throwing on non-iterables bad?
02:17
<rbuckton>
i'd expect them to be an underlying implementation detail of some higher level thing
There is a lot of complexity to this. Attaching behavior is very important.
02:18
<Jack Works>
maybe just reference counter...
02:18
<ljharb>
having behavior is. attaching it is just a style choice. functions can live separate from the data they affect.
02:18
<hax (HE Shi-Jun)>
Not quite. If you want thread-local behavior, you must explicitly opt in and supply that behavior in each thread.
I don't fully understand... I mean, I can always use functions (not methods), right?
02:19
<bakkot>
typescript certainly can't handle [].concat() at all, it's been a long standing open issue on the repo. that doesn't mean it's bad, it's just another gap in TS's ability to describe JS.
https://github.com/microsoft/TypeScript/issues/47351 is the issue presumably
02:19
<ljharb>
what i read is, "because it's actually an important use case to have an X and a non-X and concatenate them"
02:19
<rbuckton>
having behavior is. attaching it is just a style choice. functions can live separate from the data they affect.
No, attaching is necessary for many projects, this is a result of feedback from multiple sources experimenting with the origin trial. Not everyone needs it, but many do.
02:19
<bakkot>
https://github.com/microsoft/TypeScript/issues/47351 is the issue presumably
(with exactly 0 +1s...)
02:19
<ljharb>
i would love to be convinced that it's "need", but people quite often incorrectly think their wants are needs.
02:19
<hax (HE Shi-Jun)>
There is a lot of complexity to this. Attaching behavior is very important.
I think it might be solved by the extensions proposal? Don't use the prototype; always use extension methods/accessors on shared struct values.
02:19
<snek>
i think sharing code within a struct makes sense and is well motivated. i think the whole prototype thing is weird
02:20
<Jack Works>
me hearing "just an ephemeron" just now felt like like most people probably feel hearing "just a monoid object in the category of endofunctors on C"
at least an ephemeron can be explained in 2 lines, whereas you may need a math PhD to know the category theory
02:20
<rbuckton>
I've been experimenting with this in TypeScript. I absolutely cannot use it efficiently without the ability to attach behavior, unless I use Proxy or completely duplicate our AST as regular objects for the public API.
02:20
<rbuckton>
anything else would be a significant breaking change.
02:20
<bakkot>
I don't understand this. Why is throwing on non-iterables bad?
did you not see the "to flatten a tree" bit
02:20
<ljharb>
i don't understand why x.foo() and foo(x) are meaningfully different in terms of capabilities
02:21
<rkirsling>
(I think garbage collection is more difficult than category theory, but that's probably an issue of motivation)
02:21
<Michael Ficarra>
oh you mean like with a depth and stuff?
02:21
<littledan>
I've been experimenting with this in TypeScript. I absolutely cannot use it efficiently without the ability to attach behavior, unless I use Proxy or completely duplicate our AST as regular objects for the public API.
this makes perfect sense to me. It's necessary to be able to use objects directly, without wrapping them.
02:21
<rbuckton>
i don't understand why x.foo() and foo(x) are meaningfully different in terms of capabilities
If you are writing a project from scratch, sure. If you are adapting an existing project, it might not be feasible.
02:21
<Michael Ficarra>
nobody should be doing that
02:21
<Michael Ficarra>
don't do that
02:21
<bakkot>
I was assuming you would have the depth parameter like on arrays, yes
02:21
<ljharb>
If you are writing a project from scratch, sure. If you are adapting an existing project, it might not be feasible.
that's not a need, that's just difficulty adapting to a different style.
02:21
<Michael Ficarra>
flat should not take a depth, it should always be 1
02:21
<snek>
I've been experimenting with this in TypeScript. I absolutely cannot use it efficiently without the ability to attach behavior, unless I use Proxy or completely duplicate our AST as regular objects for the public API.
to be clear what i'm suggesting is like. you could have a property of a struct, which holds a sharable code value (a function of sorts). it would probably be callable like a.b() even.
02:21
<bakkot>
would the plan be to not do that?
02:21
<Michael Ficarra>
yeah no depth parameter
02:21
<bakkot>
well
02:21
<Michael Ficarra>
fucking gross
02:21
<hax (HE Shi-Jun)>
i don't understand why x.foo() and foo(x) are meaningfully different in terms of capabilities
DX, and x.foo() could be a virtual method.
02:22
<ljharb>
why? .flat(Infinity) is great
02:22
<rbuckton>
that's not a need, that's just difficulty adapting to a different style.
That's not an option for a widely used public API, without breaking an entire ecosystem of tooling.
02:22
<Michael Ficarra>
🤦‍♂️ please
02:22
<bakkot>
I am ok with this but I think if you're going to push jordan to accept an asymmetry with array methods, you should push for .concat, the nice normal thing
02:22
<rbuckton>
And I'm not talking about TS, though we would be included in that. This is feedback from other attempts to migrate other projects to use structs.
02:23
<ljharb>
what tooling ecosystem? certainly APIs may need breaking changes; asynchrony forces that, for example.
02:23
<ljharb>
it's perfectly fine if multithreading forces that too (even if it's ideal not to); we have semver and code refactoring tools and codemods that can make that easier.
02:23
<rbuckton>
I don't believe that it is, and there are solutions under discussion to make that a non-issue.
02:25
<Justin Ridgewell>
Need pipeline and static functions 🤞
02:25
<ljharb>
oof, didn't we already make this global registry mistake with symbols?
02:26
<Richard Gibson>
this is even worse than that; it links objects!
02:26
<ljharb>
when the solution is "universal state!" then maybe the problem's not worth solving
02:26
<snek>
what global registry mistake? registered symbols are super useful
02:26
<ljharb>
i'm not sure how, they're just fancy strings.
02:27
<Michael Ficarra>
yeah just use a string
02:27
<snek>
strings can be constructed
02:27
<snek>
from random places
02:27
<rbuckton>
yeah just use a string
Then why did we add Symbol.iterator? It's just a fancy __iterator__.
02:27
<snek>
i don't think the domain overlap makes sense
02:27
<hax (HE Shi-Jun)>
yeah, I also don't fully understand the use cases of Symbol.for()
02:28
<Michael Ficarra>
snek: that's the same thing as passing the string to Symbol.for...
02:28
<snek>
i think a great example is Symbol.for('nodejs.util.inspect')
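For reference, a quick sketch of what the global symbol registry does; the key below is made up:

    // Symbol.for returns the same symbol for the same key anywhere in the
    // agent, so it can serve as a shared protocol key across packages:
    const kInspect = Symbol.for('example.inspect.custom');
    Symbol.for('example.inspect.custom') === kInspect;   // true

    // A plain Symbol() is unique per call:
    Symbol('x') === Symbol('x');   // false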
02:28
<bakkot>
what global registry mistake? registered symbols are super useful
this is the first time I've heard anyone express this opinion - what are they useful for?
02:28
<bakkot>
what does that do that a string wouldn't do?
02:28
<ljharb>
not sure how that's great; node would work the same if it used a non-global symbol - and in fact it did use a non-global one prior to the change to make it global
02:28
<hax (HE Shi-Jun)>
i think a great example is Symbol.for('nodejs.util.inspect')
u could also use the string directly as key...
02:28
<snek>
you could use a string
02:28
<snek>
but strings can come from random places
02:29
<ljharb>
so can global symbols
02:29
<snek>
you could end up with that object by accident or maliciously
02:29
<ljharb>
same with global symbols
02:29
<Michael Ficarra>
say something that you can't say about strings
02:29
<snek>
no, you have to choose to invoke Symbol.for on some random string
02:29
<snek>
its not the default
02:29
<ljharb>
maybe less likely by accident but equally likely by malice?
02:29
<hax (HE Shi-Jun)>
To be honest, a global symbol might be a little bit safer than a string, but just a very little...
02:29
<Jack Works>
same with global symbols
and if you're using a symbol this way, you're deliberately trying to break things
02:29
<ljharb>
using some random string is pretty unlikely too, especially that java package style node uses
02:30
<Michael Ficarra>
I dunno, I randomly call Symbol.for on things all the time, just in case
02:30
<TabAtkins>
...i don't understand what could possibly be gross about a depth. It's just a shorthand for .flat().flat().flat()...
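For reference, the array behavior being analogized:

    [1, [2, [3, [4]]]].flat();           // [1, 2, [3, [4]]]   (default depth 1)
    [1, [2, [3, [4]]]].flat(2);          // [1, 2, 3, [4]]
    [1, [2, [3, [4]]]].flat(Infinity);   // [1, 2, 3, 4]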
02:30
<rkirsling>
this is very amusing because I always thought there was secretly a good reason why Symbol.for is a thing and I simply never needed to know it
02:30
<rbuckton>
There have been several attempts to use the origin trial, which is currently data only, to adopt shared structs in existing projects. The feedback has been the same: Without the ability to attach behavior, shared structs require a complete rewrite of the project, including the public API, which is a burden that we would rather not enforce on the project maintainers, nor their consumers. That makes the feature unusable for projects that would sorely benefit from the capability.
02:30
<snek>
i am surprised to hear that people don't like symbol.for
02:31
<TabAtkins>
And personally, I use .flat(Infinity) more than any other value. ^_^
02:31
<snek>
its definitely rare use cases but i think its well motivated at least
02:32
<snek>
There have been several attempts to use the origin trial, which is currently data only, to adopt shared structs in existing projects. The feedback has been the same: Without the ability to attach behavior, shared structs require a complete rewrite of the project, including the public API, which is a burden that we would rather not enforce on the project maintainers, nor their consumers. That makes the feature unusable for projects that would sorely benefit from the capability.
what is the difference between thing = wrap(struct) and Object.setPrototypeOf(struct, ...)?
02:32
<rbuckton>
what is the difference between thing = wrap(struct) and Object.setPrototypeOf(struct, ...)?
Performance and memory consumption.
02:33
<rbuckton>
In TypeScript, if you have a large AST, you either double your memory consumption for the AST, or redirect through accessors, which is slower.
02:33
<snek>
you want the entire AST to be shared structs?
02:34
<snek>
even in languages with proper thread apis and such i would never want to do that
02:34
<nicolo-ribaudo>
you want the entire AST to be shared structs?
That's how I would see us using this proposal in Babel too
02:34
<rbuckton>
Yes. That is what I'm doing right now to parallelize the TypeScript parser in an experiment.
02:35
<nicolo-ribaudo>
And we could parallelize transform by having locks on subtrees
02:35
<snek>
have you looked into how projects like rslint work
02:35
<rbuckton>
I could also parallelize emit, so I'd prefer to use the same AST everywhere.
02:36
<rbuckton>
Does rslint operate on a file at a time, or does it require whole-program knowledge?
02:37
<snek>
i'm not sure if it does whole program but i was more talking about how it operates on a shared AST in parallel
02:37
<Michael Ficarra>
TabAtkins: an infinitely-nested iterator will not be able to yield anything
02:38
<rbuckton>
I am not familiar with the internals of rslint.
02:38
<TabAtkins>
Yeah and mapping over an infinite iterator will infinite loop too. Is this something that's easy to hit by accident?
02:38
<snek>
it might be good to look at how projects which operate on a "shared ast" as a concept work in other languages
02:38
<ljharb>
basically nobody uses infinite iterators, that's just not a problem that happens
02:38
<snek>
rslint is one example, but there are lots
02:38
<snek>
anyway i suspect i will end up losing this argument, i don't have the energy to care about typescript internals more than you
02:40
<Justin Ridgewell>
JS's inability to parallelize and share memory is the main reason we didn't use it for Turbopack.
02:40
<Christian Ulbrich>
I find the idea of "having non-byte stuff shared" quite easy to grasp; that this "should work with functions" as well (because it is JS) seems not so far-fetched. The question to me is whether this is really doable, not so much whether someone would use it.
02:41
<Justin Ridgewell>
We have proper sharing in Rust, but we don't access the same AST in parallel, only split the parsing/transforming between threads
02:41
<snek>
if we just have shared functions of some form, we don't need the prototype hacks
02:41
<Rob Palmer>
So you're saying that when this goes to Stage 4, Turbopack can get rewritten in JS?
02:41
<bakkot>
aaaaaanyway this is why I would not recommend trying to move forward with .flat
02:42
<Justin Ridgewell>
No, Rust offers us a lot more, but this was the non-starter for TS
02:42
<Justin Ridgewell>
Rust macros are magic ✨
02:42
<Michael Ficarra>
TabAtkins: mapping over an infinite iterator works fine, it does not infinite loop
02:43
<Christian Ulbrich>
So you say, if we have macros in JS, that turbopack... ah sorry
02:43
<Michael Ficarra>
also people do use infinite iterators ljharb
02:43
<bakkot>
for of over an infinite iterator will infinite loop
02:43
<TabAtkins>
yeah, that's what i meant
02:43
<bakkot>
and infinite iterators are way more common than infinitely nested iterators
02:43
<TabAtkins>
exhausting it
02:43
<Michael Ficarra>
infinitely nested iterators just don't exist at all, period
02:43
<TabAtkins>
I mean. Strings.
02:43
<Michael Ficarra>
there is no (useful) code that would construct them
02:44
<TabAtkins>
Because we fucked up.
02:44
<hax (HE Shi-Jun)>
Rust macros are magic ✨
There are also TS macros (https://github.com/GoogleFeud/ts-macros), maybe not very magic 😅
02:44
<bakkot>
strings at least would not be a problem here
02:44
<rbuckton>
if we just have shared functions of some form, we don't need the prototype hacks
Looking at rslint, it looks like they start up a bunch of threads, parse in parallel, and then transmit the files back. Does std::sync::mpsc::channel clone the sent data, or does it reuse the same memory?
02:44
<bakkot>
infinitely nested iterators just don't exist at all, period

you:

an infinitely-nested iterator will not be able to yield anything

02:45
<bakkot>
if they don't exist, then .flat(Infinity) is not a problem
02:45
<rbuckton>
I'm not that familiar with the Rust API
02:45
<TabAtkins>
Outside of strings, infinitely nested iterators are indeed virtually never a thing.
02:45
<snek>
depends on what you're sending through it
02:45
<Michael Ficarra>
also they're iterable not iterator, so .flat(Infinity) on them wouldn't infinitely recurse
02:45
<TabAtkins>
So yeah, I'm also confused why they're an objection to flat(Infinity)
02:45
<Michael Ficarra>
if they don't exist, then .flat(Infinity) is not a problem
it also has no reason to exist
02:45
<bakkot>
.flat would certainly also work on iterables, like flatMap does?
02:45
<snek>
mpsc is "single consumer" so in most cases it is a move operation (reuses the same memory)
02:46
<Michael Ficarra>
bakkot: except strings lol
02:46
<bakkot>
except strings, yes
02:46
<bakkot>
everyone agrees on that
02:46
<TabAtkins>
Why do you think it has no reason to exist? You seem to dislike infinite flat on arrays too, while I use that more than anything else, so maybe this is a mismatch in expectations?
02:46
<rbuckton>
The issue with using module blocks is that it means the worker has no access to the struct declarations until after they are received from the originating thread, which complicates startup.
02:47
<Michael Ficarra>
TabAtkins: infinitely-nested arrays don't exist because you can't even construct them
02:47
<snek>
i would love if we had sharable funclets or whatever
02:47
<snek>
we can just revive domenic's proposal
02:47
<TabAtkins>
Sure. But indefinitely nested ones do, and that's what I mean by Infinity.
02:47
<Michael Ficarra>
at least you can construct an infinitely-nested iterator, even if you could never iterate it
02:47
<ljharb>
sure you can construct them. const a = []; a.push(a)
02:47
<TabAtkins>
lol indeed
02:48
<ljharb>
people do that a bunch, actually, with objects too, which is why there's npm packages to handle describing or serializing them
02:48
<bakkot>
blöcks
02:48
<rbuckton>
The issue with using module blocks is that it means the worker has no access to the struct declarations until after they are received from the originating thread, which complicates startup.
the DX for that order is terrible. If I want to use a Point in a module in my worker, I can't import it. I have to receive the constructor as an argument.
02:48
<ljharb>
(but nobody would be iterating over those, i expect)
02:48
<snek>
yeah blöcks
02:48
<waldemar>
How does source location work with eval?
02:49
<snek>
source location won't work
02:49
<snek>
that has to change
02:49
<Justin Ridgewell>
How does source location work with eval?
It breaks
02:49
<snek>
otherwise you can't use bundlers
02:49
<Jack Works>
How does source location work with eval?
it does not I believe
02:49
<Jack Works>
but eval has an inferred address, right? like "vm:123"
02:49
<Jack Works>
that is also some kind of source text identity
02:50
<TabAtkins>
And calling .flat(Infinity) on that array just stack overflows immediately. That's fine.
02:50
<nicolo-ribaudo>
Every eval call generates a new source location
02:50
<nicolo-ribaudo>
As if they were different files
02:50
<rbuckton>
Regarding the source location thing, I had also been working on a mechanism that didn't depend on source location, but a correlation step on the worker thread to do setup to associate a foreign struct with a local prototype. The downside to this approach is that it results in multiple "maps" that affect ICs and require fixup.
02:50
<Chris de Almeida>

people do that a bunch

name them; they do not deserve anonymity

02:50
<snek>
the easiest would just be requiring the registration to use a very qualified name
02:51
<ljharb>
the DOM, for objects :-p
02:51
<snek>
like struct Foo registered "dev.snek.library.Foo" {}
02:51
<nicolo-ribaudo>
com.google.workspace.sheets.myStruct
02:51
<ljharb>
puttin the java in javascript
02:51
<waldemar>
Every eval call generates a new source location
So the correlation from the presentation won't work with eval'd code.
02:51
<rbuckton>
the easiest would just be requiring the registration to use a very qualified name
We have also considered that. I suggested it as a mechanism to address the bundling issue.
02:51
<snek>
yeah, exactly like symbols
02:51
<rbuckton>
We have also considered that. I suggested it as a mechanism to address the bundling issue.
The problem is that it's potentially forgeable
02:51
<Chris de Almeida>
why is that text red? 🤔
02:52
<snek>
We have also considered that. I suggested it as a mechanism to address the bundling issue.
yeah if you have main_thread.js and worker_thread.js, its kind of required i think
02:52
<snek>
since those are different source texts
02:52
<rbuckton>
yeah, exactly like symbols
There's a difference. You don't define anything about a symbol except the description.
02:52
<snek>
ye the ultimate key in the agent would be the name + the shape
02:52
<rbuckton>
yeah if you have main_thread.js and worker_thread.js, its kind of required i think
Not necessarily, many bundlers support splitting out common code shared between different entrypoints.
02:53
<rbuckton>
ye the ultimate key in the agent would be the name + the shape
That's interesting.
02:54
<rbuckton>
I'm not sure if it's still not forgeable and thus usable as a communications channel.
02:54
<Michael Ficarra>
it probably contains one of your mention words
02:54
<nicolo-ribaudo>
Exactly, it wouldn't
02:54
<snek>
yeah that's another thing, if you use fast dev mode (eval) your project will break lol
02:55
<rbuckton>
The mechanism I'd been investigating was to correlate struct types between a worker and main thread: https://gist.github.com/rbuckton/08d020fc80da308ad3a1991384d4ff62
02:55
<nicolo-ribaudo>
yeah that's another thing, if you use fast dev mode (eval) your project will break lol
Well in webpack all those eval calls happen just once per file right? So it's ok if they have their own source location
02:55
<littledan>
we can just revive domenic's proposal
how did that solve any of the problems we're discussing here?
02:56
<snek>
it solves sharable code, not other things
02:56
<rbuckton>
it solves sharable code, not other things
Isn't that proposal essentially what module expressions/module blocks is?
02:56
<snek>
it is quite similar yes
02:56
<Chris de Almeida>
hmmm I have zero keywords
02:56
<littledan>
Isn't that proposal essentially what module expressions/module blocks is?
yeah that plus closing over things and structured cloning them
02:57
<snek>
i think both could exist
02:57
<rbuckton>
It also didn't solve the shared code issue with respect to the developer experience.
02:57
<snek>
the main thing is whether you are sharing a module or a callable
02:57
<snek>
i think callables are a lot nicer to work with in many cases
02:57
<snek>
this could be my bias from working with erlang/elixir though
02:59
<nicolo-ribaudo>
I agree that f() is much better than await import(mod).then(m => m.f()), and module expressions are only a viable solution if "prototype calls" don't look like that
02:59
<nicolo-ribaudo>
(specifically, I wouldn't want await)
02:59
<snek>
yeah i can definitely see both existing
03:03
<snek>
ah this is my jam
03:09
<rbuckton>
mpsc is "single consumer" so in most cases it is a move operation (reuses the same memory)
Then that is essentially what I'm already doing. I parse the file in a thread pool thread, which produces a shared AST node that I send back to the main thread. If I had used normal objects sent via postMessage, I'd end up producing an entire copy of the AST in the main thread while the worker ends up holding on to the local copy until some point in the future when GC runs, which increases memory pressure and triggers GC more often, and that would be a huge problem in some very large codebases (it already is, to a degree, so this would just make it worse).
03:14
<rbuckton>
But since I can't attach behavior to a struct, I still end up having to rebuild the AST as normal objects to hand off to the rest of the compiler. Unfortunately, rewriting the entire compiler just to use a data-only AST isn't something feasible in the time I've spent on it so far. And the TSC compiler is mostly written using the FP style. There are too many bits of code that still expect to be able to call .map, .filter, etc. on a NodeArray that wouldn't otherwise be present in a data-only world. And it's not just arrays. We regularly use Map and Set as well.
03:17
<rbuckton>
Switching back to a purely functional approach even for working with arrays is not a reasonable ask, as there is far too much to reasonably change, and would have an oversized impact on TypeScript API consumers. Maybe we pay the cost to speed up tsc command line compilation, but API consumers wouldn't get that benefit.
03:17
<Chris de Almeida>

hmm.. ok, when I view source, I'm in the mentions block:

    "m.new_content": {
      "body": "the DOM, for objects :-p",
      "format": "org.matrix.custom.html",
      "formatted_body": "the DOM, for objects :-p",
      "m.mentions": {
        "user_ids": [
          "@softwarechris:matrix.org"
        ]
      },
03:21
<snek>
i understand what tsc wants to do i just explicitly don't consider it when thinking about how things should be designed, as it is a very "special" codebase.
03:21
<rbuckton>
Just using f(data) glosses over the fact that f isn't going to be that short in the majority of cases. I have too many places where I've had to write something like concurrentMap_set(map, key, value) when it could have been map.set(key, value). I'd much rather we not turn JS into C to use this feature.
03:22
<snek>
like i'd rather have microsoft toil over rewriting tsc than make the design of a feature worse
03:22
<snek>
it is subjective of course
03:22
<rbuckton>
I would consider the inability to attach behavior to a struct a worse outcome, especially because I believe there are reasonable ways to achieve this.
03:23
<snek>
i'm cool with attaching behavior to structs, i just don't like the unshared prototype or whatever its called
03:23
<rbuckton>
What about that is problematic?
03:23
<snek>
the aesthetics
03:24
<rbuckton>
Do you mean the with nonshared prototype;? That's nowhere near final syntax.
03:25
<rbuckton>
That comes from trying to require explicit opt-in for some things so that we leave room for the potential for actual shared code in the future, and not paint ourselves into a corner.
03:25
<snek>
no not the syntax, i'd rather just skip to having actual shared code
03:26
<rbuckton>
We haven't found a workable solution for that.
03:26
<rbuckton>
JS is not threadsafe, and the way functions work today makes them not threadsafe either.
03:28
<snek>
ye i'm aware of the history there, i was around for blocks and such. i just want to spend more brainpower on that instead of the temporary solution
03:28
<rbuckton>
Neither that proposal nor module expressions are actually shared code.
03:29
<rbuckton>
They are essentially just shared source text.
03:29
<snek>
i'm not sure what you mean by shared code then
03:30
<rbuckton>
If you are talking about just having the same method declarations evaluated in two threads, that's exactly what was being proposed.
03:30
<snek>
i am not talking about that
03:31
<snek>
in v8 terminology i'm thinking of something like bytecode being shared between isolates
03:31
<rbuckton>
If you're talking about the main thread sending the declaration to the child thread, that ends up being a terrible developer experience.
03:31
<rbuckton>
bytecode for the function?
03:31
<snek>
for some given source text
03:31
<snek>
it could also include closed over references
03:32
<snek>
maybe another example would be functions in erlang/elixir, though its hard to say exactly what "closed over" means there because everything is COW
03:32
<rbuckton>
I'm not sure how feasible that would be. If you're just talking about untyped, unoptimized code, maybe, but not optimized representations or every deopt and recompile would stop the world.
03:32
<rbuckton>
And if its just untyped, unoptimized code, that could be achieved with what was discussed as well.
03:33
<snek>
yeah i mean its possible it might have to split apart
03:33
<snek>
these are sort of the two states though
03:34
<snek>
like you write let add = magicCodeContainerSyntax(a, b){a + b} and you expect that sending add to other threads and using it is generally a very cheap operation as long as you don't do anything too weird
03:35
<rbuckton>
What was presented was having a single shared struct declaration that was addressable by source location, in which case v8 or another engine could leverage the statically known shape of the declaration to generate the same hidden class for a struct in multiple threads. It could potentially even generate shared function information for all of the methods, and let each thread refine those on a per-thread basis.
03:36
<snek>
yeah that sounds like a reasonable application
03:38
<rbuckton>
This has issues with bundling and tree shaking, which I've talked about with shu and others in #shared-structs:matrix.org, though there is an argument to be made that bundlers could work around this by splitting out structs into a separate file reused by multiple entrypoints, which is something many bundlers can already do.
03:39
<snek>
yeah the addressing needs to be... addressed
03:39
<snek>
but the overall concept is sound i think
03:39
<snek>
both in terms of theory and in terms of implementation
03:40
<rbuckton>
My concern with trying to use something like module blocks to encapsulate shared code in the main thread and send it to the worker is that it becomes difficult to use in the worker. You can't just import your declarations now, you have to receive them as arguments if you want to create new instances of a struct type to send back to the main thread. That wouldn't work for TypeScript, we have too many different AST nodes.
03:40
<rbuckton>
So I would prefer a DX that allows me to use import.
03:41
<snek>
this is similar to elixir. you can send fn() -> MyModule.foo() end to another process (or even another physical computer) but that doesn't mean MyModule is properly imported for you to use it there.
03:41
<danielrosenwasser>
Have people run into issues with bundlers duplicating declarations of classes with private fields?
03:42
<snek>
the issue with bundlers is.... say you have shared_util.js that is imported by both main_thread.js and worker_thread.js
03:42
<snek>
if shared_util is bundled into those two other files, it will have a different source location for each one
03:42
<snek>
which will break the struct addressing
03:42
<snek>
you can leave it split out to fix that
03:43
<snek>
though i'm not personally a fan of that
03:43
<rbuckton>
There is also an argument to be made that I don't want to share bytecode between functions in different threads. One of the biggest issues with worker threads today is that you must essentially duplicate large portions of your application code in memory in the worker. One solution to that would be to use tree shaking to remove unused code, but now anything used by a shared struct method cannot be dropped so I have to pay the cost for unused code.
03:44
<snek>
not sure i'm following what you're saying there
03:46
<rbuckton>
anything duplicated in main_thread.js and worker_thread.js results in that much additional memory taken up by the worker, as it constructs its own copies of every function and object as it evaluates the duplicate code.
03:47
<snek>
i feel like you're saying you want things to be shared but then saying that you don't want things to be shared
03:47
<rbuckton>
If main_thread.js and worker_thread.js both depend on shared_util.js, but each only uses part of that file, tree shaking would elide the unused functionality to remove overhead per-thread.
03:48
<snek>
oh i see, you're saying you don't want the pointer from shared_util to the shared bytecode to hang around if you're not using it
03:48
<snek>
yeah definitely something worth thinking about
03:48
<rbuckton>
That is at odds with what shu presented, and what we've discussed, but is also something I've been considering with the various alternatives we've been working through.
03:49
<snek>
if addressing is not based on source location i think that "just works" with tree shaking
03:49
<danielrosenwasser>
I mean, the downside of this all is polymorphic usage sites, right?
03:49
<danielrosenwasser>
It does not make your code crash
03:50
<snek>
i think the worst case is that the engine has to make a full copy for each thread of whatever the backing representation is (which is just the status quo today)
03:50
<rbuckton>
I mean, the downside of this all is polymorphic usage sites, right?
Yes, the version of the proposal presented today was heavily motivated by reducing polymorphism, but that is at the cost of the potential for increased overhead since tree-shaking becomes limited.
03:52
<rbuckton>
I've suggested a registration mechanism that would be bundler/tree-shaking friendly, but it imposes other limitations to avoid side-channel communication issues related to a shared registry that are also a concern.
03:52
<danielrosenwasser>
So just bundle split?
03:52
<rbuckton>
So just bundle split?
that is what was presented, yes.
03:52
<danielrosenwasser>
this all feels like non-problems
03:53
<snek>
bundle split can be problematic in some cases based on network conditions
03:53
<snek>
as much as we wish everything was http2+ everywhere
03:53
<snek>
its definitely not that bad
03:53
<danielrosenwasser>
but creating a worker off of a separate .js file isn't?
03:54
<snek>
i more just don't like that the feature strongly informs how code can be processed.
03:55
<shu>
this all feels like non-problems
that's my feeling too?
03:55
<rbuckton>

One way to overcome the bundling issue is with a lexical keyword to introduce a shared identity, i.e.:

shared struct S {
  with registration 'e9c1cfb0-e9d6-469c-8fba-99f641e4b64e';
  ...
}
03:55
<rbuckton>
Or a URN, or whatever you like.
03:55
<snek>
yeah that's what i suggested above, except my example was more human-readable lol
03:56
<rbuckton>
We'd discussed it being an API, or just a decorator as well. The problem with an API or a decorator is that it provides a way to communicate between otherwise isolated code.
03:56
<snek>
i'm curious what that communication channel is
03:57
<rbuckton>
If you have a global registry, in which you can insert a keyed element, which throws if the element already existed, then it can be used to communicate.
03:58
<snek>
oh, don't throw
03:58
<rbuckton>
How do you catch developer mistakes?
03:58
<snek>
like if you write out the same definition in two separate files?
03:58
<snek>
and then they fall out of sync?
03:58
<rbuckton>
"use strict" was partly about not silently allowing things that didn't work.
03:58
<snek>
i would consider you to be on your own at that point lol
03:59
<rbuckton>
I'm personally fine with not throwing, but that's also because I work on a type system that could warn you ahead of time about that kind of conflict.
04:00
<snek>
yeah i mean i don't think that's an inherently new problem to structs
04:00
<snek>
we have solutions for it
04:00
<snek>
orthogonal imo
04:03
<rbuckton>
So as not to continue to flood the chat about the shared struct proposal, I'd suggest anyone who is interested in discussing further join #shared-structs:matrix.org or one of the working session meetings in the TC39 calendar.
04:04
<shu>
please do!
04:05
<Rob Palmer>
Type Annotations community call minutes are here: https://github.com/tc39/proposal-type-annotations/issues/184
04:07
<Rob Palmer>
(Specifically the link to the Google Doc)
04:10
<snek>
i like token soup
04:12
<snek>
would be nice to see a future proposal which allows getting the token soup in some way, probably just as a string
04:13
<rkirsling>
when I was skimming the slides before, I assumed "token soup" was intended as an insulting phrase 😅
04:14
<rkirsling>
(like a Perl-y sort of image)
04:15
<bakkot>
what if we made this proposal console-only
04:15
<bakkot>
there was a discussion about having a REPL goal
04:15
<snek>
lmao
04:15
<bakkot>
chrome could just strip TS types in the console
04:16
<bakkot>
I would like that better...
04:16
<snek>
i like repl goal
04:17
<Jack Works>
chrome could just strip TS types in the console
I said the same when it went stage 1
04:18
<snek>

repl goal needs

  • sloppy by default
  • supports redeclarable static import
  • supports redeclarable let/const
  • supports type stripping
  • top level await
  • ???
04:18
<Chris de Almeida>
gentle reminder, please add yourself to the attendee list at the top of the notes if you have not done so. thank you 🙏
04:18
<Jack Works>

repl goal needs

  • sloppy by default
  • supports redeclarable static import
  • supports redeclarable let/const
  • supports type stripping
  • top level await
  • ???
annex J: Repl tool modification of the 262
04:19
<ljharb>
a repl goal that ignores types would be very interesting, and avoids many of the downsides of the current annotations proposal
04:19
<bakkot>
I do not understand the analogy to JSON at all
04:19
<snek>
i feel very undecided about this
04:19
<snek>
i want reflective type annotations so i can write cool things
04:20
<shu>
which champions just said is out of scope
04:20
<danielrosenwasser>
You don't want to send extra bytes for JSON comments over the wire, but it sure would be nice if you had them for configuration readability
04:20
<shu>
the reflection part
04:20
<bakkot>
I think this proposal will impede any reflective type annotations
04:20
<shu>
agreed
04:20
<snek>
i feel like it could come later as like "get the annotation strings"
04:20
<bakkot>
right, but MLS's point was that TS is only useful if you have another tool (which can then do the stripping). that's not true of comments in JSON.
04:20
<snek>
but i also feel like it could make trying to rely on that more hazardous if things are used to stripping them out
04:20
<shu>
i... don't think i can ship that
04:21
<Luca Casonato>
msaboff: total strawman, but could your concern be mitigated by also standardizing the semantics of these type annotations, even if JS engines do not enforce these?
04:21
<bakkot>
have people who are talking about the convenience of not having to strip types used esbuild's watch mode
04:21
<bakkot>
it is so fast
04:21
<bakkot>
you literally do not notice
04:22
<snek>
i... don't think i can ship that
i think it would be fine with lazy re-parsing? but i can imagine it would be annoying
04:22
<shu>
we have different definitions of "fine" i think
04:22
<shu>
it is implementable with re-parsing
04:22
<bakkot>
re: littledan's comment, ok but this proposal does not move to a world where tooling is optional
04:22
<bakkot>
like, at all
04:23
<snek>
i mean v8 already has a reparser for errors, it's just a question of how fast you want your annotations
04:23
<msaboff>
msaboff: total strawman, but could your concern be mitigated by also standardizing the semantics of these type annotations, even if JS engines do not enforce these?
No. There is nothing I see an implementation using TA's for.
04:23
<bakkot>
types do nothing for you without additional tools
04:23
<bakkot>
that is the whole proposal
04:23
<Jack Works>
let's have annex J developer mode to contain all of those things
04:23
<shu>
bakkot: perhaps you should express that on the queue
04:23
<bakkot>
the queue is so full
04:23
<shu>
i think it would be valuable to demonstrate that there are skeptics who are not just implementers
04:23
<msaboff>
the queue is so full
There is a 60 minute timebox.
04:23
<bakkot>
bleh
04:23
<bakkot>
ok
04:25
<TabAtkins>
(I do have concerns with the current syntax proposal, and think it needs to be more generic/soupy rather than invasive into the grammar, but it's still absolutely the case that some syntax that's valid (and erased) at runtime is useful.)
04:26
<Luca Casonato>
No. There is nothing I see an implementation using TA's for.
I don't understand what you mean. I am not suggesting runtime implementations of ECMA262 do anything with TA's other than strip them. But we could standardize how tooling itself is meant to interpret the grammar and perform the type checking (so there would be type-checking implementations such as TypeScript, and runtime implementations such as JSC / V8 / SM)
04:27
<Christian Ulbrich>
The impact is quite high; if we follow this route, we are effectively defining the grammar for types for JavaScript once and for all.
04:27
<snek>
i mean if we have token soup we aren't defining any grammar for types
04:27
<snek>
other than :
04:27
<ljharb>
The impact is quite high; if we follow this route, we are effectively defining the grammar for types for JavaScript once and for all.
which is a pretty permanent thing to do when there isn't even a type system existing that can accurately describe the entire language.
04:27
<rkirsling>
I think I've gotten a bit confused here as to whether this information is meant to be used statically by engines
04:27
<bakkot>
i mean if we have token soup we aren't defining any grammar for types
well, assuming we also don't have interface and etc...
04:28
<bakkot>
and type
04:28
<msaboff>
So what do you expect an implementation doing with TA's besides throwing them away?
04:28
<bakkot>
and all the other TS keywords
04:28
<bakkot>
but of course if we don't have all the other TS keywords, it does not accomplish the goal of letting people ship TS at runtime
04:28
<rkirsling>
because I thought the answer was an obvious "yes" but maybe it's actually "no" and we're just demarcating a section to be ignored by an engine's parser?
04:28
<TabAtkins>
So what do you expect an implementation doing with TA's besides throwing them away?
That is the expectation, yes.
04:28
<Luca Casonato>
So what do you expect an implementation doing with TA's besides throwing them away?
A runtime implementation should do nothing other than to throw them away.
04:28
<Christian Ulbrich>
which is a pretty permanent thing to do when there isn't even a type system existing that can accurately describe the entire language.
I am with you, I have the same feeling that finding a type system that can actually describe all of JavaScript's types might be difficult. Although there surely would be some kind of any, wouldn't there? :)
04:29
<msaboff>
A runtime implementation should do nothing other than to throw them away.
So I think you agree with the comment I made that started this thread.
04:29
<bakkot>
because I thought the answer was an obvious "yes" but maybe it's actually "no" and we're just demarcating a section to be ignored by an engine's parser?
it's the second thing definitely
04:29
<bakkot>
engines cannot use TS types for several reasons
04:29
<ljharb>
I am with you, I have the same feeling, that finding a type system, that can actually define all of JavaScript's types might be difficult. Although there surely would be some kind of any wouldn't it? :)
nope, but unknown, yes. maybe there is no such system, but until there is, why would we want to lock ourselves in
04:30
<rkirsling>
it's the second thing definitely
that at least clarifies for me why "token soup" is meant as a neutral-sentiment phrase
04:33
<bakkot>
ok I also want to know the answer to WH's question though
04:33
<Luca Casonato>
So I think you agree with the comment I made that started this thread.

Ok, let me rephrase the original strawman then:

  • You do not like that a specific tool or set of tools is picked as the "blessed" syntax, I assume (please tell me if I am wrong) because we (TC39) as the body standardising the grammar, do not have control over how tools interpret this syntax, effectively delegating a large amount of syntax semantics to a different body (TypeScript / flow / whoever we "bless")
  • I am suggesting this may be mitigated by moving the semantics that a static checker should interpret the syntax with into the scope of TC39, possibly into ECMA262. This way TC39 keeps this control
  • Runtime implementations of ECMA262 will totally ignore these type annotations. The semantics, if we define them, would only be enforced by other ahead of time static analysis tools
04:33
<bakkot>
is the goal that in 10 years every TS user will write only the subset of TS which is legal ES?
04:34
<TabAtkins>
I suppose that's up to TS and its users, but probably yes.
04:34
<TabAtkins>
I don't understand what you mean, shu.
04:34
<Jack Works>
is the goal that in 10 years every TS user will write only the subset of TS which is legal ES?
JSX:
04:35
<bakkot>
yeah JSX seems like that makes it obviously not-gonna-happen
04:35
<shu>
TabAtkins: maybe i misread what bakkot said, let me re-read
04:35
<TabAtkins>
Python's type syntax is also (a) completely meaningless/erased at runtime, and (b) different tools use different type syntaxes for their purposes, and (c) they all stick within the broad constraints of valid Python annotations, tho.
04:35
<bakkot>
shu: sorry, there was an implicit "in ten years, assuming we do this proposal as standard ES"
04:35
<TabAtkins>
Don't see why that doesn't apply just as well to JS.
04:35
<shu>
TabAtkins: ha indeed i did, i read "10 years every" as "every 10 years"
04:35
<TabAtkins>
ahaha
04:35
<msaboff>

Ok, let me rephrase the original strawman then:

  • You do not like that a specific tool or set of tools is picked as the "blessed" syntax, I assume (please tell me if I am wrong) because we (TC39) as the body standardising the grammar, do not have control over how tools interpret this syntax, effectively delegating a large amount of syntax semantics to a different body (TypeScript / flow / whoever we "bless")
  • I am suggesting this may be mitigated by moving the semantics that a static checker should interpret the syntax with into the scope of TC39, possibly into ECMA262. This way TC39 keeps this control
  • Runtime implementations of ECMA262 will totally ignore these type annotations. The semantics, if we define them, would only be enforced by other ahead of time static analysis tools
Luca, I was only making the last point in my comment here.
I do have other concerns related to the first two points you list.
04:36
<TabAtkins>
I, for example, almost certainly would not use typed Python if I had to run a build step that produced my actual Python files. The fact that my code is runtime-valid is important.
04:37
<shu>
that seems empirically just not borne out by the vast majority of JS authors
04:37
<TabAtkins>
(JS devs appear to have way more appetite for build pain than I do, on average. But still.)
04:37
<shu>
like, everyone uses toolchains
04:37
<bakkot>
this proposal is predicated on the assumption that you are using a toolchain!!!
04:37
<shu>
and, as kevin said, this proposal itself presupposes you already use TS
04:37
<Luca Casonato>
Luca, I was only making the last point in my comment here.
I do have other concerns related to the first two points you list.
Ok, thank you - I'd be interested to learn more about your reasoning for not liking that a specific tool or set of tools is picked as the "blessed" syntax, as that may help find alternative solutions
04:37
<TabAtkins>
That you use a toolchain for production. Not that you use a toolchain to make your code run at all, necessarily.
04:38
<shu>
let me be frank
04:38
<bakkot>
you do typechecks at devtime
04:38
<bakkot>
all of the benefit of typechecks is at devtime
04:38
<TabAtkins>
Like I said, it's the same as minifying being a good thing to do for production, but we don't require you to write minified code to execute locally.
04:38
<Luca Casonato>
you do typechecks at devtime
No, you have your editor do this
04:38
<bakkot>
that is a tool
04:38
<Luca Casonato>
You do not run tsc during dev mode
04:38
<bakkot>
that
04:38
<bakkot>
you are using
04:38
<bakkot>
at devtime
04:38
<shu>
this proposal is asking for engines to ship everything, to everyone, all the time, for a thing that is purportedly not done in production, on the promise that users will also manually strip stuff away in production
04:38
<ljharb>
you should definitely be running tsc in CI before you merge in code
04:38
<shu>
what's in it for me? why would i support this?
04:38
<msaboff>
Ok, thank you - I'd be interested to learn more about your reasoning for not liking that a specific tool or set of tools is picked as the "blessed" syntax, as that may help find alternative solutions
I don't like that TC39, an open standard, is blessing possibly one syntax when there are several other possible syntax options.
04:38
<rbuckton>
i.e., if Type Annotations advances to Stage 3/4, TypeScript would also adopt f::<T>() for type argument lists. We likely would not ban f<T>() in a .ts file, but I imagine the community would migrate to the new syntax just as they did for as when we added it to avoid conflicts with JSX.
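For context on why plain f<T>() is hard: in today's grammar that token sequence is already a valid expression, so a parser can't tell a type-argument list from comparisons without unbounded lookahead. A minimal sketch (the ::< opener is the idea floated above, not existing syntax):

  const f = 1, T = 2, x = 3;
  // `f < T > (x)` is legal JS today and means `(f < T) > (x)`; TypeScript rejects
  // boolean > number, so the coercion is spelled out explicitly here:
  console.log(Number(f < T) > x);  // false, i.e. what plain JS evaluates for f<T>(x)
  // An explicit opener such as `f::<T>(x)` (hypothetical) would let a parser commit to a
  // type-argument list as soon as it sees `::<`, with no cover grammar or lookahead.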
04:38
<TabAtkins>
Yes, I do typechecks at devtime. But typechecking is different than executing. Being able to execute my code directly, and then typecheck at some convenient point, is nice.
04:39
<shu>
all other things being equal, i agree it is nice
04:39
<shu>
there are real costs here
04:39
<nicolo-ribaudo>
Right now on the Babel codebase we use ESLint to lint, TSC to type-check, Babel to strip away types, and Rollup to bundle. ESLint and TSC are run transparently by my editor and on CI, Rollup only happens on CI, and the only step between me writing the code and testing it in Node.js is to compile away types
04:40
<Luca Casonato>
I don't like that TC39, an open standard, is blessing possibly one syntax when there are several other possible syntax options.
Ok, thank you - but isn't this something we always do when adding new syntax? We bless one syntax that we decide aligns best with the needs of the ecosystem and the direction we want to take the language? How is this syntax specifically different?
04:40
<shu>
no, we usually invent new syntax from whole cloth
04:40
<bakkot>
node could choose to strip types? but also, esbuild --watch is like
04:40
<shu>
we look at prior art, but not literally pick existing syntax
04:40
<bakkot>
not a big burden, given that existing stack
04:40
<nicolo-ribaudo>
node could choose to strip types? but also, esbuild --watch is like
Well, browsers can strip types too
04:40
<rkirsling>
wait that was just "token soup" as a negative phrase from Eemeli just now
04:41
<shu>
Well, browsers can strip types too
by slowing down parsing for everyone
04:41
<nicolo-ribaudo>
by slowing down parsing for everyone
Would it also slow down parsing for code not using type annotations?
04:41
<shu>
yes!
04:41
<TabAtkins>
shu: Yeah, I think the final syntax needs to be something super cheap to parse. The current proposal isn't it imo.
04:41
<msaboff>
Would it also slow down parsing for code not using type annotations?
Likely YES
04:42
<shu>
especially likely if there's cover grammar-like shenanigans involved
04:42
<TabAtkins>
yup
04:42
<Luca Casonato>
we look at prior art, but not literally pick existing syntax
Well but that is what we are doing here too? We don't literally pick TS syntax, we pick syntax that is similar to TypeScript to ease migration (we did this before with decorators), but ultimately it's a subset that is easier to parse and unambiguous. I don't see this as standardising TypeScript, I see this as standardising a syntax space for types that looks similar to TypeScript (inspiration), because it's what folks in the ecosystem are familiar with
04:42
<rbuckton>
The advantage of "token soup" is that typed languages can continue to innovate within that space without requiring advancing a proposal through TC39 for what is otherwise just a comment.
04:42
<nicolo-ribaudo>
I wonder if there is some perf goal we could aim at. Such as "no cover grammar needed unless there is a token clearly introducing a type context"
04:42
<snek>
the slow part of the annotations is single-char stepping. the slow part of the not-annotations is the other weird complexity layered on top.
04:43
<bakkot>
The advantage of "token soup" is that typed languages can continue to innovate within that space without requiring advancing a proposal through TC39 for what is otherwise just a comment.
but it would not let them do stuff like satisfies, etc
04:43
<bakkot>
TS introduces new keywords... pretty often
04:43
<ljharb>
The advantage of "token soup" is that typed languages can continue to innovate within that space without requiring advancing a proposal through TC39 for what is otherwise just a comment.
there are very few things in this space that wouldn't benefit from going through tc39
04:43
<nicolo-ribaudo>
Or if we managed to entirely avoid new cover grammars, so that the check is just "is the next token a ::</:?"
04:43
<shu>
I wonder if there is some perf goal we could aim at. Such as "no cover grammar needed unless there is a token clearly introducing a type context"
i think that assumes that you've already convinced engines that it's worthwhile to add anything here, and i think we are not yet convinced. at least not myself and michael
04:44
<rbuckton>
but it would not let them do stuff like satisfies, etc
True, but I think that's an acceptable tradeoff.
04:45
<shu>
though to be clear i do not share msaboff's concerns about picking winners as strongly
04:46
<snek>
i think where i am ending up here is that if we can't get any runtime benefit here, runtimes shouldn't deal with this, and it shouldn't move forward
04:46
<snek>
and its pretty clear at this point that there will be no runtime benefit
04:46
<rkirsling>
I just think there's a lot of terribly misleading verbiage surrounding "runtime" and erasure
04:46
<rkirsling>
every engine has a parser for early errors
04:47
<snek>
"runtime benefit" in my sentence is anything from engines using the type annotations to make code faster to simple string reflection
04:47
<rkirsling>
if you just read this presentation you would think that engines will be using this information statically and then throw it away at runtime
04:48
<rkirsling>
and if that's impossible (which it seems like it is?) then this seems just bad...?
04:49
<TabAtkins>
Again, these arguments generally would argue against Python's type annotations as well.
04:49
<rkirsling>
but mostly I'm just concerned about how confused I've felt.
04:49
<TabAtkins>
Make sure you actually mean for your counterargument to be that strong.
04:49
<snek>
Again, these arguments generally would argue against Python's type annotations as well.
python type annotations are reflectable
04:49
<snek>
and this property is used to huge advantage in the ecosystem
04:49
<msaboff>
"runtime benefit" in my sentence is anything from engines using the type annotations to make code faster to simple string reflection
So what should an implementation do if the type is violated while processing? Throw?
04:49
<TabAtkins>
Mehhhhh the reflection barely matters if at all.
04:49
<ljharb>
also python is categorically different than JS, because The Web
04:50
<Rob Palmer>
there are very few things in this space that wouldn't benefit from going through tc39
Zooming out (and ignoring the specifics of soup vs concrete grammar), one of the whole motivations of this proposal is to ensure coordination of real-life type syntax with tc39. By default, if we do nothing, type syntax will iterate outside tc39, which brings coordination risk.
04:50
<TabAtkins>
mypy doesn't execute python and inspect the reflection, it parses itself
04:50
<snek>
So what should an implementation do if the type is violated while processing? Throw?
i don't think implementations should try to do it. i'm just saying, if everything we can possibly imagine is off the table, then i don't think its worth it
04:50
<Michael Ficarra>
Python also isn't shipped to me over the network every time I want to run it?
04:50
<TabAtkins>
sure, i run a build step on my python when i ship a new version to pypi
04:50
<snek>
Python also isn't shipped to me over the network every time I want to run it?
one would hope
04:50
<danielrosenwasser>
just wait
04:53
<snek>
Mehhhhh the reflection barely matters if at all.
there are several major libraries that make use of it. some of them are for dependency injection, which i find distasteful, but it's still a thing. there are other cool examples like discord.py which uses annotations to build string parsing. i'm talking about packages with hundreds of thousands of downloads per day minimum here.
04:54
<TabAtkins>
ok sure, annotations for non-type-checking reasons i guess. I've never used these because i use the annotations for types instead.
04:54
<TabAtkins>
(you can only do one or the other, I believe)
04:55
<snek>
yeah i agree there
04:56
<bakkot>
to be clear, I don't claim that there are zero projects which are both complex and would benefit from this
04:56
<waldemar>
The advantage of "token soup" is that typed languages can continue to innovate within that space without requiring advancing a proposal through TC39 for what is otherwise just a comment.
As I showed in the presentation earlier in the year, that doesn't actually work. For example, you can't even do something as trivial as using existing ECMAScript expression syntax to calculate a constant inside a token soup.
04:56
<Jack Works>
so, is it really impossible to bring JSX into the language?
04:57
<ljharb>
i think one of the main challenges is how you can lexically indicate what the syntax is equivalent to - ie React.createElement, h, etc
04:58
<ljharb>
otherwise it'd just need to be hardcoded to make an object literal of some kind
04:58
<snek>
yeah it would have to be pojo
04:58
<Jack Works>
i think one of the main challenges is how you can lexically indicate what the syntax is equivalent to - ie React.createElement, h, etc
of course our own representation and ui library comes to support us
04:58
<snek>
yeah it would have to be pojo
or curried
04:58
<snek>
(<p>hi</p>)(h)
04:59
<snek>
is bloomberg going to get rid of tools?
05:00
<Jack Works>
run code in the void
05:00
<shu>
i got bad news for you if you think JS is directly executed by the metal
05:00
<ljharb>
one of the stated goals tho is "unfork the ecosystem"; "unfork" means 0 fork, not "less forked"
05:01
<ljharb>
"spork the ecosystem" doesn't have the same ring to it
05:01
<shu>
ljharb: this is actually a new fork!
05:01
<snek>
critical fork
05:01
<danielrosenwasser>
i got bad news for you if you think JS is directly executed by the metal
even the array accesses? 🥺 /s
05:01
<snek>
especially the array accesses
05:01
<shu>
👉️🥺👈️
05:02
<snek>
is that you putting your fingers in your ears
05:02
<danielrosenwasser>
it can be that too
05:02
<shu>
no i wanted to make that meme where the fingers are together
05:02
<snek>
lol
05:02
<shu>
dunno how to make that on one line
05:02
<ljharb>
A+ reference
05:02
<Jack Works>
one of the stated goals tho is "unfork the ecosystem"; "unfork" means 0 fork, not "less forked"
let's bring JSX in!
05:02
<Rob Palmer>
is bloomberg going to get rid of tools?
I hope not or I'll be out of a job.
05:02
<Luca Casonato>
danielrosenwasser: we're very interested in more devtools collaboration with TypeScript :)
05:02
<danielrosenwasser>
i am a little bit disappointed that we didn't get to discuss topics around in-browser devtools more
05:03
<shu>
i am a little bit disappointed that we didn't get to discuss topics around in-browser devtools more
i remain of the opinion the most highly impactful thing is that, exactly
05:03
<danielrosenwasser>
That was something that flew by the queue a few times
05:03
<shu>
not changing the standard
05:03
<danielrosenwasser>
Then I'd love to talk about that too
05:03
<snek>
the coolest thing about devtools changes is that you can just pr them
05:03
<shu>
(i say that as a non-devtools representative)
05:03
<snek>
you don't need plenary approval
05:04
<shu>
(i say that as a non-devtools representative)
but i have talked with them about it!
05:04
<snek>
bradley and i talking to v8 about devtools improvements in node is where the "repl goal" stuff came from
05:05
<snek>
it goes full circle
05:05
<bakkot>

so let me try to summarize my thoughts:

  • I agree there would be some benefit for at least some projects, including both beginners and complex projects

  • I am not convinced that in fact most people will be able to skip the build step: this proposal does not cover all of TS as-is, and definitely won't cover JSX

  • and this proposal will constrain the growth of TS, e.g. no new satisfies-like keywords, which TS has wanted to add lots of historically

  • the thing where you want to copy-paste code from your editor to the browser console is very real, but could be solved just in the browser console

  • the grammar in this proposal is massive, which will be a lot of work (and runtime cost) to implement, reason about, and maintain

  • given all that, the costs do not seem worth the benefits, to me.

05:05
<littledan>
BTW we worked on presenting the motivation in this presentation: https://onedrive.live.com/view.aspx?resid=5D3264BDC1CB4F5B!5615&authkey=!ADFHxW4BxojKreE
05:05
<Chris de Almeida>
I probably don't need to say it, buuuut: for the type annotations proposal, our meeting notes, including summary+conclusion get referenced quite a lot, so we should collectively take extra care to clean them up, ensure accuracy, etc
05:06
<rkirsling>
also like, "runtime" as a count noun is one of the worst words our industry has ever devised
05:06
<littledan>
bradley and i talking to v8 about devtools improvements in node is where the "repl goal" stuff came from
right, I still hope we can get back around to the repl goal proposal...
05:07
<littledan>
and it's disappointing that people seem to be imagining that we'd never have alignment at that level
05:07
<snek>
yeah i think we can define some minimum behavior pretty easily. just need people who want to spend time on it
05:07
<snek>
....and also have time to spend on it
05:08
<Jack Works>

so let me try to summarize my thoughts:

  • I agree there would be some benefit for at least some projects, including both beginners and complex projects

  • I am not convinced that in fact most people will be able to skip the build step: this proposal does not cover all of TS as-is, and definitely won't cover JSX

  • and this proposal will constrain the growth of TS, e.g. no new satisfies-like keywords, which TS has wanted to add lots of historically

  • the thing where you want to copy-paste code from your editor to the browser console is very real, but could be solved just in the browser console

  • the grammar in this proposal is massive, which will be a lot of work (and runtime cost) to implement, reason about, and maintain

  • given all that, the costs do not seem worth the benefits, to me.

this captures my main concerns
05:09
<snek>
yeah same, modulo reflection
05:12
<Mathieu Hofman>
Why don't we just define a binary AST and require all code to be compiled to that?
05:12
<snek>
lol
05:12
<shu>
i'd like that
05:13
<shu>
are you saying that'd be a bad world
05:13
<snek>
did facebook give up on binast
05:13
<snek>
i haven't heard anything about it in years
05:13
<rbuckton>
Wasn't there already an attempt to propose a binary AST?
05:13
<shu>
yes, that was me
05:14
<ljharb>
https://github.com/tc39/proposal-binary-ast
05:14
<msaboff>
Some people were against it 😉
05:14
<snek>
what if the binast was s expressions, and it had 4 primitive types and a small memory buffer, and used a harvard architecture...
05:15
<nicolo-ribaudo>
what if the binast was s expressions, and it had 4 primitive types and a small memory buffer, and used a harvard architecture...
What if we got rid of dynamic types
05:16
<TabAtkins>
The moment I used Decimal seriously, the first thing I'd do is write a template function that just does numeric parsing.
05:16
<Mathieu Hofman>
My point is, if we expect all authored code to go through some tool, just go all the way and make the output of that tool something completely different from what was authored. I believe none of the proposals around that have ever considered making that a requirement, i.e. that if the author wrote valid JavaScript, it can still be executed as-is
05:16
<TabAtkins>
decmath`${price} + ${tax}`
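As a rough illustration of the tagged-template idea (decmath, Dec, and the supported grammar are all made up here; a real library would parse full expressions):

  // A stand-in Decimal built on bigint "cents", just to keep the sketch self-contained.
  class Dec {
    constructor(readonly cents: bigint) {}
    static parse(s: string): Dec {
      const [i, f = ""] = s.split(".");
      return new Dec(BigInt(i) * 100n + BigInt((f + "00").slice(0, 2)));
    }
    add(other: Dec): Dec { return new Dec(this.cents + other.cents); }
    toString(): string { return `${this.cents / 100n}.${(this.cents % 100n).toString().padStart(2, "0")}`; }
  }

  // Hypothetical tag: evaluates `${a} + ${b}` exactly, never round-tripping through binary floats.
  function decmath(strings: TemplateStringsArray, ...values: Dec[]): Dec {
    if (strings.join("").trim() !== "+") throw new Error("sketch only handles `a + b`");
    return values[0].add(values[1]);
  }

  const price = Dec.parse("19.99"), tax = Dec.parse("1.60");
  console.log(String(decmath`${price} + ${tax}`)); // "21.59"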
05:16
<littledan>

and this proposal will constrain the growth of TS, e.g. no new satisfies-like keywords, which TS has wanted to add lots of historically

This part is definitely on the champions' radar. One path, at least at first, is that TS could add satisfies without it becoming JS; it's just that they would initially have a worse developer experience (working only in .ts and not in .js) until a version of that feature (possibly with tweaked syntax) got added to the JS grammar. But hopefully we can figure out how to reserve a larger space of grammar such that new features can fit into it. We haven't figured out what that space should be yet, though.

05:16
<Michael Ficarra>
we should just have primitive decimals, this makes no sense
05:17
<snek>
decmath${price} + ${tax}
it would get a lot of downloads
05:17
<ljharb>
no sense at all
05:17
<littledan>
we should just have primitive decimals, this makes no sense
This is something we can discuss in more detail if we get around to "operator overloading withdrawal"; let's focus on other parts of this presentation for now.
05:18
<ljharb>
it's pretty important to focus on this part now, i think
05:18
<msaboff>
Then we wouldn't have ES262.
05:19
<littledan>
it's pretty important to focus on this part now, i think
well, this isn't going for advancement... I agree it's an important topic but I wanted to give space for other people to give their presentations.
05:19
<littledan>
Lots of the stuff in this presentation is just orthogonal from whether it's a primitive
05:19
<ljharb>
my sense is that that stuff is irrelevant if it's not a primitive.
05:19
<nicolo-ribaudo>
Then we wouldn't have ES262.
(it was more fun before editing, because the ES262 spec is statically typed 😛)
05:19
<Michael Ficarra>
yes and lots of it doesn't matter at all if nobody is going to use it because it's so unergonomic
05:20
<msaboff>
(it was more fun before editing, because the ES262 spec is statically typed 😛)
Happy to make you laugh.
05:20
<Chris de Almeida>
IIRC there are still concerns about using IEEE 754 at all
05:21
<Michael Ficarra>
IEEE-754 Decimal128 is the obviously correct choice here
05:21
<littledan>
IIRC there are still concerns about using IEEE 754 at all
yeah, this is an example of something interesting to discuss which isn't about whether it's a primitive
05:21
Michael Ficarra
points to the sign: https://speleotrove.com/decimal/decifaq4.html#signif
05:21
<Jack Works>
D1
05:21
<snek>
if its not a primitive is there any benefit to it existing in the language itself
05:22
<ljharb>
that's my queue item
05:22
<snek>
just perf optimization?
05:22
<bakkot>

so let me try to summarize my thoughts:

  • I agree there would be some benefit for at least some projects, including both beginners and complex projects

  • I am not convinced that in fact most people will be able to skip the build step: this proposal does not cover all of TS as-is, and definitely won't cover JSX

  • and this proposal will constrain the growth of TS, e.g. no new satisfies-like keywords, which TS has wanted to add lots of historically

  • the thing where you want to copy-paste code from your editor to the browser console is very real, but could be solved just in the browser console

  • the grammar in this proposal is massive, which will be a lot of work (and runtime cost) to implement, reason about, and maintain

  • given all that, the costs do not seem worth the benefits, to me.

also, it has been my experience that many of the reported problems with "having to compile typescript" are actually different problems than "I have to run another tool". they're things like "I want to dual-emit CJS and ESM" or "the module resolution in my version of typescript is different from in node" or "module interop is broken" or things like that. and to the extent that this proposal helps those problems, it does so only because it forces you to make TS's resolution-and-etc match your other tools. but that is a tooling problem, not a fundamental problem with having a strip-types step in your build process, and TS has lately been making some improvements in that area already
05:22
<TabAtkins>
(Matrix does not correctly handle multi-ticks, don't even bother trying.)
05:22
<ljharb>
TabAtkins: `yes it does` but you may have to "hold it correctly" tbf
05:22
<snek>
like i think performance can be a valid reason
05:22
<snek>
even if its not a primitive
05:22
<snek>
but i'd wanna see very very good performance
05:22
<TabAtkins>
correctness is the main reason (but i agree that ergonomics is p important)
05:23
<bakkot>
so I worry that a lot of people are experiencing "compiling with typescript is painful" and assuming that types-as-comments will fix their problems, when those are in fact different problems
05:23
<TabAtkins>
Like the "lossless division with fixed precision" isn't trivial to write yourself.
05:23
<ljharb>
luckily there's packages that implement it already, right?
05:23
<Michael Ficarra>
it would have perf footguns though, like I'd have to do manual interning to avoid repeated string parsing snek
05:24
<TabAtkins>
To the best of my knowledge, not really.
05:24
<Chris de Almeida>
so I worry that a lot of people are experiencing "compiling with typescript is painful" and assuming that types-as-comments will fix their problems, when those are in fact different problems
has this been a common refrain?
05:24
<bakkot>
"compiling with typescript is painful" is common, definitely
05:24
<bakkot>
and upon investigating that turning out to be down to moduleResolution differences or CJS/ESM or whatever is also common IME
05:24
<Chris de Almeida>
I mean the idea that types annotations is the panacea
05:24
<Christian Ulbrich>
Me, as the poor guy writing software, I do not care about performance, but about having something that is so often needed simply being built into the language, so I do not have to rely on third-party implementations. I think that is also the reason why everyone is waiting for Temporal.
05:24
<TabAtkins>
I will say, even if we didn't get operator overloading, a literal syntax would be super helpful without invoking perf issues.
05:25
<TabAtkins>
Christian Ulbrich: YES. More batteries included please.
05:25
<ljharb>
temporal is about dates/times tho, not numbers
05:25
<TabAtkins>
Decimal("1.2") is so obnoxious vs 1.2m
05:25
<bakkot>
I mean the idea that types annotations is the panacea
ah, I don't know for sure. I expect that many of the random developers who are excited about the proposal are excited because they think it will fix their problems, but their problems are in fact other things like ESM or whatever..
05:26
<snek>
1.2m is pretty motivating i think
05:28
<Christian Ulbrich>
ljharb: Temporal is about giving JS proper date handling. There are other libraries out there that could at least theoretically do the same, yet we do Temporal.
05:28
<Jack Works>
Decimal("1.2") is so obnoxious vs 1.2m
you forgot new, it's new Decimal("1.2")
05:28
<nicolo-ribaudo>
How do you feel about syntax for decimals but not primitives? Like regexes
05:28
<TabAtkins>
Jack Works: I mean, it doesn't need to be. We can define it to not require the new. ^_^
05:28
<bakkot>
How do you feel about syntax for decimals but not primitives? Like regexs
1.2m !== 1.2m will trip everyone up forever
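The footgun is the same one boxed Number values already have: === compares object identity, not value. A small sketch with a hypothetical object-valued Decimal:

  console.log(new Number(1) === new Number(1)); // false today: two distinct objects

  // If 1.2m desugared to constructing an object (hypothetical), the same applies:
  class Decimal { constructor(readonly value: string) {} }
  const a = new Decimal("1.2"), b = new Decimal("1.2");
  console.log(a === b);             // false: compared by identity
  console.log(a.value === b.value); // true: an explicit equals/compare API would be needed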
05:29
<TabAtkins>
While I really want operator overloading, I'd accept it without. And ignoring that, I don't care about primitive vs object.
05:29
<bakkot>
regexps you don't do arithmetic / comparisons / etc on
05:29
<Jack Works>
Jack Works: I mean, it doesn't need to be. We can define it to not require the new. ^_^
I want to keep the no new version for primitives. browsers don't want it for now, but I want to keep the hope
05:29
<Jack Works>
1.2m !== 1.2m will trip everyone up forever
for the same reason, I also require that 1.2m not be a thing before primitives happen
05:30
<Jack Works>
if we have primitives in the future, what we have today becomes boxed primitive, and it will not create a breaking change
05:31
<TabAtkins>
1.2m !== 1.2m: clearly just make valueOf throw, then you can't do it.
05:31
<snek>
because temporal pulls in like 70mb of data
05:31
<Jack Works>
like we don't have number, but new Number('1').add(new Number('2'))
05:31
<bakkot>
1.2m !== 1.2m: clearly just make valueOf throw, then you can't do it.
!== does not use valueOf, thank goodness
05:32
<TabAtkins>
gosh dang it
05:32
<Jack Works>
1.2m !== 1.2m: clearly just make valueOf throw, then you can't do it.
previously-throwing code that no longer throws is not a breaking change in tc39 IMO
05:32
<TabAtkins>
Right!
05:32
<TabAtkins>
I was just half-jokingly saying that having the equality test throw would avoid people accidentally relying on it being always-false.
05:33
<shu>
my brother are you asking me to add a typeswitch to === for certain object types and to add a throw
05:33
<nicolo-ribaudo>
I was just half-jokingly saying that having the equality test throw would avoid people accidentally relying on it being always-false.
Making === throw has the same perf impact as making it work properly: you have in both cases exactly one more branch
05:34
<snek>
is this proposing that we would use d128
05:34
<TabAtkins>
d128 vs bigdec is still an open question
05:34
<Jack Works>
(new Number(1) === new Number(1) is also false today, so I'm very optimistic that Decimal can add the boxed primitive first, then if engines see it is widely used they can add primitives later)
05:34
<bakkot>
Making === throw has the same perf impact as making it work properly: you have in both cases exactly one more branch
it's actually worse I think - throwing operators are special, can't be optimized the same as non-throwing operations
05:35
<ljharb>
=== must never, ever throw, that's a hill i will gladly die on
05:35
<ljharb>
IE 8 or 9 did that on one thing and it's very very bad
05:35
<TabAtkins>
Ah, if it's cheaper to just do the comparison right then that's easy, ship it shu
05:35
<snek>
throwing doesn't really make surrounding code slower anymore. there are certainly some weird cases but in general everyone is on the unwinding train these days i think
05:35
<shu>
they constrain optimizability
05:36
<Michael Ficarra>
anyone questioning Decimal128, please read https://speleotrove.com/decimal/decifaq4.html#signif
05:36
<shu>
you can't DCE stuff that might throw as easily
05:36
<rkirsling>
consider https://en.cppreference.com/w/cpp/language/noexcept_spec
05:36
<shu>
(or do other analysis)
05:36
<Jack Works>
consider https://en.cppreference.com/w/cpp/language/noexcept_spec
how is noexcept in C++ enforced?
05:36
<bakkot>
panic
05:36
<rkirsling>
?
05:37
<snek>
noexcept is enforced because unwinding is part of the spec
05:37
<Jack Works>
oh
05:37
<bakkot>
if a noexcept method throws, it panics the whole program
05:37
<Jack Works>
window.onerror = () => window.close()
05:37
<bakkot>
:D
05:37
<shu>
sick
05:37
<snek>
window.onerror = () => ud2
05:37
<shu>
this is the kind of performance trick i would like to subscribe to in a newsletter
05:38
<Mathieu Hofman>
ah, I don't know for sure. I expect that many of the random developers who are excited about the proposal are excited because they think it will fix their problems, but their problems are in fact other things like ESM or whatever..
Our problem is none of that. It's simply that writing JSDoc is terrible DX, but in our case it's still preferable to dealing with generated code. And from what I've seen, we're not the only project with that experience.
05:39
<bakkot>
I agree that at least some projects would in fact benefit, yes
05:39
<bakkot>
I certainly do not claim that all of the random developers who are excited about the proposal in fact have a different problem
05:39
<bakkot>
just that I expect many of them do
05:39
<Jack Works>
try ts-node, ts-node can be configured to skip type checking and even use swc to transpile
05:40
<snek>
i should poll at my company about this type annotation thing
05:40
<snek>
i don't think i've ever had the stated problem
05:41
<snek>
pasting typescript into devtools or whatever
05:41
<snek>
or maybe i'm so used to it i don't notice
05:41
<ljharb>
ts-node has a repl
05:41
<Michael Ficarra>
I agree, the correctness win here is the higher-order bit
05:41
<snek>
we did just switch to rspack this morning, so tool time is definitely a thing some people are thinking about
05:43
<Mathieu Hofman>
try ts-node, ts-node can be configured to skip type checking and even use swc to transpile
Not all our code runs on node ;) there are other JS engines and environments!
05:44
<Michael Ficarra>
littledan: please don't jump the queue like that
05:44
<snek>
tschromium
05:45
<ljharb>
which other ones can't be made to transpile TS on the fly somehow?
05:46
<Michael Ficarra>
sffc: PLEASE, I beg you, read https://speleotrove.com/decimal/decifaq4.html#signif
05:46
<Mathieu Hofman>
Also saying use deno and tsnode is basically making the argument that the expected DX is for the JS environment to strip / ignore types. I'm just asking for that behavior to be standardized
05:46
<ljharb>
that's only widely expected DX tho in a repl
05:47
<Mathieu Hofman>
No those environments are used to run production code
05:50
<ljharb>
dminor: did adding bigint slow down every JS program?
05:51
<bakkot>
Also saying use deno and tsnode is basically making the argument that the expected DX is for the JS environment to strip / ignore types. I'm just asking for that behavior to be standardized
so I think a big disconnect here is the purported benefit is node users, but the cost is imposed on browser vendors and users of webpages
05:51
<sffc>
sffc: PLEASE, I beg you, read https://speleotrove.com/decimal/decifaq4.html#signif
Trailing zeros are important as part of the representation / interchange format. new Decimal("1.0").toLocaleString() and new Intl.PluralRules().select(new Decimal("1.0")) should just work. I don't have a position regarding the impact on arithmetic of these values.
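A small illustration of the distinction sffc is pointing at, using today's Number-based Intl APIs (the Decimal-accepting calls above are aspirational, not something that exists yet):

  // English plural rules only pick "one" when there are no visible fraction digits.
  console.log(new Intl.PluralRules("en").select(1));                                // "one"
  console.log(new Intl.PluralRules("en", { minimumFractionDigits: 1 }).select(1));  // "other"

  // Formatting shows the same "1" vs "1.0" distinction that a Decimal would need to carry:
  console.log(new Intl.NumberFormat("en").format(1));                                      // "1"
  console.log(new Intl.NumberFormat("en", { minimumFractionDigits: 1 }).format(1));        // "1.0"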
05:51
<bakkot>
and node can at least make breaking changes, sometimes, like typescript does
05:51
<ljharb>
(imo the reason bigint doesn't get used is because it doesn't interop cleanly with Number, which is a mistake Decimal needn't repeat)
05:51
<bakkot>
so it's just a very... very different space
05:52
<bakkot>
(imo the reason bigint doesn't get used is because it doesn't interop cleanly with Number, which is a mistake Decimal needn't repeat)
Decimal would need to repeat that
05:52
<Michael Ficarra>
sffc: So you're expecting that new Decimal("1.0") and new Decimal("1.0000") would produce the same results when participating in any arithmetic expression?
05:52
<bakkot>
decimal interoperating with number would be insane
05:52
<ljharb>
why?
05:53
<ljharb>
number → decimal at least, even if not the other way
05:53
<bakkot>
1.3 + 1.3m === ?
05:53
<bakkot>
there is no sensible answer to that question
05:53
<ljharb>
2.6m? or whatever number 1.3 actually is, converted to decimal, added to 1.3m
05:53
<msaboff>
number → decimal at least, even if not the other way
Either Number -> Decimal or Decimal -> Number are inexact.
05:53
<bakkot>
2.6m?
yeah that's impossible
05:53
<bakkot>
which is why they can't interop
05:54
<bakkot>
(might be possible in simple cases, not possible in general)
05:54
<ryzokuken>
the implicit conversion would have to go number -> mathematical value -> decimal
05:54
<dminor>
dminor: did adding bigint slow down every JS program?
Adding a new primitive means adding a new branch for dynamic dispatch based upon type, so it will slow things down. I can't say whether it was a measurable difference for BigInt or not, but adding a primitive has the potential to slow down existing programs, not just things that use the new primitive.
05:54
<sffc>
sffc: So you're expecting that new Decimal("1.0") and new Decimal("1.0000") would produce the same results when participating in any arithmetic expression?
That's not something I said. I assume that spec authors who have gone into the weeds have defined what happens with trailing zeros when arithmetic is performed. The Temporal champions went into the weeds on similar types of problems in datetime arithmetic. All I'm saying is that new Decimal("1") and new Decimal("1.0") are absolutely distinct values for the purposes of internationalization and therefore from the Intl point of view it is important that the distinction be retained.
05:54
<ljharb>
so we've got 1) they're not primitives and operators don't Just Work, 2) it can't be made reasonable to interop with Number… i'm having a lot of trouble understanding the remaining value
05:55
<Michael Ficarra>
sffc: then you don't want decimal
05:55
<bakkot>
no use case of decimal known to me calls for them to inteorp with Number
05:55
<ljharb>
Adding a new primitive means adding a new branch for dynamic dispatch based upon type, so it will slow things down. I can't say whether it was a measurable difference for BigInt or not, but adding a primitive has the potential to slow down existing programs, not just things that use the new primitive.
sure. but this concern wasn't brought up for BigInt, so how is Decimal any different?
05:55
<bakkot>
so I don't know why interop with Number would matter
05:55
<dminor>
sure. but this concern wasn't brought up for BigInt, so how is Decimal any different?
Because we learned from the experience of implementing BigInt :)
05:55
<msaboff>
the implicit conversion would have to go number -> mathematical value -> decimal
How would you represent the intermediate mathematical value?
05:55
<Michael Ficarra>
ljharb: perf improvement is still possibly significant
05:56
<snek>
2.6m? or whatever number 1.3 actually is, converted to decimal, added to 1.3m
2.60000000000000004...
05:56
<ryzokuken>
within an engine? we have a way to represent mathematical values in the spec
05:56
<shu>
are you...kidding
05:56
<ljharb>
fine with me, as long as i can round/ceil/floor decimals :-)
05:56
<dminor>
Well BigInt, and looking at implementing records and tuples
05:56
<shu>
you think we have a way to represent, in memory, mathematical reals?
05:56
<shu>
like, as intermediate values?
05:57
<ryzokuken>
we definitely use mathematical values all over the spec
05:57
<Christian Ulbrich>
(potentially TDZ): I suggested trading the undefined primitive for a Decimal one. I guess no one will notice!
05:57
<ryzokuken>
I assume there's something corresponding to that within engines
05:57
<shu>
we definitely use mathematical values all over the spec
because we don't want to accidentally spec error accumulation in our internal arithmetic
05:57
<bakkot>
I assume there's something corresponding to that within engines
nope
05:57
<bakkot>
it just does IEEE arithmetic
05:57
<Michael Ficarra>
ryzokuken: they're a fiction
05:57
<bakkot>
and IEEE arithmetic corresponds to the right thing
05:57
<snek>
say 1/3rd to scare a js engine implementor
05:57
<ryzokuken>
welp, nvm then
05:58
<bakkot>
the use of reals in the spec is just a way to not write down IEEE semantics in the spec
05:58
<ljharb>
ps those occasional groups in the kitchen are quite loud, is there a polite way to address that?
05:58
<Luca Casonato>
re: string.prototype.decimalAdd() this sounds like coloring strings in some way? especially for typescript, you'd have to have some sort of colored string to make this not suck
05:58
<Michael Ficarra>
Luca Casonato: phantom types!
05:58
<msaboff>
What would be the proposed representation of such an intermediate mathematical value? Number and Decimal have different significant digits and exponent ranges. Number is inexact for decimal values. If we had such an exact mathematical value, we'd use that for Number.
05:58
<Michael Ficarra>
String<Decimal>
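Something along those lines is expressible in today's TypeScript as a branded ("phantom"-tagged) string; every name below is made up for illustration:

  // A branded string: structurally still a string, but only values blessed by
  // asDecimalString are assignable to the branded type.
  type DecimalString = string & { readonly __brand: "DecimalString" };

  function asDecimalString(s: string): DecimalString {
    if (!/^-?\d+(\.\d+)?$/.test(s)) throw new Error(`not a decimal literal: ${s}`);
    return s as DecimalString;
  }

  // Hypothetical string-based add; the arithmetic is a placeholder, the point is the typing.
  function decimalAdd(a: DecimalString, b: DecimalString): DecimalString {
    return asDecimalString((Number(a) + Number(b)).toString());
  }

  const x = asDecimalString("0.1");
  decimalAdd(x, asDecimalString("0.2"));  // ok
  // decimalAdd(x, "0.2");                // type error: a plain string is not branded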
05:59
<nicolo-ribaudo>
We could color them by wrapping them in an object as a "marker"
05:59
<Luca Casonato>
new Decimal(str)!
05:59
<bakkot>
re: string.prototype.decimalAdd()

this sounds like coloring strings in some way? especially for typescript, you'd have to have some sort of colored string to make this not suck
if this proposal means TS makes it easier to color strings, I'm all for it!
06:00
<ryzokuken>
What would be the proposed representation of such an intermediate mathematical value? Number and Decimal have different significant digits and exponent ranges. Number is inexact for decimal values. If we had such an exact mathematical value, we'd use that for Number.
right. Sorry I was mistaken in assuming that engines might have some way to represent mathematical values within the spec but I now realize that they're pure fiction 😕
06:00
<shu>
hell yeah love mauve strings
06:00
<Luca Casonato>
if this proposal means TS makes it easier to color strings, I'm all for it!
actually yeah me too :D
06:00
<Jack Works>
if this proposal means TS makes it easier to color strings, I'm all for it!
'a'.blink() color strings?
06:00
<snek>
if you store it as a fraction of bigints and reduce after each step you get something that is both horribly slow and semantically bad
06:00
<Mathieu Hofman>
so I think a big disconnect here is the purported benefit is node users, but the cost is imposed on browser vendors and users of webpages
Our programs should be able to run on a multitude of environments, including browsers and non node/deno based "server" environments. That's the benefit of standardizing this in the language.
06:01
<bakkot>
Our programs should be able to run on a multitude of environments, including browsers and non node/deno based "server" environments. That's the benefit of standardizing this in the language.
I agree that is a benefit. I do not think the costs are commensurate with the benefits.
06:02
<bakkot>
I think that, in this case, having a variant of node which strips TS types has fewer costs in general, even though it does have that one additional cost.
06:02
<bakkot>
not least of which that it would allow breaking changes ever
06:02
<bakkot>
and JSX, and so on
06:03
<bakkot>
but also, more significantly, that it would not impose costs on users of webpages
06:03
<bakkot>
(by making parsing JS slower/more expensive)
06:06
<Mathieu Hofman>
I'll go back to my argument: why do browsers not require that the code they execute be in the form of a binary AST?
06:07
<ljharb>
how could they?
06:08
<ljharb>
if thats a hypothetical it requires too many "suppose"s for me to comprehend
06:09
<snek>
is this a serious question
06:09
<Mathieu Hofman>
I'm just trying to understand exactly how important the tradeoff between parsing performance and readability of executed code is
06:09
<Bradford Smith>
I'd love it if browsers could accept a binary AST format. I'm sure it would be much faster to execute.
06:09
<snek>
cuz there can be serious answers
06:10
<Bradford Smith>
and it could serve as a standard interchange format to move code from one environment to another whose source was different
06:10
<snek>
like if we could cut off legacy and restart from the ground up would js even exist? probably not... wasm is significantly better
06:11
<Mathieu Hofman>
And if there is actually a large cost to parsing if the type annotations are not present in the executed code.
06:12
<Mathieu Hofman>
Or if there are other avenues we could explore to mitigate the impact of engines being able to parse and ignore type annotations
06:12
<Mathieu Hofman>
In production scenarios where performance matters
06:20
<bakkot>
I'll go back to my argument: why do browsers not require that the code they execute be in the form of a binary AST?
we almost certainly would if the option existed
06:20
<bakkot>
that would be a better world
06:20
<bakkot>
but the option does not exist
06:20
<bakkot>
however, do note that when adding a new thing, wasm, we do require it to be compiled rather than accepting WAT
06:21
<msaboff>
snek: How do you version a BinaryAST? Many proposals add new nodes to an implementation's AST. How do you handle cases where the spec needs the source of an expression, e.g. RegExp or a stack trace? There are many more issues.
06:21
<shu>
recall that we went with binary ast as an option because we thought bytecode would be DOA
06:21
<shu>
binary ast is a worse option than bytecode in every way for shipping compiled code
06:21
<shu>
(not for tools, obviously)
06:22
<shu>
perhaps with wasm and the lessons there maybe we can think harder about a bytecode again
06:22
<snek>
if you go through and try to answer every question here you eventually end up at wasm or something similar to wasm... the point of everything is that we have an existing ecosystem here
06:23
<shu>
why would i end up at wasm? i'm talking about a JS bytecode that has 1:1 expressivity with source JS
06:23
<snek>
i mean mathieu's and msaboff's questions
06:23
<shu>
ah
06:23
<bakkot>
can we advance the queue
06:23
<msaboff>
But Wasm has issues as well. Dev tooling, HTML interop, standard library...
06:23
<ljharb>
i'd assume with semver, but maybe i'm missing something
06:24
<snek>
why does that ping me lol
06:24
<rkirsling>
because element
06:25
<snek>
anyway the reason i mentioned it was because wasm adds new bytecode ops without versioning at all (wasm is still on version 0x1 "mvp"). it just takes some time to plan everything out.
06:26
<nicolo-ribaudo>
Is it different from JS adding new syntax?
06:26
<snek>
at a theoretical level no it is not
06:26
<snek>
but humans find it easier to namespace words and symbols than numbers
06:28
<Mathieu Hofman>
i mean mathieu's and msaboff's questions
No I don't think compiling js programs to wasm makes sense. But instead of minifying, and to make uncompiled JavaScript more useful, I would entertain an efficient source transform of JS that engines can parse efficiently.
06:29
<ljharb>
i fail to see how "an infinite iterator of empty tuples" is a logical option but k
06:29
<littledan>
i fail to see how "an infinite iterator of empty tuples" is a logical option but k
I guess that'd have to be the tail of everything if it's what would happen for zero things
06:29
<bakkot>
it would be empty
06:29
<littledan>
not all that useful...
06:30
<ljharb>
producing an infinite iterator anywhere seems pretty harmful to me
06:30
<snek>
network streams are infinite async iterators
06:31
<msaboff>
I think the current version of Wasm lacks some of what is needed to cleanly compile JS to Wasm. Its Turing complete, but not optimal. Also you'd need to include your own garbage collector, etc.
06:31
<snek>
you should not compile js to wasm
06:32
<TabAtkins>
Yeah, I prefer Iterator.zip([...], mapper?) (omitting the mapper just produces an array of the items)
06:33
<snek>
i got distracted, why does zip have a mapper
06:33
<Michael Ficarra>
snek: it's a combiner
06:33
<Michael Ficarra>
zipWith
06:33
<ljharb>
i don't like the idea of being forced to wrap multiple iterators in an iterator just to zip them, which is why i want varargs or an object
06:33
<TabAtkins>
I mean, it's a temp array
06:33
<ljharb>
yes, and forcing that imo is not a great design
06:33
<snek>
i don't understand what zipWith means, i should have paid better attention i'm sorry
06:33
<bakkot>
i don't like the idea of being forced to wrap multiple iterators in an iterator just to zip them, which is why i want varargs or an object
do you also not like the idea of being forced to wrap multiple promises in an iterable just to await all of them?
06:34
<Luca Casonato>
i don't understand what zipWith means, i should have paid better attention i'm sorry
Iterator.zipWith(Math.max, a, b, c)
06:34
<ljharb>
do you also not like the idea of being forced to wrap multiple promises in an iterable just to await all of them?
correct, i do not like that and do want a form that takes an object literal
06:34
<bakkot>
what is the difference between an object literal and an array here?
06:34
<bakkot>
the thing you asked for was the varargs version, which is not like an object literal
06:34
<snek>
Iterator.zipWith(Math.max, a, b, c)
i don't understand what this does
06:34
<TabAtkins>
I think it's easiest to understand zip as transposing an array of arrays, fwiw.
06:34
<ljharb>
the thing you asked for was the varargs version, which is not like an object literal
sure, Promise.all(a, b, c) would be fine too
06:35
<snek>
i am familiar with zip in other languages, i've never seen it paired with a function
06:35
<ljharb>
I think it's easiest to understand zip as transposing an array of arrays, fwiw.
nothing about that sentence reads "easy" to me ¯\_(ツ)_/¯
06:35
<littledan>
I think it's easiest to understand zip as transposing an array of arrays, fwiw.
Yeah, the variadic map thing can be called map
06:35
<littledan>
(just my intuition)
06:35
<rhysd>
I wonder about using methods like it1.zip(it2).zip(it3), like Rust does, instead of a function.
06:35
<snek>
i don't understand what the function does
06:35
<bakkot>
i don't understand what this does
Iterator.zipWith([a, b], mapper) is the same as Iterator.zip([a, b]).map(vs => mapper(...vs))
06:35
<bakkot>
it just lets you skip making an intermediate array for results and instead immediately passes them to a function
06:36
<bakkot>
that's all
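A sketch of that equivalence with plain generators (Iterator.zip / zipWith are still proposal territory, so local helpers stand in here):

  // zip: shortest input wins, yields pair arrays.
  function* zip<A, B>(a: Iterable<A>, b: Iterable<B>): Generator<[A, B]> {
    const ia = a[Symbol.iterator](), ib = b[Symbol.iterator]();
    while (true) {
      const ra = ia.next(), rb = ib.next();
      if (ra.done || rb.done) return;
      yield [ra.value, rb.value];
    }
  }

  // zipWith: same pairing, but results go straight into the mapper, no pair array allocated.
  function* zipWith<A, B, R>(f: (a: A, b: B) => R, a: Iterable<A>, b: Iterable<B>): Generator<R> {
    const ia = a[Symbol.iterator](), ib = b[Symbol.iterator]();
    while (true) {
      const ra = ia.next(), rb = ib.next();
      if (ra.done || rb.done) return;
      yield f(ra.value, rb.value);
    }
  }

  console.log([...zip([1, 2, 3], ["a", "b"])]);        // [ [ 1, 'a' ], [ 2, 'b' ] ]
  console.log([...zipWith(Math.max, [1, 5], [4, 2])]); // [ 4, 5 ]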
06:36
<snek>
rust has this, and it is used commonly, i think the separate func is less common tbh
06:36
<ljharb>
in this case there is no intermediate array tho, because of iterator helpers? it's not like Array.from
06:36
<bakkot>
in this case there is no intermediate array tho, because of iterator helpers? it's not like Array.from
there is an intermediate array? zip produces arrays
06:36
<bakkot>
an iterator of arrays
06:36
<snek>
it just lets you skip making an intermediate array for results and instead immediately passes them to a function
got it, ty
06:36
<rbuckton>
I wonder about using methods like it1.zip(it2).zip(it3), like Rust does, instead of a function.
.NET-based languages like C# make this available as an extension method, so it can be used both as a static method and like an instance method: https://learn.microsoft.com/en-us/dotnet/api/system.linq.enumerable.zip?view=net-7.0
06:36
<ljharb>
ahhh right, so the mapper lets you skip that step, fair point. that is like Array.from then (in a good way)
06:37
<snek>
idk how convinced i am of zipWith existing but 🤷
06:37
<TabAtkins>
Yup, the mapper is exactly the same reasoning as Array.from
06:37
<littledan>
it just lets you skip making an intermediate array for results and instead immediately passes them to a function
Yeah this sounds useful but is zip the right name, or map? maybe zip should just take the iterators, and not a function, and you use map if you have a function
06:37
<TabAtkins>
Could be omitted but it's a common case (in my experience, at least)
06:37
<snek>
either way as long as prototype.zip exists and does the normal thing i'm fine
06:37
<bakkot>
Yeah this sounds useful but is zip the right name, or map? maybe zip should just take the iterators, and not a function, and you use map if you have a function
I am fine with that outcome personally
06:37
<rkirsling>
it is kind of wild that self-hosted JS in JSC (and presumably not just JSC) uses var exclusively to avoid the perf implications of TDZ
06:38
<bakkot>
the battle against intermediate objects is already hopelessly lost when using iterators
06:38
<bakkot>
so it may not be worth having a new method solely to allow you to skip some intermediate objects...
06:38
<rbuckton>
zip is a useful name because it's what many people reach for in many languages, as well as in many existing js libraries.
06:38
<Luca Casonato>
ryzokuken: can you move the queue to shu's topic?
06:39
<ryzokuken>
done
06:39
<ryzokuken>
apologies I missed it
06:39
<Luca Casonato>
thanks!
06:39
<TabAtkins>
Yeah, def shouldn't be named something other than zip imo
06:39
<littledan>
so it may not be worth having a new method solely to allow you to skip some intermediate objects...
sure but variadic map is probably also ergonomically motivated (just it'd be a weird name for zip itself)
06:39
<TabAtkins>
(or some variant like zipLongest)
06:39
<littledan>
or... no, it'd look weird as an iterator method... zip will just read better
06:40
<bakkot>
sure but variadic map is probably also ergonomically motivated (just it'd be a weird name for zip itself)
variadic map doesn't make as much sense in JS because map is a prototype method rather than being map(fn, iter)
06:41
<Jack Works>
anyone know the website where chrome publishes its telemetry data for language features?
06:41
<littledan>
https://chromestatus.com/metrics/feature/popularity
06:41
<littledan>
well, not everything has a telemetry thing
06:41
<bakkot>
either way as long as prototype.zip exists and does the normal thing i'm fine
would you also be OK with a static method?
06:41
<bakkot>
static method seems a little more natural to me...
06:42
<TabAtkins>
I think static n-iterator version, and prototype 2-iterator version, make sense together.
06:42
<snek>
as long as i can do zipping without ever running into the one that has the map arg
06:42
<TabAtkins>
arr1.zip(arr2, mapper)
06:42
<TabAtkins>
and Iterator.zip([arr1, arr2, arr3], mapper)
06:42
<snek>
what if you have an iterable function object
06:43
<TabAtkins>
what about it
06:43
<TabAtkins>
that example wasn't variadic
06:43
<TabAtkins>
either of them
06:43
<snek>
i think having to write zip([a, b]) instead of zip(a, b) would be unfortunate
06:43
<littledan>
yeah I'd prefer to keep this simple
06:43
<snek>
like not worth the mapper func
06:44
<TabAtkins>
I am fine without the mapper, especially since we do want an options bag.
06:44
<bakkot>
there are other options you'd want beyond the mapper function
06:44
<bakkot>
like "use this value to fill out the shorter one"
06:44
<TabAtkins>
Bc longest is necessary in my experience.
06:44
<bakkot>
though of course we could just have a bunch of different methods, instead of an options bag
06:44
<littledan>
isn't that what we designed array holes for?
06:44
<snek>
like "use this value to fill out the shorter one"
isn't that zip(a, chain(b, repeat(v)))
06:44
<snek>
you don't need an options bag for that
06:45
<bakkot>
only if you know which one is longer up front
06:45
<TabAtkins>
ugh tho
06:45
<TabAtkins>
also that, yes
06:45
<bakkot>
if you don't know which one, you can't do it with zip
06:45
<bakkot>
or like you can but it's really really hard
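A sketch of both halves of that exchange: the chain/repeat trick (fine when you already know which input is shorter) versus a real longest-mode zip; all helper names here are hypothetical:

  function* repeat<T>(v: T): Generator<T> { while (true) yield v; }
  function* chain<T>(...its: Iterable<T>[]): Generator<T> { for (const it of its) yield* it; }

  // The trick in action, when ["a"] is known to be the shorter side:
  const padded = chain<string | null>(["a"], repeat(null));
  console.log(padded.next().value, padded.next().value); // "a" null
  // i.e. zip(a, chain(b, repeat(fill))) works with a shortest-wins zip, but only if b is shorter.

  // If you don't know which side runs out first, you need longest-mode zip itself,
  // e.g. via an options bag like { mode: "longest", fill } or a separate method:
  function* zipLongest<A, B, F>(a: Iterable<A>, b: Iterable<B>, fill: F): Generator<[A | F, B | F]> {
    const ia = a[Symbol.iterator](), ib = b[Symbol.iterator]();
    while (true) {
      const ra = ia.next(), rb = ib.next();
      if (ra.done && rb.done) return;
      yield [ra.done ? fill : ra.value, rb.done ? fill : rb.value];
    }
  }

  console.log([...zipLongest([1, 2, 3], ["a"], null)]); // [ [ 1, 'a' ], [ 2, null ], [ 3, null ] ]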
06:45
<snek>
fair
06:45
<snek>
things to think about i guess
06:45
<bakkot>
previous discussion at https://matrixlogs.bakkot.com/TC39_General/2021-09-11#L3
06:46
<snek>
zip(a, b) and zipAdvanced([a, b], options, mapper)
06:47
<bakkot>
I think just making the wrapper array is better
06:47
<bakkot>
to match Promise.all
06:47
<ljharb>
zip(a, b) and knit([a, b], options, mapper) :-p
06:47
<ljharb>
the wrapper array is "fine" but super annoying to have to write
06:47
<snek>
velcro
06:47
<littledan>
what do other languages do about this options thing?
06:47
<snek>
yeah i'm not gonna die on this hill but its unfortunate
06:48
<snek>
python has kwargs
06:48
<snek>
i wish js had kwargs
06:48
<TabAtkins>
littledan: Python just has several variant functions, with each taking extra args as necessary.
06:48
<TabAtkins>
and yeah, kwargs man
06:48
<TabAtkins>
kwargs
06:48
<Jack Works>
in sujitech we ship es2022 to users
06:49
<snek>
if we had a variant of js
06:49
<snek>
without hoisting
06:49
<snek>
would that fix this tdz stuff
06:49
<rbuckton>
kwargs
we just need real named arguments in JS, that's all
06:49
<bakkot>
we only this year stopped supporting IE9...
06:49
<snek>
same!
06:50
<msaboff>
Is @shu saying we should have stopped with ES5 😉
06:51
<rkirsling>
can't become unmodern if the definition of modern never changes :rollsafe:
06:57
<snek>
why does the bytecode not rewrite itself into a direct lookup once it observes that the slot is initialized
06:57
<Rob Palmer>
For Bloomberg Terminal we ship ESnext (Stage 4) to users as much as possible, which is also capped by the min-support of each piece of the toolchain that needs to transform that code, because relatively modern Chromium gets embedded.
06:59
<rbuckton>
why does the bytecode not rewrite itself into a direct lookup once it observes that the slot is initialized
Isn't that overhead part of the problem?
07:00
<snek>
which overhead
07:00
<rbuckton>
You're not patching one place. You're patching every place the variable is referenced.
07:00
<bakkot>
why does the bytecode not rewrite itself into a direct lookup once it observes that the slot is initialized
you have a bunch of different versions of the function also
07:00
<bakkot>
it's not like functions only get compiled once
07:00
<rbuckton>
And that involves GC as well
07:01
<snek>
ah yeah same underlying sfi 🥲
07:08
<bakkot>
danielrosenwasser: re: "we assume the runtime will catch it" - does that mean your type analysis treats let and var differently?
07:11
<rbuckton>
depending on transpilers to inform engines about eliding TDZ checks doesn't benefit developers not using transpilers.
07:13
<danielrosenwasser>
danielrosenwasser: re: "we assume the runtime will catch it" - does that mean your type analysis treats let and var differently?
well I certainly thought so
07:14
<danielrosenwasser>
But no, we just take the optimistic path on that
07:14
<danielrosenwasser>
you can observe an undefined-initialized var or let in downlevel emit
07:15
<bakkot>
ok cool, that's what I thought
07:15
<bakkot>
it would make using var super annoying if it were treated differently
07:17
<snek>
interestingly v8 implements tdz checks as two separate bytecodes
07:18
<snek>
load variable and then throw reference error if loaded value is uninitialized
07:18
<bakkot>
the_hole
07:18
<snek>
yeah that one
07:18
<bakkot>
or whatever it's called
07:18
<bakkot>
love to have another kind of null
07:18
<snek>
i'm feeling like, if this was a single bytecode, it would be a single branch instruction instead of an entire bytecode clock cycle, which is significantly cheaper?
07:18
<snek>
but idk maybe i'm missing something obvious
07:19
<bakkot>
surely most of the cost is the branch, not the bytecode clock cycle?
07:19
<bakkot>
branches are expensive
07:19
<rbuckton>
What Mark is suggesting (moving let above all of its uses), is exactly what we had in our codebase when we discovered this regression.
07:20
<snek>
the branch exists either way, the bytecode dispatch is a jmp+lea+maybe something else, i don't recall
07:20
<bakkot>
rbuckton: you should say that out loud, mark doesn't read chat
07:21
<rbuckton>
Mark's point is that the engines need to optimize for that, which they don't.
07:21
<Bradford Smith>
So, Mark is saying "let's encourage people to rewrite their JS into something that will not force engines to do TDZ checks", right?
07:22
<Christian Ulbrich>
Hold it the right way!™
07:22
<rkirsling>
it really sounds like "if engines never have to encounter a let, then let won't be unperformant"
07:22
<bakkot>
right, and also suggesting the engine do more/smarter checks than they currently do
07:22
<bakkot>
but of course more/smarter checks are also expensive...
07:23
<bakkot>
at the very least this sounds like it would take at least two passes
07:23
<Bradford Smith>
I cannot imagine a developer who wants const being willing to rewrite it as a let with no assignment and a later assignment.
07:23
<rbuckton>
I think Mark is saying "fix this one case, and depend on linters to suggest you use that case", which doesn't help for const since it may depend on calculations and can't be hoisted.
07:24
<rbuckton>
Linters generally don't do inter-procedural analysis either, so this sounds like a big ask.
07:25
<ljharb>
is it difficult to figure out that a variable is not referenced inside a closure, and also not referenced prior to the declaration?
07:25
<Richard Gibson>
more like "encourage everyone, particularly transpilers, to emit code in which let/const binding references are dominated (i.e., their TDZ is empty), and encourage implementations to statically detect such dominated bindings and eliminate any corresponding runtime penalty for them"
07:25
<rbuckton>
is it difficult to figure out that a variable is not referenced inside a closure, and also not referenced prior to the declaration?
I think those are the cases that are optimized today.
07:25
<rbuckton>
At least, when they are run in a hot code path such that they get optimized.
07:26
<ljharb>
what about if it's a function statement vs a declaration?
07:26
<rbuckton>
As Shu said, not all code gets optimized.
07:26
<bakkot>
I am almost certain the not-closed-over bit gets checked literally during parsing
07:26
<ljharb>
like can the perf issue be avoided simply by not using function declarations? (which many linter configs discourage anyway, since they add exposure to hoisting and use-before-define)
07:26
<danielrosenwasser>
slight correction, the closed-over bindings are generally not top-level
07:26
<danielrosenwasser>
they act as state intentionally captured by closures for major components
07:27
<rkirsling>
yeah I think this discussion is basically centered (or at least, can be centered) on code which is not reaching top-tier JIT
07:27
<ljharb>
so like, in the example on the screen, if the function is written as an expression and not a declaration, then the resulting reorganization it would force seems like it'd avoid this perf hit?
07:27
<bakkot>
in the simple cases engines skip the TDZ checks even in the lowest tier
07:27
<bakkot>
because the simplest cases can be checked during parsing, in a single pass
07:28
<rkirsling>
right. simple cases are fine
07:28
<ljharb>
if the only codebases suffering are ones that use function decls and rely on hoisting, i'm much less interested in trying to address the problem
07:28
<waldemar>
The main reason we have this mess is that function definitions can hoist, even if most of them may not be used that way.
If one turns a function statement into a const binding initialized with a function expression, that is known never to hoist, and thus the implementation can trivially see that it's dominated by any prior bindings in the scope.
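A before/after sketch of the rewrite waldemar describes (names are made up):

```js
// Sketch of waldemar's rewrite: a hoisted function declaration can in principle
// run before the `let` it closes over is initialized, so reads inside it keep a
// TDZ check; a const-bound function expression can't be called before its own
// binding exists, so those reads are dominated and the check can be dropped.

// Before: `render` hoists above `theme`.
let theme = 'dark';
function render() { return theme; }

// After: `paint` does not hoist; any call necessarily runs after both bindings.
let palette = 'light';
const paint = function () { return palette; };
```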
07:28
<ljharb>
^ that suggests to me another reason to just avoid using declarations
07:28
<waldemar>
Lint rules can do that, or transpilers might be able to do that.
07:29
<snek>
this is why i asked above if adding a no-hoisting mode would help. ultimately i'd like to look at a lower-level investigation of this problem space from v8, assuming they've written something up somewhere
07:30
<nicolo-ribaudo>
no-hoisting means that let x = 1; { x; let x = 2 } sees 1?
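For reference, what that snippet does in current JavaScript:

```js
// The inner `let x` is scoped to the whole block and is in its TDZ at the point
// of the read, so the read throws rather than seeing the outer 1.
let x = 1;
{
  try {
    x;                      // ReferenceError: Cannot access 'x' before initialization
  } catch (e) {
    console.log(String(e));
  }
  let x = 2;
}
```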
07:30
<bakkot>
Unfortunately a lot of people are attached to a style which relies heavily on hoisting, so I think it is unlikely we'll convince people to move away from that
07:30
<ljharb>
i agree, it's very unfortunate
07:30
<bakkot>
no-hoisting means that let x = 1; { x; let x = 2 } sees 1?
it's only hoisting of function declarations which we're talking about I believe
07:31
<snek>
no-hoisting means that let x = 1; { x; let x = 2 } sees 1?
no i mean a mode where you can't call functions before their declaration
07:31
<rkirsling>
oh no, dminor, you removed your topic from the queue? I was gonna +1 it
07:31
<snek>
it would also let us have good decorators
07:31
<nicolo-ribaudo>
Oh ok I got confused by the previous discussion about manually hoisting let
07:32
<bakkot>
it would also let us have good decorators
using const x = function(){} also lets you have good decorators
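A sketch of that in practice; logged is a made-up wrapper, not any proposed API:

```js
// "Decoration" with a plain call on a const-bound function expression.
function logged(fn) {
  return function (...args) {
    console.log(`calling ${fn.name} with`, args);
    return fn.apply(this, args);
  };
}

const add = logged(function add(a, b) { return a + b; });
add(1, 2);   // logs: calling add with [ 1, 2 ], returns 3
```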
07:32
<rbuckton>
no i mean a mode where you can't call functions before their declaration
Function decorators would also require a non-hoisting mechanism, even if that mechanism is "a decorated function doesn't hoist"
07:32
<bakkot>
where "good decorators" of course means normal function calls instead of a special syntactic form for calling functions in a different way
07:33
<snek>
where "good decorators" of course means normal function calls instead of a special syntactic form for calling functions in a different way
snek will remember this
07:33
<rkirsling>
shu: this sounds like a fun investigation, one way or another
07:34
<ljharb>
ryzokuken: advance queue?
07:34
<snek>
queue is advanced for me
07:34
<ryzokuken>
I think TCQ got stuck for me
07:34
<ljharb>
oh, i see the topic changed but the queue isn't empty
07:34
<rbuckton>
where "good decorators" of course means normal function calls instead of a special syntactic form for calling functions in a different way
That would conflict with the feedback I received that function decorators had to exist for parameter decorators to advance beyond stage 1, and is the reason that parameter decorators currently only cover class methods.
07:34
<Michael Ficarra>
TCQ is down :-(
07:34
<ryzokuken>
oh, i see the topic changed but the queue isn't empty
that is another weird TCQ bug
07:35
<snek>
shu: is there a doc from v8 that explores tdz at a lower level? i'd be interested in seeing analysis of the bytecode and assembly and such without profiling it myself if such a thing already exists.
07:35
<dminor>
I removed myself from the queue, but I'll say it here, SpiderMonkey is cautiously interested in removing TDZ and we've started investigating the impact on our engine.
07:35
<shu>
why does the bytecode not rewrite itself into a direct lookup once it observes that the slot is initialized
you're patching bytecode of a function that can be called multiple times?
07:35
<shu>
a closure that can be allocated multiple times?
07:35
<snek>
yeah i forgot that bytecode is shared
07:35
<bakkot>
That would conflict with the feedback I received that function decorators had to exist for parameter decorators to advance beyond stage 1, and is the reason that parameter decorators currently only covers class methods.
correct, I personally do not think there is a viable path for function decorators or parameter decorators to exist
07:36
<shu>
shu: is there a doc from v8 that explores tdz at a lower level? i'd be interested in seeing analysis of the bytecode and assembly and such without profiling it myself if such a thing already exists.
alas no, lower level like at the cycle-counting level?
07:36
<snek>
yeah
07:37
<nicolo-ribaudo>
you're patching bytecode of a function that can be called multiple times?
Uhm, if the TDZ check does not trigger the first time it will never throw
07:37
<shu>
for any particular activation of a function
07:37
<shu>
but not for subsequent activations, especially if it's a newly allocated closure
07:37
<shu>
so you can't patch the bytecode
07:37
<nicolo-ribaudo>
Oh right
07:37
<snek>
if bytecode was not shared it would work but then you'd have big memory usage
07:37
<shu>
i want to have a general gripe
07:38
<shu>
when engines complain about something, it is not before we do a lot of work to try to make it a non-problem
07:38
<shu>
i am not a fan of being told, "have you tried being smarter"
07:38
<nicolo-ribaudo>
Yep sorry
07:38
<shu>
it wasn't towards you per se
07:39
<shu>
more towards mark
07:39
<rbuckton>
correct, I personally do not think there is a viable path for function decorators or parameter decorators to exist
I'm not sure why parameter decorators are not viable, at least for class elements. They're extremely valuable, and we have plenty of evidence for that in the TypeScript ecosystem.
07:39
<rkirsling>
right, that tone really surprised me
07:39
<Bradford Smith>
As engineers, we mostly can't help ourselves. We see a problem and immediately think "I bet I could fix that..."
07:39
<Michael Ficarra>
shu: you could try complaining about things before you do such work
07:39
<shu>
I removed myself from the queue, but I'll say it here, SpiderMonkey is cautiously interested in removing TDZ and we've started investigating the impact on our engine.
has the analysis i wrote been improved upon?
07:39
<snek>
i bet i could fix that with the "javascript load and branch if the hole" instruction
07:39
<Christian Ulbrich>
In all fairness, one might add that YOU might not mean YOU, personally. :)
07:39
<shu>
shu: you could try complaining about things before you do such work
but then "have you tried being smarter" would be a valid criticism!
07:40
<Michael Ficarra>
exactly, I've fixed it
07:40
<bakkot>
I'm not sure why parameter decorators are not viable, at least for class elements. They're extremely valuable, and we have plenty of evidence for that in the TypeScript ecosystem.
I do not think parameter-decorators-but-only-for-class-elements is a reasonable outcome, as I believe I said when they were presented
07:40
<bakkot>
also I do not think function decorators make sense, though it is possible I could be convinced of that
07:41
<bakkot>
if in addition someone believes parameter decorators cannot advance without function decorators, then there is no option which satisfies all of these constraints
07:41
<littledan>
correct, I personally do not think there is a viable path for function decorators or parameter decorators to exist
aww, I really want function decorators... they aren't viable because of hoisting?
07:41
<snek>
i get so annoyed that js doesn't have decorators on functions whenever i come back to it from python
07:41
<Christian Ulbrich>
bakkot: Couldn't we achieve some poor woman's function composition with function decorators, and isn't this what everyone wants in lieu of the pipeline operator?
07:42
<rbuckton>
We could easily have function decorators for function expressions and arrow functions, if we wanted them.
07:42
<rbuckton>
We've always been stuck on the hoisting behavior of function declarations.
07:42
<snek>
i don't think either of those should have decorators on them :P
07:43
<ljharb>
surely any use case to decorate any kind of function applies to decorate any other kind of function.
07:44
<rbuckton>
i don't think either of those should have decorators on them :P
I'm confused by this and your previous statement, they seem to be in conflict.
07:45
<snek>
i think function declarations should have decorators. i don't think function expressions or arrow functions should have decorators
07:45
<bakkot>
i don't think either of those should have decorators on them :P
"You don't believe in 2,999 gods. And I don't believe in just one more"
07:45
<snek>
well maybe function expressions that are not in a variable declaration are ok
07:45
<rbuckton>
i think function declarations should have decorators. i don't think function expressions or arrow arrow functions should have decorators
It may be that function expressions are the only way to have function decorators, unless we are willing to disable hoisting of decorated function declarations.
07:46
<snek>
yeah i'm aware
07:46
<ljharb>
didn't we just lightly establish that function declarations are bad because of hoisting (although a non-hoisting function declaration sounds great!)
07:46
<rbuckton>

I would be fine with having this mean the function doesn't hoist:

@dec
function f() {}
07:46
<shu>
in this particular lens, but i actually don't think they're bad
07:46
<shu>
i think implicit function declaration hoisting behavior is actually much better than TDZ
07:46
<shu>
implicit letrec without special syntax is great
07:46
<snek>
i am also very ok with decorators making functions not hoisted
07:47
<ljharb>
can i just do @ function f() {} to have a non-hoisting function declaration without any decorator? i'm all on board with that
07:47
<snek>
most of my code is actually written with a rule that prevents using hoisting anyway
07:47
<rbuckton>

At one point I was considering something like this:

let function f() {}
const function g() {}
07:47
<snek>
fn
07:47
<ljharb>
i smell more tdz
07:47
<snek>
def
07:47
<rbuckton>
Please no.
07:48
<snek>
PROCEDURE
07:48
<rbuckton>
Yes, function is long, but fn, impl, def are not friendly to the reader.
07:48
<snek>
i do wish everything was just function. i always mix them up when switching languages
07:49
<ljharb>
good function f() {}
07:49
<rkirsling>
'use good';
07:49
<rbuckton>
But, let function f() {} seemed like a good alternative to let f = function () {} when it came to decorators.
07:50
<snek>
l'function f() {}
07:50
<rkirsling>
@let
07:50
<rbuckton>
Especially since decorators expose the name of a thing, thus let f = @dec function() {} seems like it wouldn't get the name.
07:54
<littledan>
i am also very ok with decorators making functions not hoisted
yes, this is what I was hoping would happen. Of course, then we need to decide whether they get tdz...
07:54
<snek>
i'll let shu decide
07:55
<rbuckton>
var function f() {}

hoist the name, like a 'var', but not the value :)

07:56
<ljharb>
why not all three, var functions have no tdz, let functions are reassignable and have tdz, const functions aren't reassignable :-p
07:57
<rbuckton>
why not all three, var functions have no tdz, let functions are reassignable and have tdz, const functions aren't reassignable :-p
I mean, that's what I'm suggesting, mostly unironically.
07:57
<snek>
const functions can be used in constexprs
07:57
<littledan>
Especially since decorators expose the name of a thing, thus let f = @dec function() {} seems like it wouldn't get the name.
well, class decorators can find themselves in the same position, and get undefined
07:57
<rbuckton>
well, class decorators can find themselves in the same position, and get undefined
And that's exactly why I would want to be able to decorate function declarations as well.
07:58
<littledan>
yes, I think we should focus on decorating function declarations, and live with the non-hoisting (and presumably starting at undefined, given Shu's presentation)
07:59
<rbuckton>
yes, I think we should focus on decorating function declarations, and live with the non-hoisting (and presumably starting at undefined, given Shu's presentation)
I would be happy with this. The function hoisting issue held up decorators for years, and was why it was split off from the original proposal, IIRC.
08:00
<Michael Ficarra>
Luca Casonato: I'm not convinced by snapshotting, just dump everything you care about yourself
08:03
<Michael Ficarra>
eemeli: I just wanted to make sure you have understood that a solution here would look more like "give me the ISO-8601 representation of this" or "sort this according to some defined non-locale-specific Unicode operation", etc, not a "give me the stable string repr of this"
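The distinction in code, using only existing APIs:

```js
// Stable, non-locale-specific representations vs. locale-dependent Intl output.
const d = new Date(0);
d.toISOString();        // "1970-01-01T00:00:00.000Z" (same everywhere)
d.toLocaleString();     // locale- and implementation-dependent

['b', 'a', 'ä'].sort();                                  // code-unit order: stable
['b', 'a', 'ä'].sort(new Intl.Collator('de').compare);   // locale-specific order
```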
08:04
<Michael Ficarra>
that's the thing I was trying to say that I couldn't put into words in the moment
08:04
<Michael Ficarra>
but it was basically what bakkot had said
08:04
<ryzokuken>
Luca Casonato: I'm not convinced by snapshotting, just dump everything you care about yourself
but if your app has localization features that are subject to change, you cannot do snapshot testing unless you make some kind of Intl stub
08:04
<ryzokuken>
in this case you could just test with the zxx locale and get it working
08:05
<bakkot>
but if your app has localization features that are subject to change, you cannot do snapshot testing unless you make some kind of Intl stub
why can't you just read all of the properties you care about?
08:05
<Michael Ficarra>
ryzokuken: I think that is an unrelated problem
08:05
<bakkot>
if the answer is "they are not exposed", we should fix that
08:05
<ryzokuken>
eemeli: I just wanted to make sure you have understood that a solution here would look more like "give me the ISO-8601 representation of this" or "sort this according to some defined non-locale-specific Unicode operation", etc, not a "give me the stable string repr of this"
this is how I understood the motivation myself (not the latter case I mean)
08:06
<ryzokuken>
why can't you just read all of the properties you care about?
no I literally mean if you have a website that is rendered localized, you cannot do snapshot testing unless you have some sort of null locale to use in the testing environment
08:07
<ryzokuken>
or work on engines without Intl support
08:07
<bakkot>
say more about why?
08:07
<snek>
why can't you stub intl.whatever to just return "AAAAA"
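A minimal, test-framework-agnostic sketch of that kind of stub; the names and the restore step are made up, and a real setup would cover every Intl constructor the app actually uses:

```js
// Swap out one Intl constructor for a deterministic stub during snapshot tests.
const RealDateTimeFormat = Intl.DateTimeFormat;

function stubIntlForSnapshots() {
  Intl.DateTimeFormat = function () {
    return { format: () => 'AAAAA', resolvedOptions: () => ({ locale: 'stub' }) };
  };
}

function restoreIntl() {
  Intl.DateTimeFormat = RealDateTimeFormat;
}

stubIntlForSnapshots();
console.log(new Intl.DateTimeFormat('en-US').format(new Date())); // "AAAAA"
restoreIntl();
```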
08:07
<bakkot>
why can you not do your own serialization, of all of the properties you care about?
08:08
<ryzokuken>
why can't you stub intl.whatever to just return "AAAAA"
yeah, that's one solution I mentioned
08:08
<bakkot>
oh you mean snapshotting the entire website
08:08
<ryzokuken>
yeah
08:08
<bakkot>
yeah I am not convinced that use case is worth solving, stubbing out intl seems fine for that case
08:09
<bakkot>
you also need to stub Math.random and so on
08:09
<bakkot>
new Date
08:09
<bakkot>
lots of stuff
08:09
<ryzokuken>
right, true
08:09
<bakkot>
(off to dinner, back later)
08:09
<ryzokuken>
but there's other cases apart from that
08:09
<ryzokuken>
I was just elaborating on the one Luca mentioned
08:09
<snek>
like i see this the same way as if we said "we need a special deterministic url for using fetch() in snapshot tests"
08:09
<ryzokuken>
same, talk to you all after lunch!
08:09
<ryzokuken>
like i see this the same way as if we said "we need a special deterministic url for using fetch() in snapshot tests"
what about engines without Intl support
08:10
<snek>
not sure what you mean
08:10
<ryzokuken>
if you design localized interfaces that use Intl, you need to guard every call or again, use an Intl stub
08:11
<ryzokuken>
but if there were a null locale that was supported on all implementations, you would have a clean fallback
08:11
<ryzokuken>
like how the iso8601 calendar works in Temporal.
08:11
<snek>
i think a null locale would be an explicitly bad thing
08:12
<ryzokuken>
Ideally you'd want to use gregory for many real use cases, but in the absence of any of these Intl calendars, you can always use the programmatic one
08:12
<snek>
we used to have something similar in nodejs
08:12
<snek>
and it caused so much pain that we now ship node with full locale data by default
08:12
<ryzokuken>
well, except we shipped an awkward, stripped-down set of locale data
08:12
<ryzokuken>
that was like en-US or something IIRC
08:13
<snek>
no matter what you put in, you got out en-US, but it wasn't real en-US because there wasn't even CLDR data for en-US bundled
08:13
<ryzokuken>
damn
08:13
<ryzokuken>
but that's not what is being proposed here
08:14
<ryzokuken>
although I guess motivations could be similar
08:14
<snek>
i think my intuition is still that this is not a good thing and i'd rather code polyfill Intl in some way that makes sense to them
08:15
<ryzokuken>
anyway, I don't want to cause any damage and will let Eemeli explain the merits of the proposal, I was hoping to add a supportive voice but I'm not involved yet
08:43
<Luca Casonato>
Luca Casonato: I'm not convinced by snapshotting, just dump everything you care about yourself
If it's a simple, stable, easy-to-read, string-based representation of the value, I'd prefer to use that over a structured representation for debugging & snapshot testing
08:43
<Luca Casonato>
you also need to stub Math.random and so on
Not if your runtime provides a way for you to run tests deterministically at a fixed fake time (which many test runners do)
08:45
<Luca Casonato>
like i see this the same way as if we said "we need a special deterministic url for using fetch() in snapshot tests"
Why? URL already has a simple and stable human-readable serialization of the contained properties; dates, for example, may not
14:03
<Jesse (TC39)>
92 unread messages in a thread (or threads?) I cannot identify. Thanks, Matrix.
14:41
<eemeli>
I think one necessary next step for Stable Formatting is to enumerate the cases where Intl APIs are currently being used badly or dangerously because they're providing the most ergonomic solutions to developer problems.
20:13
<Kris Kowal>
i am not a fan of being told, "have you tried being smarter"
I have no context on the particulars here, but I can say that around the Agoric virtual water cooler, we have tremendous respect for your intelligence. I have no doubt we could be kinder.
20:29
<Chris de Almeida>
are you a fan of being told "have you tried being kinder" ? /s
20:30
<Kris Kowal>
It’s always welcome feedback, regardless.
22:52
<shu>
I have no context on the particulars here, but I can say that around the Agoric virtual water cooler, we have tremendous respect for your intelligence. I have no doubt we could be kinder.

thanks.

the context is i would appreciate the assumption that we have done our due diligence on performance problems, which involves judgments in a tradeoff space. disagreements on the judgment call that was made can be productive, but i would like the intellectual humility to appreciate our tradeoff space rather than making an opposite call in plenary like "i don't think you're trying hard enough"

23:38
<Mathieu Hofman>
shu: I just went through the transcript, I wasn't online anymore then, but putting aside the debate over due diligence and tradeoff judgements, I actually do not understand in which scenario a let declaration (without initialization) that occurs as the first statement of a scope can ever execute after any other code declared in that scope.
23:57
<bakkot>
littledan: re disposable AbortController, https://github.com/whatwg/html/issues/8557#issuecomment-1724193449