00:55
<Justin Ridgewell>
danielrosenwasser / rbuckton: The new What's Changed Since RC/Beta in the TS release notes are 👍
01:47
<sirisian>
Question. Possibly I'm not searching the right terms, but when async/await was added to ECMAScript, why was threading never pulled into the core language, away from Web Workers? (Like others, I've done the blob thing for years with workers when doing heavily threaded things.) I kind of expected that one would be able to just call an async function and have it execute on another thread by now, with intuitive shared state, but that was never proposed. Why not?
01:55
<bakkot>
"intuitive shared state" is a contradiction in terms
01:55
<bakkot>
the thing you're proposing would be incredibly complicated to specify or implement, and we're just now getting to the point where we're fleshing out the building blocks which might let us get there someday
01:56
<bakkot>
or, well, not there precisely, but somewhere like it
02:19
<Jack Works>
Question. Possibly I'm not searching the right terms, but when async/await was added to ECMAScript, why was threading never pulled into the core language, away from Web Workers? (Like others, I've done the blob thing for years with workers when doing heavily threaded things.) I kind of expected that one would be able to just call an async function and have it execute on another thread by now, with intuitive shared state, but that was never proposed. Why not?

we're on the route to that. 🤔
search for these proposals:

  • struct (shared struct section)
  • module block
02:41
<bakkot>

can someone verify my assessment that https://github.com/mishoo/UglifyJS/issues/5370 represents a deviation of V8 from other major implementations in FunctionDeclarationInstantiation with respect to non-simple parameter lists when VarDeclaredNames includes "arguments"?

$ eshost -se '[].concat(...["function arguments(){}", "var arguments"].map(occluding => ["()", "(..._)", "(_=0)"].map(params => { const r="return typeof arguments; ", f=Function(params.slice(1,-1), r+occluding); return `${f().padEnd(9)} // function${params}{${r+occluding}}`; }))).join("\n")'
#### ChakraCore, engine262, JavaScriptCore, Moddable XS, SpiderMonkey
function  // function(){return typeof arguments; function arguments(){}}
function  // function(..._){return typeof arguments; function arguments(){}}
function  // function(_=0){return typeof arguments; function arguments(){}}
object    // function(){return typeof arguments; var arguments}
object    // function(..._){return typeof arguments; var arguments}
object    // function(_=0){return typeof arguments; var arguments}

#### V8
function  // function(){return typeof arguments; function arguments(){}}
function  // function(..._){return typeof arguments; function arguments(){}}
function  // function(_=0){return typeof arguments; function arguments(){}}
object    // function(){return typeof arguments; var arguments}
undefined // function(..._){return typeof arguments; var arguments}
undefined // function(_=0){return typeof arguments; var arguments}
Tracing through the full machinery would take me a while, but I can at least confirm that there should not be a difference between simple and non-simple arguments lists in this case, so V8 is definitely wrong somewhere, and it seems quite likely to be wrong in the cases where it differs from other implementations
02:42
<sirisian>
Well, intuitive as in: all closed-over variables automatically behave like SharedArrayBuffer items, without the bloat. (And in-scope functions can just be called without any module syntax.) Creating a variable and using an atomic to increment it, or some other operation, would just "work" without things like shared structs or shuffling data into TypedArrays like it's some separate API.
02:45
<bakkot>
SABs are the single most "handle with care" part of the entire language, especially when used without atomics; making it so that every single thing in the language behaved like that would be... not an idea I'd endorse, to put it lightly, and I imagine that's a common sentiment among the committee
02:46
<bakkot>
Like, just getting the memory model right for SABs was incredibly complicated, and not without bugs; see e.g. https://github.com/tc39/ecma262/issues/1680 https://github.com/tc39/ecma262/issues/2231 https://github.com/tc39/ecma262/pull/1511
02:46
<bakkot>
and that's the simple case, where you're just dealing with raw bytes; it gets more complicated when you get more complicated data structures involved
02:49
<bakkot>
(also blocking atomics don't work on the main thread, so the "using an atomic to increment it" thing doesn't really make sense, at least not on the main thread)
03:04
<sirisian>
I meant incrementing in other threads. My general thinking is I'd like SharedArrayBuffer to be deprecated in favor of a variable-and-thread system that works more like C++. In this setup, "handle with care" is implied for everything. I was talking to someone about my type proposal/notes and they commented that you can't just create, say, an integer in the main thread and increment it in multiple threads (with atomics). In this setup you'd be able to do things like swap two object references atomically, or set a variable's object atomically. It would definitely be very complex to implement, but users could just call functions to create threads and implement parallelism without any extra sugar (wrapping of objects, functions, variables).
03:05
<ljharb>
also, not everyone on the committee :cough: is convinced that threads are "not incredibly harmful" :-)
03:08
<sirisian>
I completely get that. I'm migrating over to WebGPU for my current toy projects. Most of my applications were more of a "spin up 8 threads because I can't use the GPU to compute this" situation. Still, for simple projects and demos it would be nice to write a few lines of code to, say, run a pathfinding algorithm on multiple threads. Though the module block fits those kinds of applications cleanly, where I'm not sharing state between threads.
03:09
<bakkot>
As a rule we don't usually introduce features only intended to be used in toy projects, particularly when they have sharp edges
03:10
<bakkot>
and shared-memory parallelism isn't just a sharp edge, it's an entire box of rusty razor blades
03:10
<Jack Works>
and shared-memory parallelism isn't just a sharp edge, it's an entire box of rusty razor blades
so how u think about the shared structs proposal?
03:12
<bakkot>
Jack Works: as with SABs it's something which will be useful to build safe-to-use libraries on top of, but not something I'd expect users to touch in everyday life
03:13
<bakkot>
it's carefully designed so that the shared memory parts are constrained to the struct and its references, and doesn't get out into the rest of your program, which is the only thing which makes it even conceivably a good idea
03:13
<bakkot>

that is, I agree with the readme:

Like other shared memory features in JavaScript, it is high in expressive power and high in difficulty to use correctly. This proposal is both intended as an incremental step towards higher-level, easier-to-use (e.g. data-race free by construction) concurrency abstractions as well as an escape hatch for expert programmers who need the expressivity.

03:13
<Jack Works>
Jack Works: as with SABs it's something which will be useful to build safe-to-use libraries on top of, but not something I'd expect users to touch in everyday life
once it is available, it will be used in everyday life by programmers who have a C++/Rust/Java/... background
03:14
<bakkot>
well
03:14
<bakkot>
seems bad
03:14
<Jack Works>
lol
03:14
<bakkot>
if we actually think that's going to happen, it's probably not worth putting in the language
03:15
<bakkot>
that said, I have a background in all of those languages and still wouldn't touch structs without thinking extremely carefully about it
03:15
<Jack Works>
no one uses SAB + workers, because creating one is much harder than just following the JS style of multi-threaded programming
03:15
<bakkot>
I use SABs...
03:16
<bakkot>
but, you know, only after thinking extremely carefully about it
03:16
<bakkot>
and emscripten uses them to good effect as well
03:16
<bakkot>
I think module blocks will make workers more popular in general, tbh
03:17
<Jack Works>
I completely get that. I'm migrating over to WebGPU for my current toy projects. Most of my applications were more of a "spin up 8 threads because I can't use the GPU to compute this" situation. Still, for simple projects and demos it would be nice to write a few lines of code to, say, run a pathfinding algorithm on multiple threads. Though the module block fits those kinds of applications cleanly, where I'm not sharing state between threads.
🤔 for a toy project maybe you can try a toy runtime. I've heard the shared structs proposal has a demo implementation in V8. Maybe you can contact the V8 team to get a demo build and play around.
03:17
<ljharb>
if something is going to encourage wider usage of multi-threaded programming in JS, that sounds like a huge detriment to the language
03:17
<bakkot>
multi-threading is good!
03:17
<ljharb>
"being single-threaded" is a feature, not a bug
03:17
<sirisian>
Well, WebGPU makes it not a toy project, technically, since it's identical to an existing piece of software. The performance of doing the project with web workers was very suboptimal compared to the usual approaches (much slower, with limitations a GPU approach wouldn't have). The main idea, though, is taking a data structure and passing it through a pipeline where each operation is expensive. One could imagine, say, using the pipeline proposal where each function just calls into a thread? Yeah, that's probably close, but simplified.
03:17
<bakkot>
shared-memory multithreading is bad
03:17
<bakkot>
but multiple threads are good
03:17
<bakkot>
CPUs have many cores
03:17
<ljharb>
i can agree that things that are observably the same as "being single-threaded" are good
03:18
<ljharb>
the thing i value is that things must act as if they're single-threaded. they can be faster than that if i can't tell the difference, and that's a good thing
03:18
<bakkot>
deliberately limiting your programming language so that it can't use more than 1/16th of the CPU seems like... bad
03:18
<sirisian>
I have a 12900k for reference. :|
03:19
<bakkot>
we shouldn't be optimizing for people with 12900ks
03:19
<Jack Works>
"being single-threaded" is a feature, not a bug
this goes too far. i support multi-threading via message passing, not memory sharing
03:19
<bakkot>
but even the cheapest android phones available have 4 cores these days
03:21
<pokute>
We could simultaneously introduce manual memory management as a viable alternative to GC in ECMAScript since users will appreciate the freedom. 🚎
03:22
<Jack Works>
actually I'm curious: if Record & Tuple ships and is highly optimized by engines, does that make life easier?
03:22
<bakkot>
records and tuples are immutable so it doesn't much matter
03:22
<Jack Works>
we can pass immutable objects/arrays with zero serialization cost (the engine can share the memory)
03:22
<bakkot>
you'd still have to postMessage them, and you can do that with a plain object
03:22
<Jack Works>
you'd still have to postMessage them, and you can do that with a plain object
yeah, but that needs a clone
03:22
<bakkot>
yeah that's fair
03:23
<bakkot>
my impression is that the expensiveness of the clone is rarely a limiting factor, but it might be for some projects
03:23
<Jack Works>
We could simultaneously introduce manual memory management as a viable alternative to GC in ECMAScript since users will appreciate the freedom. 🚎
WeakMap[@@iterator]!
03:24
<bakkot>
that's been proposed...
03:24
<bakkot>
you can do it yourself with weakrefs if you really want to
03:24
<bakkot>
but, like
03:24
<bakkot>
don't
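(For the curious, the WeakRef construction being advised against looks roughly like this; the class and method names are invented for illustration. Dead refs are cleaned up lazily during iteration, which leaks GC timing into your program, hence the "don't":)

```javascript
// An iterable WeakMap built from WeakRefs: it can enumerate keys
// that are still alive. A FinalizationRegistry could prune dead refs
// eagerly; here cleanup happens lazily while iterating.
class IterableWeakMap {
  #refs = new Set();       // strong refs to WeakRefs, not to the keys
  #values = new WeakMap(); // key -> value, weakly held as usual
  set(key, value) {
    this.#refs.add(new WeakRef(key));
    this.#values.set(key, value);
    return this;
  }
  get(key) {
    return this.#values.get(key);
  }
  *[Symbol.iterator]() {
    for (const ref of this.#refs) {
      const key = ref.deref();
      if (key === undefined) {
        this.#refs.delete(ref); // key was collected; drop the ref
      } else {
        yield [key, this.#values.get(key)];
      }
    }
  }
}
```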
03:25
<Jack Works>
my impression is that the expensiveness of the clone is rarely a limiting factor, but it might be for some projects
👀 what's the common limiting factor?
03:25
<bakkot>
workers are annoying to create, mostly
03:26
<bakkot>
and postmessage is annoying to use
03:26
<bakkot>
you can't just await stuff without building some wrappers
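(The kind of wrapper being described is a few lines of request/response bookkeeping; a sketch, where the `{ id, payload }` / `{ id, result }` protocol is an invented convention, not a standard:)

```javascript
// Wrap anything with postMessage/addEventListener (e.g. a Worker) so
// each call returns a promise that resolves with the matching reply.
function promisify(workerLike) {
  let nextId = 0;
  const pending = new Map();
  workerLike.addEventListener("message", ({ data }) => {
    const resolve = pending.get(data.id);
    if (resolve) {
      pending.delete(data.id);
      resolve(data.result);
    }
  });
  return (payload) =>
    new Promise((resolve) => {
      const id = nextId++;
      pending.set(id, resolve);
      workerLike.postMessage({ id, payload });
    });
}

// Usage sketch: const call = promisify(new Worker("worker.js"));
// const result = await call({ op: "pathfind", grid });
```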
03:27
<sirisian>
if something is going to encourage wider usage of multi-threaded programming in JS, that sounds like a huge detriment to the language
My thinking is people should be able to use it without thinking much, or via other libraries. Like, years ago I wrote a small game server in C++ for web sockets, then converted it to node.js with WS. To speed things up to support thousands of players again, I moved the packet deserialization to another "thread" with cluster. Creating simple producer/consumer systems in worker threads for processing packets should be, like, super simple. (In C++ I was using, I think, Boost fibers for something similar and it was very elegant.)
03:27
<Jack Works>
I think requiring memory sharing in JS is like requiring imperative-style programming in Haskell 🤔
03:28
<ljharb>
you've got a lot of "shoulds" in there that seem pretty informed by C++ experience, which isn't something most JS programmers have or will ever have
03:30
<ljharb>
i'm pretty confident that in the fullness of time, the majority of JS devs won't have ever used something besides JS :-) no way to prove it either way, ofc.
03:30
<bakkot>
ljharb I feel like "it is good to do expensive compute off the main thread" is like... not a principle I would expect to find disagreement with?
03:30
<Jessidhia>
I haven’t used Go in over 5 years but I miss goroutines and channels
03:31
<sirisian>
Well I'm just saying there are situations where a JS programmer runs into an issue and the real solution of just calling a thread and passing work over is much more complicated than it should be.
03:31
<pokute>
Well, I would be interested in what kind of API would be super simple for a JS developer for writing "multi-threaded" code.
03:31
<ljharb>
bakkot: yeah i'm not disputing that. something that worked identically single-threaded as multi-threaded, so engines could unobservably execute them across multiple cores, would be amazing
03:31
<bakkot>
yeah but that's not... possible
03:31
<bakkot>
the whole point is that the compute is happening while other compute is happening
03:31
<bakkot>
which is inherently observable
03:31
<ljharb>
ay, there's the rub
03:32
<ljharb>
personally i would prefer a world where everything is eternally single-threaded, and parallelism is done via processes, to a world where JS is ruined by bringing in all the problems of threading. i'm quite sure there are those who violently disagree with me, ofc.
03:32
<bakkot>
threading is already a thing in JS
03:33
<ljharb>
sadly, that is true
03:33
<bakkot>
doesn't seem to have been ruined
03:33
<bakkot>
... at least, not by that
03:33
<ljharb>
that's because it's unapproachable and not super usable :-)
03:33
<Jack Works>
doesn't seem to have been ruined
because that api is tooooo hard to use
03:33
<ljharb>
i'm content to keep it that way, so that advanced niche use cases can leverage it, but regular JS devs aren't tempted to
03:34
<Jack Works>
you could only manipulate a number array
03:34
<bakkot>
you can postmessage
03:34
<bakkot>
which is the only multithreading in some languages
03:34
<bakkot>
message passing is a totally normal way of doing multithreading
03:35
<Jack Works>
which is the only multithreading in some languages
(including JS before we have SAB)
03:35
<bakkot>
ljharb I stand by "it is good to do expensive compute off the main thread"
03:35
<bakkot>
if that is good in general, then it is good for "regular JS devs"
03:35
<ljharb>
i think the goodness of that is far outweighed by the badness of threaded programming gotchas.
03:35
<ljharb>
slowness >>>>>> race conditions, always
03:36
<bakkot>
basically all threaded programming gotchas are about shared memory
03:36
<pokute>
Well I'm just saying there are situations where a JS programmer runs into an issue and the real solution of just calling a thread and passing work over is much more complicated than it should be.
We can compare this to an early-2000s C++ programmer who runs into an issue where they're waiting for a long operation (like a disk read) and want to do something in the meantime. The default solution of threads and passing work over is a lot more complicated to wrap your head around than most other C++ code, especially with the special and hard-to-intuit considerations of thread-safety.
03:36
<bakkot>
race conditions are almost always a thing in shared memory, but are no more a thing with message passing than they are with async functions
03:36
<ljharb>
i would be happy to be convinced that there's a threading model that has no shared memory yet has a deterministic way to communicate
03:36
<bakkot>
postmessage
03:36
<bakkot>
postmessage is the thing you are talking about
03:36
<bakkot>
also channels in go
03:36
<ljharb>
postMessage takes objects too
03:36
<bakkot>
it's like a very normal way of doing multithreading
03:36
<ljharb>
and to be fair, i'm not familiar with go
03:36
<bakkot>
the objects are cloned, not shared, in postMessage
03:37
<ljharb>
sure, but structured cloning is its own pile of problems :-)
03:38
<bakkot>
right but whatever problems structured clone has, those problems aren't inherent to multithreading
03:38
<Jack Works>
We can compare this to an early-2000s C++ programmer who runs into an issue where they're waiting for a long operation (like a disk read) and want to do something in the meantime. The default solution of threads and passing work over is a lot more complicated to wrap your head around than most other C++ code, especially with the special and hard-to-intuit considerations of thread-safety.
don't know C++, but isn't io_uring some kind of async without blocking threads?
03:38
<bakkot>
being against shared memory parallelism is very reasonable, but it seems wrong to generalize this to being against multithreading in general
03:39
<sirisian>
That reminds me of someone's finance app I saw, made for a company. It kind of lagged. It was doing stuff with hundreds of thousands of records client-side. This was before web workers (and Power BI, I think). Fascinating what people try to do in JS single-page applications. Granted, computers are faster now, so I don't think it's as huge of an issue.
03:39
<Jack Works>
sure, but structured cloning is its own pile of problems :-)
then message passing by R&T™!
03:42
<pokute>
That reminds me of someone's finance app I saw, made for a company. It kind of lagged. It was doing stuff with hundreds of thousands of records client-side. This was before web workers (and Power BI, I think). Fascinating what people try to do in JS single-page applications. Granted, computers are faster now, so I don't think it's as huge of an issue.
I bet in a hundred years people will still be able to write lagging apps, even with thousands of items and even with multithreading. Never underestimate people. (Though if AI will write the software......)
03:44
<Jack Works>
another question. now that we've reified Realms and some host hooks (compartments proposal) as something we can control, is it possible to have Agents/Agent Clusters reified, to run suspicious code?
03:44
<bakkot>
depends on what you mean by "suspicious code"?
03:45
<Jack Works>
untrusted code
03:45
<bakkot>
completely untrusted? no
03:46
<bakkot>
spectre is going to sit there haunting you
03:46
<bakkot>
if timing attacks are outside your threat model, though, what do you want that realms don't already give you?
03:46
<Jack Works>
oh... but you can turn off the high-resolution timer, right?
03:46
<bakkot>
no
03:46
<bakkot>
https://gruss.cc/files/fantastictimers.pdf
03:47
<Jack Works>
if timing attacks are outside your threat model, though, what do you want that realms don't already give you?
while (true);
03:47
<bakkot>
do it in a worker?
03:48
<Jack Works>
well, maybe i don't need to worry about while(true) so much
03:52
<pokute>
I would be really interested in any threading model that would allow something like myThread.shareReference(globalThis); that wouldn't completely break every JS coder's expectations.
03:55
<bakkot>
I don't think you can simultaneously have "shared memory" and "doesn't completely break every JS coder's expectations"
03:55
<bakkot>
at least not without adding in the whole of Rust's ownership model
03:55
<bakkot>
... which is going to break every JS coder's expectations anyway, for that matter
04:00
<pokute>
I think "multi-threading" in JS is a bad term, since "threads" imply certain stuff like shared memory that is practically impossible with JS.
04:01
<pokute>
threads are (I think) an OS feature leveraging CPU capabilities, and nothing in JS actually relies on that. Workers could run in a separate process or even on a remote server.
04:04
<sirisian>
Again, shared memory already exists. SharedArrayBuffer has allowed this for a long time now.
04:06
<pokute>
It's explicitly shared memory. That's very different from implicitly shared memory where all of a process' memory is shared between threads.
04:24
<pokute>
@sirisian I was thinking about your initial question, about async/await. Async/await is just a different way to write certain function calls (callbacks). For it to use multiple threads, every function call would have to be potentially using threads. Most callbacks just call other functions; the functions that actually do heavy computation are a vanishingly small percentage. Creating a new, costly thread for each function call would immediately erode any performance benefit gained from parallelism. This might be fixed by JS engines inspecting code and spinning only such heavy-computation functions into separate threads when they recognize it to be safe. But that's completely up to the engine and isn't a language issue at all.
04:25
<sirisian>
The threading would be explicit when making a call.
04:25
<pokute>
There's nothing preventing existing engines from adding a feature where they run heavy computation in parallel, in separate threads, if they can handle the possible side effects.
04:27
<pokute>
So it would be a normal function that is called?
04:27
<sirisian>
yes
04:29
<pokute>
A normal function has access to all globals and closures and is free to modify them as it sees fit. If it was run as a thread, this would mean implicit shared memory. How would other code that runs parallel to that be safe from variables changing their values suddenly?
04:29
<sirisian>
You could, for example, call multiple functions and then await Promise.allSettled on them if you wanted to join back in an async task. I haven't thought about this hard at all, but something like foo.callThread(..., args); which returns a promise. Ideally we'd have cancellable promises by then. >_>
04:30
<sirisian>
It wouldn't be safe at all. Using threads would have an assumed level of complexity just like using SharedArrayBuffer stuff.
04:32
<pokute>
From what I've understood of SharedArrayBuffer, it's always completely safe to use due to how extremely narrow and restricted its features are. It's not even complex. It's cumbersome.
04:32
<sirisian>
I should mention it would be my hope that with this we'd eventually get concurrent data structures and standard-library stuff in the far future. I noticed that State of JS mentioned some data structure stuff; not sure of the context of that. A concurrent queue at the least, heh.
04:35
<pokute>
Also, people should take their promises more seriously. :-) Don't ask for promises that you can't receive later.
04:39
<pokute>
"Cancellable promises" is a terrible term, since it never reliably cancels any work a promise does. It only cancels the receiving of results. That's not what people expect.
04:41
<sirisian>
Good point.
04:49
<pokute>
Which is why code should rather gracefully receive and discard out-of-order and obsolete data from promises. Instead of cancelling promises (which is quite simple to do), you could expect a version number from a REST call. You could add metadata to a REST call that you receive with the result, as a way to see that the result is for the current context you're viewing. Etc...
04:50
<sirisian>
Kind of surprised there isn't a simple "throw at the next await" kind of language design that could be made for that. I have a few cancellable systems where there's stuff like await Promise.race([cancelPromise, workPromise]); across multiple lines.
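(That multi-line Promise.race pattern can be packaged once; `cancellable` here is a hypothetical helper name:)

```javascript
// Wrap the Promise.race([cancelPromise, workPromise]) pattern once so
// call sites get a { promise, cancel } pair instead of repeating it.
function cancellable(workPromise) {
  let cancel;
  const cancelPromise = new Promise((_, reject) => {
    cancel = () => reject(new Error("cancelled"));
  });
  return { promise: Promise.race([workPromise, cancelPromise]), cancel };
}

// Usage sketch:
// const { promise, cancel } = cancellable(doWork());
// button.onclick = cancel;
// await promise; // rejects with Error("cancelled") if cancel() ran first
```

As pokute notes above, this only abandons the result; the underlying work keeps running.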
04:54
<pokute>
You could do it with generators.
04:55
<pokute>
But it's very cumbersome.
04:56
<sirisian>
Consume the generator unless the cancel flag is set, kind of thing, and in the worker just yield all the work?
04:57
<sirisian>
Not a bad idea. I could see that being kind of elegant in some of my code. I think it was written before async generators existed.
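(A sketch of that generator-driver shape, with invented names: the work yields after each chunk, and the driver simply stops advancing it when cancelled:)

```javascript
// Drive a generator one chunk per event-loop turn; stop early if the
// token is cancelled. Exiting the for...of calls gen.return(), which
// runs any finally blocks in the generator for cleanup.
async function run(gen, token) {
  let last;
  for (const chunk of gen) {
    last = chunk;
    if (token.cancelled) return { cancelled: true, last };
    await Promise.resolve(); // let other tasks run between chunks
  }
  return { cancelled: false, last };
}

function* work() {
  for (let i = 1; i <= 3; i++) yield i; // each yield = one unit of work
}

// run(work(), { cancelled: false }) resolves to { cancelled: false, last: 3 }
```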
04:57
<pokute>
Yeah. redux-saga is one pretty well known example of that.
05:00
<pokute>
I really liked redux-saga at one point. I tried to use it in everything non-simple. Now I removed it from my own hobby project.
05:01
<pokute>
Now I write mega-reducers. :-)
05:02
<pokute>
Well, I'm in the process of removing most of the complex sagas.
17:27
<Richard Gibson>

Tracing through the full machinery would take me a while, but I can at least confirm that there should not be a difference between simple and non-simple arguments lists in this case, so V8 is definitely wrong somewhere, and it seems quite likely to be wrong in the cases where it differs from other implementations

@bakkot it looks to me like nobody is following the spec here but V8 comes closest. Absent overlap between VarDeclaredNames and parameter names, FunctionDeclarationInstantiation steps 27 and 28 (the former when any parameter has an initializer, the latter otherwise) should both create a binding for each variable and initialize it to undefined, even if that variable is named "arguments".

17:41
<shu>
that does not sound like a thing i want to implement
18:01
<Richard Gibson>
my primary concern on this is alignment between spec and implementations, and resolution by changing the former to match the latter seems expedient
18:29
<bakkot>

Richard Gibson: step 22.f:

Let parameterBindings be the list-concatenation of parameterNames and « "arguments" ».

step 27:

 [...]
 b. Let instantiatedVarNames be a copy of the List parameterBindings.
 c. For each element n of varNames, do
   i. If n is not an element of instantiatedVarNames, then
     1. Append n to instantiatedVarNames.
     2. Perform ! env.CreateMutableBinding(n, false).
     3. Perform ! env.InitializeBinding(n, undefined).
18:30
<bakkot>
so no, step 27 should not create/initialize the arguments binding, to my reading
18:31
<bakkot>
step 28 works a little differently but has the same practical effect for the purposes of the code in question
18:32
<Richard Gibson>
sighs with relief