00:18 | <Steve Hicks> | That's not necessarily true with implicit scopes. If any function call is made an implicit scope, then breaking out would be impossible. |
00:18 | <Steve Hicks> | (but I could be wrong about that) |
00:20 | <Steve Hicks> | I'm a little more confident that it's an unacceptable overhead for polyfilling/transpilation |
00:20 | <Steve Hicks> | (which, admittedly isn't a reason not to do it... but it would probably cause us to give up on using AsyncContext any time in the foreseeable future) |
00:21 | <Stephen Belanger> | It actually shouldn't, because then your "scope" can be held as a stack variable and you can just rely on stack semantics to manage where sets/gets route. |
00:24 | <Stephen Belanger> | It's actually faster to put it in the function header; otherwise you're always doing heap operations, whereas treating it as a stack variable lets you both locate the stack slot to modify and do captures for async tasks or nested scopes as stack operations. |
00:30 | <Steve Hicks> | Polyfilling would require wrapping every function body (or every function call), which is just too expensive. Also, this wouldn't help with the beforeEach situation, nor would it help achieve flow-through semantics. So it's just a (dubious) ergonomics win. |
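A rough sketch of that overhead concern, assuming the implicit-scope idea means a polyfill/transpiler would have to wrap every function body; `__pushScope`/`__popScope` are hypothetical polyfill internals, not anything from the proposal:

```js
// Hypothetical polyfill internals (illustration only).
const __scopeStack = [];
function __pushScope() { __scopeStack.push(new Map()); }
function __popScope() { __scopeStack.pop(); }

function doWork(req) { return req; }

// Original source:
//   function handler(req) { return doWork(req); }
// What the transpiler would have to emit for *every* function:
function handler(req) {
  __pushScope();        // every function entry pays this cost
  try {
    return doWork(req);
  } finally {
    __popScope();       // and every exit path (return/throw) pays it too
  }
}
```

Multiplied across every function (or call site) in an application, that per-frame bookkeeping is the overhead in question.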
01:45 | <Stephen Belanger> | It doesn't help beforeEach on its own. It does help with flow-through semantics though. |
01:46 | <Steve Hicks> | It doesn't help |
01:47 | <Stephen Belanger> | By linking the scopes to propagate between them. That's the whole point of raising the scopes out like that--you have both something you can propagate from and something to propagate to, using the same construct. |
01:48 | <Stephen Belanger> | It's not exactly a boundary around the function though, it's a boundary around a sync segment of code. As with async/await boundaries, you need to produce a graph around calls just as you would with awaits. |
01:49 | <Stephen Belanger> | Which, again, also benefits from happening on the stack. |
01:50 | <Stephen Belanger> | When a call is made, you can capture the current state; inside the function you can continue that context. On exiting the function, you can allow the context to flow out to the caller, and then the caller's stack can decide if it wants to retain that context or restore what it captured before the call. |
01:50 | <Stephen Belanger> | And you can do all that with the stack frames. |
01:51 | <Steve Hicks> | how is that decision expressed? |
01:53 | <Stephen Belanger> | We can generate additional code around any call instruction: it does the capture before the call, and after the call it can decide what to do with the context value as it is at the time the call exits--either keep that value if we want flow-through, or grab the reference out of the stack from before the call and restore that. |
01:53 | <Stephen Belanger> | To be clear, I mean putting all that in the generated bytecode, not user-controllable code. |
01:54 | <Steve Hicks> | What I'm unclear on is what determines whether it flows through or restores. Is it something about the call? Something about the variable? |
01:56 | <Stephen Belanger> | A snapshot is stored by the capture before the call, and the current state just gets left as-is when the nested function exits. We can then send that current state to some API to tell it if we want to continue flowing it forward, and we can send the snapshot to another API for anything that wants to flow around. We can also have some logic to detect whether there even are any active stores, and skip generating that code unless there actually is a store. |
01:57 | <Stephen Belanger> | That's already done for PromiseHooks--it only injects the lifecycle event code if there's actually registered hooks. |
02:04 | <Steve Hicks> | But who decides? Is it user code that runs somewhere (which is a non-starter performance-wise)? I'm trying to understand concretely what you're suggesting, so I'll give a (wrong) concrete example.
Suppose the first one should flow through and the second call should flow around. Then there's gotta be something different somewhere - can you give a mock-up of what that difference might look like? |
02:06 | <Stephen Belanger> | No, this is all purely runtime code. If we have two variables, as has been suggested, the snapshot and the current state can get passed to the machinery that handles the flows for those two variables and does whatever it needs to do--which is generally just a pointer-copy. |
02:07 | <Stephen Belanger> | This is all generated code I'm talking about. Nothing user-facing. We'd still be applying our own flow semantics, whatever those happen to be. It's just that we could do that on the stack generally, if we design the system carefully. |
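An illustrative JS rendering of the generated code being described, assuming the real thing would be engine-emitted bytecode operating on stack slots; the current-context helpers below are hypothetical stand-ins:

```js
// Hypothetical stand-ins for the engine's current-context slot.
let __currentContext = new Map();
const getContext = () => __currentContext;
const setContext = (ctx) => { __currentContext = ctx; };

// Shape of the code emitted around a call instruction: capture before the
// call, then after the call either keep the callee's context (flow-through)
// or restore the captured one (flow-around), per the chosen semantics.
function callWithRouting(callee, flowsThrough) {
  const snapshot = getContext();     // capture before the call
  const result = callee();           // callee may change the current context
  const exiting = getContext();      // context as it is when the call exits
  setContext(flowsThrough ? exiting : snapshot);
  return result;
}
```

The point is that both the snapshot and the exiting state are available at the call boundary, so applying whichever policy holds is just a pointer copy, not user code.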
02:07 | <Steve Hicks> | ok so now I think you're saying there are two different types of variables that have different built-in propagation strategies. IIUC merges need to be O(1) in the number of variables. |
02:08 | <Steve Hicks> | which means O(1) different semantics |
02:09 | <Stephen Belanger> | Merges are a separate problem, which I don't think is actually all that important. We just need a correct branch to link back to. Building merge contexts would be the ideal for APMs, but we can work around their absence well enough. We can't work around having no branch to link to though. |
02:10 | <Stephen Belanger> | Ideally merges would be solved too, but that's much lower priority to me. |
14:19 | <Matteo Collina> | |
15:55 | <Andreu Botella> | Now that I think some more about the web needs for cross-realm keys in the mapping, I think those keys could be mapped to values which aren't JS values but spec-internal records 🤔 |
15:55 | <Andreu Botella> | how do we encode that into the spec? |
15:56 | <Andreu Botella> | I guess we could just assert on .get() that the value is a JS value |
16:03 | <Steve Hicks> | Unfortunately that does not work to "fix" the use case. I don't see how you can think it would. You lose any sense of "globality" of the variable. |
16:12 | <Steve Hicks> | Are you thinking about 5.6.3.4-5.a where it says "if SameValueZero(p.[[AsyncContextKey]], asyncVariable) is true, ..."? As far as I'm aware, there's no notion of reference equality for records anywhere else in the spec, so that might be a new concept that we probably don't want to have to define. |
16:13 | <Andreu Botella> | I was thinking of the values being records; the comparison on keys would not change |
16:13 | <Steve Hicks> | maybe PrivateName does something like that, though? |
16:13 | <Andreu Botella> | it'd be an assertion after that line, checking that the value is an ECMAScript language value |
16:15 | <Andreu Botella> | well, it'd be turning that line into a multiline if |
16:15 | <Andreu Botella> | and having the assertion inside the then, before returning |
16:16 | <Steve Hicks> | I just checked and PrivateName is indeed a record that's compared by reference, e.g. https://tc39.es/ecma262/#sec-privateelementfind |
16:16 | <Steve Hicks> | (but maybe I'm a step or two behind you here) |
16:17 | <Steve Hicks> | what does mapping to records do to help with cross-realm keys? |
16:19 | <Andreu Botella> | well, the reason I was thinking of cross-realm keys was because of web specs using AsyncContext internally, and their keys might need to be cross-realm |
16:20 | <Andreu Botella> | but the same kind of use case might also need records as values, since those keys would not ever be exposed to JS |
16:20 | <Andreu Botella> | it's not directly to help with cross-realm keys, it's to help with the problem that those keys solve |
16:21 | <Andreu Botella> | it's better than having to wrap those records into a JS object somehow |
16:21 | <Steve Hicks> | ah gotcha, so if the web spec isn't exposing the AsyncContext.Variable at all but just storing a random record as the value type - that makes sense |
16:22 | <Steve Hicks> | That seems worth a textual note somewhere |
16:22 | <Steve Hicks> | since otherwise just reading the ES spec would make it very confusing why the assertion is there |
16:22 | <Andreu Botella> | definitely |
16:45 | <nicolo-ribaudo> | > it's better than having to wrap those records into a JS object somehow |
16:46 | <nicolo-ribaudo> | An object with an internal slot holding the record |
16:48 | <Andreu Botella> | What is the problem with doing this? |
16:49 | <Andreu Botella> | I don't expect a lot of specs to use this, but it wouldn't only be used by, say, HTML |
16:49 | <Andreu Botella> | scheduler.yield() does seem to have a clear use case for it |
16:50 | <Andreu Botella> | but maybe that's niche enough that specs that use it can do the extra work |
16:52 | <nicolo-ribaudo> | For specs there could be an AO StoreRecordInAsyncContext and GetRecordFromAsyncContext, that abstracts away the wrapping |
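A rough JS-flavoured sketch of what those AOs could abstract away, assuming the wrapper is, in spec terms, an object with an internal slot holding the record; here a plain property stands in for the slot and every name is illustrative:

```js
// One Variable per spec-internal key; never exposed to user code.
const internalVar = new AsyncContext.Variable();

// StoreRecordInAsyncContext(record, callback): run callback with the
// spec-internal record stored (wrapped) in the async context mapping.
function storeRecordInAsyncContext(record, callback) {
  return internalVar.run({ record }, callback);
}

// GetRecordFromAsyncContext(): unwrap and return the record, if any.
function getRecordFromAsyncContext() {
  const wrapper = internalVar.get();
  return wrapper === undefined ? undefined : wrapper.record;
}
```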
16:53 | <nicolo-ribaudo> | This is more of a decision for the spec editor though, let's discuss it during the editors stage 2.7 reviews :) |
17:34 | <Andreu Botella> | I'm currently working on an explainer that would describe how web specs would have to deal with AsyncContext |
17:35 | <Andreu Botella> | I'll go with "this is the complicated way that you have to do this currently, but we expect to add AOs to make things more ergonomic in the future" |
21:40 | <Justin Ridgewell> | I don’t get it; isn’t this what AsyncContext.Snapshot is for? |
21:47 | <Justin Ridgewell> | for tests, if you have set/enterWith, you could set the context in the Snapshot instance that the test will be run within. |
22:02 | <Justin Ridgewell> | Something like:
|
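A minimal sketch of that shape, using only the proposed Variable/Snapshot API plus a Mocha/Jest-style beforeEach/it and node:assert, all assumed for illustration:

```js
import assert from 'node:assert';

const requestVar = new AsyncContext.Variable();
let testSnapshot;

beforeEach(() => {
  // Build the context the test should run within, and capture it.
  requestVar.run({ user: 'test-user' }, () => {
    testSnapshot = new AsyncContext.Snapshot();
  });
});

it('sees the value set up in beforeEach', () => {
  // A framework with a setSnapshot-style hook would do this wrapping itself;
  // here it is done explicitly to keep the sketch self-contained.
  testSnapshot.run(() => {
    assert.equal(requestVar.get().user, 'test-user');
  });
});
```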
22:22 | <Stephen Belanger> | That seems to kind of defeat the point of AsyncContext, which is to not need to change APIs and pass things around manually. 😅 |
22:26 | <Justin Ridgewell> | I think beforeEach is already a weird API, because its purpose is to do setup work that would be done by the user’s code already by the time the call is being made. Having a setSnapshot API allows the user’s real code to not change and still keep a strong encapsulation of the variable. |
22:29 | <Stephen Belanger> | That also doesn’t address that this problem is not unique to test frameworks; it’s a generalized pattern of wanting to share data across disconnected but sequential async systems. This whole scenario is what most use of ALS in Node.js has been. |
22:30 | <Stephen Belanger> | You get the same problem with middleware layers in routing frameworks. |
22:31 | <Stephen Belanger> | And also most lifecycle event plugin systems. |
22:31 | <Stephen Belanger> | And of course most APM instrumentation, which is the source of my concern specifically. |
22:32 | <Stephen Belanger> | Flow-through is what most existing context users expect. |
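For concreteness, a minimal Node.js sketch of that middleware pattern using the existing AsyncLocalStorage.enterWith() API, assuming the middleware runs synchronously in the same tick (framework shape and names are illustrative):

```js
import { AsyncLocalStorage } from 'node:async_hooks';

const requestContext = new AsyncLocalStorage();

function authMiddleware(req) {
  // Set the context once, without wrapping the rest of the pipeline in a
  // callback: the value is expected to flow through to later steps.
  requestContext.enterWith({ user: req.headers['x-user'] });
}

async function handler() {
  // A later, disconnected-but-sequential step still sees the value.
  const ctx = requestContext.getStore();
  return `hello ${ctx?.user ?? 'anonymous'}`;
}

async function dispatch(req) {
  authMiddleware(req);   // sets context for the rest of this execution
  return handler();
}

dispatch({ headers: { 'x-user': 'ada' } }).then(console.log); // "hello ada"
```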