00:00 | <James M Snell> | I'd be curious as to why. Even if the answer is |
00:01 | <littledan> | yeah, I guess so, I'd just rather this be based on analyzing how our codebase works today to double-check, rather than just hand-waving |
00:02 | <James M Snell> | yep, understood |
00:02 | <littledan> | (and I feel like we need an analogous analysis for the Node ecosystem) |
00:21 | <Justin Ridgewell> | It would indeed be special, but Yoav found that, in DOM, some events naturally wanted to restore the context when they were registered, whereas others should not do so and leave in place the context that triggered them |
00:22 | <Justin Ridgewell> | Chengzhong Wu I think is setting up a meeting with Yoav already? |
00:22 | <James M Snell> | Node.js specifically introduced EventEmitterAsyncResource for those kinds of use cases |
00:26 | <James M Snell> | by default, emitter.emit('...') will propagate the context where emit() is called. With an EventEmitterAsyncResource, however, the context is captured when the emitter is created, and that's what is propagated when emit() is called |
00:27 | <littledan> | but, unhandledRejection is a third behavior, right? |
00:28 | <James M Snell> | yep, because it's a singular event on an emitter that otherwise acts like the default |
00:29 | <James M Snell> | same object, two very different behaviors |
00:31 | <littledan> | makes sense. Ultimately, whether we go with 123 or 321, unhandledRejection is still special, right? |
00:31 | <James M Snell> | yep |
03:05 | <Kris Kowal> | This is a bit of a mischaracterization. Mark asked what Justin thought of the idea of using his work as the explainer. Justin (in my opinion rightly) responded that Mark’s reexplanation is better suited for a different audience. I believe Mark found that argument compelling. |
03:13 | <Kris Kowal> | Posted https://youtu.be/KKOn5SepxYI |
03:24 | <Justin Ridgewell> | Oh, Mark also said we should probably use per-Agent storage for the contexts |
03:24 | <Justin Ridgewell> | https://youtu.be/KKOn5SepxYI?t=1950 |
07:01 | <Justin Ridgewell> | Would be cool to see if someone can build AsyncContext as a Node native module just based on that, or if there are limitations |
08:30 | <Andreu Botella> | Is it special because the mere fact of rejecting causes a macrotask, as opposed to other macrotasks which have to be triggered from a call to a runtime API? |
13:21 | <littledan> | Here's a simple package implementing the proposal: https://github.com/jridgewell/async-context-native |
13:22 | <littledan> | So the magic V8 API does literally all of the work? |
13:26 | <littledan> | I think the specialness here is unrelated to it being a macro task vs micro or sync, and more that we turn out to want this somewhat weird state restored for unique reasons |
13:27 | <littledan> | Anyway, as Daryl from my team wrote on the local issue, it turns out our use case works fine whether the answer is 123 or 321 (though Daryl shares my intuition that 123 feels nicer, maybe related to the fact that he has spent much more time than me on the current 123 implementation) |
16:31 | <James M Snell> | I think in the overwhelming majority of cases the promise is going to reject in the appropriate context. The only case where I've seen this issue so far is the deferred reject where the reject is called in a different context. If the model were set up to handle returning 321 in that case, the more usual cases would still resolve correctly. |
16:31 | <James M Snell> | and it would still be possible to implement it so that 123 was returned if that's really what you wanted |
18:06 | <Justin Ridgewell> | So the magic V8 API does literally all of the work? |
18:07 | <Justin Ridgewell> | It's also not possible to hook into Node's unhandledRejection without completely reimplementing the event queue, but that's a Node issue and not a spec issue. |
18:08 | <littledan> | Yup. The magic V8 API is the parallel for the hooks the spec already implements (though they don't faithfully implement it everywhere it's needed, hence the queueMicrotask not working) |
18:09 | <Justin Ridgewell> | 👍️, I'll add more details later tonight |
18:27 | <littledan> | So the 321 semantics significantly simplify the implementation? |
18:27 | <littledan> | and this is part of the motivation? |
18:28 | <littledan> | it's hard for me to see the part of it which is more intuitive with 321, but if it's easier to implement (avoiding the kInit hook?) then I'm sold |
18:28 | <littledan> | do we need anything more than the small V8 API? e.g., the resolve hook |
18:28 | <littledan> | err, reject hook |
18:29 | <littledan> | or this is handled by the relevant V8 API being synchronous? |
18:29 | <Justin Ridgewell> | V8 could implement it efficiently (all promises will require another slot to hold the context pointer). It's inefficient for embedders to do this |
18:29 | <littledan> | yeah, I'm convinced that either option could be implemented efficiently; this is more about curiosity |
18:29 | <littledan> | what is it that V8 would have to implement, if all we want is 321? |
18:31 | <Justin Ridgewell> | V8 wouldn't need to implement anything, it already has the necessary hooks for us. Embedders will need to capture the context when V8 triggers the PromiseReject hook. |
18:32 | <Justin Ridgewell> | If we want 123 behavior, then V8 needs to store the context on each promise allocation, and provide a method for embedders to access it when V8 triggers the PromiseReject hook. |
18:36 | <littledan> | right, OK |
18:40 | <littledan> | so this works because the V8 API's SetPromiseRejectCallback is invoked synchronously (giving us all the context we need) |
18:46 | <Justin Ridgewell> | Correct |
18:53 | <James M Snell> | Yeah, the promise kInit hook is unnecessary if V8 will capture the context for us on promise creation and give an API to retrieve it |
18:53 | <littledan> | right but we're thinking that we don't even need that, if we build consensus around 321 |
18:53 | <littledan> | right? |
18:54 | <James M Snell> | Correct |
18:55 | <James M Snell> | There is one additional bit of complexity to support multiple AsyncContext instances. That is, for ALS, the context stored is actually a map of ALS instances to the current values associated with each. |
18:55 | <Justin Ridgewell> | Why's that a problem? |
18:55 | <James M Snell> | That's minor tho. But it would be convenient if the V8 API allowed me to store an embedder value directly rather than a v8::Value |
18:56 | <James M Snell> | Not a problem, just a detail |
18:56 | <littledan> | yeah, that's exactly what Justin's code does, I think |
18:57 | <littledan> | definitely necessary! |
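[Editor's sketch of the detail James mentions: the propagated "context" is an immutable map from each variable instance to its current value. The class and method names here are illustrative, not the proposal's API.]

```javascript
// The single propagated context is a map keyed by variable instance.
let currentContext = new Map();

class ContextVariable {
  run(value, fn) {
    const prev = currentContext;
    // Copy-on-write: snapshots captured elsewhere keep seeing the old map.
    currentContext = new Map(prev).set(this, value);
    try {
      return fn();
    } finally {
      currentContext = prev;
    }
  }
  get() {
    return currentContext.get(this);
  }
}

// Two independent variables share the one propagated map.
const a = new ContextVariable();
const b = new ContextVariable();
a.run(1, () => {
  b.run(2, () => {
    console.log(a.get(), b.get()); // 1 2
  });
});
```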
19:44 | <littledan> | oh yeah I guess basically V8 should remove the existing API and just provide a higher-level AsyncContext API (or make it an assertion failure to use them both at once) |
19:44 | <littledan> | for Bloomberg's use case, we really want a C++-accessible API for all of this. I think that would be necessary for the browser-internal cases as well. |
23:17 | <littledan> | Justin Ridgewell Chengzhong Wu : For the TC39 slides, I want to suggest that we talk about OpenTelemetry rather than React Cache. It is a very real client-side use case, and it's also easy to understand based on the context from the previous example (it's just a more sophisticated version). I honestly still don't understand the React Cache case, and I don't think getting into a self-contained explanation of React would play well with theorists in the audience if we had to go down that rabbit hole. OpenTelemetry will sound very convincing to serious people, on the other hand :) and there are intuitive reasons for why you want to split a span on the client side, which Chengzhong Wu's library shows. |
23:18 | <Justin Ridgewell> | Sure |
23:20 | <littledan> | could also be fun to explain the relationship to Justin's awesome library above and/or James's ongoing work, but this can also just be in the air without slides |
23:48 | <Kris Kowal> | OpenTel would speak to me. I used to work with Yuri Shkuro. |
23:58 | <Justin Ridgewell> | Oh god, thenables are broken! |