00:54 | <littledan> | Justin Ridgewell: Do you have a link to the reference site? I couldn't find it |
00:55 | <littledan> | I definitely expected that Yoav's work would use this |
00:55 | <Justin Ridgewell> | Reference site? |
00:57 | <littledan> | err, callsite |
00:58 | <littledan> | just trying to find Yoav's patches or design docs or anything like that |
00:59 | <Justin Ridgewell> | third_party/blink/renderer/modules/scheduler/task_attribution_tracker_impl.cc |
00:59 | <littledan> | ah I see thanks!! |
01:00 | <Justin Ridgewell> | They're using continuation API, and there's no special handling for enqueueMicrotask that I can find |
01:00 | <Justin Ridgewell> | So they're falling into the same thenable bug |
01:01 | <littledan> | how is setTimeout handled here? |
01:01 | <littledan> | does it work? |
01:01 | <littledan> | Shu is skeptical of putting too much work into thenables. I don't understand the issue in enough detail to know how it would be solved, or whether it should be solved. |
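[Illustration of the "thenable bug" being discussed, sketched against the API shape used later in this log (new AsyncContext(), ctx.run(value, fn), ctx.get()); the names come from the proposal discussion here and are not final:]

```js
const ctx = new AsyncContext();

// A custom thenable: awaiting it makes the engine enqueue a
// PromiseResolveThenableJob microtask that calls `then` later.
const thenable = {
  then(resolve) {
    // If that microtask doesn't capture/restore the continuation-preserved
    // embedder data, the value set by run() is not visible here.
    resolve(ctx.get());
  },
};

ctx.run("request-1", async () => {
  console.log(await thenable); // expected "request-1"; with the bug, undefined
});
```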
01:11 | <Justin Ridgewell> | Looks like setTimeout is handled on the Blink side |
01:11 | <Justin Ridgewell> | TaskAttribution users: https://source.chromium.org/search?q=%22GetTaskAttributionTracker()%22&ss=chromium%2Fchromium%2Fsrc |
01:13 | <littledan> | OK, good, this is all useful context |
01:13 | <littledan> | thank you |
01:14 | <littledan> | Shu will look into this and be in some level of contact with Yoav |
01:14 | <littledan> | Chengzhong Wu: How is it going with proposing a meeting between us and Yoav? |
01:19 | <Justin Ridgewell> | I still need to take a look at them |
01:19 | <Justin Ridgewell> | I think this is how setTimeout works: |
01:33 | <littledan> | Cool, though I am having trouble tracing it |
01:33 | <littledan> | Would be awesome if it is possible to hack up a PR to Chrome to make a pair of functions to get and set this context variable somehow |
01:34 | <Justin Ridgewell> | Different than the Get/SetContinuationPreservedEmbedderData? |
01:35 | <Justin Ridgewell> | Or do you mean specific to timers? |
02:17 | <littledan> | I mean that plus this plumbing through all the web APIs (not specific to timers) |
02:17 | <littledan> | So like the node extension you made but it restores across web APIs including timers in exactly the way yoav’s work does |
02:44 | <James M Snell> | Currently we (workerd) have to capture the context manually for the timer and queueMicrotask web APIs. These are still fundamentally based on the V8 API, though. Basically, when a timer is set we call GetContinuationPreservedEmbedderData to get the current storage context and enter that manually when the callback function is called. Pretty simple |
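[A sketch of the capture/restore pattern James describes for timers, written at the JS level; getContinuationData/setContinuationData are hypothetical embedder bindings standing in for V8's Get/SetContinuationPreservedEmbedderData (in workerd this plumbing is C++):]

```js
function setTimeoutWithContext(callback, delay, ...args) {
  // Capture the current continuation data when the timer is scheduled...
  const captured = getContinuationData();
  return setTimeout((...callbackArgs) => {
    // ...and restore it around the callback, putting the old value back after.
    const previous = getContinuationData();
    setContinuationData(captured);
    try {
      callback(...callbackArgs);
    } finally {
      setContinuationData(previous);
    }
  }, delay, ...args);
}
```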
02:45 | <James M Snell> | queueMicrotask will be handled for us once v8 fixes the current limitations |
02:45 | <Justin Ridgewell> | Do you maintain patches onto V8 for workerd? |
02:45 | <James M Snell> | Oh, we also capture manually for wrapped functions and AsyncResource |
02:45 | <littledan> | queueMicrotask will be handled for us once v8 fixes the current limitations |
02:46 | <James M Snell> | We do but none for this yet |
02:46 | <James M Snell> | The limitation is that v8 is not attaching the context to EnqueueMicrotask |
02:51 | <James M Snell> | We certainly could float a patch that covers the queueMicrotask issue but we'd rather that just be addressed upstream. For now we're fine with it not working for thenables but would like that not to be permanent |
02:55 | <Justin Ridgewell> | The thenable bug is solved by copying https://source.chromium.org/chromium/chromium/src/+/main:v8/src/builtins/builtins-microtask-queue-gen.cc;l=238-247;drc=bf9ffddf05af0586dc82cd2a320508cd5dcfbc3c into https://source.chromium.org/chromium/chromium/src/+/main:v8/src/builtins/builtins-microtask-queue-gen.cc;l=191-221;drc=bf9ffddf05af0586dc82cd2a320508cd5dcfbc3c |
02:57 | <Justin Ridgewell> | The generic microtask bug is in that same function, but it's more complicated |
02:59 | <James M Snell> | Oh nice. Since we have a workaround for queueMicrotask we might be able to just float the change for thenables then wait for the rest |
03:00 | <Justin Ridgewell> | I'm assuming you're not using any Blink code for your timers? |
03:00 | <James M Snell> | Correct |
03:03 | <Justin Ridgewell> | Perfect |
03:14 | <Justin Ridgewell> | Sorry, we'll also need these to store the data in the first place |
03:15 | <Justin Ridgewell> | So it's the same amount of work to fix generic microtask (I thought those last 3 already existed in thenable tasks, but they didn't) |
03:17 | <Justin Ridgewell> | Those same 4 changes are all that's necessary to add continuation data to CallbackTask and CallableTask (I don't really know the difference between them…) |
03:18 | <Justin Ridgewell> | As far as I can tell, though, nothing in V8 uses the generic microtask queue except promises; it's only used if you call into it via the public API |
03:32 | <Andreu Botella> | The web specs do enqueue microtasks for various things, and in Blink they do seem to use EnqueueMicrotask |
03:33 | <Andreu Botella> | I'm not yet confident that, if those microtasks run any user code (or queue tasks that do), they would want wrap-like behavior |
03:33 | <Andreu Botella> | I'll look through them though |
15:32 | <Chengzhong Wu> | Chengzhong Wu: How is it going with proposing a meeting between us and Yoav? |
15:33 | <littledan> | Sadly, yoav hasn't responded to my email with the meeting doodle. |
15:44 | <Chengzhong Wu> | thanks! |
17:01 | <Justin Ridgewell> | @benjamn just did the patches for thenables into his deno impl: https://github.com/benjamn/deno-v8/commit/d72a3dd59139242385203b3aa0167451220d0bdf |
17:42 | <Ben Newman (Apollo, @benjamn on GH)> | 👋 |
17:43 | <Ben Newman (Apollo, @benjamn on GH)> | nice to see some new/familiar faces |
17:46 | <Ben Newman (Apollo, @benjamn on GH)> | I recently scraped together this Docker image with a custom build of Deno that supports the AsyncContext proposal in native async/await code: https://github.com/benjamn/deno/pull/2 |
17:47 | <Ben Newman (Apollo, @benjamn on GH)> | … should give you a Deno REPL where you can play with AsyncContext |
17:52 | <Ben Newman (Apollo, @benjamn on GH)> | I don't plan to support this Deno fork in the long run, but I thought it might be useful to share, so others can check their intuitions, suggest test cases, identify context loss pitfalls, etc |
18:02 | <Ben Newman (Apollo, @benjamn on GH)> | (that Docker image is "only" 162MB, thanks mostly to Deno being a single standalone 90MB binary) |
18:15 | <littledan> | Was the SES meeting cancelled today? |
18:15 | <Justin Ridgewell> | Nothing on the agenda |
19:17 | <James M Snell> | Justin Ridgewell: ... what's your sense on how stable the current definition of the AsyncContext interface is? Like, if we were to do something silly and implement a polyfill in workerd now, would we likely be shooting ourselves in the foot? |
19:18 | <Justin Ridgewell> | I don't think the names are stable |
19:19 | <Justin Ridgewell> | If you're not against it, I would just implement AsyncLocalStorage and maybe provide a JS wrapper around it to conform to the current AsyncContext API |
19:19 | <Justin Ridgewell> | Like, import AsyncContext from "workerd:async-context/v0" would give you the current interface |
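[A rough sketch of the wrapper Justin suggests: the AsyncContext shape discussed here (run/get/wrap) layered over AsyncLocalStorage; the "workerd:async-context/v0" specifier above is Justin's hypothetical versioned module:]

```js
import { AsyncLocalStorage, AsyncResource } from "node:async_hooks";

export class AsyncContext {
  #storage = new AsyncLocalStorage();

  run(value, callback, ...args) {
    return this.#storage.run(value, callback, ...args);
  }

  get() {
    return this.#storage.getStore();
  }

  // Approximation: AsyncResource.bind snapshots the stores of every
  // AsyncLocalStorage and restores them whenever the bound function is called.
  static wrap(callback) {
    return AsyncResource.bind(callback);
  }
}
```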
19:20 | <James M Snell> | Yeah we're just trying to decide whether we really want to encourage too much dependency on ALS while AsyncContext is coming. |
19:20 | <James M Snell> | I know it's really difficult to estimate standards logistics machinery, but what's your expectation on when AsyncContext could find its way to stage 3? |
19:20 | <James M Snell> | (if any) |
19:20 | <Justin Ridgewell> | You could choose not to expose the ALS API at all, but implement a public AC interface on top of your private ALS interface |
19:21 | <James M Snell> | we've got customers currently asking specifically for node.js compat with ALS so we're pretty much committed to supporting both apis |
19:21 | <Justin Ridgewell> | I think the most likely thing to change is wrap turning into an object like AsyncResource |
19:21 | <Justin Ridgewell> | get/run seem stable |
19:37 | <littledan> | Justin Ridgewell: ... what's your sense on how stable the current definition of the |
19:37 | <littledan> | I'd instead encourage us all to define a subset of AsyncLocalStorage that should be supported across environments |
19:37 | <littledan> | Once AsyncContext is stable, we can polyfill exactly that subset on top of AsyncContext |
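[The reverse direction littledan describes, sketched: a minimal AsyncLocalStorage subset (run/getStore) polyfilled on top of a future AsyncContext; exactly which methods belong in that subset is the open question here:]

```js
// Assumes a stable AsyncContext with run(value, fn, ...args) and get(),
// per the shape used elsewhere in this discussion.
export class AsyncLocalStorage {
  #ctx = new AsyncContext();

  run(store, callback, ...args) {
    return this.#ctx.run(store, callback, ...args);
  }

  getStore() {
    return this.#ctx.get();
  }

  // Left out of the subset on purpose: enterWith() and disable(), which rely
  // on mutating the current context rather than scoping a new value with run.
}
```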
19:37 | <James M Snell> | heh, I'm not wanting to. We had a question come up about whether we could and I wanted it to not just be me saying "No!" |
19:38 | <littledan> | great, well, you can cite me as well if it's useful (I can give a more quotable explanation if needed) |
19:38 | <littledan> | let's actually write down what this subset is though |
19:42 | <James M Snell> | we can do that on tonight's WinterCG call, probably |
19:46 | <Ben Newman (Apollo, @benjamn on GH)> | apologies for the vague question (I'm new)… is there a good term for an API like AsyncContext … loosely immutable (get only), but overridable/censorable (with run) if you're calling new code? |
19:47 | <Ben Newman (Apollo, @benjamn on GH)> | something like a "call stack-hugging environment variable" |
19:47 | <Justin Ridgewell> | Mark Miller (TC39 delegate) has been calling this a Fluid Variable |
19:49 | <Ben Newman (Apollo, @benjamn on GH)> | are we collecting use cases somewhere? maybe the proposal repo? |
19:50 | <Justin Ridgewell> | https://github.com/legendecas/proposal-async-context/issues/5 |
19:54 | <Justin Ridgewell> | I tried looking for references for this, and it's all esoteric computer science papers and MIT's Scheme |
19:59 | <Ben Newman (Apollo, @benjamn on GH)> | I vaguely remember there used to be a set method along with get, but that's been removed… is set gone for good? (My bias: I hope so, for predictability and efficiency) |
20:00 | <Ben Newman (Apollo, @benjamn on GH)> | (of course if you want mutability you can store an object of your own design in the AsyncContext and provide an API around that) |
20:01 | <Justin Ridgewell> | I like that it's gone, because it prevents a memory leak |
20:01 | <Justin Ridgewell> | The Node folks mentioned that it's helpful for APM usage, though, so you don't need to redo the app |
20:02 | <Justin Ridgewell> | Even my company makes use of it, but I know how to update it once we get a real API |
20:03 | <Ben Newman (Apollo, @benjamn on GH)> | with no set method, I believe you're guaranteed that ctx.get() will always return the same value anywhere you call it in a given function body, which is a very useful invariant |
20:03 | <Ben Newman (Apollo, @benjamn on GH)> | so you don't have to guess arbitrarily about how often you should poll the context for a new value within a given function |
20:09 | <James M Snell> | there's some work ongoing to eliminate the need for set in the APM cases |
20:12 | <littledan> | I vaguely remember there used to be a set, but rbuckton has argued in favor of including it |
20:13 | <littledan> | set usages can sometimes be replaced by using a single mutable object in the AsyncContext variable, and setting a property in that. Other times, that doesn't work, though. |
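[A sketch of the substitution littledan describes: instead of set(), put one mutable object into the context per run and write properties on it; the requestState/handleRequest names are just for illustration:]

```js
const requestState = new AsyncContext();

// One mutable "box" per run; everything downstream of next() shares it.
function handleRequest(request, next) {
  return requestState.run({ user: null, traceId: request.traceId }, next);
}

// Deep in the call tree, "set" becomes a property write on the box...
function onAuthenticated(user) {
  requestState.get().user = user;
}

// ...and "get" a property read. The context mapping itself never mutates.
function currentUser() {
  return requestState.get()?.user;
}
```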
20:13 | <littledan> | It'd be great to do a deeper analysis of use cases of set |
20:16 | <James M Snell> | we could implement set, but I'd prefer not to in general. I just really dislike that model |
20:17 | <Ben Newman (Apollo, @benjamn on GH)> | agreed |
20:17 | <Ben Newman (Apollo, @benjamn on GH)> | I'd at least like to be able to disable set, so AsyncContext.prototype.set.call(<arbitrary context obj>, <new value>) does not work |
20:20 | <littledan> | the thing I don't understand about set is how far it applies. Ron seemed to expect it to only take effect in the current function and "inwards", whereas another, simpler model would be for it to take effect across the whole innermost run where that particular variable was set. The latter can be implemented in terms of AsyncContext; the former requires some fancier core behavior |
20:21 | <Ben Newman (Apollo, @benjamn on GH)> | since Ron is also working on the resource management proposal, I wonder if he's interested in the region of time from when the resource is created to when it goes out of scope |
20:22 | <littledan> | well, yes, Ron has talked about how it's unfortunate that run forces you to put everything in a callback (which using avoids) |
20:23 | <littledan> | but... I'm not personally convinced that this is so bad. I think people shouldn't be using run anywhere near the same order of magnitude of frequency as using . |
20:24 | <littledan> | if we had set , it'd be (potentially) a solution to this problem of forcing everything to be inside the callback (especially if it had the behavior Ron is arguing for, that it shouldn't leak out of the current function) |
20:24 | <littledan> | (it may be that I misunderstood/misrepresented this suggestion; corrections welcome rbuckton ) |
20:25 | <littledan> | I think of run as something you call "once per thread" (conceptually) |
20:25 | <littledan> | (if you conceptualize async/await being a weird kind of threading with tons of forks) |
20:27 | <Ben Newman (Apollo, @benjamn on GH)> | if the only way to change the value of the AsyncContext is to run new code, then there's an especially convenient representation for the underlying collection of all contexts |
20:29 | <Ben Newman (Apollo, @benjamn on GH)> | roughly: an immutable map, where you can "add" elements in constant time by creating a new map in terms of old ones, and lookups are amortized O(1) |
20:29 | <Ben Newman (Apollo, @benjamn on GH)> | I know Justin Ridgewell was working on a similar/better(?) optimization in https://github.com/legendecas/proposal-async-context/pull/15 |
20:29 | <Justin Ridgewell> | Yup |
20:30 | <Justin Ridgewell> | We can optimize it further if we know whether the current map was "frozen" by a wrap |
20:30 | <Justin Ridgewell> | If it is, we need to reallocate to modify. If not, we can directly mutate |
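[One way to read Justin's copy-on-write idea, sketched as the private mapping an engine or polyfill might keep internally; nothing here is API-visible, and the real optimization (PR #15 above) may differ:]

```js
// The current mapping from AsyncContext instances to values, plus a flag
// saying whether a wrap/snapshot may still be holding a reference to it.
let current = { data: new Map(), frozen: false };

function snapshotForWrap() {
  // wrap() captures the mapping; it must not be mutated from now on.
  current.frozen = true;
  return current;
}

// Set one key, copying first only if the mapping has been captured.
function update(contextKey, value) {
  if (current.frozen) {
    // Someone may still observe the old mapping: reallocate (the O(n) copy).
    current = { data: new Map(current.data), frozen: false };
  }
  current.data.set(contextKey, value);
}

// Roughly what ctx.run(value, fn) does around its callback.
function runWith(contextKey, value, callback) {
  const hadKey = current.data.has(contextKey);
  const oldValue = current.data.get(contextKey);
  update(contextKey, value);
  try {
    return callback();
  } finally {
    // Undo just this key; fork again if the callback froze the mapping.
    if (hadKey) {
      update(contextKey, oldValue);
    } else {
      if (current.frozen) {
        current = { data: new Map(current.data), frozen: false };
      }
      current.data.delete(contextKey);
    }
  }
}

function getValue(contextKey) {
  return current.data.get(contextKey);
}
```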
20:31 | <Ben Newman (Apollo, @benjamn on GH)> | I believe there's even a way to use WeakMap here, so all values for a given AsyncContext go away if you discard that object |
20:32 | <Ben Newman (Apollo, @benjamn on GH)> | I know this may sound like premature optimization, but it's going to be hot code (in terms of CPU and memory) if we're not careful |
20:32 | <Justin Ridgewell> | From the JS side, unfortunately not, but as an actual implementation, the values should be weakly held by the keys |
20:32 | <Ben Newman (Apollo, @benjamn on GH)> | keys being AsyncContext instances? |
20:32 | <Justin Ridgewell> | Yah |
20:33 | <Justin Ridgewell> | In pure spec terms, there's no reason we can't clone a WeakMap |
20:33 | <Justin Ridgewell> | It's just not an API currently offered |
20:37 | <Ben Newman (Apollo, @benjamn on GH)> | true! funny how useless the data structure ImmutableWeakMap sounds, but something like that would be useful for this |
20:38 | <littledan> | it really doesn't need to be weak |
20:38 | <littledan> | but yes an immutable map structure would be useful here |
20:38 | <Ben Newman (Apollo, @benjamn on GH)> | private would be good enough, yes |
20:38 | <littledan> | yes, this is definitely private |
20:39 | <littledan> | but WeakMap doesn't make sense because there's no practical way that the AsyncContext variable can be garbage-collected while in the middle of a run with it. |
20:39 | <littledan> | Map is the better one |
20:39 | <littledan> | ImmutableMap could avoid the linear cloning cost, while preserving the fast lookup, but ultimately this is an optimization available to implementations |
20:40 | <littledan> | nothing visible in the API about whether it's implemented with Map or WeakMap or ImmutableMap |
20:40 | <Ben Newman (Apollo, @benjamn on GH)> | right |
20:40 | <Ben Newman (Apollo, @benjamn on GH)> | my point is, it's one of those optimizations that works best (for the implementations that choose to take advantage) if there are some restrictions in the API |
20:41 | <littledan> | Yeah I think we should just avoid using terms like "dynamic scoping" or "fluid variable" in the presentations and docs, since they just get us into debates about wording, and lose most of the audience |
20:41 | <rbuckton> | (it may be that I misunderstood/misrepresented this suggestion; corrections welcome rbuckton ) |
20:41 | <littledan> | "my point is, it's one of those optimizations that works best (for the implementations that choose to take advantage) if there are some restrictions in the API": set would definitely add complexity to the potential use of ImmutableMap (but not invalidate it, I think) |
20:42 | <littledan> | it would definitely be something to think through before concluding that we should have set |
20:43 | <Ben Newman (Apollo, @benjamn on GH)> | rbuckton: can you speak to what set can provide that mutable top-level context values (as an implementation detail) are not adequate for? |
20:43 | <rbuckton> | Is AsyncContext threaded through generators as well, or just async operations? If it isn't threaded through generators, then you could potentially write a generator function that yields execution and upon returning control to the generator, evaluates inside of that context. |
20:44 | <rbuckton> | Can you clarify what you mean by "mutable top-level context values"? I haven't caught up on the conversation |
20:45 | <Justin Ridgewell> | It’s not captured by gens, but it’d be easy to implement in userland |
20:45 | <Ben Newman (Apollo, @benjamn on GH)> | (edited my text above a bit) if you ctx.run([/*mutable array*/], () => { ... }) and code within that block has access to ctx, then they can do ctx.get().push(...) as if the context value was mutable |
20:45 | <Justin Ridgewell> | Once the gen.next call starts, inner code would not affect the outer context |
20:46 | <Ben Newman (Apollo, @benjamn on GH)> | AsyncContext is decidedly not just about async operations (I sort of think the name undersells the sync side of the coin, where it also helps greatly) |
20:46 | <littledan> | Should it be captured by generators? It would be coherent to do so |
20:47 | <Ben Newman (Apollo, @benjamn on GH)> | there are two possible flows: the creation of the generator, when the whole thing could be bound to the current context, or each .next() call |
20:48 | <littledan> | oh, yeah, I guess I was imagining that, when the generator is created, it'd capture the current async context, and on each .next() call, it would restore that |
20:48 | <Ben Newman (Apollo, @benjamn on GH)> | I think it's important to decide what happens in generators, because async generators are also a thing |
20:48 | <rbuckton> | So ctx.run(foo, () => gen.next()) wouldn't resume the generator inside of the context? |
20:48 | <Justin Ridgewell> | The case I’m thinking of is koa, where an APM would be a middleware with a yield |
20:49 | <littledan> | right, since basically the thing that next() calls out to would be "wrapped" internally |
20:50 | <Justin Ridgewell> | We should move this into a GH issue, though |
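[A userland sketch of the generator handling discussed above: capture the context when the generator is created and restore it on every resumption. It assumes an AsyncContext.wrap(fn) that snapshots the current context, which, per Justin's earlier comment, is the part of the API most likely to change shape:]

```js
// Wrap a generator function so its body always observes the AsyncContext
// values that were current when the generator object was created.
function bindGeneratorToContext(generatorFn) {
  return function (...args) {
    const gen = generatorFn.apply(this, args);
    // Each resumption path is wrapped, so gen.next() from anywhere runs the
    // generator body under the captured context, not the caller's.
    const next = AsyncContext.wrap((value) => gen.next(value));
    const throwIn = AsyncContext.wrap((error) => gen.throw(error));
    const returnFrom = AsyncContext.wrap((value) => gen.return(value));
    return {
      next,
      throw: throwIn,
      return: returnFrom,
      [Symbol.iterator]() {
        return this;
      },
    };
  };
}
```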
20:52 | <Ben Newman (Apollo, @benjamn on GH)> | (edited my text above a bit) if you |
20:55 | <Ben Newman (Apollo, @benjamn on GH)> | no rush to answer btw; I'm just trying to get my thoughts sorted out before the meeting next week |
21:03 | <Ben Newman (Apollo, @benjamn on GH)> | I think run gets called behind the scenes once per subtask, setting the current subtask object for that execution subtree, say |
21:05 | <Ben Newman (Apollo, @benjamn on GH)> | (which is mostly to say run could happen fairly often in normal code) |
21:07 | <Andreu Botella> | would wrap affect V8's async stack tracing? |
21:07 | <littledan> | Subtask is a better word than thread, yes |
21:08 | <littledan> | Do you mean the one that exists if you check the box in devtools? |
21:09 | <littledan> | I think not too often; not so often that, for example, it's a problem that run requires some allocation (and using a mutable object would mean even more allocation) |
21:12 | <Ben Newman (Apollo, @benjamn on GH)> | right now I guess you'll see the immediately calling callstack when the wrapped function runs, not the one from wherever it was first wrapped (which could also be useful) |
21:13 | <Ben Newman (Apollo, @benjamn on GH)> | if the immediate caller is the event loop, maybe there's room to restore the previous async stack instead of showing nothing there |
21:14 | <Ben Newman (Apollo, @benjamn on GH)> | (not sure if this even remotely answers the question) |
21:14 | <littledan> | Do you mean the one that exists if you check the box in devtools? |
21:14 | <Andreu Botella> | I had to check whether there was a checkbox |
21:14 | <littledan> | the problem is that it'd be a lot of implied run calls |
21:15 | <Andreu Botella> | it's enabled by default, at least in M111 |
21:15 | <Andreu Botella> | or rather, the checkbox says "disable" |
21:15 | <littledan> | huh, really? |
21:15 | <littledan> | well, it may be default-on when you open up the devtools panel, for example |
21:15 | <Andreu Botella> | it's also been shipping in Deno and Node for a while |
21:16 | <littledan> | always-on async stack traces have been a long-time feature request; I hadn't heard that that was granted, but if so, great |
21:16 | <littledan> | they ultimately run into the sort of buffer overrun issue that was raised in response to Yoav's talk at BlinkOn |
21:16 | <littledan> | but also, before then, there's a whole lot of bookkeeping |
21:16 | <littledan> | that's why, originally, it wasn't default-on |
21:17 | <littledan> | anyway if they made this cheaper: that's great! |
21:17 | <Andreu Botella> | https://docs.google.com/document/d/13Sy_kBIJGP0XT34V1CV3nkWya4TwYx9L3Yv45LdGB6Q/edit |
21:18 | <littledan> | Ah yes that's a different thing |
21:18 | <littledan> | that's why I started by asking "which one" |
21:18 | <Andreu Botella> | I didn't realize there were more than one |
21:18 | <littledan> | that is more limited to just async/await and sometimes doesn't work |
21:19 | <littledan> | I think that always-on mechanism will not interact with wrap or anything like that |
21:23 | <littledan> | Oh, I guess they got rid of the previous version and there has only been one for a while |
21:24 | <littledan> | the previous mechanism also worked for, e.g., setTimeout |
21:27 | <littledan> | Here's a description of the old version, clearly working without promises: https://developer.chrome.com/blog/async-call-stack/ |
21:28 | <littledan> | and here's the release notes for the change: https://developer.chrome.com/blog/new-in-devtools-60/#async-stacks |
21:28 | <littledan> | I think, with the previous version, you'd expect wrap to continue the async stack trace. With the new version, you would not--it is specific to Promises. |
22:39 | <Justin Ridgewell> | I think they want you to manually tag your stacks now: https://developer.chrome.com/blog/devtools-modern-web-debugging/#:~:text=The%20Async%20Stack%20Tagging%20API%20introduces%20a%20new%20console%20method%20named%20console.createTask().%20The%20API%20signature%20is%20as%20follows%3A |
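[Per the Async Stack Tagging API in the linked post, roughly; processItem stands in for whatever work the task represents:]

```js
function processItem() {
  // ... application work ...
}

// Create a task where the work is scheduled; DevTools threads the async
// stack through it.
const task = console.createTask("process item");

// Run the callback "inside" the task so its stack trace points back to
// where the task was created, even across the event loop.
setTimeout(() => {
  task.run(processItem);
}, 1000);
```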