2024-02-06
[11:46:56.0429] Are there any updates for this proposal? I don't think we've had a meeting in a few weeks (or I missed it), and I don't see it on the agenda
[11:47:48.0290] there were going to be some API updates, e.g., adding AsyncContext.Snapshot.wrap. I guess we didn't make a presentation this meeting, but we should have one next meeting.
[11:48:03.0702] There is a lot to do in terms of benchmarking, advancing implementations, design documents, etc.
[11:49:04.0408] Great! Happy to help with some of that if you need; though I'm inexperienced, so I'm also happy to continue being a fly on the wall and keep learning.
[11:50:36.0013] Chengzhong Wu and Andreu Botella are working on those benchmarking/implementing/design doc parts; maybe they can share the relevant links with you if you want to get involved?
[11:52:25.0193] oh, I forgot to mention this here, but I talked it over with Shu, and it looks like they're concerned about memory usage, so I'm currently investigating a linked list implementation of the AsyncContext snapshot
[11:52:56.0588] did you consider the data structure that I was suggesting, which also avoids quadratic memory usage?
[11:53:13.0365] I'm going to build implementations of both and compare
[11:53:27.0622] and continue with the design doc afterwards
[11:53:32.0235] I'm pretty confident that a linked list alone is incomplete, and that we need something that gets the benefits of both
[11:53:49.0594] maybe a design doc would be a good place to discuss various data structures?
[11:53:55.0984] I guess maybe I don't know what you mean by linked list
[11:54:22.0053] (I definitely agree with the V8 people that the clone-a-map-all-the-time implementation is not great)
[11:54:24.0164] we were thinking of a LIFO stack
[11:54:27.0323] which you could use a linked list for
[11:54:47.0488] but the point is that you'd have a cursor into the LIFO stack to propagate instead of a map clone
[11:54:53.0085] anyway, a design doc would be great
[11:55:11.0191] and that could get flattened into a map when the lookup cost becomes big enough
[11:55:20.0691] yeah, this seems pretty simplistic and would perform poorly if you need to read or .run on something that's further down in the stack (.run because I assume you'd deduplicate)
[11:56:32.0152] as far as I can tell, a map lookup in my map implementation is worst-case O(N) in the map's capacity, and the stack could be flattened into a map if the lookup for any variable would be greater than O(N)
[11:56:42.0242] and poor performance would limit the applicability of the mechanism (e.g., for incumbent realms, or priorities, or task attribution, or other things in the browser)
[11:57:46.0114] I think you'd want, at a minimum, to have a segment which is just an array of a fixed maximum size, which is just copied, and then some purely functional mechanism on top (maybe a linked list up to a size maximum, but then becoming a persistent map beyond a certain size?)
[11:57:55.0912] I don't understand that yet
[11:58:02.0954] which part?
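To make the LIFO-stack-with-a-cursor idea above concrete, here is a minimal sketch (hypothetical names, not any of the implementations mentioned, and ignoring the async propagation machinery): each `run` pushes one node onto an immutable linked list, a snapshot is just a saved pointer into that list, and `get` walks the list, so capturing a snapshot is O(1) while lookup is O(depth).

```ts
// Hypothetical sketch of the cursor-into-a-LIFO-stack representation.
// Nothing is copied on `run`; a snapshot is just a saved pointer.

interface Node {
  variable: Variable<unknown>;
  value: unknown;
  next: Node | null;
}

// The "current context" is a single cursor into the list.
let currentNode: Node | null = null;

class Variable<T> {
  constructor(private defaultValue?: T) {}

  run<R>(value: T, fn: () => R): R {
    const prev = currentNode;
    // Push a new entry; nothing below it is mutated or cloned.
    currentNode = { variable: this, value, next: prev };
    try {
      return fn();
    } finally {
      currentNode = prev; // restore the cursor
    }
  }

  get(): T | undefined {
    // Walk the list until the innermost binding of this variable.
    for (let n = currentNode; n !== null; n = n.next) {
      if (n.variable === this) return n.value as T;
    }
    return this.defaultValue;
  }
}

class Snapshot {
  // Capturing is O(1): remember the cursor.
  private node = currentNode;

  run<R>(fn: () => R): R {
    const prev = currentNode;
    currentNode = this.node;
    try {
      return fn();
    } finally {
      currentNode = prev;
    }
  }
}
```

The flattening-into-a-map step discussed above (once lookups get too deep) is not shown here.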
[11:58:04.0932] but all the more reason for a design doc, I guess
[11:58:30.0267] > <@littledan:matrix.org> yeah, this seems pretty simplistic and would perform poorly if you need to read or .run on something that's further down in the stack (.run because I assume you'd deduplicate)
this thing
[11:58:49.0484] you think the on-demand flattening is what would perform badly?
[11:59:01.0811] (because at that point you lose the ability to just propagate a pointer into the stack?)
[11:59:14.0072] oh, I didn't understand the flattening part
[11:59:41.0005] I wasn't even thinking of flattening on demand on lookup, I was thinking of flattening when pushing onto the stack
[12:00:05.0122] yeah, let's not confuse each other further and let's get a doc started
[12:00:14.0056] our assumption was flattening on demand
[12:00:24.0448] where exactly that "demand" point is may be an art
[12:00:49.0819] do you imagine de-duplicating only during that flattening operation? then this could have GC implications
[12:02:29.0093] good question, dunno
[12:05:17.0683] the way I was thinking about it, `.get()` needs to be a fast operation, and if you flatten there, even with amortization it can't be faster than a map lookup
[12:05:42.0254] whereas `AsyncContext.Variable.p.run()` is not necessarily expected to be fast
[12:15:04.0473] but yeah, I hadn't considered those GC implications
[12:24:33.0501] I think `.get()` should be fast, and we can slow down `.run()`
[12:25:29.0497] What does a LIFO stack really give us for memory?
[12:25:41.0317] Is it just the `.run()` operation being faster?
[12:26:17.0491] (There's a demo impl in https://github.com/tc39/proposal-async-context/tree/master/src which doesn't perform a clone unless necessary)
[13:00:30.0370] > <@jridgewell:matrix.org> I think `.get()` should be fast, and we can slow down `.run()`
I think both of these should be somewhat fast and memory-efficient; you're imagining an either-or tradeoff when we can really do well in both ways
[13:01:19.0695] task attribution involves lots of .run's. I think we'll run into more cases like this over time. I understand that your case doesn't involve .run as frequently, though.
[13:03:58.0830] Doesn't attribution involve at least one .get for every .run?
[13:15:12.0232] yes, so if .get is fast and .run is really slow, the result is really slow...
[13:26:26.0026] > <@littledan:matrix.org> task attribution involves lots of .run's. I think we'll run into more cases like this over time. I understand that your case doesn't involve .run as frequently, though.
they're working on not requiring that many .run's
[13:27:24.0849] https://docs.google.com/document/d/1hZ1FdFtHoPk7h9mwTPJSlF83T7YnTpmfa0CEQbPn8Ks/edit#heading=h.h6xaqbodqfo3
2024-02-07
[05:53:08.0999] > <@ethanarrowood:matrix.org> Are there any updates for this proposal? I don't think we've had a meeting in a few weeks (or I missed it), and I don't see it on the agenda
You can find the notes at https://docs.google.com/document/d/1WJpNPg9j7h9CKSd3NmlNmivOHDtQ-LGP7mJcwDvNLa8/edit. The meeting is biweekly, and we cancelled the call this week because it conflicted with the TC39 plenary.
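For comparison with the linked-list sketch above, here is a rough sketch of a copy-on-write map approach in the spirit of "don't perform a clone unless necessary" (hypothetical code, not the demo implementation in the proposal repo, and again ignoring async propagation): the current mapping is mutated in place and undone on exit, and it is cloned only once a snapshot could still observe it.

```ts
// Hypothetical copy-on-write map sketch: `run` mutates the current map
// in place and undoes its change on exit, cloning only when a snapshot
// has captured the map and could still observe it.

let currentMap = new Map<Variable<unknown>, unknown>();
let captured = false; // has a snapshot taken a reference to currentMap?

// Returns a map that is safe to mutate, cloning if a snapshot holds it.
function stage(): Map<Variable<unknown>, unknown> {
  if (captured) {
    currentMap = new Map(currentMap);
    captured = false;
  }
  return currentMap;
}

class Variable<T> {
  run<R>(value: T, fn: () => R): R {
    const map = stage();
    const hadKey = map.has(this);
    const prevValue = map.get(this) as T | undefined;
    map.set(this, value);
    try {
      return fn();
    } finally {
      // stage() again: a snapshot taken inside fn() may have captured the
      // map, in which case we must clone before undoing our binding.
      const m = stage();
      if (hadKey) m.set(this, prevValue);
      else m.delete(this);
    }
  }

  get(): T | undefined {
    return currentMap.get(this) as T | undefined;
  }
}

class Snapshot {
  private map: Map<Variable<unknown>, unknown>;
  constructor() {
    captured = true; // future runs must clone before mutating
    this.map = currentMap;
  }

  run<R>(fn: () => R): R {
    const prevMap = currentMap;
    const prevCaptured = captured;
    currentMap = this.map;
    captured = true;
    try {
      return fn();
    } finally {
      currentMap = prevMap;
      captured = prevCaptured;
    }
  }
}
```

Here `.get()` is a single map lookup and `.run()` is cheap unless a clone is forced; the tradeoff is that each snapshot can pin an entire map copy, which is the memory concern the cursor-based representation tries to avoid.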
[05:58:10.0966] Please feel free to submit an idea for a benchmark case at https://github.com/legendecas/Speedometer-asynccontext, like using a Web API that captures the async context in a web app. I'm integrating an OpenTelemetry web app into the suite. Since Speedometer was not designed for asynchronous test steps, this specific derivation of Speedometer was adapted to run async benchmarks.
2024-02-12
[20:36:50.0218] I've been thinking about this, and there are a lot of caveats to doing flattening on demand.
[20:37:35.0781] You can't really do it during `.get()` or `.run()`, because there's a possibility that a snapshot has already been taken of the context that can still observe the shadowed Var value.
[20:38:20.0643] And, worse, there could be a **future** snapshot after the current `.get()` or `.run()` which could observe it.
[20:38:44.0214] The only time you can flatten is when there is no active context, e.g., as part of GC passes
[21:14:56.0431] It shouldn't be too much to implement: https://gist.github.com/jridgewell/ef8a674291f8f7419a2bea0448c3b0eb
[22:23:33.0928] > <@jridgewell:matrix.org> You can't really do it during `.get()` or `.run()`, because there's a possibility that a snapshot has already been taken of the context that can still observe the shadowed Var value.
I think that's only the case in your implementation that mutates in place. Purely persistent implementations don't have that issue
[22:27:27.0759] That would only solve already existing chains; it wouldn't solve future snapshots of the current chain
[22:28:26.0888] At least during an active run, you could (though why would you) do it during a get inside an empty chain
[22:29:01.0673] Or you push it into snapshot restore, which is the most performance-critical
[22:42:04.0780] I'm having trouble conceptualizing this, but I haven't had my coffee yet
[22:43:12.0008] the way I was thinking of it was that you could just turn the current snapshot from a linked list into a map, without mutating anything else
[22:43:55.0956] you could even mutate the current entry in the linked list by making it store a map and making its `next` pointer be null
[22:46:09.0202] if you imagine a linked list representation of a map, where each entry in the list is an update on the map represented by the rest of the linked list, flattening an entry into a map would not change the semantics, either for that entry as the snapshot or for anything that has that entry as part of its list
[22:47:46.0064] now, flattening on get is probably not a good idea because of the performance considerations – even amortized, there is an implementation of flattening on run that has the same asymptotic runtime on get, if not better
[22:49:25.0346] I'm working on a doc comparing various possible implementations
[22:58:16.0425] My concern is with:
```
a.run(1, () => {
  b.run('b', () => {
    a.run(2, () => {
      // flatten here
      a.get();
    });
    // Still observe 1 here
    a.get();
  });
});
```
[22:58:57.0292] If we flatten inside the `a.run(2, …)`, we can still observe the `1` value afterwards
[23:00:12.0021] So either the chain still exists fully after `run()` ends (in which case, every first `.get()` in a run is expensive?), or we've mutated the previous context and lost the value
[23:00:26.0804] It just doesn't seem efficient to do on demand
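As an illustration of the "mutate the entry to store a map and null out its `next` pointer" idea above, here is a hypothetical sketch (not anyone's actual implementation). Because each list entry is just an update layered on top of the map represented by the rest of the list, collapsing the tail into the entry itself preserves what every snapshot pointing at that entry, or at anything stacked above it, can observe.

```ts
// Hypothetical sketch of flattening one linked-list entry, in place,
// into a map, without changing what any snapshot can observe.

class Variable<T> {} // stand-in for AsyncContext.Variable

interface Node {
  // Either a single binding...
  variable?: Variable<unknown>;
  value?: unknown;
  // ...or an already-flattened map of bindings.
  flat?: Map<Variable<unknown>, unknown>;
  next: Node | null;
}

function flatten(node: Node): void {
  // Collect the chain, then apply bindings oldest-to-newest so that the
  // innermost (closest) bindings win.
  const chain: Node[] = [];
  for (let n: Node | null = node; n !== null; n = n.next) chain.push(n);

  const flat = new Map<Variable<unknown>, unknown>();
  for (let i = chain.length - 1; i >= 0; i--) {
    const n = chain[i];
    if (n.flat) for (const [k, v] of n.flat) flat.set(k, v);
    if (n.variable) flat.set(n.variable, n.value);
  }

  // Rewrite the entry in place: same observable contents, O(1) lookup,
  // and the tail below it is no longer reachable from this entry.
  node.flat = flat;
  node.variable = undefined;
  node.value = undefined;
  node.next = null;
}

// Lookup walks the list and stops at the first binding or flattened map.
function lookup<T>(variable: Variable<T>, node: Node | null): T | undefined {
  for (let n = node; n !== null; n = n.next) {
    if (n.variable === variable) return n.value as T;
    if (n.flat?.has(variable)) return n.flat.get(variable) as T;
  }
  return undefined;
}
```

The caveat from the messages above still applies: flattening one entry does not free anything that another snapshot deeper in the chain still references, so most of the savings only materialize when whole chains become unreachable.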
[23:01:02.0765] And we don't really free any memory compared to the map approach
[23:01:40.0422] If we do the flatten during GC, we can at least maximize savings from all chains at once
2024-02-23
[15:14:14.0980] Thanks for getting those new `wrap` tests up so quickly