19:46
<ethanarrowood>
Are there any updates for this proposal? I don't think we've had a meeting in a few weeks (or I missed it), and I don't see it on the agenda
19:47
<littledan>
there were going to be some API updates, e.g., adding AsyncContext.Snapshot.wrap. I guess we didn't make a presentation this meeting, but we should have one next meeting.
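A minimal usage sketch of that addition, assuming the API shape from the proposal README (a static wrap on AsyncContext.Snapshot that binds a callback to the context current at wrap time):

```js
// Sketch based on the proposal README; details may still change.
const requestId = new AsyncContext.Variable();

let handler;
requestId.run("req-1", () => {
  // Snapshot.wrap binds the callback to the context current at wrap time.
  handler = AsyncContext.Snapshot.wrap(() => requestId.get());
});

console.log(handler());       // "req-1", even outside the original run()
console.log(requestId.get()); // undefined out here
```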
19:48
<littledan>
There is a lot to do in terms of benchmarking, advancing implementations, design documents, etc.
19:49
<ethanarrowood>
Great! Happy to help with some of that if you need it; though I'm inexperienced, so I'm also happy to continue being a fly-on-the-wall and keep learning.
19:50
<littledan>
Chengzhong Wu and Andreu Botella are working on those benchmarking/implementing/design doc parts; maybe they can share the relevant links with you if you want to get involved?
19:52
<Andreu Botella>
oh, I forgot to mention this here, but I talked it over with Shu, and it looks like they're concerned about memory usage, so I'm currently investigating a linked-list implementation of the AsyncContext snapshot
19:52
<littledan>
did you consider the data structure that I was suggesting, which also avoids quadratic memory usage?
19:53
<Andreu Botella>
I'm going to build implementations of both and compare
19:53
<Andreu Botella>
and continue with the design doc afterwards
19:53
<littledan>
I'm pretty confident that a linked list alone is incomplete, and that we need something that gets the benefits of both
19:53
<littledan>
maybe a design doc would be a good place to discuss various data structures?
19:53
<littledan>
I guess maybe I don't know what you mean by linked list
19:54
<littledan>
(I definitely agree with V8 people that the clone-a-map-all-the-time implementation is not great)
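For contrast, the shape being criticized here looks roughly like this hedged sketch (illustrative only, not any engine's actual code): every .run() clones the whole mapping, and every captured snapshot retains its own full copy, which is where the quadratic-memory worry comes from.

```js
// Naive "clone a map all the time" sketch; illustrative only.
let currentMapping = new Map(); // AsyncContext.Variable -> value

function naiveRun(variable, value, fn) {
  const prev = currentMapping;
  currentMapping = new Map(prev); // full O(n) clone on every single run()
  currentMapping.set(variable, value);
  try {
    return fn(); // a snapshot captured in here keeps the whole clone alive
  } finally {
    currentMapping = prev; // restore on the way out
  }
}
```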
19:54
<shu>
we were thinking of a LIFO stack
19:54
<shu>
which you could use a linked list for
19:54
<shu>
but the point is that you'd have a cursor into the LIFO stack to propagate instead of a map clone
19:54
<shu>
anyway design doc would be great
19:55
<Andreu Botella>
and that could get flattened into a map when the lookup cost becomes big enough
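Putting shu's cursor-into-a-stack idea and Andreu's flattening together, a hedged sketch (all names hypothetical, not any actual implementation) might look like:

```js
// Hypothetical sketch: each run() pushes a node onto a LIFO stack, a snapshot
// is just a cursor (pointer) into that stack, and get() walks parent links,
// flattening into a Map once walks get long.
const FLATTEN_DEPTH = 32; // illustrative flattening threshold

class StackNode {
  constructor(variable, value, parent) {
    this.variable = variable;
    this.value = value;
    this.parent = parent; // link toward the root of the stack
    this.depth = parent ? parent.depth + 1 : 0;
    this.flat = null;     // lazily built Map, per the flattening idea above
  }

  get(variable) {
    if (this.depth > FLATTEN_DEPTH && !this.flat) {
      this.flatten(); // amortize long walks into one flattening pass
    }
    for (let node = this; node; node = node.parent) {
      // A flattened node's map covers itself and all of its ancestors.
      if (node.flat) return node.flat.get(variable);
      if (node.variable === variable) return node.value;
    }
    return undefined;
  }

  // De-duplication only happens here: until a node is flattened, shadowed
  // values stay reachable through the parent chain (the GC question below).
  flatten() {
    const chain = [];
    let base = null;
    for (let node = this; node; node = node.parent) {
      if (node.flat) { base = node.flat; break; } // ancestor already flattened
      chain.push(node);
    }
    this.flat = base ? new Map(base) : new Map();
    for (let i = chain.length - 1; i >= 0; i--) {
      this.flat.set(chain[i].variable, chain[i].value);
    }
    this.parent = null; // drop the chain so unreferenced shadowed entries can be GC'd
  }
}

let current = null; // the current cursor

function run(variable, value, fn) {
  const prev = current;
  current = new StackNode(variable, value, prev); // push: O(1), no copying
  try { return fn(); } finally { current = prev; }
}
```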
19:55
<littledan>
yeah, this seems pretty simplistic and would perform poorly if you need to read or .run on something that's further down in the stack (.run because I assume you'd deduplicate)
19:56
<Andreu Botella>
as far as I can tell, a map lookup in my map implementation is worst-case O(N) in the map's capacity, and the stack could be flattened into a map whenever the lookup for any variable would cost more than that
19:56
<littledan>
and poor performance would limit the applicability of the mechanism (e.g., for incumbent realms, or priorities, or task attribution, or other things in the browser)
19:57
<littledan>
I think you'd want, at a minimum, a segment which is just an array of a fixed maximum size and gets copied wholesale, and then some purely functional mechanism on top (maybe a linked list up to a size maximum, becoming a persistent map beyond a certain size?)
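One possible reading of that hybrid, sketched with hypothetical names (this is a guess at the shape, not a worked-out design):

```js
// Hypothetical hybrid: a small fixed-size array segment that is copied
// wholesale on every write, over a map that is treated as immutable and only
// rebuilt when the segment overflows.
const SEGMENT_MAX = 8; // max variable/value pairs in the cheap array segment

class HybridSnapshot {
  constructor(segment = [], base = new Map()) {
    this.segment = segment; // recent writes: flat [var0, val0, var1, val1, ...]
    this.base = base;       // older writes: shared, never mutated after creation
  }

  // run(variable, value, fn) would conceptually produce a new snapshot via:
  with(variable, value) {
    if (this.segment.length < 2 * SEGMENT_MAX) {
      // Common case: copy a small fixed-size array; cheap and cache-friendly.
      return new HybridSnapshot([...this.segment, variable, value], this.base);
    }
    // Overflow: fold the segment into a fresh base map (amortized over
    // SEGMENT_MAX writes), then start a new segment.
    const base = new Map(this.base);
    for (let i = 0; i < this.segment.length; i += 2) {
      base.set(this.segment[i], this.segment[i + 1]);
    }
    return new HybridSnapshot([variable, value], base);
  }

  get(variable) {
    // Newest writes win: scan the small segment back-to-front, then the base.
    for (let i = this.segment.length - 2; i >= 0; i -= 2) {
      if (this.segment[i] === variable) return this.segment[i + 1];
    }
    return this.base.get(variable);
  }
}
```

The point being: .get() is bounded by SEGMENT_MAX comparisons plus one map lookup, .run() usually copies only the small array, and the map rebuild is amortized across SEGMENT_MAX writes.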
19:57
<shu>
i don't understand that yet
19:58
<littledan>
which part?
19:58
<shu>
but all the more reason for a design doc i guess
19:58
<shu>
> yeah, this seems pretty simplistic and would perform poorly if you need to read or .run on something that's further down in the stack (.run because I assume you'd deduplicate)
this thing
19:58
<shu>
you think the on-demand flattening is what would perform badly?
19:59
<shu>
(because at that point you lose the ability to just propagate a pointer into the stack?)
19:59
<littledan>
oh I didn't understand the flattening part
19:59
<Andreu Botella>
I wasn't even thinking of flattening on demand on lookup; I was thinking of flattening when pushing onto the stack
20:00
<shu>
yeah let's not confuse each other further and let's get a doc started
20:00
<shu>
our assumption was flattening on demand
20:00
<shu>
where that "demand" is may be art
20:00
<littledan>
do you imagine de-duplicating only during that flattening operation? then this could have GC implications
20:02
<shu>
good question, dunno
20:05
<Andreu Botella>
the way I was thinking about it, .get() needs to be a fast operation, and if you flatten there, with amortization it can't be faster than a map lookup
20:05
<Andreu Botella>
whereas AsyncContext.Variable.p.run() is not necessarily expected to be fast
20:15
<Andreu Botella>
but yeah, I hadn't considered those GC implications
20:24
<Justin Ridgewell>
I think .get() should be fast, and we can slow down .run()
20:25
<Justin Ridgewell>
What does a LIFO stack really give us for memory?
20:25
<Justin Ridgewell>
Is it just the .run() operation being faster?
20:26
<Justin Ridgewell>
(There's a demo impl in https://github.com/tc39/proposal-async-context/tree/master/src which doesn't perform a clone unless necessary)
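The trick Justin mentions can be sketched as copy-on-write (hedged; this is the spirit of the linked code, not a verbatim excerpt): the mapping is mutated in place until a snapshot captures it, and only cloned after that.

```js
// Copy-on-write mapping sketch, in the spirit of the linked demo implementation.
class Mapping {
  #data;
  #frozen = false; // becomes true once a snapshot shares this map

  constructor(data = new Map()) {
    this.#data = data;
  }

  freeze() {
    this.#frozen = true; // called when a snapshot captures the current mapping
  }

  get(variable) {
    return this.#data.get(variable);
  }

  set(variable, value) {
    if (!this.#frozen) {
      this.#data.set(variable, value); // nobody else can see us: mutate in place
      return this;
    }
    const copy = new Map(this.#data);  // a snapshot holds this map: clone first
    copy.set(variable, value);
    return new Mapping(copy);
  }
}
```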
21:00
<littledan>
> I think .get() should be fast, and we can slow down .run()
I think both of these should be somewhat fast and memory-efficient; you're imagining an either-or tradeoff, whereas we can really do well in both ways
21:01
<littledan>
task attribution involves lots of .run's. I think we'll run into more cases like this over time. I understand that your case doesn't involve .run as frequently, though.
21:03
<Justin Ridgewell>
Doesn’t attribution involve at least one get for every run?
21:15
<littledan>
yes, so if .get is fast and .run is really slow, the result is really slow...
21:26
<Andreu Botella>
> task attribution involves lots of .run's. I think we'll run into more cases like this over time. I understand that your case doesn't involve .run as frequently, though.
they're working to not require that many .run's
21:27
<Andreu Botella>
https://docs.google.com/document/d/1hZ1FdFtHoPk7h9mwTPJSlF83T7YnTpmfa0CEQbPn8Ks/edit#heading=h.h6xaqbodqfo3