| 18:55 | <kriskowal> | I seem to have bludgeoned the Join button enough times for Matrix to acknowledge my presence. |
| 18:57 | <nicolo-ribaudo> | Welcome |
| 18:58 | <rbuckton> | kriskowal: I tried to DM you the link I mentioned, but there are two matrix identities for you in the delegates chat and I may have sent them to the wrong one. Here's what I sent, in any case: For reference, this Gist contains a brief analysis of an earlier proposed handshaking mechanism: https://gist.github.com/rbuckton/08d020fc80da308ad3a1991384d4ff62 |
| 18:59 | <kriskowal> | … |
| 19:04 | <kriskowal> | At a glance, it looks like this design direction hasn’t been considered or was dismissed out of hand: … Wherein, |
| 19:05 | <kriskowal> | And, of course Foo.prototype captures the behavior side. |
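kriskowal's snippet did not survive the export; what follows is a plausible sketch of the direction he describes, not his actual code, using only today's primitives. A per-realm wrapper class (the name Foo and its single x field are illustrative) unions a SharedArrayBuffer-backed payload with realm-local behavior on Foo.prototype:

```javascript
// Sketch only (not the proposal): a per-realm wrapper class whose instances
// union a shared payload (SharedArrayBuffer-backed, shareable across agents)
// with realm-local behavior hanging off Foo.prototype.
class Foo {
  #data; // view over the shared payload: one float64 field
  constructor(sab = new SharedArrayBuffer(8)) {
    this.sab = sab; // pass this buffer to another worker to share the payload
    this.#data = new Float64Array(sab);
  }
  get x() { return this.#data[0]; }
  set x(v) { this.#data[0] = v; }
  // Behavior lives on the local prototype, not in shared memory.
  double() { this.x *= 2; }
}

const f = new Foo();
f.x = 21;
f.double();
console.log(f.x); // 42; another agent could construct its own Foo over f.sab
```

This is exactly the "wrapper object per thread" shape that shu objects to below.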
| 19:09 | <shu> | what would this union do |
| 19:09 | <shu> | one of the primary goals here is to actually share the objects, not just share the payload |
| 19:10 | <shu> | this is because application state, by volume, is a lot of pointers. recreating your object graph from payloads per thread means you are not scaling with the number of threads |
| 19:10 | <shu> | so any solution that requires creating wrapper objects per-thread is a nonstarter |
| 19:10 | <kriskowal> | i’m assuming you mean sharing the backing memory, since the objects are necessarily in different realms |
| 19:10 | <shu> | yes, payload and backing memory are interchangeable for what i said |
| 19:11 | <shu> | a related question is "why not create object overlays on top of SABs" |
| 19:11 | <nicolo-ribaudo> | > i’m assuming you mean sharing the backing memory, since the objects are necessarily in different realms |
| 19:11 | <kriskowal> | oh, right, we’re talking about a new primitive. |
| 19:12 | <nicolo-ribaudo> | This would be different from SharedArrayBuffer, where there is a per-thread wrapper pointing to the same memory |
| 19:13 | <shu> | > oh, right, we’re talking about a new primitive. |
| 19:13 | <shu> | but yes, internally you can think of them as new primitives |
| 19:13 | <shu> | they're objects with special behavior |
| 19:13 | <kriskowal> | alright, so the crux of this is that the value capturing the union of the shared memory and behavior must also be a primitive. |
| 19:14 | <shu> | sorry, having trouble parsing that sentence |
| 19:14 | <kriskowal> | i now better understand how we arrive at the prototype walk algorithm |
| 19:15 | <shu> | there's another alternative that was dismissed, which is actually thread-safe functions |
| 19:15 | <kriskowal> | there’s no per-worker object that points to the local prototype and shared data |
| 19:15 | <shu> | that is just too much a can of worms, and nobody wants a new callable type |
| 19:15 | <shu> | > there’s no per-worker object that points to the local prototype and shared data |
| 19:15 | <kriskowal> | this seems cursed |
| 19:15 | <shu> | heh, in what way? |
| 19:16 | <kriskowal> | i can see why this design direction forces today’s debate about where to put the global state |
| 19:16 | <shu> | ah, yeah |
| 19:17 | <iain> | For the record, I think it would be good to explore the thread-safe functions option at least a little bit. |
| 19:18 | <shu> | i agree, as long as it's in parallel |
| 19:18 | <shu> | but identity discontinuity makes it really difficult |
| 19:18 | <kriskowal> | the Moddable folks would be good to involve in a conversation about thread-safe functions. |
| 19:19 | <kriskowal> | XS is a bit unique in its design constraints, but does have thread-safe functions and can sense when a subgraph can be safely captured in ROM. |
| 19:19 | <iain> | There are many parts of this proposal that make big changes; I'm not convinced that new callables would be worse than some of the other proposed changes |
| 19:19 | <iain> | What do you mean by identity discontinuity? |
| 19:20 | <shu> | > There are many parts of this proposal that make big changes; I'm not convinced that new callables would be worse than some of the other proposed changes |
| 19:20 | <shu> | i'm gonna need something more specific than "this is already large, therefore it has room for other large changes" |
| 19:20 | <kriskowal> | Identity discontinuity is const source = '({})', eval(source) !== eval(source). |
| 19:21 | <kriskowal> | Which is not an interesting example, but const source = '(class Foo { #p })' is more interesting. |
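The hazard is demonstrable with today's eval: two evaluations of the same class source yield classes with distinct identities and distinct private-field brands. A sketch (the names Foo and #p follow kriskowal's example; hasBrand is an illustrative helper using the ES2022 `#p in o` brand check):

```javascript
// Two evaluations of the same source text yield distinct class identities
// and distinct private-name brands.
const source = "(class Foo { #p; static hasBrand(o) { return #p in o; } })";
const A = eval(source);
const B = eval(source);

console.log(A === B);             // false: identity discontinuity
console.log(A.hasBrand(new A())); // true: instance carries A's #p brand
console.log(A.hasBrand(new B())); // false: B's #p is a different private name
```

A thread-safe-function design that re-evaluates source per agent faces exactly this: each agent's copy of Foo is a different class.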
| 19:21 | <shu> | > What do you mean by identity discontinuity? |
| 19:22 | <shu> | JS functions are closures very deep down, not just the JS user code closed-over stuff |
| 19:22 | <shu> | so we'd have to answer the question of what does that mean for thread-safe functions |
| 19:22 | <shu> | do they become more dynamically scoped? |
| 19:24 | <iain> | Without having thought it through much, I would want to say that they can't capture anything other than global variables, and global variables are always looked up in the local global. |
| 19:24 | <shu> | okay, so dynamically scoped to the caller global |
| 19:25 | <iain> | Yes. If there were a clean way to distinguish between shared function foo() { return Math; } and var x; shared function foo() { return x; }, I would also like to prohibit the latter. |
| 19:26 | <shu> | is it basically this thing i wrote up a while ago |
| 19:27 | <shu> | https://github.com/tc39/proposal-structs/blob/main/CODE-SHARING-IDEAS.md |
| 19:27 | <iain> | So that modulo monkey-patching, all the "dynamically scoped" stuff you're closing over is basically the same between realms |
| 19:27 | <iain> | That looks like a better thought out version of my vague notion, yes |
| 19:29 | <shu> | my conclusion is that i think it's a lot of work and a lot of complexity for the language for not as much gain as you might think |
| 19:29 | <shu> | like, people are gonna want to close over state in a thread-local way during computation |
| 19:29 | <shu> | and i think that's fine |
| 19:30 | <kriskowal> | Is it imagined that every get/set of an individual property on a shared struct is implicitly an atomic on that individual field? |
| 19:30 | <shu> | and i believe more and more that we actually get more mileage out of letting people use the functions we have today on shared data, but make that ergonomic |
| 19:30 | <iain> | My hope is that doing something like this would let us significantly simplify the prototype problem |
| 19:31 | <shu> | > Is it imagined that every get/set of an individual property on a shared struct is implicitly an atomic on that individual field? |
| 19:31 | <shu> | the guarantee is the same as bare reads/writes on SABs via TypedArrays: if you have races, you can observe any of the written values, but they shall never tear |
| 19:32 | <shu> | i.e. you can't observe half of one write composed with half of another write |
| 19:33 | <kriskowal> | I gather from the requirements that it’s imagined that incrementally replacing a class with a shared struct is a delicate-but-possible performance-improving refactor that doesn’t require changes from the consuming code |
| 19:34 | <kriskowal> | I’m much more familiar with these shenanigans in other languages. I see that CAS is Atomics.compareExchange. Alright. |
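For reference, the CAS primitive kriskowal names is Atomics.compareExchange, and the classic retry loop is how a racy read-modify-write is made safe on shared memory today (a standalone sketch, not part of the proposal):

```javascript
// Lock-free increment via a CAS (compare-and-swap) retry loop.
const counter = new Int32Array(new SharedArrayBuffer(4));

function increment(i32, index) {
  for (;;) {
    const old = Atomics.load(i32, index);
    // If no other agent wrote in between, the exchange succeeds and
    // returns the value we read; otherwise reload and retry.
    if (Atomics.compareExchange(i32, index, old, old + 1) === old) {
      return old + 1;
    }
  }
}

increment(counter, 0);
increment(counter, 0);
console.log(Atomics.load(counter, 0)); // 2
```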
| 19:34 | <shu> | that's not a hard requirement for me, but is certainly a goal. i believe that's a harder requirement for Ron perhaps |
| 19:34 | <shu> | i said in the beginning of the call, before you joined, that i can live in a world where we don't solve the correlation problem because on net there's still enough value here for the power apps |
| 19:35 | <shu> | but if we can solve the correlation problem, we unlock things like incremental adoption that helps a larger amount of apps |
| 19:35 | <kriskowal> | So, you’d personally be satisfied with the data-only subset? |
| 19:35 | <shu> | i wouldn't equate "satisfied" with "can live with" |
| 19:35 | <kriskowal> | Or rather, on behalf of the economic interests you represent. |
| 19:35 | <shu> | by "can live with" as, if it was the only thing holding the rest of the proposal up, i'd drop it |
| 19:35 | <shu> | and iterate on it after the initial proposal |
| 19:37 | <shu> | speaking only for myself, yes. i still think we'd be doing a disservice to the language for many of the reasons we've gone into with Mark in the past. the most salient of which, i think, is that without ergonomically correlated methods, we're inviting people to use free functions, and it becomes harder to encapsulate |
| 19:37 | <shu> | which ron and i think will result in higher likelihood of thread unsafe code being written |
| 19:42 | <kriskowal> | So, the cursed fork has tines: … |
| 19:43 | <shu> | what's the difference between 3 and 4? |
| 19:44 | <kriskowal> | An outcome to avoid is order-dependence of evaluations of the same source, or a composition hazard where a system explodes if you evaluate a struct definition twice. |
| 19:46 | <kriskowal> | … |
| 19:47 | <kriskowal> | In the prototype hook, the “identifier” would necessarily come from the text of the module source. |
| 19:48 | <shu> | given that neither of those things exist today i don't really know the proposals well enough to understand |
| 19:49 | <shu> | i just wanna do something naive and simple man |
| 19:49 | <kriskowal> | I think we’re unlikely to converge on either 3 or 4. |
| 19:51 | <shu> | can we pass a nonce to Realm (Worker?) construction that determines which registry they'll use? |
| 19:51 | <kriskowal> | shared memory parallelism |
| 19:51 | <shu> | like script nonces, these are supposed to be generated afresh per load |
| 19:51 | <shu> | but you'd express constraints like, these workers are conceptually in the same package and should share the nonce |
| 19:51 | <shu> | and these other workers aren't |
| 19:52 | <shu> | and those with the same nonce have the same implicit, ambiently available registry |
| 19:52 | <shu> | so by default you don't get any correlation at all |
| 19:52 | <shu> | > shared memory parallelism |
| 19:52 | <iain> | Yes |
| 19:53 | <kriskowal> | i think it goes without saying that shared structs have a very narrow field of applicability. not even “all notions of worker” and certainly not “every postMessage” |
| 19:55 | <kriskowal> | clarifying question: is new Worker() consistently an OS thread or sometimes an OS process across all browsers? Do we currently have a place to stand to say “this worker must be in another process to mitigate pipeline sidechannels”? |
| 19:55 | <shu> | are you inviting me to defend the motivation or...? |
| 19:56 | <kriskowal> | No, just clarification, I’m wondering whether this proposal implies other web changes like distinguished Worker constructor signatures. |
| 19:57 | <kriskowal> | Did all browsers follow V8’s lead with isolates? |
| 19:58 | <kriskowal> | I like that pure functions obviate the correlation and identity discontinuity problems. |
| 19:58 | <shu> | so while the HTML spec doesn't define threads vs processes, it follows 262's lead in "agent" and "agent cluster", with the understanding that an agent is a thread, and an agent cluster constitutes an abstract process boundary |
| 19:59 | <kriskowal> | And that’s a design direction you can follow from the just-data subset. |
| 19:59 | <shu> | i'd rather just not have it for the initial proposal then |
| 20:00 | <shu> | > No, just clarification, I’m wondering whether this proposal implies other web changes like distinguished Worker constructor signatures. |
| 20:00 | <iain> | Doing shared memory GC between threads is already scary enough; I don't think anybody would be especially interested in implementing cross-process GC between different-process worker threads. |
| 20:01 | <shu> | there is already the notion of "agent cluster" being the set of agents that can access the same shared memory, which is currently SABs |
| 20:01 | <kriskowal> | So, I gather it’s the case that a Worker is always an agent and there isn’t a mechanism for a process boundary. |
| 20:01 | <shu> | there is the notion of an agent cluster, but it is not reified |
| 20:01 | <shu> | no, a Worker is an agent |
| 20:01 | <shu> | a set of workers + the main page constitutes an agent cluster |
| 20:01 | <shu> | because they are in the same cluster, they can pass SABs to each other |
| 20:02 | <kriskowal> | alright, so there isn’t a web API for a process boundary. |
| 20:02 | <shu> | correct |
| 20:02 | <kriskowal> | thanks, that helps my grokery |
| 20:02 | <shu> | the decision Chrome took was, roughly, to put each tab into its own individual process |
| 20:04 | <shu> | the more important process boundary is between "content" or "renderer" processes that run JS and display web content, and the "browser" process that is much more highly privileged |
| 20:04 | <shu> | but as far as user content goes on the web, they all run in renderer processes |
| 20:09 | <shu> | > alright, so there isn’t a web API for a process boundary. |
| 20:09 | <kriskowal> | there’s a very old cartoon about a company that relocates to the north pole so they can hire penguins for cheap labor. my mental model for plugin systems on the web was until this moment that you get to choose whether to endow your plugins with either timers or confine them in a worker so you get a process boundary. |
| 20:09 | <shu> | i think you're still misunderstanding. Workers are not a process boundary |
| 20:09 | <shu> | Workers are threads |
| 20:09 | <kriskowal> | no, i’m following you. |
| 20:10 | <shu> | was responding to "confine them in a worker so you get a process boundary" |
| 20:10 | <kriskowal> | thank you for correcting my mental landscape :-) |
| 20:10 | <shu> | so, since the web security model is built on same-origin |
| 20:10 | <kriskowal> | yeah, i’m the MBA who thought there are penguins at the north pole in this metaphor. |
| 20:11 | <shu> | there are these headers that let the page say "i want different origins to be isolated (read: process boundary)" |
| 20:11 | <kriskowal> | oh, alright, i see i was not wrong, just new Worker isn’t sufficient. You need a separate origin and maybe COOP COEP, which I ought to learn more about before I make a web platform. |
| 20:11 | <shu> | and if you serve your page with these headers, then we enable shared memory |
| 20:12 | <kriskowal> | kk, same page. |
| 20:12 | <shu> | this proposal of course follows that policy, not that we have a choice :) |
| 20:13 | <kriskowal> | this is coherent. |
| 20:14 | <kriskowal> | I assume postMessage that crosses a process boundary would be in a position to throw if you attempted to share a struct. |
| 20:14 | <shu> | exactly right, same as for SABs |
| 20:14 | <rbuckton> | … @esfx/struct-type, which is more like the old "typed objects" proposal and uses objects backed by SharedArrayBuffer. In the end, this doesn't work unless the objects are typed, as you must know the type of everything in the object graph. new Foo(event.data.foo) just isn't sufficient on its own. |
| 20:15 | <kriskowal> | Right, I assumed new Foo would entrain its transitively reachable struct definitions. |
| 20:15 | <kriskowal> | In any case, it’s moot because you have a stated position that you don’t want per-agent wrapper objects. |
| 20:16 | <kriskowal> | And my proposal was specifically to enable >1 wrapper object per agent. |
| 20:16 | <kriskowal> | Such that different compartments might have different prototypes for the same struct. |
| 20:17 | <kriskowal> | With that off the table, I can see the appeal of not having to solve sub-realm isolation. |
| 20:18 | <snek> | this is what I suggested before but it was dismissed because shared structs may be nested, so it's hard to wrap each layer |
| 20:18 | <shu> | let me expand on the stated position: the goal is that the order of the number of wrapper objects should either not grow with the number of threads, or grow very slowly with the number of threads |
| 20:18 | <shu> | per-realm prototypes of course already violate "not grow with the number of threads", but it seems okay because O(number of struct types) should be << O(number of struct instances) |
| 20:18 | <shu> | sub-realm prototypes is probably also fine, so long as that inequality roughly holds |
| 20:19 | <kriskowal> | > sub-realm prototypes is probably also fine, so long as that inequality roughly holds |
| 20:19 | <rbuckton> | > I gather from the requirements, that it’s imagined that incrementally replacing a class with a shared struct is a delicate-but-possible performance improving refactor that doesn’t require changes from the consuming code |
| 20:20 | <shu> | cause you buy into the pain of shared memory for two reasons, right: one is CPU time, one is to actually share memory and save memory versus duplicating it per-thread. so our solution can't preclude the second reason |
| 20:20 | <rbuckton> | In the long term, it would be better to have the nodes be actually immutable, but that would require either shared private fields or some operation to atomically freeze a shared struct instance. |
| 20:21 | <shu> | yeah, if we think of structs as "declarative sealing", we should also have "declarative freezing" |
| 20:21 | <shu> | though i don't want to bite off that now |
| 20:22 | <kriskowal> | I really like the design direction where you start with data and work your way out to thread-safe shared behavior too. |
| 20:22 | <kriskowal> | It would be limiting, but also enabling. |
| 20:22 | <kriskowal> | You wouldn’t even need to replicate the prototypes. |
| 20:23 | <shu> | in a different life, without this being demand-driven, i would love to agree |
| 20:23 | <shu> | demand-driven meaning we started from actual partners wanting more performance and expressivity out of the web platform |
| 20:25 | <shu> | > I really like the design direction where you start with data and work your way out to thread-safe shared behavior too. |
| 20:25 | <shu> | we all wanted to go that way, for the same reasons |
| 20:25 | <shu> | but we're here now because of experience |
| 20:26 | <rbuckton> | One direction I'd suggested for shared functions was to entertain the notion of a single frozen shared realm that all shared things live in, but then shared functions cannot close over or use anything in the current realm, only other shared things. that's a lot to bite off, though, and wasn't well received. |
| 20:26 | <kriskowal> | Yeah, please pardon me for replaying a great deal of history to catch up. It was my hope not to get drawn in :-) |
| 20:28 | <shu> | bbl, will catch up after labor day |
| 20:28 | <kriskowal> | Down the pure behavior road is also the possibility of JIT to shader. |
| 20:28 | <rbuckton> | Data-only shared structs would be fine for greenfield projects, but for something like TypeScript we'd essentially need to create a wrapper/proxy layer over a shared AST that we would have to rehydrate in every thread, which eats up all of the memory/performance gains you would hope for. |
| 20:29 | <iain> | Without making the claim that this is actually taking place: in the abstract, I think it would be unfortunate if we locked in a suboptimal design for shared behaviour out of an urge to have something that we can ship sooner. Specifically: if we think we could eventually work out a design for thread-safe shared behaviour, and it would be more performant than the current thread-local prototype approach, then it would be better not to lock ourselves into a local maximum. |
| 20:29 | <kriskowal> | But I digress, it seems that the remaining options both involve more elaborate mitigations for sub-realm confinement and I’ll have to be here to help evaluate those options. |
| 20:30 | <shu> | > Without making the claim that this is actually taking place: in the abstract, I think it would be unfortunate if we locked in a suboptimal design for shared behaviour out of an urge to have something that we can ship sooner. Specifically: if we think we could eventually work out a design for thread-safe shared behaviour, and it would be more performant than the current thread-local prototype approach, then it would be better not to lock ourselves into a local maximum. |
| 20:31 | <shu> | i think it's the right choice to have something that can ship in fewer years than a decade...? |
| 20:31 | <shu> | like we're not talking about rushing out something next week vs next quarter |
| 20:32 | <rbuckton> | I have considered approaches to eventually layer on actual shared functions in the future, at least from the syntax/DX side of things. |
| 20:34 | <iain> | I mean, I think we could ship data-only in N-K years for some positive K |
| 20:35 | <kriskowal> | > i think it's the right choice to have something that can ship in fewer years than a decade...? |
| 20:36 | <iain> | But my point is not that we should ship data-only, it's that in the time it takes to actually implement that part, I hope that we have thoroughly examined the design tradeoffs of thread-safe functions |
| 20:36 | <kriskowal> | (I remain salty about proposing a Module constructor 14 years ago) |
| 20:36 | <rbuckton> | The biggest problem I see with that is that the big applications that are requesting this feature are also requesting behavior, so they're going to have to wait K. If we can find a solution that lets us gradually get to actual shared functions later while still having behavior now, that's preferable. |
| 20:36 | <iain> | The ability to avoid shared prototypes and thread-local hashtables seems like a win |
| 20:38 | <iain> | Shu said above that it's less of a win than you might think |
| 20:38 | <rbuckton> | > The ability to avoid shared prototypes and thread-local hashtables seems like a win |
| 20:39 | <shu> | ESM enters the chat |
| 20:39 | <kriskowal> | now you also know what i think about ESMs, at least natively |
| 20:41 | <iain> | We can't avoid thread-local hashtables if we want to put shared structs in weak maps, which we would need to do if we need to create proxies to emulate the ability to attach behavior. That doesn't go away. |
| 20:43 | <iain> | The proposal very nearly avoids the need to support shared->local edges. The incremental cost of having to overhaul GC is significantly larger than the incremental benefits for the few use cases. |
| 20:43 | <iain> | Roughly speaking, I think you can either have shared structs without weakmaps in N years and then weakmap support K years later, or you can wait N+K years for the whole thing. |
| 20:44 | <iain> | I would like to find a point in design space that maximizes the value to users while minimizing the implementation time necessary to ship it to them |
| 20:45 | <shu> | have you had time to noodle on "collect main thread cycles only, let the rest leak"? |
| 20:45 | <shu> | v8 is banking on that basically |
| 20:45 | <rbuckton> | Shared structs in weakmaps effectively requires full stop-the-world cross-worker GC. If you want to include that in the MVP, you are adding an extra K to the N years. |
| 20:46 | <shu> | yeah iain, the counterfactual is literally that more things will leak |
| 20:47 | <shu> | i don't really see what the benefit of it is. sure, you and i can say "well, technically, the engine is doing exactly the right thing here" but... why would the users care about who's to blame for the leaks? |
| 20:47 | <iain> | If the code is guaranteed to leak, then they will hopefully not write that code |
| 20:48 | <shu> | no that is literally not true |
| 20:48 | <iain> | And yeah, there will be some cases where that is difficult and there will be pressure on us to do something better |
| 20:48 | <shu> | i have explicitly asked, what are you (tools, apps) going to do if there's no weakmap support |
| 20:48 | <shu> | the answer is always "we will use maps" |
| 20:48 | <iain> | I'm not saying that collecting cross-thread cycles is objectively bad |
| 20:48 | <shu> | this is not going off a hunch here |
| 20:49 | <shu> | no, you're saying it'll take too long and be too difficult to build |
| 20:50 | <shu> | > And yeah, there will be some cases where that is difficult and there will be pressure on us to do something better |
| 20:50 | <iain> | I'm saying that I think there is value in shipping the easy part first, and warning people using the power-user feature that they have to be careful about memory leaks |
| 20:50 | <shu> | man i don't know why i'm not getting across |
| 20:50 | <shu> | this isn't about "you have to be more careful" |
| 20:51 | <iain> | I understand what you're saying |
| 20:51 | <shu> | this is about making the choice "leak forever or use free()" |
| 20:51 | <shu> | what is "more careful"? |
| 20:52 | <iain> | What are the cases in which you have to be putting these in a map? |
| 20:52 | <iain> | Like, it is clearly not the case that every possible usage of shared structs relies on local weakmaps |
| 20:53 | <shu> | no, it isn't, but to associate thread local data you need it |
| 20:53 | <shu> | and that's a pretty common thing to do |
| 20:55 | <shu> | spreadsheet model, suppose naively that each cell is a struct. the model is shared. there's some main-thread local stuff that's associated with the cell (event handlers, dom nodes, whatever) |
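shu's spreadsheet example can be sketched with a plain object standing in for the (hypothetical) shared struct cell, so the snippet runs today. The point is that the association table is per-thread, and without weakmap support for shared structs a Map is the fallback, at the cost of leaking entries:

```javascript
// Per-thread side table associating thread-local data with shared cells.
// `cell` stands in for a shared struct instance (a proposed feature);
// here it is just a local object so the sketch is runnable.
const cell = { value: 3.14 }; // imagine: shared, visible to all threads

// With weakmap support, dropping `cell` would free its local data too.
const localData = new WeakMap();
localData.set(cell, { domNode: "td#a1", handlers: [] });

// Without weakmap support, a Map is the fallback, and every entry
// lives until the whole table is torn down or explicitly deleted.
const leakyFallback = new Map();
leakyFallback.set(cell, { domNode: "td#a1", handlers: [] });

console.log(localData.get(cell).domNode); // "td#a1"
console.log(leakyFallback.size);          // 1
```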
| 20:55 | <rbuckton> | > What are the cases in which you have to be putting these in a map?
WeakMap if we can, or Map if we can't. There's no way around it if we want to incrementally adopt, and a full rewrite just isn't plausible. |
| 20:57 | <iain> | I'm looking at this V8 design doc discussing shared-to-unshared references |
| 20:58 | <rbuckton> | Also, while our AST nodes are essentially immutable, we do need to conditionally attach additional information to them, such as symbol and type information, source map information, etc. Generally these mappings only live as long as a given TypeChecker does, but we also incrementally parse and reuse entire subtrees of an AST if possible, so some information may need to live longer. |
| 20:59 | <iain> | And the options appear to be "leak some stuff" and "here's a speculative idea with 'non-negligible complexity' that we think should probably work" |
| 21:01 | <iain> | Er, and "make the entire heap global" |
| 21:03 | <iain> | The distributed global heap idea seems like the most promising idea in the long run |
| 21:04 | <iain> | But that's the one with non-negligible complexity |
| 21:05 | <shu> | we're going to prototype this and try it with our partners |
| 21:05 | <shu> | there's no request for you to agree and commit to something right now for either the JS proposal nor the Wasm proposal |
| 21:05 | <shu> | the skepticism is warranted, that's why we're actually going to build something... |
| 21:06 | <shu> | what i disagree with is all the arguing against having V8 try it, and for "have userspace leak everything" |
| 21:08 | <iain> | I think that there are significant webcompat concerns in cases where engines systematically disagree on whether particular operations leak |
| 21:09 | <shu> | i... did not say we're going to build it and ship it |
| 21:09 | <shu> | i said we are going to prototype this and try it out with our partners |
| 21:10 | <shu> | like, OTs, or dev trials |
| 21:10 | <shu> | > I think that there are significant webcompat concerns in cases where engines systematically disagree on whether particular operations leak |
| 21:11 | <iain> | It would be if such disagreements existed |
| 21:11 | <shu> | they... do? |
| 21:11 | <iain> | On this scale? |
| 21:11 | <shu> | like webkit does not have a cycle collector afaiu |
| 21:11 | <shu> | they have some insane 'object group' thing afaiu |
| 21:11 | <iain> | Wat |
| 21:11 | <shu> | well, i'd love to be corrected. you all have a cycle collector for DOM nodes, blink has Oilpan |
| 21:11 | <shu> | WK has... ad-hoc stuff |
| 21:12 | <iain> | Huh. TIL |
| 21:12 | <shu> | but the common cases are handled |
| 21:12 | <shu> | that seems pretty analogous to me |
| 21:13 | <shu> | the thesis is: main thread cycles are the common ones, and those are handleable. so if that's what the OT shows on both counts, then great. if it isn't, then our thesis was wrong and we need to start over |
| 21:14 | <iain> | Basically: I think SM would be willing to ship a version of shared structs with no shared-to-local edges if they were left out of the proposal, and then add them in later. If shared-to-local edges are part of the MVP, then we are unlikely to ship until we have reached rough memory-leak-parity with V8. |
| 21:15 | <iain> | I think that the former scenario provides value for users sooner. |
| 21:16 | <shu> | the decision here must be made in unison with shared wasmgc |
| 21:16 | <shu> | either js shared structs and wasmgc shared structs are both usable as weakmap keys or neither are, in the MVP |
| 21:17 | <shu> | given where staffing is right now i imagine shared wasmgc to be the proposal that makes the decision first |
| 21:18 | <shu> | who's the gc lead? jonco? |
| 21:18 | <iain> | jonco is the GC lead. Ryan Hunt is the wasm lead. The position I'm taking here is based on talking it over with Ryan. jonco would prefer if we didn't have to do any of this. |
| 22:54 | <shu> | iain: it may be helpful for us to chat, both the JS and the Wasm side together |
| 22:54 | <shu> | parallel convos have been happening |
| 22:55 | <iain> | Awkwardly Ryan just went on parental leave this week, and I will be going on leave myself any day now. |
| 22:56 | <shu> | ah, well, in no world is computers more important than children |
| 22:57 | <iain> | I think we should both be back by early November |
| 22:58 | <iain> | So if you ping us then, we will probably still remember some of this |
| 23:00 | <shu> | well, others on the team are still around, i assume |
| 23:01 | <shu> | anyway, process-wise, do you agree that this is a decision to be made during stage 2 (i.e. entry into stage 3)? which certainly won't come before November |
| 23:01 | <shu> | i doubt it'll even be baked enough for stage 3 november next year but we'll see... |
| 23:19 | <iain> | I agree that the decision that we are going to do shared structs at all (eg stage 1 to stage 2) comes before pinning down the details of exactly which parts we think belong in the MVP (stage 2 to stage 3). |