02:37 | <littledan> | When is the null zone what you want? |
02:37 | <littledan> | The registration-time zone is often what you want, on the other hand, e.g. for onload, setTimeout, etc. I don’t see how we could have any sort of consistent principle that you never get the registration context.
03:02 | <Steve Hicks> | It's a spectrum. "Never registration" is one extreme end, and I don't think it's tenable. But for any _given_ API (e.g. events as a whole, or possibly split out per event emitter/type) I'm very wary of a Zalgo-esque situation where the callback might run in one of two different contexts depending on unpredictable future conditions. I think that in the same way that it's important to know precisely whether a callback will run synchronously or not, it's similarly important to know ahead of time which context it will run in; having a fallback muddies it and is (in my view) a worse trade-off than running in a "useless" context by default.
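(A sketch of the two candidate contexts for one listener, using the Variable/run/get names from the TC39 AsyncContext proposal; `button` is an assumed DOM element, and which value the listener sees is exactly the open question here, not settled semantics:)

    const v = new AsyncContext.Variable();

    v.run("registration", () => {
      button.addEventListener("click", () => {
        // A user-initiated click has no causal JS context: fall back to
        // "registration"? a null/empty context? A JS-dispatched event
        // (below) would flow "dispatch" through instead.
        console.log(v.get());
      });
    });

    v.run("dispatch", () => {
      button.dispatchEvent(new MouseEvent("click"));
    });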
03:06 | <littledan> | For something like onclick, the normal case is that it was triggered by the mouse, and the exceptional case is where JS dispatched the event. What should happen for that? |
03:08 | <littledan> | I would prefer to never expose a null context. If we can avoid that, then the value of all of your variables is always derived from previous code which triggered this one. Sorry, that is a theoretical argument and not a use-case-driven one, but it feels like an important property to preserve. |
03:08 | <littledan> | I can relate to the vague Zalgo concern but am not sure if that is the overriding, most important thing to drive the decision |
09:55 | <Matteo Collina> | Hello! Chengzhong Wu told me to join here :D |
12:11 | <Stephen Belanger> | The exit is the end of the scope function. enterWith(...) is not the same thing as a correctly formed set/get interface. The enterWith(...) API is a hack, and the docs explicitly advise people not to use it: it does not have any scope end, and its context is not guaranteed to derive from any sort of root context, so it's an incorrect interface. It exists only because sometimes it's the only way to do something in certain cases, and you need to really understand the implications; it's basically a tool for APM vendors that needed the capability even if it was unsafe.
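(For contrast, Node's AsyncLocalStorage exposes both shapes today: run() is the correctly scoped interface, while enterWith() sets a store with no scope end. These are real Node APIs; the snippet is only illustrative:)

    const { AsyncLocalStorage } = require("node:async_hooks");
    const als = new AsyncLocalStorage();

    // Scoped: the store applies inside the callback and is restored on exit.
    als.run({ user: "alice" }, () => {
      als.getStore(); // { user: 'alice' }
    });
    als.getStore();   // undefined -- the scope ended

    // Unscoped: enterWith() has no corresponding exit point.
    als.enterWith({ user: "bob" });
    als.getStore();   // { user: 'bob' }, persisting through the rest of this
                      // synchronous execution and the async calls that follow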
12:12 | <Stephen Belanger> | And yes, there most certainly is a way to know when execution ends: any time the runtime would become idle and/or transition to microtask processing, it knows it has reached the end of the current section of synchronous code. We can have the same sort of scoping mechanism we already have, just with the value-setting part removed, so users can write exactly the example I shared above.
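(A minimal sketch of a scoping mechanism with the value-setting part removed; scope() is a hypothetical helper built on AsyncLocalStorage.run() purely for illustration:)

    const { AsyncLocalStorage } = require("node:async_hooks");
    const als = new AsyncLocalStorage();

    // Hypothetical scope(): a well-defined begin and end without binding a
    // new value. run() restores the previous store when the callback
    // returns, so nothing entered inside can leak past the boundary.
    function scope(fn) {
      return als.run(als.getStore(), fn);
    }

    scope(() => {
      als.enterWith("temporary"); // visible only until the scope ends
    });
    als.getStore();               // back to whatever it was before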
12:25 | <Stephen Belanger> | However, you don't actually need to know when an end occurs if all execution descends from a root at the beginning of execution, as then the start of any execution would be propagating, and therefore swapping out, the context value anyway.

You can know synchronously whether the value still needs to be held. If an async task is scheduled in a sync tick where that context is set, it is captured to be propagated; this creates a GC reference that holds it open. If a sync tick doesn't create any further async tasks, then at the end of that sync tick it is known that no new references were created. In branching scenarios, each branch flows up to exactly where it stops directly causing async code and then has no more references. Each sync tick holds the reference only while running, and each async task holds a reference until it runs. After the task runs it can discard its reference, but that happens after new references were created for any children, so the GC just functions as normal.

Now, data might sometimes live quite a long time, but this is intentional: it lives long only if descending execution from that point in context continues for long, and it should hold the value then, because anywhere in that descending code should be able to retrieve it. The risk of things living super long is also easily mitigated by just emptying the context in some way, such as setting it to undefined when you've decided you're done with it. My point is that in a correctly formed
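(A toy model of the reference-holding argument above, not engine internals: each scheduled task captures the context active when it was scheduled and drops that reference once it has run, so the value's lifetime tracks exactly the descending execution:)

    let currentContext;

    function schedule(fn) {
      const captured = currentContext; // the pending task holds a GC reference
      queueMicrotask(() => {
        const prev = currentContext;
        currentContext = captured;     // propagate into the task
        try {
          fn();                        // children scheduled in here capture
                                       // their own references first
        } finally {
          currentContext = prev;       // after this, the task's closure (and
        }                              // its reference) becomes unreachable
      });
    }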
21:09 | <Justin Ridgewell> | If you add an await 0 before entering ’bar’, the outside caller sees different behavior. It feels like Zalgo-lite.
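(A sketch of that hazard with Node's enterWith(); the foo/bar shape is illustrative, not the code from earlier in the discussion:)

    const { AsyncLocalStorage } = require("node:async_hooks");
    const als = new AsyncLocalStorage();

    function bar() {
      als.enterWith("from-bar"); // mutates the caller's context
    }

    async function foo() {
      // await 0;                // uncommenting changes what the caller sees
      bar();
    }

    foo();
    console.log(als.getStore()); // "from-bar" if bar ran synchronously,
                                 // undefined if foo awaited first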
21:12 | <Justin Ridgewell> | defineScope(() => {}) solves this by defining an exit point that cleans the global context.
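(A sketch of what that hypothetical defineScope() could look like over a bare global context; the helper is assumed, not an existing API:)

    let globalContext;

    // Hypothetical defineScope(): the wrapper is an explicit exit point
    // that cleans the global context on the way out.
    function defineScope(fn) {
      return (...args) => {
        const prev = globalContext;
        try {
          return fn(...args);
        } finally {
          globalContext = prev; // anything set inside is cleared here
        }
      };
    }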
21:15 | <Justin Ridgewell> | "The risk of things living super long though is also easily mitigated by just emptying the context in some way, such as setting it to undefined when you've decided you're done with it." In a flows-through system, I think you also need to free every cached promise that holds that context? They would strongly hold their resolution context. The engine wouldn’t mutate the user’s context automatically, and without a library API to know when the current execution is finalized, you’re left guessing when you can mutate the context or drop all promises.
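(A sketch of the cached-promise concern; cache and doFetch are hypothetical names:)

    const cache = new Map();

    function fetchOnce(key) {
      if (!cache.has(key)) {
        // In a flows-through model the promise captures the context it
        // resolves in, so the cache entry strongly holds that context.
        cache.set(key, doFetch(key));
      }
      return cache.get(key); // the context lives as long as this entry does
    }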
21:17 | <Justin Ridgewell> | (I also have a conflict for next Tuesday’s meeting, so I won’t be able to show up until at least halfway through) |