15:59 | <shu> | it's T-1 hours and still no VC link? |
16:14 | <Michael Ficarra> | @shu VC link is up now |
16:19 | <snek> | we have to sign into webex? |
16:19 | <Chris de Almeida> | * Draft schedule is up: https://github.com/tc39/Reflector/issues/536 |
16:19 | <Chris de Almeida> | no |
16:21 | <snek> | oh i see. the screen saying sign in with existing account got me
16:22 | <Chris de Almeida> | and you don't have to use the app either -- you can use browser |
16:48 | <Rob Palmer> | The meeting will begin in 11 minutes. There are seven of us in the call now. |
16:48 | <Aki> | i'm here! hi! |
16:49 | <Aki> | i'm in the self-imposed lobby bc i'm on the phone |
16:49 | <Rob Palmer> | Is the lobby music good? Do you need any help progressing?
16:50 | <Aki> | lol thank you webex, i noticed: https://snaps.akiro.se/2407_ua0hr.jpg |
16:50 | <snek> | could just have the meeting in the lobby |
16:57 | <Rob Palmer> | 3 minutes... encounting |
16:57 | <Aki> | encounting ? |
16:58 | <Rob Palmer> | (I will find you the relevant movie clip soon) |
17:03 | <snek> | rob you're roboting a lot |
17:04 | <snek> | maybe turn your video off? |
17:10 | <Rob Palmer> | I am getting a new router delivered tomorrow. |
17:17 | <shu> | my webex cuts out for like 5s every 2 mins or so |
17:17 | <shu> | i wonder if it's a web version problem or a corp wifi problem... |
17:17 | <jkup> | Wow 4 new members
17:18 | <snek> | i think its a web version problem |
17:20 | <shu> | i don't want to install the native client but all right |
17:40 | <kriskowal> | But we can say, Welcome Back Editor Ben. |
17:41 | <Michael Ficarra> | Chip had me in the first half |
17:41 | <shu> | i got a new coin you gotta hear about then |
17:52 | <nicolo-ribaudo> | fomo |
17:54 | <nicolo-ribaudo> | I wanted to say something funny but my mic is not working |
17:54 | <nicolo-ribaudo> | Thanks everybody! |
17:55 | <nicolo-ribaudo> | Somebody say that I say thanks :P |
18:13 | <peetk> | should there be a String.prototype.chars for when you really do want to iterate over a string's characters |
18:13 | <bakkot> | "characters" is a wildly overloaded term |
18:14 | <bakkot> | there should be a String.prototype.codepoints to iterate over a string's codepoints |
18:14 | <bakkot> | I'm less convinced of a .graphemes because that's a 402 thing but it could perhaps be done |
18:14 | <bakkot> | I am not sure there's any use cases for a .codeunits but if you have a strong use case that could be done as well |
18:15 | <snek> | my brain assumed it would be .values |
18:15 | <snek> | but i guess Symbol.iterator is a better way to spell that |
18:16 | <peetk> | yea mb i meant codepoints |
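bakkot's distinction between code units and code points can be demonstrated with today's string iterator, which already walks by code point. The `codepoints` generator below is a sketch of the hypothetical method under discussion, not a real API:

```javascript
// The default String iterator (Symbol.iterator) yields whole code points,
// not UTF-16 code units, so a hypothetical String.prototype.codepoints can
// be sketched on top of it. "codepoints" is an assumed name, not a real API.
function* codepoints(str) {
  for (const ch of str) {
    yield ch.codePointAt(0);
  }
}

const face = "a\u{1F600}b";          // "a😀b": 4 code units, 3 code points
console.log(face.length);            // 4 (code units)
console.log([...codepoints(face)]);  // [97, 128512, 98]
```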
18:24 | <nicolo-ribaudo> | My mic keeps not working in webex even with different browsers and after rebooting
18:28 | <nicolo-ribaudo> | Oh it works on my ipad |
18:28 | <nicolo-ribaudo> | Good |
18:28 | <keith_miller> | When we decide that we want to change the behavior of spec operations like toFixed do we comment or rename them to say that they're legacy? |
18:29 | <keith_miller> | If we don't, would it be worthwhile to do that so we don't accidentally slip in new uses in the future? Or we could have a spec linter rule, idk if such a tool exists? |
18:30 | <bakkot> | yeah, probably we should rename the existing ones and add new stricter ones |
18:30 | <bakkot> | or add a "looseness" parameter, I guess |
18:32 | <littledan> | Are there other changes that we would make to Temporal if we wanted to apply all the conventions? |
18:32 | <bakkot> | not doing any coercions at all :) |
18:32 | <bakkot> | which would save a lot of tests, among other things |
18:32 | <littledan> | I meant for the conventions that we already adopted |
18:32 | <bakkot> | we already adopted "don't do any coercions" |
18:33 | <littledan> | Oh right |
18:33 | <littledan> | I am in favor of applying this to Temporal |
18:34 | <littledan> | I think it is much more likely that we can apply this change to 402. |
18:34 | <littledan> | We aren't talking about 262 overall, just those three methods
18:35 | <ptomato> | FWIW for Temporal.Duration we already rejected non-integral inputs because of people putting in values like Temporal.Duration.from({ hours: 1.5 }) |
18:37 | <ptomato> | I'm open to going back and changing other things like roundingIncrement , fractionalSecondDigits , and Temporal.PlainDate.from({ year: 2024.5, month: 7.5, day: 29.5 }) to reject non-integral inputs but I'd like us to explicitly approve that now, rather than me coming back to a following meeting with a PR |
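The convention ptomato describes, rejecting rather than silently truncating non-integral inputs, can be sketched as a standalone check. This is illustrative only, not Temporal's actual spec machinery:

```javascript
// Sketch of the "no coercion" convention under discussion: reject anything
// that isn't already an integral Number, instead of coercing or truncating.
function requireIntegral(value, name) {
  if (typeof value !== "number" || !Number.isInteger(value)) {
    throw new RangeError(`${name} must be an integral number`);
  }
  return value;
}

requireIntegral(7, "month");       // ok, returns 7
// requireIntegral(7.5, "month"); // RangeError: no truncation
// requireIntegral("7", "month"); // RangeError: no string coercion
```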
18:38 | <littledan> | > I'm open to going back and changing other things like
18:38 | <ptomato> | I don't know off the top of my head how much deviation there is from the other conventions, I'd have to check |
18:39 | <rbuckton> | IMO, it makes sense that year, month, and day require integral inputs, but not hour/minute/second. Given one of the goals of Temporal is to make date and time math easier, not being able to write Temporal.Duration.from({ hours: 1.5 }) is unfortunate. |
18:41 | <ptomato> | should Temporal.Duration.from({ hours: 1.15 }) be 1.15 hours or 1.1499999999999999112 hours? those are distinct nanosecond values |
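ptomato's two candidate values can be checked directly in any JS engine: the literal 1.15 already stores the second one, because 1.15 has no exact binary floating-point representation.

```javascript
// The Number literal 1.15 does not store 1.15. The closest double is
// ~1.1499999999999999112, so "1.15 hours" as a Number already names a
// slightly different nanosecond total.
console.log((1.15).toPrecision(20));        // "1.1499999999999999112"
console.log(1.15 === 1.1499999999999999112); // true: the same double
```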
18:41 | <bakkot> | What about the other conventions? |
18:42 | <littledan> | Well I was wondering if ptomato had an opinion on the ānot coercing at allā one with respect to Temporal |
18:43 | <rbuckton> | I regularly want to be able to easily scale a duration to implement backoff, and there's already no convenient way to scale a fixed duration (e.g., hours and smaller). |
18:44 | <ptomato> | > Well I was wondering if ptomato had an opinion on the "not coercing at all" one with respect to Temporal
18:45 | <littledan> | I don't have much of an opinion, but what I don't want is to have to come back to the next meeting with a PR |
18:45 | <rbuckton> | for backoff I'm more likely to convert a Duration input into ms or ns via total and just work with the single unit that I can scale appropriately. |
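rbuckton's workaround, collapsing a duration to a single unit and then scaling it, amounts to ordinary exponential backoff. A minimal sketch in plain milliseconds (the function name, factor, and cap are invented for illustration, not a proposed API):

```javascript
// Exponential backoff over a single scalable unit (milliseconds), the way
// you would after converting a Duration via total({ unit: "milliseconds" }).
function backoffDelays(baseMs, attempts, factor = 2, maxMs = 30_000) {
  const delays = [];
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(baseMs * factor ** i, maxMs)); // scale, then cap
  }
  return delays;
}

console.log(backoffDelays(100, 5)); // [100, 200, 400, 800, 1600]
```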
18:46 | <littledan> | Rbuckton: you are making a very different feature request from this coercion discussion. Probably this is best for the temporal v2 bucket |
18:47 | <ptomato> | rbuckton: we considered a Temporal.Duration.prototype.multiply at one point but decided it was out of scope. if you have an idea for how it should work, could you open an issue on https://github.com/js-temporal/proposal-temporal-v2/? I think the use case would be helpful |
18:47 | <rbuckton> | > Rbuckton: you are making a very different feature request from this coercion discussion. Probably this is best for the temporal v2 bucket
18:48 | <ptomato> | oh actually we already have one: https://github.com/js-temporal/proposal-temporal-v2/issues/7 |
18:51 | <bakkot> | littledan ptomato this meeting has some extra time if you want to put together a last-minute item for dropping coercion from temporal, though it is very far past the deadline so it would need to be conditional approval |
18:55 | <littledan> | Why not stage 4 today for import attributes? |
18:55 | <littledan> | > littledan ptomato this meeting has some extra time if you want to put together a last-minute item for dropping coercion from temporal, though it is very far past the deadline so it would need to be conditional approval
18:57 | <bakkot> | no one subsequently raising an objection at the next meeting once they'd had time to review, I guess |
18:58 | <shu> | waldemar: what's the issue with the spec text? |
18:58 | <littledan> | OK, this is a different way of using conditional advancement than we usually do, which is more about conditions which can be met async |
20:01 | <Ben> | I'm having trouble with my microphone, but I'll get in on notes again |
20:03 | <saminahusain> | Hats are still available
20:03 | <Justin Ridgewell> | I can help in 5ish min |
20:04 | <Andreu Botella> | I can help until AsyncContext |
20:07 | <Chengzhong Wu> | How can I register for one? š¤ |
20:08 | <Michael Ficarra> | I don't recall a linearity consensus |
20:08 | <Michael Ficarra> | monotonicity only, which is what we have |
20:10 | <littledan> | I thought exponential backoff was waiting longer and longer (I don't have background in this area) |
20:11 | <nicolo-ribaudo> | I also thought the same, |
20:11 | <snek> | depends on what you're talking about |
20:11 | <nicolo-ribaudo> | For some reason I always assumed that pause was monotonically increasing, and not monotonically decreasing |
20:11 | <snek> | in this case it gets shorter and shorter until falling back to the slow path |
20:11 | <Chris de Almeida> | [message content not captured in log]
20:12 | <Michael Ficarra> | wait, that's still not talking about the relationship |
20:12 | <Michael Ficarra> | yes the input can increase linearly, but that doesn't mean it is linearly proportional to the time waited |
20:15 | <littledan> | saying "the spec is not intelligible" is not an effective way to communicate because it is very unspecific. |
20:15 | <littledan> | It would be better to separate the discussions of what should happen, from how we encode this in the words |
20:15 | <Michael Ficarra> | also there is no editorial issue with the spec as far as I can tell |
20:15 | <Michael Ficarra> | we were very careful with the phrasing in this proposal |
20:17 | <Rob Palmer> | This is reminding me of the coffee filter in/out discussion in Nov 2019. Folk are seeing the pause N from both sides. Shu wants small N to mean a longer pause. |
20:17 | <peetk> | i agree with justin that note 3 does not say what shu was saying it should say |
20:18 | <littledan> | > This is reminding me of the coffee
20:18 | <Michael Ficarra> | note 1 literally says that this means a pause instruction on such architectures lol |
20:18 | <littledan> | so it is confusing; we need to separate and order them |
20:20 | <nicolo-ribaudo> | My reading of step 2 is the opposite of what shu is saying, and it's not just the note being the opposite. Assuming that "a signal is sent" means "wait a little bit", for larger Ns it waits a little bit more times
20:20 | <littledan> | I am also confused by the wording, but let's first focus on, what should the thing do, and then we can fix/disambiguate the wording |
20:21 | <Rob Palmer> | I think we have agreement on the normative: the pauses between spins get bigger and bigger before going to sleep. |
20:21 | <littledan> | no it was the opposite :) |
20:21 | <nicolo-ribaudo> | > I think we have agreement on the normative: the pauses between spins get bigger and bigger before going to sleep.
20:21 | <littledan> | also we don't have agreement; Justin is disagreeing on substance |
20:22 | <littledan> | what is it that Shu is proposing? |
20:22 | <kriskowal> | We must now decide whether to spin in session or yield to the agenda :P |
20:23 | <kriskowal> | Context switches are expensive |
20:23 | <Justin Ridgewell> | Having the smaller-i-longer-wait semantics is fine with me if it's really how other implementations have done it.
20:23 | <snek> | pauses get bigger and smaller in mature implementations. ideally you want to have the behavior that kris explained, but implementations like linux for example also have a signal of "starvation" which can cause the delay to get longer as well. |
20:23 | <Justin Ridgewell> | The current semantics with reworded text and note would be fine with me. |
20:24 | <Justin Ridgewell> | The current semantics reads as the opposite of the current spec/note to me. |
20:24 | <saminahusain> | will you be in Tokyo? I will bring a bunch. |
20:24 | <littledan> | > pauses get bigger and smaller in mature implementations. ideally you want to have the behavior that kris explained, but implementations like linux for example also have a signal of "starvation" which can cause the delay to get longer as well.
20:25 | <snek> | yeah i think it would be best if we simply don't constrain it |
20:25 | <littledan> | this will need an overflow item, there's too much overflow to get through it |
20:25 | <littledan> | we have a lot to say |
20:26 | <bakkot> | if we don't give it semantics, then someone will ship (wlog) "longer iteration number is short wait", and then some application will be written in such a way that depends on that behavior for performance, and now it is web-reality and can't be changed without negative performance impact
20:26 | <bakkot> | so I don't see much benefit from not giving it semantics |
20:26 | <rbuckton> | spin locks use the counter to make a determination as to whether you've been spinning often enough to justify a context switch/kernel transition. The counter is used to indicate frequency and sometimes introduce and reduce contention. not all counter values are guaranteed to pause in some implementations |
20:26 | <bakkot> | I guess it allows implementations to give special behavior for specific scripts, which they do sometimes do, but... ugh |
20:27 | <littledan> | well, this is a lot like tuning GC |
20:28 | <rbuckton> | Most spinlock/spinwait implementations are very handwavy on specifics because it's CPU architecture dependent |
20:28 | <Chengzhong Wu> | Likely will be there, thanks! |
20:32 | <rbuckton> | iteration count is used more to determine pause request frequency, not how much time to wait. |
20:34 | <rbuckton> | If the iteration count is high and you're approaching a context switch, you want to wait less and less time to give the high-iteration spin a chance to attain the lock. But many spin wait operations will decide that it may only pause for any length of time when iterationCount % 10 === 0 or iterationCount % 100 === 0 , etc., and return immediately in other cases. |
20:35 | <rbuckton> | The purpose of the argument is to indicate spin frequency and to not always pause.
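A toy sketch of what rbuckton describes, where the iteration count gates whether to pause at all rather than how long to pause. The specific thresholds (every 10th and 100th iteration) are made up for the example:

```javascript
// The count selects pause *frequency*: most iterations don't pause at all,
// occasional iterations pause briefly, rare ones pause longer.
function shouldPause(iterationCount) {
  if (iterationCount % 100 === 0) return "long";  // rare, longer pause
  if (iterationCount % 10 === 0) return "short";  // occasional short pause
  return "none";                                  // usually: spin right through
}

console.log([9, 10, 50, 100].map(shouldPause)); // ["none", "short", "short", "long"]
```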
20:37 | <shu> | i am just confused what there is to disagree on, there is literally no observable behavior |
20:37 | <nicolo-ribaudo> | > i am just confused what there is to disagree on, there is literally no observable behavior
20:37 | <shu> | it's implementation-defined |
20:38 | <rbuckton> | My understanding of the concerns is that the spec text is either over-detailed or uses confusing terminology.
20:38 | <shu> | i will remove the text about the backoff and any particular relation between the pause length and the iterationNumber argument |
20:39 | <shu> | there's another design constraint in play which waldemar and michael saboff seem to be saying: if there is no observable behavior, there should be no argument |
20:39 | <Justin Ridgewell> | > i will remove the text about the backoff and any particular relation between the pause length and the iterationNumber argument
20:40 | <shu> | Justin Ridgewell: how so? |
20:40 | <shu> | he said JSC will likely ignore it, which is certainly fine |
20:40 | <shu> | my personal intention is that until the code is in the optimizing JIT and inlined, it is ignored |
20:40 | <Justin Ridgewell> | His final statement was that if thereās no note, the param will just be ignored. |
20:40 | <littledan> | > he said JSC will likely ignore it, which is certainly fine
20:41 | <shu> | i don't think i do...? |
20:41 | <shu> | presumably JSC will do the right thing for M-chips |
20:41 | <shu> | and V8 might do a more generic thing? |
20:41 | <littledan> | OK sure |
20:42 | <littledan> | but "just ignore it" doesn't sound like something tuned for M-chips; it sounds like not engaging with the question |
20:42 | <shu> | > but "just ignore it" doesn't sound like something tuned for M-chips; it sounds like not engaging with the question
20:42 | <shu> | why would we try to align on how to interpret a hint? |
20:42 | <shu> | it's supposed to give freedom to implementations! |
20:42 | <nicolo-ribaudo> | (I don't know if this is actually implementable given the perf constraints) What if instead of the iteration number we passed an object, and the engine internally counts how many times it sees that object? So that there is no possible expectation connected to "big number" vs "small number" |
20:42 | <shu> | no thanks... |
20:43 | <kriskowal> | I approve of cramming as many steam (locomotive, motor) metaphors into this language as we can.
20:43 | <littledan> | > why would we try to align on how to interpret a hint?
20:43 | <shu> | well, we'd want to align on, engines feeling like they can interpret this as a hint, rather than that it's just something to ignore |
20:44 | <Justin Ridgewell> | I assume the CPU architecture has some semantic meaning for the hint? |
20:44 | <littledan> | yes, it is valid, but I hope that JS engines can get past the philosophical disagreement that seems to exist right now and agree that this is a potentially usable hint if it makes sense for the architecture |
20:44 | <Justin Ridgewell> | We can tell engines to ignore it, but how do they map our hint's intention to the arch's intention?
20:44 | <littledan> | since msaboff's statement was not about the architecture |
20:44 | <shu> | > I assume the CPU architecture has some semantic meaning for the hint?
20:45 | <rbuckton> | [message content not captured in log]
20:46 | <Justin Ridgewell> | [message content not captured in log]
20:46 | <rbuckton> | Most implementations of a pause -like method I've seen are heavily optimized based on OS, CPU Arch, runtime, clock speed, and other factors and as such can't be easily expressed in an algorithmic form. |
20:46 | <rbuckton> | This is my observation from looking at spin-wait mechanisms in several languages and runtimes. |
20:46 | <rbuckton> | This isn't precise copy that I would use in the spec. |
20:47 | <rbuckton> | This is more to argue that we should put a lot less into the algorithm steps and NOTEs than we currently are. |
20:49 | <shu> | i think there are only two realistic options: [options list not captured in log]
i still think 2 is uncontroversial but clearly it is controversial but not in a way i understand how to make progress on
20:49 | <rbuckton> | We should not drop the hint argument. |
20:50 | <shu> | i think this would be a strictly worse API for its purpose if it dropped the hint argument, yes |
20:51 | <Justin Ridgewell> | Would Waldemar be satisfied with bigger-number-longer-wait? Then let the engine handle that to match the VM's CPU instructions, e.g. smaller-number-longer-wait?
20:51 | <rbuckton> | The simplest text would be something along the lines of: "An implementation can use the value of iterationCount as a hint to determine whether it is advisable to request the CPU pause for an extremely short period of time (TBD) to decrease contention and avoid an expensive context switch." I can look for something better though. |
20:52 | <rbuckton> | Basically, (2) but with a few examples of what iterationCount could be used for and the time scale we are trying to avoid. |
20:54 | <shu> | > Would Waldemar be satisfied with bigger-number-longer-wait? Then let the engine handle that to match the VM's CPU instructions, e.g. smaller-number-longer-wait?
20:55 | <bakkot> | I am confused about why people don't like "semaphore" as the name for this |
20:55 | <bakkot> | this is the obvious name |
20:55 | <rbuckton> | I don't think "bigger-number-longer-wait" is correct, TBH. |
20:55 | <bakkot> | everyone uses this name when independently implementing this exact API, which has happened many times |
20:56 | <Justin Ridgewell> | I think the implementation is still left to the engine, but the hint now has a known meaning. Whether it uses exponential/quadratic/etc, whether the CPU expects big or small numbers, is left to the implementation.
20:56 | <bakkot> | https://www.npmjs.com/package/semaphore https://www.npmjs.com/package/@shopify/semaphore https://www.npmjs.com/package/async-sema |
20:56 | <littledan> | I think just because "semaphore" is often used to mean "binary semaphore" |
20:56 | <littledan> | but yes I agree that the name makes sense, just trying to explain the confusion |
20:57 | <Justin Ridgewell> | > I don't think "bigger-number-longer-wait" is correct, TBH.
20:57 | <rbuckton> | It may be helpful to put together links to spin wait implementations in other languages and to papers on spin waiting, contention, and cpu/task scheduling that are relevant. |
20:57 | <Justin Ridgewell> | I don't know whether it matches CPU behavior.
20:57 | <rbuckton> | It's the most intuitive, and it's how I've written async-retries before.
20:58 | <Justin Ridgewell> | Sync retries sounds very similar to async retries…
21:00 | <shu> | > I think the implementation is still left to the engine, but the hint now has a known meaning. Whether it uses exponential/quadratic/etc, whether the CPU expects big or small numbers, is left to the implementation.
21:01 | <shu> | the meaning is basically, write this: for (i; i < spinCount; i++) { TryLock(); Atomics.pause(i); } , and know the VM will do what it thinks best |
21:01 | <keith_miller> | I think if we're going to spec something bigger number longer wait is more intuitive. If you want the opposite you can just implement your spin lock in a countdown loop rather than a count up loop |
21:01 | <shu> | do not think at a lower level, because the JS programmer has no control at that lower level given the multi-tiered execution, etc |
21:02 | <shu> | keith_miller: my proposal is to say nothing in the spec text. an implementation can simply choose to interpret it as "larger number is longer" |
21:02 | <snek> | why do people want to specify a certain relationship between loop count and pauses |
21:02 | <keith_miller> | I think that's the worst outcome |
21:03 | <rbuckton> | > Sync retries sounds very similar to async retries…
pause the thread to give another core an opportunity to perform the operation under contention. However, the closer we get to the amount of time it should have taken to put the thread to sleep, we should pause less frequently to aggressively avoid that threshold. In some implementations, like .NET's SpinWait, once you reach a certain number of spins you trigger a sleep(0) or sleep(1), which is a long wait, and then essentially restart as if the spin count was close to zero again.
21:03 | <shu> | I think that's the worst outcome |
21:03 | <littledan> | we don't specify pauses for GC; we just trust that engines will figure out a good policy |
21:03 | <keith_miller> | Then we could end up in a world where one implementation does longer waits as count goes up and others do shorter |
21:04 | <snek> | that's good, the implementation should do whatever is best on the machine its running on. |
21:04 | <rbuckton> | So the "backoff" tends to be something like: short -> shorter -> shortest -> long -> short -> shorter -> shortest -> etc. |
21:04 | <rbuckton> | With a number of "don't even wait at all" operations thrown into the mix. |
21:04 | <rbuckton> | That's perfectly acceptable. An implementation should choose the pause strategy that works best within its own requirements. |
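That short/shorter/shortest-then-long cycle can be sketched as a pure decision function. The numbers below are invented for illustration, not .NET's actual tuning:

```javascript
// Each cycle the pause amount shrinks as the sleep threshold approaches;
// hitting the threshold yields a long "sleep", then the cycle restarts.
function nextAction(spin, threshold = 4) {
  const phase = spin % (threshold + 1);
  if (phase === threshold) return "sleep";  // long wait, then restart
  return `pause x${threshold - phase}`;     // shorter as phase grows
}

console.log([0, 1, 2, 3, 4, 5].map(s => nextAction(s)));
// ["pause x4", "pause x3", "pause x2", "pause x1", "sleep", "pause x4"]
```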
21:05 | <keith_miller> | I guess my concern is that one workload wants longer and longer waits. And others want shorter and shorter waits |
21:05 | <keith_miller> | And there's no way to know as an implementation a priori which is better |
21:05 | <shu> | keith_miller: that's fair, and the concession is that you can't actually express that in JS code |
21:05 | <shu> | because the call overhead is so high in the interpreter |
21:06 | <ryzokuken> | ljharb could you add yourself to the queue again? |
21:06 | <bakkot> | shu: I guess I agree that the actual API doesn't look exactly like a traditional semaphore but the way you use it is exactly the way you use a semaphore 99% of the time
21:06 | <rbuckton> | > I guess my concern is that one workload wants longer and longer waits. And others want shorter and shorter waits
21:06 | <snek> | i don't think a user of the pause api would want that. that is a concern you have to express in cpu instructions, so js would be a poor choice for that code. |
21:06 | <keith_miller> | Then we should just not have anything passed by the JS users and the JS engine can infer the loop count if they really want to |
21:06 | <rbuckton> | If you want longer waits, you use a mutex/condvar or futex. Spin waiting generally should be as short as possible to avoid starvation and break contention. |
21:07 | <rbuckton> | If you want longer waits, you sleep , not pause . |
21:07 | <rbuckton> | > Then we should just not have anything passed by the JS users and the JS engine can infer the loop count if they really want to
21:08 | <shu> | bakkot: the API as proposed currently is about governing rate limits, not a mutual exclusion building block. i understand conceptually the core mechanism is the same. |
21:08 | <rbuckton> | "Spin lock" is also the wrong terminology. This is a "spin wait" and is generally used to write lock-free code via compare-and-swap |
21:08 | <shu> | if i were to propose a mutual exclusion Semaphore, i would not use this protocol |
21:08 | <rbuckton> | You can write a spin lock using pause , but that's not the goal. |
21:09 | <shu> | it may be that because of this API mismatch, neither Semaphore-like APIs should get the simple name "Semaphore" |
21:09 | <Luca Casonato> | https://docs.rs/tokio/latest/tokio/sync/struct.Semaphore.html <- thing called semaphore in a non-js language, with a broadly similar api to our proposal. we will investigate other names though! :D |
21:09 | <littledan> | +1 on Stage 1 |
21:09 | <shu> | it can also be perhaps more simply resolved by just namespacing |
21:09 | <shu> | Governor.Semaphore vs Locks.Semaphore or Atomics.Semaphore or whatever
21:10 | <rbuckton> | In esfx, I use Semaphore for the thread coordination primitive, and AsyncSemaphore for the async coordination (non multi-threaded) primitive, though AsyncSemaphore is still a simple coordination primitive and has no knowledge of async iteration.
21:10 | <shu> | > https://docs.rs/tokio/latest/tokio/sync/struct.Semaphore.html <- thing called semaphore in a non-js language, with a broadly similar api to our proposal. we will investigate other names though! :D
tokio::sync, right?
21:11 | <Luca Casonato> | Yes! |
21:11 | <bakkot> | > In esfx, I use Semaphore
(Semaphore as proposed wouldn't have knowledge of async iteration; async iteration would take a governor-protocol-implementing-thing, which Semaphore would be)
21:11 | <shu> | right, my objection is about this being a globally named Semaphore |
21:12 | <bakkot> | Governor.Semaphore wfm if we have such a class though fwiw I don't think Governor needs to actually exist |
21:12 | <nicolo-ribaudo> | Governor is just a protocol right? Like Thenable |
21:13 | <bakkot> | right |
21:13 | <bakkot> | but the proposal includes a class, also |
21:13 | <bakkot> | for some reason |
21:13 | <Luca Casonato> | bakkot: the helpers! :D |
21:13 | <keith_miller> | > keith_miller: that's fair, and the concession is that you can't actually express that in JS code
21:14 | <littledan> | yeah I agree with Igalia that this stuff is a bit complicated, but investigating APIs during Stage 1 SGTM |
21:14 | <shu> | keith_miller: but it might not be a linear/superlinear relationship at all, like what Ron was saying with "wait every 10th iteration" or something |
21:14 | <shu> | keith_miller: can you articulate why an implementation-defined hint is the worst outcome? |
21:14 | <rbuckton> | https://en.cppreference.com/w/cpp/thread/counting_semaphore https://learn.microsoft.com/en-us/dotnet/api/system.threading.semaphore?view=net-8.0 https://learn.microsoft.com/en-us/dotnet/api/system.threading.semaphoreslim?view=net-8.0 https://docs.python.org/3/library/asyncio-sync.html#asyncio.Semaphore https://docs.python.org/3/library/asyncio-sync.html#boundedsemaphore https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Semaphore.html https://esfx.js.org/esfx/api/threading-semaphore.html?tabs=ts (NOTE: my implementation, though not heavily used) https://esfx.js.org/esfx/api/async-semaphore.html?tabs=ts (NOTE: my implementation, though not heavily used) |
21:15 | <rbuckton> | > keith_miller: but it might not be a linear/superlinear relationship at all, like what Ron was saying with "wait every 10th iteration" or something
pause() (MIT Licensed): https://github.com/dotnet/runtime/blob/4c58b5a5132cb089b23d32cafe3fcfa7e615a0da/src/libraries/System.Private.CoreLib/src/System/Threading/SpinWait.cs#L144
21:16 | <rbuckton> | Also: https://github.com/dotnet/runtime/blob/040fde48a75f0c211353f073e4f69e2e31607752/src/coreclr/System.Private.CoreLib/src/System/Threading/Thread.CoreCLR.cs#L139 |
21:16 | <ljharb> | to me a semaphore are those big flags the dude waves on the top of an aircraft carrier |
21:17 | <ljharb> | like, a way to send messages/signals |
21:18 | <keith_miller> | From skimming that code it does Thread.Yield(); so it doesn't provide a hint? |
21:18 | <rbuckton> | > to me a semaphore are those big flags the dude waves on the top of an aircraft carrier
21:19 | <bakkot> | > https://en.cppreference.com/w/cpp/thread/counting_semaphore
Semaphore really seems like the right name, though I'm fine with CountingSemaphore also. shu wdyt about this list?
21:19 | <shu> | Given this list, I'm fine with a Semaphore that implements this symbol-based governor protocol
21:19 | <shu> | i do not object to using the name Semaphore , so long as it is clear that it is not the mutual exclusion building block semaphore |
21:20 | <bakkot> | the protocol isn't symbol-based |
21:20 | <keith_miller> | I agree it does sleep periodically but sleep is an independent concept. |
21:20 | <shu> | i thought it has a protocol? |
21:20 | <bakkot> | it's string-based, at least currently |
21:20 | <keith_miller> | Which, I assume isn't intended to be part of your proposal? |
21:20 | <bakkot> | it is, there is a string-named async acquire method which gives you an object with a string-named sync release method |
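A sketch of the protocol bakkot describes (a string-named async acquire resolving to a token with a string-named sync release), plus the sync tryAcquire variant floated in the discussion. Everything beyond those names is an assumption, not the proposal's actual design:

```javascript
// Counting semaphore where only a previously-acquired token can return a
// slot, so holders announce "I personally am done", never "someone is done".
class Semaphore {
  #free;
  #waiters = [];
  constructor(count) { this.#free = count; }

  #token() {
    let released = false;
    return {
      release: () => {
        if (released) return;            // double release is a no-op
        released = true;
        const next = this.#waiters.shift();
        if (next) next();                // hand the slot to a waiter
        else this.#free++;               // or return it to the pool
      },
    };
  }

  tryAcquire() {                         // sync variant: a token, or null
    if (this.#free === 0) return null;
    this.#free--;
    return this.#token();
  }

  async acquire() {                      // async variant: waits for a slot
    if (this.#free > 0) this.#free--;
    else await new Promise(resolve => this.#waiters.push(resolve));
    return this.#token();
  }
}

const sem = new Semaphore(1);
const token = sem.tryAcquire();          // holds the only slot
console.log(sem.tryAcquire());           // null: no slots left
token.release();                         // the holder gives the slot back
console.log(sem.tryAcquire() !== null);  // true: slot is free again
```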
21:21 | <shu> | ah, string, sorry |
21:21 | <Luca Casonato> | [message content not captured in log]
21:21 | <Michael Ficarra> | yeah string vs symbol was one of the post-Stage 1 design considerations |
21:21 | <rbuckton> | The Semaphore in the slides implements the Governor interface, which is very iterator specific |
21:21 | <bakkot> | (Symbol.dispose isn't part of the protocol but should generally be there yes) |
21:21 | <Michael Ficarra> | [message content not captured in log]
21:22 | <shu> | that acquire(): Promise<GovernorToken> is not how i would declare acquire if i were designing a shmem semaphore |
21:22 | <snek> | how about tryAcquire |
21:22 | <Luca Casonato> | and, not or |
21:22 | <rbuckton> | Well, it's listed as "non-essential", to be fair, but if it did it would pollute semaphore with unrelated functionality, IMO. |
21:22 | <shu> | like, it would be a blocking void acquire() that can't be called on the main thread |
21:22 | <Luca Casonato> | > how about tryAcquire
21:23 | <Michael Ficarra> | > how about tryAcquire
21:23 | <Michael Ficarra> | sorry it went by really fast |
21:23 | <snek> | no i was distracted, mb |
21:23 | <bakkot> | > like, it would be a blocking void acquire() that can't be called on the main thread
the void part? the fact that you can release a thing you didn't previously acquire seems like a non-essential (and bad) part of the textbook C semaphore implementation
21:24 | <rbuckton> | In my AsyncSemaphore implementation, you don't acquire and then dispose , as that grants a capability to the acquire caller that belongs on the semaphore itself |
21:24 | <snek> | i am also a fan of release tokens. you can build the meh c api on top of that if you want. |
21:25 | <rbuckton> | how attached are you to the |
21:25 | <littledan> | another reason why those other snapshots are not all exposed is because these are often not useful and each is complicated to define. So the idea is to add the "causal snapshots" where they are useful. |
21:25 | <bakkot> | ... access to the semaphore should not imply access to the ability to cause the semaphore to think it has more resources available |
21:25 | <rbuckton> | IMO that's like putting cancel() on a Promise return value. It's the wrong separation of concerns. |
21:25 | <bakkot> | you should only be able to cause it to think it has resources if it previously told you that you owned those resources |
21:26 | <bakkot> | I have exactly the reverse intuition about who should be responsible for this capability |
21:26 | <shu> | yeah that particular implementation keeps a magic internal _count to determine |
21:26 | <snek> | wait ron you're arguing that there should be a release method that does not require a previously acquired capability? |
21:27 | <rbuckton> | To use the train or ship "Semaphore" metaphor, that's like waving a flag to let a user in, and then that user gets to wave the flag to let the next user in.
21:27 | <shu> | https://github.com/dotnet/runtime/blob/4c58b5a5132cb089b23d32cafe3fcfa7e615a0da/src/libraries/System.Private.CoreLib/src/System/Threading/SpinWait.cs#L225-L230 |
21:27 | <bakkot> | ... right, yes, that's the correct thing |
21:27 | <bakkot> | the user announces when they're done |
21:27 | <shu> | i am warming up to the idea of letting the VM do complete magic here without a userland hint |
21:27 | <bakkot> | the user does not get to say "someone is done", only "I personally am done" |
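The capability model bakkot is arguing for can be sketched in a few lines of plain JS (illustrative names only, not any proposed API): a successful acquire hands back a one-shot token, and that token is the only way to return the slot.

```javascript
// Sketch (not the proposed API): a counting semaphore whose only release
// capability is a one-shot token handed out by a successful acquire.
class TokenSemaphore {
  #available;
  constructor(count) {
    this.#available = count;
  }
  // Returns a release token, or null if no slot is free.
  tryAcquire() {
    if (this.#available === 0) return null;
    this.#available--;
    let released = false;
    const sem = this;
    return {
      release() {
        if (released) throw new Error("token already released");
        released = true;
        sem.#available++;
      },
    };
  }
  get available() {
    return this.#available;
  }
}

const sem = new TokenSemaphore(2);
const a = sem.tryAcquire();
const b = sem.tryAcquire();
const c = sem.tryAcquire(); // null: no third slot, and no way to fake a release
a.release();                // returning our own slot is the only thing we can say
```

Note there is no `release` on the semaphore itself, so "I personally am done" is the only statement a holder can make, and a double release throws rather than inflating the count.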
21:27 | <Luca Casonato> | To use the train or ship "Semaphore" metaphor, that's like waving a flag to let a user in, and then that user gets to wave the flag to let the next user in.
21:27 | <rbuckton> | wait ron you're arguing that there should be a release method that does not require a previously acquired capability? That's how AsyncSemaphore works, though releasing an empty semaphore does nothing.
21:28 | <rbuckton> | And I do have "release tokens" in many of the other coordination primitives in esfx |
21:28 | <snek> | i am new to someone arguing that linear types are bad |
21:28 | <littledan> | do <2 minutes remain? |
21:28 | <littledan> | Mark: Yes, that is the case |
21:29 | <keith_miller> | I guess it's also weird because all of the examples are behaviors that the particular lock implementation wants and don't seem to be architecture specific. |
21:29 | <ryzokuken> | do <2 minutes remain? |
21:30 | <keith_miller> | Maybe for compile times? :P |
21:30 | <snek> | lol, fair |
21:31 | <shu> | the high-level goal here is for the VMs to do the optimal thing in both the interpreter (where the call overhead is huge) and in inlined code (where you can inline pause ) |
21:31 | <shu> | do you see a way to do that other than an implementation-defined hint parameter? that was my best idea |
21:31 | <Chris de Almeida> | this topic goes to the end of day (~28 mins remaining) |
21:31 | <shu> | if that causes more confusion and problems, then i will drop it. but that also means giving up on that goal, which is too bad |
21:31 | <rbuckton> | i am new to someone arguing that linear types are bad It's not that linear types are bad for Semaphore, it's about capability and separation of concerns.
21:32 | <keith_miller> | Do you just want consistent timing? Or are you looking for something else? I guess I don't see how you're anticipating the goal to be achieved. |
21:33 | <rbuckton> | Just because a user can invoke acquire() or wait() on a semaphore does not mean that the next waiting user should automatically be let in when that user is done. A binary semaphore only lets one user in, but a counting semaphore can release n waiting users as necessary.
21:33 | <keith_miller> | I don't see how the loop count is related to the JIT tier |
21:33 | <shu> | i want consistent timing, yes |
21:34 | <shu> | if i have a fast path loop:
|
21:34 | <keith_miller> | I guess I still don't see how the loop hint helps with that? |
21:35 | <shu> | what i was going for was, if that do-while loop is running in the interpreter, Atomics.pause(spins) would always execute one pause |
21:35 | <keith_miller> | The goal seems reasonable. But wouldn't it just be implemented with extra pauses in the JIT? |
21:35 | <shu> | if it's inlined JIT code, Atomics.pause(spins) would use spins to determine how many pauses
21:35 | <shu> | The goal seems reasonable. But wouldn't it just be implemented with extra pauses in the JIT? pause |
21:35 | <shu> | i don't think it's the end of the world to do it for all calls of pause, to be clear |
21:35 | <shu> | i thought this would be uncontroversial |
21:36 | <keith_miller> | Are you saying you want the total spin loop time to be the same between JIT and interpreter? |
21:36 | <shu> | yes |
21:36 | <keith_miller> | Or that any given pause is the same |
21:36 | <shu> | i mean, or as close as feasible |
21:36 | <shu> | not some hard real-time kind of guarantee |
21:36 | <keith_miller> | I see, I misunderstood |
21:37 | <shu> | if i write a mutex implementation in JS, i don't want the contention fast path to be waiting for longer/shorter depending on whether you're in the interpreter or in the JITs |
21:38 | <shu> | there are different ways to accomplish that goal: i thought a hint parameter was the most flexible |
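The fast path shu is describing might look roughly like this sketch (names like `spinLock` and `maxSpins` are illustrative; it assumes an Atomics.pause that accepts an iteration-number hint, stubbed to a no-op on engines that lack it):

```javascript
// Sketch of a contended fast path using a pause hint. `lock` is an Int32Array
// view on shared memory; 0 = unlocked, 1 = locked. Atomics.pause is stubbed
// when unavailable so this runs anywhere.
const pause = Atomics.pause ?? (() => {});

function spinLock(lock, maxSpins = 100) {
  let spins = 0;
  do {
    if (Atomics.compareExchange(lock, 0, 0, 1) === 0) return true; // acquired
    pause(spins); // hint: this is the Nth consecutive pause in this loop
  } while (++spins < maxSpins);
  return false; // caller falls back to a slow path (e.g. Atomics.wait)
}

const lock = new Int32Array(new SharedArrayBuffer(4));
spinLock(lock);            // uncontended: acquires on the first iteration
Atomics.store(lock, 0, 0); // unlock
```

The point of the hint is that the same source loop can execute one cheap pause per call in the interpreter, while inlined JIT code can scale the wait using the iteration number.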
21:40 | <keith_miller> | Hmm, it almost feels like the right way to express that would be Atomics.compareExchangeWithRetry(..., count) or something but idk |
21:40 | <keith_miller> | Or at least less controversial at this point lol |
21:41 | <bakkot> | The user should be the one who indicates that they are done. Ability to acquire should not give you any further capabilities except the ability to indicate that you're done. And you should not have the ability to indicate that you're done without previously having exactly one corresponding completed acquire . |
21:41 | <keith_miller> | if i write a mutex implementation in JS, i don't want the contention fast path to be waiting for longer/shorter depending on whether you're in the interpreter or in the JITs |
21:41 | <shu> | right, it's a best effort thing |
21:41 | <shu> | my thread can certainly be preempted |
21:42 | <shu> | i mean, that doesn't stop C++ implementations from trying to be clever with inline asm and spin counts even though it's also up to OS scheduling |
21:42 | <shu> | since it shows enough improvement on enough workloads on enough OSes |
21:43 | <rbuckton> | It's easy enough for a consumer to wrap release:
But you can also implement semaphore wrappers that defer the actual call to |
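The wrapper direction rbuckton mentions could look like this sketch (SimpleSemaphore and withTokens are hypothetical stand-ins, not the proposed API): each acquire hands out a one-shot disposer built on top of a semaphore-wide release.

```javascript
// Stand-in for a semaphore that keeps release() on the semaphore itself.
class SimpleSemaphore {
  constructor(count) { this.count = count; }
  acquire() {
    if (this.count === 0) throw new Error("would block"); // sketch: no waiting
    this.count--;
  }
  release() { this.count++; }
}

// Wrapper: acquire() now returns a one-shot disposer instead of exposing
// the semaphore-wide release capability to every caller.
function withTokens(sem) {
  return {
    acquire() {
      sem.acquire();
      let done = false;
      return {
        dispose() {
          if (done) return; // double-dispose is a harmless no-op here
          done = true;
          sem.release();
        },
      };
    },
  };
}

const inner = new SimpleSemaphore(1);
const wrapped = withTokens(inner);
const token = wrapped.acquire();
token.dispose(); // returns the slot; callers never touch inner.release
```

This is the mirror image of building release-on-the-semaphore atop tokens: either surface can emulate the other in userland.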
21:45 | <rbuckton> | Counting semaphores, especially, have different cases than a simple binary semaphore or mutex |
21:45 | <keith_miller> | Yeah, idk, I think if I really cared about this enough I would probably just implement my spin loop in WASM where I have more control on the assembly that's generated anyway. |
21:47 | <snek> | why would you want to have to write a wrapper for the behavior that enforces correct usage |
21:47 | <rbuckton> | You're assuming a single correct usage, and there is not a single correct usage. |
21:48 | <Justin Ridgewell> | You're assuming a single correct usage, and there is not a single correct usage. dispose on the manager instead of the instance. |
21:49 | <rbuckton> | You can also separate the "locking" mechanism of a semaphore from the "counting" mechanism of a semaphore, just as we're considering separating the "locking" mechanism of a Mutex from the state of the mutex, e.g.:
|
21:49 | <shu> | but i still have not heard a counter argument on why it's bad to have a hint |
21:49 | <snek> | i think the release(n) use case is valid and is also better served with tokens. |
21:49 | <rbuckton> | I don't understand why you'd encourage putting
21:50 | <Justin Ridgewell> | tauri://localhost/#/%23tc39-space%3Amatrix.org/%23tc39-delegates%3Amatrix.org/%24-8J3nO4bp71-0prpP61GST9bcoyqHtYjRn_JCuXs0NA is on the manager instead of the token instance. |
21:50 | <rbuckton> | acquire().dispose() maybe makes sense for binary semaphores, but not necessarily counting semaphores. |
21:50 | <Justin Ridgewell> | Wow, that copy-link is borked.
21:50 | <rbuckton> | That link didn't work in matrix |
21:50 | <Justin Ridgewell> | https://matrixlogs.bakkot.com/TC39_Delegates/2024-07-29#L327 |
21:51 | <bakkot> | In JavaScript, I claim, the overwhelming most common use case for an API that looks kinda like this is "I want to make at most 5 simultaneous network requests" (or database connections, or filesystem accesses, or threads, or whatever). Leaving aside naming questions, I think that use case is best served by the API presented today. |
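The use case bakkot describes, capping concurrent work, looks roughly like this sketch (AsyncSemaphore and runLimited are illustrative names, not the proposed API):

```javascript
// Minimal async semaphore sketch: acquire resolves to a one-shot release
// function; released slots are handed directly to the next waiter.
class AsyncSemaphore {
  #free;
  #waiters = [];
  constructor(count) { this.#free = count; }
  async acquire() {
    if (this.#free > 0) this.#free--;
    else await new Promise((resolve) => this.#waiters.push(resolve));
    let done = false;
    return () => {
      if (done) return;
      done = true;
      const next = this.#waiters.shift();
      if (next) next();   // pass the slot straight to the next waiter
      else this.#free++;
    };
  }
}

// Run a batch of async thunks with at most `limit` in flight at once.
async function runLimited(limit, tasks) {
  const sem = new AsyncSemaphore(limit);
  return Promise.all(tasks.map(async (task) => {
    const release = await sem.acquire();
    try { return await task(); } finally { release(); }
  }));
}

// Example: at most 3 of these 10 tasks run at once.
runLimited(3, Array.from({ length: 10 }, (_, i) => async () => i * 2))
  .then((results) => console.log(results.length)); // logs 10
```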
21:51 | <Justin Ridgewell> | Cinny apparently doesn't implement link to message well.
21:51 | <snek> | i don't agree that there is a difference in api preference for n=1 vs n>1 |
21:51 | <rbuckton> | https://matrixlogs.bakkot.com/TC39_Delegates/2024-07-29#L327 release in the call to acquire to emulate what snek was suggesting. |
21:52 | <bakkot> | If the concern is "I sometimes want to do something other than that, and I think of that other thing as being called Semaphore", that's a fair piece of feedback to support the rename, but not to support a different API. |
21:52 | <Justin Ridgewell> | this.#sem.release(1), with #sem being a manager.
21:52 | <rbuckton> | That's what a { acquire(): Promise<Disposable> } would do anyways? |
21:53 | <kriskowal> | So, consider ReverseAsyncContext with add and reduce instead of set and get . |
21:54 | <snek> | what does this mean |
21:54 | <keith_miller> | If the hint does anything other than monotonically increase/decrease doesn't it kinda need to know the limit anyway? |
21:55 | <keith_miller> | I don't think it's necessarily bad just that without any comment on what it means then it's unclear. |
21:55 | <kriskowal> | We were looking into something like this for opentracing, opencontext, which is now opentel. |
21:55 | <snek> | oh no i just mean what is a ReverseAsyncContext |
21:55 | <kriskowal> | We were interested because it would make it possible to solve some "read after write" hazards, with a "cookie" or Lamport clock variants
21:55 | <shu> | there would be an internal limit |
21:55 | <kriskowal> | It's AsyncContext but the variables are sets instead of cells.
21:55 | <shu> | like no matter what the input is you'd never pause more than N times, whatever that is
21:56 | <shu> | 50, 100 |
21:56 | <keith_miller> | Like what if I like to write my loops as for (i = spinCount; i--;) Atomics.yield(i);
21:56 | <kriskowal> | Such that it's meaningful for information to flow up the stack instead of broadcast outward.
21:56 | <keith_miller> | because it's less characters or w/e |
21:56 | <snek> | oh i see, got it, ty |
21:56 | <keith_miller> | Wait then why not have the JIT inject a count into the IR? |
21:57 | <kriskowal> | The reducer would merge all the members of the set such that the next consumer would have less work. |
21:58 | <kriskowal> | It is probably a Bad Idea™ but this is the natural conclusion. There was a paper about it that I will have to find.
21:59 | <Chengzhong Wu> | Thanks for sharing! We were discussing possible merging/reducing with this flow at https://github.com/tc39/proposal-async-context/pull/94#discussion_r1651720741
22:00 | <shu> | Like what if I like to write my loops as |
22:00 | <littledan> | yeah we were thinking that it'd be bad if JS code got injected into the middle whenever context merges happen (which is all the time) |
22:01 | <rbuckton> | If the concern is "I sometimes want to do something other than that, and I think of that other thing as being called Semaphore", that's a fair piece of feedback to support the rename, but not to support a different API. I have two concerns:
|
22:01 | <shu> | the guidance is that you pass monotonic increasing integers as hints that you are pausing for the Nth time |
22:01 | <shu> | whatever strategy the VM chooses, it needs to understand that |
22:01 | <kriskowal> | Coming to think of it, putting a Set in an AsyncContext variable can't be prevented, and this kind of work will be possible regardless
22:01 | <shu> | it can then choose whatever |
22:02 | <kriskowal> | And it's only safe anyway if the mergers are consistent (which is enforced to a degree by ocap discipline around holding the variable)
22:02 | <rbuckton> | For Mutex , we have been discussing an API design to allow Mutex , Condition , and UniqueLock to work in a blocking manner (when off the UI thread in the browser), and an async manner (when on the UI thread in the browser). The current design has both sync and async methods on UniqueLock . |
22:03 | <kriskowal> | Ah, but contexts derived from a context holding growing subsets that might contribute to the parent set is not there. |
22:04 | <rbuckton> | Were we able to consider Semaphore for the MVP for shared structs, it likely would be a sharable object like Mutex , and have both a blocking (off UI thread) and non-blocking async (on UI thread) API, that would likely make a separate Semaphore unnecessary. |
22:04 | <snek> | why do you draw a line between binary and counting semaphores for this api? |
22:04 | <snek> | do you think they must have different apis? |
22:04 | <kriskowal> | Well, it'll be fun to talk about.
22:05 | <kriskowal> | (I for one do not think that we need to replay the history of mutual exclusion with all its conventional names.) |
22:05 | <rbuckton> | Not necessarily. But I'm not sold on acquire().dispose() . I can see it making sense on a binary semaphore, as it can only ever have one user, but a counting semaphore can have multiple users, or release in batches. |
22:06 | <kriskowal> | ((I also for one do not think that we need to foist the unavoidable downsides of mutual exclusion on JavaScript, which is more useful for some workloads due to omission while not precluding folks from using other languages for workloads for which they are better suited.)) |
22:07 | <shu> | keith_miller msaboff waldemar rbuckton https://github.com/tc39/proposal-atomics-microwait/issues/9 |
22:12 | <bakkot> | Mutex does not handle the "I want to make at most 5 simultaneous network requests" case, which is (I claim) the most common case for something like this in JS. But if you change the binary nature of Mutex to be counting instead, that's just what's proposed here, plus natural extensions already discussed: an off-thread-only acquireSync plus coordination with the host to be structuredClone -able. |
22:13 | <rbuckton> | why do you draw a line between binary and counting semaphores for this api? My reaction to acquire().dispose() is possibly due to my experience and prior use of semaphores in .NET and C++, which both keep the acquire/release capabilities on the semaphore itself. To address that reaction, I need additional time to consider the implications of such an API over the existing use cases. Normally I would be chomping at the bit for the DX improvement that acquire().dispose() might provide, but the shared structs proposal is trying to enable a very specific set of capabilities that have unique concerns that differ from most existing JS code.
22:14 | <keith_miller> | Hmm, I guess, I'm a bit confused about the objection to monotonically increasing pauses? Isn't that morally the same as what you proposed today? If you want shorter and shorter waits just write your loop as a count down. We could even add such a note to the spec or MDN. What changed that made you think monotonic was a problem? |
22:16 | <rbuckton> | I also regret I did not have the time to flesh out those concerns prior to the meeting, unfortunately a week was not long enough given other pressing concerns. I hope to have a longer discussion with Michael Ficarra following the plenary. |
22:17 | <snek> | i see. my personal experience using counting and binary semaphores in rust (which is just tokio::sync::Semaphore, its the same api) is that it works well for both cases, and it has helped to reduce bugs in more complex use cases like the ones you seem to be concerned about. |
22:17 | <shu> | i probably overreacted |
22:17 | <shu> | i am not really objected to monotonic increasing |
22:18 | <shu> | i think it is slightly less ergonomic because i think monotonic decreasing wait time is the right default interpretation |
22:18 | <shu> | but as you say i can of course just write the opposite loop |
22:18 | <shu> | but it's easy to flip that around: why is monotonic increasing the right default? |
22:19 | <keith_miller> | i think it is slightly less ergonomic because i think monotonic decreasing wait time is the right default interpretation |
22:19 | <shu> | yeah, tbf neither do i |
22:19 | <rbuckton> | i see. my personal experience using counting and binary semaphores in rust (which is just tokio::sync::Semaphore, its the same api) is that it works well for both cases, and it has helped to reduce bugs in more complex use cases like the ones you seem to be concerned about. I come at Semaphore from a different perspective. I was also a bit put off by the, albeit optional, wrap and wrapIterator convenience methods; they felt out of place and seemed to be pushing Semaphore towards being very specific to async iteration use cases.
22:19 | <keith_miller> | In other contexts backoff is typically longer and longer I guess |
22:19 | <shu> | the only tie-breaking data i see is there might be non-monotonic relationships that are reasonable as well |
22:20 | <shu> | (and like, there's no way to check for non-compliance, so whatever relationship we say in that step, an implementation can still do anything it wants) |
22:20 | <shu> | so given that, i am also happy to say that the hint is monotonic increasing |
22:21 | <keith_miller> | But non-monotonic seems like it would need to know the upper bound in most cases. |
22:21 | <shu> | so does monotonic |
22:21 | <shu> | you don't want a user to be able to pause for 2^53 times... |
22:21 | <keith_miller> | Well that would presumably be a static cap? |
22:21 | <shu> | right |
22:21 | <bakkot> | (Those would be on Governor, not Semaphore, and would be implemented purely in terms of acquire /release . also wrap is useful for lots of stuff; it's basically https://www.npmjs.com/package/throat which gets used a bunch) |
22:22 | <keith_miller> | But e.g. if you wanted to do X -> X-1 -> X-2 -> X-1 -> X you need to know the midpoint |
22:22 | <rbuckton> | From working with Atomics.Mutex in the Shared Structs dev trial, and experimenting with a UniqueLock approach that dovetailed with using, I found I quite liked the way concerns were separated. In that design, Mutex is essentially an opaque shared token. Taking, attempting to take, releasing, or assuming a lock required a non-shared UniqueLock wrapper that was bound to the local scope.
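That separation might be sketched like this (illustrative shapes only, not the dev-trial API; a plain Int32Array stands in for a shared struct field): the shareable Mutex carries only the lock word, and all take/release capability lives on a local, non-shared UniqueLock.

```javascript
// Sketch: the "shared" part is just a lock word; all locking capability
// lives on a UniqueLock that is local to one thread/scope.
class Mutex {
  state = new Int32Array(new SharedArrayBuffer(4)); // 0 = free, 1 = held
}

class UniqueLock {
  #mutex;
  #owns = false;
  constructor(mutex) { this.#mutex = mutex; }
  tryLock() {
    if (this.#owns) return true;
    this.#owns = Atomics.compareExchange(this.#mutex.state, 0, 0, 1) === 0;
    return this.#owns;
  }
  unlock() {
    if (!this.#owns) throw new Error("does not own the mutex");
    this.#owns = false;
    Atomics.store(this.#mutex.state, 0, 0);
  }
  get ownsLock() { return this.#owns; }
}

const m = new Mutex();        // the part that would be shareable across threads
const lk = new UniqueLock(m); // local wrapper holding the capability
lk.tryLock();                 // true: this scope now owns the mutex
lk.unlock();
```

The ownership bookkeeping (`#owns`) never crosses threads, which is the point: the shared object is inert, and misuse like unlocking a mutex you never took fails locally.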
22:22 | <keith_miller> | I dunno, maybe we're way too deep into the bike shed lol |
22:23 | <rbuckton> | Were we to do acquire().release() for Semaphore , I would have generally preferred to use a similar API design to Mutex /UniqueLock . |
22:23 | <rbuckton> | Pardon, I must break for dinner. I will return shortly. |
22:25 | <keith_miller> | I guess if I were reading the API I would like to have some intuition about how the VM should/might be using my hint that doesn't involve reading the implementation's source (if it's even available). |
22:25 | <keith_miller> | Especially for such a low level API |
22:26 | <keith_miller> | Anyway, I'll move this to the GH issue since that seems like the right place for this discussion at this point |
22:27 | <shu> | i think what's crystallizing for me is that i am fine with dropping the complete implementation-defined language, because since it's timing only, you can do anything you want as an as-if implementation |
22:27 | <shu> | so any language about how iterationNumber is used i'm fine with |
22:27 | <shu> | we can say it's monotonic increasing |
22:28 | <shu> | i'd prefer that to dropping the hint parameter entirely |
22:30 | <waldemar> | Would Waldemar be satisfied with bigger-number-longer-wait? Then let the engine handle that to match the VM's CPU instructions, e.g. smaller-number-longer-wait?
22:32 | <shu> | it was in the spec, but i take the point the wording was confusing to many and not well communicated in previous meetings |
22:34 | <shu> | waldemar: after chatting with keith_miller i think i am perfectly happy with spelling out the bigger-number-longer-wait semantics. but i'd like to caution that since timing is not observable behavior, an implementation can still choose another interpretation of the hint as an as-if optimization. would spelling out bigger-number-longer-wait satisfy your concerns? |
22:39 | <waldemar> |
|
22:40 | <waldemar> | And yes, I'd like to keep bigger-number-longer-wait but reword the note that states that the programmer should increase N linearly. What the programmer does with N is up to them. |
23:08 | <rbuckton> | .NET's Thread.SpinWait() (the equivalent of Atomics.pause()) describes itself as "a busy wait in a very tight loop that spins for the number of iterations specified"; though this isn't 100% accurate, it's fairly close. It uses some heuristics to normalize processor "yield" instructions, which in turn correlate to the platform/architecture-equivalent ASM "pause" or "yield" instructions.
23:09 | <rbuckton> | .NET's SpinWait struct uses Thread.SpinWait() under the hood, but maintains a local counter which does the more complex backoff mechanism. |
23:10 | <shu> | bakkot: i did not have the capacity to fully pay attention to the governor thing and the Atomics.pause discussion, but a fully generic "unlock token" concept seems elegant. but the gut reaction i have is (wait for it) performance concerns about the allocations of these tokens in hot paths -- ideally i'd like to not bottleneck finer grained locking architectures |
23:10 | <shu> | rbuckton: i think i've been convinced on "large N is longer wait" is just fine to spec, see the PR https://github.com/tc39/proposal-atomics-microwait/pull/11 |
23:10 | <rbuckton> | e.g., spinWait.SpinOnce(sleep1Threshold) increments the internal spin counter and uses the provided argument to indicate how often to back off to a full Thread.Sleep(1) . |
23:11 | <shu> | the user code can pass in linearly increasing N if they want linear backoff, exponentially increasing N if they exponential backoff, or the reverse if they want "smaller N longer wait" semantics like i originally envisioned. the JIT and interpreter can scale the wait time differently |
23:11 | <rbuckton> | Something like the SpinWait struct can be implemented in userland to introduce a backoff mechanism, so long as we have Atomics.pause(iterations) to trigger a busy loop w/o sleeping, which aligns with "large N is longer wait" |
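A userland SpinWait along the lines rbuckton describes might look like this sketch (class name, `spinOnce`, and the threshold are illustrative, loosely modeled on .NET's SpinWait; Atomics.pause is stubbed when absent):

```javascript
// Sketch of a .NET-style SpinWait in userland: a local counter drives the
// backoff policy, while Atomics.pause does the actual busy wait, consistent
// with "large N is longer wait".
const pause = Atomics.pause ?? (() => {});

class SpinWait {
  #count = 0;
  get count() { return this.#count; }
  // Returns true once the caller should stop busy-waiting and escalate.
  spinOnce(yieldThreshold = 20) {
    this.#count++;
    if (this.#count <= yieldThreshold) {
      pause(this.#count); // later spins busy-wait longer
      return false;
    }
    // Past the threshold a real implementation would yield or sleep
    // (e.g. Atomics.wait off the main thread); here we just report it.
    return true;
  }
  reset() { this.#count = 0; }
}

const sw = new SpinWait();
while (!sw.spinOnce(5)) { /* retry some shared-memory operation */ }
```

The backoff curve lives entirely in user code; the engine only needs the "bigger N, longer wait" contract for `pause` itself.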
23:11 | <bakkot> | bakkot: i did not have the capacity to fully pay attention to the governor thing and the Atomics.pause discussion, but a fully generic "unlock token" concept seems elegant. but the gut reaction i have is (wait for it) performance concerns about the allocations of these tokens in hot paths -- ideally i'd like to not bottleneck finer grained locking architectures maybe a semaphore.acquireAndCall(fn) for the hot paths would make sense. though I'd guess in hot paths the token objects would never leave the nursery and therefore(?) not be that expensive?
23:12 | <rbuckton> | I think I was conflating how SpinWait the struct and Thread.SpinWait() work in my responses. |
23:13 | <shu> | I think that having |
23:13 | <bakkot> | well, either it's short or infrequent, can't be both long and frequent |
23:14 | <shu> | and if your GC has per-thread linear allocation buffers for shared objects, then hopefully also okay |
23:14 | <bakkot> | the tokens need not be shareable |
23:14 | <bakkot> | though they could be I guess |
23:14 | <shu> | oh, these are sync tokens? |
23:14 | <shu> | i'm not clear on what's sync and what's async |
23:14 | <bakkot> | design is the same either way |
23:15 | <bakkot> | that is, it's either acquireAsync: Promise<token> or acquireSync: token , but in either case you have a sync token.release() |
23:15 | <bakkot> | (by "either" I mean that my ideal Semaphore would have both, with the sync one only available off main thread) |
23:16 | <bakkot> | but you are not generally going to need to send the tokens themselves across threads, except in very rare cases which I don't know if we'd need to support if it's expensive to do so |
23:17 | <bakkot> | so the tokens would not need to be shared objects |
23:17 | <bakkot> | would be kind of nice, if it was easy, but it's certainly not necessary |
23:17 | <shu> | but the decision on whether the token is shareable needs to be a priori |
23:18 | <shu> | unless token above was shorthand for 2 different types |
23:18 | <shu> | Promise<shareable token> and sync token |
23:20 | <bakkot> | it could also be cloneable-but-not-shareable |
23:20 | <shu> | yeah |
23:20 | <bakkot> | which is perhaps the best of all worlds here |
23:21 | <shu> | anyway this deserves more of my time and i'm hesitant to give any preferences until i've had time to properly digest |
23:21 | <bakkot> | yeah |
23:21 | <bakkot> | this affects mutexes too but presumably there's still plenty of time for revisions to the shared structs proposal? |
23:22 | <shu> | yes, i'd say definitely true for synchronization primitives |
23:23 | <bakkot> | for which, https://github.com/tc39/proposal-structs/issues/27 |