18:12
<shu>
One of the uses for compareExchange is to implement lock-free updates (i.e., atomically compare and update, returning a value so you can see if you succeeded). Requiring locks to use compareExchange kind of defeats the purpose.
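The lock-free update pattern mentioned here can be shown with today's shipped Atomics API on an Int32Array over a SharedArrayBuffer (this is existing JS, not the shared-structs proposal under discussion; `lockFreeUpdate` is an illustrative name, not a real API):

```javascript
// Sketch: a lock-free update loop using Atomics.compareExchange on an
// Int32Array backed by a SharedArrayBuffer.
const sab = new SharedArrayBuffer(4);
const cell = new Int32Array(sab);

// Atomically apply `fn` to cell[0], retrying if another thread raced us.
function lockFreeUpdate(fn) {
  while (true) {
    const old = Atomics.load(cell, 0);
    const next = fn(old);
    // compareExchange returns the value it actually saw; if that matches
    // `old`, our write won and we're done, otherwise retry.
    if (Atomics.compareExchange(cell, 0, old, next) === old) return next;
  }
}

lockFreeUpdate((n) => n + 1);
```

The return value of `compareExchange` is what lets the caller see whether the update succeeded, which is exactly what a mutex-based version would lose.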
agreed
18:12
<shu>
That's interesting, so then compareExchange, add, sub, etc. will just error on a shared struct until types are added? The issue with the boxed case is that doing the loads atomically to read the actual value has high cost?
no, the issue is that there are no CPU instructions to do this in a way without locking it with an actual mutex, which as Ron said above, kinda defeats the purpose
18:13
<shu>
add and sub will just error without field types, yes
18:13
<shu>
compareExchange would work only for things with identity, like objects
(i.e., compare by reference identity rather than by value)
18:13
<shu>
i don't know how to make compareExchange work for e.g. numbers
18:14
<shu>
specifically for cmpxchg i think that's probably fine
18:15
<shu>
most cmpxchg in my experience are for lock-free updates of state, and the workaround in this case is to wrap it in an Object so you can do pointer comparison on the Object pointer
18:16
<shu>
it's a bit annoying if you have a fixed number of states, like what you might use an int enum for in C++, since this means you'd need to make a few constant Objects ahead of time like
18:16
<shu>
for example, if you were writing a mutex yourself
18:16
<shu>
const LOCKED_STATE = {}; const UNLOCKED_STATE = {}; const CONTENDED_LOCKED_STATE = {}; and use those objects for the cmpxchgs
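Since shared structs and object-identity cmpxchg aren't shipped yet, the same three-state mutex idea can be sketched with today's Atomics using an int enum in an Int32Array standing in for the sentinel objects (the state names and helper functions are made up for illustration):

```javascript
// Sketch: three-state mutex using an int enum in place of the
// LOCKED_STATE / UNLOCKED_STATE / CONTENDED_LOCKED_STATE objects.
const UNLOCKED = 0, LOCKED = 1, CONTENDED = 2;
const state = new Int32Array(new SharedArrayBuffer(4));

function tryLock() {
  // Succeeds only if we atomically transition UNLOCKED -> LOCKED.
  return Atomics.compareExchange(state, 0, UNLOCKED, LOCKED) === UNLOCKED;
}

function unlock() {
  const prev = Atomics.exchange(state, 0, UNLOCKED);
  // A real mutex would Atomics.notify a waiter here if prev === CONTENDED.
  return prev;
}
```

In the shared-structs world being discussed, the three integer constants would be replaced by three pre-allocated shared objects compared by identity.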
18:17
<shu>
it is kind of annoying but also fundamental, we don't have ints in the language outside of TAs...
18:20
<shu>
another way to say this restriction is: i know how to make atomics work for pointers. the only values guaranteed to be implemented with pointers in all implementations are things with identity. values without identity won't work with atomics because the implementation strategies differ, and there's no way to make the atomics work without the use of expensive mutexes in all implementations. and if you were using mutexes, you might as well use mutexes in the user code to do your synchronization, since the point of Atomics is fast, non-blocking, lock-free operations
18:21
<shu>
and if we were to extend structs with field types in the future, we could make Atomics work on fields with integer field types, but that would also basically require that all implementations must implement those fields as unboxed
22:14
<rbuckton>
Though I don't think {} works as you describe, since a normal JS object isn't shared.
22:24
<rbuckton>
So, my understanding is that in V8, numbers are boxed and stored on the heap, and those heap-allocated boxes are per isolate?
23:27
<shu>
rbuckton: quite right, needed to be shared objects
23:28
<shu>
rbuckton: in V8, non-Smi numbers (for simplicity in this conversation assume Smis are 31-bit ints) are boxed and stored on the heap
23:29
<shu>
they can be allocated in a shared heap that can be shared across Isolates, but the problem is that the sensible semantics for cmpxchging a number isn't box equality but actual numeric equality
23:29
<shu>
and the only way to do that and use an underlying boxing implementation strategy is to have an internalization table for all shared numbers, so you can ensure there is a single canonical heap allocation for a particular number
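The internalization ("number table") idea can be sketched in plain JS: keep one canonical box per numeric value, so that box identity implies numeric equality. The `Map` and box shape here are stand-ins for what an engine would do in its shared heap, not a real mechanism:

```javascript
// Sketch: an internalization table mapping each number to a single
// canonical heap box, so pointer comparison works for cmpxchg.
const canonicalBoxes = new Map();

function internNumber(n) {
  let box = canonicalBoxes.get(n);
  if (box === undefined) {
    box = { value: n };
    canonicalBoxes.set(n, box);
  }
  return box;
}
```

With interning, two boxes for the same number are the same allocation, so identity comparison and numeric comparison coincide. The costs shu mentions follow directly: every shared number allocation goes through a lookup, and the table itself must be thread-safe.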
23:30
<shu>
and i haven't been able to think of a way to apply that canonicalization lazily, i.e. only canonicalize the fields that you want to cmpxchg
23:30
<shu>
because to replace a non-canonical copy with a canonical copy of a boxed number is of course a separate write that might become observable to other threads
23:31
<shu>
but also having a "number table" like an interned string table is complex and may make number performance rather strange
23:36
<shu>
for other atomic operations like add, the boxing itself makes the operations unimplementable
23:37
<shu>
they'd end up needing to be implemented as "fetch original box, unbox, add, re-box, cmpxchg with original box" loops
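That loop can be illustrated single-threaded, with `cell` and `casBox` as stand-ins for the engine-internal pointer slot and hardware CAS (in real multithreaded code the identity compare-and-swap would have to be an actual atomic instruction):

```javascript
// Illustration of the "fetch box, unbox, add, re-box, cmpxchg" loop
// an engine with a boxing strategy would be forced into for Atomics.add.
const cell = { box: { value: 0 } };

function casBox(expected, replacement) {
  // Identity compare on the box, as a hardware CAS on the pointer would do.
  if (cell.box === expected) {
    cell.box = replacement;
    return true;
  }
  return false;
}

function atomicAdd(n) {
  while (true) {
    const oldBox = cell.box;                 // fetch original box
    const sum = oldBox.value + n;            // unbox, add
    const newBox = { value: sum };           // re-box: a fresh allocation!
    if (casBox(oldBox, newBox)) return sum;  // cmpxchg with original box
  }
}
```

Note that every retry allocates a new box, which is why this shape is undesirable compared to a single unboxed fetch-and-add instruction.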
23:37
<shu>
which seems undesirable?
23:38
<shu>
something to discuss at this week's call :)