21:32
<rbuckton>

shu: I spoke with Luis and we both concur that `using` is preferred in the long term. For context, these are my primary concerns regarding a callback-based API:

  • Since the addition of async/await, many JS programmers seem to be moving away from CPS for asynchronous code in new projects.
  • Callback-based APIs violate Tennent's Correspondence Principle, requiring complex rewrites of statements to introduce the callback when refactoring existing code and making things like for loops harder to reason about.
  • An auto-locking callback API assumes no composition of locking mechanisms, such as building a SharedMutex that supports lock promotion, or holding a lock on a mutex longer than the scope of a single function call.
  • While it's feasible to build a rudimentary non-callback wrapper for the callback API, such a wrapper will not release its lock if the worker thread terminates abruptly, such as due to an exception or a call to worker.terminate(). With an object-based lock, it is feasible to write a callback-based wrapper that does not suffer from this limitation.
  • Object-based locks are more flexible in terms of advanced scenarios, such as implementing a "scoped lock" that can lock multiple mutexes at once with a deadlock prevention algorithm (callback-based API is far more complicated and produces an arbitrarily deep call stack), or locks that are only conditionally taken (i.e., to avoid re-acquiring a lock in a recursive algorithm).
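To make the conditionally-taken-lock point concrete, here is a hedged sketch. `SimpleMutex` and `UniqueLock` below are single-threaded stand-ins invented for illustration; the real `Atomics.Mutex` and `UniqueLock` proposed in this discussion do not exist yet:

```typescript
// Single-threaded stand-ins for the proposed primitives (not real APIs).
class SimpleMutex {
  locked = false;
  tryLock(): boolean {
    if (this.locked) return false;
    this.locked = true;
    return true;
  }
  unlock(): void {
    this.locked = false;
  }
}

class UniqueLock {
  mutex: SimpleMutex;
  ownsLock: boolean;
  constructor(mutex: SimpleMutex) {
    this.mutex = mutex;
    // A real implementation would block until acquired; the stub try-locks.
    this.ownsLock = mutex.tryLock();
  }
  unlock(): void {
    if (this.ownsLock) {
      this.mutex.unlock();
      this.ownsLock = false;
    }
  }
}

const mut = new SimpleMutex();

// A conditionally taken lock: only the outermost call of a recursive
// algorithm acquires the mutex, so the recursion never self-deadlocks.
function total(values: number[], depth = 0): number {
  const lck = depth === 0 ? new UniqueLock(mut) : undefined;
  try {
    if (values.length === 0) return 0;
    return values[0] + total(values.slice(1), depth + 1);
  } finally {
    lck?.unlock(); // with `using`, disposal would do this automatically
  }
}
```

A callback-based API has no natural way to express "take this lock only at the top level" without threading extra state through every callback.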
21:33
<rbuckton>

Regarding the TCP issue, consider something as simple as a for loop with continue, break, and return:

// non-locking code
outer: for (const queue of queues) {
  for (const msg of queue.getMessages()) {
    if (msg.stop) return msg.result;
    if (msg.exitQueue) break outer; 
    if (!msg.accept()) continue;
    processMessage(msg);
  }
}
// add lock using callback-based API
outer: for (const queue of queues) {
  for (const msg of queue.getMessages()) {
    const result = Mutex.lock(mut, () => {
      if (msg.stop) return { op: "return", value: msg.result };
      if (msg.exitQueue) return { op: "break_outer" }; 
      if (!msg.accept()) return;
      processMessage(msg);
    });
    if (result?.op === "return") return result.value;
    if (result?.op === "break_outer") break outer;
  }
}
// add lock via `using`:
outer: for (const queue of queues) {
  for (const msg of queue.getMessages()) {
    using lck = new UniqueLock(mut);
    if (msg.stop) return msg.result;
    if (msg.exitQueue) break outer; 
    if (!msg.accept()) continue;
    processMessage(msg);
  }
}
21:35
<rbuckton>

And a rough sketch of a UniqueLock API might look like:

class UniqueLock {
  constructor(mutex?: Atomics.Mutex, t?: "lock" | "defer-lock" | "try-to-lock" | "adopt-lock");
  static lockAsync(mutex: Atomics.Mutex): Promise<UniqueLock>;
  get mutex(): Atomics.Mutex | undefined;
  get ownsLock(): boolean;
  tryLock(timeout?: number): boolean;
  lock(): void;
  lockAsync(): Promise<boolean>;
  unlock(): void;
  release(): void;
  [Symbol.dispose](): void;
}

with usage like

// sync lock
{
  using lck = new UniqueLock(mut);
  ...
}

// async lock (option 1)
{
  using lck = await UniqueLock.lockAsync(mut);
  ...
}
 
// async lock (option 2)
{
  using lck = new UniqueLock(mut, "defer-lock");
  await lck.lockAsync();
}
21:45
<shu>
i see, thanks
21:45
<shu>
i can live with this
21:46
<shu>
rbuckton: Mutex then would be this opaque thing, no prototype methods, nothing?
21:47
<shu>
my only quibble with the sketch is i would've figured tryLock and lock and friends would be on Mutex, with UniqueLock just providing a Symbol.dispose
21:47
<shu>
like what you do in C++
21:50
<rbuckton>
C++ std::unique_lock has a similar API.
21:51
<rbuckton>
std::scoped_lock has no methods, but also locks multiple mutexes at once
21:53
<rbuckton>

And sometimes you need to hand off a lock to something else, or perform programmatic checks. For example:

using lck = new UniqueLock(mut, "try-to-lock");
if (lck.ownsLock) {
  // fast path
}
else {
  // slow path, may include calls to `wait` for conditions, etc.
  lck.lock(); // blocks
}
21:54
<rbuckton>
And yes, mutex could just be opaque.
21:55
<shu>
why start with that and not mutex_guard?
21:55
<rbuckton>
UniqueLock could also accept user-defined lockables if you need to build more complex coordination primitives for your use case.
21:55
<shu>
(again, the minimal thing). i don't want to lead with things like deadlock avoidance for sequencing locks, like unique_locks are often used for
21:55
<rbuckton>
Because UniqueLock is the most flexible as a building block.
21:56
<rbuckton>
IIRC, unique_lock doesn't provide deadlock avoidance. That's the job of scoped_lock. And I can build scoped_lock on top of unique_lock if I need to
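A hedged sketch of that layering: a scoped_lock-style helper built on a UniqueLock-style object. It avoids deadlock by always acquiring mutexes in a canonical order, which is one common strategy (C++'s std::scoped_lock uses a different try-and-back-off algorithm via std::lock). All classes here are single-threaded stand-ins, not the proposed API:

```typescript
// Single-threaded stand-ins; Atomics.Mutex / UniqueLock are not real APIs yet.
let nextMutexId = 0;

class SimpleMutex {
  readonly id = nextMutexId++;
  locked = false;
  lock(): void {
    if (this.locked) throw new Error("would deadlock (stub is single-threaded)");
    this.locked = true;
  }
  unlock(): void {
    this.locked = false;
  }
}

class UniqueLock {
  mutex: SimpleMutex;
  ownsLock: boolean;
  constructor(mutex: SimpleMutex) {
    this.mutex = mutex;
    mutex.lock();
    this.ownsLock = true;
  }
  unlock(): void {
    if (this.ownsLock) {
      this.mutex.unlock();
      this.ownsLock = false;
    }
  }
}

// A scoped_lock-style wrapper: acquires every mutex in ascending id order,
// so two threads locking the same set can never acquire in opposite orders.
class ScopedLock {
  private locks: UniqueLock[];
  constructor(...mutexes: SimpleMutex[]) {
    this.locks = [...mutexes]
      .sort((a, b) => a.id - b.id)
      .map(m => new UniqueLock(m));
  }
  release(): void {
    // Unlock in reverse order of acquisition.
    for (let i = this.locks.length - 1; i >= 0; i--) this.locks[i].unlock();
  }
}
```

Usage would be `const sl = new ScopedLock(b, a);` — argument order doesn't matter, since acquisition order is canonicalized internally.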
21:56
<shu>
ah perhaps i'm confusing the two
21:56
<shu>
okay
21:56
<rbuckton>
See https://github.com/microsoft/TypeScript/blob/shared-struct-test/src/compiler/threading/scopedLock.ts
21:56
<rbuckton>
And https://github.com/microsoft/TypeScript/blob/shared-struct-test/src/compiler/threading/uniqueLock.ts
21:57
<shu>
i think deadlock avoidance definitely runs afoul of not minimal, but i see that this doesn't have that, that seems fine
21:58
<rbuckton>
Both of those use an object-based wrapper for Mutex to avoid callbacks, but potentially run afoul of bullet #4 above (assuming the callback-based approach currently releases the mutex if it is held when the worker is abruptly terminated)
21:59
<rbuckton>
UniqueLock gives you the minimal functionality and flexibility necessary to build more complex things.
22:00
<shu>
what's the 4th bullet? thread termination?
22:00
<rbuckton>
And only really exposes lock, tryLock, and unlock
22:00
<rbuckton>
Yeah
22:00
<shu>
yeah that's kind of tricky
22:01
<shu>
it'd be nice to automatically release but... that has cost
22:02
<rbuckton>
Even if there isn't automatic release, the object wrapper incurs more overhead since it needs both a Mutex and another boolean field.
22:03
<shu>
Worker.terminate() is odd
22:03
<shu>
bb in an hour
23:18
<shu>
back
23:18
<shu>
okay, so, i see Web Locks makes an attempt to release an agent's held locks upon termination
23:19
<shu>
if we aspire to do the same, that means keeping a list, ugh
23:25
<shu>
i guess each execution context can keep a stack of currently held mutexes that, upon termination, get unlocked
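A minimal sketch of that bookkeeping, with invented names (`heldMutexes`, `acquire`, `releaseAllOnTermination`) and a single-threaded `SimpleMutex` stand-in; a real host would do this per execution context, inside the engine:

```typescript
// Hypothetical per-execution-context registry: every acquired mutex is
// pushed onto a stack, and termination unwinds the stack in LIFO order.
class SimpleMutex {
  locked = false;
  lock(): void { this.locked = true; }
  unlock(): void { this.locked = false; }
}

const heldMutexes: SimpleMutex[] = [];

function acquire(m: SimpleMutex): void {
  m.lock();
  heldMutexes.push(m);
}

function release(m: SimpleMutex): void {
  m.unlock();
  const i = heldMutexes.lastIndexOf(m);
  if (i >= 0) heldMutexes.splice(i, 1);
}

// What the host would run when tearing down the context,
// e.g. in response to worker.terminate().
function releaseAllOnTermination(): void {
  while (heldMutexes.length > 0) heldMutexes.pop()!.unlock();
}
```

The cost shu mentions is exactly this list maintenance on every lock and unlock, even when termination never happens.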