01:25
<Chris de Almeida>
rbuckton: you got it
07:31
<Rob Palmer>
The sign-in form is now posted if you wish to dial into the Zoom for today's plenary: https://github.com/tc39/Reflector/issues/473
07:49
<Rob Palmer>
Please could someone attempt to dial in so we can confirm AV is working
08:01
<Jack Works>
it's working (zoom web)
08:18
<bakkot>
huh, "OramaSearch" is new to me; any delegates from them here today?
08:18
<littledan>
Michele Riva is the delegate, I think?
08:18
<ryzokuken (✈️ to 🇳🇴)>
yeah
08:18
<ryzokuken (✈️ to 🇳🇴)>
it's a new org
08:20
<Jack Works>
i find the voice quality is bad... i need more effort to hear what the meeting is talking about
08:28
<Rob Palmer>
Jack Works: is it one person's voice quality?
08:31
<Jack Works>
Jack Works: is it one person's voice quality?
it's the in-person one
08:32
<Rob Palmer>
The whole room is "hot". Meaning all mics on the whole time. So we need everyone to avoid noise unless they are speaking, e.g. placing cups down gently.
08:34
<Christian Ulbrich>
Rob Palmer: It would also be possible to mute each mic directly, because they each have a direct mute button.
08:35
<Rob Palmer>
That is a good point - let's try using that
08:41
<littledan>
Yeah, this does look nicer! Thanks editors
08:44
<bakkot>
tcq does indeed appear to be down
08:45
<bakkot>
chaos :(
08:46
<Michael Ficarra>
I don't even know what to do without TCQ anymore
08:47
<Michael Ficarra>
we are now 100% dependent
08:47
<nicolo-ribaudo>
A google doc with someone manually sorting the items by priority?
08:47
<Bradford Smith>
current discussion is not audible remotely
08:48
<Christian Ulbrich>
Sorry! We forgot to unmute; it was about missing conclusions.
08:49
<Christian Ulbrich>
Just give us a nod as a signal and we will unmute.
08:52
<bakkot>
is TCQ still just running off of some Azure account Brian is expensing?
08:54
<Christian Ulbrich>
bakkot: Looks so -> https://github.com/bterlson/tcq/blob/b5be1287a6843f24dc570c1d951d1c26ac566d66/src/server/db.ts#L1C14-L1C18
08:55
<Michael Ficarra>
why is #783 being done as a needs-consensus PR and not a staged proposal?
09:00
<bakkot>
I don't think it's that useful to make things like this optional?
09:00
<bakkot>
at least not optional for web browsers
09:00
<bakkot>
as a programmer I am going to want to rely on their existence
09:00
<littledan>
+1
09:01
<bakkot>
and as a user I'm going to be annoyed if the page behaves differently in my browser than the one the developer was using
09:01
<ryzokuken 🇳🇴>
but when it comes to internationalization, that's bound to happen, right?
09:02
<ryzokuken 🇳🇴>
I mean, I totally agree with the general sentiment
09:02
<bakkot>
... is it?
09:03
<Michael Ficarra>
bakkot: ligatures and opentype features are optional; that doesn't make a font not useful
09:03
<littledan>
but when it comes to internationalization, that's bound to happen, right?
well, we still try to limit the scope of this
09:03
<littledan>
... is it?
it is because different browsers make different tailorings of the data
09:05
<littledan>
sffc: Can you link to that PR for number format for the notes?
09:05
<littledan>
oops it's in TCQ
09:05
<littledan>
https://github.com/tc39/proposal-intl-numberformat-v3/pull/130
09:09
<Rob Palmer>
I don't even know what to do without TCQ anymore
There is a defined backup plan involving a spreadsheet.
09:09
<Michael Ficarra>
good to hear!
09:10
<bakkot>
someone is typing surprisingly loudly?
09:10
<HE Shi-Jun>
does import attr support boolean values?
09:10
<Rob Palmer>
I think it may be ryzokuken 🇳🇴 aggressively typing.
09:11
<Jack Works>
does import attr support boolean values?
no
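For reference, attribute values in the current proposal are restricted to string literals; a sketch using the JSON-modules case:

import config from "./config.json" with { type: "json" };
// with { lazy: true } is not allowed under the current grammar: only string literal values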
09:11
<Michael Ficarra>
I think it is littledan just surprisingly close to the mic
09:11
<rbuckton>
Is someone typing on an open mic?
09:12
<littledan>
sorry that may be me. Is it fixed now?
09:12
<littledan>
(I moved off of the table)
09:12
<bakkot>
there are not currently typing noises
09:13
<HE Shi-Jun>
no
If it supports boolean I would like it to also support null :)
09:14
<Michael Ficarra>
HE Shi-Jun: Jack Works: I would oppose that because I want to keep a future open where we do not ToString these values
09:15
<bakkot>
yeah I like keeping the restrictions tight until there are concrete things we want to loosen them for
09:15
<bakkot>
definitely I can imagine features where we'd want true, false, null etc but I think we can come back when that's relevant?
09:15
<HE Shi-Jun>
we do ToString for attribute values now?
09:16
<bakkot>
otoh I guess this is partially for bundlers to build out their own things so maybe that doesn't make sense
09:34
<Michael Ficarra>
I support auto deferral in this case, but I don't want this or iterators to be considered precedent to justify auto deferring in future proposals
09:43
<Bradford Smith>
could someone remind me what "hint" means in this context?
09:43
<Michael Ficarra>
it's an alias in the spec
09:43
<littledan>
it means "flag" in practice
09:43
<littledan>
it's a silly name
09:43
<Michael Ficarra>
it's used in places where we provide an AO with context about its use
09:44
<Bradford Smith>
so it's not something specific to the syntax for Explicit Resource Management, then. Thanks
09:44
<ryzokuken 🇳🇴>
yeah, I always assumed that it was a contextual hint from the caller, as in toPrimitive
09:44
<Michael Ficarra>
correct, entirely spec internal
09:44
<nicolo-ribaudo>
so it's not something specific to the syntax for Explicit Resource Management, then. Thanks
Well, in this specific case the hint is whether we are using using or await using
09:45
<ryzokuken 🇳🇴>
hmm, I would've said the word "context" is better but at this point it's so overloaded that it's probably not
09:45
<Michael Ficarra>
same with "flag"
09:48
<nicolo-ribaudo>
ptomato (at TC39, limited availability): It is not possible unless we want all the const and let declarations to perform property access
09:52
<bakkot>
or if we change the semantics of using, like suggested here
09:52
<nicolo-ribaudo>
ptomato (at TC39, limited availability): It is not possible unless we want all the const and let declarations to perform property access
Oh well, the resource could throw if it's used before accessing its [Symbol.dispose] property
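A rough sketch of that idea (illustrative only; the doWork body and cleanup are placeholders):

function createGuardedResource() {
  let acquired = false;
  return {
    doWork() {
      if (!acquired) throw new TypeError("declare this resource with `using` before calling doWork");
      // ... real work ...
    },
    get [Symbol.dispose]() {
      // a `using` declaration reads this property up front, so treat that as "entered"
      acquired = true;
      return () => { /* release the underlying resource */ };
    },
  };
}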
09:52
<bakkot>
that's kinda cute actually
09:53
<Luca Casonato>
And also, it is totally reasonable to move around disposables using let or const, before passing it to a using
09:53
<bakkot>
but probably confusing
09:53
<littledan>
if we want to seriously consider Symbol.enter, we should demote this proposal to Stage 2
09:55
<ryzokuken 🇳🇴>
And also, it is totally reasonable to move around disposables using let or const, before passing it to a using
I don't quite understand how you envision this
09:56
<Michael Ficarra>
littledan: I think it works fine as a follow-on
09:58
<littledan>
this is not the kind of thing that will be good to have a compatibility matrix around. It's also not some complicated uncertain technology--we know the design space already.
10:00
<Bradford Smith>
littledan: Are you concerned about avoiding a future where some disposables are wrapper objects that pseudo-enforce use of using and some are not?
10:00
<littledan>
it will be annoying to have to reason about different possible ways that this protocol could be implemented, in general. Possible, but annoying
10:01
<littledan>
Queue:
1. Reply: Concern with disposers that throw exception in GC for embeddings (Philip Chimento)
2. New Topic: I am not in favour of recommending GC based cleanup. Should -> may? (Luca Casonato, @denoland)
3. New Topic: Is it possible to enforce use of using in the language, rather than in tooling (Philip Chimento)
4. New Topic: Test sanitizers can be used to help with this (Luca Casonato, @denoland)
5. New Topic: I support looking into the @@enter/@@asyncEnter, possibly part of this proposal. (Michael Saboff)
10:02
<ryzokuken 🇳🇴>
thanks, could we perhaps also put this in the notes so we have a copy stored?
10:03
<sffc>

is tcq.app down for anyone else? The console says

WebSocket connection to '<URL>' failed: WebSocket is closed before the connection is established.
wKFO:1 Unchecked runtime.lastError: The message port closed before a response was received.
10:04
<ryzokuken 🇳🇴>
yeah, it's down again
10:09
<rbuckton>
To clarify, I am fine with not providing a recommendation to ensure cleanup if a disposable is dropped on the floor, regardless of the practice in other languages. It was brought up in #159 that one of the concerns about dropping a disposable is that native handles, which are usually just tracked by a number value, can leak and are not visible in a heap dump.
10:11
<rbuckton>
That said, it is feasible to implement a "cleanup on GC" behavior using FinalizationRegistry in user code if needed, given that's one of FinalizationRegistry's use cases, and such behavior already exists in NodeJS for many native handles.
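For anyone reading along, a minimal sketch of that user-code fallback (createNativeHandle/closeNativeHandle are hypothetical helpers):

const registry = new FinalizationRegistry((handle) => {
  // last-resort cleanup if the disposable was dropped without `using`
  closeNativeHandle(handle);
});

function openResource() {
  const handle = createNativeHandle();
  const resource = {
    [Symbol.dispose]() {
      closeNativeHandle(handle);
      registry.unregister(resource); // explicit disposal happened; skip the GC path
    },
  };
  registry.register(resource, handle, resource);
  return resource;
}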
10:24
<rbuckton>

Regarding the Symbol.enter suggestion, I have been considering the following as a follow-on proposal:

Symbol.enter (or Symbol.enterContext?) - indicates a method that, when invoked, enters a context and returns a value associated with that context. This method is optional, and if missing is implicitly handled as if the definition was:

[Symbol.enter]() {
  return this;
}

Symbol.exit (or Symbol.exitContext?) - indicates a method that is invoked when the context is exited at the end of the block. Receives two arguments, hasError and error, that indicate whether an exception occurred prior to context exit. The method may return true to inform the caller not to throw error (swallowing the exception). Any other return value is ignored, indicating the caller should throw error. The method may also choose to throw its own error, but should not throw error itself. This method is also optional, and if missing is implicitly handled as if the definition was:

[Symbol.exit](hasError, error) {
  try {
    this[Symbol.dispose]();
  } catch (e) {
    if (hasError) {
      throw new SuppressedError(e, error);
    } else {
      throw e;
    }
  }
  return undefined;
}

Thus, [Symbol.dispose]() on its own would constitute a lightweight context manager.

10:28
<rbuckton>
Things do become more complex for async context managers, however, given that they must await the result of [Symbol.asyncEnter]() if we were to adhere to the Python design. await using x = y would potentially Await twice: once at the declaration site in the presence of a [Symbol.asyncEnter]() method, and once at the end of the block for the [Symbol.asyncExit]() or [Symbol.asyncDispose]() call.
10:31
<rbuckton>
Finally, AsyncDisposableStack.prototype.use becomes slightly more complicated as it may return a Promise if the resource that is passed to use is an async context manager. use itself doesn't necessarily need to become async, however, since all it needs to do is return the result of calling [Symbol.asyncEnter](). If you were to pass in a sync context or a disposable, it would not need to return a Promise.
10:32
<rbuckton>
Context managers are extremely powerful, but also extremely complex, which is why I have been reticent to consider them for this proposal.
10:36
<rbuckton>
I also didn't want the general purpose [Symbol.dispose] behavior to be complicated by the error handling and error suppression semantics of __exit__, as the majority of cases related to resource management have no need of that complexity.
11:08
<Luca Casonato>
you may come back ljharb Chris de Almeida
11:08
<Rob Palmer>
Please return to the Zoom
11:10
<Luca Casonato>
const resource = createResource()

promises.push(doSomethingWithResource(resource));

async function doSomethingWithResource(resource) {
  using r = resource;

  ...
}
11:19
<eemeli>
littledan: My preferred source-maps scope change would be to expand the two "including ECMAScript code" mentions to be something like "including CSS and ECMAScript code".
11:21
<Christian Ulbrich>
-> https://notlaura.com/is-css-turing-complete/
11:28
<littledan>
Thank you for ensuring that the CoC Committee member list is accurate
11:28
<littledan>
with the pruning
11:28
<littledan>
and thanks to new volunteers
11:34
<Bradford Smith>
The delegate's organization is not listed in https://github.com/tc39/notes/blob/main/delegates.txt. Is that information available somewhere?
11:35
<nicolo-ribaudo>
We are divided into teams in the tc39 GH org based on our organizations
11:36
<nicolo-ribaudo>
Example: https://github.com/orgs/tc39/teams/member-igalia
11:36
<Bradford Smith>
Thanks for that reminder. I still don't really know how to "look up" the information that way. I mean go from name/abbreviation -> organization
11:37
<littledan>
so, the public calendar email address to invite will be shared with the committee to make events public?
11:37
<Michael Ficarra>
Bradford Smith: type their name here: https://github.com/orgs/tc39/teams/eligible-meeting-participants?query=membership%3Achild-team
11:44
<Chris de Almeida>
yes
11:45
<Michael Ficarra>
I'm confused, don't only the chairs have edit access?
11:45
<Chris de Almeida>
the public calendar is managed by way of the private calendar
11:45
<Michael Ficarra>
yes, I mean edit access to the private calendar
11:45
<littledan>
no, we can all edit the private calendar
11:46
<Chris de Almeida>
not everyone can edit it
11:46
<Chris de Almeida>
several people have permission though
11:46
<Chris de Almeida>
https://github.com/tc39/how-we-work/issues/94#issuecomment-1518375862
11:46
<Chris de Almeida>
the calendar id there is the address to use when inviting the public calendar to a private calendar meeting
11:47
<Chris de Almeida>
as discussed, we will place this information in a more accessible place rather than buried in a GH issue
11:48
<Chris de Almeida>
we should document some points of contact who have edit access so it's not ambiguous who to contact when needing to make calendar updates
11:48
<Chris de Almeida>
at the very least the chairs, but other folks with edit access may be willing to make themselves available for this purpose
11:49
<ljharb>
i've been fielding calendar update requests for awhile fwiw
11:49
<ryzokuken 🇳🇴>
I think the admin role is perfect for this tbh
11:49
<ryzokuken 🇳🇴>
so it makes sense for you to be among the listed contacts ljharb
11:49
<littledan>
maybe we should give edit access somewhat more broadly?
11:50
<littledan>
I don't really see why I should have more access than others
11:50
<ryzokuken 🇳🇴>
I think we have a high enough degree of trust within the committee to share edit access to the calendar more broadly, yes
11:50
<ryzokuken 🇳🇴>
like the calendar edit permission is not really that critical now that I think about it
11:50
<ljharb>
that said, calendar management can be tricky and it's really easy to accidentally fire off dozens of email notifications, and there's no audit log to restore deleted events
11:51
<Chris de Almeida>
true
11:51
<littledan>
that said, calendar management can be tricky and it's really easy to accidentally fire off dozens of email notifications, and there's no audit log to restore deleted events
In fact it looks like the source map meetings might've been randomly deleted or something
11:52
<ljharb>
oof, ping me with the deets you want and i'll be happy to set it back up
11:52
<ryzokuken 🇳🇴>
that said, calendar management can be tricky and it's really easy to accidentally fire off dozens of email notifications, and there's no audit log to restore deleted events
I guess that makes sense given that those operations are not actually part of the calendar protocol
11:52
<Michael Ficarra>
yeah I'm fine with delegating the calendar management as long as the people are decently responsive
11:52
<ryzokuken 🇳🇴>
the calendar protocol is weird actually, but maybe you have some degree of a backlog in your emails
11:53
<ryzokuken 🇳🇴>
or CalDAV
11:54
<Chris de Almeida>
source maps still on my calendar, but not on the tc39 calendar
11:54
<Chris de Almeida>
it's 2023 and calendaring is still in the dark ages
11:54
<Chris de Almeida>
TG5: Calendar Spec
11:55
<ryzokuken 🇳🇴>
Chris de Almeida: 100% seriously I've been thinking about it a lot lately
11:55
<ryzokuken 🇳🇴>
and trying to fix that somewhat
11:56
<Chris de Almeida>
I'd support you
11:57
<shu>
littledan: so actually for the loop bound recomputation bugs there are already tests AFAICT
11:58
<shu>
(marja implemented and wrote the tests for them, but i didn't correctly fix the spec draft)
11:58
<Andreu Botella>
shu: I wonder if throwing if there was a concurrent too-large grow makes sense, since if the concurrent grow happens just after this thread's grow returns, you'd still have a too-large buffer when you try to use it
11:58
<shu>
the only one missing is the detach timing
11:58
<shu>
Andreu Botella: you mean throwing if there is any race?
11:59
<shu>
i don't think so, because racing for too-large cannot be reliably detected
11:59
<Andreu Botella>
no, I mean you'd still have a race in user code if the concurrent grow happens just a bit too late
11:59
<shu>
right, you can't detect that
11:59
<Andreu Botella>
so why try to detect it inside the loop?
12:00
<Andreu Botella>
well, I guess you'd have to detect it anyway, but why throw
12:00
<shu>
the race inside the loop is for the opposite case, where grow(10) races with grow(20)
12:00
<Rob Palmer>
https://github.com/tc39/proposal-array-grouping/issues/57
12:00
<shu>
and grow(20) happens first, and grow(10) is now a shrink
12:00
<shu>
and is disallowed
12:00
<shu>
it's throwing not because of a race per se, it's throwing because shrinks already throw
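Concretely, the case being described (a sketch assuming a growable SharedArrayBuffer shared between two workers):

const sab = new SharedArrayBuffer(8, { maxByteLength: 64 });

// worker A:
sab.grow(20);

// worker B, racing with A:
sab.grow(10); // if A's grow(20) lands first, this is now a request to shrink,
              // and SharedArrayBuffers can never shrink, so it throws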
12:01
<shu>
you could have it silently do nothing, that's a possibility
12:01
<Andreu Botella>
After a grow, you know the SAB is at least the size you've grown it to
12:01
<Andreu Botella>
so I'm not thinking of the grow(10) as a shrink
12:01
<Andreu Botella>
maybe a user might though
12:02
<shu>
that's fair, this could be relaxed, but it's shipped with these semantics already and i don't feel particularly compelled to change it
12:02
<Andreu Botella>
oh, right, a slightly-too-late grow(10) would indeed throw
12:02
<Andreu Botella>
that makes sense then
12:02
<shu>
the actual answer is just... don't race your grows
12:02
<shu>
synchronize another way
12:03
<ryzokuken 🇳🇴>
I understand that but given the level of abstraction we work on, couldn't we do a bit of hand holding here?
12:03
<ryzokuken 🇳🇴>
make it a bit harder to run into this issue to begin with to whatever extent we can
12:04
<ryzokuken 🇳🇴>
that's fair, this could be relaxed, but it's shipped with these semantics already and i don't feel particularly compelled to change it
oh, this is quite compelling, nvm
12:04
<littledan>
I understand that but given the level of abstraction we work on, couldn't we do a bit of hand holding here?
yeah I think this is generally not the right kind of thing to do with low-level concurrency primitives
12:05
<littledan>
So, should we have a brief overflow topic to get through the conditionality?
12:05
<littledan>
if msaboff expects to review it tonight
12:05
<littledan>
it has not really been helpful to replace what we were calling "conclusion" with "summary". The idea was to be more detailed.
12:06
<shu>
agreed with littledan, i think hand-holding lock-free concurrency stuff is just not a good idea
12:06
<littledan>
we've had a conclusion for a long time
12:07
<Michael Ficarra>
the request from this morning is just that we don't forget to add a conclusion
12:07
<littledan>
this is false. The request from Ecma has been to add summaries. We had been adding conclusions previously.
12:08
<littledan>
I'm really confused by the resistance from the committee to summaries. I really think they would make the notes more accessible.
12:08
<littledan>
the idea is to cover the main points in the presentation and discussion, not only the things we got consensus on
12:09
<littledan>
if a delegate doesn't want to write a summary, that's OK, but I don't understand why they should oppose others writing summaries...
12:10
<ljharb>
i don't recall any opposition to someone just going in and adding a summary - i thought the opposition was just to pausing the meeting and/or asking champions to write a summary, but maybe i'm not remembering right
12:21
<Chris de Almeida>
the summaries (with or without a conclusion) are very helpful. it takes only a moment for the speaker to dictate a summary. I don't think there's a lot of controversy with this, but in the past IIRC it was only because it was perceived to take up too much committee time. but I think that was also when we were sitting there waiting for the speaker (or someone else) to type up the summary, whereas it should just be dictated, which should only take a moment, and can be cleaned up async as needed
12:22
<Chris de Almeida>
so yes, Jordan is right that it was in opposition to pausing the meeting. it's such a small sacrifice though and is a lot easier to do when it's timely
12:24
<Chris de Almeida>
100% almost every time in recent memory where I needed to refer to meeting notes, it was so helpful to have those summaries rather than having to scour the entire dialog
12:38
<Luca Casonato>
syg: The profile ^^
12:43
<Michael Ficarra>
ljharb: if you don't use namespace imports, you will have effects triggered on reading a local
12:43
<Michael Ficarra>
reading a local shouldn't have an effect
12:44
<ljharb>
i agree with that
12:45
<ljharb>
that doesn't mean import * is any more palatable tho
12:45
<littledan>
that doesn't mean import * is any more palatable tho
Do you have any other suggestions?
12:45
<rbuckton>
reading a local shouldn't have an effect
with would like a word /s
12:46
<Michael Ficarra>
"why not with but everywhere?"
12:46
<HE Shi-Jun>
I don't understand current page...
12:47
<ljharb>
Do you have any other suggestions?
no, the design goals of ESM didn't really leave many options here i can see :-/
12:47
<Jack Works>
reading a local shouldn't have an effect
but an imported variable isn't a local.
12:47
<ljharb>
sure it is
12:47
<littledan>
like it could be import with instead of import defer? (joking)
12:48
<HE Shi-Jun>
personally i think some magic like with is acceptable in this specific case :)
12:48
<ljharb>
altho it's slightly different in that its value can change out from under you if the export is a let that's reassigned
12:48
<littledan>
no, the design goals of ESM didn't really leave many options here i can see :-/
I don't really understand why the namespace restriction is fatal, but I guess you'll explain in your queue item?
12:48
<ljharb>
i didn't say it was fatal. but yes, i will elaborate on my queue item (i just combined my two into one)
12:49
<Jack Works>
I support allowing import defer { ... }; we are already using it in the webpack implementation and we found that enforcing a namespace is somewhat bad DX.
12:50
<littledan>
in our use at Bloomberg, we've found that the restriction to a namespace is a little annoying but not really that bad
12:50
<ljharb>
bakkot: adding TLA to a module is already a breaking change i think
12:51
<littledan>
the more significant thing is the decision about whether to include deferred * re-exports (coming later in the slides, proposed to discuss during stage 2)
12:51
<danielrosenwasser>
sent an image.
is the second column milliseconds?
12:51
<littledan>
well. it's observable but it's intended to not be quite breaking
12:52
<Luca Casonato>
is the second column milliseconds?
i think either function call count or sample count - but I don't know for sure
12:52
<ljharb>
i believe there's nonzero use cases where if a module starts using TLA, things will break - i forget which off the top of my head tho
12:52
<rbuckton>
in our use at Bloomberg, we've found that the restriction to a namespace is a little annoying but not really that bad
I wonder if we could employ a heuristic where named imports that are only used in functions are deferred until the first function call?
12:53
<nicolo-ribaudo>
i believe there's nonzero use cases where if a module starts using TLA, things will break - i forget which off the top of my head tho
If it has side effects and a module that does not list it as a dependency relies on its side effects
12:55
<ljharb>
hm, that's not what i was thinking of, but since i can't recall specifics rn, ¯\_(ツ)_/¯
12:55
<rbuckton>

i.e.:

// a.js
import defer * as ns from "foo";
// defer until `ns` property accessed.

// b.js
import defer { a, b } from "foo";
export function f() { console.log(a, b); } // defer until `f` is called

// c.js
import defer { a, b } from "foo";
console.log(a, b); // not actually deferred
12:55
<bakkot>
i believe there's nonzero use cases where if a module starts using TLA, things will break - i forget which off the top of my head tho
for well-behaved graphs I don't think it's ever breaking; it's only if there's weird side-effect ordering problems that it breaks
12:56
<bakkot>
I wonder if we could employ a heuristic where named imports that are only used in functions are deferred until the first function call?
fwiw that seems like too much magic to me
12:56
<Jack Works>
for well-behaved graphs I don't think it's ever breaking; it's only if there's weird side-effect ordering problems that it breaks
we've hit that. we have code like onAppInstall.addListener(...) where the callback is only called if it is registered in the first event loop turn.
12:57
<rbuckton>
fwiw that seems like too much magic to me
defer itself is too much like magic
12:58
<bakkot>
the thing where the namespace object implicitly has side-effecting accessors is, just barely, not too magic for me
12:58
<bakkot>
the previous suggestion of having local access be side-effecting was too magic
12:58
<bakkot>
but property access can already be side-effecting
12:58
<Luca Casonato>
bakkot: Most places I have seen TLA are leaf modules (for example Wasm loading) or the top-level entrypoint (for example data fetching in a CLI). In my experience, TLA anywhere between entrypoint and leaf is very rare.
12:58
<bakkot>
you can (mostly) explain this feature in terms of existing ones
12:58
<ljharb>
it still feels too magic to me tbh
12:58
<bakkot>
so it is, just barely, ok with me
12:58
<bakkot>
bakkot: Most places I have seen TLA in is leaf modules (for example Wasm loading), or the top level entrypoint (for example data fetching in a CLI). In my experience TLA anywhere between entrypoint and leaf is very rare.
why would wasm loading be a leaf?
12:59
<bakkot>
wasm is just like... code
12:59
<rbuckton>
defer itself is too much like magic
but also, determining whether named imports are lexically scoped to functions is statically analyzable, with the exception of direct-eval (which could just block deferred evaluation anyways).
12:59
<Jack Works>
+1 I don't think it's magical for the ns case. Direct import binding is a little bit magic but also good to me.
13:00
<Michael Ficarra>
why would wasm loading be a leaf?
... because it needs to be passed an imports object? how can it not be a leaf?
13:01
<bakkot>
I guess I mean a different sense of "loading"
13:02
<bakkot>
yes, loading the bytecode is a leaf, of course
13:03
<bakkot>
but if you have a library which has a wasm component, it seems like "TLA to async-compile the wasm" is a pretty normal use case
13:03
<bakkot>
which means your library's entry point would have a TLA, not just the leafs of the library
13:05
<shu>
i don't think i'm confused
13:06
<shu>
i'm saying in JS, compilation can be transparently deferred and folded into evaluation
13:06
<shu>
in wasm because timing is of a bigger concern, it cannot be generally transparently deferred
13:06
<Jack Works>
I also want to note that what is deferred is not only the compilation that happens in the engine, but also the execution of the JS code
13:06
<shu>
and the motivating number of "half the time spent in evaluation" as shown by the profiler output suggests that that includes main-thread deferred compilation
13:07
<shu>
well yes, obviously the execution is the main thing to be deferred
13:07
<shu>
i'm questioning the "half the time" measure and what it includes
13:07
<sffc>
https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript_interface/Instance/Instance#syntax says in a big red box, "Warning: Since instantiation for large modules can be expensive, developers should only use the Instance() constructor when synchronous instantiation is absolutely required; the asynchronous WebAssembly.instantiateStreaming() method should be used at all other times."
13:07
<sffc>
^ This is about the sync constructor after you already have an async compiled Module
13:08
<Jack Works>
for example https://github.com/alexcorvi/anchorme.js/pull/127 - this library costs 30ms to init (this PR was reverted by the author). We are now using defer to save this 30ms
13:08
<littledan>
i'm questioning the "half the time" measure and what it includes
right, the thing is, we have a whole bunch of data about the benefit of this, and the champions were conservative about what they included on slides. I think it's reasonable that you ask how much benefit this will get on the web.
13:08
<littledan>
I would point out, though, that when it's a cache hit, the fetching and compiling part might be pretty quick
13:08
<shu>
i'm convinced it's a win for JS for sure
13:08
<shu>
the wasm questions got me doubting about how well the performance story composes
13:08
<Rob Palmer>
Synchronous lazy loading in CommonJS is sometimes known as "inline requires". Jest managed to halve their startup time by incorporating this technique.
13:09
<shu>
or are we also introducing performance footguns
13:09
<littledan>
I guess I don't understand the performance breakdown case
13:10
<littledan>
you're all right that we should investigate this better for Wasm. I think this should block Stage 3.
13:10
<littledan>
I should've realized that earlier, so thanks for raising it
13:11
<Michael Ficarra>
Chris de Almeida: have you considered just implementing https://github.com/bterlson/tcq/issues/14?
13:11
<Michael Ficarra>
there's so many good things on the TCQ issue tracker
13:11
<eemeli>
I wonder if a module could say e.g. "defer load" to communicate that if it's loaded via import defer, all of its imported dependencies could be completely deferred until its evaluation.
13:11
<shu>
well i think the wasm case is generalizable to a certain kind of module with a certain weight profile of time spent in which phases
13:12
<shu>
the high-order bit for me is "does the performance story compose"
13:12
<bakkot>
concretely, I have published exactly one wasm-based library (z3-solver). It's CJS-based so it has an async init function, but if it was ESM-based it would totally have a top-level await in its entrypoint, not at a leaf. and it's doing a nontrivial amount of computation on load, of exactly the sort I'd hope to be deferring by the use of defer.
13:12
<shu>
can someone who doesn't need sync evaluation just always insert defer and get some loading wins?
13:12
<Chris de Almeida>
Michael Ficarra: yeah.. I think we need a DX workflow presentation from BT first
13:13
<littledan>
the high-order bit for me is "does the performance story compose"
right, I think this comes down to some details of the algorithm for exactly what is made eager. It's a good action item for the champions to articulate some more cases to investigate this. My intuition is that it should work out as long as you do import defer for the imports from within the module that uses TLA--then you get the deferred-ness back
13:13
<Rob Palmer>
the high-order bit for me is "does the performance story compose"
Given our experience deploying an ESM system, we've found the ESM performance composition story is not viable without this feature.
13:14
<Jack Works>
let me rejoin
13:14
<littledan>
Given our experience deploying an ESM system, we've found the ESM performance composition story is not viable without this feature.
this matches what Yulia shared for Firefox DevTools at the outset of this project
13:14
<shu>
that's not what i mean by composition, i don't think
13:14
<Rob Palmer>
It also matches what I've said at every plenary where this item is brought up ;-)
13:14
<shu>
i understand Yulia's original claim to be "ESM performance story is not viable without this feature"
13:15
<Rob Palmer>
ok sorry if I misinterpreted the point
13:15
<shu>
maybe that's the overriding thing
13:15
<littledan>
right, I think this comes down to some details of the algorithm for exactly what is made eager. It's a good action item for the champions to articulate some more cases to investigate this. My intuition is that it should work out as long as you do import defer for the imports from within the module that uses TLA--then you get the deferred-ness back
does this match what you're asking about, shu ?
13:17
<shu>
let me try to articulate what i'm asking for plenary littledan
13:18
<ljharb>
webpack might be the TLA breaking change i was thinking of
13:19
<danielrosenwasser>

I know bakkot mentioned something like this - but it does feel like import defer is going to ring the wrong bells for a lot of people who expect this won't even do script parsing.

Not that I am necessarily advocating for that behavior.

13:19
<Jack Works>
this proposal is good enough for sync usage. I have no idea about TLA and WASM and we haven't tried that yet.
13:19
<ljharb>
so it did get stage 2?
13:20
<ryzokuken 🇳🇴>
yeah
13:20
<ljharb>
k
13:20
<Ashley Claymore>
fwiw, at Bloomberg we still encourage use of fully-async dynamic import when that is viable. This proposal has been great for the places where it has not been viable to go fully async
13:20
<shu>
Ashley Claymore: see that smells really off to me
13:21
<Ashley Claymore>
the current situation when it is not viable is that code is written to be fully eager.
13:22
<shu>
hm my internet is not down but zoom disappeared for me
13:23
<shu>
just me?
13:23
<Jack Works>
no, I can see zoom
13:25
<shu>
okay, took a while to come back
13:25
<shu>
Ashley Claymore: right, agreed on that premise
13:26
<littledan>
Ashley Claymore: see that smells really off to me
it's pretty common that you're writing code that's not in an async function... I don't think that's going to go away.
13:26
<shu>
Ashley Claymore: but it doesn't smell right to me (yet) that we've been saying "asyncify" and are now saying "well, if you did asyncify and you wanna take advantage of this new deferral thing, you might not be able to"
13:27
<Rob Palmer>
shu: Using dynamic import would only neuter the deferred import if the dynamic import were awaited at top-level. The general advice is to never do that. Instead, dynamic import is best used just-in-time inside async functions that require it.
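For illustration (module and export names hypothetical):

// avoid: awaiting a dynamic import at top level serializes loading again
// const heavy = await import("./heavy.js");

// prefer: import just-in-time inside the async function that needs it
export async function runFeature() {
  const heavy = await import("./heavy.js");
  return heavy.doWork();
}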
13:27
<shu>
i mean top-level await is a thing?
13:28
<shu>
i don't know how to reconcile that we added it with "actually the general advice is to never do that"?
13:28
<littledan>
In some way, this is a key part of providing a well-scoped amount of feature parity from CJS into ESM, since CJS has always supported this pattern (differently, just by having sync require, which is more general than this feature)
13:28
<shu>
interesting framing
13:28
<shu>
i don't know enough about CJS to say if i find that compelling
13:28
<Rob Palmer>
"never do that" -> "avoid if possible" (I was too strong)
13:28
<Michael Ficarra>
😮 I had no idea they didn't have an FPU!
13:29
<littledan>
i don't know how to reconcile that we added it with "actually the general advice is to never do that"?
I think this isn't actually the advice--it's more like, if you want the deferral to happen, you also need to defer the imports within the TLA-containing module, rather than only putting the defer higher up in the module graph
13:31
<Jesse (TC39)>
someone is spamming the notes with the letter t
13:32
<Andreu Botella>
did someone fall asleep at their keyboard? /j
13:32
<Michael Ficarra>
why would we want irandom to take bounds if random doesn't? I don't like it
13:32
<Chris de Almeida>
🐈️ ⌨️
13:37
<HE Shi-Jun>
Because it's very common? Math.random() actually has bounds implicitly
13:37
<littledan>
Note that our internal membership lists don't exactly match what Ecma has recorded. We should work to reconcile these. cc saminahusain
13:37
<bakkot>
why would we want irandom to take bounds if random doesn't? I don't like it
don't think of it as "like random, but gives an int"
13:37
<bakkot>
think of it as "like randInt, the extremely useful function in every other standard library"
13:38
<bakkot>
(probably it should also not be spelled irandom)
13:39
<Michael Ficarra>
sffc: is divrem considered an alternative to divmod or is it still useful even when you have divmod?
13:40
<HE Shi-Jun>
why not operator a /% b instead of divrem? 😁
13:40
<Michael Ficarra>
littledan: doesn't Ecma only keep one point of contact per member?
13:41
<bakkot>
sffc: ... because we don't have 64-bit integers?
13:42
<shu>
yeah i think... that's just that
13:42
<shu>
also i don't want to allocate BigInts?
13:43
<littledan>
yeah I guess if we wanted to support 64-bit integers we'd need stuff for, like, adding two bigints and then rounding to the right 64-bit int (possible to define in multiple ways for signed vs unsigned). I'm not convinced we need that--my hope was that BigInt.asUintN would be a suitable replacement (to call after the bigint arithmetic op)
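For example, wrapping after the exact bigint arithmetic (sketch):

const a = 2n ** 63n;
BigInt.asUintN(64, a + a); // 0n: 2n ** 64n wraps around in unsigned 64-bit
BigInt.asIntN(64, a);      // -9223372036854775808n: wraps to the most negative signed 64-bit value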
13:44
<littledan>
also i don't want to allocate BigInts?
I guess the hope is that the compiler would do representation analysis and avoid allocating BigInts
13:44
<shu>
lol, lmao
13:45
<littledan>
why lol?
13:45
<littledan>
I mean, it's fine that you don't do it now, but I don't know what the blocker would be if it were used commonly enough
13:46
<littledan>
like, this is a totally classical optimization in the Scheme world
13:46
<shu>
i don't think that will ever happen outside of the optimizing tier
13:46
<littledan>
oh yeah definitely
13:46
<shu>
and i don't consider optimizing tier hope to be very compelling for proposal motivations, or alleviating proposal performance concerns
13:47
<littledan>
I mean, this logic went into the decision to not add int64 and instead go for bigint, in the first place
13:48
<shu>
yes, and in retrospect i think it was wrong
13:49
<shu>
for Promise.withResolvers: i have another meeting at 7am (didn't realize this went until 7:15)
13:49
<littledan>
shu: Are you OK with the presentation going on? we can return to it later if you want to give more comments
13:49
<littledan>
but you might have time for yours
13:49
<shu>
yes of course
13:59
<sffc>
sffc: is divrem considered an alternative to divmod or is it still useful even when you have divmod?
I was referring to the Euclid divrem, which always returns positive numbers for the remainder and rounds the quotient toward negative Infinity instead of zero. At least this is what Rust calls the operation (added a link to the notes)
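A small worked example of the difference (sketch):

function divremEuclid(a, b) {
  const r = ((a % b) + Math.abs(b)) % Math.abs(b); // remainder is always non-negative
  const q = (a - r) / b;
  return [q, r];
}

divremEuclid(-7, 2);           // [-4, 1]
[Math.trunc(-7 / 2), -7 % 2];  // [-3, -1] (truncating division, what JS gives you today)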
14:02
<littledan>
I was referring to the Euclid divrem, which always returns positive numbers for the remainder and rounds the quotient toward negative Infinity instead of zero. At least this is what Rust calls the operation (added a link to the notes)
thanks for clarifying in the notes
14:03
<bakkot>
I maintain that defer and deferred are awful names for anyone who doesn't know the history, which is 95%+ of users
14:03
<bakkot>
it doesn't defer anything
14:03
<bakkot>
and deferred is not a noun
14:07
<littledan>
I maintain that defer and deferred are awful names for anyone who doesn't know the history, which is 95%+ of users
Yeah, bikeshedding names is sort of the rare case where it is reasonable to make decisions based on the knowledge of percentages of JS developers
14:07
<Christian Ulbrich>
Rob Palmer: We'd gladly host you in Dresden :)
14:08
<Jack Works>
then we can name it import duck until stage 3 😇
14:09
<bakkot>
ok I'm assuming nothing else important is happening so I am going to go sleep
14:27
<Luca Casonato>
// wasm.js
const { instance } = await WebAssembly.instantiateStreaming(fetch(…));
export const foo = instance.exports.foo;
14:27
<Luca Casonato>
This is a leaf. No imports
15:02
<Mathieu Hofman>
bakkot: looks like we're missing the logs for today? https://matrixlogs.bakkot.com/TC39_Delegates/2023-07-11
15:19
<Chris de Almeida>
bakkot: looks like we're missing the logs for today? https://matrixlogs.bakkot.com/TC39_Delegates/2023-07-11
I think it's a cron job that runs, not realtime (?)
15:32
<Chris de Almeida>
I forgot who asked, but I added the TG3 slides to the agenda (07.md)
15:37
<Chris de Almeida>

https://www.ecma-international.org/news/ecma-tc39-ecmascript-has-formed-a-new-task-group-tg3-dedicated-to-the-security-of-the-ecmascript-javascript-language/

do we want/need to work with Ecma on drafting something like this for TG4? littledan jkup

16:08
<jkup>

https://www.ecma-international.org/news/ecma-tc39-ecmascript-has-formed-a-new-task-group-tg3-dedicated-to-the-security-of-the-ecmascript-javascript-language/

do we want/need to work with Ecma on drafting something like this for TG4? littledan jkup

I think that would be great!
16:14
<ljharb>
Array Grouping has now met its stage 3 condition
16:48
<shu>
peetk: i am fine with Promise.withResolvers advancing to Stage 3 given the explanations
16:48
<shu>
sorry for having to drop out early
16:48
<shu>
(though for the future i'd still prefer to not have meetings end on :15 which is kinda weird)
17:02
<shu>
i think it's good and healthy to revisit, and overturn, previous design decisions based on new evidence
17:03
<shu>
but still in this particular case and future ones i don't want to conflate "good" defaults and developer signal
17:07
<shu>
the end goal for me is users, not developers, and good defaults for me should be chosen to nudge developers to the result (responsiveness, correctness, fast loading, whatever) we want on products they build for the user. if all developers ignore a default constantly, then we obviously failed and nobody benefits. Google-internally, i did not get this sense when talking with practitioners about a lack of a defer-like thing
17:08
<shu>
this is all to say that it's a common argument in committee to say "look developers all do X and want Y", and it's not really that much signal to me most of the time
17:10
<Chris de Almeida>
how do you feel about DX as motivation for proposals in general?
17:11
<shu>
i think it is a weak motivation by itself
17:13
<shu>
as a general rule i do not think DX outweighs other concerns like security and performance
17:13
<shu>
if the other concerns are minimal, then DX is the right thing to optimize for
17:14
<shu>
but at the scale of JS and the web, until we figure out a way to do zero-cost DX improvements, it is explicitly a non-goal for me
17:21
<shu>
(it is not an anti-goal, i'm not saying i will actively oppose DX proposals, just that it is not a thing i will push for and find compelling standalone, but i certainly won't block absent other concerns)
17:24
<bakkot>
bakkot: looks like we're missing the logs for today? https://matrixlogs.bakkot.com/TC39_Delegates/2023-07-11
ugh, yes, something's up with my server
17:25
<bakkot>
it keeps making the boot disk read-only
17:25
<bakkot>
for reasons I have not been able to discern
17:26
<bakkot>
I will fix it later today and the logs will come back
17:50
<Chris de Almeida>
https://hackmd.io/BkORU_-kTKmR43Ipuohwog schedule has been updated. there is no difference in terms of constraints, but please review if you are presenting. edit: I think only Day 3 was affected, and changes are minor
18:58
<Mathieu Hofman>
I do not see Explicit resource management in the overflow section but from reading the notes I gather it did overflow its timebox?
19:02
<Chris de Almeida>

Consensus on PRs: 180, 178, 175, 171, and 167.

Debates about the appropriate use of GC and Symbol.enter are ongoing and will take place in overflow time

19:04
<Chris de Almeida>
rbuckton: are we scheduling a continuation? if so, how much time do you estimate?
20:01
<Ashley Claymore>
Thank you!
20:17
<rbuckton>
rbuckton: are we scheduling a continuation? if so, how much time do you estimate?
Yes, if we could. 15 minutes, maybe?
20:18
<Chris de Almeida>
tomorrow morning work?
20:18
<Chris de Almeida>
if other topics run long or if 15 mins turns out to not be enough, there's still time available on Thursday morning
20:19
<Chris de Almeida>
done