08:46 | <eemeli> | What's the "TKTKTK intro" that's occupying the first hour of the schedule? Searching online suggests mainly ASMR videos, which would be a bit weird. |
08:56 | <nicolo-ribaudo> | It was a placeholder for the various initial reports (secretary, editors, etc); I helped ryzokuken prepare the schedule and forgot to replace it 😅 |
10:05 | <littledan> | TC39 starts in around 5 hours, right? |
10:05 | <littledan> | (Am I calculating correctly?) |
10:09 | <nicolo-ribaudo> | Yep, according to the TC39 calendar |
11:29 | <ryzokuken> | oops, sorry, I'll do that now |
11:29 | <ryzokuken> | but it's supposed to be the opening of the meeting |
12:43 | <littledan> | Does that 60 minute topic include Samina’s introduction? |
12:43 | <ryzokuken> | Yeah |
12:43 | <pzuraq> | I've had a meeting come up so won't be able to make the first hour or so |
12:44 | <pzuraq> | don't expect to be presenting then but just in case |
12:45 | <littledan> | ryzokuken: how do you expect the time to divide up within that segment? |
12:48 | <ryzokuken> | just updated the hackmd |
12:48 | <ryzokuken> | apologies again for doing this last minute |
12:50 | <littledan> | Np thanks |
15:17 | <eemeli> | I'm getting an error "Couldn't connect you to the video call" when following the Google Meet sign-in link. Is this just me? |
15:18 | <Lea Verou> | there are 33 people in the meeting right now |
15:18 | <Michael Ficarra> | yes, we have 33 people in here |
15:19 | <nicolo-ribaudo> | Are the slides frozen to the first one? (not that I'm missing much, but...) |
15:20 | <eemeli> | Got in by trying again. Dunno what was wrong. |
15:22 | <ljharb> | the first time i connected i heard no audio, i had to drop out and reconnect, definitely something a bit weird |
15:27 | <littledan> | s/master/authoritative/; s/slave/derived/ |
15:27 | <littledan> | (this also explains it more clearly!) |
15:30 | <shu> | "we need a solution for pdf generation in 2024" needs to further define "we" |
15:30 | <ljharb> | i assume we all realize that the graph dropped there because we started hosting it on github then, not because of the pdf |
15:30 | <shu> | "we" means Ecma, not any TC39 volunteers |
15:30 | <bakkot> | print-to-pdf is a solution for pdf generation |
15:30 | <shu> | correct |
15:31 | <ljharb> | as is ecma providing the budget we've asked for for years for professional typesetters |
15:32 | <bakkot> | I am happy to believe that this PDF is downloaded more than any other PDF ecma provides but that doesn't actually make it important per se |
15:33 | <ljharb> | also an earlier slide implied that PDF quality impacts downloads, and that 402's PDF is better, but 402's PDF downloads is a tiny fraction of 262's PDF. |
15:33 | <eemeli> | It sounds like it may be important to Ecma, but not to TC39. |
15:33 | <Michael Ficarra> | surely, if the PDF was not available, people would use the HTML version instead, right? |
15:33 | <shu> | eemeli: yes, that's a statement of fact |
15:34 | <Andreu Botella> | I'm curious which things are missing from print layout in CSS/browsers to be good enough for ECMA |
15:34 | <bakkot> | glad to hear the json spec is approved for the next 5 years |
15:35 | <ryzokuken> | > also an earlier slide implied that PDF quality impacts downloads, and that 402's PDF is better, but 402's PDF downloads is a tiny fraction of 262's PDF. |
15:35 | <Lea Verou> | it's not just about browsers, there are print formatters that are developed especially for printing (PrinceXML, Antennahouse etc). There are entire books typeset in HTML & CSS using these |
15:35 | <ljharb> | yeah i'm sure it's just that 262 has some specific content that makes it print worse than 402, i was just pointing out the problem with istvan's argument |
15:36 | <littledan> | honestly it would be good to have a summary of topics like Istvan's, to understand what the main points are (I think this is usually not clear to the committee) |
15:38 | <ljharb> | also this kind of seems like it should be its own agenda item, not part of the secretariat's report |
15:38 | <littledan> | What do you mean, Samina's introduction? |
15:39 | <littledan> | She did put it on the agenda, and you can blame me if you think she did so in an incorrect way (as I walked her through that process of the agenda edit) |
15:39 | <littledan> | Do you have any particular questions or concerns for Samina? |
15:42 | <ljharb> | no no, i meant istvan's PDF concerns that we'd been discussing in chat |
15:42 | <ljharb> | samina's topic was on the agenda and is perfectly fine, i met her last week and am happy to welcome her |
15:43 | <bakkot> | I don't want the pdf topic to be its own agenda item because I don't want to spend more time on it |
15:43 | <ljharb> | i agree, but it seems easier to convey that if it's separated :-) |
15:43 | <Michael Ficarra> | surprisingly 262 does have camel case AO names (though we are working on fixing that) |
15:43 | <bakkot> | fair |
15:43 | <littledan> | hehe yeah I think it concerns a small group of people and that group is sort of in touch by an email thread |
15:43 | <ljharb> | otherwise it'll be snuck into the secretariat's report for another year |
15:44 | <littledan> | well, Samina is onboarding here; I think we may see different communication styles here over time |
15:44 | <ljharb> | i'm confused, why were we talking about normative changes in this section |
15:44 | <littledan> | I'm pretty optimistic |
15:44 | <ljharb> | this was just a status update, not a consensus item |
15:45 | <Michael Ficarra> | normative 402 changes are always run by TG1 |
15:45 | <littledan> | > i'm confused, why were we talking about normative changes in this section |
15:45 | <littledan> | but it'd also be fair to ask for more explanation and discussion |
15:45 | <ljharb> | that's not my recollection, but ok |
15:46 | <ljharb> | we don't do that in 262 updates, and i definitely want more explanation than "go look at the PR" |
15:46 | <littledan> | it's true that it'd be nice to have more proactive support from the committee |
15:46 | <ljharb> | we have a separate section for "needs consensus" |
15:46 | <littledan> | > we don't do that in 262 updates, and i definitely want more explanation than "go look at the PR" |
15:46 | <Michael Ficarra> | yes because TG1 members are expected to be much more familiar with 262 than 402 |
15:48 | <littledan> | can the code of conduct committee update their membership list? |
15:48 | <littledan> | (sorry I haven't gotten the queue up yet) |
15:48 | <ljharb> | it should be up to date |
15:48 | <littledan> | in what sense? it contains multiple people who are not delegates |
15:49 | <littledan> | if you have an idea of which 4 people are somewhat active, then shouldn't the list reflect that? |
15:49 | <ljharb> | everyone on the list is a delegate as far as i'm aware. |
15:49 | <ljharb> | and we haven't previously evicted people for not attending meetings |
15:51 | <ljharb> | regarding engagement on 402 items, it's not that they need to be longer, it's that the agenda didn't include the item, nor any supporting materials, 10 days prior to the meeting, so how could anyone have reviewed it or known they needed to |
15:51 | <ljharb> | there's lots of reasons normative changes have their own section, and need their own item. |
15:59 | <ryzokuken> | ljharb: I understand your concerns but at the same time this is how we've dealt with normative Intl stuff so far |
15:59 | <ljharb> | i guess i've missed it. but either way the specific items must be referenced on the agenda, 10 days in advance, per our current policy |
16:00 | <ryzokuken> | but I'm not against presenting every normative change as a separate item if that's useful |
16:00 | <ljharb> | i don't personally care if it's one "402 normative changes" item, or separate items, or whatever, i just care that it's called out with supporting materials on the agenda, like every other normative change we approve (and it's confusing to me to do it under "status updates") |
16:00 | <littledan> | Yes, I agree that normative changes should be posted in advance like that, and for this topic it'd be reasonable to ask for an overflow item if you want to go into more detail |
16:01 | <ljharb> | i don't, on this specific item, i'm speaking about in general |
16:02 | <littledan> | omg I don't remember this issue at all |
16:02 | <littledan> | I mean I don't remember posting it |
16:04 | <HE Shi-Jun> | It seems the issue mentioned that wasm might allow ArrayBuffers bigger than 2**53? |
16:06 | <shu> | like we're not going to have computers in the medium term future with that kind of memory i don't think? |
16:06 | <bakkot> | yeah but virtual memory |
16:07 | <littledan> | In general, don't we want to write the summaries collectively? |
16:07 | <bakkot> | as long as you don't actually try to use all the pages you can still pretend |
16:07 | <littledan> | like, synchronously in the meeting |
16:07 | <littledan> | rather than just telling the presenter to write something |
16:07 | <shu> | ehhh maybe virtual |
16:07 | <littledan> | that way we can be sure we actually agree on it |
16:08 | <ryzokuken> | > that way we can be sure we actually agree on it |
16:08 | <ryzokuken> | but we're not so pressed for time this time around so it shouldn't be a problem I suppose |
16:08 | <Chris de Almeida> | yes, I think we previously at least paused for a moment till the conclusion was in reasonable shape |
16:09 | <ryzokuken> | alright then I'd pause a bit for the next items |
16:10 | <Chris de Almeida> | also allows the person writing the conclusion to participate in the next topic, rather than focused on the conclusion and missing the presentation |
16:11 | <littledan> | we can do that, but it could get somewhat time consuming |
16:14 | <Justin Ridgewell> | IMHO, this feels like we're wasting time. |
16:15 | <Willian Martins> | That is useful to help us report back on our internal meetings. |
16:16 | <ryzokuken> | I'm with the general idea of summarizing items, but pausing for a long duration feels like a bit much |
16:17 | <Justin Ridgewell> | What is the point of blocking all other items while we committee a summary? |
16:19 | <Michael Ficarra> | I prefer doing this async as well |
16:19 | <littledan> | we can do it more quickly if the presenters are a little more active about it. They can just dictate a quick summary and conclusion at the end of their topic |
16:19 | <bakkot> | yeah that's my preferred option |
16:19 | <bakkot> | we did that last meeting IIRC and it was efficient |
16:19 | <ryzokuken> | I think doing it sync especially for these smaller items is not the best idea |
16:19 | <ryzokuken> | for instance the last item was 4 minutes by itself |
16:19 | <ryzokuken> | and we spent over 5 minutes coming up with the summary |
16:19 | <littledan> | last meeting I wrote many of the summaries. I want to get out of that pattern. |
16:20 | <littledan> | the champion should just dictate a really quick summary and conclusion at the end of their topic. It should take less than one minute |
16:20 | <littledan> | If the champion is not able to dictate it, then someone else can write it |
16:20 | <ryzokuken> | > last meeting I wrote many of the summaries. I want to get out of that pattern. |
16:20 | <littledan> | I can keep writing the summaries but I don't know if anyone is reviewing them and that concerns me. It risks making the notes biased. |
16:22 | <littledan> | heh, I wanted the main discussion points to be just part of the conclusion section but others disagreed hence we have two separate parts |
16:22 | <Michael Ficarra> | omg I just realised my camera was off that whole time lol, sorry |
16:22 | <ryzokuken> | littledan: how did you feel about that last one |
16:22 | <ryzokuken> | where someone just presented the summary |
16:23 | <ryzokuken> | and we can spend a few seconds to see if anyone wants to add to it |
16:23 | <littledan> | yeah that was good. There was nothing to write in the summary; we just had a conclusion. |
16:23 | <littledan> | In the future, I think a Stage 4 summary could list how the proposal meets Stage 4 requirements (e.g., where it's shipping, the fact that there are tests) |
16:23 | <ryzokuken> | > yeah that was good. There was nothing to write in the summary; we just had a conclusion. |
16:23 | <littledan> | but that's optional |
16:23 | <Michael Ficarra> | summary: proposal meets all stage 4 criteria |
16:23 | <littledan> | anyway Kevin quickly dictating a conclusion is a good case |
16:23 | <littledan> | > summary: proposal meets all stage 4 criteria |
16:24 | <Michael Ficarra> | that's what the content was meant to provide evidence for |
16:24 | <Michael Ficarra> | summary: just take my word for it |
16:25 | <Michael Ficarra> | I'm looking forward to chatting with Samina about it in Bergen |
16:25 | <nicolo-ribaudo> | For discussion these short "summary" and "conclusion" are basically the same thing |
16:26 | <shu> | supporting evidence should be out-of-line |
16:26 | <shu> | notes aren't adversarial! |
16:26 | <littledan> | OK yes this is fine |
16:26 | <shu> | i don't think someone is going to pull a fast one and then accidentally get something to stage 3+ |
16:26 | <littledan> | we're just working on making the notes meaningful |
16:26 | <littledan> | no one is adversarial about this |
16:27 | <littledan> | it's fine to just have a conclusion and no summary for this kind of topic |
16:27 | <littledan> | summaries are more important when we have a big debate and people make important points, IMO |
16:32 | <Chris de Almeida> | it's contextual -- sometimes the points make sense, sometimes it would be overkill. we'll get into a better rhythm with it, but just quickly getting it done when it's timely (at the end of the presentation) is the quickest and cleanest way to do it |
16:39 | <Michael Ficarra> | toBase64String /toHexString ? |
16:39 | <Michael Ficarra> | seems like a stage 2 thing |
16:40 | <Justin Ridgewell> | Yah, definitely not blocking. I just like node's API |
16:41 | <ljharb> | to confirm, the streaming api is included in this in stage 2? |
16:41 | <Michael Ficarra> | ljharb: yep |
16:42 | <rbuckton> | The property names more and extra are a bit opaque, but I imagine anyone using the streaming API is going to likely need to refer to documentation anyways |
16:42 | <shu> | ✨extra✨ |
16:42 | <Michael Ficarra> | rbuckton: now that it's stage 2, it's the perfect time to suggest alternative names |
16:43 | <rbuckton> | > rbuckton: now that it's stage 2, it's the perfect time to suggest alternative names |
16:46 | <bakkot> | chairs: advance the queue? |
16:46 | <bakkot> | oh wait that's done nvm |
16:47 | <HE Shi-Jun> | rbuckton: yeah, when we discussed the base64 api in a JSCIG meeting, it was hard for us to figure out the streaming api without checking the full example on the proposal site |
16:48 | <Michael Ficarra> | I find Kevin's argument about it being better for developers if this is loud convincing |
16:49 | <Michael Ficarra> | (I didn't previously form an opinion about which semantics was better, just that the name needs to match) |
16:49 | <rbuckton> | The downside of throwing is that something simple like array.filter(Symbol.isWellKnown) may now require an arrow to do a typeof test to avoid throwing. |
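rbuckton's point can be illustrated with a userland stand-in. `Symbol.isWellKnown` is only a proposal, so `isWellKnownThrowing` below is a hypothetical sketch of the throwing semantics under discussion, just to show why they force a guard inside `filter`:

```javascript
// Hypothetical stand-in for a throwing Symbol.isWellKnown predicate.
// The well-known symbols are the symbol-valued own properties of Symbol.
const wellKnown = new Set(
  Object.getOwnPropertyNames(Symbol)
    .map((k) => Symbol[k])
    .filter((v) => typeof v === "symbol")
);
const isWellKnownThrowing = (s) => {
  if (typeof s !== "symbol") throw new TypeError("not a symbol");
  return wellKnown.has(s);
};

const mixed = [Symbol.iterator, "foo", 42, Symbol("local")];
// With throwing semantics you cannot pass the predicate directly;
// you need an arrow with a typeof test to avoid throwing on non-symbols:
const found = mixed.filter((x) => typeof x === "symbol" && isWellKnownThrowing(x));
console.log(found.length); // 1 (only Symbol.iterator)
```

With boolean-returning semantics, `mixed.filter(isWellKnown)` would work as-is.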
16:50 | <HE Shi-Jun> | Is there any isXXX API in JS that throws? |
16:50 | <HE Shi-Jun> | I don't remember any |
16:50 | <littledan> | BTW the summary for Shu's proposal is excellent |
16:50 | <rbuckton> | By throwing, we're mandating forced overhead |
16:51 | <bakkot> | > The property names |
16:52 | <bakkot> | TextDecoder has stream which must be false for the last call, which confuses people a great deal, and I was trying to pick a name which avoids that problem (hence more ) |
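For reference, this is the `TextDecoder` behavior bakkot describes (`TextDecoder` is a global in browsers and Node): you pass `{ stream: true }` while more input is coming, and a final call without it flushes any held bytes.

```javascript
// A multi-byte UTF-8 character (é = 0xC3 0xA9) split across two chunks:
const dec = new TextDecoder();
const part1 = dec.decode(new Uint8Array([0x68, 0xc3]), { stream: true }); // "h"; 0xC3 is held back
const part2 = dec.decode(new Uint8Array([0xa9])); // stream defaults to false: flushes "é"
console.log(part1 + part2); // "hé"
```

The confusing part is that the flag must be *absent or false* on the last call, which is the footgun the `more` naming discussion is trying to avoid.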
16:52 | <ljharb> | jschoi: it is a predicate and definitely would start with "is" |
16:53 | <Michael Ficarra> | streamMore |
16:53 | <bakkot> | ugh |
16:54 | <bakkot> | I could imagine last instead of more ? and you set last: true on the last call? |
16:54 | <rbuckton> | I'm not sure what more actually does, from just reading the spec. It seems like it means "if true , also return any unencoded bytes as a new array". Would that be a correct interpretation? |
16:55 | <bakkot> | [ugh matrix does not have strikethrough] |
16:55 | <bakkot> | i.e., it indicates whether more input is expected |
16:55 | <Michael Ficarra> | crossRealmObj[Symbol.iterator]() needs to use a well-known symbol |
16:55 | <bakkot> | wait |
16:55 | <bakkot> | yes the thing you said |
16:56 | <bakkot> | I was thinking of the inverted one I just described |
16:57 | <bakkot> | the important part is, if more is false then all bytes get encoded, including any padding if necessary |
16:57 | <shu> | Michael Ficarra: so like, for iframes? |
16:57 | <ryzokuken> | > [ugh matrix does not have strikethrough] |
16:57 | <bakkot> | how |
16:58 | <Michael Ficarra> | shu: yep |
16:58 | <ryzokuken> | it's cursed actually |
16:58 | <ryzokuken> | you need to surround it with <del> tags |
16:58 | <bakkot> | what |
16:58 | <rbuckton> | While I know choosing a single-word property might generally be preferable, I think the underlying semantics are complex enough that a more descriptive name might be advisable, like excludeOverflow or something. |
16:58 | <Michael Ficarra> | "the following text has been struck: " |
16:58 | <ryzokuken> | ikr |
16:58 | <shu> | Michael Ficarra: do you think that rises to the level of needing a predicate in the language? |
16:59 | <ryzokuken> | <del>this</del> |
16:59 | <rbuckton> | I don't actually like the name excludeOverflow , tbh, just using it as an example. |
16:59 | <Michael Ficarra> | shu: eh, barely |
17:00 | <bakkot> | rbuckton: yeah, seems reasonable (in general, I also don't like that particular name) |
17:00 | <shu> | like, what is the thing you're writing other than polyfills that can benefit from this (and i still contend this adds major pain point to polyfills, but the counterargument there seems to be "we'll just ignore patching it when polyfilling because that's not realistic anyway") |
17:00 | <shu> | the sum of "what'll happen in practice" conclusions here leaves me mostly unhappy |
17:00 | <bakkot> | extra is more annoying because you need to shuttle it between calls - the value returned from one call gets passed into the next. so I wanted a name which could make sense in both positions (so you can use destructuring and the property shorthand), and that's tricky. |
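The `extra`-shuttling shape bakkot describes can be sketched in userland. `partialBase64` below is a hypothetical stand-in for the proposal's streaming method, not the proposed API itself, using Node's `Buffer` for the actual base64 step; `more` and `extra` are the proposal's working names:

```javascript
// Sketch: `extra` returned from one call is passed into the next;
// `more: false` encodes everything that remains, padding included.
function partialBase64(chunk, { more = false, extra = new Uint8Array(0) } = {}) {
  // Prepend leftover bytes carried over from the previous call.
  const input = new Uint8Array(extra.length + chunk.length);
  input.set(extra, 0);
  input.set(chunk, extra.length);

  // While more input is coming, only whole 3-byte groups can be encoded;
  // the 0-2 remaining bytes become the new `extra`.
  const usable = more ? input.length - (input.length % 3) : input.length;
  const result = Buffer.from(input.subarray(0, usable)).toString("base64");
  return { result, extra: input.slice(usable) };
}

const first = partialBase64(new Uint8Array([1, 2, 3, 4]), { more: true });
const second = partialBase64(new Uint8Array([5]), { more: false, extra: first.extra });
console.log(first.result + second.result); // same as encoding all 5 bytes at once
```

Note how `extra` appears in both the result and the next call's options, which is why a name that reads naturally in both positions (for destructuring and shorthand) is being sought.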
17:04 | <rbuckton> | I understand why you might not want to use TextEncoder directly, but is there a reason a similar API wouldn't be sufficient? Similar APIs in other languages usually have the number of bytes read or written as part of the result, leaving it up to the user to do any byte shuffling when streaming. |
17:04 | <bakkot> | a similar API would be sufficient, but much heavier - I'd really prefer to avoid adding an entirely new class to the language for this |
17:05 | <rbuckton> | I'm not saying you should add a new class. I'm just talking about the use of read/written counts for decode/encode instead, much like TextEncoder /TextDecoder do. |
17:05 | <bakkot> | Are we talking about the same TextEncoder ? |
17:06 | <bakkot> | The one in HTML doesn't have a byte count afaik |
17:06 | <rbuckton> | Let me piece together an example |
17:06 | <rbuckton> | https://developer.mozilla.org/en-US/docs/Web/API/TextEncoder/encodeInto has read /written |
17:07 | <bakkot> | ah, I was thinking of encode , yeah |
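For reference, the read/written-count style rbuckton points to looks like this with `encodeInto` (a global web API, also available in Node):

```javascript
// encodeInto fills the destination without splitting a code point,
// and reports how many UTF-16 code units were read and bytes written.
const enc = new TextEncoder();
const buf = new Uint8Array(4);
const { read, written } = enc.encodeInto("héllo", buf);
// "h" (1 byte) + "é" (2 bytes) + "l" (1 byte) fill the 4-byte buffer:
console.log(read, written); // 3 4
```

The caller is left to slice off the unconsumed tail of the input and continue with the next buffer, which is the byte-shuffling rbuckton mentions.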
17:07 | <Andreu Botella> | (replying to bakkot) TextEncoder's encode and encodeInto could very well be static methods |
17:08 | <bakkot> | Andreu Botella: Yeah I mean TextDecoder |
17:08 | <bakkot> | encoder is trivial but decoder has to handle surrogate pairs when streaming, so it needs to keep state |
17:08 | <ljharb> | shu: https://github.com/tc39/proposal-symbol-predicates/issues/12 |
17:09 | <shu> | ljharb: excellent, thanks |
17:09 | <bakkot> | and you have the same issue with base64, where you need to encode 3 bytes at a time, and if your chunks are not == 0 mod 3 you need state |
17:16 | <bakkot> | shu: https://github.com/tc39/proposal-arraybuffer-base64/issues/21 |
17:25 | <rbuckton> | bakkot: This could be used in tandem with |
17:28 | <rbuckton> | I'm not sure if your proposal was just creating a new u8 array view over the underlying array buffer to hold onto the overflow, or if it was creating a brand new array buffer (the spec steps say "TODO" here, so it's unclear). If it's the former, that would make it difficult to use a fixed-size buffer as a work area (which is fairly common when encoding/decoding a stream), since you could potentially overwrite the overflow in between calls. If it's the latter, then you're introducing a lot of overhead for a stream with all of the one or two-byte arrays you might generate. |
17:30 | <bakkot> | Proposal was an entirely new buffer. I figured that the overhead of making some new small arraybuffers wasn't actually worth worrying about, unless implementers say otherwise. |
17:31 | <bakkot> | In principle it shouldn't be any more overhead than making any other object, I would think. |
17:37 | <rbuckton> | That's where having an actual encoder class might be even more efficient, since you wouldn't be allocating all of these nursery objects for the options bag and return value. |
17:40 | <rbuckton> | but barring a class or something like ref parameters to avoid the potential nursery object allocations, an api that works with offsets/counts seems more efficient (by avoiding the extra arrays) and user-friendly (by avoiding hidden overwrite conflicts). |
17:43 | <bakkot> | with the offset/count design, how do users handle the extra bytes, in the case that they have e.g. two chunks (in different Uint8Arrays) where the first chunk has a length which is not a multiple of 3? |
17:43 | <bakkot> | concretely, let's say the first chunk is 10 bytes, and the second chunk is 4 bytes |
17:44 | <rbuckton> | In pretty much every other language, they copy from one array into the other, or copy both into a fixed-size buffer (which your spec steps also do) |
17:44 | <bakkot> | the spec steps are fictional; no reason an actual implementation would work that way |
17:45 | <bakkot> | the buffers aren't growable so you can't copy from one into the other |
17:45 | <rbuckton> | The code a user would have to write would be fairly similar to other languages, which is familiar. |
17:45 | <bakkot> | asking to copy into a new array seems like a big ask |
17:45 | <rbuckton> | > the buffers aren't growable so you can't copy from one into the other |
17:46 | <bakkot> | ... how? |
17:46 | <rbuckton> | > asking to copy into a new array seems like a big ask |
17:46 | <bakkot> | by "a big ask" I mean "that does not seem like it is a more ergonomic API than the current proposal" |
17:47 | <bakkot> | I am in general amenable to arguments of the form "we should do it like other languages", but in this particular case I'm not convinced, at the moment |
17:47 | <rbuckton> | > by "a big ask" I mean "that does not seem like it is a more ergonomic API than the current proposal" |
17:48 | <rbuckton> | If you want something more ergonomic, I'd advise a class that can maintain the state. |
17:48 | <bakkot> | I think the design in the proposal is more ergonomic than this design, and does not require a class to maintain the state |
17:48 | <bakkot> | so I like it best of the options discussed so far |
17:48 | <bakkot> | possibly there is some tweak to your design which allows you to avoid the copy, which is my main objection to it |
17:49 | <rbuckton> | My argument is that the current design is not friendly to memory or GC. |
17:49 | <bakkot> | I am still missing the claim about, how do you copy from one buffer into the other |
17:49 | <rbuckton> | Except that your design also performs copies, but they're out of the user's control |
17:49 | <bakkot> | My design doesn't perform copies |
17:49 | <bakkot> | The spec steps do but they are fictional |
17:50 | <bakkot> | If implementations object on the grounds of creating new objects I'd hear them out, but I'd want to hear that from them - generally speaking we create new objects all over the place and don't worry too much about it |
17:50 | <bakkot> | concretely I would be reluctant to sacrifice the design if the only concern is the new objects |
17:50 | <rbuckton> | No, not if you're returning a fresh array that is 0-2 bytes for the overflow. It's small, but it's a copy. |
17:50 | <bakkot> | sure, yes, I am creating a copy of 0-2 bytes |
17:50 | <bakkot> | not of an entire chunk |
17:50 | <littledan> | > If implementations object on the grounds of creating new objects I'd hear them out, but I'd want to hear that from them - generally speaking we create new objects all over the place and don't worry too much about it |
17:51 | <bakkot> | littledan: the BYOB thing isn't really related to this conversation afaict |
17:51 | <littledan> | err sorry I'm sort of agreeing with you |
17:51 | <bakkot> | ah, gotcha |
17:53 | <rbuckton> | The way you would normally do this in C++ or C# would be to have a working buffer. You block-copy bytes from your incoming chunk into the working buffer, and use offset/count to control where to read/write from. If there is overflow, you block-copy the overflow bytes to the start of the temp buffer, and the next chunk would be written after those bytes. |
17:54 | <rbuckton> | Typed arrays have two copying mechanisms: copyWithin and set (though I'm not sure if they are block-copy operations when compiled/optimized). |
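The working-buffer pattern rbuckton describes can be sketched in JS with `set` and `copyWithin`. `makeStreamEncoder` is a hypothetical illustration (using Node's `Buffer` for the base64 step, and assuming each chunk fits in the caller-owned work buffer), not part of the proposal:

```javascript
// C#-style working buffer: block-copy the chunk in after any held bytes,
// encode whole 3-byte groups, then move the 0-2 overflow bytes to the front.
function makeStreamEncoder(workSize = 1024) {
  const work = new Uint8Array(workSize); // caller-owned fixed-size buffer
  let held = 0; // bytes carried over from the previous chunk
  return function encodeChunk(chunk, last = false) {
    work.set(chunk, held); // append the new chunk after held bytes
    const len = held + chunk.length;
    const usable = last ? len : len - (len % 3);
    const out = Buffer.from(work.subarray(0, usable)).toString("base64");
    work.copyWithin(0, usable, len); // block-copy overflow to the front
    held = len - usable;
    return out;
  };
}

const encode = makeStreamEncoder();
const s = encode(new Uint8Array([1, 2, 3, 4])) + encode(new Uint8Array([5]), true);
// s matches encoding all 5 bytes in one shot
```

This avoids per-chunk result arrays but makes the caller responsible for buffer sizing and the overflow bookkeeping, which is the ergonomics trade-off under debate.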
17:54 | <bakkot> | The way you work with binary data in C++ in general is sufficiently far from JS that I am not willing to trust that this will be familiar to JS developers |
17:55 | <bakkot> | and if your concern with the design in the proposal is memory, having something which requires manual management and copying with an additional working buffer seems like it is worse for memory concerns |
17:56 | <rbuckton> | This capability seems niche enough that the Venn diagram of entry-level JS devs and calls to toPartialBase64 seems fairly small. |
17:56 | <bakkot> | Even non entry-level JS devs are not necessarily going to have written a bunch of C++. |
17:57 | <rbuckton> | > and if your concern with the design in the proposal is memory, having something which requires manual management and copying with an additional working buffer seems like it is worse for memory concerns |
17:58 | <bakkot> | I mean if they want to implement that way that's certainly their prerogative |
17:58 | <bakkot> | and if they aren't worried about the memory involved I'm not either |
17:58 | <bakkot> | I don't want to worry about it for them |
17:58 | <bakkot> | if they are worried about the memory it's easy to write in such a way that the copy of the full chunk is avoided |
18:00 | <littledan> | Perhaps this might be a question for implementers, since this seems exactly like how the current spec text might be implemented at runtime. |
18:00 | <littledan> | (as long as it doesn't look too obscure) |
18:01 | <bakkot> | I wrote it that way originally but it does end up looking pretty obscure |
18:01 | <bakkot> | I can add a NOTE pointing out the opportunity for optimization |
18:01 | <rbuckton> | Saying "the spec text is fictional" isn't a useful argument, IMO. This is working with memory in a way that the actual implementation that engines will end up using would have a significant bearing on the API design. If there are concerns about memory/GC efficiency of an actual implementation, that seems like useful information to consider before Stage 3. |
18:02 | <bakkot> | I agree that if engines say they don't like this design because of GC concerns then we'd want to reconsider. I just don't see it right now. And the internal copy, if they choose to implement spec steps blindly, doesn't interact with GC at all. |
18:07 | <rbuckton> | Your proposal, however, observably creates new arrays for the overflow bytes, which will need to be GC'd. |
18:08 | <bakkot> | Yes, I agree it creates two small objects rather than only one, and as I say if engines don't like that because of GC concerns then we'd want to reconsider |
18:09 | <bakkot> | The current design does rest on the assumption that "creates two small objects per chunk" is not a big problem |
18:09 | <bakkot> | (three, really, since the user needs to create one to pass in as an options bag) |
18:15 | <rbuckton> | This seems like the kind of API where you're going to use it in a loop (i.e., encoding/decoding files in a resource-constrained environment), so ideally we'd have as few objects as possible per iteration (i.e., use multiple arguments rather than an option bag), unless implementations can optimize away the options bag at runtime. |
18:15 | <rbuckton> | Though the options bag can be made efficient if you just reuse the same object for each iteration. |
18:17 | <bakkot> | I worked out the current design in conjunction with Peter from moddable; if they're not worried about it, I'm not either. I'm not going to sacrifice the UX for concerns about devices which are more resource constrained than them. |
18:18 | <bakkot> | It is definitely true that using multiple arguments rather than an options bag would use fewer objects; I just don't think it's worth it. |
18:19 | <bakkot> | Similarly it is true that we could do a C++ style design where users are expected to manage copies into a working buffer themselves, and thereby create one fewer object; I just don't think it's worth the cost to the UX (and am not convinced that having this extra buffer would in fact be better for memory, even though it would be fewer total objects). |
18:26 | <rbuckton> | This design is halfway between efficient and ergonomic. One that works on offsets would be far more efficient, but far less ergonomic. One that utilized a class to encapsulate scope could be both extremely efficient and far more ergonomic, so I'm not very convinced of the argument that the UX benefit of the proposed design is worth the efficiency loss. |
18:29 | <bakkot> | I don't think "creates one fewer object per chunk" can reasonably be characterized as "far more efficient". |
18:29 | <bakkot> | It is very marginally more efficient, at the cost of a much worse experience. |
18:30 | <bakkot> | (And I'm not even convinced it would be more efficient in practice, since it requires the user to manage an additional working buffer.) |
18:33 | <littledan> | +1 to Nicolo's answer |
18:49 | <HE Shi-Jun> | I have a question about "attach context" and "link": these two phases don't seem to need to be in a specific order? I mean, currently it is designed to first attach context and then link, but could linking first and then attaching context also work? |
18:50 | <nicolo-ribaudo> | (replying to HE Shi-Jun) "context" includes the "resolution context", i.e. instructions on how to load/resolve the dependencies. But yes, "attach the globalThis context" could potentially happen after linking |
18:53 | <HE Shi-Jun> | oh, i see, thank u! |
19:00 | <nicolo-ribaudo> | Is TCQ down? |
19:00 | <nicolo-ribaudo> | I get an internal server error |
19:01 | <ryzokuken> | no |
19:01 | <Willian Martins> | It is fine here |
19:01 | <msaboff> | Seems good to me |
19:01 | <nicolo-ribaudo> | Uh ok it works on a different device |
19:03 | <Justin Ridgewell> | > This design is halfway between efficient and ergonomic. One that works on offsets would be far more efficient, but far less ergonomic. One that utilized a class to encapsulate scope could be both extremely efficient and far more ergonomic, so I'm not very convinced of the argument that the UX benefit of the proposed design is worth the efficiency loss. |
19:03 | <rbuckton> | Can you show an example API? I weakly preferred a stateful class instead of the objects myself in #13 |
19:04 | <shu> | wait what is "attach context" again? |
19:04 | <Justin Ridgewell> | Yah, how would you use it and how is it more efficient than the current API? |
19:05 | <Justin Ridgewell> | wait what is "attach context" again? |
19:05 | <littledan> | This is attaching the global variable |
19:05 | <littledan> | for context, one of the compartments proposals was about user-supplied globals |
19:06 | <littledan> | module expressions don't have their global attached yet |
19:06 | <shu> | i see |
19:07 | <shu> | thanks |
19:07 | <littledan> | https://docs.google.com/presentation/d/1mZrAHHimtM_z_8fM9L3DUXFz5bjlJPxx8VrwsC68hmk/edit#slide=id.g23e5197d83a_1_38 |
19:07 | <littledan> | module expressions do not have a global variable attached yet, for example (but they do have a base URL) |
19:08 | <littledan> | or at least, module source doesn't... actually module expressions might have them attached and it just gets discarded during structured clone |
19:08 | <littledan> | (sorry ignore me here I got confused; nicolo-ribaudo can clarify better) |
19:10 | <nicolo-ribaudo> | Module expressions inherit the global object / realm from where the module expression is evaluated, but when structuredCloning them the idea is that the unserializable parts (such as the global context) would be re-attached when deserializing them |
19:11 | <nicolo-ribaudo> | The "context" is currently attached when creating a Module Record (i.e. in ParseModule) that receives the realm and hostDefined params (where hostDefined includes the baseURL when used in HTML) |
19:12 | <HE Shi-Jun> | when will it be structurecloned? send to a worker? |
19:13 | <nicolo-ribaudo> | Yes, or structuredClone(aModuleObject) , or v8.deserialize /v8.serialize in Node.js |
19:13 | <HE Shi-Jun> | So can i also structureclone module declaration? |
19:13 | <littledan> | anyway module source objects do not have context attached |
19:14 | <nicolo-ribaudo> | So can i also structureclone module declaration? |
19:15 | <Andreu Botella> | It seems like it would have to be a host hook requirement that there's no module source for JS imports, right? |
19:15 | <nicolo-ribaudo> | It seems like it would have to be a host hook requirement that there's no module source for JS imports, right? |
19:15 | <littledan> | It seems like it would have to be a host hook requirement that there's no module source for JS imports, right? |
19:16 | <littledan> | https://tc39.es/proposal-import-reflection/#sec-getmodulesource |
19:16 | <Michael Ficarra> | ljharb: I see where you're coming from, but IMO that would just add noise |
19:16 | <nicolo-ribaudo> | Concretely the throwing behavior is inherited from Cyclic Module Records |
19:17 | <ljharb> | it indeed would add noise |
19:17 | <littledan> | (maybe it would make sense to move it to Source Text Module Record, since Wasm modules also are cyclic module records in the Wasm-ESM integration) |
19:24 | <shu> | i am not understanding the alternative ron is proposing |
19:29 | <littledan> | i am not understanding the alternative ron is proposing |
19:30 | <littledan> | I got the feeling he was suggesting that we only have the dynamic case, and use import.source(x) for that. Or maybe even just a function call. |
19:30 | <shu> | +1 let's get to other queue items if ron's particular alternative isn't actually on the table |
19:30 | <shu> | instead of getting in the weeds for a vague alternative |
19:30 | <shu> | but the current clarification sounds good |
19:31 | <littledan> | oh! this is a completely different alternative: that we use with for all of these instead |
19:32 | <shu> | okay then we should pause to clarify |
19:34 | <Luca Casonato> | Chris de Almeida: do we have time to overrun the timebox? |
19:34 | <ryzokuken> | yes |
19:35 | <Chris de Almeida> | we can go through the end of the hour |
19:35 | <ryzokuken> | you can run up to the top of the hour |
19:35 | <Chris de Almeida> | still need to be mindful of the queue |
19:42 | <Justin Ridgewell> | I've argued a few times that phase could be in the import attributes. Memo caching can be solved, early errors can be supported. |
19:43 | <Justin Ridgewell> | My reasoning is that phase is conceptually similar to an evaluator, meaning it changes the source text of the thing it imports (and this is how it's going to be implemented in bundlers) |
19:43 | <Justin Ridgewell> | Evaluators go in the import attributes bag. |
19:43 | <ljharb> | import attributes isn't exactly "evaluators" tho - they can do that but that's not the purpose of them |
19:46 | <rbuckton> | I have a weak preference against import <keyword> for the static syntax. I have a stronger preference for import.<phase>() for the dynamic syntax as the import(url, { phase }) syntax either introduces unnecessary asynchrony for asset references, or it introduces a non-promise return value from import (for asset references). The import(url, { phase }) syntax also introduces complexity for future proposals that may need to somehow navigate phase and with attributes on the object. |
19:46 | <HE Shi-Jun> | module from {} import source from from "url" |
19:47 | <Kris Kowal> | My reasoning is that phase is conceptually similar to an evaluator, meaning it changes the source text of the thing it imports (and this is how it's going to be implemented in bundlers) |
19:47 | <Michael Ficarra> | I don't really understand the syntax brittleness argument, all syntax is brittle to some degree |
19:48 | <Michael Ficarra> | we don't have like syntax checksums or ECC for syntax |
19:48 | <Justin Ridgewell> | I think you're relying on the engine to do the memoization, instead of letting the importHook ? |
19:49 | <nicolo-ribaudo> | The intersection semantics with importHook would be quite bad in that case. The importHook is responsible for returning a Module irrespective of the phase, and the importHook will need to see the whole options bag, and the importHook is memoized according to the specifier + all attributes. You will get duplicate instances if you import with multiple phases. |
19:49 | <Kris Kowal> | The Module instance is relied upon to do the memoization, yes. |
19:50 | <Michael Ficarra> | if everything was S-expressions, I guess it would be less brittle, but also it would look like (()()()))()()()()(())())()()())))()()((()((()) |
19:50 | <littledan> | I have a weak preference against |
19:50 | <rbuckton> | It's somewhat odd that we can import asset ... to resolve a url, and import source ... to fetch AND compile, but no way to just fetch via syntax. |
19:51 | <Justin Ridgewell> | The only way this could work is to "hide" the phase attribute and not pass it to the importHook even if it's specified. importHook's memo can handle that case pretty easily? |
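The "hide the phase from the memo key" idea can be sketched like this; the function shape and names are hypothetical, not the real importHook signature:

```javascript
// Hypothetical sketch (not the actual importHook API): a memo keyed on the
// specifier alone, deliberately ignoring the phase, so importing the same
// module with different phases yields one shared instance.
function makeMemoizedHook(importHook) {
  const cache = new Map();
  return (specifier, options = {}) => {
    // A key of specifier + all attributes (phase included) would instead
    // produce a duplicate instance per phase, as nicolo-ribaudo warns above.
    if (!cache.has(specifier)) {
      cache.set(specifier, importHook(specifier, options));
    }
    return cache.get(specifier);
  };
}
```

Calling the wrapped hook with `{ phase: "source" }` and then `{ phase: "evaluation" }` for the same specifier would hit the cache and return the same object.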
19:51 | <nicolo-ribaudo> | if everything was S-expressions, I guess it would be less brittle, but also it would look like (()()()))()()()()(())())()()())))()()((()((()) |
19:51 | <rbuckton> | I can understand this argument, but also it seems really specific to asset references, which we really haven't worked out yet. It feels like turning import into an RPC call, i.e., specifying the operation via a property in an object as opposed to an imperative call. |
19:51 | <littledan> | Its somewhat odd that we can |
19:52 | <rbuckton> | It feels akin to asking someone to write array.do({ action: "filter", callback: fn }) |
19:52 | <nicolo-ribaudo> | The |
19:52 | <littledan> | it sounds like you're arguing against import attributes then? |
19:53 | <HE Shi-Jun> | module from {} |
19:53 | <Justin Ridgewell> | That's the responsibility of the import hook code. If you use a power-user feature, you're expected to follow the rules |
19:53 | <Michael Ficarra> | why do SES folks have the time to burden every proposal with this requirement but they don't have the time to work on the getIntrinsics proposal or whatever it is? |
19:53 | <rbuckton> | it sounds like you're arguing against import attributes then? |
19:53 | <littledan> | does this case cause syntax ambiguity? from should be banned in all these cases; it's good that you and Waldemar are pointing this out |
19:54 | <nicolo-ribaudo> | Yes, this is probably what WH was worried about. It can be specced as unambiguous (and in the parser it's a two-token lookahead, or just "eat three identifiers and then decide if they are bindings or syntax"). It would still "read" as ambiguous, but how often would you import a source binding from a module declaration named from? (how many files currently exist called from.js that export a source variable?) |
19:54 | <littledan> | No, import attributes are a bit more esoteric. This is far more obviously an operation import.stage() and not talking about the put-it-all-in-attributes side (which would be fine with phase:) |
19:55 | <rbuckton> | oh, I see, we're back on |
19:56 | <HE Shi-Jun> | Yes this is probably what WH was worried about. It can be specced as unambiguous (and in parser it's a two tokens lookahead, or just "eat three identifiers and then decide if they are bindings or syntax"). |
19:56 | <littledan> | Yes. I also brought up the cache-key concern internally, though the same argument was made that "such attributes don't need to be made part of the cache key" |
20:00 | <HE Shi-Jun> | I also prefer import.phase() for similar reason. |
20:02 | <Michael Ficarra> | people don't mean literally import.phase , right? you mean import.source() ? |
20:02 | <rbuckton> | I am ok with import <phase> , prefer import.<phase>() for dynamic unless there are strong objections. It seems like the champions had "no preference" when discussing it in slides. |
20:03 | <rbuckton> | people don't mean literally import.phase, right? import.<phase>() is what I'd tried to use earlier in chat. |
20:04 | <ljharb> | why do SES folks have the time to burden every proposal with this requirement but they don't have the time to work on the |
20:05 | <Michael Ficarra> | please let's not do conditional advancement |
20:05 | <Michael Ficarra> | I'm sorry, I missed part of the conversation because my power cut out, but I do not want to do conditional advancement |
20:06 | <HE Shi-Jun> | Agree with shu, I don't like such a big syntax decision (not like a simple case such as await using vs using await) being conditional stage 3. |
20:06 | <bakkot> | we might have time at the end of the meeting anyway |
20:06 | <bakkot> | we could just come back to it |
20:06 | <bakkot> | we should not rush stuff |
20:06 | <nicolo-ribaudo> | I'm sorry, I missed part of the conversation because my power cut out, but I do not want to do conditional advancement |
20:06 | <Michael Ficarra> | nicolo-ribaudo: no it's fine |
20:07 | <littledan> | sorry I'm also very happy to wait until the end of the meeting or a future meeting and not rush stuff |
20:07 | <littledan> | though I do think that, syntax-wise, this is actually easier than using await /await using since the technical aspects of grammar work out easier. |
20:08 | <shu> | littledan: conditional stage 3 might need to be renamed then. i think it's most useful as a "we have decided here are the set of things we don't want reopen discussion on", and in the past it's been mostly smallish things like a name or something, so it's also coincided with "let's start implementing it just like other stage 3 proposals" |
20:09 | <littledan> | Yeah anyway I sort of retract my comment |
20:09 | <littledan> | I don't think we've been using conditional stage 3 that way though |
20:09 | <shu> | hmm |
20:09 | <littledan> | but I've been surprised by the breadth of usage of conditional stage 3 and I'm happy to use less of it |
20:10 | <shu> | yes, that's probably closer to the root of my unease |
20:10 | <littledan> | the "conditional" aspect is just supposed to save us the two months of waiting for a perfunctory, meaningless sign-off on something we've already agreed on |
20:10 | <msaboff> | I also don't support "conditional" stage N, where N is ≥ 2. We have time during this meeting for the champions to come back with the dynamic import.phase() change discussed. |
20:10 | <shu> | if we want to piecemeal decide "this topic is now closed and we have consensus", we should do that. i for one would like that |
20:10 | <bakkot> | I do wish we made more use of "consensus for this thing not being open for further discussion" outside of specific stage questions |
20:11 | <shu> | i've been thinking of conditional stage 3 as like, look, here're a few small things that a small set of stakeholders care about for small N, settle that async, i'll reload the issue in a month when i or someone on the team implement the feature |
20:11 | <rbuckton> | I know I'm guilty of using conditional advancement, but I've generally tried to keep it to small things that we usually have consensus on (or are close to consensus on), but require some additional leg work outside of plenary to resolve. |
20:11 | <littledan> | I also don't support "conditional" stage N, where N is ≥ 2. We have time during this meeting for the champions to come back with the dynamic import.phase() change discussed. |
20:11 | <shu> | lol, sick |
20:11 | <shu> | if we had conditional stage 1 |
20:12 | <littledan> | i've been thinking of conditional stage 3 as like, look, here're a few small things that a small set of stakeholders care about for small N, settle that async, i'll reload the issue in a month when i or someone on the team implement the feature |
20:13 | <msaboff> | I suspect some of this would depend what we are conditional on. e.g. conditional on a spec review by XYZ is different than conditional on a syntax or semantic change. |
20:13 | <littledan> | anyway I think a good rule of thumb could be, "if it's controversial whether conditionality could be used here, it's probably not time yet" |
20:15 | <Luca Casonato> | I do wish we made more use of "consensus for this thing not being open for further discussion" outside of specific stage questions This is a good idea. We can rephrase it as this when we resume the queue later in the meeting. We ask for consensus on everything except:
We can then come back next meeting (97th) to actually advance. We'd stay at stage 2 for now. I also don't want to make conditionality overly complex. If we have locked in the semantics for everything except those two, we can do some preparatory implementation work already under the assumption that we will go to stage 3 next meeting. |
20:21 | <rbuckton> | guybedford, Luca Casonato: apologies this concern didn't come up earlier in stage 2. My preference for import.<phase>() is stronger than the "no preference" described by the champions, but not a deal breaker. If no one else strongly prefers that syntax, I certainly won't block advancement. |
20:22 | <HE Shi-Jun> | Does module declaration allow: module x {} import x; ? |
20:22 | <Luca Casonato> | yes |
20:23 | <Luca Casonato> | guybedford, Luca Casonato: apologies this concern didn't come up earlier in stage 2. My preference for |
20:24 | <HE Shi-Jun> | If allowed, I guess the parser needs to read many tokens in cases like: module source {} import source x from "url" |
20:24 | <Luca Casonato> | no worries - i think your strong preference outweighs the "no to very light preference" from others here, so we can make the change for next meeting |
20:24 | <Luca Casonato> | If allow, i guess parser need to read many token in the case like: import source "specifier" is not valid |
20:25 | <Luca Casonato> | unless I am misunderstanding what you mean |
20:25 | <Luca Casonato> | oh i see what you mean now - ok good catch |
20:26 | <Luca Casonato> | could we avoid this by banning module declarations that have import phases as names? |
20:27 | <HE Shi-Jun> | ban all phase names like source/instance/asset/defer ...? |
20:28 | <Luca Casonato> | or actually even easier, just ban import <phase>; |
20:28 | <nicolo-ribaudo> | HE Shi-Jun I'm curious, do you get these cases by experience working on a JS parser? I would love to borrow some tests for Babel 👀 |
20:30 | <HE Shi-Jun> | HE Shi-Jun I'm curious, do you get these cases by experience working on a JS parser? I would love to borrow some tests for Babel 👀 import source from from; someone in my WeChat group asked the question. |
20:31 | <HE Shi-Jun> | or actually even easier, just ban |
20:35 | <nicolo-ribaudo> | I will try to implement module declarations (at least the part for import) in Babel to get a clearer understanding of how hard it is to disambiguate. Currently we implement only an old syntax of import reflection (import module foo from "foo") without also implementing module declarations, and disambiguating import module from from "x" vs import module from "x" requires a 1-token lookahead. Another way to disambiguate is to keep eating identifiers until they represent possible valid syntax and then disambiguate based on the number of identifiers parsed (and discard 0 to 2 "identifier" nodes created in the process) |
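The bounded-lookahead disambiguation described above can be sketched with a toy token classifier; this is purely illustrative (not Babel's parser API), and the phase-keyword list and token shapes are assumptions:

```javascript
// Toy disambiguation of statements after the `import` keyword, illustrating
// the two-token lookahead discussed above.
const PHASES = new Set(["source", "defer", "module", "asset"]);

function classifyImport(tokens) {
  // `tokens` are the raw tokens following `import`, e.g.
  // ["source", "from", "from", '"x"'] for `import source from from "x"`.
  const [t0, t1, t2] = tokens;
  if (PHASES.has(t0) && t1 !== "from") {
    // e.g. `import source x from "x"`: phase import with binding `x`
    return { kind: "phase-import", phase: t0, binding: t1 };
  }
  if (PHASES.has(t0) && t1 === "from" && t2 === "from") {
    // e.g. `import source from from "x"`: phase import, binding named `from`
    return { kind: "phase-import", phase: t0, binding: "from" };
  }
  if (t1 === "from") {
    // e.g. `import source from "x"`: plain default import of binding `source`
    return { kind: "default-import", binding: t0 };
  }
  return { kind: "unknown" };
}
```

The key observation is that peeking at most two tokens past the phase keyword (does a second `from` follow the first?) is enough to separate the phase-import reading from the default-import reading.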
20:37 | <Luca Casonato> | If allow, i guess parser need to read many token in the case like: |
20:51 | <Chris de Almeida> | Source Phase Imports for Stage 3 (cont’d from Day 1) has been added to the schedule as the final item tomorrow |
20:52 | <Chris de Almeida> | limited to 20 minutes |
20:52 | <littledan> | What was the public calendar topic conclusion? |
20:52 | <littledan> | no conclusion is recorded in the notes. Are we going ahead with anything? |
20:52 | <Chris de Almeida> | I ~~was~~ am going to update that |
20:52 | <littledan> | In general, a lot of the notes need significant edits and are incoherent |
20:53 | <littledan> | (pro tip: use <del> ) |
20:53 | <littledan> | also in general, we can delete comments in the notes that are just queue management, right? |
20:53 | <bakkot> | yes |
20:54 | <bakkot> | at least, I always do when I'm editing |
20:54 | <bakkot> | shu: re byo-buffer, thoughts on https://github.com/tc39/proposal-arraybuffer-base64/issues/21#issuecomment-1548418530? |
20:55 | <bakkot> | also: you know TextEncoder's encodeInto ? thoughts on letting it grow a growable buffer? (possibly with an opt-in option) that would be handy in some cases |
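For context, a minimal sketch of the manual grow-and-retry loop that encodeInto requires today, which a growable-buffer option would fold into the API (assumes the TextEncoder/TextDecoder globals available in Node.js and browsers; the helper name is made up):

```javascript
// Sketch of the manual working-buffer loop around TextEncoder.encodeInto.
function encodeGrowable(text) {
  const enc = new TextEncoder();
  let buf = new Uint8Array(16);
  let read = 0;     // UTF-16 code units consumed from `text`
  let written = 0;  // bytes written into `buf`
  while (read < text.length) {
    const res = enc.encodeInto(text.slice(read), buf.subarray(written));
    read += res.read;
    written += res.written;
    if (read < text.length) {
      // Out of space: double the buffer and retry. encodeInto stops on code
      // point boundaries, so resuming from `read` is safe.
      const bigger = new Uint8Array(buf.length * 2);
      bigger.set(buf);
      buf = bigger;
    }
  }
  return buf.subarray(0, written);
}
```

Every caller who doesn't know the encoded size up front ends up writing some variant of this, which is the argument for the opt-in growable behavior.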
20:55 | <bakkot> | (similarly here) |
20:55 | <littledan> | Reminder to everyone editing notes: we need a blank line between different speakers (this is Markdown) |
20:55 | <shu> | bakkot: won't have time to fully think it through until later, but first blush sgtm? |
20:56 | <shu> | it doesn't sound like a huge delta, and i appreciate not making the API even grosser |
20:57 | <littledan> | and should we delete the introductory comments by Ujjwal? |
21:01 | <littledan> | What is our conclusion on the ECMA-402 status update? Do we have consensus on the PR https://github.com/tc39/ecma402/pull/768 ? I share Jordan's concern that this should've been added to the agenda in advance so that folks could review. |
21:21 | <ljharb> | nobody objected, during or since, so i think it still has consensus. i brought it up as a procedural comment for the future |