17:38 | <Rob Palmer> | Ok, the Zoom is now running for the plenary. Please could someone remote dial in so we can test AV. |
17:39 | <ryzokuken> | waiting for host to let me in |
18:03 | <Chris de Almeida> | please add your name to the top of the meeting notes |
18:07 | <Chris de Almeida> | for people on the opposite side of the room, can you hear Rob over the PA ok? |
18:09 | <Michael Ficarra> | yep |
18:09 | <Chris de Almeida> | splendid, thank you |
18:19 | <littledan> | FYI this is replay.io if people are wondering |
18:20 | <rbuckton> | Is it just me, or is the sound coming from the room very low? I had to turn up the volume on my speakers quite high to hear, so I'm wondering if the gain on the microphones in the room is too low. |
18:20 | <littledan> | also I want to call the TC for WinterCG "WinterTC" :) |
18:20 | <ryzokuken> | Is it just me, or is the sound coming from the room very low? I had to turn up the volume on my speakers quite high to hear, so I'm wondering if the gain on the microphones in the room is too low. |
18:21 | <Michael Ficarra> | how would WinterCG standardisation in Ecma work? it's already in W3C |
18:21 | <Michael Ficarra> | also super excited for Sentry to join! |
18:22 | <jkup> | Yes!! |
18:23 | <littledan> | how would WinterCG standardisation in Ecma work? it's already in W3C |
18:23 | <leobalter> | we know why (reference to the ecma-262 access) |
18:26 | <leobalter> | not sure if Ecma is the best host for that work if they want actual web support. It might be challenging for WinterCG. I'd be interested to know what may not be working at W3C. |
18:27 | <rbuckton> | Rob Palmer: Can you have someone check either the mic gain in the room, or the input level of the room microphones to Zoom? I had to increase my volume to hear, which means if any other remote attendee speaks it will probably be far too loud. I think this is the source of the static remote attendees are hearing too. |
18:28 | <rbuckton> | Zoom doesn't let me fine-tune the volume for individual participants |
18:30 | <Rob Palmer> | AV technicians have recommended rebooting the room. We will do that at the break. |
18:30 | <Chris de Almeida> | they did consider a W3C WG but opted for an Ecma TC instead, but I don't know how the discussion went |
18:33 | <Chris de Almeida> | it seems the full mics are much louder than the lapel mics... ? |
18:33 | <leobalter> | I truly hope they work towards inclusive Web support |
18:34 | <snek> | lapel mics are highly directional |
18:34 | <snek> | gotta be careful to point it directly at your mouth |
18:35 | <Michael Ficarra> | yeah I think you just need to basically eat them |
18:35 | <snek> | 😋 |
18:35 | <shu> | who is the Zoom host? |
18:35 | <Rodrigo Fernandez> | i am |
18:35 | <shu> | i'm trying to join in prep for sharing slides at some point. it says waiting for host to let me in |
18:36 | <shu> | i'm in now, thank you very much |
18:37 | <Rodrigo Fernandez> | done. sorry about the delay, we have a few co-hosts admitting people |
18:37 | <Rob Palmer> | Rod, Duncan, Chris, Ujjwal and I are Zoom hosts and we must let people into the Zoom. |
18:37 | <Michael Ficarra> | you can change that setting |
18:37 | <Michael Ficarra> | assuming that's undesirable |
18:40 | <Rodrigo Fernandez> | sorry, we can't. company policy. |
18:49 | <Rodrigo Fernandez> | rbuckton: i set the mics gain to the maximum, is it better? |
18:50 | <rbuckton> | It still sounds very quiet. |
18:50 | <rbuckton> | I don't think that changed anything. |
18:50 | <snek> | should check if the mic itself has a knob on it |
18:51 | <Rodrigo Fernandez> | lapel mics should be held closer to the mouth... btw, i have an extra mic here |
18:52 | <rbuckton> | Shu's volume sounds good. |
18:52 | <snek> | shu has a big mic |
18:53 | <Michael Ficarra> | littledan: how do you think we should track addressing your feedback? do you want to open issues? |
18:53 | <bakkot> | Michael Ficarra: if the outcome of your topic was consensus, add that to the notes? we currently lack a conclusion/summary |
18:54 | <Michael Ficarra> | okay |
19:05 | <rbuckton> | Was the microphone off when Dan Minor was speaking? I couldn't hear what was said. |
19:05 | <rbuckton> | I also can't hear Rob |
19:05 | <Michael Ficarra> | stage 2 or 2.7? |
19:06 | <bakkot> | 2 |
19:06 | <littledan> | IMO process-wise we should be using stages for these kinds of things |
19:09 | <littledan> | sorry for jumping out of turn |
19:09 | <Michael Ficarra> | littledan: thanks for handling that clarification |
19:09 | <Michael Ficarra> | no, I appreciated it! |
19:10 | <dminor> | Was the microphone off when Dan Minor was speaking? I couldn't hear what was said. |
19:10 | <dminor> | Could you hear Michael Saboff? |
19:11 | <dminor> | I thought I could hear myself on the room speaker. |
19:12 | <snek> | should we do a tutorial on how to use a lav mic |
19:12 | <rbuckton> | It seems like the lapel mics are turned down or aren't balanced properly with the handheld mics? lapel mics should be able to pick up your voice from 5-6 inches away (i.e., as if clipped onto the lapel or collar). |
19:12 | <Michael Ficarra> | I recall addressing the specific case of advancing directly from 2 to 3 when introducing stage 2.7 |
19:13 | <Michael Ficarra> | the correct way is to wear it on your lapel |
19:13 | <Rob Palmer> | We have no rules forbidding advancing multiple stages in one meeting. No need to introduce one now. |
19:14 | <snek> | you can attach them to hats/wigs too |
19:19 | <ptomato> | apparently yes, since I keep picking it up and talking into the antenna |
19:22 | <Michael Ficarra> | the battery pack? |
19:22 | <Michael Ficarra> | I am so happy the process document rewording was merged today |
19:23 | <littledan> | it's great that you put so much work into clarifying things here! |
19:23 | <Michael Ficarra> | 😀 thanks |
19:24 | <Michael Ficarra> | the way we communicate, especially to the community, about our process is important |
19:26 | <littledan> | I'm really happy about the shape of the API for iterator sequencing--it's a really intuitive extension |
19:27 | <littledan> | The discussion was on an open GitHub thread + in recorded meetings and open WinterCG Matrix chats |
19:27 | <littledan> | it took place over many months |
19:27 | <Chris de Almeida> | if put on the lapel it would be even quieter... |
19:27 | <littledan> | there will definitely be interaction with W3C, and technical development will continue to be in the W3C CG |
19:29 | <littledan> | To summarize: Ecma is a very lightweight and agile place to publish a standard. Personally, I know how to work us through Ecma's processes. I understand W3C's process to involve a bunch of convincing people to approve a charter, repeatedly, which I don't feel like doing. |
19:29 | <littledan> | See this thread: https://github.com/wintercg/admin/issues/58 |
19:31 | <littledan> | Ecma is just an equally legitimate place to do work, and we will work to accommodate collaboration with everyone who wants to get involved |
19:31 | <littledan> | there's no consistent layering separation between Ecma and W3C 😄 |
19:34 | <Luca Casonato> | rust Iterator.flatten: https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.flatten results in GH code search: https://github.com/search?q=.flat%28+language%3ARust+&type=code |
19:38 | <bakkot> | variadic Iterator.append sounds great to me |
19:41 | <rbuckton> | distinct() is essentially distinctBy(x => x) |
19:43 | <rbuckton> | I have a preference for comparator. Not every thing you would want to compare can be mapped into a natively comparable key. |
19:44 | <bakkot> | doesn't the comparator require quadratic time instead of nlog(n)? |
19:45 | <rbuckton> | a comparator of (a, b) => boolean isn't efficient, but a comparator of { equals(a, b), hash(a) } is more efficient |
19:45 | <littledan> | about R&T: Sorry for it being stalled. Expect to see an update in committee some time this year. |
19:48 | <danielrosenwasser> | Given the way groupBy works, along with the most straightforward userland implementation using a Set , it is hard for me to imagine something more complex than a mapper. |
19:49 | <danielrosenwasser> | let me rephrase - I can imagine it, but I am drawn to the mapper :) |
19:50 | <TabAtkins> | rbuckton: Wait, how is that better than quadratic, still? Pairing a mapper with the comparator is also good for reducing the work in the comparator, but you still end up needing to compare everything to everything, no? |
19:50 | <bakkot> | no, you make a hash set internally |
19:50 | <Luca Casonato> | I agree - a userland polyfill of this is trivial with a mapper, but less so with a comparator: |
19:50 | <TabAtkins> | Or are you imagining you only compare things that hash equal? |
19:51 | <bakkot> | the mapper would compare with === [modulo NaN] or possibly Object.is , so yes, you only compare things that hash equal |
19:51 | <bakkot> | i.e. the thing luca wrote |
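A minimal sketch of the mapper-based approach being discussed (Luca's original snippet is missing from the log above); the helper name uniqueBy and the sample data are hypothetical, with keys compared via Set's SameValueZero semantics:

```js
// Hypothetical userland sketch of a mapper-based dedupe (not Luca's original code):
// keys are compared with SameValueZero semantics, courtesy of Set.
function* uniqueBy(iterable, keyFn = x => x) {
  const seen = new Set();
  for (const value of iterable) {
    const key = keyFn(value);
    if (!seen.has(key)) {
      seen.add(key);
      yield value;
    }
  }
}

// usage: dedupe points by a derived string key
const points = [{ x: 1, y: 2 }, { x: 1, y: 2 }, { x: 3, y: 4 }];
console.log([...uniqueBy(points, p => `${p.x},${p.y}`)].length); // 2
```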
19:51 | <rbuckton> | Examples of an equality comparator: https://esfx.js.org/esfx/api/equatable/equaler-interface.html#_esfx_equatable_Equaler_interface This is essentially what implementations do under the covers for |
19:51 | <snek> | one of these days we should do System.hash(value) |
19:52 | <ljharb> | i thought shitposts go in TDZ :-p |
19:52 | <snek> | this isn't a shitpost :( |
19:52 | <rbuckton> | one of these days we should do System.hash(value) |
19:53 | <ljharb> | ah, there's tons of stuff on es-discuss from back in the day on why it's not an option, iirc |
19:54 | <ljharb> | altho maybe WeakRef and FinalizationRegistry changes the landscape there, not sure |
19:54 | <rbuckton> | For some objects, it's inefficient to convert the object into an existing comparable thing, or possibly even impossible to do so. |
19:56 | <rbuckton> | A mapper seems more efficient when you only consider the comparison algorithm, but doesn't consider the cost of the actual mapper call. You are paying for both the comparison and serialization into a comparable format. |
19:56 | <rbuckton> | An { equals(a, b), hash(v) } comparator uses the object as-is and requires no serialization. |
19:56 | <nicolo-ribaudo> | For some objects, it's inefficient to convert the object into an existing comparable thing, or possibly even impossible to do so. |
19:57 | <snek> | you can't build custom collection types without something like hash |
19:57 | <rbuckton> | Maybe cheaper than serializing into a string, but you are still allocating a wrapper and stuffing values into it purely for comparison. |
19:58 | <rbuckton> | Implementations are already using hash-based comparators for JS collections internally. |
19:59 | <Kris Kowal> | you can't build custom collection types without something like hash hash can be shimmed. |
19:59 | <snek> | a performant one? |
19:59 | <rbuckton> | Though, a sufficient |
19:59 | <snek> | yea ok |
20:00 | <rbuckton> | https://esfx.js.org/esfx/api/equatable.html#_esfx_equatable_rawHash_function_1_ is an Object.hash(v)-like thing, but it has a lot of workarounds. |
20:00 | <rbuckton> | I have to implement a string hashing algorithm for strings, and use weak maps for objects. |
20:01 | <rbuckton> | Or use NAPI and V8 internals. |
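A minimal userland sketch of the approach rbuckton describes above (weak maps handing objects a monotonically increasing id, plus a string hash for strings); the function name identityHash and the FNV-1a choice are illustrative assumptions, not the esfx implementation:

```js
// Hypothetical object/string identity hash: a WeakMap hands each object a
// monotonically increasing id on first sight; strings get a simple FNV-1a hash.
const objectIds = new WeakMap();
let nextId = 1;

function identityHash(value) {
  if ((typeof value === "object" && value !== null) || typeof value === "function") {
    if (!objectIds.has(value)) objectIds.set(value, nextId++);
    return objectIds.get(value);
  }
  if (typeof value === "string") {
    let h = 2166136261; // FNV-1a over UTF-16 code units
    for (let i = 0; i < value.length; i++) {
      h = Math.imul(h ^ value.charCodeAt(i), 16777619);
    }
    return h >>> 0;
  }
  return 0; // numbers, booleans, etc. left as an exercise
}
```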
20:01 | <snek> | wow this esfx repo is pure spaghetti |
20:01 | <rbuckton> | wow this esfx repo is pure spaghetti |
20:02 | <snek> | ok so this depends on a native module |
20:02 | <hax (HE Shi-Jun)> | We already have the Array unique proposal: https://github.com/tc39/proposal-array-unique |
20:02 | <rbuckton> | ok so this depends on a native module |
20:03 | <ljharb> | We already have the Array unique proposal: https://github.com/tc39/proposal-array-unique |
20:03 | <snek> | and goes to GetIdentityHash in v8 |
20:03 | <snek> | this is what i wanna expose 🥲 |
20:08 | <Kris Kowal> | I’m pretty sure the party line at Agoric for exposing such a hash is “no”, because we care about determinism and about enabling as much of the JavaScript ecosystem as possible to continue to work under deterministic virtualization. |
20:09 | <Kris Kowal> | And regardless of our interests, those identity hashes are going to have to pay a price to not reveal memory layout. Thankfully, Go tells us about how to abuse AES machine instructions. |
20:09 | <rbuckton> | { equals(a, b), hash(v) } would require no serialization, nor extra allocations in the form of a composite key. Most implementations of hash are just bitwise math on 32-bit integers, and we would have roughly the same efficiency that Map or Set already has since it already uses a similar mechanism. |
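A rough sketch of how an { equals(a, b), hash(v) } comparator keeps a distinct operation near-linear, along the lines described above; the helper distinctWith and the case-insensitive comparer are hypothetical:

```js
// Hypothetical comparator-driven distinct: hash buckets keep the work near O(n)
// instead of O(n^2), and only values with equal hashes are compared with equals().
function* distinctWith(iterable, comparer) {
  const buckets = new Map(); // hash -> array of representative values
  for (const value of iterable) {
    const h = comparer.hash(value);
    let bucket = buckets.get(h);
    if (!bucket) buckets.set(h, bucket = []);
    if (!bucket.some(seen => comparer.equals(seen, value))) {
      bucket.push(value);
      yield value;
    }
  }
}

// usage: case-insensitive distinct strings
const caseInsensitive = {
  hash: s => s.toLowerCase().length,        // deliberately weak hash, for brevity
  equals: (a, b) => a.toLowerCase() === b.toLowerCase(),
};
console.log([...distinctWith(["Foo", "foo", "bar"], caseInsensitive)]); // ["Foo", "bar"]
```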
20:10 | <Kris Kowal> | Maybe an engine VM implementer can tell us about whether there exists an immutable value for every string that can safely reveal an identity. |
20:11 | <rbuckton> | In my experience, it's a bad idea to depend on determinism for non-cryptographic hashes, except within the same process. Many string hashing algorithms generate a random seed value when they're first used. |
20:11 | <Kris Kowal> | One that doesn’t require a content address, that is. |
20:11 | <rbuckton> | Maybe an engine VM implementer can tell us about whether there exists an immutable value for every string that can safely reveal an identity. |
20:12 | <rbuckton> | For example, the esfx project I mentioned above uses XXHash64 to generate a non-cryptographic hash with good avalanche properties. |
20:13 | <rbuckton> | Object identity tends to be the area of concern, as you don't want to expose the address when determining identity. |
20:16 | <rbuckton> | So you either use something like a monotonically increasing integer for each object allocated, or use a random number and just be aware that there could be collisions. |
20:17 | <rbuckton> | Or objects all have the same hash value (0), and you expect the end user to implement { hash(v) } on the comparator if they want efficiency. |
20:24 | <rbuckton> | Also, a composite key doesn't help if you want to compare strings with a different sensitivity, as you would need to normalize the string and that's not entirely reliable. |
20:24 | <TabAtkins> | You can avoid a lot of comparator work, too, if we just exposed a composite key structure that sorts lexicographically. Python leans on key= returning a tuple a lot. |
20:25 | <TabAtkins> | oh lol, colliding composite key opinions |
20:25 | <TabAtkins> | yeah, normalization being unreliable or expensive is the big thing that pushes toward a comparator. |
20:29 | <rbuckton> | I don't expect most developers are regularly thinking about unicode case folding or the pitfalls of normalizing Turkish ı/İ or German ß. Intl.Collator exists for a reason. |
20:32 | <rbuckton> | A comparator does not preclude the use of a composite key. It means that users can craft their own composite keys, or leverage a shared one that is paired with an existing comparator. |
20:59 | <Chris de Almeida> | 📢📢📢 all, please note: significant updates to the schedule, as we have freed up time. all overflow topics are now accounted for, but topics have moved, in some cases to a different day |
20:59 | <Chris de Almeida> | TCQ is still in the process of being updated, but the schedule is the source of truth atm |
21:07 | <Rob Palmer> | We believe we have solved the audio issues with remote participants hearing in-room folk using lapel mics by increasing the gain on the lapel units. Please say if you still hear poor/quiet audio. |
21:11 | <Michael Ficarra> | we don't pick winners in TC39 |
21:12 | <Michael Ficarra> | we "pave the cowpath" |
21:12 | <ryzokuken> | I think the idea instead is that the option we're picking would most likely become the winner |
21:12 | <ryzokuken> | due to the sheer pervasiveness of JS |
21:13 | <ryzokuken> | but that's not the whole story, obviously |
21:13 | <ptomato> | I think you two are saying the same thing, with 'pave the cowpath'? |
21:13 | <ryzokuken> | there's the fact that ICU, which is the de-facto i18n library off the web, is heavily involved |
21:13 | <bakkot> | "if we build it, they will come" is the exact opposite of "pave the cowpath" |
21:14 | <shu> | yeah... |
21:17 | <snek> | my employer would happily rewrite all of our translations for whatever is done here as long as it isn't completely outlandish |
21:19 | <ryzokuken> | wait, so there was breakage due to a change in locale data or was it metadata that was changed without notice? |
21:20 | <shu> | locale data |
21:21 | <ptomato> | my bad. I had never really thought about the expression 'pave the cowpath' before and was interpreting it as 'tread the cowpath' 🤦♂️ |
21:21 | <caridy> | shu: there is a diff between data and syntax. IMO they have been coherent when it comes to syntax changes, and structure. As for the data, it is meant to change as localization requirements change. I will categorize the incident that you mentioned as data changes. |
21:22 | <littledan> | my employer would happily rewrite all of our translations for whatever is done here as long as it isn't completely outlandish |
21:22 | <shu> | caridy: it's accurate to categorize as data change, yes |
21:22 | <snek> | especially if the multi-message format/api happens. |
21:23 | <littledan> | especially if the multi-message format/api happens. |
21:25 | <caridy> | bakkot: MF2 has been used by mozilla (which has been developing it for about a decade now), and used internally to localize firefox, and other pieces of mozilla infrastructure as far as I can tell. |
21:27 | <bakkot> | caridy: that's good context! if I think a few dozen more companies also use the syntax for a few years without complaint, that would be enough for me |
21:27 | <bakkot> | the experience of a single company with a DSL is very much not sufficient to make me happy, though |
21:30 | <caridy> | bloomberg and salesforce also use MF2 atm |
21:33 | <shu> | that speaks to MF2's utility as a standardized format |
21:33 | <shu> | i still don't make the connection that speaks to the need for a parser in the stdlib |
21:33 | <littledan> | well, we're working on using it; it's not there yet |
21:33 | <shu> | is the library too onerous to load, too bloated, etc? |
21:34 | <littledan> | Shu, when you refer to "parser", is your concern resolved by Eemeli's use of ASTs instead? |
21:34 | <shu> | i don't know enough about the AST alternative to say |
21:34 | <littledan> | is the library too onerous to load, too bloated, etc? |
21:35 | <shu> | because the implementation and shipping cost for those as built-ins in the runtime are also small |
21:35 | <littledan> | Should our guiding principle be, "don't propose libraries that are big, only libraries that are small"? |
21:35 | <shu> | and their utility is broader |
21:35 | <bakkot> | DSLs are much harder to get right, and much more infectious, than APIs |
21:35 | <bakkot> | so our bar for adding new DSLs should be much (much much much) higher than our bar for adding new APIs |
21:35 | <shu> | Should our guiding principle be, "don't propose libraries that are big, only libraries that are small"? |
21:36 | <ryzokuken> | and their utility is broader |
21:36 | <shu> | the big/small thing is not crucial imo |
21:36 | <snek> | i feel like localization is one of the most obvious things to put in a stdlib |
21:36 | <snek> | everyone needs it and its objectively better |
21:36 | <shu> | could you elaborate? |
21:36 | <bakkot> | localization information and APIs, certainly; localization DSLs, much less obvious to me |
21:37 | <littledan> | This is kinda curious... Which things do you think benefit from independent reimplementation? (I've heard Chrome people muse that it's generally a waste of time to do independent implementations.) |
21:37 | <snek> | yeah thats true i guess we could make a cldr/unicode/etc api and make people figure it out themselves, but why should people have to figure out such a universal thing |
21:37 | <shu> | i honestly still do not understand what we're losing if the guidance is "use the library" |
21:37 | <ryzokuken> | I don't think I'd reach the same conclusion when comparing highly specific language APIs, whose use is situational and based on implementation choices, with i18n primitives |
21:38 | <bakkot> | yeah thats true i guess we could make a cldr/unicode/etc api and make people figure it out themselves, but why should people have to figure out such a universal thing |
21:38 | <littledan> | i honestly still do not understand what we're losing if the guidance is "use the library" |
21:38 | <shu> | i feel like all counter-arguments are conflating "the world needs a message format" and "JS needs a new parser"? |
21:38 | <shu> | also i no longer see updates in the thread view when new messages come in |
21:38 | <shu> | wtf |
21:39 | <bakkot> | petition to stop being a thread |
21:42 | <shu> | littledan: i would argue that we do not have an agreed-upon vision for how "batteries included" the stdlib ought to be |
21:42 | <shu> | the fact that groupBy and unique exist is because some champions decided to push for it |
21:42 | <shu> | not because we are executing on a vision |
21:42 | <bakkot> | also a whole new DSL is a much larger battery than iterator helpers |
21:44 | <Michael Ficarra> | it's like a Tesla battery: large and single-purpose |
21:44 | <littledan> | littledan: i would argue that we do not have an agreed-upon vision for how "batteries included" the stdlib ought to be |
21:45 | <Michael Ficarra> | if people stop buying Teslas, it's not that useful to have around anymore |
21:45 | <Michael Ficarra> | continuing the battery analogy, iterator helpers are like AAs lol |
21:45 | <snek> | would be pretty nice if making the web localized was a blessed api you could use right off the bat, instead of something you have to build out of intl primitives or try to select a library for. |
21:46 | <shu> | what is the downside to selecting a library |
21:46 | <Michael Ficarra> | I feel we're mostly in agreement on that point? |
21:46 | <shu> | is there a wealth of poor choices or something? |
21:46 | <littledan> | what is the point of this committee meeting and not disbanding? is it just to expose new fundamental capabilities to JS? |
21:47 | <Michael Ficarra> | I don't think people are saying l10n is not motivating |
21:47 | <littledan> | like, we're already avoiding defining too many new syntax features or things that involve primitive types, for reasons that I understand |
21:47 | <shu> | and syntax? |
21:47 | <littledan> | (it's been argued that we shouldn't add too much syntax, either) |
21:47 | <Michael Ficarra> | we're just not in agreement about whether we pick winners |
21:49 | <shu> | i understand the desire to standardize APIs |
21:51 | <shu> | why can't we just say "this library is blessed" again? i don't mean something like built-in modules, i just mean literally "this package is blessed" |
21:51 | <shu> | loading is annoying? |
21:53 | <snek> | i think the argument is reductive in both directions |
21:53 | <snek> | but i'd prefer to err on the side of doing things instead of doing nothing |
21:54 | <sffc> | (1) Localization is easy to get wrong and hard to get right and we want to raise the bar for building a Multilingual Web (2) A full MF2 parser is a nontrivial amount of JS payload (3) The longer-term vision littledan mentioned |
21:55 | <shu> | (2) does not go away by making it part of your JS VM |
21:55 | <eemeli> | Any chance of getting extra time for this, I don't think we're necessarily wrapping up in 5 mins? |
21:55 | <ryzokuken> | also Zibi is making a great point here. It took Unicode years to gather the knowledge, experience and effort to nearly finish this effort. |
21:55 | <ryzokuken> | How long are we willing to wait for something else |
21:56 | <ryzokuken> | and who exactly are we expecting to come up with it |
21:56 | <sffc> | Every Intl proposal moves code from every individual web site (N-time download) into the browser (1-time download). That's a principle we've discussed at length in previous proposals. |
21:57 | <shu> | i agree that is a thing we should solve |
21:57 | <leftmostcat (UTC-8)> | is there a wealth of poor choices or something? |
21:58 | <shu> | there are other ways to solve that distribution problem than to standardize into JS |
21:59 | <leftmostcat (UTC-8)> | There's, I think, a bit of a discoverability problem. Making something part of the standard doesn't automatically let people know it exists, but I'd argue it's easier to stumble across it that way than when it's a library you have to search out and specifically add to your project. |
22:00 | <shu> | IMO that conflates discoverability and distribution with standardization. being a standard can result in being more discoverable and does have "free" distribution |
22:00 | <shu> | but a standard exists for interoperable implementations |
22:00 | <shu> | if there is no value add from interoperable implementations and the value add is actually from "blessing" |
22:00 | <shu> | we should solve for discoverability and distribution directly |
22:00 | <Michael Ficarra> | littledan: I would describe it like "momentum": the bigger it is or the faster it is being adopted, the less time it will take to reach that confidence |
22:01 | <littledan> | littledan: I would describe it like "momentum": the bigger it is or the faster it is being adopted, the less time it will take to reach that confidence |
22:02 | <jschoi> | API standardization also tries to solve ecosystem fragmentation, insofar as the ecosystem has multiple solutions solving the same problem with difficult interoperability between solutions. I can see an anti-fragmentation justification for MessageFormat 2 in ECMA-402. It would benefit translators to converge, across organizations, on a single solution (that is better than gettext). |
22:03 | <Michael Ficarra> | I can't predict the future, it could look like a lot of things |
22:03 | <Rodrigo Fernandez> | the room was also rebooted |
22:03 | <littledan> | API standardization also tries to solve ecosystem fragmentation, insofar as its ecosystem has multiple solutions solving the same problem with difficult interoperability between solutions. I can see an anti-fragmentation justification for MessageFormat 2. It would benefit translators to converge on a single solution (that is better than gettext). |
22:05 | <snek> | i think this is also a case where it's extremely difficult to get it right, so having a lot of different solutions can imply that every individual solution is a bit worse |
22:05 | <Michael Ficarra> | if there's ecosystem fragmentation, isn't that because no solution is strictly best, and we shouldn't pick a winner from among them? |
22:07 | <jschoi> | Maybe. That’s not the only possible cause of ecosystem fragmentation. It can also be caused by accidents of history and inertia. But certainly, in some cases, it’s because their actual requirements vary a lot across applications. It depends. |
22:07 | <jschoi> | So: Are the requirements of text localization, across applications, homogenous enough such that a single solution can cover (almost all?) applications well? Or are they heterogenous enough that we should let the ecosystem remain fragmented and different solutions continue to compete? |
22:08 | <snek> | i think most (serious) localization systems are actually somewhat similar, especially in how they author data |
22:08 | <bakkot> | I am happy to believe that MF2 is in fact better than all the existing solutions, but if that's so, I'd expect that if there were a permissively licensed production-grade parser for the format, people would want to start using it widely |
22:08 | <bakkot> | and if we saw that happening, and none of those people were like "oh actually I can't adopt this because of [reason]", then it would be a good candidate for standardization |
22:09 | <zbraniecki> | shu: on top of what I brought up before, I don't think we can move forward with efforts such as https://nordzilla.github.io/dom-l10n-draft-spec/ without a syntax. |
22:09 | <littledan> | Maybe TC39 can just resolve, as part of the conclusion, that we encourage people to experiment with this library? |
22:10 | <littledan> | also people keep talking about syntax, but the data model is probably more significant (this comes up in the second half of the presentation) |
22:10 | <ljharb> | if our conclusion is recommending a specific library then we've already picked a winner, so that objection shouldn't exist to the proposal afterwards |
22:11 | <shu> | |
22:11 | <Michael Ficarra> | littledan: the data model can be similarly problematic, although less so if it's not the API surface since that layer of indirection can sometimes give us an out |
22:11 | <bakkot> | much easier to repair small flaws in a data model than a syntax, usually (though not always) |
22:12 | <bakkot> | repair-by-addition, that is |
22:14 | <ljharb> | littledan: questions are not discouragement. |
22:14 | <littledan> | I disagree? sometimes they are? |
22:14 | <Michael Ficarra> | well, I think that depends on the person |
22:15 | <shu> | if our conclusion is recommending a specific library then we've already picked a winner, so that objection shouldn't exist to the proposal afterwards |
22:15 | <littledan> | I wonder if it'd help to have a presentation explaining the design of the syntax and data model of MessageFormat v2 to plenary in more detail, including how learnings from other past formats were incorporated. Would people be interested in that? |
22:16 | <bakkot> | littledan while such a presentation would be interesting, it wouldn't do much for my concerns, which are about getting experience with the actual thing in practice before standardizing it |
22:17 | <bakkot> | having past experience inform the design makes it more likely that the result will be good, but it does actually need to get used in the real world before we can be confident enough to fix the syntax in the language forever (IMO) |
22:18 | <ljharb> | littledan: i assume you're referring to dr. herman's presentation? (which i missed) or were there more |
22:18 | <jschoi> | This does match my experience too; the requirements of text localization seem to be fairly homogenous across applications. (They do vary much more across human languages with very different grammars. But my assumption here is that all applications want to be able to localize their text to any living human language, so this versatility requirement is similar between applications.) This homogeneity may make the benefit of reducing ecosystem fragmentation—having translators converge on the same standard language across different applications—potentially outweigh the cost of convergence preventing necessary ecosystem variation / innovation. |
22:18 | <littledan> | littledan: i assume you're referring to dr. herman's presentation? (which i missed) or were there more |
22:19 | <littledan> | It wasn't referring to any comment from you towards Felienne or that area of work |
22:25 | <ljharb> | https://github.com/tc39/proposal-arraybuffer-transfer/issues/12 |
22:27 | <littledan> | one mitigation with respect to the risk of TG5 fizzling out: Mikhail is on a tenure track :) |
22:29 | <Michael Ficarra> | I believe he actually has tenure-equivalent status at UiB |
22:30 | <shu> | sgtm |
22:50 | <Mikhail Barash> | yeap, the position I have is a permanent academic position, equivalent to tenure :) |
23:09 | <littledan> | oops, well, congrats! |
23:10 | <littledan> | Congrats on reaching this milestone to ptomato and all of the Temporal champions! The remaining editorial changes should unblock all implementations to proceed. |
23:11 | <littledan> | if implementers still feel blocked or like it's not ready yet: let's discuss your concerns |
23:13 | <Mathieu Hofman> | can someone let me in the zoom ? |
23:14 | <Anthony Bullard> | OT: I hope everyone is enjoying our San Diego office, wish I could have made it there. |
23:14 | <Chris de Almeida> | can someone let me in the zoom ? |
23:15 | <Mathieu Hofman> | you are showing as in the room. all good? |
23:19 | <rbuckton> | It's not just locks that need this, but also lock-free, concurrent algorithms that depend on CAS and spin-waiting. |
23:20 | <rbuckton> | While not shared structs specific, I ran into this when working with the shared structs dev trial in an experiment with the TypeScript compiler. |
23:22 | <rbuckton> | By "this" I meant Atomics.microwait() . |
23:26 | <bakkot> | are we gonna get an Atomics.futex |
23:27 | <rbuckton> | Isn't that just Atomics.compareExchange and Atomics.wait ? It's already a futex. |
23:27 | <snek> | ye |
23:27 | <snek> | and then this proposal is the natural extension of that |
23:28 | <ljharb> | srs question, could this be an asm-like string pragma, for emscripten? |
23:28 | <bakkot> | well, given saboff's feedback, I am imagining a version which did the spin-plus-backoff-then-sleep thing internally |
23:28 | <snek> | wdym jordan |
23:28 | <bakkot> | srs question, could this be an asm-like string pragma, for emscripten? |
23:28 | <ljharb> | like shu explained this as a hint to the CPU |
23:28 | <ljharb> | so "spin 5"; or something |
23:29 | <ljharb> | a static no-op string, not in the spec, that gives a hint |
23:29 | <ljharb> | (i'm assuming there's a reason why that wouldn't work but i don't know what it would be) |
23:29 | <snek> | like if the engine sees "spin 5" in a statement position, it emits a yield instruction? |
23:29 | <zbraniecki> | I feel reluctant to bring this slide deck to my colleagues due to the controversial choice of font. What a rebel. |
23:30 | <ljharb> | snek: yes, basically |
23:30 | <snek> | i think like |
23:30 | <snek> | technically yes that's a thing that could happen |
23:30 | <snek> | we make the language we can do whatever we want |
23:31 | <ljharb> | right but i mean this wouldn't need to be in the spec necessarily |
23:31 | <snek> | i don't think magic strings are generally a good pattern though |
23:31 | <ljharb> | oh sure, agreed |
23:31 | <ljharb> | was just kind of wondering out loud if that's all that's needed, a statically recognizable marker |
23:32 | <snek> | i'm personally ok with Atomics being a bit of a weird area in the spec |
23:32 | <snek> | atomics are kind of a weird area in every language |
23:36 | <rbuckton> | The implementation of a yield instruction tends to be a bit more complicated than a static marker. You want some input to the instruction to allow some variance to the spin in the case you have two threads on two independent cores spinning at the same time in lock-step. It's not "spin 5", it's more like yieldIfNecessary(spinCount). You don't always yield; sometimes you sleep, and sometimes you do nothing, all to combat contention. |
23:36 | <snek> | it could be "yield if necessary spinCount" |
23:37 | <snek> | we make the rules |
23:37 | <snek> | but |
23:37 | <snek> | i don't want that |
23:37 | <rbuckton> | What is spinCount though? |
23:37 | <snek> | a variable in scope |
23:37 | <snek> | idk |
23:37 | <rbuckton> | That's terrible. |
23:37 | <snek> | yea |
23:37 | <snek> | could make it a template literal 😄 |
23:37 | <TabAtkins> | `yield if necessary ${spinCount}` |
23:37 | <snek> | i'd rather just have this on Atomics |
23:37 | <rbuckton> | Just call it Atomics.microwait(spinCount) and be done. Implementations can and will inline that into instructions. |
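A hedged sketch of the spin-then-park pattern under discussion, assuming the proposed Atomics.microwait exists alongside the standard Atomics.compareExchange / Atomics.wait / Atomics.notify; it presumes an Int32Array over a SharedArrayBuffer in a worker thread (Atomics.wait is disallowed on the main thread):

```js
// Hypothetical lock acquire/release; 0 = unlocked, 1 = locked.
// Atomics.microwait is a proposal, not (yet) part of the language.
function acquire(i32, index) {
  let spins = 0;
  while (true) {
    if (Atomics.compareExchange(i32, index, 0, 1) === 0) return; // got the lock
    if (spins < 64) {
      Atomics.microwait(spins++);   // hint the CPU: brief pause with backoff
    } else {
      Atomics.wait(i32, index, 1);  // park the thread until notified
    }
  }
}

function release(i32, index) {
  Atomics.store(i32, index, 0);
  Atomics.notify(i32, index, 1);    // wake one waiter
}
```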
23:41 | <bakkot> | ok the graph on the screen does not say "goes up steadily" to me |
23:41 | <bakkot> | that says "is used by exactly one popular package" |
23:45 | <rbuckton> | It looks like 2 or 3 popular packages, and 1300 other packages. |
23:46 | <Luca Casonato> | async do { await cb() } |
23:46 | <snek> | what's up with do expressions these days |
23:46 | <Luca Casonato> | we should do do expressions (and async do expressions) :) |
23:46 | <bakkot> | I think the committee's time and my time is better spent on APIs than syntax at the current margin |
23:46 | <bakkot> | so I am working on APIs rather than syntax |
23:46 | <bakkot> | we have so many syntax proposals |
23:47 | <shu> | is there a concrete API i can just read somewhere that has this pattern |
23:47 | <shu> | the "takes a callback and might throw sync, but otherwise async" |
23:47 | <rbuckton> | While I certainly hope do {} does eventually advance, I have some big concerns about async do {} |
23:47 | <snek> | async blocks are handy in rust |
23:47 | <Luca Casonato> | yeah, so nice |
23:49 | <rbuckton> | It depends on what do { return; } does. If it actually causes a return from the containing function, then what do you do with async do { return; } . Does async do differ dramatically from do in this case? It obviously can't return from the containing function. |
23:49 | <TabAtkins> | is there a concrete API i can just read somewhere that has this pattern |
23:49 | <rbuckton> | And if do {} doesn't support return, break, or continue, it seems far less interesting to me. |
23:49 | <bakkot> | It depends on what |
23:50 | <bakkot> | that is, async do { return } is syntax error |
23:50 | <shu> | then i'm confused, if this is a bad pattern then... shouldn't we not add this |
23:50 | <bakkot> | and do { return } returns from current function |
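For illustration, the do-expressions proposal syntax being discussed might look roughly like this (non-standard; hp, url, and the fetch usage are hypothetical), with the block's completion value becoming the expression's value and return disallowed inside async do, per the comments above:

```js
// Hypothetical use of the do-expressions proposal (not standard JS).
const status = do {
  if (hp <= 0) "dead";
  else if (hp < 20) "wounded";
  else "healthy";
};

// The async form discussed above would produce a promise for the block's value:
const data = async do {
  const res = await fetch(url);
  await res.json();
};
```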
23:50 | <nicolo-ribaudo> | that says "is used by exactly one popular package" |
23:51 | <littledan> | No, anything using WebIDL is pretty explicit that if you're returning a Promise, you always return a (possibly rejected) Promise; you never throw sync. |
23:51 | <TabAtkins> | then i'm confused, if this is a bad pattern then... shouldn't we not add this |
23:51 | <hax (HE Shi-Jun)> | I see a similar pattern in some other libs, like Node.js perf.timerify() |
23:51 | <TabAtkins> | (I'm not in the meeting, I'm just observing from chat.) |
23:51 | <bakkot> | Is this the Promise.try() discussion happening right now? It doesn't throw sync. It just calls the function sync, but still causes a rejected promise. (If the slides are still accurate.) |
23:51 | <rbuckton> | That means async do {} and do {} are wildly different things, and potentially a refactoring hazard. I'm not opposed to the capability that async do {} offers, but I wonder if it shouldn't use do in that case because it could be confusing. |
23:52 | <littledan> | Is this the Promise.try() discussion happening right now? It doesn't throw sync. It just calls the function sync, but still causes a rejected promise. (If the slides are still accurate.) |
23:52 | <shu> | TabAtkins: i know, but the function throws sync |
23:52 | <bakkot> | That means async {} and expr {} or something yeah |
23:53 | <TabAtkins> | TabAtkins: i know, but the function throws sync |
23:53 | <ryzokuken> | I have been thinking about maybe expr and asyncExpr |
23:53 | <Duncan MacGregor> | Promise construction seems to trip everybody up, so avoiding it I think is generally a win. |
23:53 | <bakkot> | what about |
23:53 | <TabAtkins> | But also: you might have a function in hand that you just don't know if it's sync or async, and want to consolidate your control flow into async |
23:53 | <ryzokuken> | we could both demonstrate the similarity and avoid the refactoring hazard |
23:53 | <ryzokuken> | I would prefer to never encounter a camelcase keyword |
23:54 | <littledan> | But also: you might have a function in hand that you just don't know if it's sync or async, and want to consolidate your control flow into async |
23:54 | <ryzokuken> | I was just wondering how we could mark both as related concepts while avoiding the double keyword issue |
23:54 | <Bradford Smith> | I see the need for the functionality of Promise.try, but I don't quite see why Promise.try(callback) is significantly better than new Promise(resolve => resolve(callback())) |
23:54 | <TabAtkins> | I'm just repeating what was in the slides when I read them this morning ^_^ |
23:54 | <TabAtkins> | Bradford Smith: It's identical in functionality, it's just shorter. |
23:55 | <bakkot> | I am not convinced this comes up enough to need sugar |
23:55 | <bakkot> | maybe it's just that I never try to be defensive against this case? if I ask for an async function and the user gives me a function which sycn throws, that's on them |
23:55 | <rbuckton> | I was just wondering how we could mark both as related concepts while avoiding the double keyword issue |
23:55 | <Duncan MacGregor> | I would say it isn't just shorter, it expresses the intent better. |
23:56 | <littledan> | I wonder whether the intent might usually be expressed better by Promise.withResolvers + try/catch |
23:56 | <snek> | i can't confidently say whether it comes up super often or not but i can't come up with any strong reason to be against it |
23:56 | <Bradford Smith> | Bradford Smith: It's identical in functionality, it's just shorter. |
23:56 | <bakkot> | i can't confidently say whether it comes up super often or not but i can't come up with any strong reason to be against it |
23:56 | <TabAtkins> | bakkot: I think the deal is that, for the non-throwing case, you can trivially consolidate both sync and async functions with Promise.resolve(f()) (or just await f()). But if f throws, then the sync version causes a throw, while the async causes a rejected promise. |
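A small sketch of the behavior TabAtkins describes, assuming Promise.try behaves as presented (still a proposal here); maybeSync and boom are hypothetical names for the longhand Bradford mentions:

```js
// The callback runs synchronously, but a synchronous throw becomes a rejection.
function maybeSync(fn) {
  // the longhand form: a throw inside the executor rejects the promise
  return new Promise(resolve => resolve(fn()));
}

const boom = () => { throw new Error("sync failure"); };

maybeSync(boom).catch(e => console.log("rejected:", e.message));     // rejected, not thrown
// Promise.try(boom).catch(e => console.log("rejected:", e.message)); // same shape, shorter

// whereas calling it directly throws synchronously:
try { boom(); } catch (e) { console.log("threw:", e.message); }
```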
23:56 | <bakkot> | I guess that's what the consensus topic later is though |
23:57 | <snek> | yeah its more just how i approach it, especially with something as simple as this function |
23:57 | <shu> | i said nullary, not unary |
23:58 | <shu> | this doesn't pass any arguments does it |
23:58 | <snek> | async context is happening? i thought mark was super against that |
23:58 | <TabAtkins> | Correct, nullary |
23:58 | <rbuckton> | That's also consistent with setImmediate(cb, ...args) |
23:58 | <Justin Ridgewell> | @bakkot: me |
23:59 | <littledan> | async context is happening? i thought mark was super against that |
23:59 | <Justin Ridgewell> | i said nullary, not unary |