00:00
<Chris de Almeida>
🚹 📱 some things have moved around on the schedule. please have a look, especially if you are presenting, as some items have moved to a different day. no constraints were impacted
00:26
<yulia | PTO until Dec. 8>
I can't believe I'm missing one of the hats
00:29
<Anthony Bullard>
Getting a new phone and forgetting to install matrix for a month means a lot of catching up to do.
18:03
<rbuckton>
IIRC, it runs on a VM in Azure
18:11
<Michael Ficarra>
I would love to implement some TCQ feature improvements once it's in a developable state
18:18
<bakkot>
why do we put our names in the notes doc? I find it easier to keep delegates.txt open in a different tab where I can c-f for names
18:20
<Chris de Almeida>
why do we put our names in the notes doc? I find it easier to keep delegates.txt open in a different tab where I can c-f for names
for helping note-takers is one use case. we also use it to record attendance -- something Ecma cares about. it also can be helpful to know who was present at the meeting when reviewing notes, as not everyone speaks at every meeting
18:20
<bakkot>
gotcha
18:23
<eemeli>
Heh, is TCQ stuck again? (apparently not)
18:23
<Rob Palmer>
TCQ advanced
18:24
<Ashley Claymore>
for helping note-takers is one use case. we also use it to record attendance -- something Ecma cares about. it also can be helpful to know who was present at the meeting when reviewing notes, as not everyone speaks at every meeting
It never seems better than 50% accurate tho
18:44
<Ashley Claymore>
42 people on the call. 18 names on the notes. Not sure how many of those missing 24 are observers.
18:51
<Chris de Almeida>
It never seems better than 50% accurate tho

true. I don't know if this has ever not been the case, and if so, how long ago. IME it has been voluntary; folks only add themselves and not others

for Ecma's attendance-keeping it is not the only system of record. the secretary monitors the online meeting participants (and in-person folks), as well as the sign-in form people complete to get the link for the online meeting

I think it's useful to have the complete attendees in the doc itself for the notes/history but it may be that some people don't want to be listed for some reason

18:52
<Chris de Almeida>
maybe some of the two-letter folks can provide further context
18:57
<bakkot>
Prior to remote meetings attendance was kept by having a physical sheet of paper passed around, and I think we used that to populate the list in the published notes
18:57
<bakkot>
I think "no data-driven exceptions" is a confusing way to phrase this principle
18:58
<bakkot>
the principle appears to be "don't reject anything which could in principle be valid", which seems like a totally fine principle
18:59
<Chris de Almeida>
📝 we will need someone to volunteer to help with notes after this item. please consider helping out 🙏
19:00
<ryzokuken 🇼đŸ‡č>
29-02 is valid though, it's the combination that isn't valid
19:01
<eemeli>
I think I'm a bit confused by how asking for 2030-02-29 could return 2030-02-28 rather than 2030-03-01.
19:01
<ljharb>
what about 04-31?
19:02
<ryzokuken 🇼đŸ‡č>
what about 04-31?
31-04 should be invalid because it never occurs in the ISO calendar
19:02
<ljharb>
right but april, and the 31st, are both valid in the same way that february, and the 29th, are both valid
19:02
<ljharb>
or are you saying 2/29 is special because of leap days
19:02
<ryzokuken 🇼đŸ‡č>
or are you saying 2/29 is special because of leap days
precisely
19:03
<ryzokuken 🇼đŸ‡č>
because it is valid but when added to a year it might not be
19:03
<ljharb>
philip's answer of "the shape", tho, would mean that a month 01 - 12 and a day 01 - 31 are all "valid" in that sense
19:03
<Kris Kowal>
(The one person I know with a February 29 birthday celebrates on March 1.)
19:03
<ljharb>
thus april 31st, while obv a day that doesn't exist, each part is still the right "shape"
19:03
<ljharb>
just like february 29th depending on the year
19:04
<Kris Kowal>
Seems to me the reasonable behaviors are throw, truncate (to 02-28), and carry (to 03-01).
19:06
<eemeli>
new Date(2030, 1, 29) → Date Fri Mar 01 2030 00:00:00
19:06
<ryzokuken 🇼đŸ‡č>
Seems to me the reasonable behaviors are throw, truncate (to 02-28), and carry (to 03-01).
at the moment we support constrain and reject
19:06
<ryzokuken 🇼đŸ‡č>
carrying over is not generally applicable but it could be useful in certain cases as you mentioned
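A minimal sketch of the two supported modes, using Temporal.PlainDate.from from the Temporal proposal (ISO calendar; the exact call site is illustrative only):

Temporal.PlainDate.from({ year: 2030, month: 2, day: 29 }, { overflow: 'constrain' });
// → 2030-02-28: the day is clamped to the last valid day of that month
Temporal.PlainDate.from({ year: 2030, month: 2, day: 29 }, { overflow: 'reject' });
// → throws RangeError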
19:09
<eemeli>
I think either throw or carry can make sense, but truncate is weird.
19:14
<ryzokuken 🇼đŸ‡č>
The cases are well documented in the Temporal docs
19:15
<bakkot>
We should have a notion of "consensus pending one person's offline stamp"
19:16
<bakkot>
so that this can get consensus when waldemar has a chance to review this behavior offline and approves of it, assuming that he does
19:19
<ptomato>
I think either throw or carry can make sense, but truncate is weird.
we used to have "carry" (it was called {overflow: 'balance'}) but removed it during stage 2, based on experience from the Moment maintainers that people only wanted it because it was what new Date() does.
19:19
<Chris de Almeida>
We should have a notion of "consensus pending one person's offline stamp"
provisional advancement is fairly common, no?
19:19
<bakkot>
yes but usually it's like "editor's review" or something, rarely "someone approving the normative behavior"
19:20
<Kris Kowal>
we used to have "carry" (it was called {overflow: 'balance'}) but removed it during stage 2, based on experience from the Moment maintainers that people only wanted it because it was what new Date() does.
Anecdotally, at least one person uses it for her birthday math.
19:21
<Kris Kowal>
(And I do not have an iron in this fire, just this anecdote.)
19:21
<waldemar>
I'm not withholding consensus. I (and others) just found the information to be too poorly presented to understand.
19:26
<nicolo-ribaudo>

If it helps, this is what I understood happens by default based on the type of methods/conversions

  • String->Temporal validates the strings and throws
  • Plain object->Temporal validates that each parameter is individually in its potential domain (e.g. days are positive integers, which is all we know about days without looking at other parameters) and then "rounds towards zero" to get a valid date
  • Temporal->Temporal rounds towards zero to get a valid date

There is an exception to that Temporal->Temporal case, which is the method the champions were proposing today to change to not throw anymore

19:32
<nicolo-ribaudo>
Or maybe "rounds down" and not "rounds towards zero"
19:33
<bakkot>
nicolo-ribaudo: from reading the spec I think the "plain object -> temporal" and "temporal -> temporal" cases were handled the same?
19:33
<bakkot>
could be wrong though, haven't traced through the whole thing
19:34
<nicolo-ribaudo>
Oh probably yes, given that if the input is a temporal object all the properties are already in the valid domain
19:36
<ptomato>
yes, what nicolo-ribaudo said is mostly accurate. for String->Temporal conversions, ISO 8601 is clear on what is and isn't a valid ISO string
19:36
<ptomato>
Plain object->Temporal is indeed basically the same as Temporal->Temporal, but Temporal objects are already valid in the domain
19:38
<ptomato>
Plain object->Temporal and Temporal->Temporal methods - the overflow: 'constrain' algorithm is a bit more complicated than rounding down: https://tc39.es/proposal-temporal/#sec-temporal-calendardatetoiso
19:39
<ptomato>
but in the ISO and Gregorian calendars, the only place where this is relevant is February 29 (if you assume valid data, as you would for a Temporal→Temporal conversion)
19:44
<littledan>
We have experience with Intl in checking in tests amid imprecise specifications, using a specific tag to note that case. We could do this for sum (and transcendental fns) if needed
19:45
<ljharb>
users will rely on whatever algorithm browsers select and they won't be able to change it in the future anyways
19:46
<Michael Ficarra>
a PDF of what waldemar linked in TCQ: https://people.eecs.berkeley.edu/~jrs/papers/robustr.pdf
19:47
<littledan>
I think “batteries included” is a decent reason for this, alongside precision—as Kevin said, this just comes up frequently
19:49
<littledan>
Historically, users have come to depend on answers even if the spec doesn’t say so. Eg see transcendental fns
19:50
<snek>
well we did manage to make sorting stable, even though it made lots of people angry
19:51
<Michael Ficarra>
snek: it being not guaranteed to be stable also made lots of people angry
19:51
<snek>
it made me angry
20:00
<snek>
the meeting in san diego is confirmed to be happening right?
20:02
<Chris de Almeida>
yes
20:04
<Anthony Bullard>
Yes, we can’t wait to have everyone on campus snek
20:05
<snek>
is there a recommended hotel or anything? i recall the building was a little bit far from most stuff
20:09
<bakkot>
apparently Python's full-precision floating point sum is about 10x slower than a naive summation, and probably about 7x slower than Neumaier
20:10
<bakkot>
but in JS using .reduce is probably at least 10x slower than a native Math.sum anyway, so maybe this is fine?
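For reference, a sketch of Neumaier's compensated summation in plain JS, assuming an array (or other iterable) input; the function name is illustrative and this says nothing about what any engine would actually ship:

function neumaierSum(values) {
  let sum = 0;
  let compensation = 0;                 // accumulates the low-order bits lost by each addition
  for (const x of values) {
    const t = sum + x;
    if (Math.abs(sum) >= Math.abs(x)) {
      compensation += (sum - t) + x;    // low-order bits of x were lost
    } else {
      compensation += (x - t) + sum;    // low-order bits of sum were lost
    }
    sum = t;
  }
  return sum + compensation;
}

// neumaierSum([1, 1e100, 1, -1e100]) === 2, where [1, 1e100, 1, -1e100].reduce((a, b) => a + b) gives 0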
20:12
<snek>
http://blog.zachbjornson.com/2019/08/11/fast-float-summation.html
20:13
<snek>
it seems like with avx512 its faster to do the neumaier
20:13
<snek>
if only intel would properly support avx512
20:15
<Chris de Almeida>
is there a recommended hotel or anything? i recall the building was a little bit far from most stuff
we are waiting on this information; will share once available
20:22
<Anthony Bullard>
@snek I don’t speak officially on this, but we typically stay at a nice Embassy Suites that’s roughly a 10 minute walk from campus. It’s across from a large shopping center
20:23
<Anthony Bullard>
I’d be surprised if a different recommendation was made, but it is possible
20:29
<waldemar>
I figured out how to add n IEEE doubles in linear O(n) time and get the correctly rounded exact result in all cases, including avoiding overflow. In practice the running time is similar to Neumaier's but you always get exact results.
20:31
<ljharb>
sounds like that's the algorithm we should specify then?
20:33
<waldemar>
There are many algorithms that can do this. We should not specify one any more than we should specify JS language parsing by describing what data structures the parser uses and how the parser updates them when it receives the next character of program text.
20:34
<snek>
horwat's last theorem
20:34
<snek>
can you post the algorithms you're aware of on the repo?
20:35
<waldemar>
The important thing is that the results are completely deterministic.
20:35
<ljharb>
the reality tho is that if we don't pick an algorithm, browsers will, and it won't ever be changeable
20:35
<ljharb>
certainly if we have an algorithm that can unobservably be replaced then they can do so
20:36
<waldemar>
the reality tho is that if we don't pick an algorithm, browsers will, and it won't ever be changeable
Is this claim provable?
20:37
<bakkot>
waldemar: people have started to depend on the precise results of Math.tan and friends, which historically vary across browsers, such that the minority browsers have been updating to match the semantics of the majority ones
20:37
<bakkot>
This may or may not mean that it is not changeable in practice
20:37
<ljharb>
Is this claim provable?
of course not, but it doesn't have to be, because that's already been browsers' experience and feedback
20:37
<waldemar>
The choice of algorithm is unobservable except by side channels like timing
20:37
<bakkot>
of course, if the result is deterministic, then yes the precise choice of algorithm doesn't matter. though Mark Miller wanted us to write down a precise algorithm.
20:37
<ljharb>
the exact results is what's observable
20:38
<bakkot>
Which I don't really want to do because writing down Shewchuk's will be somewhat lengthy.
20:38
<bakkot>
here's Python's, for reference https://github.com/python/cpython/blob/48dfd74a9db9d4aa9c6f23b4a67b461e5d977173/Modules/mathmodule.c#L1359-L1474
20:39
<waldemar>
The whole point of what I want to do here is to ensure that the result is deterministic by being the exact, correctly rounded answer.
20:39
<bakkot>
Anyway if Mark is OK with not specifying an algorithm I'm quite happy with that.
20:40
<bakkot>
waldemar: Are you OK with nondeterminism in the case of overflow/underflow? Because specifying those exactly will be hard, I think, without specifying a full algorithm.
20:40
<bakkot>
Or overflow at least; not sure about underflow.
20:42
<ljharb>
if the result is deterministic, then what's the problem with specifying an algorithm? it wouldn't be observable to follow it or not as long as you produced the right results
20:43
<bakkot>
it means that implementations probably won't innovate, for one thing
20:46
<bakkot>
Python's fsum throws if the intermediate sum overflows, looks like; e.g. math.fsum([1.6e308, 1.6e308, -1.6e308, -1.6e308]). I would not want to throw in this case though I'm not sure what a better option would be.
20:46
<waldemar>
The algorithm I'm thinking of gives the exact, correctly rounded result in all cases. If that final rounding is ±∞ or NaN, then that's what you get. If the rounding produces a finite double, then that's what you get. No nondeterminism in cases of ±∞ or NaN.
20:46
<waldemar>
Python's fsum is buggy when it gets intermediate overflows. But there is a simple way to avoid that.
20:47
<ljharb>
it means that implementations probably won't innovate, for one thing
the alternative is that they’ll all probably copy the first shipper, no?
20:47
<bakkot>
waldemar: That sounds like a great option, then, though the paper you linked does not handle intermediate overflow from what I can tell
20:48
<waldemar>
It doesn't, but the way to solve that is so obvious they probably didn't bother with it.
20:48
<snek>
can you produce the algorithm you are thinking of
20:48
<snek>
just to sate my curiosity
20:51
<waldemar>
  1. If you have 0, 1, or 2 inputs, the result is trivial.
20:52
<waldemar>
  2. If you have 3 or more inputs, use the approach in the paper to compute an exact sum, represented as (p0+p1+
), where each p_i is a double and their exponents differ by at least 53 binary powers — in practice you'll likely end up with just one or two such p_i. Then round the sum as in fsum, taking care of the round-to-nearest-breaking-ties-to-even case in fsum.
20:52
<waldemar>
  3. If you get ±∞ or NaN as any of the inputs, the result is always ±∞ or NaN and you can figure it out directly without doing arithmetic.
20:55
<waldemar>
  4. Getting ±∞ as an intermediate result is only an issue if you have later cancellation coming that can bring the result back into a finite range. To take care of that case, always try to add in arguments with the opposite sign from your running total first.
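As a point of reference for step 2, a sketch of the exact two-sum step that such a partials representation relies on: for finite doubles a and b it returns hi = fl(a + b) plus the exact rounding error lo, so hi + lo equals a + b exactly. The function name is illustrative:

function twoSum(a, b) {
  // Knuth's branch-free TwoSum
  const hi = a + b;
  const bVirtual = hi - a;
  const aVirtual = hi - bVirtual;
  const lo = (a - aVirtual) + (b - bVirtual);
  return [hi, lo];
}

// twoSum(1e100, 1) → [1e100, 1]: the 1 that naive addition would drop is kept in lo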
20:58
<ljharb>
presumably -∞ yields -∞, but what does ∞ + -∞ yield?
20:58
<bakkot>
NaN
20:58
<waldemar>
NaN
20:58
<bakkot>
same as normal
21:09
<littledan>
The claim is that people won’t use instanceof or toString?
21:09
<littledan>
It seems like accessing these properties and getting the “wrong” value is a risk
21:09
<ljharb>
that they won't likely depend on the exact string output of toString, for one - that's been the case in the past when we've added toStringTag to things
21:10
<ljharb>
and for the constructor, the constructor isn't a global, so i think the likelihood someone will use it for iterator helpers is low
21:10
<bakkot>
... yes it is?
21:10
<bakkot>
Iterator is a global
21:10
<bakkot>
in this proposal
21:10
<ljharb>
oh right sorry this is Iterator not IteratorHelpers
21:11
<bakkot>
so I think the likelihood of someone using it is in fact pretty high
21:11
<ljharb>
ok so scratch that part, i still don't think people are likely to do instanceof Iterator tho
21:11
<bakkot>
I definitely expect people to do that
21:11
<bakkot>
people use instanceof a lot
21:11
<bakkot>
you don't, I don't, but other people do
21:11
<ljharb>
rather than just throwing it through Iterator.from?
21:11
<bakkot>
... uh, definitely yes?
21:11
<rbuckton>
The claim is that people won’t use instanceof or toString?
If there's no constructor, wouldn't x instanceof Iterator still be fine? You just wouldn't be able to rely on x instanceof someOtherIter.constructor
21:12
<ljharb>
ok
21:12
<rbuckton>
instanceof doesn't depend on prototype.constructor
21:12
<ljharb>
then i think the best thing is to hold off on the proposal until the remaining couple sites are migrated
21:12
<bakkot>
noooooooo
21:12
<ljharb>
ron, can you say that on the queue then?
21:12
<littledan>
+1 to Kevin
21:13
<rbuckton>
instanceof depends on Constructor.prototype
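A small illustration with a plain class (not the Iterator proposal itself): instanceof consults C.prototype and the object's prototype chain, and the .constructor property plays no part in it.

class C {}
const obj = new C();
delete C.prototype.constructor;      // drop .constructor entirely
console.log(obj instanceof C);       // true: instanceof only looks at C.prototype
console.log(obj.constructor === C);  // false: lookup falls back to Object.prototype.constructor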
21:13
<littledan>
Sorry I wasn’t able to use the queue for a minute
21:13
<ljharb>
what's two more months compared to "possibly gross forever"
21:14
<Chris de Almeida>
Sorry I wasn’t able to use the queue for a minute
no big deal 🙂
21:14
<littledan>
Apologies for throwing the conversation off by misremembering instanceof semantics
21:15
<rbuckton>
x.constructor === whatever is almost never a good idea since it's so fragile
21:15
<bakkot>
I agree but still expect people to write it
21:15
<littledan>
what's two more months compared to "possibly gross forever"
The answer to this question depends on how gross we are talking about and how many iterations on “two more months” will happen
21:15
<bakkot>
If we had the constraint that we only had to worry about breaking good code, the world would be much nicer
21:16
<ljharb>
The answer to this question depends on how gross we are talking about and how many iterations on “two more months” will happen
that's true. but "forever" is longer than a high number of iterations, and a little grossness accumulates over time
21:16
<rbuckton>
Not adding .constructor doesn't break anything, since %IteratorPrototype% never had a constructor prior to this proposal
21:17
<rbuckton>
It might break an expectation when writing new code, but it wouldn't break existing code.
21:17
<rbuckton>
At least, no existing code that isn't relying on a proposed feature that hasn't yet reached Stage 4
21:17
<littledan>
To be clear I think having .constructor and Object.prototype.toString being “wrong” is grossness and the proposed alternative is “less gross” than omitting things
21:18
<bakkot>
rbuckton: right, but the concern is that people would come to depend on its absence once Iterator becomes a global, which I think is reasonably likely
21:18
<bakkot>
whereas I think there is much less chance of people coming to depend on these properties being accessors, as long as the people in this room agree not to do that
21:18
<ljharb>
wouldn't they depend on it by writing iterator instanceof Iterator tho, which wouldn't stop working later?
21:19
<ljharb>
if someone would otherwise write iterator.constructor === Iterator, i mean
21:19
<bakkot>
someone could write if (val.constructor !== Iterator) val = Iterator.from(val), and that would start going down a different code path when Iterator.prototype.constructor was added, and that could easily break something
21:20
<bakkot>
I agree that people shouldn't write that code but I think it's reasonably likely someone will
21:20
<ljharb>
i'm skeptical it's likely they would think about that optimization
21:20
<ljharb>
people don't do that with Promise.resolve now
21:21
<bakkot>
uhhh lots of people do that
21:21
<bakkot>
or if (!Array.isArray(x)) x = Array.from(x) or whatever
21:21
<bakkot>
that is very common
21:21
<ljharb>
for arrays yes
21:21
<ljharb>
but for anything else? arrays are a bit unique imo
21:21
<bakkot>
iterators are more like arrays than promises
21:25
<bakkot>

waldemar: can you elaborate on "always try to add in arguments with the opposite sign from your running total first"? at what point during the algorithm do you mean? there's a basic Python implementation given at https://code.activestate.com/recipes/393090/ ; can you suggest the change you're proposing as a diff to this algorithm? (This algorithm is bugged, on the last line, but otherwise correct I believe)

def msum(iterable):
    "Full precision summation using multiple floats for intermediate values"
    # Rounded x+y stored in hi with the round-off stored in lo.  Together
    # hi+lo are exactly equal to x+y.  The inner loop applies hi/lo summation
    # to each partial so that the list of partial sums remains exact.
    # Depends on IEEE-754 arithmetic guarantees.  See proof of correctness at:
    # www-2.cs.cmu.edu/afs/cs/project/quake/public/papers/robust-arithmetic.ps

    partials = []               # sorted, non-overlapping partial sums
    for x in iterable:
        i = 0
        for y in partials:
            if abs(x) < abs(y):
                x, y = y, x
            hi = x + y
            lo = y - (hi - x)
            if lo:
                partials[i] = lo
                i += 1
            x = hi
        partials[i:] = [x]
    return sum(partials, 0.0)
21:27
<snek>
retvrn to associative arrays
21:36
<waldemar>
Whenever you add the next addend to the running total, prefer to pick an addend with the opposite sign from the running total if one exists. The sign of the running total is the sign of the first partial.
21:37
<snek>
do you get to pick any addend except the next one and still call it O(n)?
21:37
<bakkot>
How do you find such an addend? Sort the whole list? That's pretty expensive.
21:38
<waldemar>
Addition of finite IEEE doubles with opposite signs can never produce ±∞. This way you can only get ±∞ if you've run out of addends of the opposite signs to your running total, in which case you've correctly overflowed to ±∞.
21:38
<bakkot>
Dividing it up by sign is sufficient, I guess, and cheaper.
21:38
<bakkot>
Though it does require keeping the whole list in memory, which previously was not required. Keeping the whole list in memory is potentially expensive also.
21:40
<waldemar>
If you want to do one pass, you can also do the lazy approach and worry about it only if you get ±∞ as an intermediate result, in which case you'd back up by one addend and look for addends of the opposite sign.
21:41
<littledan>
Not having thought about this deeply, I like the idea of variadic Iterator.from
21:41
<waldemar>
That's similar to the approach of dealing with NaN's or ±∞ as inputs. If you see one of those, you want to ignore all finite inputs and only add the ±∞ and NaN's using IEEE double arithmetic. You can either scan for them in a pre-pass or just switch to the mode of ignoring finite values the first time you see a non-finite addend.
21:42
<waldemar>
I'm writing all of this as an issue on the proposal.
21:43
<snek>
can we have Iterator.from(...) and Iterator#concat
21:43
<snek>
and flat
21:43
<snek>
lets just do everything
21:45
<bakkot>
waldemar: when you encounter a NaN or ±∞ you can skip intermediate results, whereas when searching for something of the other sign you have to keep all those values. That's fine if you're summing values from an Array, but not when summing values from an iterable, since those are one-shot.
21:45
<bakkot>
I'm not saying that's a fatal problem, just that it's more overhead, and might be infeasible with extremely large iterables.
21:50
<Andreu Botella>
I might be wrong, but you could keep two pointers, one with the next positive value, and one with the next negative, and that would still be O(N) with no extra memory, you'd just do at most two whole iterations over the array
21:50
<waldemar>
That's quite annoying. Picking operands with the opposite sign is the simplest approach to deal with this. If you really want to do this in one pass, you can also scale down the exponent of the most significant partial if you get an overflow by, say, 50 powers of 2. This will work as long as you have no more than 2^50 addends.
21:51
<bakkot>
Andreu Botella: the concern I have applies when you're summing a one-shot source, not when you're summing an array
22:01
<bakkot>
waldemar: To confirm my understanding, you're suggesting that when you would otherwise overflow, you instead introduce an additional partial which is specially marked as being scaled? and then if that partial still exists at the end you've actually overflowed in the final sum? I'll have to think about how to handle that partial but I think that makes sense.
22:01
<bakkot>
I'm fine saying that you get a rangeerror if you have more than 2**50 (or whatever) addends, so that this can be precise. I doubt that rangeerror would come up in practice anyway.
22:02
<bakkot>
I'll have to try implementing this before bringing it back.
22:07
<ryzokuken 🇼đŸ‡č>
The link to the slides: https://notes.igalia.com/p/nxMdcUtbb#/
22:55
<waldemar>
waldemar: To confirm my understanding, you're suggesting that when you would otherwise overflow, you instead introduce an additional partial which is specially marked as being scaled? and then if that partial still exists at the end you've actually overflowed in the final sum? I'll have to think about how to handle that partial but I think that makes sense.
Yes. You'd scale p_0 by dividing it and anything you add to it by 2^50 (or whatever), and multiplying back by 2^50 when you start working on p_1. It's a bit tedious but can be done precisely and efficiently.
22:56
<waldemar>
Of course you'd only do this if you're close to overflowing.
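The scaling is lossless because multiplying or dividing a finite double by a power of two only adjusts its exponent, as long as the scaled value stays normal. A quick illustration (2**50 chosen arbitrarily):

const SCALE = 2 ** 50;
const x = Number.MAX_VALUE;
console.log((x / SCALE) * SCALE === x); // true: no mantissa bits are lost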
22:57
<waldemar>
Filed https://github.com/bakkot/proposal-math-sum/issues/1 on this.
23:04
<bakkot>
waldemar: OK, great. Thanks for the writeup. I'm going to have to try implementing this before I am confident proposing it (and maybe I'll submit a bugfix to cpython while I'm at it), but I think I understand the how to do it in a single pass.
23:04
<bakkot>
I'm hoping the performance is good enough for engines to be comfortable shipping it as Math.sum. If they're not, I may suggest naming this Math.fsum or something, to make it clear that this is special i.e. potentially much slower than naive summation.
23:31
<Michael Ficarra>

ljharb: here's the relevant notes from the last meeting and why I thought that this was not possible

NRO: If you don’t want to wait too long to ship this, could we ship it without the constructor property until when we know it’s safe to do so? Because, like, I know that that’s burden of polyfill containers and people that care about compat matters, but, like, in practice, would this be an okay way to, like, avoid this hack, if we cannot solve the problem quick enough?

JWK: I think it’s not possible because the iterator prototype is already reachable via Array#values. The iterator prototype already exists and this proposal just exposes it on the global. You cannot really move a thing from it.

CDA: Okay, last on the queue is Jordan. Plus one to drop constructor and toStringTag temporarily in the meantime. End of message.

JHD: I did just want to add to Jack’s comment, the hidden iterator intrinsic does not have an iterator property. It tries to add it. It will fall back to Object.prototype.

JWK: Oh, I don’t know that, if that’s the case, maybe we can do it.

23:32
<Michael Ficarra>
but after reading this conversation and thinking about it more, I still prefer the weird accessors over omitting the properties
23:32
<Michael Ficarra>
I'm not sure how to resolve this since they both solve the immediate issue
23:32
<ljharb>
Reflect.ownKeys([].keys().__proto__.__proto__) has only Symbol.iterator on it
23:32
<ljharb>
so it'd be omission, not removal
23:33
<Michael Ficarra>
yes, that was clarified
23:33
<ljharb>
it seems very likely that eventually those websites will get updated, and whatever workaround we use will be something that we want to undo
23:33
<ljharb>
omission is a much easier mistake to unmake than the accessors
23:33
<Michael Ficarra>
I don't see why anyone would ever go out of their way to observe that they are accessors
23:34
<ljharb>
https://npmjs.com/get-intrinsic does, because it's using getOwnPropertyDescriptor
23:35
<ljharb>
it's not that i think someone will actually care which it is; it's that i know that code written to assume one kind of descriptor will break if that kind changes. and SES lockdown does change them, and it does break code, so i have actual evidence that this breakage is a problem
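A hedged sketch (not get-intrinsic's actual code) of how the descriptor shape leaks into consumer code: anything written against a data property has to take a different path when the property ships as an accessor.

const IteratorPrototype = Object.getPrototypeOf(Object.getPrototypeOf([].values()));
const desc = Object.getOwnPropertyDescriptor(IteratorPrototype, 'constructor');
const ctor = desc && ('value' in desc
  ? desc.value                          // data property, the shape most code assumes
  : desc.get.call(IteratorPrototype));  // accessor: a shape callers rarely anticipate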
23:36
<littledan>
it seems very likely that eventually those websites will get updated, and whatever workaround we use will be something that we want to undo
I don't think we can count on websites eventually getting updated in this way. We've made other decisions to work around old libraries before.
23:37
<ljharb>
If there was a way to do a Get without traversing the prototype chain then I’d use that, and it’d be immune to this breakage.
23:37
<littledan>
https://npmjs.com/get-intrinsic does, because it's using getOwnPropertyDescriptor
I kinda feel like expert polyfill authors will have an easier time working through these issues than ordinary JS developers who are doing x.constructor === Y
23:37
<ljharb>
I don't think we can count on websites eventually getting updated in this way. We've made other decisions to work around old libraries before.
in this case it’s a concrete list of known customers so i think we can count on it more than usual.
23:38
<ljharb>
I kinda feel like expert polyfill authors will have an easier time working through these issues than ordinary JS developers who are doing x.constructor === Y
if we never ship constructor then they just won’t do that
23:39
<Michael Ficarra>
leaving them as accessors is also a perfectly fine state
23:40
<ljharb>
i don’t agree, unless that’s the pattern we’re going to consistently follow elsewhere, at least in new things
23:40
<Michael Ficarra>
when this exact kind of breakage happens, yeah, that's probably the plan
23:41
<littledan>
Ultimately I think either option is OK and am just unconvinced by the strong arguments on both sides
23:42
<littledan>
I don’t know how we should make decisions in these cases. “First one to back down” seems like not completely optimal..
23:43
<Michael Ficarra>
there are no strong arguments in either direction here
23:43
<ljharb>
We can also wait.
23:44
<Michael Ficarra>
Chrome was not willing to hold off on shipping this proposal while we wait for more of those customers to upgrade, if that's what you're suggesting
23:45
<ljharb>
what would chrome do if we didn’t suggest a change?
23:45
<ljharb>
presumably they’d pick one of the three options, or “break them anyways”
23:45
<Michael Ficarra>
they would not break these websites
23:46
<Michael Ficarra>
they would probably move forward with one of these two options
23:46
<Michael Ficarra>
I don't speak for Chrome though
23:46
<littledan>
presumably they’d pick one of the three options, or “break them anyways”
I think the hope is that TC39 will make a recommendation
23:46
<littledan>
This is sort of our job

23:46
<littledan>
It is not clear to me what we should be waiting for
23:47
<littledan>
If it works, I would rather choose Jordan’s preferred option than wait for something more beautiful to come along. This just isn’t a big or complex design space; we already understand it, I think
23:47
<ljharb>
i agree that’s our job, but if we can’t agree on the recommendation then isn’t the recommendation to either wait or break?
23:48
<littledan>
i agree that’s our job, but if we can’t agree on the recommendation then isn’t the recommendation to either wait or break?
I don’t think we should be complacent about this state of not agreeing; we should figure out how to agree, rather than calling ourselves virtuous for being thoughtful
23:48
<ljharb>
this isn’t like mootools or something where there’s thousands of sites with no good way to contact them. I think we won’t have to wait very long.
23:49
<littledan>
this isn’t like mootools or something where there’s thousands of sites with no good way to contact them. I think we won’t have to wait very long.
Sorry but this is where we have heard explicit disagreement from Chrome. I think we should focus on choosing between your option and Michael’s
23:49
<Michael Ficarra>
yeah I'm glad you can have such confidence, but Chrome can't make calls based on how confident Jordan is
23:50
<littledan>
(By your option I mean omitting the properties)
23:50
<Michael Ficarra>
the call they made is that they're not willing to wait any longer
23:50
<ljharb>
i mean they could :-) they just didn’t/wont
23:51
<littledan>
OK, so, can we just go with the flow of that and try to make a decision between omitting them and using getters? I don’t see what it would serve to push back on Chrome here
23:53
<Michael Ficarra>
honestly, I think the web compat issue (at least this particular one) will be resolved in another 3-6 months
23:53
<Michael Ficarra>
but if we omit them, we have to come back to this later and see if it's web-compatible to add data properties
23:53
<Michael Ficarra>
and if it's not, we'll have to add the accessors anyway
23:54
<Michael Ficarra>
whereas, if we add accessors now, we don't ever have to revisit this if we don't want to
23:55
<Michael Ficarra>
crucially, I don't have to be the one to revisit this and Chrome doesn't have to be the one to risk web breakage again for basically no benefit to anyone