02:23
<Richard Gibson>

it looks like the validation order of receiver.set(source, offset) is basically

  1. Throw if receiver is not a TypedArray.
  2. [in ToIntegerOrInfinity] Throw if offset fails to coerce to a Number.
  3. Throw if offset coerces to negative.
  4. [later steps are in SetTypedArrayFrom{TypedArray,ArrayLike}]
  5. Throw if receiver is associated with a detached ArrayBuffer.
  6. [only in SetTypedArrayFromTypedArray] Throw if source is associated with a detached ArrayBuffer.
  7. [only in SetTypedArrayFromArrayLike] Throw if source fails to coerce to an Object or fails to return a "length" that coerces to a Number (which is otherwise clamped to the inclusive interval from 0 to 2**53-1 for later steps).
  8. Throw if offset is infinite or offset plus the length of source exceeds the length of receiver.
  9. [only in SetTypedArrayFromTypedArray] Throw if receiver and source have different [[ContentType]] (one BigInt while the other is Number).

It makes sense to reject infinite offset at the same place as rejecting finite-but-too-big offset (since both are effectively the same issue), but the overall behavior would be better if it were consistent with how-we-work#119 and performed all validation of offset in the same place after getting the length of source—i.e., moving steps 2 and 3 above to immediately before step 8 (and ideally also moving step 9 above to immediately after step 6). Such error reshuffling would indeed be normative, but probably safe (if perhaps too minor to be worthwhile).
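A rough, non-normative sketch of that current order in plain JavaScript (the function name is just for illustration, the detachment checks and the [[ContentType]] check are only described in comments, and the coercions are simplified approximations of the spec's abstract operations):

function typedArraySetOrder(receiver, source, offset) {
  // step 1: receiver must be a TypedArray
  if (!ArrayBuffer.isView(receiver) || receiver instanceof DataView) {
    throw new TypeError('receiver is not a TypedArray');
  }
  // steps 2-3: offset is coerced and sign-checked before source is even looked at
  var targetOffset = Number(offset); // can throw (e.g. a Symbol, or a poisoned valueOf)
  targetOffset = Number.isNaN(targetOffset) ? 0 : Math.trunc(targetOffset); // ~ToIntegerOrInfinity
  if (targetOffset < 0) throw new RangeError('offset must be non-negative');
  // step 5: throw if receiver's ArrayBuffer is detached (elided here)
  // steps 6-7: validate source: a detached-buffer check for typed-array sources,
  //            or ToObject plus a clamped "length" lookup for array-like sources
  var srcLength = Math.min(Math.max(Math.trunc(Number(Object(source).length)) || 0, 0), 2 ** 53 - 1);
  // step 8: the combined bounds check happens only after source's length is known
  if (targetOffset === Infinity || srcLength + targetOffset > receiver.length) {
    throw new RangeError('offset is out of bounds');
  }
  // step 9 (typed-array sources only): throw on [[ContentType]] mismatch, then perform the copy
}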

03:46
<ljharb>
gotcha, that all makes sense
03:46
<ljharb>
i'll leave it be i suppose
17:48
<ljharb>
ok, related - can someone convince me that step 25 isn't broken? https://262.ecma-international.org/12.0/#sec-settypedarrayfromtypedarray my 1:1 implementation fails a test on Float32/Float64 arrays, but passes if i unconditionally do step 26
17:50
<shu>
can you elaborate more on what you mean by "broken"
17:51
<shu>
that reads okay to me?
17:52
<ljharb>
so, step 25 basically copies byte by byte - but i'm wondering if perhaps limit isn't correct or something
17:53
<ljharb>
the example i'm working with is new Float32Array([1, 2, 3]).set(new Float32Array([10]), 1) which invokes SetTypedArrayFromTypedArray
17:54
<ljharb>
it's quite obviously possible that my GetValueFromBuffer or SetValueInBuffer implementation is wrong - since it's only wrong for Float32 and Float64, it might be in RawBytesToNumeric somewhere
17:54
<shu>
step 26 also uses limit
17:55
<shu>
the only difference is whether you advance the index 1 byte at a time or ElementSize at a time
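To make that concrete, the two loops render roughly as follows (the spec's GetValueFromBuffer and SetValueInBuffer abstract operations are written as same-named placeholder functions, and srcBuffer, targetBuffer, the byte indices, limit, the types, and the element sizes all come from the algorithm's earlier steps):

// step 25 (srcType is targetType): copy raw bytes, one byte per iteration
while (targetByteIndex < limit) {
  var rawByte = GetValueFromBuffer(srcBuffer, srcByteIndex, 'Uint8', true, 'Unordered');
  SetValueInBuffer(targetBuffer, targetByteIndex, 'Uint8', rawByte, true, 'Unordered');
  srcByteIndex += 1;
  targetByteIndex += 1;
}

// step 26 (differing element types): convert one whole element per iteration
while (targetByteIndex < limit) {
  var value = GetValueFromBuffer(srcBuffer, srcByteIndex, srcType, true, 'Unordered');
  SetValueInBuffer(targetBuffer, targetByteIndex, targetType, value, true, 'Unordered');
  srcByteIndex += srcElementSize;
  targetByteIndex += targetElementSize;
}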
17:55
<ljharb>
true
17:55
<ljharb>
ok then i'll have to dig into my other implementations, thanks for confirming
17:55
<shu>
gl
17:56
<ljharb>
altho - can you help me find an example where skipping step 25 entirely would break, so i can add a test case?
17:56
<shu>
NaNs is the canonical example in my mind
17:56
<shu>
25 says preserve NaN bit patterns
17:56
<shu>
26 says don't have to
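A sketch of such a test, assuming a little-endian byte layout: it plants a Float64 NaN with a non-canonical payload and checks that a same-type set() preserves the bytes, which the step-25 byte copy guarantees and a step-26-style element-by-element conversion would not have to.

// 0x7FF8000000000001: a quiet NaN whose payload differs from the canonical NaN
var src = new Float64Array(1);
new Uint8Array(src.buffer).set([0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0xF8, 0x7F]);

var dst = new Float64Array(1);
dst.set(src); // same element type, so step 25 requires a bit-preserving byte copy

var srcBytes = Array.from(new Uint8Array(src.buffer)).join(',');
var dstBytes = Array.from(new Uint8Array(dst.buffer)).join(',');
console.log(srcBytes === dstBytes); // expected: true in a conforming implementation

// skipping step 25 and always converting element by element would permit the
// engine to canonicalize the NaN on the way through, losing the payload bits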
17:56
<ljharb>
ahhh ok, hm
18:29
<ljharb>
phew, ok found the problem, thanks for the confirmation
18:30
<shu>
what was the issue?
18:53
<ljharb>

for step 6 ("Let rawValue be a List of elementSize containing, in order, the elementSize sequence of bytes starting with block[byteIndex]."), i was doing

var rawValue = $slice(new $Uint8Array(arrayBuffer), byteIndex, 0, elementSize); // step 6

and i changed it to

var rawValue = $slice(new $Uint8Array(arrayBuffer, byteIndex), 0, elementSize); // step 6

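In other words, the fix was where the Uint8Array constructor call closes. Assuming $slice is a call-bound slice, the same difference with direct method calls looks like this (the byteIndex and elementSize values are made up):

var buf = new ArrayBuffer(8);
new Uint8Array(buf).set([10, 11, 12, 13, 14, 15, 16, 17]);
var byteIndex = 4;
var elementSize = 4;

// buggy form: the view covers the whole buffer, and slice(byteIndex, 0) is always empty
new Uint8Array(buf).slice(byteIndex, 0); // Uint8Array []

// fixed form: the view starts at byteIndex, so slicing its first elementSize entries
// yields exactly the elementSize bytes starting at block[byteIndex], as step 6 requires
new Uint8Array(buf, byteIndex).slice(0, elementSize); // Uint8Array [14, 15, 16, 17]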
19:02
<shu>
ah ha