08:36 | <Rob Palmer> | Hongbo is suggesting the iterator protocol has a high performance overhead (25x perf loss). Do we know if this cost is effectively mandated in the spec, or is this just an unfulfilled optimization opportunity in engines? https://www.moonbitlang.com/blog/js-support#benchmark-code-execution-speed-exceeds-javascript-by-25-times |
09:35 | <rbuckton> | My hope is that engines could optimize Array iteration, but I'm not sure about other cases |
09:45 | <rbuckton> | With something like iterator helpers, engines could theoretically optimize some parts of iteration knowing the shapes of the inputs and the whole of the graph of iteration operations. It's no small task, though, as it requires verifying that no intermediate steps are observable (proxies, user-defined iterators, patched methods, etc.). |
14:59 | <littledan> | Engines brought up the overhead of the iteration protocol at the most recent TC39 meeting, as a source of hesitation for the pattern matching proposal's semantics |
15:00 | <littledan> | engines sometimes can reduce or eliminate the overhead in particular cases (e.g., for-of loops over Arrays, as long as you didn't mess with Array.prototype too much) but these optimizations are fragile and difficult to generalize |
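[A minimal sketch of why the Array fast path littledan mentions is fragile: `for-of` must honor a patched `Array.prototype[Symbol.iterator]`, so engines can only take the fast path while the prototype is pristine. The patched iterator below is purely illustrative.]

```javascript
// for-of over an Array normally hits an engine fast path, but the
// protocol lookup is observable, so the fast path must be guarded.
const arr = [1, 2, 3];

const plain = [];
for (const x of arr) plain.push(x); // fast path: [1, 2, 3]

// Patching Array.prototype[Symbol.iterator] is fully observable,
// which forces engines to deoptimize iteration over all arrays.
const original = Array.prototype[Symbol.iterator];
Array.prototype[Symbol.iterator] = function* () {
  for (let i = this.length - 1; i >= 0; i--) yield this[i];
};

const patched = [];
for (const x of arr) patched.push(x); // now [3, 2, 1]

Array.prototype[Symbol.iterator] = original; // restore the fast path
```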
15:00 | <littledan> | I think if we were to do the iteration protocol today, we'd do it differently. But at this point, it'd be expensive to have multiple iteration protocols... |
15:01 | <bakkot> | see some discussion earlier: https://matrixlogs.bakkot.com/TC39_Delegates/2024-04-25#L21 |
15:02 | <bakkot> | that said there is a lot of room for optimizing iterators, in many cases without much in the way of performance cliffs |
15:02 | <bakkot> | it's just a lot of work |
15:09 | <bakkot> | (also it is extremely unlikely to ever be as fast as a bare loop, even with a huge amount of work) |
15:16 | <bakkot> | if we are interested in making iterator helpers faster, something we could do (along the lines of Keith's suggestion I linked above) is make all the { value, done } pairs yielded by a given call to an iterator helper be the same object |
15:18 | <bakkot> | so like
|
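[The code bakkot pasted here is missing from the log. A hedged sketch of the idea as described above, using a hypothetical `map`-style helper that reuses one `{ value, done }` object for every step rather than allocating a fresh result per `next()` call; this is not the spec'd iterator-helper behavior.]

```javascript
// Hypothetical: an iterator helper that reuses a single result object
// across all calls to next(), avoiding a per-step allocation.
function mapReusingResult(iterable, fn) {
  const inner = iterable[Symbol.iterator]();
  const result = { value: undefined, done: false }; // allocated once
  return {
    [Symbol.iterator]() { return this; },
    next() {
      const step = inner.next();
      result.done = step.done;
      result.value = step.done ? undefined : fn(step.value);
      return result; // same object identity every time
    },
  };
}

const out = [...mapReusingResult([1, 2, 3], x => x * 2)]; // [2, 4, 6]
```

Consumers that read `value`/`done` and move on (for-of, spread, destructuring) never notice the shared identity; only code that holds on to a result object across steps could observe it, which is the "conceptually quite gross" part.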
15:19 | <bakkot> | this would avoid most of the overhead and no one would ever notice |
15:19 | <bakkot> | but it is, as littledan indicates, conceptually quite gross |
15:46 | <mgaudet> | Hongbo is suggesting the iterator protocol has a high performance overhead (25x perf loss). Do we know if this cost is effectively mandated in the spec, or is this just an unfulfilled optimization opportunity in engines? Both? The iterator protocol imposes a lot of complexity; some of that complexity can be optimized through heroic work in JS engines (and has been!)... but the heroics mean that it's costly to do, particularly in any generalizable fashion. I haven't looked at Iterator Helpers in a long while, but I'd bet they could certainly have more optimization applied over time. I suspect a similar story applies there, though: could we make them faster? Sure, but that work displaces other work, so we'd need to see it as important enough. |
15:56 | <bakkot> | by coincidence I just read an article which touches on performance of array destructuring, which is the same problem https://www.figma.com/blog/figmas-journey-to-typescript-compiling-away-our-custom-programming-language/#performance-issues-with-array-destructuring |
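[An illustration of the connection bakkot is drawing: array destructuring goes through the iterator protocol rather than indexed access, and every step is observable, which is why it cannot simply compile to `obj[0]`/`obj[1]`. The object below is a minimal custom iterable written for this example.]

```javascript
// Destructuring `[a, b] = obj` invokes obj[Symbol.iterator]() and then
// next() once per binding; the calls are observable user code.
const calls = [];
const obj = {
  [Symbol.iterator]() {
    let i = 0;
    return {
      next() {
        calls.push('next');
        return i < 2
          ? { value: i++, done: false }
          : { value: undefined, done: true };
      },
    };
  },
};

const [a, b] = obj; // a === 0, b === 1, and calls.length === 2
```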
17:07 | <shu> | down with iteration protocol |
17:16 | <littledan> | if only we said "iterables can't contain undefined", then we would have .next() simply return undefined when it's done. Problem solved! |
17:16 | <shu> | if we are interested in making iterator helpers faster, something we could do is make all the |
17:50 | <littledan> | is this like, inside you there are two iterator results, one is the object x, and the other is also x, you are x ? |
18:21 | <Chris de Almeida> | savaged again by copy-by-reference! |
18:31 | <ljharb> | JS is always pass by value, though :-) |
18:41 | <Chris de Almeida> | tfw reference is value |