14:12
<Himanshu Shubham>

Thanks for the resource, really appreciate it.
I can see here that they calculate the new start and then return a new object with that start. It's O(1) if I'm not wrong (see the sketch below).

May I know why you said the GC complexity is gross?
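
(A minimal sketch of the O(1) left-trim being described, assuming a made-up V8-like layout of [map][length][e0][e1]... with one machine word per field; every name here is illustrative, not real V8 code:)

```cpp
#include <cstdint>

using Word = uintptr_t;  // assumption: every field is one machine word

// Assumed heap layout: [map][length][e0][e1]...[eN-1].
// Trimming the first element frees exactly one word at the front, so the
// header can simply be rewritten one word later and the new address
// returned -- O(1), with no copying of the remaining N-1 elements.
Word* LeftTrimOne(Word* obj) {
  Word map = obj[0];
  Word length = obj[1];
  Word* trimmed = obj + 1;  // the object now starts one word later
  trimmed[0] = map;         // new header lands where `length` and e0 were
  trimmed[1] = length - 1;
  return trimmed;           // callers must switch to this new address
}
```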

17:03
<shu>
by that i meant that we'd prefer that this optimization didn't exist
17:05
<shu>
because it is fundamentally kind of dangerous: you used to have a GC object in the heap at location p, and now you've split it into two
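
(Roughly what that split looks like, continuing the same made-up layout: the word freed at the old address p can't be left as raw garbage, because heap walkers expect back-to-back valid objects, so a tiny filler object is written there; kFillerMap is an invented stand-in:)

```cpp
const Word kFillerMap = 0xF111;  // invented: "map of a one-word filler"

Word* LeftTrimOneWithFiller(Word* obj) {
  Word map = obj[0];
  Word length = obj[1];
  obj[0] = kFillerMap;      // the old address p is now a 1-word filler object
  Word* trimmed = obj + 1;  // the real array begins here from now on
  trimmed[0] = map;
  trimmed[1] = length - 1;
  return trimmed;           // the heap now holds two objects where p had one
}
```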
17:05
<shu>
this kind of object surgery is easy to get wrong and easy to forget in other parts of the system, adding complexity
17:06
<shu>
for example, if you're marking the heap concurrently with the mutator in another thread, what if left trimming happens at the same time for an object you're scanning?
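
(A sketch of that race with the same invented layout: a marker thread walks the object at p with no synchronization against the mutator; visit() is a stand-in for "mark this slot":)

```cpp
void visit(Word slot);  // stand-in: mark whatever this slot points to

void MarkObject(Word* p) {
  Word length = p[1];    // (1) marker reads the length word
  // ... if the mutator runs LeftTrimOneWithFiller(p) right here,
  //     p[0] becomes kFillerMap, p[1] becomes the array's map word,
  //     and the live object now starts at p + 1.
  for (Word i = 0; i < length; i++) {
    visit(p[2 + i]);     // (2) the marker can now visit a header word as if
  }                      //     it were a pointer, or -- if the length read
}                        //     itself raced -- scan far past the object
```

(An engine has to rule this interleaving out somehow, whether by a lock, a handshake with the marker, or disallowing the trim while marking is active, which is exactly the kind of extra complexity being described.)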
21:20
<Himanshu Shubham>
While I might not grasp all the technical details, I do get the main idea. So, does SpiderMonkey's optimization in this area also introduce potential bugs, or did they manage it differently? The article I was referring to: https://jandemooij.nl/blog/some-spidermonkey-optimizations-in-firefox-quantum/
21:28
<bakkot>
All engine optimizations introduce potential bugs.