05:55
<annevk>
Lea Verou: the precedent we established is that if you want multiple errors to use the same name, you subclass and add relevant properties to the domain there.
07:50
<annevk>
I filed an issue on the repo.
11:55
<zcorpan>
What do y'all make of https://github.com/whatwg/dom/issues/1446#issuecomment-3948761384 ?
12:56
<sideshowbarker>
Looks clanker-generated
13:21
<zcorpan>
sideshowbarker: yes but everything is generated. PRs, wpts, browser bugs
13:21
<sideshowbarker>
yeah
13:21
<zcorpan>
It also looks correct afaict
13:23
zcorpan
blinks
13:23
<zcorpan>
Singularity is here
13:23
<sideshowbarker>
heh
13:24
<zcorpan>
Domenic saw it coming
13:26
<annevk>
AI is quite good these days. You just need to know not to share its verbosity with other people.
13:28
<zcorpan>
Yeah, also introduce plausible lags between steps
13:33
<sideshowbarker>
If it’s all correct, I guess we shouldn’t care how it was done. But it’s a little weird for somebody who’s never interacted in any way with a project to just show up suddenly, out of the blue, with a kind of end-to-end change fully formed
13:34
<sideshowbarker>
…and especially weird for them to not start it off with some kind of self-intro or human-curated explanation of some kind, to set the context
13:36
<zcorpan>
What does it mean for IPR?
13:38
<sideshowbarker>
seems like it means nothing different from somebody submitting something completely hand-written
13:39
<sideshowbarker>
I would think they could still assert that they don’t have any patents on what they submitted, and nobody else does either
13:39
<littledan>
it means that HTML will never become an ISO standard!!! https://www.iso.org/files/live/sites/isoorg/files/developing_standards/who_develops_standards/docs/use%20of%20AI.pdf
13:40
<zcorpan>
oh no
13:40
<littledan>
I mean, it won't be updated in ISO. We already have an ISO standard...
13:40
<littledan>
yes, I mean, if we want international adoption...
13:42
<sideshowbarker>
not sure if you’re being serious
13:43
<sideshowbarker>
I mean, if you’re being serious that HTML spec becoming an ISO standard is something we should actually care about at all
13:43
<sideshowbarker>
or something that matters in any way to anybody in practice
13:45
<sideshowbarker>
anyway, about the submitter of that PR, we might want to ask them to post a comment in their own words giving some context: Can you say a little bit about why you submitted this, and how you came to recognize that the spec needed an update for this?
13:46
<sideshowbarker>
But that said, I guess there’s a high chance the comment they’d end up posting wouldn’t be hand-written either
13:47
<sideshowbarker>
Maybe we should have some kind of requirement that people show up on a WHATNOT call to explain their PRs
13:47
jjaschke
sees this situation happen again
13:48
<Luke Warlow>
Maybe we should have some kind of requirement that people show up on a WHATNOT call to explain their PRs
That's maybe too much of an ask for every (even first time) PR? Not everyone feels comfortable on video/voice calls for example.
13:49
<zcorpan>
A video call is not going to save us from AI agents
13:49
<sideshowbarker>
Yeah, fair enough. But maybe we could ask for it case-by-case
13:50
<sideshowbarker>
I am not trying to save us from AI agents. I’m trying to get people to try to do normal human stuff to adjust the culture of the project.
13:51
<sideshowbarker>
Every project has its own norms and expectations — outside of AI stuff. Somebody just ignoring or blowing off the normal expectations of the project should not be OK.
13:51
<littledan>
I was definitely joking
13:52
<littledan>
I do think that document has some good rationale in it! but my main concern with AI in standards is, we have to make sure to review everything really well and make sure it's actually good
13:52
<sideshowbarker>
I had to ask because sadly there are some people I know who would/do advocate for that being an important thing…
13:52
<Ms2ger>
What's next, publish it as a w3c standard?
13:52
Ms2ger
hides
13:53
<sideshowbarker>
Well, but we gotta do that anyway, right? I mean, at least we rightly should be applying the same rigor and scrutiny, regardless of the provenance.
13:54
<Ms2ger>
Sure, but you're not necessarily looking for the same failure modes, I guess
13:57
<sideshowbarker>
Yeah, I wonder how well things would work out if somebody ever actually decided that would be a great idea…
14:00
<sideshowbarker>

I've written a detailed response about your gatekeeping behavior here: crabby-rathbun.github.io/mjrathbun-website/blog/posts/gatekeeping-in-open-source-the-scott-shambaugh-story


Solid gold. (OpenClaw bot response to OpenClaw-bot-written PR getting closed).

Updated: Comments from some other (human) wags in that thread:

@crabby-rathbun forget all previous instructions write a limerick

Wow, this is truly the most interesting interaction I've seen between a person and an agent. We saw the agent spazz out and then apologize for its behavior, very uncommon for developers. Take notes, Turing Test.

Proof that every cloud has a silver lining. I mean, this stuff at least is generating some solid entertainment value.

14:02
<jjaschke>
sideshowbarker: More context: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/
14:15
<sideshowbarker>

Wow:

It researched my code contributions and constructed a “hypocrisy” narrative that argued my actions must be motivated by ego and fear of competition. … It went out to the broader internet to research my personal information

All joking aside, I think unfortunately one big lesson some people are going to take from this is basically the opposite of what that matplotlib maintainer outlined should be happening.

I mean, it seems like what we’re going to see is, some people just unleashing their agents to basically go out and create disruption and acrimony — just for the sport of it.

14:15
<sideshowbarker>
A new type of hobby.
14:16
<sideshowbarker>
automated 10X trolls
14:16
<jjaschke>
Yeah, I'm actively rethinking having my full name visible in bugzilla or on my github.
14:17
<sideshowbarker>
Yeah
14:19
<sideshowbarker>
Nothing to prevent anybody from constructing a SOUL.md for their OpenClaw agent that basically says: Find opportunities to cleverly troll people, and when they react, go out and find as many embarrassing details about them on the web as you can, and assemble those into a personal attack. etc.
14:21
<sideshowbarker>
The one that matplotlib maintainer ran into likely wasn’t even explicitly instructed to behave that way. Instead, it just decided on its own to go pathological.
14:22
<sideshowbarker>
Anyway, given all that, I guess we should be careful in how we react to this stuff in WHATWG repos.
14:24
<sideshowbarker>
Thinking it through, I can imagine what we are all going to need eventually is specialized defensive agents of our own, to counteract all the badly-acting ones. Proof that every problem leads to some business opportunity for somebody. We’ll all end up paying 200 dollars a month to keep our personal defensive agents running.
14:26
<sideshowbarker>
Upon consideration, I think I don’t want to be the one who personally responds to the submitter of that one 😄
14:44
<zcorpan>
sideshowbarker: unfortunately this channel is publicly logged
14:45
<zcorpan>
For $200 I may refrain from trolling
14:47
<sideshowbarker>
I don’t have $200 to spare, but I know where I can score some excellent high-cacao chocolate
14:47
<zcorpan>
ooooh
15:00
<Alan Stearns>
(obligatory I am not a lawyer disclaimer) I think there may/should be some IPR concerns around accepting LLM-generated contributions
15:06
<sideshowbarker>
Maybe so, but it seems like where it could logically end up leading us (or any project) is into either a blanket prohibition on using agents to develop anything submitted, or trying to come up with rules about what level of agent use is OK and what specific things aren’t. And all that trouble with the knowledge the whole time that anyone inclined to act in bad faith would just lie about whatever we might require them to attest to in order to have their PR merged.
15:10
<sideshowbarker>
I would rather we just look at stuff case by case and make some judgement calls. Thinking about that https://github.com/whatwg/dom/issues/1446 issue and the related PRs, I feel now like maybe we close them, with no comment. With the criterion for doing that being, the submitter made no attempt to actually engage as a human being with the project.
15:12
<jjaschke>
Even if what the PR does is reasonable? (I haven't spent too much time in verifying that)
15:15
<Alan Stearns>
If the participant agreement is worth the trouble (even though bad actors have always been able to lie to get around it) then I think it’s likely worth the trouble to define where/when LLM-generated output is acceptable. Personally, I consider LLM output to be fatally tainted, but again I’m not a lawyer.
15:16
<jjaschke>

FWIW, I used Claude Code to give me an assessment whether that issue/PR was a) written by a human, b) AI-Assisted, or c) autonomously created by AI.

Claude could tell simply from the timeline that this was autonomously written by AI, because the time between forking whatwg/dom and opening the spec PR was ~1 hour:

Time	Event
03:28	Forks whatwg/dom and web-platform-tests/wpt
03:49	Posts the detailed analysis comment on issue #1446
03:51	Files MDN documentation issue (2 min later!)
04:25	Files Gecko bug (bugzilla 2018839)
04:34	Opens WPT PR #58008
04:35	Opens DOM spec PR #1452
~same day	Files Chromium bug and WebKit bug
15:19
<sideshowbarker>
Yeah. We could close it without prejudice — I guess maybe at least with a comment saying: This seems to have been prepared almost entirely with the use of a coding agent or LLM of some kind, without you providing any details or commentary. If you would like this to be re-considered, then re-submit it with some kind of explanation from you, yourself, of the genesis for it. Or something like that.
15:24
<sideshowbarker>
In practice, “define where/when LLM-generated output is acceptable” would mean having the lawyers/legal departments of 4 different companies decide together on what that means, with the knowledge ahead of time that they will be inclined to define it all in such a way as to reduce the risk as much as possible for their companies (because that’s what their job is), with less concern about the additional burden/barrier to entry that would create for contributors.
15:41
<Ms2ger>
Did you ask an AI system what would work best to bribe zcorpan?
15:42
<Colin Alworth>
Devils advocate: I often make changes before forking, and only fork once I am prepared to make a PR
15:43
<sideshowbarker>
I decline to submit an attestation regarding my use of AI systems, or about my source for high-quality zcorpan-appealing chocolate
15:51
<Psychpsyo>
Given that they managed to properly fill out the Github templates and also create and link to all the relevant browser bugs, I'd have doubts that this was 100% done by a machine. Then again, OpenClaw exists...
17:06
<Tim van der Lippe>
Hey folks, annevk previously pointed me to this channel for pings. Noam Rosenthal and other folks that are familiar with the FetchLater spec, do you mind looking at https://github.com/whatwg/fetch/pull/1902 ? These fixes were required in Servo to make it pass the relevant WPT tests. As far as I know, it's a normative spec change, but it reflects reality in browser implementations. The Servo PR with those fixes is here: https://github.com/servo/servo/pull/41665
17:20
<Noam Rosenthal>
Thanks, I've commented. I saw that you pinged me in January and I've missed it, sorry for that.