04:10 | <annevk> | Shannon Booth: I see you're implementing URLPattern. Would be curious if you agree with the test changes I propose in https://github.com/web-platform-tests/wpt/pull/51316 |
10:14 | <Shannon Booth> | Indeed, I've been working on it a little when I can find time outside of my day job. It makes the '{ "protocol": "http", "port": "80 " }' test fail for me, and the 'https://{sub.}?example{.com/}foo' one pass. Will investigate why it's different. Could very well be an issue on my side; I'm sitting at 335 pass / 15 fail overall, and there are some issues I still need to debug. |
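A rough way to check the two cases above locally, assuming a URLPattern implementation is available (in a browser or via a polyfill); the outcomes are deliberately not asserted here, since they are exactly what the proposed WPT change is about:

    // Hypothetical local repro of the two WPT cases mentioned above.
    for (const input of [
      { protocol: "http", port: "80 " },        // note the trailing space in the port
      "https://{sub.}?example{.com/}foo",
    ]) {
      try {
        const pattern = new URLPattern(input);
        console.log("constructed:", pattern.protocol, pattern.hostname, pattern.port, pattern.pathname);
      } catch (e) {
        console.log("constructor threw for", JSON.stringify(input), String(e));
      }
    }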
12:19 | <Shannon Booth> | I left a comment on your URL merge request with what seems to be a related case that is currently failing locally |
16:18 | <smaug> | How does chromium track intermittently failing tests? Maybe I'm just using wrong keywords when trying to find some information from issues.chromium.org |
16:19 | <smaug> | annevk: you might know if WebKit is doing something like that |
16:20 | <smaug> | The background is that it might be useful to see if some tests are failing in multiple engines - especially if the tests are written in a way where they rely on certain scheduling even though the spec doesn't require that. |
20:20 | <annevk> | Shannon Booth: cool, thanks for taking a look. |
20:22 | <annevk> | smaug: in LayoutTests there are multiple TestExpectations files (one global and then one per platform) where we track tests that are skipped because they're flaky. No standardized annotation I'm afraid, but if you have a file name it should be easyish to find there. |
20:23 | <annevk> | smaug: https://github.com/WebKit/WebKit/blob/main/LayoutTests/TestExpectations (note that some of the things that are globally skipped are enabled on specific ports, we use that for certain types of form control tests for instance) |
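To illustrate the format (the bug number and paths below are invented), a flaky test typically gets an entry like the following in the global file or in a per-port file such as LayoutTests/platform/mac/TestExpectations:

    # Hypothetical entries:
    webkit.org/b/123456 imported/w3c/web-platform-tests/dom/events/some-test.html [ Pass Failure ]
    fast/forms/some-other-test.html [ Skip ]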
22:05 | <jarhar> | > How does chromium track intermittently failing tests? Maybe I'm just using wrong keywords when trying to find some information from issues.chromium.org |
22:05 | <jarhar> | On source.chromium.org |
22:06 | <smaug> | ah, thanks |
22:06 | <smaug> | jarhar: and is there something to hint at intermittent issues? |
22:07 | <smaug> | Perhaps [ Failure Pass ] ? |
22:07 | <jarhar> | If the test is flaky then it is almost certainly listed in TestExpectations. If it's in TestExpectations anywhere, then we most likely just ignore the test's results |
22:07 | <jarhar> | [ Failure Pass ] at the end of an entry means that we accept either failure or pass results from the test |
22:08 | <jarhar> | Apparently "Pass" is implied though, which is unfortunate |
22:08 | <jarhar> | Timeout is another thing that can go in the square brackets |
22:09 | <jarhar> | If the test is failing consistently and isn't flaky or timing out, then instead of being in TestExpectations we might have another file next to it that ends with -expected.txt and contains the failing test cases/assertions |
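As a rough sketch (the bug numbers and paths are invented), a flaky or timing-out test gets an entry like this in Chromium's web_tests/TestExpectations, while a consistently failing testharness test instead gets a checked-in baseline such as some-test-expected.txt next to it listing the failing subtests:

    # Hypothetical entries:
    crbug.com/123456 external/wpt/dom/events/some-test.html [ Failure Pass ]
    crbug.com/123457 external/wpt/html/other-test.html [ Timeout ]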
22:09 | <smaug> | ok, and one of the tests I was looking at earlier today is there 🙂 |
22:10 | <jarhar> | Great! Let me know if you want me to run the test for you to see what it prints out or how flaky it is |
22:11 | <smaug> | we're fixing that test, well jjaschke is |
22:14 | <smaug> | it is part of interop-25 though, but it expects certain scheduling which isn't defined, so we need to make it less flaky |
22:43 | <annevk> | smaug: FYI: https://github.com/whatwg/xhr/pull/394#issuecomment-2735275248 |
22:45 | <smaug> | but the binding changes should be enough, I think. The relevant C++ is generated from the WebIDL interface |
22:46 | <annevk> | smaug: including the internal fields to store the data? Because if those are unsigned long long as they are in WebKit, you'll get rounding where you should not have rounding, and such. |
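To illustrate the rounding concern, a minimal sketch assuming the whatwg/xhr change makes ProgressEvent's loaded and total doubles; the difference becomes observable from script if an implementation keeps unsigned long long backing fields:

    // Hypothetical check: construct a ProgressEvent with fractional values.
    const event = new ProgressEvent("progress", { loaded: 2.5, total: 10.5 });
    // A double-backed implementation should report 2.5 and 10.5 here;
    // unsigned long long backing fields would drop the fractional parts.
    console.log(event.loaded, event.total);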
22:47 | <annevk> | It'd be pretty neat if you generated that much though. Might be possible for Events... |
22:48 | <smaug> | yeah, there is event specific codegen, works with normal kinds of events |
22:49 | <annevk> | Ah okay, very nice. |
22:49 | <smaug> | (and searchfox can find the generated code) |