2012-04-01 [22:45:28.0000] back, fwiw [22:45:32.0000] abarth: why so? [22:59:46.0000] Hixie: we're just reading about seamless and srcdoc [22:59:55.0000] Hixie: reading how the two interact [03:31:12.0000] https://dvcs.w3.org/hg/domcrypt/shortlog is somewhat confusing [03:49:45.0000] So, exciting tip: if you're taking your computer from America, and it has a power supply that supports both 110V and 220V, make sure that before you hook it up to 220V, you check that you don't have to flip a switch or anything. [03:50:05.0000] Otherwise you might need to make an unexpected trip to the computer store. [03:50:24.0000] (Apparently, the Hebrew term for a power supply is ספק כח.) [03:50:25.0000] thanks edison [05:43:56.0000] https://bugzilla.mozilla.org/show_bug.cgi?id=641821#c59 [05:44:07.0000] didn't Gecko argue against that initially? [05:44:09.0000] oh well [05:44:33.0000] good to see people read the spec carefully when implementing... [05:45:04.0000] Gecko is a rendering engine, it can't argue against anything :) [05:50:23.0000] you know what I mean [05:51:31.0000] You know Mozillians' comments are almost always in a personal capacity :) [05:52:46.0000] it was either sicking or smaug so it doesn't matter much in this case [05:57:37.0000] Speak of the devil [05:58:51.0000] /me is not a devil, just a friendly dragon [05:59:31.0000] annevk: argue agains what? [05:59:53.0000] against [06:00:34.0000] /me needs to still file bugs found in the spec during the week [06:00:55.0000] having two records for a replace operation [06:01:38.0000] annevk: in some cases there sure are many records [06:01:53.0000] there is the record removing the new node from its old parent [06:02:09.0000] then record adding to the new parent [06:02:25.0000] the latter can be combined with removing the old node [06:02:55.0000] I need to still check what latest webkit does [06:03:03.0000] /me doesn't know how to update chrome [06:10:12.0000] I was referring to the comment in that bug... 
[06:10:17.0000] it was not about adopting [06:10:27.0000] you'll always need a separate record for that [07:48:58.0000] When is .isTrusted supposed to be true? [07:49:09.0000] /me wonders if input events from execCommand() should have it true or false [07:49:29.0000] false [07:49:36.0000] Okay, why? [07:50:54.0000] Should it be true if the execCommand() is triggered by a user action (e.g., hitting a key triggers document.execCommand("insertText"))? [07:52:29.0000] hmm, I guess input is some kind of corner case [07:52:44.0000] isTrusted is true when the spec says to dispatch the event [07:52:54.0000] e.g. readystatechange on XMLHttpRequest will have it set to true [07:53:01.0000] The spec for execCommand() says to dispatch the command. [07:53:07.0000] The author can invoke execCommand(), of course. [07:53:31.0000] annevk: seems like it should be true in this case, if the event object is constructed by execCommand and can't be arbitrarily manipulated [07:53:54.0000] that is, it's always seemed to me that !isTrusted means "this object was constructed by hand" [07:54:02.0000] or something like that [07:54:07.0000] yeah, something like that [07:54:10.0000] :) [07:54:34.0000] not sure it's actually needed [07:55:28.0000] hmm [07:55:37.0000] form.click() results in !isTrusted [07:55:54.0000] (in FF) [07:55:58.0000] which seems odd [07:56:16.0000] at least to how i've intuitively viewed isTrusted [07:56:28.0000] (personally I've never found isTrusted to be terribly useful, anyway...) [07:57:32.0000] so in Gecko it might be user-initiated; something to do with their XBL impl [07:57:46.0000] (when are you ever not required to trust other scripts on the page, anyway?) [07:58:09.0000] XBL [07:58:28.0000] xbl isn't relevant to web pages... 
[07:58:53.0000] I'm not saying having isTrusted makes sense [07:59:08.0000] weird, didn't know webkit (or at least chrome) doesn't have form.click (or maybe I did and I forgot, since I've been over this territory before) [07:59:09.0000] maybe file a bug on DOM 3 Events? [07:59:25.0000] I just specced isTrusted because it was there... [07:59:28.0000] i'm sure it won't be removed so I won't waste my time [07:59:40.0000] the new editor is far more reasonable [07:59:50.0000] not important enough (to me) to spend time arguing for that anyway (if anyone else wants to, go for it) [07:59:50.0000] and we could at least learn what it's for [07:59:51.0000] 3249 // Click() is never called from native code, but it may be [07:59:51.0000] 3250 // called from chrome JS. Mark this event trusted if Click() [07:59:51.0000] 3251 // is called from chrome code. [07:59:56.0000] yeah I might [08:01:31.0000] if it doesn't do anything meaningful, it could also just always be true, which would be less likely to break pages (still not guaranteed, though) and simplify things [08:01:50.0000] hmm [08:01:56.0000] is it even supported in every browser? [08:02:18.0000] someone might still be using it for other unintended purposes, like "did I initiate the event or did the user" [08:02:51.0000] (which you can always do by tacking a property on the event when you create it, of course) [08:03:36.0000] seems to not be there in chrome [08:04:25.0000] don't have ie9 handy [08:06:19.0000] If it's in D3E, IE probably has it [10:20:43.0000] TabAtkins, nicely done [10:22:43.0000] that does look pretty [10:29:10.0000] oh my goodness. [10:59:08.0000] "Design" [10:59:09.0000] "W3C" [10:59:14.0000] I think my brain just imploded. [10:59:57.0000] Just wow. It's really quite well done. 
[16:13:12.0000] Hixie, on HTMLCanvasElement the "_callback" argument can now be named just "callback" (argument identifiers don't need to be escaped when they're any of the names at http://dev.w3.org/2006/webapi/WebIDL/#prod-ArgumentNameKeyword) 2012-04-02 [17:05:18.0000] wow, nobody in webgl understands how web specs work *at all* [17:05:22.0000] "I think that we should mandate that as soon as a feature becomes available without prefix, support for the prefix should be dropped." [17:05:29.0000] as if specs can force browsers to do things [23:13:22.0000] can anyone give me an example showing the difference between s and del elements, where content that would be appropriate for one is not for the other? I’d like to check I understand the difference… [23:26:56.0000] doesn't the spec have examples? [23:49:24.0000] zcorpan: the spec uses an old price being replaced by a new price for s, and completed todo items, closed bugs, replaced words, and a removed table column for del. However I’m not sure why del wouldn’t be appropriate for the old price, or s for e.g. replaced words [23:51:00.0000] for example, if an online newspaper misquoted someone, they’d want to show both the incorrect text (for context) and the new text. semantically it seems like s and ins, but I’d probably use del and ins to use @datetime on del [23:53:52.0000] s+ins seems bogus [23:56:58.0000] if you fixed a misquote, it seems like a deletion and addition, so del+ins (if you want to keep the misquoted text, otherwise just remove the misquoted text altogether) [23:57:23.0000] (and don't use ins in that case) [23:57:48.0000] zcorpan: s seems possible for a misquote under “no longer accurate”, but I’m also iffy about it. The only examples of s are obsolete prices (with new sale price) and sold out events. I’m wondering what other content would be appropriate for s [23:59:05.0000] a misquote wasn't accurate to begin with :-) [23:59:59.0000] :) any other examples you can think of? 
[00:01:33.0000] *shrug* [00:02:26.0000] zcorpan: thanks [00:08:04.0000] false descriptions [00:08:29.0000] he's a piece of work nice guy [00:08:37.0000] or some such [00:12:50.0000] name changes, maybe
[00:13:51.0000] hsivonen: did you just write "when they" as "one day"? [00:13:59.0000] hsivonen: because otherwise I'm not sure what you wrote [00:16:50.0000] annevk: yes. speech recognition for the lose [00:17:42.0000] oh, if it was all dictated that's quite good then I guess [00:18:45.0000] annevk: did you get anywhere with big5? [00:19:35.0000] I haven't done much during the weekend, I have some kind of headache I can't seem to get rid of [00:20:13.0000] :-/ [00:21:13.0000] hmm [00:21:16.0000] Simons-MacBook-Pro:Dotnetdotcom zcorpan$ grep -aFic "big5" web200904 [00:21:16.0000] 8882 [00:21:50.0000] it occurs in content too, but that's a lot higher [00:23:06.0000] actually it's not [00:23:24.0000] do you want the subset with pages that contain "big5"? [00:23:26.0000] if I search for big5 through the data you gave me I get 8511 [00:24:40.0000] oh [00:26:01.0000] hmm, but if I do the same grep I get 5780 [00:26:54.0000] that's weird. does the i flag not work with F? [00:28:09.0000] I think my Python script might be wrong somehow [00:28:57.0000] grep -aPc "[bB][iI][gG]5" web200904 [00:28:58.0000] 8892 [00:29:43.0000] grep -aFizHZ "big5" web200904 > big5-all.txt resulted in a 91MB file [00:30:21.0000] yeah my Python script must be wrong [00:31:01.0000] though how exactly... [00:31:03.0000] bytes = open("big5.txt", "rb").read() [00:31:03.0000] find = b"big5" [00:31:03.0000] found = b"" [00:31:03.0000] c = 0 [00:31:03.0000] for b in bytes: [00:31:03.0000] if b.lower() in find: [00:31:03.0000] found += b.lower() [00:31:04.0000] else: [00:31:05.0000] if found == find: [00:31:05.0000] c +=1 [00:31:06.0000] found = b"" [00:31:06.0000] print c [00:31:52.0000] /me is uploading it zipped (18MB) over 3G [00:32:07.0000] why are you on 3G? [00:32:17.0000] on the train [00:38:16.0000] with the script above I get more results than you [00:38:22.0000] 11k [00:39:07.0000] zcorpan: is the grep thing just substring matches or are there other conditions? 
[00:40:14.0000] your script seems to miss a count in e.g. "big55" [00:40:14.0000] should be just substring [00:41:02.0000] then I would get even higher numbers [00:41:32.0000] /me switching trains [00:41:39.0000] running it on the 29MiB file gives 11k results [00:41:44.0000] oh well [00:46:52.0000] http://lists.w3.org/Archives/Public/www-tag/2012Apr/0010.html [00:49:01.0000] my MacBook crashed hard [00:49:14.0000] lots of pretty colored lines on the screen [00:49:35.0000] annevk: textedit finds 11669 "big5"s in big5.txt [00:49:51.0000] annevk: and 15564 in big5-all.txt [00:50:02.0000] annevk: maybe the grep count counts "lines" [00:50:32.0000] that's the same amount I found [00:50:35.0000] exact [00:51:01.0000] great, so my scripts are not entirely buggy [00:51:20.0000] it's buggy for the input i gave :-) [00:52:40.0000] hence "not entirely" [00:55:12.0000] the problem is that I haven't really found a way to analyze the data properly [00:55:38.0000] there's about a 1000 files in what you gave me [00:56:35.0000] and over 600000 potential code points in the PUA / HKSCS range [00:57:27.0000] so far I thought of writing a tokenizer to strip out the HTTP related bits and store the HTML bits each in one file [00:57:39.0000] then maybe have some statistics on a per file basis and study a couple of them [01:01:09.0000] hsivonen: "There really are spoons." [01:02:54.0000] MikeSmith: did I sent more misdictated email? [01:03:06.0000] hsivonen: no, see the reply [01:03:11.0000] from Pat Hayes [01:03:31.0000] to the www-tag message you cited [01:03:34.0000] I see [01:05:03.0000] bz implementing ruby support? [01:05:08.0000] https://bugzilla.mozilla.org/show_bug.cgi?id=256274 [01:05:26.0000] www-tag is top of my list of lists I am considering to unsubscribe from [01:05:40.0000] oh, maybe he's just merging existing patches to the tip [01:07:26.0000] MikeSmith: looks like it. 
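[Editor's note: the discrepancy above has two separate causes: `grep -c` counts matching *lines* rather than matches, and the hand-rolled loop mishandles runs like "big55". A minimal sketch of a case-insensitive byte-substring count (the dump filename is the one discussed above):]

```python
def count_ci(data: bytes, needle: bytes) -> int:
    """Case-insensitive, non-overlapping substring count over raw bytes."""
    return data.lower().count(needle.lower())

# Against the dump discussed above it would be run as:
#   print(count_ci(open("web200904", "rb").read(), b"big5"))
print(count_ci(b"Big5 big5 BIG55", b"big5"))  # 3 ("BIG55" contains "big5")
```

[The closer shell equivalent of this count is `grep -aio big5 web200904 | wc -l`, since `-o` emits one line per match; plain `grep -c` reports matching lines, which explains part of the numbers above.]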
it has been on his list of things to do [01:07:33.0000] ok [01:07:38.0000] good to see [01:07:45.0000] original patch seems to be from Hajime Shiozawa [01:08:18.0000] annevk: simon.html5.org/dump/big5-all.txt.zip [01:09:20.0000] annevk: IIRC, the original needed some additional attention from bz [01:15:56.0000] thanks zcorpan, even more data I don't know what to do with :p [01:19:55.0000] is the list of encoding names in the Encodings spec the complete list that the spec will contain? [01:20:38.0000] I realize some of the ones you have there you haven't specced out yet, but I just mean is that the complete outline at least [01:20:39.0000] MikeSmith: unless someone finds another encoding we need to add [01:20:46.0000] OK [01:21:05.0000] MikeSmith: and it has been suggested to remove the remaining IBM encodings [01:21:12.0000] oh [01:21:13.0000] as well as iso-2022-cn [01:21:21.0000] why? [01:21:27.0000] stronger case for the latter [01:21:33.0000] because not all browsers implement them [01:22:02.0000] OK [01:22:40.0000] so the criteria for what's included is that it's limited to the set of encodings that all browsers support, right? [01:22:41.0000] zcorpan: seems you did not upload a complete file [01:22:54.0000] zcorpan: it downloads 14MiB and does not recognize it as a zip file [01:23:29.0000] MikeSmith: or something that's needed for compatibility [01:23:40.0000] OK [01:24:01.0000] MikeSmith: and potentially something that's really useful, but "better than utf-8" has not been found yet to my knowledge :) [01:26:03.0000] annevk: so that's basically what I already told Richard but I think at some point he's going to ask you himself, and maybe ask about specific encodings [01:27:04.0000] MikeSmith: cool [01:27:31.0000] annevk: would publishing this as a deliverable of the i18n WG be an option? [01:27:51.0000] that would put it in the same case as the charmod spec [01:28:14.0000] does HTML5 normatively reference charmod? or do any other specs? 
[01:28:44.0000] I mean it would put it in the same status as charmod as far as W3C publication [01:31:15.0000] I thought i18n didn't do RECs? [01:31:26.0000] but charmod is a REC [01:31:31.0000] yeah [01:31:33.0000] so that might be okay [01:31:52.0000] I'm not so clear on status of charmod [01:31:59.0000] is it actually a REC? [01:32:03.0000] http://www.w3.org/TR/charmod/ says so [01:32:13.0000] a frequently violated REC [01:32:29.0000] yeah, I see it is [01:33:51.0000] oh sweet [01:33:58.0000] the original Prince of Persia code is going online [01:34:04.0000] I hope someone makes that playable in a browser [01:35:00.0000] annevk: try again [01:35:49.0000] trying again [01:39:26.0000] I think it worked [01:44:12.0000] it did and it seems I hit the bug zcorpan found in my script [01:44:24.0000] /me finds 15537 hits for big5 [02:30:14.0000] in the larger dataset not every file uses correct HTTP line endings [02:31:23.0000] Breaking news: people violate HTTP [02:32:00.0000] the problem is I need to change my simplistic tokenizer [02:32:01.0000] HTTP working group being treated for shock. [02:32:23.0000] I guess I should eat 0D when followed by 0A and otherwise use 0A [02:32:58.0000] or not worry about the larger dataset for now [02:33:09.0000] /me does that [02:46:35.0000] only a third of the files defines HTTP level charset [02:46:51.0000] of which a tenth is not big5/big5-hkscs [02:47:32.0000] and of those 34 a couple are bogus, some utf-8, iso-8859-1, ms950, and x-ms950-hkscs [02:47:58.0000] bogus is actually either the empty string or b"null" [02:48:06.0000] (i.e. those bytes, no quotes) [02:48:22.0000] oh, and one euc_kr [02:48:38.0000] which is also bogus, as it should be euc-kr to be recognized [02:49:50.0000] the way I search for charset is somewhat bogus too btw, but the cheat is justified for the dataset :) [03:02:53.0000] zcorpan: hmm [03:03:02.0000] zcorpan: are we sure they give the raw data? 
[03:05:19.0000] zcorpan: if I open a couple of test pages, decoding them as utf-8 gives better results :/ [03:06:40.0000] what's the dataset source again? [03:07:25.0000] ah http://dotnetdotcom.org/ [03:09:20.0000] to be clear, in processing I only opened files with the "b" flag set [03:13:36.0000] Philip`: know anything about that? [03:15:18.0000] annevk: also for big5.txt ? [03:22:42.0000] zcorpan: you mean big5-all.txt? [03:22:55.0000] zcorpan: this was on big5-.txt [03:24:56.0000] big5.txt [03:26:20.0000] I looked at the first 50 files [03:27:21.0000] lots have big5 in HTTP set [03:27:29.0000] but no big5 in the actual data [03:33:43.0000] annevk: i meant big5.txt. big5-all.txt was zipped so might have been tampered with by zipping or unzipping, was my thought [03:35:37.0000] it seems sort of plausible they have done normalization given the zero byte delimited files [03:35:59.0000] but it's clearly not great for this [03:38:13.0000] I think I'll email the dotnetdotcom guys just to be sure [03:38:32.0000] textwrangler can't open big5.txt, but has no problems opening utf-8 files with nulls [03:41:51.0000] well, there's no encoding conversion going on locally [03:41:52.0000] bytes = open("big5.txt", "rb").read() [03:41:54.0000] and [03:41:59.0000] newbytes = open("test-" + str(c) + "." + charset + ".html", "wb") [03:42:27.0000] the rest is just iterating over, testing on, and writing bytes [03:52:39.0000] emailed dotdot [04:13:36.0000] zcorpan: so yeah e.g. 
the euc_kr file which has big5 in , has EF BF BD as byte sequence which is UTF-8 for FFFD and is nothing in either other encoding [04:14:03.0000] annevk: ok :( [04:14:37.0000] and it has the same sequences in big5.txt as it has in my split out files [04:14:42.0000] when I use a hex editor [04:14:58.0000] too bad [04:15:16.0000] I could prolly write a custom utf-8 decoder to find out why big5.txt cannot be opened in TextWrangler, but I'm not sure that's worth it [04:44:01.0000] zcorpan: you could maybe quickly verify to be a 100% sure by checking some utf-16 data in the set [04:44:11.0000] zcorpan: as additional sanity check [04:46:20.0000] The dotbot data can't contain any UTF-16 pages since it can't represent 0x00 bytes [04:46:51.0000] doh [04:47:14.0000] /me never tried looking to see what they actually do with 0x00 bytes (maybe reject that page, or drop those bytes, or truncate, or whatever) [04:47:25.0000] windows-1252 data with octets over > 0x7F works too [04:49:38.0000] grep -aPc "^Content-Type\s*:\s*text/html\s*;\s*charset\s*=\s*[\"']?utf-16" web200904 [04:49:39.0000] 0 [04:54:19.0000] annevk: you could see what you find in http://webcrawl.s3.amazonaws.com/web.short.gz [04:55:01.0000] annevk: maybe grep screws things up [04:57:08.0000] no [04:57:16.0000] search for windows-1252 [04:57:29.0000] first octet sequence I find in that document is C3 96 [04:57:46.0000] it's a German document, and in UTF-8 that is Ö [04:58:12.0000] and that is followed by sterreich so I think it is indeed normalized :( [04:58:26.0000] /me notes that grep can screw things up if you don't run it with LANG=C [04:58:52.0000] I'm just looking through the file zcorpan pointed out in a hex editor [04:58:57.0000] nothing grep can screw up here [04:59:08.0000] so sad panda face [04:59:27.0000] big sad panda face [05:00:06.0000] so i guess you need to go shopping for a different data set, or do a crawl yourself [05:03:36.0000] Probably wouldn't be that hard to do a custom crawl that just got 
the kind of data you want (i.e. rejected any pages that aren't the encoding you care about without storing them) [05:05:15.0000] Depends if by "crawl" you mean actually parsing pages and following links and trying to get a not disasterously biased dataset, or just downloading random pages from lists on dmoz.org or wherever [05:05:59.0000] Well the extent that you need to actually parse pages to do it is rather limited [05:06:09.0000] You just need to find things that look like URLs [05:06:13.0000] Doesn't the dotbot data give you the URLs as well? [05:06:39.0000] Ms2ger: yes, but many urls are probably dead by now [05:06:48.0000] http://s3.amazonaws.com/alexa-static/top-1m.csv.zip might be a useful starting point [05:06:59.0000] what is that? [05:07:10.0000] alexa's top 1 million sites [05:07:10.0000] Sounds like Alexa's top 1 million pages [05:07:34.0000] is there a way to get all the URLs from the dotbot pages who have big5 somewhere? [05:07:37.0000] it'll only give you front pages [05:08:03.0000] 'cause then I could just download the dotbot pages again [05:08:11.0000] jgraham: Most are relative URLs, so you need some way to resolve them properly, and it's quite possible the URLs include non-ASCII characters so you need to decode the pages first [05:08:24.0000] and those that don't 404 and are still big5 would be useful [05:09:11.0000] Philip`: can you think of a good way to do that? [05:09:15.0000] and you probably want to avoid following links like [01:32:38.0000] Y U NO WORK?! [01:35:47.0000] but given an index (which I have) and a function to convert a point in an index to a byte sequence (which is not too hard), testing encoders should be fairly straightforward [01:35:50.0000] except in Gecko [01:41:51.0000] hmm [01:41:58.0000] you have to account for duplicates somehow too [05:33:26.0000] Hixie: I think most do. [05:34:53.0000] Would be really cool to have in the css though, -- not too nice if you don't have tab-complete. 
fill-the-container-but-for-the-purposes-of-shrink-wrap-act-as-if-you-had-a-width-of-0 -- guess it'll also be hard to remember exactly. :P [05:44:12.0000] seems like my alternative shift_js math is correct [06:28:47.0000] oh hey, that thing I was talking about the other day is called a fencepost error [06:28:48.0000] http://en.wikipedia.org/wiki/Off-by-one_error [09:49:58.0000] jzaefferer or scott_gonzalez if you're around, wanted to ask how many files total are in your test suite [09:50:56.0000] MikeSmith: About 300 [09:51:03.0000] oh OK [09:51:06.0000] so that's fine [09:51:34.0000] Hixie: You can get that behavior by saying that the intrinsic width is 0, but it defaults to width:fill (defined in Writing Modes). [09:52:26.0000] jzaefferer: I put together a command-line validation client that won't require you to run the service to validate your files. But I'm waiting on hsivonen to get back to review the code before I land it [09:52:26.0000] TabAtkins: oh, cool [09:52:33.0000] TabAtkins: is that anywhere close to being implemented? [09:52:52.0000] jzaefferer: it will reduce your validation time to a few seconds [09:52:59.0000] Nobody's touched it, but it's just "what width:auto does for blocks", so implementation is trivial once someone cares about it. [09:54:13.0000] jzaefferer: minimum of 4 seconds or so to validate your 300 files, but validation time for each file will be reduced to 20ms or at most 100ms I think [09:54:31.0000] MikeSmith: That's great. 
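[Editor's note: returning to the crawl idea from earlier, the relative-URL resolution step jgraham and Philip` discuss is mostly `urljoin`; the page URL and links below are made up for illustration:]

```python
from urllib.parse import urljoin

# Resolve a page's relative links against the page's own URL before
# re-fetching them.
base = "http://example.com/a/page.html"
for link in ("../img/x.png", "b.html", "//cdn.example.net/y.js"):
    print(urljoin(base, link))
```

[A real crawl would additionally need to honour `<base href>`, skip `mailto:`/`javascript:` schemes, deduplicate, and handle non-ASCII URLs, which is where the decoding problem Philip` mentions comes in.]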
[09:55:26.0000] will require a single jar file [09:55:31.0000] about 18MB [09:56:46.0000] which is the other thing I need to talk to hsivonen about [09:57:11.0000] we currently don't actually distribute any third-party cod [09:57:15.0000] *code [09:57:48.0000] TabAtkins: bummer [09:58:25.0000] scott_gonzalez: making a single jar available requires distributing 3rd-party code, so I want to make sure hsivonen is OK with that before we do it [09:58:54.0000] ok [10:14:25.0000] Hixie: Are you using this for a spec, or for something real? [10:14:31.0000] real [10:14:43.0000] i come across it all the time [10:14:46.0000] Ah, never mind then. [10:15:15.0000] whenever i have something that shrinkwraps, e.g. a dialog or something, but it has to shrinkwrap around some widgets, and yet the dialog also contains text that can wrap [10:15:24.0000] and i'm happy for the text to wrap at whatever width the dialog ends up at [10:16:28.0000] Ah, but the dialog instead fills the parent, because the text is long enough to force that shrink-wrapping beahvior. [10:16:32.0000] Interesting. [10:35:40.0000] what should happen if you click or mouseover an element that is outside a modal dialog [10:35:44.0000] should i just ignore the event? [10:36:05.0000] or should i do something more subtle, like only kill click events or something [10:38:03.0000] i think i'll just kill all user interaction events and prevent all focusing of elements outside the modal subtree and its ancestors [10:38:34.0000] I think that's legit. [10:40:17.0000] i'm going to introduce inert="" as well while i do this, so that you can have semi-modal dialogs as some people have requested [10:40:26.0000] and make it use the same infrastructure [10:55:49.0000] ok inertism (inertia? :-) ) blocks user interaction events and makes things not focusable [10:55:57.0000] anything else it should block while i'm at it? 
[10:55:58.0000] (bbiab) [10:57:33.0000] Hixie: I don't recall the cross-browser specifics, but you might need to go into detail about how find/highlighting works while an inert element is open. [10:58:58.0000] A quick test shows that in Chrome if an element has focus and you bring up the find interface, you can search for text on the page and then press escape to create a range around the first result of the find, which moves focus to the range. [10:59:31.0000] I recall doing that a lot when I was testing modal plugins a few years ago to see how well they worked. [11:16:41.0000] annevk, smaug___, sicking: re: MutationObservers attributeFilter discussion. [11:16:56.0000] rafaelw_: yes? [11:16:59.0000] question: MutationRecord.name (when type="attribute") [11:17:14.0000] in the case of xml, is that prefix:localName [11:17:17.0000] or just localName [11:17:18.0000] ? [11:18:13.0000] looking [11:18:18.0000] i.e. what information is reported in the MutationRecord. [11:18:22.0000] ? [11:18:33.0000] rafaelw_: note that attributes in the xlink namespace might not have a prefix [11:18:41.0000] so they can have .name === .localName [11:21:23.0000] rafaelw_: currently .attributeName is set to the name per spec, not localName [11:21:33.0000] rafaelw_: lemme look what we do in our impl [11:23:15.0000] WebKit appears to report localName right now. [11:23:17.0000] ;-( [11:23:47.0000] yeah, we do the same in gecko [11:23:50.0000] i prefer that behaviro [11:23:54.0000] behavior [11:24:02.0000] reporting only localName? [11:24:09.0000] .name is rarely useful from a correctness point of view [11:24:45.0000] actually, especially in the case of mutation observers, when you are often observing someone else's code, it makes much more sense to ignore prefixes [11:24:53.0000] yeah, we only report localName [11:25:07.0000] but won't it be ambiguous what happened? [11:25:19.0000] you may not know what attribute changed? [11:28:54.0000] Hixie: The word you're looking for is "inertness". 
[11:29:59.0000] Hixie: What happens if you start selecting non-inert text, and drag into inert text? [11:30:17.0000] Hixie: And did you handle accesskeys/etc? [11:46:53.0000] sicking: ^^. if only localName is reported, isn't that potentially ambiguous? [11:54:16.0000] rafaelw_: ? [11:54:45.0000] hey. trying to settle the attributeFilter question. [11:55:04.0000] the thing I was asking is what does MutationRecord.name report if type='attribute' [11:55:08.0000] name or localName. [11:55:19.0000] apparently the spec says name, but both of us implemented localName. [11:55:26.0000] yes [11:55:36.0000] and my question is: isn't that potentially ambiguous for XML? [11:55:38.0000] because localName actually makes sense ;) [11:55:48.0000] i.e. you won't know which attribute changed. [11:55:57.0000] how would it be more ambiguous than name ? [11:56:27.0000] if an element has foo:bar & foo2:bar and you get told that 'bar' changed. [11:56:29.0000] you need to check namespaceuri + localName [11:57:01.0000] i'm assuming that namespaceURI would be the same for both foo:bar and foo2:bar. [11:57:36.0000] maybe i'm missunderstanding how this all works. it looks to me like there are three things: namespaceURI, prefix and localName [11:58:00.0000] where name == prefix:localName [12:00:00.0000] smaug___: am I misunderstanding something? [12:00:18.0000] I don't think so [12:00:20.0000] :) [12:00:36.0000] rafaelw_: remember, namespaced attributes may not always have a prefix [12:00:50.0000] can the example i gave above occur? [12:01:38.0000] I would assume no, but I'm not actually sure [12:02:50.0000] rafaelw_: not possible http://www.w3.org/TR/REC-xml-names/#uniqAttrs [12:02:53.0000] you assume it doesn't currently, or it *can't*. [12:03:42.0000] foo:bar and foo2:bar where foo and foo2 are prefixes for the same namespace isn't possible [12:06:09.0000] i see. [12:06:26.0000] i get it know. [12:06:39.0000] thank you. ok, I agree. localName seems like the right thing. 
[12:10:43.0000] do we need a bug to change to spec to say that MutationRecord.name reports localName only? [12:15:08.0000] rafaelw_: I assume the spec would need to be changed in order to fix that that attributeFilter too [12:15:18.0000] not sure if annevk prefer separate bugs [12:17:20.0000] rafaelw_: btw, is the GC handling I proposed ok to you ? [12:17:29.0000] it still seems wrong to me to standardize on local name given that most APIs around non-namespaced attributes care about qualified name [12:17:46.0000] e.g. Attr.name, setAttribute(), getAttribute(), hasAttribute(), etc. [12:20:11.0000] smaug___: yp. GC handling is right. It's what we implemented (though we didn't have tests and it turns out we had a bug -- which is now fixed). [12:28:40.0000] scott_gonzalez, TabAtkins: thanks, will consider those points [12:32:57.0000] rafaelw_: looking at attributeName + attributeNamespace should make it unambiguous [12:33:00.0000] rafaelw_: sorry, i was unclear. We gecko only looks at localName + namespace of attributes [12:33:04.0000] rafaelw_: what we don't look at is the name (== localName + prefix) [12:33:08.0000] rafaelw_: looking at the name is generally more a convenience thing, since it's a single string trying to describe a tuple [12:33:14.0000] rafaelw_: but it's more error prone since it breaks down if someone is using different prefixes than you think they are [12:34:51.0000] rafaelw_: in other words, I think the behavior webkit and gecko has implemented is better than what the spec does since it's less error prone in all cases, and only harder to use in extremely rare edge cases [12:38:21.0000] i get it now. thanks. [12:38:22.0000] i agree. [12:55:57.0000] rafaelw_: cool [14:13:51.0000] Hm. If you were writing a parser for CSS, for a spec, would you do it as a flat tokenization phase followed by a full tree-build phase, or a more intelligent tokenization phase that handles some elements of the syntax, followed by a somewhat light tokenization phase? 
[14:14:05.0000] s/light tokenization/lighter tree-building/ [14:15:52.0000] Basically I'm wondering if I should handle CSS's rule that statements/blocks can't end while there's an unmatched ([{ on the stack at the tokenizer or the tree-builder level. [14:20:13.0000] Hrm. I think I should do the flat tokenization approach, which means rewriting some things. [14:33:45.0000] TabAtkins: imho the quicker you move things from dealing with characters and strings to something more abstract, the better, so i'd go with a light tokeniser at the top that turns things into tokens, and then some sort of processor on top of that to get something structured [14:34:11.0000] TabAtkins: but that's just because i like dealing with strongly typed tokens more than with strings and characters :-) [14:35:33.0000] Hixie: I think you're right. Having to switch levels between raw characters and tokens is annoying. [14:36:04.0000] also don't forget you can have multiple levels, not just two [14:36:26.0000] like, one level to get tokens, one to wrap the tokens up into nested blocks, and then finally one to actually process the blocks [14:36:30.0000] It means the tokenizer has to do a bit more work in some cases, because I have much less contextual knowledge (I can't just say "oh, you're starting a selector. consume until you see '{' or EOF.) [14:36:37.0000] Yeah. [14:36:55.0000] yeah some of your tokens might be a bit special [14:37:08.0000] in CSS it's not so bad though because the escaping is the same everywhere and there's little ambiguity iirc [14:37:25.0000] e.g. an ident is an ident everywhere, whether it's a tag name or a property name or a media query type [14:37:42.0000] Yeah. [14:37:47.0000] (might be some exceptions but i can't think of any) [14:37:48.0000] Selectors are... weird. [14:38:43.0000] I think I need to parse ".foo" as a DELIM followed by an IDENT. 
:/ [14:38:43.0000] can't you do almost everything using only the tokens ident, punctuation ([, (, {, :, +, >, spaces, etc), strings ("...", '...'), and comments (/*...*/)? [14:39:03.0000] oh and numbers [14:39:04.0000] Yeah, maybe. [14:39:15.0000] so the tokeniser can do \unescaping [14:39:27.0000] i dunno i'm saying this all from memory :-) [14:39:35.0000] anywho [14:39:43.0000] I want to be a touch smarter so I'll directly get PERCENTAGE and DIMENSION rather than NUMBER + IDENT, but basically yeah. [14:40:57.0000] you need context for that iirc [14:41:03.0000] consider font: 1em 1em; [14:41:20.0000] which is equivalent to font: initial; font-size: 1em; font-family: "1em"; [14:41:43.0000] Nah, that's fine actually. font-family parses a bunch of idents. We regularized that last year. [14:41:47.0000] No special parsing rule there. [14:41:55.0000] really? font: 1em 1em is invalid now? [14:41:57.0000] I think it might allow numbers/dimensions too. [14:42:24.0000] The point is that font-family doesn't require anything special until you actually parse the declaration itself. [14:42:33.0000] No special behavior is needed at the lower levels. [14:42:35.0000] my point is that you have to treat "font:1em 1.0em" and "font:1em 1em" differently [14:42:46.0000] but i might be wrong i guess :-) [14:43:08.0000] should be easy to test what browsers do now that they support @font-face [14:43:16.0000] Ah, yes, it's all idents. [14:43:24.0000] font-family: 1em; is now invalid. [14:46:18.0000] good to know [14:46:21.0000] i wonder who implements that :-) [14:47:07.0000] webkit gets it right [14:47:22.0000] gecko too [14:47:24.0000] nice! [14:49:43.0000] Yeah, it was tracked in the CSS2.1 testsuite once we made the change, so people should have adjusted quickly. [15:38:00.0000] well accesskeys are proving a mite difficult [15:38:53.0000] i guess i could just make commands Disabled if they are inert [15:41:33.0000] That makes sense, I suppose.
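The layered approach discussed above — a flat tokeniser that emits idents, numbers (or PERCENTAGE/DIMENSION directly), strings, and punctuation, leaving block nesting to a later stage — could look roughly like this. This is a toy illustration, not the CSS specification's actual tokenizer: it handles no escapes, url() tokens, or at-keywords.

```javascript
// A toy flat CSS tokenizer: idents, numbers (upgraded to PERCENTAGE or
// DIMENSION by a one-token lookahead), strings, and single-character
// punctuation (DELIM). Whitespace and comments are skipped. Note how
// ".foo" comes out as DELIM "." followed by IDENT "foo".
function tokenize(css) {
  const tokens = [];
  const ident = /^-?[A-Za-z_][A-Za-z0-9_-]*/;
  const number = /^[+-]?(\d+\.\d+|\d+|\.\d+)/;
  let i = 0;
  while (i < css.length) {
    const rest = css.slice(i);
    let m;
    if ((m = rest.match(/^\/\*[\s\S]*?\*\//)) || (m = rest.match(/^\s+/))) {
      i += m[0].length; // skip comments and whitespace
    } else if ((m = rest.match(number))) {
      i += m[0].length;
      const unit = css.slice(i).match(ident);
      if (css[i] === "%") { tokens.push({ type: "PERCENTAGE", value: m[0] }); i++; }
      else if (unit) { tokens.push({ type: "DIMENSION", value: m[0], unit: unit[0] }); i += unit[0].length; }
      else tokens.push({ type: "NUMBER", value: m[0] });
    } else if ((m = rest.match(ident))) {
      tokens.push({ type: "IDENT", value: m[0] }); i += m[0].length;
    } else if ((m = rest.match(/^("([^"\\]|\\.)*"|'([^'\\]|\\.)*')/))) {
      tokens.push({ type: "STRING", value: m[0] }); i += m[0].length;
    } else {
      tokens.push({ type: "DELIM", value: css[i] }); i++; // '.', '{', ':', etc.
    }
  }
  return tokens;
}
```

A later stage can then pair up `{`/`}` and `(`/`)` DELIM tokens into nested blocks without ever touching raw characters again.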
[15:42:59.0000] that would be awesome actually [15:43:09.0000] it would automatically disable everything in a menu that referenced commands in that section [15:43:16.0000] even if the menu wasn't inert [16:09:51.0000] so, is feras planning to ignore everyone and go ahead with his broken design? welcome to the future, where web api design is still done by blunt coercion [16:12:01.0000] zewt: ? [16:12:12.0000] oneTimeOnly [16:14:56.0000] oh, the blob url stuff? [16:15:03.0000] yeah [16:15:07.0000] it's so broken [16:15:11.0000] it makes me sad [16:15:19.0000] implement it first and ship earlier [16:15:23.0000] that's how it works [16:15:33.0000] basically it's the old story: microsoft implements something broken; microsoft goes "here's the api!"; everyone goes "this is broken, here's how to fix it"; microsoft puts fingers in ears and runs away [16:15:51.0000] welcome to the web [16:18:55.0000] heh, tickets like this help make sure I don't waste time filing bugs on firefox https://bugzilla.mozilla.org/show_bug.cgi?id=641509 [16:19:35.0000] basically about fifteen thousand people going "taking the message out of onbeforeunload breaks our stuff" and several explanations of why it's harmless to show it, and it gets ignored for a year then closed without reading any of it [16:30:18.0000] anyone know what i should do at the aria level for inert="" and inert subtrees generated by modal dialogs? [16:30:24.0000] s/generated/established/ [16:31:03.0000] Hixie: is "inert" close in function to "disabled"? [16:31:15.0000] yeah. i was thinking aria-disabled="" might work. [16:31:41.0000] does "Used in Roles:All elements of the base markup" mean that it can be applied to elements regardless of role? [16:31:56.0000] yes [16:33:02.0000] ok, aria-disabled it is. [16:52:24.0000] what exception should i throw if you try to showModal() a dialog that's already showing? [16:52:40.0000] (if you show() a dialog that's already showing, i just do it.
but it seems likely that showModal()ing twice is a bug.) [16:56:17.0000] what makes it the same? Having the same URL? (Including query and hash?) [16:56:33.0000] same element [16:57:04.0000] (this is not showModalDialog()) [16:57:09.0000] (it's .showModal()) [16:57:17.0000] ah [16:58:15.0000] i guess NotSupportedError 2012-04-10 [17:05:23.0000] "not supported" depending on state sort of seems odd [17:06:16.0000] InvalidStateError then? [17:06:19.0000] InvalidStateE...yes [17:06:32.0000] k [17:25:58.0000] ok, is taking shape [17:26:13.0000] probably be done tomorrow, if no hidden surprises come up [17:27:00.0000] hidden surprise dialogs are no good [17:27:05.0000] heh [17:27:21.0000] oh, wait, i forgot about the magic form stuff [17:27:23.0000] that'll take longer [17:29:34.0000]
[17:29:47.0000] where the submit button closes the dialog and sets dialog.returnValue to the submit button's value [17:30:13.0000] with all the form validation stuff happening as well of course [17:30:26.0000] hmm [17:31:42.0000] or i guess we could just have them use
[17:31:46.0000] but that seems lame [17:32:20.0000] then again, maybe instead of method=dialog, we should have method=none so it would work outside dialogs too [17:32:31.0000] and just make the closing-dialog behaviour a magic thing if you happen to be inside a form? [17:32:51.0000] ok i'll think about it. if anyone has any ideas, paste them here or on the wiki. [17:32:51.0000] bbl. [18:32:49.0000] Hixie - regarding having a form submit button close the dialog etc., could we do the same for modeless pop-up-window "dialogs"? That is, have some way for a form in a child window to close the window upon successful submission? [18:33:10.0000] The specific use-cases I'm thinking of here are quite similar to the dialog use-cases, e.g. Plancast.com pops-up a twitter sign-in window, and many tweet buttons/links also pop-up a window for the user to complete their tweet and submit. Would be great if that submit could somehow close the child window. [18:34:08.0000] (without requiring JS to be on for it to work, note Twitter's tweet actions themselves work without JS but the pop-up windows still require the user to manually close them when done) [20:33:00.0000] site:whatwg.org/specs gives me tons of links under http://www.whatwg.org/specs/web-apps/2009-10-27/, http://www.whatwg.org/specs/web-apps/current-work/.w3c-html-core/ and other noise now ... wish those would either be blocked from search engines or moved out of /specs [20:37:12.0000] if you give me a robots.txt that blocks what you want to block, i'll happily add it [20:47:08.0000] well, I have no idea what all of the stuff under there is, heh [21:49:59.0000] i find it painfully discouraging that i actually have to spend time arguing for not baking manual word-wrapping into a text format in 2012 [21:51:06.0000] feels like i'm trying to argue a language designer out of putting line numbers on every line and having GOTO N be the primary form of flow control [22:26:04.0000] rniwa: ping on UndoManager spec...
what DOM mutation events / observer notifications are required to fire on undoing an action? [22:26:40.0000] DOMAttrModified & friends [22:26:48.0000] WeirdAl: hi. yes. [22:27:04.0000] :) I'm wondering if they're all required [22:27:05.0000] WeirdAl: mutations made by the undo manager are regular DOM mutations [22:27:18.0000] WeirdAl: they're. [22:27:32.0000] WeirdAl: however, DOM mutation events are deprecated API so I can be careless. [22:27:41.0000] mutation observers are replacing them [22:27:53.0000] so, no, you can't ;) [22:27:54.0000] WeirdAl: they should certainly be included in the mutation records that mutation observers receive [22:29:46.0000] Should there be at least a token mention about mutation observers in section 3.1? [22:30:26.0000] /me is planning on implementing a "partial DOM" which is non-compliant in many respects, but UndoManager he'll probably want to fully implement [22:33:32.0000] WeirdAl: that makes sense. [22:33:47.0000] WeirdAl: jsdom? [22:33:50.0000] WeirdAl: or dom.js? [22:33:55.0000] neither [22:33:59.0000] oh, i see. [22:34:04.0000] and no, not envjs [22:34:29.0000] I'm going off the deep end... I've been forced to conclude that I pretty much have to write my own for a special purpose [22:34:53.0000] /me thinking maybe I can use WeirdAl's partial dom for https://plus.google.com/105748986001435560355/posts/aDV61jgSNXj [22:35:12.0000] forget it, you won't want it :) [22:35:22.0000] WeirdAl: ? [22:35:37.0000] it won't be implementing HTML :D [22:35:42.0000] WeirdAl: I see. [22:35:47.0000] that's interesting [22:35:50.0000] at least, not for probably a year or so [22:36:09.0000] I'm looking more at the XML world... it's cleaner :) [22:36:31.0000] I see. 
[22:36:41.0000] WeirdAl: but I hear that XML world is doomed [22:36:48.0000] maybe it is [22:37:04.0000] but I think that's because we just don't have good tools to edit it [22:37:11.0000] we have tools [22:37:19.0000] they're just not that good at it [22:38:45.0000] more specifically, the XML languages we humans are most likely to edit - XHTML, MathML, SVG, XUL, XBL, etc. - those are the ones for which the tools frankly need a lot of work [22:40:36.0000] true [22:40:48.0000] WeirdAl: I want a good editor for MathML. [22:41:14.0000] best I've seen for free is Amaya, and it's rather painful to work with [22:41:56.0000] yeah... [22:41:59.0000] believe me, I'm working on a new kind of XML editor in my spare time, one where new XML languages are literally like Firefox addons [22:42:55.0000] but to get there, I'm trying to build the tools to build those addons [22:46:56.0000] Hixie: the status box in http://www.whatwg.org/specs/web-apps/current-work/multipage/editing.html#editing-0 looks like a bogus edit [22:51:10.0000] WeirdAl: i'll look forward to it :) [22:51:14.0000] zcorpan: feel free to fix it :-) [22:59:23.0000] Hixie: i don't know what the previous state was... but, changed to something less inaccurate [22:59:35.0000] thanks [00:45:02.0000] I wish the source code of the spider from http://dotnetdotcom.org/ was available so we could create a fresh index [01:11:41.0000] what's the best way to represent an ordered dictionary in JSON? [01:12:27.0000] it's about mapping one offset to another [01:13:23.0000] e.g. 0:80, 36:A5, 50:B8, .. [01:13:39.0000] nested array? [01:13:45.0000] seems kind of ugly [01:14:53.0000] array containing (the dictionary, an array of the keys in order) [01:17:53.0000] I guess since the keys are sortable I could also do that during lookup... [01:19:57.0000] kind of annoying that only gb18030 needs this special kind of index whereas all other encodings can do with a simple index [01:20:17.0000] China, you're annoying! 
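The two serialization options floated above — a nested array versus a plain object whose keys you sort at lookup time — could look like this in practice. The offset pairs are the example values from the discussion (0:80, 36:A5, 50:B8 in hex); they are illustrative, not a real encoding index.

```javascript
// (a) nested arrays: the array itself preserves the order of the pairs
const asArrays = JSON.parse('[[0, 128], [36, 165], [50, 184]]');

// (b) a plain object: JSON objects carry no guaranteed order, so recover
// it once after parsing by sorting the numeric keys
const asObject = JSON.parse('{"36": 165, "0": 128, "50": 184}');
const keys = Object.keys(asObject).map(Number).sort((a, b) => a - b);
const recovered = keys.map(k => [k, asObject[String(k)]]);
```

Option (b) keeps the JSON shape of the other (simple) indexes at the cost of a one-time sort on load.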
[01:22:31.0000] back later [01:46:21.0000] rniwa: what kind of environment are you looking for in an editor? personally I have a biased view on the world and use emacs for everything, but the firefox addon http://www.maths-informatique-jeux.com/blog/frederic/?post/2010/11/14/Mozilla-MathML-Add-ons is quite promising, or shockingly enough, Word isn't bad; there are several others, it depends what you want... [02:03:48.0000] Well done, public-webapps! [02:04:00.0000] January to March 2012 ... 1337 messages [02:04:44.0000] nice [02:11:04.0000] since the gb18030 index is so different from other indexes, should I give it a different name? [02:11:13.0000] and if so, suggestions? [02:22:35.0000] Ms2ger: can you remind me, was somebody already working on testharness.js-enabling the canvas test suite? [02:22:45.0000] Indeed [02:22:47.0000] I was! [02:22:56.0000] It's in a bug somewhere [02:22:56.0000] oh cool [02:23:05.0000] Waiting for Philip` [02:23:06.0000] w3c bug? [02:23:08.0000] Yep [02:23:10.0000] oh [02:23:20.0000] waiting on Philip` to just land your changes?
[02:23:47.0000] To see if it makes sense to him [02:23:58.0000] OK [02:24:04.0000] /me looks for the bug [02:24:48.0000] found it [02:24:53.0000] https://www.w3.org/Bugs/Public/show_bug.cgi?id=14191 [02:25:28.0000] annevk: index-gb18030-Y-U-DIFFERENT???.txt [02:29:22.0000] heh [02:29:48.0000] so the way gb18030 works is that there are 207 ranges [02:30:45.0000] and by computing a number from the four-byte sequences [02:30:56.0000] the ranges are basically offsets [02:31:12.0000] consisting of "offset, code point offset" [02:31:32.0000] you then find the last range whose offset is equal to or less than the computed index [02:31:43.0000] and then you do computed index - offset + code point offset [02:31:54.0000] and you have a code point [02:32:25.0000] on top of that you exclude computed indexes between 39419 and 189000 and anything greater than 1237575 [02:32:41.0000] the webgl test runner is nice [02:32:42.0000] https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/webgl-conformance-tests.html [02:33:09.0000] so I guess I'll just call it index-gb18030 like the others and explain in prose how it works differently [02:33:25.0000] and in JSON I'll store it as an object of which you need to sort the keys yourself [02:33:51.0000] and then publish the JSON for the indexes as indexes.json [02:34:03.0000] and publish the index.py which generates the index-*.txt files [02:34:09.0000] merge the single-byte encodings into it [02:34:31.0000] and have a separate encodings.json which lists the encoding names and labels [02:38:35.0000] http://i.imgur.com/vynW8.png o_O [02:39:14.0000] david_carlisle: btw, is MathML support still not enabled in Chrome? [02:39:43.0000] annevk: heh [02:39:49.0000] annevk: it's a pity we can't see the nationality of those people on the pic... i bet i know what the results will be :) [02:41:00.0000] Dutch?
:) [02:41:17.0000] david_carlisle: OK, I see that it's still not [02:41:23.0000] MikeSmith, the webgl test harness, otoh, sucks badly [02:41:43.0000] MikeSmith: No, but there is a new person working on the mathml in the webkit codebase and google people are in the reviewing loop this time, so things are looking better [02:41:45.0000] Ms2ger: yeah, I got about 10% through it and it crashes every time [02:41:58.0000] Blame your graphics driver [02:42:02.0000] crashes in every browser I have tried so far [02:42:45.0000] david_carlisle: given that it's enabled in Safari, I wonder why not in Chrome. Is the issue that it's too incomplete at this point? [02:43:15.0000] Because Chrome has *some* quality standards? [02:43:19.0000] Ms2ger: i had an idea of something a bit further west ;) [02:43:49.0000] david_carlisle: pages like http://www.mozilla.org/projects/mathml/demo/texvsmml.html seem to mostly render as expected [02:43:55.0000] in Safari [02:43:56.0000] MikeSmith: The stated reason is that it did not have relevant security review, whether there were actual security concerns or if that is just the trump card to play to avoid doing anything I am not in a position to say:-0 [02:44:05.0000] ah, OK [02:44:28.0000] annevk, hmm, fascinating how you were just looking at that page :) [02:45:44.0000] recent changes to the WHATWG Wiki is one of the few speed dial things I have [02:45:50.0000] so sometimes when I open a new tab I click it [02:46:26.0000] Ms2ger: found in source of WebGL test page: [03:49:48.0000] not sure what raw bytes means actually [03:49:57.0000] the alert just had the character there [03:50:32.0000] if I encodeURI() it I get utf-8...
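The gb18030 range lookup described earlier — find the last range whose offset is equal to or less than the computed index, then do `computed index - offset + code point offset`, excluding certain index regions — can be sketched as follows. The ranges table here is an illustrative placeholder, not the real 207-entry gb18030 index; the exclusion bounds are the values quoted in the discussion.

```javascript
// Each range is an [offset, codePointOffset] pair, sorted by offset.
// Placeholder values for illustration only — NOT the real gb18030 index.
const ranges = [
  [0, 0x0080],
  [36, 0x00A5],
  [50, 0x00B8],
];

function rangeDecode(pointer) {
  // Excluded pointer regions, per the discussion above.
  if ((pointer > 39419 && pointer < 189000) || pointer > 1237575) return null;
  let offset = null, codePointOffset = null;
  for (const [o, c] of ranges) {
    if (o <= pointer) { offset = o; codePointOffset = c; } // last o <= pointer wins
    else break; // sorted, so we can stop early
  }
  if (offset === null) return null;
  return pointer - offset + codePointOffset;
}
```

With the real index, a binary search over the 207 offsets would replace the linear scan.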
[03:51:00.0000] i guess you should try following the link and see what the server gets [03:52:03.0000] pretty sure IE will just put raw bytes on there [03:54:23.0000] unsolved problems from 2009 http://annevankesteren.nl/2009/03/urls [03:55:13.0000] oh yes [03:55:14.0000] http://lists.w3.org/Archives/Public/www-style/2009Mar/0321.html [03:55:36.0000] "There are some small interop issues. IE6 (not sure about newer versions) sends the query string as raw UTF-8 bytes rather than having them percent-escaped." [04:01:27.0000] foolip, I think that's an option. I would guess not many people like this option though. Don't ask me for a reason because I truly don't know. But I think what would be more realistic and useful is to get IE people's commitment on shipping 'big5' to mean 'big5-hkscs'. Perhaps they have clear reasons for not doing so. [04:03:25.0000] As my friend (Timothy Chien in the mailing list) told me, this might not be doable because IE might refuse to use anything besides the system's mapping tables. [04:05:07.0000] hmm http://stackoverflow.com/questions/6763799/utf-8-encoding-issue-in-ie-query-parameters#comment8020739_6763799 [04:05:15.0000] sounds like this is some kind of WTF area [04:09:18.0000] should we make the spec look at namespaced attributes? http://software.hixie.ch/utilities/js/live-dom-viewer/saved/1460 [04:11:26.0000] zcorpan: because? returning null there is fine no? 
[04:14:10.0000] we look at elements with prefixes, but not at attributes with prefixes [04:14:34.0000] seems inconsistent [04:15:21.0000] that's only so you don't have to look for xmlns attributes [04:15:33.0000] and only at xmlns:* attributes [04:15:50.0000] the spec looks at xmlns attributes [04:16:26.0000] the spec even picks up xmlns="" in no namespace; maybe that's bogus [04:16:44.0000] you previously argued to do the simplest thing possible here [04:16:58.0000] yeah, i did [04:18:12.0000] annevk, I did some URL in CSS testing last week too → http://lists.w3.org/Archives/Public/www-style/2012Apr/0204 . I was very amused by how IE9 handles the url "%" :p [04:18:35.0000] but innerHTML uses these methods [04:19:37.0000] but i guess looking at namespaced attributes is overkill [04:21:33.0000] maybe innerHTML shouldn't be using these methods in the first place [04:22:31.0000] You tell me :) [04:23:05.0000] URLs doing decoding so massively differently is annoying for testing decoders [04:23:25.0000] well, the problem is mostly IE I guess [04:24:12.0000] I wish Microsoft hung out in an IRC channel somewhere [04:33:37.0000] oh sweet [04:33:39.0000] IE is the worst [04:34:28.0000] it does indeed use "?" over the wire [04:35:36.0000] but to script it exposes the character itself [04:36:22.0000] so some kind of normalization happens at the network layer, whereas other browsers do normalization upfront [04:39:47.0000] so basically testing IE would involve network loads [04:40:04.0000] fffffffuuuuuuuuuuuu [05:04:28.0000] annevk++ :) [05:22:03.0000] and another quirk gets some tests http://simon.html5.org/test/quirks-mode/table-cell-width-calculation.html [05:22:17.0000] time for coffee [05:23:00.0000] Who has time to WONTFIX https://www.w3.org/Bugs/Public/show_bug.cgi?id=16711 with strong enough arguments to prevent escalation and reopening?
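The raw-bytes-versus-percent-escapes difference discussed above is easy to see from script: encodeURI() produces the UTF-8 percent-escaped form that the quoted www-style mail contrasts with IE6's raw-bytes behaviour. A minimal check:

```javascript
// encodeURI() percent-escapes non-ASCII characters as their UTF-8 bytes,
// leaving ASCII URL punctuation like "?" alone.
const q = "?" + encodeURI("\u00E9"); // U+00E9 LATIN SMALL LETTER E WITH ACUTE
// q === "?%C3%A9" — C3 A9 are the UTF-8 bytes of U+00E9
```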
[05:25:08.0000] another question, who can make this fast: [05:25:11.0000] results = [] [05:25:11.0000] for(i = 0; i < 0x10FFFF; i++) { [05:25:11.0000] t.href = "?" + cp_str(i) [05:25:11.0000] results.push(uhex(i) + "\t" + t.search) [05:25:11.0000] } [05:25:11.0000] r.textContent = results.join("\n") [05:25:52.0000] zcorpan++ [05:28:11.0000] until 0x10000 goes fast enough, but then CPU starts spinning mad [05:28:49.0000] hsivonen: how about "Or else...?" :-) [05:30:30.0000] annevk: what's cp_str? [05:30:46.0000] function cp_str (cp) { [05:30:46.0000] if(cp < 0x10000) [05:30:46.0000] return String.fromCharCode(cp) [05:30:46.0000] cp -= 0x10000 [05:30:46.0000] return String.fromCharCode(0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF)) [05:30:46.0000] } [05:32:17.0000] return String.fromCharCode(0xD800 + (cp >> 10)) + String.fromCharCode(0xDC00 + (cp & 0x3FF)), maybe? [05:34:50.0000] that is faster? [05:35:05.0000] hsivonen: added a comment [05:36:00.0000] hsivonen: and resolved INVALID for good measure [05:36:38.0000] It could be, up to you to test :) [05:38:46.0000] hmm [05:39:09.0000] annevk: What browser are you testing in? [05:40:00.0000] Actually, nvm, won't hit that bug anyway. 
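For reference, the cp_str() helper pasted above, reformatted: it manually builds a UTF-16 surrogate pair for astral code points, which is exactly what the later String.fromCodePoint built-in (not yet available at the time of this discussion) does.

```javascript
// Convert a code point to a JS string; code points above U+FFFF become
// a high/low surrogate pair.
function cp_str(cp) {
  if (cp < 0x10000)
    return String.fromCharCode(cp);
  cp -= 0x10000;
  return String.fromCharCode(0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF));
}

// cp_str(0x41)     -> "A"
// cp_str(0x10000)  -> "\uD800\uDC00"
// cp_str(0x10FFFF) -> "\uDBFF\uDFFF"
```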
[05:40:05.0000] Firefox/Opera Next/Chrome [05:40:11.0000] CPU is spinning mad [05:41:39.0000] Chrome / Opera done [05:41:42.0000] Opera responsive [05:41:47.0000] in Chrome the page cannot really be used [05:41:55.0000] Firefox is not done, but has some kind of responsive page [05:42:28.0000] hmm, looks like Firefox quit at E5F33 ?%F3%A5%BC%B3 [05:42:52.0000] annevk: thanks [05:44:39.0000] I guess once I add a way to filter out results that encode per default error handling ("?", "&#...", URL encoded utf-8 bytes) this might be okay [05:44:50.0000] annevk: maybe you can have two nested loops to generate all possible surrogate pairs [05:44:56.0000] Internet Explorer would still not be tested but bah [05:45:48.0000] zcorpan: seems so unlogical that simple math would be the bottleneck here [05:45:53.0000] illogical* [05:46:18.0000] /me points at the topic [05:46:24.0000] kennyluck, are you sure that ptt.cc is actually purely Big5-UAO? Isn't it more likely that it's a random mix of Big5-HKSCS and Big5-UAO depending on what software/browser was used to post? [05:47:04.0000] passing through bytes provided by whoever posted seems scary [05:48:00.0000] hsivonen, indeed, but surely it's what 99% of the Web does? [05:48:24.0000] 99% of the Web is scary [05:49:16.0000] /me nods [05:49:28.0000] foolip, indeed. It would depend on what software is used to post, which in this case are telnet clients. As I said, only 5% of the sub-forums are affected by this. [05:49:38.0000] if there's something good about Java, it's that it converts to UTF-16 upon input, so you don't get to pass through bytes [05:50:14.0000] That's a pretty big 'if' :) [05:50:22.0000] kennyluck, do you have any contact with the site operators? it would be interesting to analyze a random selection of pages that use the byte sequences in question...
[05:53:39.0000] var high, low; [05:53:39.0000] for (high = 0xd800; high < 0xdc00; ++high) { [05:53:39.0000] for (low = 0xdc00; low < 0xe000; ++low) { [05:53:39.0000] doStuff(String.fromCharCode(high, low)); [05:53:39.0000] } [05:53:39.0000] } [05:54:25.0000] zcorpan, was that for me? [05:54:32.0000] for annevk [05:55:38.0000] foolip, yes. I do, actually, a friend of a friend. What would be the best way to do this? Let me ask for a mail address for me. But what kind of analysis do you want to do? If you just want a random selection of the pages, the subpages of http://www.ptt.cc/bbs/C_Chat/index.html that have Japanese in them would do. Or do you want to get a list of the software that is used to post to PTT? [05:56:01.0000] I think a bunch of them are open source telnet clients. [05:56:13.0000] s/for me/for you. [05:57:06.0000] kennyluck, ideally, a list of every single page that uses any of the byte sequences that differ between hkscs and uao, but if that's hard to produce I guess a big enough random selection should allow me to find them myself [05:57:37.0000] I thought Firefox was supposed to have a non-blocking UI? [05:57:52.0000] kennyluck, even better would of course be to ask if they're aware of the problem and if they would be prepared to make some changes to fix it [05:59:45.0000] foolip, that is… well, not very easy I would assume. It is basically a non-profit site, partly run by National Taiwan University, and I don't think they actively maintain the underlying BBS software. [06:00:09.0000] kennyluck, so perhaps just scraping it ourselves would be easier? [06:00:20.0000] kennyluck, are there sub-forums in or about Cantonese? [06:00:31.0000] foolip, good point. Yes. [06:01:20.0000] It's basically a community site for everything. [06:01:25.0000] kennyluck, do I need a special version of Windows to test if these pages display properly in non-Firefox browsers, with special font support?
[06:02:26.0000] even the U+0000 - U+007F range has some incompatible stuff, but that's due to using URLs I guess [06:04:02.0000] foolip, I have no idea. I haven't been using Windows for a while, but I would guess you can install the big5-uao package on every Windows. It pretty much just replaces the system's big5 encoding table, or so I heard. [06:05:19.0000] kennyluck, do you have any idea where one would find this package? Googling "big5-uao" mostly finds Mozilla-related things... [06:05:50.0000] In any case, unless this is installed on >50% of Taiwan Windows computers, I really don't see how it could change matters much... [06:06:53.0000] foolip, you can google the phrase "unicode補完計畫". This is one page teaching this → http://www.techbang.com/posts/3350-let-win7-perfect-display-japanese-web-page [06:07:37.0000] kennyluck, ah, that's the "Unicode 補完" you mentioned in your mail (which was very helpful, thank you!) [06:08:30.0000] Wow, the steps are crazy. I wonder how many people go through these. [06:10:56.0000] kennyluck, yeah, replacing CP950 in 18 steps really doesn't seem like something many would do... [06:11:42.0000] that it's available does suggest the problem might be more widespread :/ [06:12:19.0000] kennyluck, also, the example used on that page (forum.tw.fdzone.org/viewthread.php?tid=324495) looks like it's only Japanese kana, which big5-hkscs would fix too [06:12:56.0000] I think we should try to find a better sample of Taiwanese pages, but using what source? [06:13:25.0000] Yeah, I wonder why people don't teach big5-hkscs instead. MS people are clearly not doing good evangelism. [06:14:08.0000] all you get back from MS people is "use utf-8 or utf-16" [06:14:18.0000] even if you ask questions specific to big5 :) [06:14:26.0000] kennyluck, any advice you can give about how to proceed in order to minimize breakage for Taiwan users would be most helpful...
:/ [06:15:07.0000] annevk, I assume that we don't have any locale-dependent mappings and that you won't consider spec'ing that? [06:16:08.0000] if that's what's required we'll do that [06:16:17.0000] currently we don't have that [06:16:19.0000] Mr. Pragmatic :) [06:17:07.0000] I know I don't want it, because I use both Hong Kong and Taiwan sites with an English locale... [06:18:49.0000] it sounds pretty sucky indeed [06:19:01.0000] but we already have locale-dependent defaults [06:20:06.0000] Yeah, i am pretty sure what they (Yuan and Timothy) want is what I said: "big5"="big5-uao" for zh-TW browsers. [06:24:40.0000] foolip, ok, so in case you are interested. PTT has a UTF-8 gateway "ssh bbsu⊙pc", which transcodes the content correctly for the page I gave you. This means that changing http://www.ptt.cc/ shouldn't be very hard *in theory*. [06:26:07.0000] But really, I think the best way to make progress on this is to consult MS people working in Taiwan. They probably understand the problem better, esp. why they don't try to stop the grassroots big5-uao effort... [06:34:49.0000] I reached out to their encoding expert, but he couldn't offer much help [06:35:05.0000] kennyluck: see http://lists.w3.org/Archives/Public/www-archive/2012Mar/thread.html#msg46 [06:38:32.0000] annevk, foolip, hold on. I need to understand this a bit more. When you say IE treats 'big5'='big5-hkscs', do they render the Kanas or not? (Kanas are not rendered correctly on my Win7 laptop, I wonder if this is locale-dependent.) [06:39:04.0000] kennyluck, they have a single mapping that uses PUA, what it renders like really depends on the installed fonts [06:39:19.0000] for windows-874 Chrome maps fullwidth ASCII back to ASCII, for windows-1252 they don't [06:39:37.0000] even though the MingLiu_HKSCS font comes with Win7 by default, I'm not sure if it's used by IE [06:40:22.0000] foolip, OK. now I see why people want to install the big5-uao package.
[06:41:12.0000] I think that a lot of these pages don't render correctly in most browsers, but we want a proper mapping to Unicode to make them work as well as they can... [06:41:35.0000] Is there a similar package that does the *correct big5-hkscs* which doesn't map to PUA (HK-2008?) [06:43:18.0000] kennyluck, Microsoft has stopped providing the hacked code page and just ships MingLiu_HKSCS with Win7, but I'm not exactly sure if any special steps are needed to make IE use it [06:43:32.0000] A Hong Kong user would probably know. [06:44:39.0000] foolip, yeah. I mean, as a normal user, I would be happy to learn something simple just to turn the Kanas on instead of the 18 steps 'big5-uao'. [06:55:39.0000] Or I have the feeling that fixing this part of evangelism might be just too difficult. I wonder how bad it is if we just do locale-dependent 'big5'='big5-uao'. If we don't fix the whole thing, we'll just end up in a suboptimal state where "いま俺の顔生涯最高にキモい自信がある" (uao) is rendered as "いま俺の𡟺生涯最高にキモい自信がある" (hkscs) [06:56:46.0000] unless special fonts are used big5-uao is already broken in Chrome [06:56:50.0000] and Internet Explorer [06:57:03.0000] combined they have about 80% of the Taiwan market [06:57:39.0000] Chrome about 15% [06:57:46.0000] Firefox 10% last I looked [06:58:02.0000] and Chrome does not use the table from Windows [06:58:31.0000] kennyluck, the question is really if that page already works for a majority of Taiwan users. If it only works in Firefox, I'm not sure we should introduce locale-dependent mappings in order to fix it [06:59:39.0000] yeah, it seems better that all browsers break it so that other pages become less broken and there's an incentive to fix the content [07:00:10.0000] does anyone have a good source for the top million sites or something that I can scrape for taiwanese sites? [07:00:39.0000] My hunch is that even in Taiwan, using the big5-hkscs mapping will be correct more often.
[07:01:54.0000] there's http://s3.amazonaws.com/alexa-static/top-1m.csv.zip but it's only front pages [07:02:15.0000] I fear that it's not just the content that's broken. The fact that we still see pages educating people to install the big5-uao package is not negligible. [07:03:50.0000] kennyluck, I guess all we can do right now is speculate on how big the problem is. I'll try to get some kind of random scrape of the Taiwan Web... [07:03:53.0000] foolip, I don't even find a clue on how to turn on big5-hkscs in IE as 'big5'. How could that be true? [07:04:14.0000] kennyluck, it's always the case, big5 always maps to the same PUA code points [07:04:42.0000] and the MingLiu_HKSCS font shipped with Win7 has glyphs for those code points [07:05:02.0000] I don't know what font you need to make those PUA code points match Big5-UAO [07:05:17.0000] kennyluck: I just read that article, did you see how in the end it says in Chrome you should simply select the big5-hkscs override? [07:05:57.0000] kennyluck: and that it's pretty much only written for PTT [07:06:44.0000] I need to go back to that page again... [07:06:48.0000] so if even the Taiwanese suggest big5-hkscs in Chrome, it seems like using that by default would be an improvement [07:06:57.0000] here is the URL http://www.techbang.com/posts/3350-let-win7-perfect-display-japanese-web-page [07:08:04.0000] so sad that PUA is used for de facto standardization [07:08:20.0000] not surprising of course [07:08:46.0000] hsivonen, killing it for Big5 looks within the realm of the possible, fortunately [07:09:05.0000] PUA is the vendor prefixes of Unicode [07:09:41.0000] yay, http://s3.amazonaws.com/alexa-static/top-1m.csv.zip has 2930 .tw domains [07:10:33.0000] now do a site: search for each domain to get the top n pages of each domain :-) [07:11:05.0000] zcorpan, can that be automated?
[07:11:42.0000] dunno, i guess most search engines have anti-DOS measures [07:11:59.0000] obviously, ptt.cc is not on the list of top .tw sites, but I don't know how else to make a list like this :/ [07:12:40.0000] hmm, don't some search engines have a feature to search for pages in a particular language? [07:12:59.0000] I would love a proxy server that saved all requests to disk so I could just let my web browser visit these pages, follow a few links, and then I have the pages to analyze on disk... [07:17:23.0000] zcorpan: Pick a common word in that language and then search for it? [07:21:46.0000] weehee, all encoders are slightly different [07:22:42.0000] ^^ sarcastic [07:28:59.0000] looks like Qt got memes before Opera or IE: http://qtmemes.tumblr.com/ [07:30:42.0000] It seems to be getting quite metamemetic [07:30:47.0000] well that'll be today's quirk testing [07:31:48.0000] Opera: [red penguin] Follows standards. [blue penguin] Alone. [07:33:01.0000] foolip, so this is what I think as a user. 1) I open http://www.toysdaily.com/discuz/forum-24-2.html 2) I can't see the Kana presumably because of reasons related to fonts 3) I happen to know and only know 'big5-uao' as the way to turn Kana on 4) I start outputting Japanese content because I can read it now 5) I now output 'big5-uao'. [07:33:10.0000] http://qtmemes.tumblr.com/post/20183979051/we-have-qtwebkit-that-means-we-count-right [07:33:21.0000] So yeah, intercepting 2) seems workable, but it really depends on IE. [07:34:01.0000] Otherwise, I would still be tempted to install 'big5-uao' and then create incompatible content. [07:35:30.0000] you cannot output big5-uao in Gecko I think [07:35:39.0000] their encoder is restricted [07:35:46.0000] annevk, ah. Good point. [07:41:57.0000] also, Chrome users are apparently advised to just use big5-hkscs [07:42:02.0000] Firefox users are not mentioned...
[07:45:24.0000] annevk, in any case I agree that expanding big5 to big5-hkscs or the intersection of big5-hkscs and big5-uao is making good progress, but I am not sure if it's optimal. [07:47:28.0000] foolip: did you look at "firefox" vs "firefox-hk"? [07:47:35.0000] foolip: ignoring PUA [07:47:56.0000] rniwa: ping [07:48:04.0000] rniwa: where should I send comments about undomanager [07:48:39.0000] whatwg⊙wo has been used thus far [07:49:24.0000] (re. you cannot output big5-uao in Gecko I think) But I might still output 'big5-uao' from IE. I think the extended mapping installed by the package isn't unidirectional like Gecko, but I am not very sure. [07:50:00.0000] oh yeah, with that package IE can do damage [07:55:16.0000] I guess we should figure out a) what's incompatible between hkscs and uao and b) how widespread uao is. [07:58:52.0000] but not today, I have some things to do [08:05:57.0000] hsivonen: I wonder how much would break moving away from the PUA for Big5. [08:24:11.0000] gsnedders: everything that would break would already be broken in Opera; and the pages we looked at would work better with hkscs [10:01:56.0000] http://www.corp.google.com/~jsbell/rampart - added left/right mouse click to place/rotate (in addition to space/control); added territory capture. (algorithm requires you can't surround the "seed" location, hence bisecting water) [10:02:08.0000] Whoops, that would be the wrong channel [10:03:52.0000] (curses, our team's top secret plans to dominate the world via an obsession with an obscure 90's arcade game have been leaked) [10:16:29.0000] <[tm]> jsbell: 'you can't surround the "seed" location, hence bisecting water' [10:17:25.0000] <[tm]> I'm pretty sure you stole that from a Wallace Stevens poem [10:18:43.0000] heh [10:29:21.0000] smaug____: hi, i just replied to your message [10:29:29.0000] smaug____: sorry, I forgot to close my IRC client :\ [10:30:12.0000] rniwa: thanks. 
It was just a first quick read-through [10:33:07.0000] smaug____: btw, since i've started hosting it on w3c repository, it might make sense for us to make public-webapps the place for discussion [10:33:17.0000] AryehGregor: what do you think? [10:33:38.0000] sadly, nobody had replied to my email about chartering undomanager in the editing community [10:33:42.0000] so not sure what's happening there [10:33:48.0000] but... [10:34:28.0000] rniwa: yeah, webapps should be ok [10:34:33.0000] rniwa, as long as you don't make it public-html :) [10:34:49.0000] (If I could get File system out from webapps, and undomanager in...) [10:35:08.0000] Did you comment on the charter? :) [10:35:47.0000] oh, yes, if the discussion happened in public-html, I would promise to not send any comments :) [10:41:49.0000] Ms2ger: I thought I did but maybe I didn't use that particular word or wasn't clear about it :\ [10:42:08.0000] rniwa, hmm? [10:43:13.0000] rniwa: so, input element and textarea could just always have undomanager [10:44:33.0000] rniwa: DOMTransactionEvent doesn't really make it more undo related... [10:44:43.0000] smaug____: i know. [10:45:01.0000] smaug____: we should rename it to something like UndoRedoEvent [10:45:06.0000] rniwa: where is the event dispatched? [10:45:13.0000] to the Undomanager itself? [10:45:19.0000] or to the element? [10:45:31.0000] smaug____: to the element [10:45:43.0000] hmm [10:45:44.0000] smaug____: "When the user agent is required to fire a DOM transaction event for a DOM transaction t at an undo scope host h, the user agent must run the following steps:" [10:45:52.0000] right [10:45:53.0000] smaug____: so that it can bubble [10:46:17.0000] why does it need to bubble? [10:46:35.0000] nm [10:46:37.0000] smaug____: so... 
an important use case is to do something in response to undo/redo [10:46:40.0000] yes, it should bubble [10:46:51.0000] smaug____: for that, you don't necessarily want to attach event listeners on all elements with undoManager [10:48:16.0000] smaug____, Ms2ger: since you're already here... do you know if Mozilla imports W3C tests and creates reference files for them? [10:48:43.0000] smaug____, Ms2ger: we want to import CSS test suite but don't want to add thousands of pixel results. [10:49:00.0000] I've written some for 2.1 a while back [10:49:05.0000] I don't know about css tests [10:49:15.0000] Ms2ger: oh yeah? [10:49:24.0000] Ms2ger: do you know where they're located? [10:49:31.0000] Boring ones, for padding, IIRC [10:49:44.0000] https://bitbucket.org/ms2ger/css-tests [10:49:55.0000] Ms2ger: if you already have reference files, ideally, we don't want to re-invent reference files ourselves [10:50:07.0000] since Mozilla folks surely have more experience writing reference files [10:50:14.0000] I need to figure out how to get them reviewed and into the WG's repo [10:50:31.0000] Ms2ger: oh, so they're not in Mozilla's repository? [10:50:46.0000] No [10:50:56.0000] okay. so i guess we have the same problem then. [10:51:09.0000] We don't import any non-reftest CSS tests, I don't think [10:51:20.0000] Ms2ger: okay. makes sense. [10:51:26.0000] /me tries to figure out some non-DB-related synonym for transaction [10:51:51.0000] smaug____: we use UndoStep internally in WebKit [10:52:03.0000] that sounds ok [10:52:07.0000] annevk: hi annevk [10:52:09.0000] in gecko we do use transactions [10:52:21.0000] rniwa: undomanager is TransactionManager [10:52:38.0000] smaug____: yeah, but it's getting quite confusing in the world where we have IDB's transaction :\ [10:53:03.0000] smaug____: we could just throw in "undo" prefix as well [10:53:07.0000] yeah. undo/redo/undoupdated ...
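[On the earlier point about why the transaction event should bubble: with bubbling, a single listener on the undo scope host observes undo/redo activity for the whole subtree, instead of needing one listener per element. A toy model of bubbling dispatch — plain objects, not the real DOM and not the proposed UndoManager API; every name is illustrative:]

```javascript
// Toy bubbling dispatch: an event fired at a node is also delivered to
// every ancestor, so one listener on the scope host sees everything.
class Node {
  constructor(name, parent = null) {
    this.name = name;
    this.parent = parent;
    this.listeners = {};
  }
  addEventListener(type, fn) {
    (this.listeners[type] = this.listeners[type] || []).push(fn);
  }
  dispatchEvent(type, detail) {
    // Walk from the target up to the root (the bubble phase).
    for (let node = this; node; node = node.parent) {
      for (const fn of node.listeners[type] || []) {
        fn({ type, target: this, currentTarget: node, detail });
      }
    }
  }
}

const host = new Node("undo-scope-host");
const editor = new Node("editor", host);

const seen = [];
// One listener on the host suffices, because the event bubbles up.
host.addEventListener("undo", (e) => seen.push(e.target.name));
editor.dispatchEvent("undo", { label: "typing" });
console.log(seen); // the host's listener saw the editor's event
```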
[10:53:07.0000] so like undo-transaction [10:53:08.0000] rniwa: good evening [10:53:39.0000] annevk: does opera import CSS2.1 test suite as pixel tests? (i.e. generate images)? [10:54:05.0000] annevk: we want to import newer css test suite but we've realized that they don't have reference files [10:54:17.0000] and we don't really want to generate thousands of png files :( [10:54:29.0000] yeah, I objected to the CSS WG doing that [10:54:34.0000] and then the whole group got mad [10:54:37.0000] and glazou blamed Opera [10:54:51.0000] and then they went ahead with their pixel tests instead of doing the test suite right... [10:54:57.0000] annevk: :( [10:55:04.0000] gsnedders prolly knows what we do internally [10:55:23.0000] annevk: what if we said we don't want to import tests that are not reftests? [10:55:40.0000] I think TabAtkins actually said it didn't matter to Google [10:55:42.0000] /me wonders if webkit community's decision will have an impact on CSS WG [10:55:49.0000] but that would be a wise change of position I think [10:56:05.0000] annevk: yeah, pixel results make very little sense [10:56:07.0000] rniwa, the policy for CSS3 is reftest-only, IIRC [10:56:24.0000] annevk: it incurs way too much maintenance cost. [10:56:27.0000] Ms2ger: good! [10:56:39.0000] Microsoft didn't care, and Gecko/WebKit people were kind of holding back because glazou blew up [10:56:39.0000] TabAtkins: do you know the details? [10:56:39.0000] Ms2ger: it's reftest or testharness.js now [10:56:58.0000] astearns: sweet [10:57:07.0000] annevk: I see. [10:57:34.0000] Ms2ger, annevk: it might make sense for Gecko/WebKit people to push CSS WG to have reference files [10:57:41.0000] astearns, well, requiring reftests for JS tests would be silly [10:58:01.0000] since neither of us want to import tests that are not reftests [10:58:18.0000] Ms2ger: more silly than not allowing JS tests at all [10:58:21.0000] Ms2ger: testharness.js aren't reftests, right? 
[10:58:49.0000] rniwa: correct [10:59:06.0000] rniwa: and we just checked in testharness.js support in WebKit [11:01:16.0000] And so did I for Gecko :) [11:01:59.0000] rniwa, I can't speak for Mozilla, and I barely do anything to do with CSS, but you have my vote :) [11:07:58.0000] rniwa: We have it as screenshot tests, yes [11:08:11.0000] rniwa: We've converted some to reftests, but not entirely [11:08:28.0000] rniwa: Never submitted to the WG after it became kinda obvious the WG didn't want them lest it delay REC [11:08:41.0000] gsnedders: :( [11:08:46.0000] gsnedders: that's my fear as well. [11:09:02.0000] rniwa: The policy for CSS3 is reftest only, pretty much [11:09:06.0000] gsnedders: it seems like they won't accept our patches to add reference files even if we submitted them [11:09:20.0000] gsnedders: do you have your reference files publicly available somewhere? [11:09:31.0000] rniwa: You can try, and argue that these tests will likely become part of CSS3 module test suites [11:09:52.0000] rniwa: I don't think so. [11:10:31.0000] Now that 2.1 is a rec, I guess we can get them in [11:10:38.0000] rniwa: But group the tests by their screenshot, and you realize CSS 2.1 test suite is in large part identical references. [11:10:53.0000] (Some of the refs in my repo are gsnedders's, I should note) [11:10:55.0000] Ms2ger: Yeah, I guess, but I've moved on from caring about it now. [11:11:19.0000] I've argued for this before, and I can't be fucked fighting to get them in. [11:11:31.0000] Anyway, if the WG doesn't want them, I'd prefer a shared fork, though [11:11:48.0000] Ms2ger: yeah, that's what i'm getting at [11:12:02.0000] Ms2ger: there's no point for each of us to re-do all the work [11:12:07.0000] If you don't want to take the work that people have done, even though your requirements insist on them in future, then that's your own damned problem.
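[For readers unfamiliar with the format being argued over: a W3C-style reftest page names its own reference with <link rel="match"> (or rel="mismatch"), and the runner renders both pages and compares the output, so no screenshots are checked in. A minimal sketch of the pairing step — findReferences is an invented name, and real runners use a proper HTML parser rather than a regex:]

```javascript
// Sketch: extract the reference links a W3C-style reftest declares.
// A test passes if it renders identically to each rel="match"
// reference (and differently from each rel="mismatch" one).
function findReferences(html) {
  const refs = [];
  const linkRe = /<link\s+rel=["']?(match|mismatch)["']?\s+href=["']?([^"'>\s]+)/gi;
  let m;
  while ((m = linkRe.exec(html)) !== null) {
    refs.push({ kind: m[1].toLowerCase(), href: m[2] });
  }
  return refs;
}

const testPage =
  '<link rel="match" href="green-box-ref.html">' +
  '<link rel="mismatch" href="blank-ref.html">' +
  '<p style="background: green">box</p>';

console.log(findReferences(testPage));
```

[This is why "a mere four references" can serve hundreds of tests: many tests are expected to render as the same simple page.]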
[11:12:10.0000] Ms2ger: if we can just share the results [11:12:21.0000] rniwa: That's the exact argument I made to the mailing list two years ago. [11:12:27.0000] Nobody cared. [11:12:29.0000] Literally. [11:12:29.0000] gsnedders: :( [11:12:41.0000] gsnedders: that's astoundingly annoying [11:12:44.0000] Not even Mozilla people, who were the only others at the time to have a running reftest system. [11:13:03.0000] WebKit people were interested, but mainly in an in-the-future way. [11:13:18.0000] gsnedders: it's possible that future has come :D [11:13:28.0000] gsnedders: since we DO support reftests now [11:13:38.0000] the future is here! rniwa has added support for reftests :) [11:13:49.0000] Some of my refs may well assume 96dpi on tests that shouldn't. [11:13:57.0000] astearns: Yeah, I'm well aware. :) [11:14:09.0000] gsnedders, astearns: we've had our own support for reftests, but we only added the support for W3C style reftests last winter [11:14:29.0000] rniwa: Search for emails from me to public-css-testsuite and w3c-css-wg if you want the background [11:14:39.0000] gsnedders: okay. [11:14:39.0000] rniwa: Yeah, you didn't even have that when I was working on this :) [11:15:11.0000] gsnedders, Ms2ger: anyway, it'll be great if we could share reference files even if W3C doesn't accept them [11:15:43.0000] gsnedders, Ms2ger: for webkit, I want to make sure we don't invent our own reference files that don't adequately exercise tests [11:15:45.0000] rniwa: https://lists.w3.org/Archives/Member/w3c-css-wg/2010JulSep/0222.html is where I gave up [11:15:50.0000] (Member only, etc) [11:15:57.0000] gsnedders: oh...
it's member only [11:16:28.0000] gsnedders: i hear member-only mailing lists are terrible places to live in [11:17:10.0000] I hear some people are more polite if their emails are archived in public [11:18:09.0000] i hear some people are more polite if they aren't glazou [11:18:19.0000] No comment [11:18:46.0000] rniwa: I may have sent some to public-css-testsuite [11:19:17.0000] (chaals's response to that email is the truth, for those of you who can read them) [11:19:25.0000] (references, that is) [11:19:26.0000] rniwa, I'm happy to help out writing some references, btw, but you'll probably have to poke me :) [11:20:43.0000] rniwa: http://lists.w3.org/Archives/Public/public-css-testsuite/2010Sep/0030.html [11:20:53.0000] "Attached is a diff to convert 830 tests to reftests (with a mere four [11:20:54.0000] references!)." [11:21:06.0000] gsnedders: thanks! [11:21:24.0000] gsnedders, Ms2ger: I'll probably post something back on webkit-dev about this if you guys don't mind [11:21:31.0000] The whole situation was ridiculous, really. [11:21:36.0000] we're having a big debate on how to import css tests [11:21:50.0000] rniwa: tl;dr of the debate? [11:21:52.0000] rniwa, gsnedders, those are in my repo too [11:21:54.0000] and this reftest vs. pixel test thing is one major issue [11:22:06.0000] Ms2ger, gsnedders: so... we want to import tests as reftests [11:22:31.0000] You don't want to go through all 10k tests and label them as pass or fail (or in your case, pass or xfail). [11:22:33.0000] but some of us (mainly me) don't want to invent our own reference files because there is a chance that our own reference files don't exercise the tests adequately [11:22:56.0000] rniwa, isn't http://28.media.tumblr.com/tumblr_m284vvtgQT1rqvy12o1_1280.jpg argument enough in favour of reftests? :) [11:23:02.0000] e.g.
there could be a bug that affects both the test and the reference file the same way and end up hiding the bug [11:23:06.0000] /me may have borrowed summer interns for a day to label them [11:24:32.0000] Ms2ger: brilliant [11:24:36.0000] Note to self: don't do internship at Opera the summer after MS dumps its tests [11:24:51.0000] Also: http://28.media.tumblr.com/tumblr_m1yh3dbdHf1rqvy12o1_r1_1280.jpg [11:24:57.0000] Ms2ger: Heh, we're almost certainly never going to bring in a large dump of screenshot-based tests ever again [11:27:06.0000] could some of you take a look at https://bugs.ecmascript.org/show_bug.cgi?id=277#c2? /cc zcorpan [11:27:45.0000] IIRC all browsers except Firefox respect http://wiki.whatwg.org/wiki/Web_ECMAScript#Identifiers but it’s been a while since I tested [11:27:47.0000] v\u0061r x = 0 [11:27:50.0000] Orly [11:28:15.0000] Ms2ger: that is a good one [11:28:17.0000] /me cringes at the thought of all that [11:28:35.0000] /me goes to his office [11:28:47.0000] matjas, unsurprisingly, Allen wasn't too happy to see you mention that :) [11:29:27.0000] hey, I never said this should become part of ECMAScript [11:29:45.0000] just how most browsers seem to do it [11:30:22.0000] matjas: Then that's not relevant for es-discuss :P [11:32:23.0000] I just mentioned it casually :) It was a Spidermonkey-specific thread anyway (as no other browser allows `function function() {}` etc.) [11:32:33.0000] /me nerdrages [11:32:57.0000] jgraham: Any idea whether you found the identifier nonsense needed for web compat? [11:37:39.0000] var tru\u0065; /* "Expected identifier" error in IE9 */ console.log(fals\u0065) /* "Syntax error" in IE9 */ [11:37:46.0000] so Fx and IE agree [11:53:39.0000] Looks like pointer lock landed in Gecko [11:58:20.0000] yup [12:38:59.0000] abarth: any movement on http://www.w3.org/2011/webappsec/track/issues/6 ? [12:39:35.0000] Hixie: we talked about it in the telecon yesterday [12:39:52.0000] so i should still do the refactoring?
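[The v\u0061r probe quoted above is about whether Unicode escapes may spell a keyword: per the ECMAScript grammar, escapes are allowed inside ordinary identifiers but must not form a reserved word, and the log notes IE9 and Firefox agreed on rejecting it. The same check in today's Node/V8 — a quick illustration, not an exhaustive conformance test:]

```javascript
// \u0061 is 'a', and escapes are legal inside ordinary identifiers,
// so "var \u0061 = 42" declares a variable named "a"...
const r = eval("var \\u0061 = 42; \\u0061;");
console.log(r); // 42

// ...but an escape must not form a reserved word: "v\u0061r" unescapes
// to the keyword "var", which spec-conforming engines reject.
let threw = false;
try {
  eval("v\\u0061r x = 0;");
} catch (e) {
  threw = e instanceof SyntaxError;
}
console.log(threw); // true
```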
[12:39:55.0000] as far as I can tell, everyone wants it, but moz hasn't implemented it and is trying to block the spec from progressing until they finish [12:39:56.0000] just making sure :-) [12:40:07.0000] abarth: ok so my plan with sandbox flags is as follows: [12:40:42.0000] - make it so that browsing contexts have the set of flags [12:41:07.0000] - make it so that documents have two copies of the set of flags: one called something like "the CSP flags" and one called the "effective flags" [12:41:35.0000] the effective flags is the boolean-or of the flags from the CSP flags, the browsing context flags, and all ancestor browsing context flags [12:41:48.0000] hm no that doesn't work [12:41:56.0000] csp needs to also affect descendant browsing contexts [12:41:58.0000] let me try again [12:42:00.0000] plan: [12:42:58.0000] hmm [12:43:21.0000] the basic idea is i move the real set of flags to Document [12:43:28.0000] so all the security checks use that [12:43:35.0000] and those are set by being fed from the various other sources [12:43:44.0000] CSP, containing
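[Hixie's first cut above — effective flags as the boolean-or of a document's CSP flags, its browsing context's flags, and all ancestor contexts' flags — amounts to a set union. A toy sketch only: flag names and object shapes are invented, and as the log itself notes, the plan was immediately revised to hang the real flag set off Document.]

```javascript
// Toy model of the sandbox-flag plan: union the document's CSP flags
// with its browsing context's flags and every ancestor's flags.
function effectiveFlags(doc) {
  const flags = new Set(doc.cspFlags);
  for (let bc = doc.browsingContext; bc; bc = bc.parent) {
    for (const f of bc.flags) flags.add(f);
  }
  return flags;
}

// Illustrative setup: a sandboxed frame inside a top-level context.
const topCtx = { parent: null, flags: new Set(["allow-scripts-blocked"]) };
const frameCtx = { parent: topCtx, flags: new Set(["allow-forms-blocked"]) };
const doc = {
  cspFlags: new Set(["allow-popups-blocked"]),
  browsingContext: frameCtx,
};

console.log([...effectiveFlags(doc)].sort());
```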