02:07
<jarek>
Hi
02:08
<jarek>
why are the 'no-display' and 'no-content' values for the 'overflow' property marked in red here:
02:08
<jarek>
http://www.w3.org/TR/css3-box/
02:09
<jarek>
I mean: http://www.w3.org/TR/css3-box/#overflow
02:11
<jarek>
MDN does not seem to mention them anywhere
02:38
<kennyluck>
jarek, I strongly suggest you not read that spec. I haven't seen it being discussed for a year or so.
02:39
<kennyluck>
If a statement is marked in red, that means there are issues.
07:11
<MikeSmith>
hsivonen: http://bugzilla.validator.nu/show_bug.cgi?id=875 (filed today) is yet another report caused by the misleading error+elaboration message for subtypes of the input element
07:12
<MikeSmith>
I'm wondering what you think of the idea of just allowing all the input attributes on all subtypes in the schema, and moving all the error reporting to Assertions.java
07:13
<MikeSmith>
and special-casing the elaboration message
07:14
<MikeSmith>
to output the same elaboration message from all input types
07:15
<MikeSmith>
maybe the elaboration message for input could be a <dl> list where each <dt> is an attribute name and the <dd> is a list of input types for which that attribute is allowed
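For illustration, such a <dl> might look like this (an abbreviated, hypothetical sketch; the attribute-to-type mappings shown are illustrative, not the validator's actual output):

    <dl>
      <dt>maxlength</dt>
      <dd>text, search, url, tel, email, password</dd>
      <dt>min, max, step</dt>
      <dd>date, month, week, time, number, range</dd>
      <dt>checked</dt>
      <dd>checkbox, radio</dd>
    </dl>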
07:25
<hsivonen>
MikeSmith: at least the elaboration needs to be type-specific. Maybe even the error message could be intercepted to avoid having to resort to Assertions.java
07:25
<MikeSmith>
OK
08:07
<hsivonen>
who is "james" in http://www.w3.org/2011/11/02-jquery-minutes.html ?
08:07
<hsivonen>
jgraham?
08:13
<hsivonen>
huh? Does the Windows Server edition not have an H.264 Media Foundation decoder? Why else would <video> not work in IE9 there?
08:13
<hsivonen>
(per slides from the above minutes)
08:19
<hsivonen>
roc: did you notice that drawing DOM stuff to canvas was among the top requests paul_irish and ycats got from Web authors?
08:19
<roc>
no
08:19
<roc>
link?
08:21
<franksalim>
this one? http://paulirish.com/2011/what-feature-would-improve-the-web/
08:25
<hsivonen>
franksalim: yes
08:25
<hsivonen>
roc: also slide 11 of https://docs.google.com/present/view?id=ajdqczcmx5pv_148ggbxbfg2&pli=1
08:25
<roc>
easy to ask for
08:25
<roc>
not so easy to fix the security problems with
08:27
<hsivonen>
roc: right
08:31
<roc>
hsivonen: do you know anything about dvcs.w3.org?
08:31
<roc>
we actually do have a way to render DOM elements to canvas
08:32
<roc>
by the way
08:32
<roc>
canvas.drawImage(new Image("data:text/xml,<svg ...><foreignObject ...>...</foreignObject></svg>"), 0, 0);
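A fleshed-out sketch of the same technique (assuming a <canvas id="c"> on the page; the markup, size, and MIME type here are illustrative, and unlike the one-liner above it waits for the image to load before drawing):

    var ctx = document.getElementById('c').getContext('2d');
    // Wrap arbitrary XHTML in an SVG <foreignObject> so it can be loaded as an image.
    var svg = '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="50">' +
              '<foreignObject width="100%" height="100%">' +
              '<div xmlns="http://www.w3.org/1999/xhtml">Hello <b>world</b></div>' +
              '</foreignObject></svg>';
    var img = new Image();
    img.onload = function () {      // image decoding is asynchronous
      ctx.drawImage(img, 0, 0);
    };
    img.src = 'data:image/svg+xml,' + encodeURIComponent(svg);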
08:33
<hsivonen>
roc: I have pushed a test to dvcs.w3.org once
08:33
<roc>
did you have to do anything special to get credentials?
08:33
<roc>
my W3C password, that works for reading member emails etc, doesn't work for dvcs
08:33
<roc>
AFAICT
08:33
<hsivonen>
roc: you need to be a participant in a WG and use the same credentials you use to see behind the Member-only paywall
08:34
<roc>
hmmm
08:34
<hsivonen>
roc: odd. you are a participant in the HTML WG after all
08:34
<roc>
am I?
08:34
<roc>
I don't know
08:34
<roc>
or does my ability to post to public-html indicate that I am?
08:34
<roc>
I'll email dbaron
08:34
<hsivonen>
roc: you are
08:34
<hsivonen>
http://www.w3.org/2000/09/dbwg/details?group=40318&public=1
09:03
<heycam>
hsivonen, yes that was jgraham
09:18
<hsivonen>
heycam|away: thanks
09:45
<MikeSmith>
roc: if you're still around, I can help with dvcs.w3.org perms
09:45
<roc>
cool
09:45
<roc>
my username is 'rocallah'
09:45
<MikeSmith>
OK
09:46
<MikeSmith>
which repo?
09:49
<roc>
hg/audio
09:49
<MikeSmith>
OK
09:50
<MikeSmith>
roc: please try it now
09:55
<roc>
here goes
09:55
<roc>
great, that worked!!
09:55
<MikeSmith>
super
09:59
<roc>
thanks
10:03
<MikeSmith>
no problem
10:11
<roc>
http://robert.ocallahan.org/2011/11/drawing-dom-content-to-canvas.html
10:11
<roc>
paul_irish: ^^^
15:02
<manu`>
Heads-up, Data Driven Standards Community Group launches at W3C: http://manu.sporny.org/2011/data-driven-standards/
15:02
<manu`>
and the link to the group: http://www.w3.org/community/data-driven-standards/
15:07
<jgraham>
manu`: I agree entirely with the idea that this methodology is a good one and should be adopted, but I don't quite see what the value of a community group is in this situation
16:00
<AryehGregor>
smaug____, do you think we can get rid of Range.detach()? http://www.w3.org/Bugs/Public/show_bug.cgi?id=14591
16:08
<smaug____>
hmm
16:08
<smaug____>
it is indeed useless
16:08
<smaug____>
or, not quite
16:09
<smaug____>
but anyway quite strange method
17:14
<dglazkov>
good morning, Whatwg!
17:25
<manu`>
jgraham: re: http://www.w3.org/community/data-driven-standards/ the advantage, imho, of a community group is to create a place where people can gather - one list that discusses how to technically achieve the goal and documents how it is done.
17:26
<manu`>
We'd like to start building tools that allow folks to just plug in regular expressions that are run on a monthly/quarterly basis... we're thinking of using CommonCrawl to do the first sets of crawls
17:26
<manu`>
and then utilizing someone like 80legs.com for the next few crawls.
17:27
<manu`>
We're planning on doing a crawl to see where and how RDFa and Microdata are being used/abused...
17:28
<manu`>
I've been trying to get something like this going for about 6 months now, but every time we try to partner w/ someone, it falls through... and it's become increasingly difficult to wrangle everyone involved.... so, CG seemed like a good place to gather and promote the ideas.
17:29
<manu`>
I think we'll use the wiki to gather services and document how to write crawls for each service... probably have a github account in time for crawling template storage... so that you can just copy the crawling template (map/reduce job) to an Amazon Elastic Map Reduce instance and hit "GO!"
17:30
<manu`>
make sense?
17:49
<gsnedders>
brucel: Yeah, the postoffice brokenness is definitely new :\
17:53
<gsnedders>
manu`: There are a few options. One is just to use data from dotbot, though that's a fairly old dump.
17:53
<gsnedders>
manu`: The other option is just to crawl pages from some public index like dmoz
17:54
<gsnedders>
I mean, yeah, you won't get quite the amount of data you could from Google's index, but probably still enough to be reasonable
17:54
<gsnedders>
I guess the problem is finding the pages that actually use RDFa/Microdata, though, really
18:07
<manu`>
gsnedders: do you know how much data dotbot has? or dmoz? CommonCrawl has tens of terabytes of data (5 billion pages?) 80legs claims to crawl the entire crawl-able web every 3 months (but does not store the data - they just process as they crawl) - there is also WebGrep - but they don't support regexes, just static string matching.
18:08
<gsnedders>
manu`: dotbot has just 13GB, but for most purposes it's a representative sample of the web as a whole (which probably means you'll find almost no RDFa in it).
18:08
<gsnedders>
manu`: dmoz just processes as it goes, but you can get all the URLs and crawl yourself enough for a representative sample
18:09
<gsnedders>
The problem with RDFa/Microdata is going to be making sure you get a representative sample of them and not of the web as a whole.
18:09
<gsnedders>
(or rather the subset of the web that uses RDFa/Microdata)
18:09
<manu`>
yes, true... but seeing as how nobody really has any public data yet... we have to start somewhere.
18:09
<erlehmann>
i need a kissology schema
18:09
<erlehmann>
does anyone have one?
18:09
<manu`>
one of the things we're considering is running a test on each crawling service... to see how representative each one is wrt the others.
18:10
<erlehmann>
i'd like this to work using RDFa <http://daten.dieweltistgarnichtso.net/src/wer-kuesst-wen/?json=internet-elite.json>
18:10
<erlehmann>
;)
18:11
<erlehmann>
but i cannot seem to find any good graphical RDF aggregators
18:11
<gsnedders>
manu`: The other alternative is just browser extentions and process what users visit
18:12
<manu`>
erlehmann - I remember somebody doing something like that w/ FOAF some time ago, but I can't remember the URL for it...
18:12
<erlehmann>
manu`, i remember lish daelnar doing the same thing with pico -w
18:12
<erlehmann>
http://sexchart.org/sexchart.9.43
18:13
<manu`>
gsnedders: yes, that would be a good alternative as well - but I'm skeptical about how easy it will be to get people to install software that crawls the web on our behalf.... I think it's a good idea, just that writing the software and getting a community built around it seems more difficult than utilizing these large web crawls.
18:13
<erlehmann>
but the point is that a graphical RDFa aggregator could show a social graph quite nicely
18:13
<erlehmann>
why are the large crawls needed?
18:14
<manu`>
erlehmann: absolutely, it would be nice to have... one of our engineers tried using gource to do a visualization of some RDF data we had... it was a good idea that was never finished.
18:15
<manu`>
re: large crawls - representative sample... we're not just concerned about RDFa/Microdata... I think it would be good for the HTML5 spec... allowing features to be killed off more easily (see the latest <time> fiasco - I agreed with removing <time>, but w/o data it's hard to make a case for /any/ removal)
18:16
<manu`>
re: http://sexchart.org/sexchart.9.43 - that made my eyes bleed.
18:51
<erlehmann>
time is removed?
18:51
<erlehmann>
oh noes.
18:51
<erlehmann>
i have to subscribe to the newsletters more often
18:52
<erlehmann>
manu`, the problem with relationship graphs is that every single person i know who can do more than i can does not want to code a tool that may or may not ruin their supposedly monogamous relationship or show embarrassing ex partners.
18:52
<erlehmann>
so i do not really have any help with coding.
18:53
<erlehmann>
but lots of people say “one should be able to dispute RDF assertions”
18:53
<erlehmann>
facepalm m(
19:02
<matjas>
why does `foo &amp bar` (missing semicolon) render as an ampersand?
19:04
<bga_>
&<i />amp <- hack :)
19:06
<zewt>
&amp;amp;amp;amp;amp;amp;amp;
19:06
<bga_>
%)
19:07
<matjas>
zewt: that renders as `&amp;amp;amp;amp;amp;amp;`, which makes sense, since there’s a semicolon following the first
19:07
<matjas>
i’m just trying to understand the other case
19:08
<zewt>
i know, heh
19:08
<zewt>
matjas: i'd just guess web-compatibility
19:09
<bga_>
matjas: maybe it's the same as < tr >
19:09
<bga_>
^ it's not a tag
19:13
<matjas>
http://www.whatwg.org/specs/web-apps/current-work/multipage/tokenization.html#tokenizing-character-references
19:14
<matjas>
bga_: the weird thing is that `foo &amp bar` is actually valid (even in HTML4)
19:14
roc
wonders if paul_irish reads IRC
19:34
<kennyluck>
matjas, no it's not. It's a parse error.
19:38
<matjas>
kennyluck: that’s what i thought after reading http://www.whatwg.org/specs/web-apps/current-work/multipage/tokenization.html#tokenizing-character-references, but validator.nu and http://validator.w3.org/check (in HTML 4.01 strict mode) don’t complain at all
19:39
<matjas>
oh wait, my bad, validator.nu does complain
19:39
<matjas>
/ignore me!
19:39
<matjas>
but in HTML4 it seems to be valid, unless that’s a bug in the validator
19:40
<kennyluck>
I know nothing about HTML 4.01 so you might be right about the HTML 4.01 part.
19:44
<matjas>
http://validator.nu/?doc=data%3Atext%2Fhtml%3Bcharset%3Dutf-8%2C%3C%21DOCTYPE+html+PUBLIC+%22-%2F%2FW3C%2F%2FDTD+HTML+4.01%2F%2FEN%22+%22http%3A%2F%2Fwww.w3.org%2FTR%2Fhtml4%2Fstrict.dtd%22%3E%3Ctitle%3ETest%3C%2Ftitle%3E%3Cp%3Efoo%2520%26amp%2520bar&parser=html4&showsource=yes validator.nu’s HTML 4.01 validator says it’s a parse error too
19:44
<matjas>
must be a bug with http://validator.w3.org/check
19:47
<AryehGregor>
What's the best way to do new content attributes where we want values of "true", "false", and "inherit"? Like contenteditable or spellcheck?
19:48
<kennyluck>
What about "true", "false", and "auto"? I am thinking about dir=auto.
19:49
<AryehGregor>
I'm just wondering which way would be preferred for new attributes.
19:50
<Philip`>
manu`: http://philip.html5.org/data/dotbot-20090424.txt for dotbot
19:50
<Philip`>
manu`: (http://philip.html5.org/data/pages-using-rdfa.txt is seemingly RDFa-using pages from that)
19:52
<Philip`>
manu`: I think dmoz had like 5M distinct URLs when I last checked; average page size (if you have an upper limit of maybe 1MB) is about 30KB if I remember correctly, so it's like 150GB to download all those pages, which isn't particularly problematic
19:54
<Philip`>
manu`: With the dotbot data (~0.5M pages) it only took me a few minutes to parse and analyse every page, on a single quad-core machine, so no need for fancy cloud-based map-reduce processing until you have maybe an order of magnitude more pages or want instant results
19:56
<Philip`>
matjas: "&amp" without semicolon is listed in http://www.whatwg.org/specs/web-apps/current-work/multipage/named-character-references.html#named-character-references so it gets parsed (but I think it's always a syntax error if you use any of the ones without semicolons)
19:57
<matjas>
Philip`: mind = blown, I had missed that, thanks!
19:57
<matjas>
i also tested with raquo but that has a semicolon-less entry as well
19:58
<Philip`>
matjas: (The details of the list are due to compatibility requirements - semicolons are required wherever the spec can get away with it)
20:12
<manu-db>
Philip`: Thanks for the link... looking at it now.
20:14
<manu-db>
Philip`: Do you think that the dotbot and dmoz sample sets are large enough to give a decent representation of usage on the Web?
20:15
jgraham
wonders if it is worth the effort of trying to kill <details>
20:15
<jgraham>
Or at least putting it on ice for now
20:16
<jgraham>
(because I am not convinced that it is possible to implement well at this stage)
20:16
<jgraham>
(due to styling problems)
20:17
<paul_irish>
roc: i do read IRC :) thx for the awesome post.. badassjs already wrote it up http://badassjs.com/post/12473322192/hack-of-the-day-rendering-html-to-a-canvas-element-via
20:17
<scor>
Philip`: this data is from 2009?
20:18
<roc>
paul_irish: great, thanks!
20:19
<roc>
paul_irish: it may be possible to work around Webkit's data: URI bug by using a BlobBuilder to construct the SVG image and getting a Blob URI for it
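A sketch of that workaround under the same assumptions as the earlier example (svg is the serialized SVG string, ctx the canvas context; BlobBuilder was vendor-prefixed at the time, hence WebKitBlobBuilder and webkitURL):

    var bb = new WebKitBlobBuilder();
    bb.append(svg);
    var url = webkitURL.createObjectURL(bb.getBlob('image/svg+xml'));
    var img = new Image();
    img.onload = function () {
      ctx.drawImage(img, 0, 0);
      webkitURL.revokeObjectURL(url);  // release the blob URI once drawn
    };
    img.src = url;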
20:21
<Philip`>
manu-db: Depends what you're measuring usage of - if you assume the data is a uniform random sample of the web, and you determine that N pages have some property, you can easily calculate the error bars on N, and I've completely forgotten the details but I think it's reasonably accurate when N is at least a few dozen, for samples of this scale
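For concreteness, the standard binomial error bars Philip` is alluding to (textbook statistics, not spelled out in the log): if N of n uniformly sampled pages have the property, then

    \hat{p} = N/n, \qquad \hat{p} \pm 1.96\sqrt{\hat{p}(1-\hat{p})/n}

so for small \hat{p} the relative error is roughly 1.96/\sqrt{N}; at N = 50 hits that is about ±28%, consistent with the "at least a few dozen" rule of thumb.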
20:22
<manu-db>
Philip`: Right - college-level statistics and all... but the assumption being made is that the data in dmoz and dotbot is a uniform random sample of the Web... and I'm not convinced that it is.
20:22
<Philip`>
manu-db: Yeah, they're definitely not
20:23
<manu-db>
to put it another way - we've done analysis on the Sindice database and a few others and found that even those large data sets have some bias.
20:23
<Philip`>
At least the dotbot data might be a uniform sample of some defined population (i.e. the set of pages they crawled), though not necessarily a useful population
20:23
<Philip`>
There's an infinite number of web pages so it's impossible to even have the concept of a uniform sample
20:24
<manu-db>
right... so, right now, we're looking at doing an 80+TB crawl - it'll cost us around $100 USD ... if the numbers that we find there match the dmoz / dotbot data - then we can assume that dmoz / dotbot have an acceptably random sample of the Web...
20:24
<jgraham>
I was going to say, what do you mean by a fair sample in this case?
20:25
<manu-db>
"fair sample" - I don't know... right now, I'm just wondering how much deviation there will be among the crawlers for the same question... like: "how may <time> elements are there in the data set?"
20:25
<Philip`>
There are various problems like dmoz having a zillion nytimes.com pages at some point in the past (if I remember correctly), which can be improved by e.g. limiting number of pages per domain
20:25
<manu-db>
(and then you divide the occurrences by the number of pages, etc.)
20:26
<manu-db>
jgraham: I think one of the problems is that we don't know what a "fair sample" looks like... there is no metric for determining a fair sample...
20:27
<manu-db>
for the reasons that Philip` states... I tend to shrug when people ask that question. I don't have a good answer - thus the need for the Data-Driven Standards work...
20:27
<Philip`>
Probably the most useful sample is the set of pages visited by users in a day, multiplied by the importance they assign to each visit
20:28
<Philip`>
A more feasible approximation is the set of pages visited by users in a day
20:28
<jgraham>
Indeed. I can't even begin to imagine what you would say. If you show usage of <time> in 1% of all pages but those 1% are all wordpress blogs that will be upgraded with the next security release, is that a significant number of people or not?
20:28
<manu-db>
I'm sure you could factor their page rank in there somewhere.
20:28
<Philip`>
dmoz probably has a lot of bias towards old sites (because they were entered a long time ago) that nobody visits nowadays
20:29
<jgraham>
manu-db: You might be able to get that data more directly e.g. by getting a browser maker to add element counters to their data collection tools
20:29
<manu-db>
jgraham - yes, answering questions like that is difficult... usually getting data just creates more questions that you want to ask the data...
20:29
<Philip`>
dotbot probably has a lot of bias towards deep database-driven sites with large numbers of pages
20:29
<Velmont>
jgraham: Well, themes are not really always that easily upgraded.
20:29
<Philip`>
(and most pages are visited very rarely)
20:29
<jgraham>
(the user-determined average)
20:29
<manu-db>
jgraham: good idea - but getting browser vendors to move on stuff like this is a very long and painful process... we haven't had much luck with it in the past.
20:30
<jgraham>
Right, there's probably a bunch of reasons that's a bad idea
20:30
<Velmont>
jgraham: Also, I have <time> on all pages of universitas.no, which has quite a lot of pages (a newspaper). It's a fish in the sea, but I guess many others have it. :]
20:30
<jgraham>
But it is the only thing that comes close to the rather-reasonable definition of "fair" (i.e. usage weighted) that Philip` gave
20:30
<Philip`>
If you're measuring stuff like <time>, then that's extremely rare and very recent, so I expect it'll be very hard to find and depend hugely on the sample method, whereas usage of something like longdesc has probably not changed much for a decade so it doesn't matter so much where you look
20:31
<jgraham>
Velmont: It was only an example
20:31
<Philip`>
scor: It is
20:31
<jgraham>
(but does indicate another problem with getting data from browser vendors which is that it is useless without URLs and they shouldn't hand those over)
20:34
<Philip`>
I suppose the other problem is that even when you have pretty accurate and detailed and multiply-reproduced data about e.g. longdesc, people ignore it
20:34
<manu-db>
You could artificially create some sort of relevance rank by applying the Google page rank of a domain to the URLs that you scan of that domain... but even that is pretty hand-wavingly bad... that could give you a relevance value for the markup of a particular page.
20:35
<manu-db>
In any case - I don't think the questions we're going to be asking are that detailed at first.
20:35
<manu-db>
We just want to know "How many sites will break if we remove X?"
20:36
<manu-db>
and then at least we have data... even if we can argue endlessly about the significance of those sites.
20:36
<zewt>
be nice if google could do things like dom-level queries, heh
20:36
<zewt>
google xpath
20:36
<manu-db>
the thing that concerns me is that we don't even have the basic set of data right now.
20:37
<manu-db>
zewt: So, I was looking into the map-reduce stuff and if there was a Python HTML5 DOM (which there is), you could do those types of queries on the Common Crawl data set.
20:37
<Philip`>
I don't think you want to use Python - Java is about a hundred times faster for this
20:38
<Philip`>
(where "this" includes HTML5 parsing)
20:38
<jgraham>
Philip`: I think you mean "I don't think you'd want to use a python *parser*"
20:38
<manu-db>
Well... you can use Java too... I just try not to unless absolutely necessary.
20:38
<bga_>
:)
20:38
<bga_>
use pure C!
20:38
<Ms2ger>
You can use C++, as soon as hsivonen finishes his standalone parser :)
20:39
<Ms2ger>
(Along with HTML5 becoming a recommendation?)
20:39
<manu-db>
unfortunately, Hadoop is written in Java... which is what Amazon's Elastic Map Reduce crap uses... so, no C++ love there.
20:41
<Velmont>
Re the h264 js decoder: it would be nice to see a theora js-based decoder, seeing as theora has a lot less complexity. Then maybe I could retire the java applet cortado I use for showing videos to legacy browsers.
20:41
<Philip`>
manu-db: Annoyingly, trying to prove a negative ("there aren't many significant pages that will break if we remove X") seems massively harder than a positive (which you can prove by demonstrating there are N affected pages in this sample) :-(
20:42
<Philip`>
(For a negative, people will always argue your sample may be missing many significant pages, and will probably be right)
20:43
Philip`
isn't entirely sure it's worth the effort of trying to do the former
20:44
<manu-db>
Philip`: I don't quite follow, mind elaborating?
20:46
<Ms2ger>
Absence of proof isn't proof of absence, and stuff like that
20:48
<Philip`>
If you say "our search found nobody using <time> so we can safely remove it", someone will say "but this major site over here uses it", or "your sample is 3 months old and there was a load of publicity 2 months ago that will have encouraged thousands of people to use it", or "you ought to try looking in .gov sites because I have a hunch they might use it", etc
20:51
<Philip`>
(whereas if you say "this list of three hundred sites uses <time>, and based on the sample size there's probably at least a thousand times that many in the full collection that was sampled, so it's too expensive to remove it", then nobody will disagree)
20:51
<manu-db>
right
20:52
<Philip`>
(so the latter is easy and can produce usable results to help ensure compatibility in language design, but the former seems to end up frequently in endless discussions about the methodology)
20:55
<manu-db>
yes, that's true. However, having data from an 80TB crawl is better than not having it... especially if we can understand how randomized these sample sets are...
20:55
<manu-db>
I'm not saying that you won't have people saying that you sampled the wrong data set...
20:56
<manu-db>
but by having a pretty solid data set and methodology, you can convince the more rational people among us about a trend.
20:56
<manu-db>
where solid data set >= 80TB of data or 5 billion pages
20:56
<manu-db>
and methodology == the same question asked across dotbot, CommonCrawl, and 80legs.com gave roughly the same answer.
20:57
<manu-db>
(not saying that is easy to do... but it sounds better than what we're doing right now)
20:57
<manu-db>
(and it seems technically achievable for a very small investment of time and cash)
20:58
<Philip`>
Is the plan to update the data set regularly? (I'd imagine it's more useful to have one that's e.g. 10% of the size but updated every 3 months, so you can follow trends over time and detect usage of recent features)
20:58
<erlehmann>
manu-db, why not use html5lib?
20:58
<erlehmann>
for python?
20:58
<erlehmann>
i do not see what python dom stuff could do better. i feel dumb.
20:58
<Philip`>
(The dotbot data is kind of uselessly outdated now)
20:58
<manu-db>
Philip`: I think CommonCrawl updates their data twice a year... 80legs updates their data every 3 months.
20:59
<manu-db>
Philip`: It would be fun to see how much the dotbot data deviates from the frequently updated sample sets...
21:01
<manu-db>
erlehmann: Yes, you could use html5lib - except that some people say that it's slow (which translates into lots of $$$ on an Amazon Elastic Map Reduce Job on multiple terabytes of data)
21:02
Philip`
even has data saying it's slow :-)
21:03
<erlehmann>
oh
21:05
<jgraham>
It is slow
21:05
<jgraham>
This is not really an opinion :)
21:05
<devfil>
AryehGregor: hi, I'm trying to use your execCommand implementation but it looks like it doesn't work on firefox :/
21:06
<AryehGregor>
devfil, I'll need a lot more details than that to debug the issue. First, what URL are you looking at? editor.html is *not* meant to be actually usable in practice at this point.
21:06
<Philip`>
(...Or at least I did have data - it's somewhere in the IRC logs, I'm just not sure where)
21:07
<AryehGregor>
(I'm about to leave for a while, but I should be back in an hour or so, so just be patient -- or continue this discussion by e-mail)
21:07
<devfil>
AryehGregor: I'm using nicEdit but instead of calling window.execCommand I'm calling myExecCommand, it works in chrome
21:07
<AryehGregor>
devfil, it will work somewhat in recent Chrome and Firefox, fail in some cases even in them, and fail horribly in other browsers. It's really meant for testing, so I don't expect it to be reliable in other contexts.
21:08
<devfil>
AryehGregor: yes, I know
21:08
<devfil>
AryehGregor: I'm using firefox 7.0.1
21:08
<AryehGregor>
devfil, try Firefox 9.0a2 or later.
21:08
<AryehGregor>
Probably won't matter, though.
21:09
<AryehGregor>
Also try giving me a test case or describing the exact problem.
21:09
<AryehGregor>
AFK, BBL.
21:15
<kennyluck>
How different is recent Chrome from WebKit now? I wonder
21:16
<dglazkov>
kennyluck: what do you mean?
21:18
<kennyluck>
dglazkov, well AryehGregor mentioned execCommand works somewhat in recent *Chrome* and Firefox. This makes me wonder how far Chromium is from the WebKit trunk at the moment.
21:20
<dglazkov>
kennyluck: http://src.chromium.org/viewvc/chrome/trunk/src/DEPS tells you the current WebKit revision being used (see "webkit_trunk" variable value), and the first entry on http://trac.webkit.org/ will give you the latest WebKit revision.
21:20
<devfil>
AryehGregor: it only fails the first time
21:20
<dglazkov>
kennyluck: so, right now it's about 70 revisions.
21:20
<TabAtkins>
kennyluck: Chrome pulls from the trunk, though. We don't fork, though a given release branches based on a particular trunk revision.
21:21
<kennyluck>
AryehGregor, so when you say "it fails horribly in other browsers" I guess you don't include Safari running with the WebKit trunk?
21:22
<jgraham>
kennyluck: I expect AryehGregor can't run safari
21:24
<kennyluck>
TabAtkins, good to know. Thanks.
21:58
<karlcow>
http://twitter.com/0penweb
22:06
<AryehGregor>
kennyluck, Safari counts the same as outdated Chrome as far as I'm concerned. I'm talking about IE and Opera, and maybe mobile browsers.
22:06
<AryehGregor>
Also, I could run Safari for Windows on a VM if I cared enough.
22:07
<AryehGregor>
But it doesn't matter, Chrome works the same for my purposes.
22:22
AryehGregor
wonders why while (var foo = bar()) isn't allowed, but for (var foo = bar(); ...) is
22:23
<gsnedders>
AryehGregor: because it's a var statement, and for/for-in are a special case.
22:23
<AryehGregor>
Why have a special case instead of just allowing it in any similar place?
22:24
<smaug____>
AryehGregor: it is not a similar place
22:24
<gsnedders>
AryehGregor: Because its semantics are different in the for/for-in case.
22:25
<AryehGregor>
Hmm. I guess for (a; b; c) { d; } is logically the same as a; while (b) { d; c; }, so only the middle part is comparable.
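The idiomatic workaround, for reference (a sketch; bar() and d() stand for whatever the loop actually does):

    var foo;
    while ((foo = bar())) {  // assignment as an expression; extra parens signal intent
      d(foo);
    }
    // same shape as: for (var foo = bar(); foo; foo = bar()) { d(foo); }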
22:26
<AryehGregor>
Huh, var a; a = [1, 2] && a[0]; doesn't work. Apparently it evaluates the whole expression, including the assignment, before it actually assigns?
22:26
<AryehGregor>
That's counterintuitive.
22:26
<AryehGregor>
Not like C, at least.
22:27
<Philip`>
Surely being counterintuitive is like C
22:27
<zewt>
C is one of the most intuitive languages you'll find
22:27
<AryehGregor>
C is amazingly intuitive.
22:27
<AryehGregor>
It's really, really simple.
22:27
<zewt>
^5
22:27
<gsnedders>
AryehGregor: logical and is evaluated by GetValue(LHS) and then GetValue(RHS)
22:28
<gsnedders>
IIRC
22:28
<AryehGregor>
And GetValue() of an assignment is the RHS of the assignment, without actually doing the assignment, I guess?
22:28
<Philip`>
Doesn't seem particularly intuitive when there's trivial stuff like "a = a++" where it's impossible to know what it'll do
22:28
<zewt>
AryehGregor: that's parsed as a = ([1,2] && a[0]), not (a = [1,2]) && a[0]
22:29
<AryehGregor>
zewt, oh!
22:29
<Philip`>
Also not intuitive: aliasing
22:29
<AryehGregor>
So I could do (a = [1, 2]) && a[0], and that would work?
22:29
<gsnedders>
AryehGregor: Yes
22:29
<zewt>
if you felt the need, heh
22:29
<AryehGregor>
Okay, now that makes sense.
22:29
<AryehGregor>
Although maybe I'll just stick with being verbose and having some extra function calls.
22:30
<zewt>
Philip`: having used C for a decade and a half or so, I've never felt the need to write "a = a++" :)
22:30
<AryehGregor>
Okay, so my reproducible crash in Chromium, which is a regression, has not had anyone pay attention to it in more than a week despite the fact that I provided detailed reproduction instructions and a crash ID? Seriously? http://code.google.com/p/chromium/issues/detail?id=101791
22:31
<zewt>
it's just a fairly isolated (and rare, for that language) language ambiguity
22:31
<gsnedders>
zewt: Well, isn't that technically undefined?
22:31
AryehGregor
pokes dglazkov and TabAtkins
22:32
<zewt>
by spec I'm not sure, but not that I'd defend that in particular--undefined things are bad--it's just fairly isolated, in my experience
22:33
<gsnedders>
zewt: In ES (to take an example of where that's defined), it's a no-op if a is a Number
22:37
<TabAtkins>
AryehGregor: I don't have bug editing privileges on there, but I'll poke someone.
22:37
<AryehGregor>
TabAtkins, thanks.
22:58
<AryehGregor>
TabAtkins, can you reproduce the crash?
22:58
<TabAtkins>
AryehGregor: Yup, and I put my own crash id in the bug report.
22:58
<AryehGregor>
Thanks.
22:58
<TabAtkins>
AryehGregor: https://bugs.webkit.org/show_bug.cgi?id=71737
22:59
<AryehGregor>
"You are not authorized to access bug #71737.
22:59
<AryehGregor>
"
22:59
<TabAtkins>
Oh, sorry, it's marked as a security bug.
22:59
<AryehGregor>
You should be able to CC me.
22:59
<TabAtkins>
AryehGregor: Still ayg⊙an?
23:00
<AryehGregor>
Yes, should be.
23:00
<TabAtkins>
k, you're cc'd
23:00
<AryehGregor>
Thanks.