00:02
<Domenic_>
heycam|away: ping https://github.com/w3ctag/promises-guide/issues/15#issuecomment-37483172
00:54
<heycam>
Domenic_, will reply today
01:16
<SamB>
does anything actually use meta description anymore?
01:19
<TabAtkins>
Yeah, it's used by Google for the site description, if there are no better signals.
01:19
<SamB>
you mean the blurb in the results?
01:20
<SamB>
or actual search matching?
01:46
<gsnedders>
I'd presume it's used for search matching if it's used as the blurb
01:59
<TabAtkins>
The blurb.
01:59
<TabAtkins>
I think it does indeed contribute to search matching, as well, but I have no real insight into that.
07:44
<zcorpan>
mathiasbynens: hmm now i need to look into making that up to date again
08:20
<zcorpan>
annevk: https://critic.hoppipolla.co.uk/r/924
08:27
<zcorpan>
jgraham: do things get confused if a reftest includes testharness.js?
08:34
<zcorpan>
how do reftests work, when is the screenshot snapped? onload? xhr doesn't delay onload, does it? is there a way to delay the snapshot? (this is on web-platform-tests)
08:34
<zcorpan>
Ms2ger: jgraham: ^
09:33
<Ms2ger>
zcorpan, I know Gecko has something... class=reftest-wait on the root?
09:33
<Ms2ger>
Not sure if wpt has something
09:42
<jgraham>
zcorpan: Yes, they get confused if you include testharness.js
09:42
<zcorpan>
jgraham: ok
09:43
<jgraham>
At least in an obvious way
09:43
<jgraham>
If you really need it in some way you can include it at runtime
09:43
<jgraham>
We don't have a reftest-wait equivalent at the moment
09:44
<jgraham>
But we probably need one (unless we can convince all browsers to implement something like Presto had to determine when the event queue was empty)
09:45
<jgraham>
(which we can't)
09:46
<zcorpan>
yeah personally i prefer a class on the root
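For reference, Gecko's convention mentioned above works roughly like this (a sketch, not the wpt mechanism, which didn't exist at the time): the harness defers the snapshot while the root element carries `class="reftest-wait"`, and the test removes the class once its async work is done.

```html
<!DOCTYPE html>
<!-- Gecko-style reftest-wait sketch: the snapshot is delayed
     until the class is removed from the root element -->
<html class="reftest-wait">
<script>
var xhr = new XMLHttpRequest();
xhr.open("GET", "data.json");
xhr.onload = function() {
  // async work done; signal the harness to snapshot now
  document.documentElement.classList.remove("reftest-wait");
};
xhr.send();
</script>
</html>
```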
09:50
<zcorpan>
jgraham: "This pipe can also be enabled by using a filename *.sub.ext, e.g. the file above could be called xhr.sub.js." does that sound good to add to pipes.rst ?
09:50
<jgraham>
Well despite my skepticism the Presto solution worked rather well
09:50
<jgraham>
But it isn't at all portable
09:50
<jgraham>
zcorpan: Yeah
09:51
<zcorpan>
jgraham: ok then i'll commit it. i thought i was going to be given a "create PR" button but it was "commit changes" so...
09:52
<zcorpan>
maybe i should create a new github user that is not an owner of anything
09:52
<zcorpan>
or whine to github to give me two buttons
09:56
<zcorpan>
wonder where to file a bug about github
09:56
<annevk>
zcorpan: somewhere in https://github.com/github/ I suspect
10:01
<zcorpan>
annevk: thanks... so which repo? :-P
10:01
<annevk>
zcorpan: "somewhere" :-P
10:01
<MikeSmith>
annevk: I still got a couple open PRs for changes to URL tests
10:01
<MikeSmith>
one's pretty easy https://github.com/w3c/web-platform-tests/pull/768
10:01
<MikeSmith>
https://github.com/w3c/web-platform-tests/commit/3a6bf6d2023053ad42c6a02c80ea4b1313becfaa
10:02
<MikeSmith>
the other one is just a port of more webkit URL tests https://github.com/w3c/web-platform-tests/pull/771
10:02
<MikeSmith>
for host canonicalization
10:02
<MikeSmith>
by way of smola
10:04
<annevk>
MikeSmith: looks good, be aware that some of this may change
10:04
<annevk>
but I guess it's better to have tests to compare the changes against than nothing at all
10:05
<annevk>
MikeSmith: reviewed through critic
10:05
<MikeSmith>
thanks
10:06
<zcorpan>
annevk: i used the contact form instead
11:24
<mathiasbynens>
zcorpan: support⊙gc or contact form is The Right Way™
11:25
<zcorpan>
mathiasbynens: excellent
11:25
<zcorpan>
mathiasbynens: i got a gold star so maybe i did something right
11:26
<zcorpan>
i guess "FUUUUUUUUUUUUUUUUUUUUUUUUUU U SUCK" was less than 140 chars
11:28
<annevk>
We should offer such stars to Kyle Simpson on the mailing list
11:46
<jgraham>
But with "Less than 140 thousand words"?
12:32
<zcorpan>
are wpt PRs still mirrored on w3c-test.org somewhere?
12:34
<MikeSmith>
zcorpan: yeah
12:34
<jgraham>
Yeah, under submissions/
12:34
<zcorpan>
ah there, thx
13:34
<MikeSmith>
about exceptions, I see the label "Legacy code exception field value" used in the DOM spec, can I assume that I shouldn't use those in current code?
13:34
<MikeSmith>
I mean http://dom.spec.whatwg.org/#error-names-0
13:35
<MikeSmith>
so I should use SyntaxError instead of SYNTAX_ERR
13:35
<jgraham>
Yeah
13:35
<MikeSmith>
this is in the context of what I should use with assert_throws
13:35
<MikeSmith>
jgraham: OK
13:36
<MikeSmith>
is there somewhere this is more explicitly stated?
13:37
<MikeSmith>
I mean where it's stated that the name should be used rather than the "Legacy code exception field value"
13:38
<Ms2ger>
It may be in the th.js docs?
13:39
<MikeSmith>
ok
13:39
MikeSmith
reads
13:40
<MikeSmith>
the thrown exception must be a DOMException with the given name, e.g., "TimeoutError" (for compatibility with existing tests, a constant is also supported, e.g., "TIMEOUT_ERR")
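A minimal sketch of that name check (not the real testharness.js implementation, and the legacy-name table here is abbreviated): the preferred spelling is the DOMException name ("SyntaxError"), with the legacy constant ("SYNTAX_ERR") accepted only for compatibility.

```javascript
// Illustrative mapping from legacy constants to DOMException names
// (two entries from the DOM spec's error-names table, for the sketch):
const LEGACY_NAMES = { SYNTAX_ERR: "SyntaxError", TIMEOUT_ERR: "TimeoutError" };

function assertThrowsName(expected, fn) {
  const name = LEGACY_NAMES[expected] || expected; // normalize legacy constants
  try {
    fn();
  } catch (e) {
    if (e.name === name) return; // matched the expected DOMException name
    throw new Error(`expected ${name}, got ${e.name}`);
  }
  throw new Error(`expected ${name}, but nothing was thrown`);
}

// Both spellings match the same thrown exception:
const boom = () => { throw new DOMException("bad selector", "SyntaxError"); };
assertThrowsName("SyntaxError", boom);
assertThrowsName("SYNTAX_ERR", boom);
```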
16:15
<dglazkov>
good morning, Whatwg!
17:08
<Hixie>
annevk: is there a way to get tracker to show more lines on its home page?
17:08
<annevk>
Hixie: ?limit=
17:08
<Hixie>
thanks
17:09
<annevk>
Hixie: use -1 with caution ;-)
17:09
<Hixie>
oh i misunderstood what it did and set it to 7000 :-)
17:10
<Hixie>
in other news, anyone got a good example of a simple table i could add to the spec that demonstrates sortable=""?
17:11
<Hixie>
i need some data that has maybe six lines with multiple numeric columns that aren't all in order and aren't all the same
17:12
<annevk>
Hixie: http://annevankesteren.nl/2007/09/tmb-overview
17:12
<annevk>
although that's not a great example
17:57
<marcosc>
Domenic_: WebIDL says "When a specification says to perform some steps once a promise is settled, the following steps MUST be followed:"
17:57
<marcosc>
however, the promises guide doesn't speak of "settling a promise"
17:58
<Domenic_>
marcosc: yeah, WebIDL kind of swooped in and started incorporating some of the stuff I envisioned the promises guide doing
17:59
<Domenic_>
promises guide's "upon fulfillment"/"upon rejection" <-> WebIDL's "once a promise is settled"
17:59
<marcosc>
:(
17:59
<marcosc>
the WebIDL is really impenetrable with regards to Promises :(
18:00
<marcosc>
Domenic_: not blaming you, obviously
18:00
Ms2ger
blames marcosc
18:00
<marcosc>
I also often blame myself
18:01
<marcosc>
if only I had not been dropped so much as a baby...
18:01
<Ms2ger>
You'd blame yourself a lot more?
18:02
<marcosc>
probably... it's like in one of those TV shows where someone gets hit on the head and they suddenly become much smarter
18:02
<marcosc>
go on, Ms2ger, try it!
19:40
<Domenic_>
sicking: glad someone said what i was thinking about push apis
19:40
<sicking>
Domenic_: :)
19:41
<sicking>
Domenic_: actually, glad you pinged. I talked to our perf guys yesterday about IO perf
19:42
<sicking>
Domenic_: His first reaction was "Use node.js streams, those guys understood perf".
19:42
<Domenic_>
hah! :D
19:42
<Domenic_>
that is basically what we are doing
19:42
<sicking>
Domenic_: though he was actually a bit sceptical of the whole idea of using streams to do disk IO
19:42
<Domenic_>
interestingly i think node streams are just starting to deal with the out-of-main-thread idea
19:43
<Domenic_>
or at least not-pass-through-the-C++/JS-barrier, which might be equivalent
19:43
<Domenic_>
hmm i wonder why
19:44
<sicking>
his argument was that often, when people stream rather than read the whole file and then process it, the "read small chunks at a time" approach adds so much overhead that it's a net perf loss
19:44
<sicking>
i.e. he was saying that if you are reading a file that's in the order of < 50MB, then reading that in 1K chunks adds so much overhead to the individual read calls that the read calls turn into a bottleneck
19:45
<Domenic_>
hmm. my impression from node people is that the threshold is much less than 50 MB.
19:45
<sicking>
this does seem to vary from OS to OS. OSX is particularly bad apparently
19:46
<Domenic_>
there's also issues like how long it takes to parse 1 MB of JSON vs. incremental work on 100 1K chunks of JSON
19:46
<sicking>
(though i would have thought that windows was worse, but we didn't spend too much time on it)
19:46
<sicking>
right
19:46
<sicking>
though he was also saying that if you read in 32-64kB chunks this is less of an issue
19:47
<Domenic_>
well, yeah, optimal chunk size is up to the implementation
19:47
<sicking>
so it's possible that simply increasing the default chunk (is that the right word?) size would help a bunch
19:47
<Domenic_>
so it's up to each individual stream to decide how big the chunks they want to vend are
19:48
<Domenic_>
so if browser/firefox OS streams want to vend 32 kB chunks, that is totally cool
19:48
<sicking>
but if your file was large enough, then switching to something like a 2MB chunk size was needed for good perf
19:48
<Domenic_>
that works too
19:48
<Domenic_>
it's not part of the streams spec; the streams spec just says "here is how you queue up data; and here is how people can get data out of that queue"
19:48
<Domenic_>
but how big the data chunks are that you queue up is up to the individual stream
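The point Domenic_ is making can be sketched like so (a toy pull-based stream, not the streams spec API): the stream implementation picks its own chunk size when it enqueues data, and consumers just take whatever sized chunks come out.

```javascript
// Toy stream: the producer alone decides the chunk size (hypothetical
// helper; `read()` returns the next chunk, or null at end of data).
function makeFileLikeStream(data, chunkSize) {
  let pos = 0;
  return {
    read() {
      if (pos >= data.length) return null;
      const chunk = data.slice(pos, pos + chunkSize);
      pos += chunk.length;
      return chunk;
    },
  };
}

// A 32K-chunk stream and a 1K-chunk stream expose identical data,
// just cut differently -- which is why consumers shouldn't depend
// on chunk boundaries.
const data = "x".repeat(100 * 1024);
const big = makeFileLikeStream(data, 32 * 1024);
const small = makeFileLikeStream(data, 1024);
```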
19:49
<sicking>
yeah, so maybe we can make the current API work
19:49
<Domenic_>
it'd make perfect sense for FS streams to be specified, either loosely to let implementations choose, or with some kind of algorithm like you're describing
19:49
<sicking>
the other thing he was saying was that reusing chunks, rather than always allocating new ones, also makes a big difference perf-wise
19:50
<Domenic_>
hmm what does that mean...
19:50
<Domenic_>
(api-wise)
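One way "reusing chunks" could look, purely as a guess (sicking's actual proposals aren't in this log): a small pool that hands back previously released buffers instead of allocating fresh ones each time.

```javascript
// Hypothetical buffer pool sketch: acquire() reuses a released
// buffer when one is available, avoiding a fresh allocation.
class BufferPool {
  constructor(size) {
    this.size = size;
    this.free = [];
  }
  acquire() {
    return this.free.pop() || Buffer.alloc(this.size);
  }
  release(buf) {
    this.free.push(buf); // caller must not touch buf after release
  }
}

const pool = new BufferPool(64 * 1024);
const a = pool.acquire();
pool.release(a);
const b = pool.acquire(); // same underlying buffer, no new allocation
```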
19:50
<sicking>
arg, crap, gotta get food before next meeting
19:51
<sicking>
i have two proposals for how that can be done, but i'll have to get back to you
19:51
<Domenic_>
sweet
19:51
<sicking>
i'm free after 4pm
19:51
<sicking>
pacific
19:51
<sicking>
what timezone are you in btw?
19:51
<Domenic_>
eastern. but tonight i gotta speak at a meetup
21:29
<TabAtkins>
annevk-cloud: Can you do something about deprecating Selectors API 2, now that it's all been swallowed into DOM?
21:52
<jgraham>
So it totally seems possible that authors will rely on the size of chunks that they get back from streams
21:52
<jgraham>
(assuming it's detectable by the author, which I guess it must be)
21:53
<jgraham>
So if it's implementation defined that seems like a problem
21:53
<jgraham>
Domenic_: ^
21:54
<Domenic_>
jgraham: yeah, I agree it's a concern.
21:54
<Domenic_>
but then again, do authors depend on how much they get back from "streaming" XHR?
21:56
<jgraham>
I'm not sure. Is that as "reliable" as this (in the sense that a single implementation is always likely to return data in the same sized blocks)?
21:57
<Domenic_>
i'm not sure either.
21:57
<Domenic_>
although it sounds like it could vary here, even per-platform perhaps
21:57
<Domenic_>
e.g. macs vs. pcs
21:57
<Domenic_>
or big files vs. small files
22:00
<jgraham>
I'm not sure that makes it better. If it was *always* different people probably wouldn't rely on it. But if people expected 1kB from testing on WebKit/Mac they might well get unexpected breakage if IE/Windows returned 1MB. Or if they got 1kB consistently for small test data and suddenly got 64kB for larger files in production.
22:32
<zewt>
jgraham: i seem to recall pointing out at some point that if we have a stream API, it should be based on requesting blocks of data with a given size and getting a callback when that amount is available, to ensure that block sizes of the implementation are never exposed
22:34
<zewt>
can't recall where (not that it matters)
22:36
<zewt>
seems like an obvious interop requirement though
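A sketch of zewt's idea (hypothetical API, not anything specced): the caller asks for an exact number of bytes, and an internal buffer hides whatever block sizes the underlying implementation actually produces.

```javascript
// Hypothetical exact-size reader: `chunks` stands in for the
// implementation's internal, arbitrarily-sized blocks.
function makeExactReader(chunks) {
  let buffer = "";
  let i = 0;
  return function readExact(n) {
    while (buffer.length < n && i < chunks.length) {
      buffer += chunks[i++]; // pull implementation-sized blocks as needed
    }
    if (buffer.length < n) return null; // not enough data (EOF in this sketch)
    const out = buffer.slice(0, n);
    buffer = buffer.slice(n);
    return out;
  };
}

// Two sources with different internal chunking give identical reads,
// so the implementation's block sizes are never observable:
const r1 = makeExactReader(["abc", "defgh", "ij"]);
const r2 = makeExactReader(["abcde", "fghij"]);
```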
22:40
<zewt>
might have been talking about streaming out (eg. streaming JS to the browser), which is also a problem (exposing the number of bytes the implementation requests at a time, etc. could cause interop issues), but a different one