00:25 | <wanderview> | Domenic: ok... I have data showing that multiple read() calls, each returning a promise, are much slower than reading buffered chunks synchronously |
00:28 | <wanderview> | Domenic: talking 30x slower and up... that's with bluebird promises (SM promises are worse, as expected) |
00:30 | <wanderview> | Domenic: open this and look in your console: https://blog.wanderview.com/streams-promise-read/bluebird.html |
00:31 | <wanderview> | code is here: https://github.com/wanderview/streams-promise-read |
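For context, the comparison being discussed is roughly the following sketch (made-up names such as drainSync and drainAsync; this is not the code from the linked repo): draining already-buffered chunks in a plain loop versus paying one promise-settled read() per chunk.

```js
// Sync case: chunks are already buffered, so the consumer drains them in a
// plain loop with no microtask turns.
function drainSync(queue) {
  let total = 0;
  while (queue.length > 0) {
    total += queue.shift().byteLength;
  }
  return total;
}

// Promise case: each chunk costs one read() call that settles a promise, so
// consuming N already-buffered chunks takes N microtask turns.
async function drainAsync(reader) {
  let total = 0;
  for (;;) {
    const { value, done } = await reader.read();
    if (done) return total;
    total += value.byteLength;
  }
}
```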
00:37 | <Domenic> | wanderview: needs I/O to be a real test |
00:37 | <Domenic> | All you are testing there is promises vs. loops |
00:38 | <Domenic> | Not the impact of promises on streams |
00:42 | <zewt> | i wonder what middle-managery person at mozilla thought having a "start a conversation" button in the toolbar was a cool idea |
00:53 | <wanderview> | Domenic: it's the case I described before... the I/O was done previously and written to a pipe... now a consumer is reading from the pipe to process it (perhaps all in memory) |
00:53 | <wanderview> | Domenic: this is the case you said "needs citation" above |
00:55 | <wanderview> | Domenic: from here: http://logs.glob.uno/?c=freenode%23whatwg&s=8+Apr+2015&e=8+Apr+2015&h=citation#c944489 |
01:11 | <Domenic> | wanderview: I must have misinterpreted. That's not streaming at all... Just buffering. |
01:12 | <wanderview> | Domenic: so buffering is not supported in this stream model? |
01:13 | <Domenic> | wanderview: a 1000-chunk high water mark is unrealistic |
01:13 | <Domenic> | of course it is, but you shouldn't use a stream when you're just buffering all data in memory anyway |
01:13 | <wanderview> | Domenic: I'm not saying 1000 is realistic... but 10 chunks is a realistic buffer |
01:14 | <Domenic> | A pipe should terminate in I/O on one side or the other |
01:14 | <wanderview> | Domenic: I thought you were gone for the evening so I wrote my thoughts here: https://github.com/whatwg/streams/issues/320#issuecomment-91083647 |
01:15 | <Domenic> | If it takes x time to read from the pipe with promises, 0.04x time with batch, and 1000x time to put data in the pipe in the first place, I'm not too concerned. |
01:15 | <Domenic> | I kind of am gone, should probably turn off notifications :p |
01:16 | <wanderview> | Domenic: I feel like we've pessimized a common case in order to allow an optimization in an obscure case later |
01:16 | <Domenic> | I do not think chunks being synchronously generated in a batch is common. |
01:17 | <Domenic> | Chunks come from somewhere, ultimately, perhaps after several transforms, but ultimately from I/O. This example does not show that. |
01:19 | <Domenic> | Just read your comment... 6 ms is a lot... Absolute numbers help. |
01:21 | <Domenic> | Except... 625 microseconds is actually 0.6 ms |
01:21 | <Domenic> | Oh, it's per chunk |
01:22 | <wanderview> | Domenic: yea, sorry... I was trying to make the number easier to compare... so I normalized per chunk |
01:22 | <Domenic> | Although I wonder if the code is just not hot enough for the optimizer to kick in for 10 chunks |
01:23 | <Domenic> | Maybe do 10 chunks in a loop or something |
01:23 | <wanderview> | Domenic: well, the higher number loops suggest the sync loop optimizes much better than the promise loop can be optimized... not surprising |
01:23 | <Domenic> | Or ten chunks every requestAnimationFrame, since eating the frame budget is the real concern |
01:23 | <TabAtkins> | Domenic: Yo, sorry for the digression, but random help here: it's a bad idea for an attribute to sometimes be updated sync and sometimes async, according to unknowable impl-specific criteria, right? |
01:24 | <Domenic> | I'm not concerned about the relative numbers (see my "x" comment above), but about eating 6 ms of frame budget |
01:24 | <Domenic> | TabAtkins: sounds bad, although I could imagine cases that fit that description which are probably ok? |
01:25 | <wanderview> | Domenic: if we expect this only to happen for modest buffer sizes... I don't see how we can expect the loop to get super hot |
01:25 | <TabAtkins> | Well, the case is whether a FontFace.status is set to "unloaded" or "loading". Currently it's always async, but jdaggett/heycam want it to be set syncly when possible (font is a data url, a blob url, a cached font, etc) |
01:26 | <wanderview> | Domenic: anyway, I have to go crash and sleep for 12 hours... talk to you tomorrow! |
01:27 | <Domenic> | Cached sounds skeevy.... Others sound somewhat reasonable |
01:27 | <Domenic> | wanderview: ok cool, I'll probably fork your thing and experiment |
01:27 | <wanderview> | Domenic: please do... just don't judge me by my javascript :-) |
01:29 | <wanderview> | Domenic: btw... looking at the results, I think the jit kicked in at 100 chunks... there was an across-the-board improvement there... the sync loop got another boost from some optimization going from 1000 chunks to 10,000 chunks... but the promise loop did not |
01:29 | <wanderview> | in spidermonkey of course... don't know what chrome does |
01:30 | <Domenic> | TabAtkins: I think some of the normal zalgo hazards don't apply here because I can't see a way for code to be written that assumes always sync or always async |
01:31 | <Domenic> | wanderview: yeah, we need to make sure the code is hot before benchmarking. Kinda pointless to measure non-hot code since it doesn't need to be fast. |
01:31 | <TabAtkins> | Cached I definitely see - easy for a dev to accidentally work with cached fonts, and write broken code for users. |
01:32 | <TabAtkins> | And I can see some browsers considering some types of urls as sync, while others don't. |
01:32 | <Domenic> | TabAtkins: but what kind of code would run into this? I would think conditionals on ff.status would work in either case. |
01:35 | <TabAtkins> | Man, I dunno. It just feels super icky to have a line in a spec that says "If you want you can do this part sync, lol i dunno" |
01:36 | <Domenic> | Yeah, it would have to be normative which cases are sync |
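Domenic's point about conditionals can be illustrated with a small sketch (the font URL and the useFont helper are hypothetical): code that checks FontFace.status but falls back to the loaded promise behaves the same whether the attribute was updated synchronously or asynchronously.

```js
// Sketch of consumer code that works either way (useFont and the font URL
// are hypothetical names, not from the spec).
const face = new FontFace("MyFont", "url(my-font.woff2)");
face.load();
if (face.status === "loaded") {
  useFont(face);                            // status was already updated
} else {
  face.loaded.then(() => useFont(face));    // wait for the async update
}
```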
01:50 | <wanderview> | Domenic: yea, you are right... let me add a call to the tests to prime the jit |
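A minimal sketch of what "priming the jit" could look like in a benchmark like this (an assumed approach, not necessarily the change wanderview made): run the measured path untimed a number of times so the timed runs see hot code.

```js
// Warm the code path before timing so the benchmark measures hot code.
function benchmark(fn, iterations) {
  for (let i = 0; i < 1000; i++) fn();            // warm-up runs, not measured
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  return (performance.now() - start) / iterations; // mean ms per iteration
}
```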
02:01 | <wanderview> | Domenic: I think I agree the read() promise is fast enough for browsers... but also agree with trevnorris that it's probably inadequate for what node.js needs... not sure we can get something that works perfectly for both |
09:49 | <zcorpan> | ok any opinions on naming? document.scrollingElement? document.viewportElement? https://lists.w3.org/Archives/Public/www-style/2015Apr/0108.html |
09:55 | <paul_irish> | document.viewportElement sgtm |
09:56 | <zcorpan> | thanks paul_irish |
11:12 | <roc> | scrollingElement |
11:12 | <roc> | viewportElement isn't correct |
11:17 | <zcorpan> | roc: it will be correct when webkit/blink have fixed scrollTop, no? |
11:20 | <zcorpan> | i guess background/overflow have different rules, but i think things would work as intended if this API is used for those also |
13:02 | <zcorpan> | mathiasbynens: wanna polyfill http://dev.w3.org/csswg/cssom-view/#dom-document-scrollingelement ? |
13:04 | <mathiasbynens> | zcorpan: sounds like fun! will do |
13:05 | <zcorpan> | :-) |
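For reference, a very rough sketch of what such a polyfill might check (this is not mathiasbynens' actual implementation, and it assumes compliant scrollTop behaviour): in quirks mode the scrolling element is the body, otherwise the root element.

```js
// Rough document.scrollingElement polyfill sketch (assumes compliant
// scrollTop; the real polyfill has to detect buggy engines).
if (!('scrollingElement' in document)) {
  Object.defineProperty(document, 'scrollingElement', {
    get() {
      return document.compatMode === 'BackCompat'
        ? document.body
        : document.documentElement;
    }
  });
}
```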
15:55 | <calvaris> | Domenic: I am writing more tests |
15:55 | <calvaris> | and it seems that I can construct it if I pass undefined to the constructor |
15:55 | <calvaris> | which should be equivalent to passing no arguments, right? |
15:58 | <Ms2ger> | For optional arguments, yes |
15:58 | <Ms2ger> | For required arguments, no |
16:08 | <mcnesium> | my new wordpress does have a valid rss feed but safari on iOS and OSX do not show the "reader" sign. any idea what might be wrong? http://mcnesium.com |
16:37 | <calvaris> | in this case, the object can be constructed with no arguments, so we can assume the argument is optional |
16:37 | <calvaris> | can't we? |
16:38 | <wanderview> | Domenic: do you know why the promise cases in the benchmark all take longer to settle than the sync cases? |
16:38 | <wanderview> | Domenic: is that just variability from runnables in the event queue? |
16:38 | <calvaris> | Ms2ger: ? |
16:39 | <calvaris> | I guess it is not |
16:40 | <calvaris> | because with myFunction(myArgument = {}) {}, calling myFunction() is different from calling myFunction(undefined) |
16:44 | <calvaris> | no, it's the same thing |
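The behaviour calvaris lands on can be checked directly: with an ES2015 default parameter, an explicit undefined triggers the default exactly like omitting the argument, and only undefined does.

```js
// Default parameters: undefined and "no argument" behave the same.
function myFunction(myArgument = {}) {
  return myArgument;
}
myFunction();          // -> {}   (default used)
myFunction(undefined); // -> {}   (default used here too)
myFunction(null);      // -> null (only undefined triggers the default)
```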
17:45 | <wanderview> | Domenic: can you explain again why we need async read() always on getReader()... instead of making getReader().read() sync and getByobReader.read() async? |
17:46 | <wanderview> | sorry... I know you've explained before |
17:54 | <caitp-> | dunno his reasons, but if it's sometimes sync and sometimes async, that's not great for usability |
18:04 | <wanderview> | caitp-: no... it's only async if you explicitly opt in to the "bring my own buffer" optimization... and then it's always async |
18:04 | <caitp-> | yes, and then if you pass that reader to something that doesn't know you brought your own or not, it may or may not be async |
18:18 | <wanderview> | caitp-: that reader has its own type... they already aren't compatible with each other |
18:39 | <caitp-> | if they can be used the same way, you can probably expect that they would be |
18:49 | <wanderview> | caitp-: agree in principle, but not sure it's worth baking a performance penalty into getReader() just to satisfy aesthetic similarity to getByobReader() |
18:50 | <caitp-> | i'm sure domenic's reasons are better ones |
18:50 | <caitp-> | or at least, he's probably spending more time thinking about it =p |
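caitp-'s concern can be sketched like this (the API shapes are hypothetical, for illustration only): if one reader type returned chunks synchronously from read() and the other returned promises, generic code handed "a reader" would have to branch on the return value.

```js
// Hypothetical illustration only: a generic consumer coping with a read()
// that is sync on one reader type and async on the other.
function consume(reader, onChunk) {
  for (;;) {
    const result = reader.read();
    if (result && typeof result.then === 'function') {
      // async-style reader: result is a promise for { value, done }
      return result.then(({ value, done }) => {
        if (done) return;
        onChunk(value);
        return consume(reader, onChunk);
      });
    }
    // hypothetical sync-style reader: result is { value, done } directly
    const { value, done } = result;
    if (done) return Promise.resolve();
    onChunk(value);
  }
}
```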
21:36 | <wanderview> | hmm... does chrome have something like URLSearchParams? |
21:39 | <caitp> | doesn't look like it |
21:39 | <caitp> | well, "something like", probably just a JSObject |
21:41 | <caitp> | not even a JSObject |
21:41 | <caitp> | not exposed on the interface, and no TODO. should file :o |
21:47 | <caitp> | i guess it is filed |
21:51 | <wanderview> | yea... I guess that part of the URL spec just needs to be implemented |
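For reference, this is roughly what that part of the URL spec provides (Firefox had shipped URLSearchParams at the time; per caitp it was not yet exposed in Chrome):

```js
// URLSearchParams as specified in the URL Standard.
const params = new URLSearchParams('a=1&b=2');
params.get('a');         // "1"
params.append('c', '3');
params.toString();       // "a=1&b=2&c=3"

// It is also the type of url.searchParams on URL objects.
const url = new URL('https://example.com/?x=10');
url.searchParams.has('x'); // true
```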
21:52 | <jgraham> | Is there some way to tell if the origin of a page changed at some point (i.e. that document.domain was set)? |
21:58 | <caitp> | there's no event triggered in the setter algorithm, or anything |
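Since there is no event or flag for this, the closest workaround is a sketch like the following, which only works if your script takes a snapshot before anything sets document.domain:

```js
// Workaround sketch: snapshot document.domain early and compare later.
// Note: setting document.domain to its current string value also affects the
// origin, and that case would not be detected by this comparison.
const initialDomain = document.domain;
function domainWasChanged() {
  return document.domain !== initialDomain;
}
```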