00:19
<aho>
are websockets now part of html5?
00:19
<aho>
http://dev.w3.org/html5/websockets/ <- this url seems to suggest that
00:19
<Hixie>
define "html5"
00:19
<aho>
http://www.whatwg.org/C <- that :P
00:19
<Hixie>
that's not "HTML5", that's Web Applications 1.0
00:20
<Hixie>
the Web Sockets API is part of Web Applications 1.0, the Web Sockets Protocol is not.
00:20
<aho>
well, it was html5 prior to that whole "dropping the version number thing", right?
00:20
<Hixie>
http://whatwg.org/html was labeled "html5" for a while
00:20
<Hixie>
but that's a subset of http://whatwg.org/C
00:21
<Hixie>
(the C stands for Complete)
00:21
<aho>
ah... ok
00:21
<Hixie>
the FAQ tries to explain it better if you're still confused :-)
00:24
<aho>
and the /protocol/ is part of... some ietf thing?
00:26
wilhelm
approves of the kitchen sink illustration.
00:26
<aho>
http://www.brucelawson.co.uk/2010/meet-newt-new-exciting-web-technologies/
00:26
<aho>
lets go with that
00:26
<aho>
css3? NEWT. webworkers? NEWT. canvas? NEWT. :>
00:30
<potatis_invalido>
Pronounced, it'd sound like the Swedish word for pleasure.
00:30
<aho>
sounds about right
00:30
<aho>
:>
00:31
<potatis_invalido>
Haha yeah
00:32
<potatis_invalido>
What fun projects are you people working on? (With emphasis on the word fun)
00:32
<aho>
http://mbtic.com/ddd
00:33
<aho>
old site, old markup, ignore that :>
00:33
<potatis_invalido>
It's like the ice skating in Pokémon
00:33
<aho>
never played any pokemon game
00:33
<potatis_invalido>
but way cooler
00:33
<aho>
:)
00:34
<potatis_invalido>
I'm working on a Hammer Editor clone
00:34
<potatis_invalido>
for the web
00:34
<potatis_invalido>
Hammer Editor is the map editor for Half-Life and Source games
00:34
<potatis_invalido>
3D FPS games in other words
00:34
<aho>
ah
00:34
<aho>
i only used radiant (quake3's)
00:35
<aho>
and some really old one for quake1
00:35
<aho>
no idea what that one was called
00:35
<potatis_invalido>
I've tried radiant. Found it confusing
00:35
<aho>
it is
00:35
<aho>
got some curve to it
00:35
<potatis_invalido>
quark?
00:35
<aho>
probably :)
00:35
<aho>
it's been a while
00:36
<Hixie>
aho: web socket protocol is being done by the ietf hybi group
00:36
<potatis_invalido>
Did you finish any maps?
00:36
<aho>
i made some midair maps
00:36
<Yuhong>
"I wonder what Arjun Ray is doing these days. one could spend a weekend reading his ciwah posts from the late 1990s "
00:36
<Yuhong>
People have forgotten how bad the Netscape monopoly was.
00:37
<aho>
but that mod was experimental and only a dozen people ever played it :>
00:37
<Yuhong>
Back in 1995 or so.
00:37
<potatis_invalido>
I've been mapping for Half-Life for ~5 years now. Still haven't been able to produce anything worth looking at :P
00:37
<Yuhong>
It basically killed HTML 3.0, which existed as HTML+ before Netscape even existed.
00:37
<potatis_invalido>
too short attention span
00:38
<aho>
midair = typically just one room... maybe void... maybe lava... that's about it
00:38
<potatis_invalido>
No platforms or anything?
00:38
Philip`
vaguely remembers that he gave up on making Quake maps before any GUI editor was released, when you had to write the whole level's brush positions in a text file by hand (and then spend two hours computing the lighting maps)
00:38
<Yuhong>
It delayed adoption of style sheets for years, while they invented their own CENTER and FONT tags.
00:38
<aho>
RL only, crazy vertical knockback, the higher the enemy is in the air, the more damage they receive
00:39
<Yuhong>
(First draft of CSS dates around the time of Netscape 0.9)
00:39
<aho>
it's some qw game type
00:39
<potatis_invalido>
I made a room in the .map format for Half-Life once (it's the same format as for Quake)
00:40
<potatis_invalido>
I can't imagine what it'd be like making an entire map like that
00:40
<Yuhong>
Reading Arjun Ray's posts will make all that clear.
00:40
<potatis_invalido>
Are the DaDaDash levels randomly generated?
00:40
<aho>
ye
00:41
<aho>
there will be real ones in the future though
00:41
<potatis_invalido>
Ok, that explains why it's so easy :P
00:41
<aho>
and two extra twists
00:41
<potatis_invalido>
not getting any harder, I should have said
00:41
<aho>
http://i.imgur.com/EKhMK.png <- it sometimes creates something interesting though
00:41
<aho>
<:
00:42
<potatis_invalido>
Indeed
00:42
<erlehmann>
try building concave-convex shapes
00:43
<aho>
~4.5kb js.gz and 59.6kb for the resource blob. it's fairly compact :)
00:43
<potatis_invalido>
Resource blob? Are all images in one file?
00:43
<potatis_invalido>
and sounds*
00:44
<aho>
everything
00:44
<aho>
it's just those two files
00:44
<aho>
check the net panel ;)
00:45
<potatis_invalido>
How did you pull that off? Do you generate data: URIs to load the sounds and images?
00:45
<aho>
http://mbtic.com/games/dadadash/dadadash-ogg.ibz
00:45
<aho>
it's like mxhr, but with an index
00:45
<aho>
name,type,length... and so forth... then a ';' followed by the b64 data uris
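A minimal sketch of a loader for that kind of index-plus-payload blob, assuming a '|' between index entries and that the length field counts the characters of each data URI in the payload (the separator, field order, and function name are guesses, not the real ibz format):

    // Split the blob into its index and its concatenated base64 data URIs,
    // then hand each slice straight to an Image or Audio object.
    function parseResourceBlob(text) {
      var sep = text.indexOf(';');                  // index ends at the first ';'
      var entries = text.slice(0, sep).split('|');  // assumed entry separator
      var payload = text.slice(sep + 1);
      var resources = {}, offset = 0;
      for (var i = 0; i < entries.length; i++) {
        var fields = entries[i].split(',');         // name, type, length, ...
        var name = fields[0], type = fields[1], length = +fields[2];
        var dataUri = payload.substr(offset, length);
        offset += length;
        if (type.indexOf('image') === 0) {
          var img = new Image();
          img.src = dataUri;
          resources[name] = img;
        } else {
          resources[name] = new Audio(dataUri);
        }
      }
      return resources;
    }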
00:46
<potatis_invalido>
Interesting
00:46
<aho>
with gzip it's about the same size as a zip
00:46
<potatis_invalido>
I might consider doing something similar for my app
00:47
<aho>
we also tried some other stuff... like b64-ing on the client side, but that turned out to be too slow
00:47
<aho>
we also tried tar.gz :)
00:47
<potatis_invalido>
Do you need to b64 it though?
00:47
<aho>
way. too. slow.
00:47
<aho>
ye
00:48
<aho>
there is no other way to hand those bytes over to image or audio
00:48
<aho>
so, i do need b64 at some point
00:48
<potatis_invalido>
Is it impossible to use data: with binary data?
00:48
<zewt>
there's still no api for generating blobs manually, is there?
00:48
<zewt>
which is really what's wanted for this sort of thing
00:49
<zewt>
well ... you could XHR the whole file to get a File, and slice the individual files, but XHR isn't too great for resource loading
00:49
<aho>
i'm using xhr for this
00:50
<aho>
there isn't any other option, is there?
00:50
<zewt>
iirc some browsers simply never cache xhr :(
00:50
<zewt>
(opera?)
00:51
<zewt>
if you use xhr (xhr2, rather), and if the individual files aren't compressed (use HTTP compression instead), you could slice the individual Files from the XHR result, then use object URLs to load them as resources--so you never have to manipulate the data in JS directly
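A minimal sketch of the approach zewt describes, assuming an XHR2 response read as a Blob and a hypothetical byte-offset index ({start, end, type} per resource); Blob.slice() semantics and prefixes were still in flux in some browsers at this point:

    // Fetch the whole pack once, slice per-resource Blobs out of it, and hand
    // them to <img>/<audio> via object URLs -- no base64 step in JS.
    function loadPack(url, index, onready) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url, true);
      xhr.responseType = 'blob';                 // XHR2
      xhr.onload = function () {
        var pack = xhr.response;                 // the whole file as a Blob
        var urls = {};
        for (var name in index) {
          var entry = index[name];               // assumed {start, end, type}
          var piece = pack.slice(entry.start, entry.end, entry.type);
          urls[name] = URL.createObjectURL(piece);
        }
        onready(urls);                           // e.g. img.src = urls['tiles.png']
      };
      xhr.send();
    }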
00:51
<zewt>
(haven't looked at how you're doing it)
00:52
<aho>
object urls?
00:52
<zewt>
URL.createObjectURL(file/blob) -> URL
00:53
<aho>
does that work with ff/opera/chrome/ie9?
00:54
<zewt>
ff4/chrome 9+ supports it, iirc; don't know about ie9, don't think opera does
00:54
<zewt>
also the API has been in minor flux so the name has changed (easy to work around, just fyi)
00:55
<potatis_invalido>
I can confirm that there's no URL object in Opera 11.10 Beta
00:56
<zewt>
webkitURL.createObjectURL in chrome 10
00:56
<zewt>
(and revokeObjectURL)
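For what it's worth, that naming flux can be papered over with a tiny shim (blob here being whatever Blob you sliced out of the pack):

    // Pick whichever name the browser currently exposes.
    var urlApi = window.URL || window.webkitURL;
    var blobUrl = urlApi.createObjectURL(blob);
    // ...and release it once the resource is no longer needed:
    urlApi.revokeObjectURL(blobUrl);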
01:00
<zewt>
if you want to see why I hate data: URLs, try context menu->view image on a large canvas in FF :P
01:01
<zewt>
(which I *wouldn't* call a QoI issue: code that deals with URLs should not have to be engineered to deal with multiple-megabyte URLs)
01:02
<aho>
hmhm... interesting
01:03
<aho>
i only knew that something like that was in the works, but i didn't know what it was called nor how it was supposed to work :>
01:04
<potatis_invalido>
Does DaDaDash only generate games that can be solved?
01:04
<aho>
yes
01:04
<aho>
(the level is created in reverse)
01:04
<potatis_invalido>
Maybe that's what I need to do
01:04
<potatis_invalido>
Think in reverse
01:05
<aho>
ye, that's the idea... it's 100% pure backtracking :>
01:05
<potatis_invalido>
That did it.
01:05
<aho>
:)
01:06
<potatis_invalido>
It's quite entertaining for such a simple game
01:06
<zewt>
puzzle games just make me want to write programs to solve them for me
01:06
<potatis_invalido>
Haha
01:07
<potatis_invalido>
The thought does cross my mind from time to time
01:07
<potatis_invalido>
I'm a FreeCell addict.
01:07
<aho>
if you got java installed you can also try another overly pure game of mine: http://kaioa.com/jws/jnlp_na/fuzetsu.jnlp
01:07
<aho>
this one is 100% bullet scraping
01:07
<aho>
:>
01:08
<aho>
it's a 4k game, by the way (i.e. the whole thing is <= 4096 bytes)
01:08
<potatis_invalido>
How do you play? (I didn't miss a help screen, did I?)
01:09
<aho>
get close to the bullets, but dont touch them with the white dot in the middle
01:09
<aho>
that's it :>
01:09
<potatis_invalido>
oh, ok
01:09
<aho>
only risk/reward :)
01:10
<potatis_invalido>
HA! HA!
01:10
<aho>
evil, eh? ;D
01:11
<potatis_invalido>
Indeed
01:11
<aho>
there are 22 or 23 levels
01:13
<aho>
http://kaioa.com/k/double_winder.png <- the level editor :)
01:15
<potatis_invalido>
Wasn't it more work creating a level editor than it'd be doing it manually? (This is coming from someone who has almost no experience with Java)
01:16
<aho>
there are 8 parameters per emitter and there can be up to 3 emitters (mid, left, right)
01:16
<aho>
finding interesting values is a lot of trial and error
01:16
<aho>
the editor took only a few hours
01:16
<potatis_invalido>
Heh, no way I'd survive 3 of them
01:17
<aho>
http://kaioa.com/k/fuzetsu4.png
01:17
<aho>
http://kaioa.com/k/fuzetsu5.png
01:17
<aho>
:)
01:18
<aho>
creating an editor is usually worth it
01:19
<aho>
in this case it easily saved me from /days/ of change, compile, start, test :>
01:20
<potatis_invalido>
Right. I forgot that Java is compiled.
01:20
<aho>
http://www.youtube.com/watch?v=eJfO5Z2deKc <- integrated level editor :)
01:21
<potatis_invalido>
I love games where it's simple to make maps
01:21
<potatis_invalido>
like Worms, Cell Block or Super Mario War
01:21
<potatis_invalido>
or N
01:22
<aho>
imo it's important to have a very fast turnaround
01:23
<aho>
doom3 was interesting in that regard
01:23
<potatis_invalido>
If you're a Half-Life player you might have heard of Entmod. It's a server-side plugin which allows players to modify and copy world objects. You can build houses, trains and stuff.
01:23
<aho>
you could just go back and forth between the editor and the game
01:23
<aho>
and you also got the real lighting right off the bat :)
01:23
<potatis_invalido>
Sounds cool. I've been meaning to try Doom 3.
01:24
<potatis_invalido>
I love the old games.
01:25
<potatis_invalido>
I actually played an Ultimate Doom co-op game with a couple of friends a few hours ago
01:25
<aho>
doom3 is kinda meh imo .)
01:25
<aho>
but the engine was pretty interesting back then
01:25
<aho>
well, you can grab the game for 5 bucks nowadays
01:26
<potatis_invalido>
I read they plan to release the source code once Rage is done
01:27
<potatis_invalido>
should spawn some interesting projects
01:27
<aho>
right away?
01:28
<potatis_invalido>
"At the QuakeCon 2009, Carmack said that he planned to petition ZeniMax Media to release the id Tech 4 source upon the release of Rage (expected in 2011)"
01:28
<potatis_invalido>
id Tech 4 is Doom 3's engine
01:28
<aho>
ye, rage is tech5
01:29
<aho>
thought you meant tech5 :)
01:29
<potatis_invalido>
Oh
01:29
<potatis_invalido>
LOL
01:29
<potatis_invalido>
I'm normally not impressed by graphics and such but Rage looks pretty sweet from the few images I've seen
01:30
<aho>
hope we'll see some kind-of impressive webgl game some day :>
01:30
<aho>
or well... some real game would be cool for starters
01:30
<aho>
;)
01:31
<aho>
(the quake one doesnt count)
01:31
<potatis_invalido>
Damn it
01:31
<potatis_invalido>
I was just going to mention that
01:32
<potatis_invalido>
Yes, I'd like that too
01:33
<potatis_invalido>
JavaScript will probably have to get a little faster first though.
01:34
<potatis_invalido>
But if we give it a few years I'm sure something will pop up
01:37
<aho>
speed is ok-ish nowadays, methinks
01:38
<aho>
full screen is still missing (right?)
01:38
<aho>
and mouse grabbing, too
01:38
<aho>
also, audio is very lacking
01:39
<potatis_invalido>
I agree about the first three
01:39
<potatis_invalido>
but audio?
01:40
<zewt>
there was some talk recently about getting started on a fullscreen API, but most of the talk was "which WG to do it in"; I don't know if that went anywhere
01:40
<potatis_invalido>
3D audio might be a problem, now that I think of it
01:40
<zewt>
critical for <video> and games
01:40
<potatis_invalido>
position audio.
01:40
<potatis_invalido>
positional*
01:41
<zewt>
well, there's no API designed for game-audio
01:41
<potatis_invalido>
for simple audio you can just create a bunch of audio elements
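For example, a common workaround of the era was a small pool of <audio> elements per sound effect so overlapping plays work; this is only a sketch (file name and pool size made up), not a real game-audio API:

    // Round-robin through a few pre-created <audio> elements per sound.
    function makeSfx(src, voices) {
      var pool = [], next = 0;
      for (var i = 0; i < (voices || 4); i++) {
        var a = new Audio(src);
        a.preload = 'auto';
        pool.push(a);
      }
      return function play() {
        var a = pool[next];
        next = (next + 1) % pool.length;
        if (a.readyState > 0) a.currentTime = 0;  // rewind if already loaded
        a.play();
      };
    }

    var shoot = makeSfx('shoot.ogg', 4);          // placeholder file name
    shoot();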
01:43
<potatis_invalido>
HTML and related technologies are becoming more and more like traditional programming environments
01:43
<zewt>
a full audio API is fairly complex, once you start getting into it (eg. positioning is just one part of environmental audio)
01:43
<potatis_invalido>
like Java and .NET
01:44
<aho>
there isn't even panning :)
01:44
<potatis_invalido>
So I don't think it's entirely unlikely we'd see a 3D audio API in this decade.
01:44
<potatis_invalido>
we'll*
01:44
<zewt>
also a tough topic: accurate sync
01:45
<aho>
there are also some issues with current implementations. typically there is too much latency and chrome is the worst offender... it goes silent after a few minutes :v
01:45
<zewt>
my work has been on music games for years, so i know some of the headaches involved with audio sync :P
01:51
<potatis_invalido>
It's always something, isn't it? Remember that people once were impressed by Pong. :)
01:53
<aho>
opera does indeed not bother with caching that xhr thing
01:53
<zewt>
http://www.youtube.com/watch?v=T3BMqt00z9Y <- that's what I do (... no, that's not me); I don't have any hope of ever being able to make that sort of game in a web app :|
01:53
<aho>
even though the header says it expires in 2 years :I
01:54
<zewt>
audio apis tend to not give enough attention to sync, so making a game that depends on sub-10ms play-position accuracy is tricky
01:57
<zewt>
aho: go nag some opera devs, i hate that too :P
01:58
<zewt>
don't know off-hand if it's the only browser that does that
02:01
<zewt>
it's particularly odd, since opera is generally more aggressive about caching than other browsers--not more conservative
02:02
<zewt>
(and there's no way they don't know about it)
02:05
<aho>
just cross-checked with fiddler... it really does request the file anew
02:10
<aho>
http://www.stevesouders.com/blog/2009/08/11/f5-and-xhr-deep-dive/
02:10
<aho>
according to that, an Expires date in the future should be enough
02:11
<zewt>
i remember not being able to get data cached at all--but it's been a while and I'm not sure which browser I was having trouble with
02:11
<zewt>
it may have been some browser always revalidating even when told not to
02:11
<zewt>
(which is a lesser crime but still very bad, forcing a round-trip)
02:13
<zewt>
(let me know if you confirm expire/opera, btw, curious)
02:16
<potatis_invalido>
I'm calling it a night. It was nice talking to you.
02:16
<aho>
nn
02:16
<aho>
well, as i said i already use expires headers (2 years in the future) and it doesn't cache anything
02:16
<aho>
same thing with his test case
02:17
<aho>
(he talks about opera 10 though. i'm using 11.)
02:17
<Hixie>
nessy: yt?
02:17
<Hixie>
foolip: yt?
02:17
<zewt>
could also be one of the other 92 cache-related headers
02:17
<Hixie>
doublec: yt?
02:17
<doublec>
Hixie: yep
02:18
<Hixie>
doublec: so i'm looking at how to make MediaController work better based on the feedback so far
02:18
<Hixie>
doublec: dunno how much you've been following that
02:18
<Hixie>
doublec: (that's the multiple-synchronised-video/audio thing)
02:18
<doublec>
Hixie: Unfortunately I haven't had a chance to look at that yet
02:18
<aho>
http://pastebin.com/srkSQ2dk <- looks fine to me :f
02:19
<aho>
ehm
02:19
<aho>
wrong one
02:19
<aho>
:>
02:19
<Hixie>
doublec: k. well, quick overview: basically, it proposes an object that a <video> or <audio> element can be slaved to
02:19
<Hixie>
doublec: and all the slaved elements are forced to play at the same rate, and stall at the same time if any of them stall for network buffering, etc
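From script, using the proposal as described here would presumably look something like the sketch below; the constructor name and the settable controller property are taken from the proposal under discussion and may not match what ultimately ships:

    // Slave a main video and a sign-language video to one controller so they
    // play, pause, and stall together on a shared clock.
    var controller = new MediaController();

    var main = document.querySelector('video#main');
    var signing = document.querySelector('video#signing');

    main.controller = controller;
    signing.controller = controller;

    controller.play();   // starts (and keeps) both elements in lockstep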
02:20
<aho>
http://pastebin.com/KuBUSdJ1
02:21
<Hixie>
doublec: one thing my original proposal supported somewhat accidentally due to the way it was written is that if any of the media elements were set to loop, it would act as if the looping track was copied many times over infinitely in both directions -- think like a drum beat or metronome loop playing over a song
02:21
<Hixie>
doublec: based on feedback, i'm changing the api a bit to have the controlling object have a known duration, which of course doesn't work so well if any of the subtracks are looping
02:22
<Hixie>
doublec: i'm curious as to whether you have any suggestions on that front
02:22
<Hixie>
doublec: i'm thinking of basically making the looping tracks "fill" any time required to get them to fit the length of the longest non-repeating track
02:23
<Hixie>
doublec: kinda like a repeating background image
02:24
<doublec>
Hixie: we've got a Mozilla all hands next week where I'd like to gather the mozilla interested parties to discuss the api
02:24
<doublec>
Hixie: and then provide feedback
02:25
<Hixie>
doublec: in that case you should definitely let the htmlwg chairs know your timeline because they are saying we have to be done with proposals by this friday :-/
02:25
<Hixie>
doublec: i've been trying to explain to them that that's crazy but with minimal success
02:25
<doublec>
Hixie: that is crazy
02:25
<zewt>
are the chairs people, or are they actually like, lawn chairs
02:25
<doublec>
Hixie: is there a public-html thread about the deadline?
02:25
<zewt>
from people's opinions of them in here I'm no longer certain
02:26
<Hixie>
doublec: yeah. one minute, brb and can get it for you.
02:27
<jamesr>
Hixie: can't you just punt this all from html5 so the htmlwg doesn't care?
02:30
<othermaciej>
doublec: the deadline is effectively imposed by the LC deadline - we could punt the issue past Last Call, but currently Adrian Bateman has objected to that course of action
02:30
<Hixie>
jamesr: i did punt it from html5. then someone escalated it and now the chairs are insisting it must be done before their arbitrary last call deadline.
02:30
<othermaciej>
he suggested an extra week instead
02:31
<othermaciej>
it might be possible to give a one-week extension without blowing the LC deadline
02:31
<othermaciej>
I'm also not at all against postponing the issue if there is consensus to do that
02:31
<jamesr>
with the idea that designing an API in one week will produce a better result than waiting for it to be actually good?
02:31
<othermaciej>
I'm not sure why Adrian asked for the issue to be resolved in the very short time before Last Call
02:32
<othermaciej>
but folks should feel free to ask him
02:32
<othermaciej>
well, there's several API designs now which I think took more than a week each to create
02:32
<othermaciej>
I think it's a matter of refining them and seeing if differences can be eliminated
02:32
<Hixie>
i like how one vendor, who btw isn't going to ship anything for years given their ship cycle, is able to force an issue to be resolved in a few weeks but when we ask them for input we get nothing for months
02:32
<Hixie>
othermaciej, jamesr: any opinion on the thing above btw?
02:32
<jamesr>
i'm not familiar enough with media issues to comment intelligently
02:33
<Hixie>
othermaciej: consensus isn't the best way to make decisions
02:33
<Hixie>
othermaciej: also, why do we need consensus on postponing but not consensus on rushing?
02:35
<doublec>
I'd rather not include it at all than rush an api
02:35
<othermaciej>
Hixie: after reading the discussion thread, I kind of think the MediaController thing is ill-conceived - it's there in theory to avoid media elements having different master and slave modes, but in implementation terms, they have to have modes anyway
02:35
<othermaciej>
Hixie: and it results in an API that can't deliver what it promises
02:36
<Hixie>
othermaciej: it's there to avoid making the api asymmetric
02:36
<othermaciej>
Hixie: so it seems like it's prioritizing purity over implementations
02:36
<Hixie>
othermaciej: it's prioritising for authors.
02:36
<othermaciej>
(I don't think it's a benefit to authors to give them an API that doesn't work)
02:36
<Hixie>
why would it not work?
02:36
<Hixie>
or rather what doesn't work?
02:36
<othermaciej>
well, I'm imagining how we'd change our built-in controls to work with this API
02:37
<othermaciej>
they'd have to basically always talk to the controller object and never use the API directly on video
02:37
<Hixie>
same as with all the other proposals
02:37
<othermaciej>
and without a master/slave relationship, there wouldn't even be an easy way to have a different set of controls for the auxiliary tracks
02:37
<Hixie>
do you have an example of what you mean?
02:37
<othermaciej>
I imagine this is true of most sets of JS-authored controls that want to work generically, and not just for one page
02:37
<Hixie>
what's an auxiliary track?
02:38
<othermaciej>
for example, a sign language translation that displays in a separate area
02:38
<othermaciej>
it might be that there are use cases where there isn't one clear "main" display area, but in the accessibility use cases, there generally is
02:39
<Hixie>
if there are two video playback areas, one with "content" and one with "sign language", what controls do you expect to see on the two elements?
02:39
<Hixie>
the two areas, i mean
02:39
<othermaciej>
I'd expect the content one to have full controls that control the master timeline
02:40
<othermaciej>
I would expect the sign language track to have either the same, or possibly to have reduced controls that don't actually manipulate the timeline, to reduce confusion/complexity
02:40
<othermaciej>
I don't know what our UI people would prefer
02:40
<othermaciej>
also we might hack the built-in fullscreen control to try to take the whole synchronized group full screen
02:40
<othermaciej>
though I dunno exactly how we'd do that
02:40
<Hixie>
none of this sounds hard with the MediaController API
02:41
<othermaciej>
it's not especially hard, but it's not easier either
02:41
<othermaciej>
the APIs on individual media elements would basically turn into an obscure thing that you should almost never use
02:41
<Hixie>
same as if they're asymmetric
02:41
<Hixie>
except for one of the elements
02:41
<Hixie>
where they'd be serving two purposes
02:41
<othermaciej>
(does the MediaController API let you designate one media element as the lead, and others as auxiliary?)
02:42
<Hixie>
it could
02:42
<Hixie>
doesn't currently
02:42
<othermaciej>
I think when elements are synchronized, then probably all of their APIs should control the master timeline
02:42
<Hixie>
in most cases there isn't a lead, as far as i can tell
02:42
<othermaciej>
it's probably true that for non-accessibility use cases, there isn't necessarily a lead
02:43
<othermaciej>
anyway, I don't like rushing the design of this either :-(
02:43
<othermaciej>
and I have to get food while there's still time
02:43
<othermaciej>
brb
02:49
<nessy>
Hixie: here
02:50
<Hixie>
nessy: trying to work out how to deal with looping of tracks in a multitrack situation, especially when the looping track is not the same length as the other tracks
02:50
<Hixie>
nessy: any ideas?
02:50
nessy
reading up on discussion about .. gimme a sec
02:55
nessy
ok..
02:55
<nessy>
so, it's all about what we want to see in the UI, I guess
02:56
<Hixie>
i doubt most cases with looping would have a ui
02:56
<nessy>
do we want individual slave tracks to have their own controls (API and displayed)?
02:56
<Hixie>
it's more about what cases might need looping
02:56
<nessy>
if it were up to me, I'd disable individual looping
02:56
<Hixie>
most looping is likely to be used in things like games
02:56
<nessy>
in fact, all the functions that access the timeline, I would slave them together
02:57
<zewt>
is there really any practical use for looping except to loop an entire, combined media?
02:57
<zewt>
(eg. all tracks or none)
02:57
<nessy>
what zewt says...
02:57
<zewt>
the "metronome" thing is pretty contrived
02:57
<nessy>
what's your game use case?
02:58
<Hixie>
games use audio for all kinds of things
02:58
<zewt>
game audio is way beyond <audio> anyway
02:58
<Hixie>
e.g. background fire effects when you're in a room with a fire
02:58
<Hixie>
music
02:58
<Hixie>
explosions
02:58
<nessy>
the model I have in mind for multitrack is basically to replicate for external files what in-band would do - I don't think there is any in-band use for looping individual tracks
02:59
<Hixie>
have you ever used garage band?
02:59
<zewt>
(my audio engine allows beat-matching two looping music tracks to seamlessly switch from one to another; rather beyond anything current APIs would try to do, heh)
02:59
<nessy>
yeah, but this is not an API to create a drum machine or music tracks
02:59
<Hixie>
we'd probably want a better api for something like garage band, but for simpler cases of things like that it might make sense to just use audio
02:59
<nessy>
I'd want that problem to be solved by an audio API
03:00
<Hixie>
for the whole app, sure
03:00
<nessy>
or asked otherwise: would somebody that wants to implement a game or a drum machine really want to use a multitrack media resource approach?
03:00
<zewt>
imo, trying to address game use cases with <audio> without actually expanding it to be a full-blown sound engine API ... feels like design creep
03:00
<Hixie>
what i'm saying is that smaller-scale parts of that might well be simple enough that people would just use <audio> for it
03:01
<nessy>
they wouldn't use in-band multitrack, certainly, so why try to bend external multitrack to support it?
03:01
<Hixie>
it's not about bending external multitrack
03:02
<Hixie>
given an external controller, these abilities just fall out if we do it right. the question is what is the right way to do it.
03:02
<nessy>
well, it creates a set of problems that I don't think we want to address in multitrack
03:03
<Hixie>
what problems?
03:03
<nessy>
such as independent looping, such as defining the duration, such as what happens with independent startOffset and with changes to playbackRate
03:03
<Hixie>
those aren't problems
03:03
<nessy>
what would be the duration of a multitrack resource that has a looping track?
03:03
<Hixie>
those are things we have to define anyway
03:03
<Hixie>
we can define them as "they do nothing" or we can provide a useful definition
03:03
<Hixie>
but either way we have to address them
03:04
<nessy>
no, if we don't accept looping on an individual track, the duration of the overall composition is easier to determine
03:04
<nessy>
the problem space is smaller
03:04
<Hixie>
if you decide something, then the number of things you have to decide is smaller, yes
03:04
<roc>
I hear we need to give feedback by Friday
03:04
<nessy>
hi roc!
03:04
<Hixie>
but the number of things you have to decide in all is still the same
03:05
<roc>
what's up with that? We can't give decent feedback until we've implemented it, and believe it or not we won't have it implemented by Friday
03:05
<Hixie>
oh, that reminds me, i had to get a url for doublec
03:05
<roc>
nessy: hi
03:05
<nessy>
yeah, do write an email on the list that you also want more time to give feedback
03:05
<zewt>
FWIW, I'd have a duration *assigned* to looping tracks individually (which might be "infinite" if the user wants to loop forever); tracks loop for as long as they're told to; and the duration of the composition of many tracks is straightforward: the maximum duration of all tracks
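In code form, the rule zewt is suggesting is just a max over assigned per-track durations (names invented for illustration):

    // Each track gets an assigned duration; a loop-forever track gets Infinity.
    // The composition's duration is the longest of them.
    function compositionDuration(tracks) {
      var duration = 0;
      for (var i = 0; i < tracks.length; i++) {
        var t = tracks[i];
        var assigned = t.loopForever ? Infinity
                                     : t.mediaDuration * (t.loopCount || 1);
        duration = Math.max(duration, assigned);
      }
      return duration;   // Infinity if any track loops forever
    }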
03:05
<nessy>
I've already stated that multiple times
03:06
<roc>
which list?
03:06
<roc>
public-html?
03:06
<nessy>
yes
03:06
<nessy>
reply to Adrian's message I would say
03:06
<Hixie>
doublec, roc: http://lists.w3.org/Archives/Public/public-html-a11y/2011Mar/0214.html
03:07
<nessy>
search for "Timing of ISSUE-152"
03:07
<Hixie>
http://lists.w3.org/Archives/Public/public-html/2011Mar/0746.html is adrian's e-mail
03:07
<nessy>
well, the first one is on the a11y list and thus more an internal discussion, IMHO
03:07
<nessy>
yeah, that second one
03:07
<Hixie>
"internal"?
03:08
<nessy>
well, TF-internal discussion to at least get agreement within the TF
03:09
<Hixie>
agreement within the TF means nothing except that it's harder to discuss the issue once it goes to the whole group since the people in the TF are invested in their agreement :-)
03:09
Hixie
thinks the TFs are a bad idea, but that's probably old news
03:09
<nessy>
yes, that's why I am saying the one on the main list is more important
03:09
<nessy>
btw: there's not necessarily agreement in the TF even when things are moved forward
03:09
<nessy>
anyway...
03:12
<nessy>
what is the duration of a composition that contains a looping track?
03:12
<Hixie>
that is one of the questions we'd have to answer if we decide looping is to be supported
03:12
<nessy>
would it play until all non-looping tracks have completed their duration and then just continue with the looping track?
03:13
<nessy>
to be honest, I think it's an artificial use case - a solution looking for a problem
03:13
<zewt>
it seems it's hard to decide what to do because there are no use cases to base a design on :)
03:14
<Hixie>
it's not a use case, it's not a solution -- it's just something we have to decide one way or the other
03:14
<Hixie>
nessy: it's like saying that the ability to put a span inside an em inside a dfn is an artificial use case or a solution looking for a problem
03:14
<nessy>
what things do you intend to lock into sync between the slave elements?
03:15
<Hixie>
based on feedback so far, playback rate
03:16
<nessy>
also currentTime progress when playing, I guess
03:16
<Hixie>
how do you mean?
03:17
<nessy>
well, when you start playing one, you should start playing them all (as far as they are set to display), right?
03:17
<Hixie>
that's the playback rate :-)
03:18
<nessy>
how do you jump to the same time offset across all of them?
03:18
<nessy>
from script
03:18
<nessy>
so you can make a common controller for all of them
03:18
<roc>
Hixie: I have a feeling that solutions for multiple-media-resource synchronization, advanced audio API and RTC all need to be solved in an integrated way
03:18
<roc>
right now I think we have three trains rushing towards each other at high speed
03:19
<Hixie>
nessy: based on your feedback i'm planning on providing a currentTime feature in the MediaController to replace the seek() feature (that's why the looping thing became an issue -- it wasn't an issue at all with the old seek() approach, which is why i'd gone with seek() rather than a currentTime approach)
03:19
<nessy>
lol: want to add a fourth train? HTTP adaptive streaming
03:19
<nessy>
Hixie: I see - that explains it
03:19
<Hixie>
roc: the RTC and multiple-track things are definitely coordinated, at least insofar as I'm working on them
03:20
<Hixie>
roc: in fact they currently share an interface (the TrackList thing)
03:20
<roc>
yeah
03:20
<nessy>
oh really? … I need to check that...
03:21
<Hixie>
roc: (that's one reason i'm hoping to make the MediaController thing be good enough to convince silvia and others, so that they are coordinated, since the other proposals aren't coordinated like that)
03:21
<roc>
but the audio API piece is very significant
03:21
<Hixie>
roc: agreed
03:21
<Hixie>
roc: i need to coordinate with them more
03:21
<roc>
audio API needs to sync multiple streams
03:21
<roc>
with processing
03:21
<Hixie>
unfortunately they went to hide into an xg of their own :-P
03:21
<Hixie>
bbiab
03:22
<roc>
audio API needs to integrate with RTC to enable big use cases like XBox 360's voice distortion
03:22
<roc>
er, XBox Live
03:24
<nessy>
roc: do write about that to public-html, too - maybe that makes Adrian change his mind
03:25
<roc>
message already sent
03:26
<karlcow>
http://httparchive.org/interesting.php
03:26
<nessy>
in real-time communication - do we really need to lock the local and remote video stream to each other via a controller?
03:26
<roc>
you can't
03:26
<nessy>
I mean: skype doesn't do that - it just displays the data that it gets as quickly as possible
03:27
<roc>
who says we should do that?
03:27
<karlcow>
28% of Web pages with Error.
03:27
<nessy>
ok, cool - I need to find out what we need tracks for in RTC then...
03:27
<karlcow>
or more exactly 28% of URIs with HTTP errors
03:28
<karlcow>
61% with HTTP redirects
03:28
<nessy>
(I've not read up on the RTC proposal yet)
03:28
<nessy>
http://www.whatwg.org/specs/web-apps/current-work/complete/video-conferencing-and-peer-to-peer-communication.html#generatedstream
03:29
<nessy>
ok, there is no controller there, just the track lists
03:31
<roc>
I think the tracklist stuff is low-hanging fruit
03:31
<roc>
we could just add that to media elements without any of the MediaController stuff, relatively easy to implement and addresses some important use cases
03:31
<roc>
that seems uncontroversial to me
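For reference, the sort of thing that tracklist API would enable from script; attribute and member names here follow the proposal being discussed and are not guaranteed to match the final spec:

    // Enumerate a video's in-band audio tracks and enable only the French dub.
    var video = document.querySelector('video');
    for (var i = 0; i < video.audioTracks.length; i++) {
      var track = video.audioTracks[i];
      track.enabled = (track.language === 'fr');
    }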
03:33
<othermaciej>
Hixie: so, let me elaborate on my prior use case a bit more
03:33
<othermaciej>
Hixie: let's say I have a site that embeds various videos
03:33
<othermaciej>
I use an existing JS library to provide custom branded controls
03:34
<othermaciej>
this library is written to use the regular <video> API, not MediaController, since that is what it has historically used, and it generally works if you don't synchronize additional media items, so the developer never thought to change it
03:35
<othermaciej>
now for one video, I want to synch an external sign language translation video
03:35
<othermaciej>
in a separate playback area
03:35
<othermaciej>
it seems like my options are:
03:35
<othermaciej>
- rewrite the control logic
03:35
<othermaciej>
- switch to another JS library
03:35
<othermaciej>
- accept broken playback controls
03:35
<othermaciej>
that's kind of sucky
03:35
<othermaciej>
so there have to be strong use cases to justify the model that imposes this porting cost
03:36
<othermaciej>
now, one thing you could do is make the playback API on synch'd videos always throw, to force you to use the media controller, so at least that kind of bug is caught sooner
03:37
<othermaciej>
but that seems to remove the elegance and symmetry from the proposal
03:39
<othermaciej>
(I think the Eric/Sylvia proposal will Just Work for that case if you don't want any controls on the slave track, which most likely you do not)
03:40
<nessy>
(note that I am not married to our proposal - I want this to be worked out properly and I am not yet sure what the best approach is - still experimenting)
03:48
<Hixie>
othermaciej: if we had much legacy for <video>, i'd agree
03:48
<Hixie>
othermaciej: but we don't
03:48
<Hixie>
othermaciej: we can just treat this as part of the original api
03:48
<Hixie>
othermaciej: note that just using the Eric/Sylvia proposal wouldn't work either, if the slaved track was longer
03:49
<Hixie>
othermaciej: and it wouldn't work right if it assumed that the readyState of the main media element was representative of when the media could play
03:49
<othermaciej>
really? there's a lot of hosting sites using HTML5 video, and a lot of control libraries
03:49
<Hixie>
othermaciej: (since the other track might still be buffering)
03:49
<othermaciej>
I'm not down with (effectively) breaking compatibility with all current HTML5 video content
03:49
<Hixie>
othermaciej: this wouldn't break compat with anything, it just adds a new feature
03:50
<othermaciej>
sure, but if you use the feature, it breaks compat with any existing reusable video code you may have used
03:50
<othermaciej>
Eric/Sylvia proposal is easily extended to make the master media element proxy more things for the whole group
03:50
<Hixie>
othermaciej: the idea of reusing a media element as the master is a huge hack that will cause problems for years, imho
03:51
<othermaciej>
what kind of problems do you expect it to cause?
03:52
<Hixie>
othermaciej: it ties together the status for the whole group and the status for a single resource in such a way that you can't intuitively tell which you're looking at
03:52
<Hixie>
othermaciej: so e.g. we'll be stuck with not having tracks longer than the master track
03:52
<Hixie>
othermaciej: we'll be stuck with never having separate playback rate controls for individual tracks in a consistent way
03:52
<othermaciej>
if you make it only represent status for the whole group, we won't be stuck with such things
03:52
<nessy>
what is "working right for slaved tracks of different duration"? what do you expect should happen? it is possible to make that happen in any of the proposals IMHO
03:52
<othermaciej>
the timeline would be max of all timelines
03:52
<Hixie>
othermaciej: you wouldn't be able to mute the audio of just the master track
03:53
<Hixie>
othermaciej: you wouldn't be able to seek just the master track
03:53
<Hixie>
othermaciej: the list goes on and on and on
03:53
<othermaciej>
separate playback rate controls for individual tracks doesn't really have much of a use case
03:53
<othermaciej>
your suggested use cases do not seem realistic or useful as reviewed by media experts
03:53
<Hixie>
it's just an example of something we get blocked out of
03:54
<Hixie>
anything that relies on the api to control an individual track as opposed to the group is screwed
03:54
<othermaciej>
seeking just the master track also seems useless
03:54
<Hixie>
nessy: either the master's video.duration is the resource's duration (in which case you lose the ability to see the group's) or it's the group's (in which case you lose the ability to see the resource's)
03:54
<othermaciej>
well, if we ever have real use cases for and practical implementability of individual playback control of sync'd tracks, we can add new API for that
03:55
<Hixie>
othermaciej: you keep dismissing abilities as having no use cases but the whole point is to have a simple api that enables any use case
03:55
<othermaciej>
right now, there aren't good use cases, and it's not practical to implement
03:55
<othermaciej>
Hixie: you sound like an RDF guy right now
03:55
<Hixie>
hah
03:55
<Hixie>
based on what Jer was saying, it's quite feasible to implement a MediaController approach
03:55
<nessy>
I'm not opposed to a MediaController approach either
03:56
<nessy>
I think we have to solve the same problems for all of the proposals, btw
03:56
<othermaciej>
it is, it will just have severe limitations that make the edge case use cases not really work
03:56
<othermaciej>
and it will break compat with existing controller code if you ever use syncing
03:56
<Hixie>
what severe limitations?
03:56
<othermaciej>
and poor performance if you don't hit the sweet spot
03:57
<Hixie>
so would the other proposal
03:57
<Hixie>
and the existing controllers don't just work with the asymmetric proposal either
03:57
<othermaciej>
you can't actually control playback of the individual tracks separately live
03:58
<Hixie>
the asymmetric proposal allows that too, to the same extent (except that you can't do the master track for a random reason)
03:58
<nessy>
the master is not independent of its slaves, that's right
03:58
<Hixie>
so it would have the same "severe limitation"
03:58
<Hixie>
(or as i would put it, and as jer put it, "quality of implementation issue")
03:59
<Hixie>
othermaciej: the difference between this and rdf is that in this case, we get the various abilities and a consistent api by having a _simpler_ solution
03:59
<othermaciej>
sure, but if every implementation is going to have a QoI issue that makes a feature not practical to use, it's not helpful to say it's just a QoI issue
03:59
<nessy>
my main objection is whether it can be implemented, since the controller basically has its own independent timeline
03:59
<Hixie>
othermaciej: so should we not support seeking in the current api either?
04:00
<othermaciej>
I think it's debatable whether adding new API or making existing API modal is simpler
04:00
<Hixie>
othermaciej: or playbackRate?
04:00
<nessy>
however, we could always assume that the first element that is slaved to the controller is the main one to define the timeline from a sw implementation pov
04:00
<Hixie>
the video api is full of things that have QoI issues
04:00
<Hixie>
nessy: what do you mean by "timeline"?
04:01
<nessy>
clock
04:01
<Hixie>
nessy: the MediaController spec already defines that
04:01
<nessy>
it drives the playbackRate of all the slaves
04:01
<nessy>
so, it needs to have a clock
04:01
<Hixie>
"All the slaved media elements of a MediaController must use the same clock for their definition of their media timeline's unit time."
04:01
<nessy>
and in all media frameworks that I know, there is no such thing as an abstract clock - it's always bound to a specific resource
04:02
<Hixie>
no need to define which one it is, since it doesn't matter which one it is so long as there is only noe
04:02
<nessy>
(I'm talking implementation, not definition)
04:02
<Hixie>
one
04:02
<Hixie>
ah ok
04:02
<Hixie>
i don't see the problem then
04:03
<Hixie>
why would it be any different to implement?
04:03
<nessy>
yeah, all I am saying is that it's probably a hack to get it implemented and not as clean as it might seem
04:03
<nessy>
i.e. if you happen to remove during playback the one element that defines the timeline, all sorts of things may go wrong
04:04
<Hixie>
if you remove anything during playback, you need to redo the group anyway
04:04
<Hixie>
according to jer
04:04
<Hixie>
so that's rather academic
04:04
<Hixie>
also, you can do that with your proposal too :-)
04:04
<nessy>
you could remove any slave without affecting the timeline
04:04
<nessy>
removing the master is kinda dumb
04:05
<Hixie>
removing a slave, according to jer, will cause stalling
04:05
<nessy>
when you have a controller, yes, because you may need to find another element to become the timeline master
04:05
<Hixie>
no, in general
04:05
<Hixie>
not because of the controller
04:05
<nessy>
I don't thinks - that's not how I interpreted Jer's feedback
04:06
<nessy>
s/thinks/think so/
04:06
<Hixie>
ah, correction, he was talking about adding tracks
04:06
<Hixie>
in any case, what's the use case for ever adding or removing tracks on the fly?
04:06
<nessy>
yeah, I guess hooking that into the master or controller would take time
04:07
<Hixie>
it seems like a rare event, so i don't see much point worrying about it
04:07
<Hixie>
and since both proposals have the problem, it seems academic
04:07
<nessy>
not at all - what if you are playing a video and mid-video you turn on the audio description track?
04:07
<Hixie>
it would already be slaved
04:07
<nessy>
I think that use case is more common than looping ;-)
04:07
<Hixie>
just disabled
04:08
<Hixie>
looping of synced content isn't common at all as far as i'm aware
04:08
<nessy>
disabled tracks aren't loaded and thus aren't progressing in time
04:08
<Hixie>
they can be
04:08
<Hixie>
they certainly will be loaded
04:08
<nessy>
(agree on the looping - I can't think of having seen that anywhere)
04:09
<Hixie>
my point is just that the problem exists with all the proposals
04:09
<nessy>
why would you load a resource that you're not using?
04:09
<Hixie>
because you might use it
04:09
<Hixie>
that's what prefetching is all about
04:10
<nessy>
yes, but in this case prefetching doesn't make much sense for disabled tracks
04:10
<nessy>
in particular if you have a video with 52 dubbed audio tracks where you only want to play one
04:10
<Hixie>
*shrug* sure, the author would say which to prefetch
04:12
<nessy>
all I am saying is that in my understanding the implementation would be a hack that would probably decide on a master video or audio anyway
04:12
<nessy>
this does not mean, however, that we have to define the html markup and api in that way
04:12
<Hixie>
i'm not sure i understand your use of the word "hack", but ok
04:14
<nessy>
a "hack" in that the concept that is defined in the controller as a clock that applies to all elements would in the implementation mean to clock of one element to which the others are slaved
04:14
<nessy>
I can see advantages of the controller approach
04:15
<Hixie>
but the spec doesn't say the controller has a clock
04:15
<Hixie>
it says the slaved elements must have the same clock, that's all
04:15
<nessy>
I would almost feel compelled to give the controller a css rendering area of its own even, so we can use css to arrange all the slave elements into that box and provide a single transport bar over all of them
04:16
<roc>
I want to implement API to list and select in-band tracks in a single media element, and punt on the out-of-band stuff until we understand how RTC and advanced audio API fit in
04:16
<Hixie>
(in practice you have to use the clock of the sound card. in your proposal, what would happen if the master was silent and there were two slaved audio tracks? the UA would have to use the clock of one of the audio tracks.)
04:16
<roc>
if the master is silent you can pretend it has an audio track of all silence and mix the slaves into it
04:16
<Hixie>
roc: i had been hoping to punt the api for the same reason, but unfortunately nessy then escalated the issue which is how we ended up discussing it :-)
04:17
<nessy>
I did not escalate the issue - I've not ever escalated any issue!
04:17
<Hixie>
http://www.w3.org/html/wg/tracker/issues/152 says "This issue was raised on behalf of Silvia Pfeiffer"
04:17
<nessy>
but I certainly registered the bug
04:18
<Hixie>
http://www.w3.org/Bugs/Public/show_bug.cgi?id=9452#c8 is where the TrackerRequest keyword was added, which indeed suggests otherwise
04:26
<nessy>
I guess it was just raised as part of the bugs that were registered pre last call
04:26
<nessy>
anyway...
04:27
<nessy>
what influence does RTC have on multitrack?
04:27
<Hixie>
hard to know in advance
04:27
<Hixie>
in the current proposal they share the audioTracks and videoTracks attributes
04:27
<Hixie>
amongst other things
04:27
<Hixie>
(like both using <video>)
04:27
<nessy>
do you want to make use of the controller concept for rtc?
04:28
<Hixie>
i'm not currently aware of any reason to use MediaController in the context of video conferencing, but naturally we'd have to make sure how they interact is defined
04:30
<nessy>
or asked otherwise: why do you have a GeneratedStream API, when it is basically the same as the MediaController?
04:30
<nessy>
aren't they basically achieving the same thing?
04:30
<nessy>
(really trying to understand it - no criticism)
04:32
<Hixie>
i don't understand in what way they are similar :-)
04:32
<Hixie>
they have nothing in common as far as i can tell
04:33
<nessy>
a controller synchronizes multiple audio and video streams - so does a generatedStream
04:34
<Hixie>
a generatedstream just exposes a local webcam
04:34
<Hixie>
it doesn't do synchronisation
04:34
<Hixie>
not in the sense that mediacontroller does
04:34
<nessy>
so the audio and video tracks inside it are not synchronized?
04:34
<Hixie>
they're one media stream
04:35
<nessy>
is the difference that the GeneratedStream is creating data, while the MediaController is playing back data?
04:35
<Hixie>
the mediacontroller doesn't play back data
04:35
<Hixie>
it just ensures a number of <video> elements have the same clock
04:35
<Hixie>
it's like saying a <video> is the same as a mediacontroller, which i think is equally non-sequitur, though i guess you are proposing that too :-)
04:35
<nessy>
… and they play them back in sync, so indirectly, it does that
04:36
<nessy>
a <video> has a controller, whether you want it or not - it may not be exposed ;-)
04:36
<Hixie>
overloading objects to do many related things is bad api design
04:37
<Hixie>
one should just have one object per task
04:37
<nessy>
sure
04:37
<Hixie>
(video, in retrospect, is poorly designed because it amalgamates video fetch and video playback)
04:37
<nessy>
I'm trying to understand the difference ...
04:37
<nessy>
and if we say that there is a difference, then I also don't see a need to have one wait for the other to be defined
04:39
<Hixie>
things that interact need to be designed with each other in mind
04:39
<Hixie>
otherwise you end up with apis that look like, well, a lot of the web's apis
04:39
<othermaciej>
fair point; though there is also value to doing things incrementally
04:39
<othermaciej>
it is hard to strike the right balance
04:39
<nessy>
yeah, unfortunately, it is impossible to solve all the world's problems at the same time - you end up achieving nothing
04:39
<Hixie>
incrementally is fine too, but it risks getting things like the video element :-)
04:40
<Hixie>
nessy: what i often do in the whatwg spec is overdesign and then comment-out large parts of the feature
04:40
<nessy>
yeah, I learnt that recently - I was indeed curious!
04:40
<Hixie>
nessy: the (new) drag-and-drop api being one example, where the spec already has support for a number of things that aren't in the spec, like promises and file objects (or is it blob objects)
04:41
<Hixie>
or like automatic ducking in the multitrack feature
04:41
<Hixie>
which is in there but commented out
04:41
<nessy>
is there a way to view the spec with all your commented out things
04:41
<nessy>
?
04:41
<Hixie>
view > source :-)
04:41
<nessy>
lol
04:42
<nessy>
ok… I might browse the repository - I find that easier
04:42
<Hixie>
(though you're better off just looking at the /source file)
04:42
<roc>
Hixie: I think there could be an indirect relationship between RTC and multitrack
04:42
<nessy>
anyway - I am planning to try and design the multitrack with a controller in mind, too, independently of what you did, and see where that takes me
04:42
<nessy>
… might end up having a clearer view of things then
04:43
<nessy>
roc: how so? do you have a hunch?
04:43
<roc>
Hixie's RTC proposal defines Streams which can be used as sources for media elements
04:44
<roc>
maybe that feature isn't literally in Hixie's draft, but it's clearly coming
04:44
<Hixie>
it's there
04:44
<Hixie>
there's even an example
04:44
<roc>
therefore multitrack synchronization needs to work with Streams
04:44
<Hixie>
search for "Snapshot Kiosk"
04:45
<roc>
furthermore
04:45
<roc>
if we define an advanced audio API based on Streams
04:45
<roc>
(including integrating existing audio API proposals with Streams)
04:46
<roc>
then that API will almost certainly allow mixing of multiple sources
04:46
<roc>
which will need to be synchronized
04:47
<roc>
which creates considerable overlap with MediaController and related proposals
04:47
<nessy>
GeneratedStream is the thing that synchronizes them, right?
04:47
<Hixie>
GeneratedStream is just a representation of the local WebCam's output
04:48
<Hixie>
it doesn't synchronise anything
04:48
<Hixie>
you can think of it as a remote stream
04:48
<Hixie>
rtsp://whatever/foo
04:49
<roc>
in particular any advanced audio API is likely to need a way to get a Stream (or equivalent) representing an arbitrary media resource, and mix those Streams together in a synchronized way, optionally with effects
04:50
<roc>
at which point you almost have the functionality of a MediaController
04:50
<roc>
even if the APIs stay unrelated (I'm not sure if that's wise or not), the implementation probably should have much in common
04:51
<roc>
at least in Gecko, where we're not shackled by some media framework
04:51
<roc>
am I making sense?
04:55
<roc>
I guess not :-)
04:55
nessy
is thinking...
04:56
<nessy>
well, if the implementation shares a lot, that would not have much of an effect on the markup and API, I guess
04:56
<Hixie>
hmm...
04:57
<nessy>
I am trying to understand how the streams are synchronized in the rtc proposal
04:59
<Hixie>
having added a combined currentTime feature, i wonder whether to just force the slaved tracks to be aligned up and not support offsets at all
04:59
<Hixie>
since offsets would have to be implicit, which is confusing
04:59
<Hixie>
hmm
05:00
Hixie
isn't liking the implications of having to add currentTime to the media controller
05:00
<Hixie>
nessy: nothing synchronises anything in the PeerConnection/GeneratedStream world
05:00
<Hixie>
nessy: there's nothing to synchronise
05:01
<nessy>
a local audio and video stream that are recorded and then sent to the other side would need to be synchronized to each other
05:01
<Hixie>
they're one stream
05:01
<nessy>
s/recorded/captured/
05:02
<Hixie>
that's like asking what synchronises the audio and video in a .mov file
05:02
<Hixie>
they're never _not_ synchronised
05:02
<nessy>
yes, and there is an answer: the container
05:02
<Hixie>
same answer applies here
05:02
<nessy>
are they being put in a container?
05:03
<nessy>
by the GeneratedStream object?
05:03
<nessy>
then it does the synchronization
05:03
<Hixie>
the user agent serialises them (with a container) as part of RTP
05:03
<Hixie>
(or as part of the StreamRecorder when recording to a file)
05:04
<Hixie>
this is an entirely different, and far less interesting, kind of synchronisation than what we're talking about with MediaController
05:04
<nessy>
anyway … more importantly your question before...
05:05
<nessy>
I would agree that we should not support offsets
05:05
<Hixie>
well we'd still support offsets, either now or eventually
05:05
<Hixie>
the question is when we do, what should the api look like
05:05
<nessy>
the by far most common use case for multitrack is same length audio and video tracks that all start at the same time and all end at roughly the same time
05:06
<Hixie>
that's a self-fulfilling prophecy if we design it to only truly cater for that use case
05:07
<nessy>
maybe there are two fundamentally different use cases that we are trying to satisfy with the same approach
05:07
<Hixie>
if we instead do as roc is suggesting, and design this with the audio api in mind, then "drum machines" as you call them (and more specifically, the audio synchronisation in video games) might well be far more common use cases
05:08
<Hixie>
in the long run
05:09
<Hixie>
even if we don't support that, i think things like director's commentaries are going to be a major use case, and they're often not the same length as the video
05:09
<nessy>
they still start at the same time
05:09
<Hixie>
usually
05:09
<Hixie>
though often they're silent for a while at the start
05:09
<nessy>
overhang at the end is not as big a problem as different playback positions for each track
05:09
<Hixie>
different playback positions isn't a problem :-)
05:10
<nessy>
how do you create a common transport bar then?
05:10
<Hixie>
we can easily have the media controller define a zero point and a total duration that spans the earliest point to the latest point, taking offsets into account
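i.e., roughly this arithmetic (invented names, just to pin down what "zero point" and "total duration" would mean):

    // Combined timeline spanning all slaved tracks, taking per-track offsets
    // into account: zero point = earliest start, duration = latest end - zero.
    function groupTimeline(tracks) {            // tracks: [{offset, duration}]
      var start = Infinity, end = -Infinity;
      for (var i = 0; i < tracks.length; i++) {
        start = Math.min(start, tracks[i].offset);
        end = Math.max(end, tracks[i].offset + tracks[i].duration);
      }
      return { zero: start, duration: end - start };
    }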
05:11
<zewt>
for a commentary track that doesn't start immediately, that can probably be done by having a timestamp offset in the file itself, at authoring time to match the video track it's for--rather than setting it by hand in script
05:11
<nessy>
ok, then currentTime would be the time on that transport bar - where is the problem?
05:11
<Hixie>
nessy: you're the one who said there was a problem :-)
05:12
<nessy>
you said:
05:12
<Hixie>
zewt: yeah, that would be ideal
05:12
<nessy>
"having added a combined currentTime feature, i wonder whether to just force the slaved tracks to be aligned up and not support offsets at all
05:12
<nessy>
since offsets would have to be implicit, which is confusing"
05:14
<Hixie>
here's what i just wrote in the e-mail i'm writing in response to all the feedback:
05:14
<Hixie>
Originally, the tracks could be offset because their .currentTime attributes were advanced at a fixed rate, and the MediaController didn't have any concept of the currentTime, so just changing the currentTime of a media element offset the video by the difference between the old and new values.
05:15
<Hixie>
I guess theoretically we can still do that, but it becomes kind of weird that you can change the currentTime of each video in turn, and when you change the first one, the controller's "duration" changes, and then suddenly when you change the last slaved media element's currentTime, the duration changes back.
05:17
<nessy>
all good thoughts!
05:20
<nessy>
I think we're starting to feel the pain between close and loose coupling
06:07
<Hixie>
actually i guess i have to support the offset thing, because otherwise setting currentTime on the video would be even weirder
06:07
<Hixie>
i wonder what nessy did in her proposal
06:08
<Hixie>
"currentTime of the slaves is turned into a readonly attribute"
06:08
<Hixie>
o_O
06:08
<Hixie>
nessy: what does "currentTime of the slaves is turned into a readonly attribute" mean in your proposal?
06:09
<nessy>
it means that the slaves are slaved to the timeline of the master and cannot seek on their own
06:10
<Hixie>
so what happens when you set a slave's .currentTime attribute?
06:10
<nessy>
however, they may not be fully in sync with the master, so the currentTime does display where they are actually at
06:10
<nessy>
nothing - it's rejected
06:10
<nessy>
but that was something I randomly made up - not sure it makes sense
06:10
<Hixie>
the setter just ignores the new value?
06:10
<nessy>
it was part of slaving everything to the master
06:10
<nessy>
yes
06:10
<Hixie>
huh
06:11
<nessy>
I guess it would be possible - if the master is paused - to set the slave and play with it individually
06:12
<nessy>
but as soon as the master (or controller) is touched, then the slaves would re-sync with it
06:13
<Hixie>
it seems really weird to have a mutable attribute whose value changes but which ignores values it is set to
06:14
<Hixie>
i wonder how else to handle this
06:14
<Hixie>
i guess i'll have to support the offsets after all
06:14
<Hixie>
hmm
06:24
<nessy1>
curious: what does it have to do with offsets?
06:24
<Hixie>
ignoring a new value in a mutable attribute is bad api design, so imho not an option
06:24
<nessy1>
ok, fair enough - but how else to deal with it?
06:24
<Hixie>
exactly
06:25
<Hixie>
if we have to make it do something, what is the logical thing for it to do?
06:25
<Hixie>
i see two options:
06:25
<nessy1>
you could play independently, I guess
06:25
<Hixie>
making all the currentTimes into proxies for each other, or making it just change that element's position
06:25
nessy1
is on a ferry, so may drop out randomly, sorry
06:25
<Hixie>
now if we grant that the elements must remain synced, then the second is equivalent to setting an offset
06:26
<Hixie>
the former seems bad because there's no intuitive reason why setting one track's position should affect other tracks, especially since it might set the other tracks to entirely different numbers
06:26
<Hixie>
(since they might have different "zero" times)
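(Purely to illustrate the second option as described above, not spec text: treating an assignment to a slave's currentTime, while slaved, as an adjustment of that element's offset on the controller's timeline.)

    // Illustrative only; `offset` and `controllerTime` are invented names.
    // The slave's media time is taken to be controllerTime - offset.
    function setSlaveCurrentTime(slave, newTime, controllerTime) {
      var oldTime = controllerTime - slave.offset;   // where it plays now
      slave.offset += oldTime - newTime;             // shift so it plays newTime
    }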
06:26
<nessy1>
except if you interpret the second as a local positioning only - so when the user changes the currentTime of the controller, it snaps back into place
06:27
<Hixie>
that would be essentially useless, especially while playing
06:27
<Hixie>
designing useless APIs is also bad api design :-)
06:27
<nessy1>
how so?
06:27
<Hixie>
it's essentially the same as saying it's ignored
06:28
<nessy1>
I can see it being very useful - e.g. I have a sign language track and a main video - I watch both - I miss some parts in the sign language and just scroll back on that to watch something again - then I play the full composition again in sync
06:28
<Hixie>
if you can play one track and the others don't move then it's not synced...
06:28
<nessy1>
not when you directly interact with it
06:29
<nessy1>
isn't that the beauty of a controller?
06:29
<Hixie>
the beauty of a controller is that the api isn't asymmetric
06:29
<nessy1>
that it only controls the slaves when you interact with it, and otherwise leaves them alone?
06:30
<Hixie>
you can't interact with a controller, it's a js object, it has no UI
06:30
<Hixie>
i'm not sure i follow what you're proposing
06:30
<Hixie>
anyway i have to go to bed now
06:30
<nessy1>
well, if they are all slaved together, then I don't see why the first option doesn't make sense
06:30
<Hixie>
i'll finish this tomorrow night i guess
06:30
<nessy1>
no worries
06:30
<nessy1>
nn
06:30
<nessy1>
(it's hard!)
06:31
<Hixie>
this is by orders of magnitude not what i'd call hard, it's just finicky
06:31
<Hixie>
if you think this is hard you should try writing the html parser spec :-)
06:32
<Hixie>
nn
07:48
<zcorpan>
Hixie: i'm happy to review books
07:49
<zcorpan>
at least if "review" means "point out errors to the author", not "publish a review to make the book sell more copies"
07:57
<VISHAL>
Hi
07:57
<VISHAL>
Hope this is the right channel to ask about html5
08:02
<VISHAL>
i am trying to access the server to get some data using ajax; the application is on the same server, but it shows "Origin null is not allowed by Access-Control-Allow-Origin"
08:15
<zcorpan>
Hixie: there's a problem with overlaying a sign-language video and using native controls
08:15
<zcorpan>
Hixie: because the overlaid video overlaps the native controls
08:19
zcorpan
filed a bug
08:26
<hsivonen>
what's the most realistic documentation of mutation events?
09:46
<gsnedders>
Where does WebIDL forbid assignment to read only attributes in the ES binding?
09:50
<gsnedders>
Oh, "The attribute setter is undefined if the attribute is declared readonly and has neither a [PutForwards] nor a [Replaceable] extended attribute declared on it"
09:51
<gsnedders>
Which means that the exact behaviour depends upon strict-mode
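(For anyone following along, the strict-mode difference is the plain ES5 rule for accessor properties with no setter, which is how such a readonly attribute is exposed; a generic illustration:)

    // Assignment to an accessor property that has no setter is silently
    // ignored in sloppy mode and throws a TypeError in strict mode.
    var obj = {};
    Object.defineProperty(obj, 'readOnlyAttr', {
      get: function () { return 42; }   // no setter defined
    });

    obj.readOnlyAttr = 1;               // sloppy mode: no effect
    console.log(obj.readOnlyAttr);      // 42

    (function () {
      'use strict';
      try {
        obj.readOnlyAttr = 1;           // strict mode: throws
      } catch (e) {
        console.log(e instanceof TypeError);   // true
      }
    })();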
09:54
<foolip>
Hixie, I'm here now
10:06
<Lachy>
Hixie, yt?
10:10
<jgraham>
gsnedders: Yes
10:10
<jgraham>
Lachy: 00:34 < Hixie> anyway i have to go to bed now
10:11
<jgraham>
I know that isn't always a reliable indicator, but it was 4 hours ago
10:12
<Lachy>
jgraham, "00:34" is 11 hours ago, unless you're running in a weird timezone
10:13
<jgraham>
Yes the server is in the US somewhere and I am too lazy to change the timezone in irssi
10:13
<Lachy>
ok
10:15
<zcorpan>
ok i have updated http://dev.w3.org/html5/html4-differences/
10:16
<zcorpan>
i guess i should set the date to 5th also
10:17
<zcorpan>
there
10:33
<jgraham>
Hixie: BTW I might also be interested in doing book review, if they need more volunteers (assuming the same definition as zcorpan)
10:36
<jgraham>
MikeSmith: BTW, as gsnedders reminded me, I will not be around on 20th May, and possibly some of the following week
10:52
<hsivonen>
anyone got an insertAdjacentHTML test suite?
10:55
<jgraham>
I don't at least
11:09
<hsivonen>
googling for mutation events shows an unfortunate interest in them
11:28
<hsivonen>
Someone seems to believe that whatwg members want to filter Norm: http://twitter.com/#!/gimsieke/status/51567252096548865
12:50
<MikeSmith>
jgraham: how about the week of May 8 to 14?
12:50
<jgraham>
MikeSmith: That is good for me, but it's before gsnedders finishes his exams
12:50
<MikeSmith>
oh
12:51
<MikeSmith>
anyway, I'm busy on the 20th as well
12:51
<MikeSmith>
but free after that
12:51
<jgraham>
I will *probably* be free the following week
12:52
<jgraham>
Not sure about gsnedders though
12:52
<MikeSmith>
OK, let's see what he says when he's back here
12:53
<MikeSmith>
zcorpan: doc looks good
12:53
<MikeSmith>
I will try to get it staged up today at http://www.w3.org/TR/2011/WD-html5-diff-20110405/
12:55
<MikeSmith>
hmm "The action and formaction attributes are no longer allowed to have the empty string as value."
12:56
MikeSmith
tries to remember if any validator fix has been made for that yet
12:56
<MikeSmith>
oh yeah, value is common.data.uri.non-empty
12:58
<zcorpan>
MikeSmith: cool
13:06
<hsivonen>
wow. this time writing test cases really pays off
13:06
hsivonen
is implementing insertAdjacentHTML
13:07
<jgraham>
hsivonen: Planning to release the tests?
13:08
<hsivonen>
jgraham: yes, in the "pushed to m-c" sense
13:08
<hsivonen>
jgraham: other publication depends on how easy it is to repurpose mochitests as HTML WG tests
13:17
<hsivonen>
jgraham: the tests are now public but not necessarily useful outside mochitest: https://bugzilla.mozilla.org/attachment.cgi?id=523283&action=diff
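(For reference, the four positions insertAdjacentHTML accepts, in a trivial example; this is not taken from the tests linked above:)

    // insertAdjacentHTML parses the given markup and inserts the resulting
    // nodes relative to the element, without reparsing its existing children.
    var p = document.createElement('p');
    p.textContent = 'middle';
    document.body.appendChild(p);

    p.insertAdjacentHTML('beforebegin', '<span>before the element</span>');
    p.insertAdjacentHTML('afterbegin',  '<em>first child</em>');
    p.insertAdjacentHTML('beforeend',   '<em>last child</em>');
    p.insertAdjacentHTML('afterend',    '<span>after the element</span>');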
13:28
<volkmar>
does anyone know what the use case for the labels attribute is? (I see a few internal browser uses but none for authors)
13:29
<hsivonen>
jgraham: Is the execution flow in the HTML WG test harness one-to-one mappable to mochitest yet?
13:41
<jgraham>
hsivonen: I think the answer to your question is probably "no" although I don't understand the question.
13:42
<jgraham>
One could write a MochiTest wrapper for the HTML WG harness
13:42
<hsivonen>
jgraham: could I just take the tests that I linked to above, change the assertion function names and have it work?
13:42
<jgraham>
No
13:42
<hsivonen>
(maybe adding some kind of explicit finish() call)
13:42
<hsivonen>
jgraham: ok :-(
13:43
<zcorpan>
jgraham: btw, it'd be nice to have a second version of t.step() that returns a function
13:44
<hsivonen>
jgraham: I'd really like to get to a point where the difference between Mochitest and HTML WG tests is just trivialities in naming that can be addressed with a simple wrapper
13:44
<hsivonen>
but I really don't expect to spend the time to adapt mochitests to .step() in order to contribute
13:56
<jgraham>
zcorpan: Yes, I was thinking t.step_func or so
13:57
<jgraham>
hsivonen: It would be trivial to write a wrapper that collected the results of a testharness.js test and returned them to mochitest
13:57
<jgraham>
zcorpan: I can add that now if you need it
13:58
<hsivonen>
jgraham: what about the other way round?
13:58
<jgraham>
hsivonen: I don't know about the other way around because I don't know how MochiTest exposes its test results
13:59
<jgraham>
testharness.js has callbacks for each test that completes and for the whole suite completing
14:00
<jgraham>
So you would just hook into those and do something like function on_result(test) {is(test.result === test.PASS, test.message)}
14:00
<jgraham>
(and hook on_result up to the right callbacks)
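(Put together, such a wrapper might look roughly like this; the callback and property names are assumptions about the harness of the day, and ok()/SimpleTest are mochitest's:)

    // Sketch: forward testharness.js results into mochitest's reporting.
    // Assumes the harness exposes add_result_callback/add_completion_callback
    // and that each result object carries a status and a message; the exact
    // property names here are assumptions, adjust to the harness in use.
    SimpleTest.waitForExplicitFinish();

    add_result_callback(function (test) {
      ok(test.status === test.PASS,
         test.name + (test.message ? ': ' + test.message : ''));
    });

    add_completion_callback(function () {
      SimpleTest.finish();   // tell mochitest the async run is done
    });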
14:02
<zcorpan>
jgraham: that'd be nice
14:03
<hsivonen>
jgraham: what about window.onerror?
14:08
<jgraham>
hsivonen: What about it?
14:09
<hsivonen>
jgraham: are tests that rely on window.onerror now accepted in HTML WG test submissions?
14:09
<jgraham>
Rely on it in what way? Tests that test it are fine
14:10
<jgraham>
Tests that require it when they don't need to seem extremely dubious to me
14:10
<hsivonen>
jgraham: If something in the global scope fails, it is caught by window.onerror
14:11
<hsivonen>
which would matter if we submitted tests that failed in some UA
14:11
<jgraham>
hsivonen: In general relying on that is discouraged, not least because when testing window.onerror one will need exceptions that propagate to the global scope
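(By contrast, a test that deliberately targets window.onerror has to let an exception reach the global scope; a purely illustrative sketch:)

    // Illustrative only: exercising window.onerror itself. Other tests should
    // avoid letting exceptions propagate this far, for the reason given above.
    var caught = false;
    window.onerror = function (message, url, line) {
      caught = true;
      return true;   // suppress the browser's default error reporting
    };

    // Throw from a separate task so the exception reaches the global scope.
    setTimeout(function () { thisFunctionDoesNotExist(); }, 0);

    setTimeout(function () {
      console.log('window.onerror fired: ' + caught);   // expected: true
    }, 100);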
14:14
<zcorpan>
jgraham: step_func would support only one argument, right? javascript doesn't support passing along an arbitrary number of arguments, does it
14:15
<jgraham>
zcorpan: Sure it does
14:16
<zcorpan>
oh?
14:16
<hsivonen>
I think Mozilla needs someone whose primary work item is wrapping HTML WG tests into the mochitest reporting system and wrapping mochitests into the HTML WG reporting system
14:16
<jgraham>
hsivonen: http://hoppipolla.co.uk/tests/insert_adjacent_html.html is a very quick transliteration, which probably has bugs
14:19
<gsnedders>
MikeSmith: 8 to 14th definitely can't do. Week after the 20th I *might* be able to do. Almost certainly end of that is doable.
14:19
<MikeSmith>
ok
14:21
<Workshiva>
zcorpan: apply is your friend
14:21
<jgraham>
Workshiva: Or your enemy
14:22
<Workshiva>
Only if you mistreat it
14:26
<zcorpan>
ah
14:26
zcorpan
doesn't know the javascript fu
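(The fu in question: Function.prototype.apply forwards an arbitrary argument list, so a step_func-style wrapper does not need to fix the arity. A generic sketch, not the harness's actual code:)

    // Generic sketch of a wrapper that forwards however many arguments it
    // receives; `this_obj` stands in for the test object, names invented here.
    function step_func(this_obj, func) {
      return function () {
        return func.apply(this_obj, arguments);   // spread whatever we got
      };
    }

    // Usage: the returned function can be assigned directly as a handler.
    // something.onmessage = step_func(t, function (e) { /* assertions */ });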
14:26
<hsivonen>
jgraham: there are some failures in the translation in my build that passes the mochitest
14:27
<hsivonen>
jgraham: all "Should have had <a> as next sibling" tests throw
14:28
<hsivonen>
and tests 30 through 33 say expected is undefined
14:28
<jgraham>
so you want to be able to do something like something.some_event = t.step_func(function(e) {assert_equals(1, e.a)}) or something?
14:29
<zcorpan>
yes
14:30
<zcorpan>
so one argument is good enough for me
14:34
<Workshiva>
Just make sure you get an argument and not abuse
14:36
<jgraham>
hsivonen: typo, fixed
14:38
<jgraham>
Oh, I missed the "script should not have run" bits
14:40
<hsivonen>
jgraham: now all tests pass
14:43
<jgraham>
hsivonen: Fancy cleaning up the "script should not have run" bits (e.g. by setting a flag in the script and checking its value doesn't change) and submitting to HTML WG?
14:44
<jgraham>
Or I could do that I suppose…
14:54
<gsnedders>
MikeSmith: I guess a more reasonable question from my POV is: what is the latest the meeting can be and still suit you?
14:55
<MikeSmith>
the end of that week I guess
15:02
<hsivonen>
jgraham: I can do it, but if you are already doing it, go ahead
15:04
<gsnedders>
MikeSmith: mmhmm, maybe doable
15:06
<jgraham>
gsnedders: Am I missing something on the RegExp.prototype.compile thread?
15:07
<gsnedders>
jgraham: I'm sure I am.
15:08
<jgraham>
gsnedders: OK, your reply makes sense to me
15:13
<gsnedders>
jgraham: At least it makes sense to someone
15:54
zcorpan
notes that there's no EventSourceSync for workers
15:54
<zcorpan>
but maybe that doesn't make any sense
16:51
<zewt>
... websql stopped because sqlite was too good? heh
17:02
<TabAtkins>
zewt: More precisely, sqlite was "good enough" that nobody seriously wanted to write an independent implementation. It's still not very good, though.
17:03
<TabAtkins>
(We're currently using sqlite as the backing store for indexeddb, and it's pretty slow and horrible. We're writing a specialized backing store just for it right now.)
17:03
<zewt>
sqlite is pretty excellent; it's definitely not "slow and horrible"
17:03
<TabAtkins>
Tell that to our indexeddb folks.
17:04
<zewt>
idb seems like a heinous wheel reinvention; the world doesn't need another completely distinct database vocabulary
17:04
<zewt>
(vs. sql)
17:05
<TabAtkins>
indexeddb is a pretty standard simple object store.
17:06
<zewt>
and they're spending all kinds of time trying to solve things like "how do we define multi-key indexes with different orderings on each key", stuff which is already solved in SQL
17:07
<TabAtkins>
Yes?
17:07
<zewt>
yes?
17:07
<zewt>
sorry, that's not a very meaningful response. heh
17:07
<TabAtkins>
SQL is still a relational model, which, for whatever reason, a lot of people simply can't wrap their heads around. I don't understand why, because I found relational algebra pretty trivial, but whatever.
17:08
<TabAtkins>
Linear object stores appear to be much more intuitive to most people.
17:08
<TabAtkins>
And then we need to find a way to map more advanced concepts onto the simpler model, in a clean and understandable way.
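(To make the object-store model concrete, a minimal IndexedDB sketch; vendor prefixes and the 2011-era setVersion details are glossed over, and the compound index shows exactly where the per-key ordering question comes in:)

    // Minimal IndexedDB usage: one object store, one simple index, and one
    // compound index. Note there is no way to give each component of the
    // compound key path its own sort direction, which is the gap discussed.
    var request = indexedDB.open('notes', 1);

    request.onupgradeneeded = function (e) {
      var db = e.target.result;
      var store = db.createObjectStore('notes', { keyPath: 'id' });
      store.createIndex('by_created', 'created');
      store.createIndex('by_author_created', ['author', 'created']);
    };

    request.onsuccess = function (e) {
      var db = e.target.result;
      var tx = db.transaction('notes', 'readwrite');
      tx.objectStore('notes').put({ id: 1, author: 'aho', created: Date.now(), text: 'hello' });
    };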
17:09
<zewt>
i can't say i have much sympathy for people who can't understand SQL. heh
17:09
<zewt>
basic stuff.
17:10
<TabAtkins>
I have sympathy for developers in general. That's why I obsess so much over things being simple in my specs.
17:11
<Philip`>
Given how many inexperienced people manage to develop web sites with PHP+MySQL, it seems it's understandable enough to get by
17:11
<TabAtkins>
Have you looked at their databases?
17:11
TabAtkins
shudders.
17:11
<zewt>
heh, that may not be the most flattering comparison if you consider how secure those sites probably are on the whole :|
17:12
<Philip`>
Their databases are probably no worse than their PHP code :-)
17:12
<TabAtkins>
Point. ^_^
17:13
<Philip`>
They've probably never heard of relational algebra but that doesn't stop them using the tools to solve their problems
17:13
<TabAtkins>
Indeed, and the relational algebra does *not* help them solve the problem. They just bodge on and create something pretty bad. We can offer them something that more closely matches their intuitive model, so when they just throw something together it's not as bad.
17:15
<zewt>
what bothers me a lot, though, is the notion that WebSQL development died because everyone used sqlite--that's exactly what they should be doing
17:15
<TabAtkins>
No, because then the web depends on sqlite's bugs, rather than the spec.
17:15
<zewt>
sqlite is in a small category of libraries that are so widely-used, heavily tested and robust that, for tasks it's designed for, it's generally a really bad idea to not use them (along with eg. zlib, libpng/jpeg, etc)
17:16
<TabAtkins>
All of those latter libraries definitely have problems as well, and we'd be better off if we spent the effort to reimplement them.
17:16
<TabAtkins>
They're just "good enough" that we don't do so.
17:16
<zewt>
none of those libraries need reimplementing.
17:17
<TabAtkins>
Luckily, the problems with those libraries are almost completely hidden from the web author. Exposing sqlite via websqldb means its problems *aren't* hidden.
17:18
<TabAtkins>
Again, ask our devs. I've heard a lot of chatter about people wanting to reimplement libpng, for example, due to its horribleness.
17:18
<Philip`>
Seems pretty typical for programmers to want to reimplement everything - you always think you can do better yourself
17:19
<zewt>
heh
17:19
<zewt>
i'm not immune from NIH, but i'd never go there for that set of libraries
17:24
<Philip`>
(Usually you *can* do better, because you've got the experience of the old design and an extra decade of research, and better development tools and practices)
17:24
<Philip`>
(which makes it dangerously tempting)
17:25
<zewt>
oh, you could do better--but not enough better, IMO, to warrant throwing away the extra decade of testing and real-world use
17:26
<zewt>
(personally, zlib's buffering API always annoys me when I have to use it, for example, but everyone's used to it)
17:27
<Philip`>
(API problems are usually the easiest thing to resolve - just stick a wrapper API around it)
17:27
<zewt>
yeah.
17:28
<Philip`>
(libpng's use of setjmp is kind of nasty, but not enough to really matter)
17:29
<zewt>
oh yeah, i forgot about that. lua does that too
17:29
<TabAtkins>
I think that's one of the things that annoys some of our devs a lot.
17:29
<zewt>
that's pretty much wrappable too, i think.
17:32
<Philip`>
The more important problems are probably things like libjpeg using a poor decoding algorithm (given e.g. http://cbloomrants.blogspot.com/2011/02/02-13-11-jpeg-decoding.html)
17:35
<Philip`>
but then it's nice to fix the problems as non-disruptively as possible, e.g. change the implementation but don't redesign the whole API unless it's really necessary, and definitely don't make a whole new file format (e.g. WebP) that's incompatible with everybody else in the world
17:35
<zewt>
i'd sooner hope that improvements like that would make their way into libjpeg, not replace it outright
17:36
<zewt>
of course, that depends on how flexible libjpeg's internals are (which I have never needed to look at--which is a major part of why I like libjpeg so much :)
17:41
<stefan-_>
hi
17:41
<stefan-_>
how safe in terms of browser compat is it to use a contentEditable based editor?
17:43
<TabAtkins>
It's not.
17:43
<TabAtkins>
At least, not if you want to use execCommand().
17:44
<TabAtkins>
If you're willing to roll your own editing commands, then it's okay.
17:44
<stefan-_>
just for simple formatting (bold, underline, etc)
17:45
<stefan-_>
http://www.freshcode.co.za/plugins/jquery.contentEditable/demo.html
17:45
<stefan-_>
im using this one currently
17:45
<stefan-_>
seems to work ok in ie6 and chrome so far
17:47
<stefan-_>
so what do you mean by rolling my own editing commands?
17:48
<stefan-_>
http://www.quirksmode.org/dom/execCommand.html
17:48
<stefan-_>
nice
17:48
<stefan-_>
that should be satisfactory
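(The simple-formatting case described above boils down to something like this; execCommand output differs between browsers, which is the compatibility caveat, so treat it as a sketch:)

    // Sketch: an editable region with basic formatting via execCommand.
    // Browsers disagree on the markup produced (e.g. <b> vs <strong>).
    var editor = document.createElement('div');
    editor.contentEditable = 'true';
    editor.textContent = 'Select some text and format it.';
    document.body.appendChild(editor);

    function format(command) {
      // Acts on the current selection inside the editable region.
      document.execCommand(command, false, null);
    }

    // Hypothetical toolbar buttons (assumed to exist with these ids):
    // document.getElementById('bold').onclick = function () { format('bold'); };
    // document.getElementById('underline').onclick = function () { format('underline'); };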
17:50
<micheil>
Hixie: do you know if all vendors currently implementing websockets support the Events IDL?
17:50
<micheil>
(as in WebSocket inherits from the Events api)
17:52
<jgraham>
micheil: what do you actually mean?
17:52
<jgraham>
All relevant browsers implement DOM events…
17:52
<micheil>
well, each websocket instance has the .addListener methods
17:53
<micheil>
is a websocket guaranteed to inherit from DOMEvents
17:54
<jgraham>
By the spec, sure
17:54
<jgraham>
browsers can have bugs ofc
17:54
<jgraham>
If you want to ensure that they don't you'll have to test
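(In code, the question is whether both of these forms work; the spec requires them, but as noted, individual implementations need testing. The URL is just a placeholder:)

    // WebSocket objects are EventTargets per the spec, so both the on*
    // handler attributes and addEventListener should be usable.
    var ws = new WebSocket('ws://example.com/socket');

    ws.onopen = function () {                       // handler attribute form
      ws.send('hello');
    };

    ws.addEventListener('message', function (e) {   // EventTarget form
      console.log('got: ' + e.data);
    });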
19:45
beowulf
resents jgraham's suggestion that work must be done before cakes can be had
20:11
<jgraham>
There is cake?
20:12
<bfrohs>
The cake is a lie.
21:09
<zcorpan>
hsivonen: you have access to the @whatwg twitter?
21:10
<zcorpan>
hsivonen: wondered if you could RT https://twitter.com/zcorpan/status/53548207921315840 from @whatwg (or write a new tweet)
21:19
<antti_s>
is there an offline html5 validator available? (other than a local instance of validator.nu)
21:19
<Ms2ger>
That's the only one I know
21:20
<zcorpan>
is there an html5 validator other than validator.nu?
21:29
<karlcow>
zcorpan: validator.w3.org, but that uses the same engine as validator.nu. I do not know of any other implementations of html5 validation. Unfortunately.
21:30
<zcorpan>
yeah, it would be nice to have some competition there - i'd like a validator that was tightly integrated with browser dev tools
21:31
<zcorpan>
where clicking an error would bring up the relevant element in the dom tree view
21:32
bfrohs
misses his old validator/dev tool that did exactly that
21:33
<zewt>
out of curiosity, when is that more useful than going to the corresponding source line?
21:34
<zcorpan>
there might not be a source line if you're doing stuff with script
21:34
<zewt>
i suppose if there were any in-place validators that could validate a DOM tree directly, eg. to validate dynamic content
21:35
<zcorpan>
with today's web apps, what the validator sees is not so useful to validate
21:44
<jgraham>
By the time you have a DOM you missed half the errors anyway
21:44
<jgraham>
Of course you can collect those up at the time
21:45
<zcorpan>
sure, parse errors would be logged too
21:45
<karlcow>
http://lists.w3.org/Archives/Public/www-archive/2011Mar/0016
21:45
<zcorpan>
also those from innerHTML etc, which could point to the line of script where it occurred
21:46
<karlcow>
DOM SpellChecker by Sean Palmer
21:49
<zcorpan>
karlcow: is that just thoughts or running code?
21:50
<karlcow>
zcorpan: thoughts I think. But Sean Palmer is known to hack too. So I do not know if he tried