14:46
<zcorpan_>
shouldn't the final wcag samurai be published by now?
16:36
<Lachy>
this has just been published! http://www.w3.org/TR/xbl-primer/
16:43
<zcorpan_>
Lachy: s/XmlHttpRequest/XMLHttpRequest/
16:44
<zcorpan_>
Lachy: s/how you can to simplify/how you can simplify/
16:44
<Lachy>
ah, you can blame Marcos for that :-)
16:44
<zcorpan_>
seems there are lots of typos
16:45
<Lachy>
oh well, that's probably 'cause I didn't get a chance to proofread the parts I didn't write
16:46
<Philip`>
The image in Figure 1 gets rescaled quite uglily
16:46
<Lachy>
yeah, I just noticed
16:47
<Lachy>
can you send mail to public-appformats with all the errors? either Marcos or I will fix them later this week
16:50
<Lachy>
I've fixed those 3 mistakes and will check them in shortly
17:12
<zcorpan_>
Lachy: shouldn't’t
17:12
<zcorpan_>
in 1.8 Templates
17:14
<Lachy>
fixed
17:14
<Lachy>
good night, cya tomorrow (send mail for any others)
17:15
<zcorpan_>
nn
19:37
<jgraham>
Philip`: Am I correct in thinking that your tokenizer can only dump tokens to stdout?
19:38
jgraham
is interested in writing python bindings and plugging it in to html5lib
19:39
<jgraham>
Where by "writing" I mean "using e.g. SWIG to generate"
19:47
<Philip`>
jgraham: The token stream is handled via an interface class, where the default implementation dumps JSON but it can be switched (e.g. by '--stats' on the command line) to a different implementation that counts tags/etc instead
19:48
<Philip`>
and so it should be possible to add another token-stream implementation that pushes the tokens into Python
19:49
<Philip`>
(It does the same kind of thing for the input stream too, though the current implementation is kind of rubbish since it doesn't handle non-ASCII correctly)
19:49
<Philip`>
((since it just reads bytes from stdin and pretends they're characters))
19:49
<jgraham>
Ah, OK. I'd just seen the bit in emitToken that looks like std::cout << printBuf
19:50
<Philip`>
That's just in the TestTokenStream class, which is the JSON-printing one
19:50
<Philip`>
(NullTokenStream is kind of useless and does nothing, and StatsTokenStream does the counting)
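(A rough sketch of the interface-class arrangement Philip` describes. The names TestTokenStream, NullTokenStream and StatsTokenStream come from the chat above, but the Token type, method signatures and simplified bodies below are assumptions, not the tokeniser's actual code:)

    // Sketch only: the real tokeniser's token representation and method
    // names may differ; everything here is illustrative.
    #include <iostream>
    #include <map>
    #include <string>

    struct Token {
        enum Type { StartTag, EndTag, Character, Comment, Doctype, EOFToken } type;
        std::string data;   // tag name or character data
    };

    // The tokeniser only talks to this interface, so the implementation
    // can be swapped (e.g. via a --stats command-line switch).
    class TokenStream {
    public:
        virtual ~TokenStream() {}
        virtual void emitToken(const Token& t) = 0;
    };

    // Roughly what TestTokenStream does: print each token (as JSON) to stdout.
    class JSONTokenStream : public TokenStream {
    public:
        void emitToken(const Token& t) {
            std::cout << "{\"type\": " << t.type
                      << ", \"data\": \"" << t.data << "\"}\n";
        }
    };

    // Roughly what StatsTokenStream does: count start tags instead of printing.
    class TagCountTokenStream : public TokenStream {
    public:
        std::map<std::string, int> counts;
        void emitToken(const Token& t) {
            if (t.type == Token::StartTag)
                ++counts[t.data];
        }
    };

A Python-feeding token stream would just be another subclass whose emitToken hands each token across the bindings, which is why the html5lib idea doesn't require touching the tokenising code itself.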
19:51
jgraham
should learn to read more context
19:53
<Philip`>
Would it be useful if the tokeniser could emit one token at a time, so you could pull data from it rather than having it run uninterruptibly until EOF?
19:58
<jgraham>
Philip`: Yes
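(A minimal sketch of what a pull-style API could look like; the method name, the EOF signalling and the Token stand-in are assumptions, not Philip`'s actual design:)

    #include <string>

    struct Token { int type; std::string data; };   // minimal stand-in

    class PullTokenizer {
    public:
        virtual ~PullTokenizer() {}
        // Fills 'out' with the next token; returns false once EOF is
        // reached, so the caller decides when to pull and when to stop.
        virtual bool nextToken(Token& out) = 0;
    };

    // Caller-side loop (e.g. inside Python bindings driving html5lib):
    //     Token t;
    //     while (tok.nextToken(t)) {
    //         feedToTreeBuilder(t);   // hypothetical callback into Python
    //     }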
22:05
gsnedders
hits parse error in one of karl's emails and aborts processing
22:10
<zcorpan>
gsnedders: pointer?
22:10
<gsnedders>
zcorpan: by parse error I mean English mistake
22:11
<zcorpan>
gsnedders: yes; pointer to the email?
22:11
<gsnedders>
zcorpan: I won't be able to find it quickly now
22:11
<zcorpan>
k