11:23 | <annevk> | krosylight: how are push subscriptions scoped currently? An origin can in theory create infinite service workers. Would each of them be able to get a push subscription? Or is there some implementation-defined limit? |
12:37 | <annevk> | krosylight: ta, will do the same |
12:37 | <annevk> | krosylight: I plan on working on declarative web push again, so there will be some more activity until my vacation starts anyway |
14:23 | <annevk> | Speaking of which, anyone here familiar with whether there's anything remotely like file signatures for JSON? Using a rather unique key is the best I've come up with thus far. |
14:28 | <freddy> | for people using JSON Schema, there's the $schema key that points to a URL identifying the schema the document is supposed to conform to |
14:29 | <freddy> | maybe that's what you mean by a "rather unique key"? Even if a file has that special key, you'd still have to check that all keys and values are present, not duplicated, of the right type, etc. |
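(A minimal TypeScript sketch of the "rather unique key" idea discussed above; the marker key name, the expected fields, and the types are hypothetical, not from any spec. Note that JSON.parse silently keeps only the last of any duplicated keys, so the "not duplicate" check would need a different parser.)

```ts
// Hypothetical format marker: a distinctive top-level key that signals
// "this JSON is meant to be one of ours", followed by ordinary shape
// validation. Key and field names here are made up for illustration.
interface PushMessageShape {
  web_push_format: number;          // hypothetical marker/version key
  notification: { title: string };  // hypothetical payload field
}

function parsePushMessage(text: string): PushMessageShape | null {
  let parsed: unknown;
  try {
    parsed = JSON.parse(text);      // duplicate keys are collapsed here
  } catch {
    return null;                    // not JSON at all
  }
  if (typeof parsed !== "object" || parsed === null) return null;
  const obj = parsed as Record<string, unknown>;

  // The marker key only signals intent; every field still has to be
  // checked for presence and type, as noted above.
  if (typeof obj.web_push_format !== "number") return null;
  const notification = obj.notification as Record<string, unknown> | undefined;
  if (typeof notification !== "object" || notification === null) return null;
  if (typeof notification.title !== "string") return null;

  return obj as unknown as PushMessageShape;
}
```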
14:32 | <jrconlin> | Hi, push engineer here. So, technically, a user agent could have as many subscriptions as there is number space in a UUIDv4 (subscriptions are identified by "Channel IDs", or CHIDs). Push subscriptions are fairly cheap on the server side, since at most they require adding a mostly empty row to a Bigtable database. The server then encrypts the UAID (user agent ID) and CHID into the subscription URL, and that's what you get back. Of course, having that many subscription updates will do frankly horrible things to the UA, which has to manage, decrypt, and process them, so there's a very high likelihood that even if only a percentage of those subscriptions were active, you'd swamp the network and CPU of the UA. |
14:33 | <jrconlin> | If you're interested, the code we use is here. It's in Rust, with Python integration tests. |
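(For the page-side view of the above, separate from the server code jrconlin mentions: each service worker registration is keyed by its scope, and each registration's pushManager gets its own PushSubscription with its own opaque endpoint, which is how one origin can end up with many subscriptions. A rough TypeScript sketch using the standard Push API; the script URL and the server key are placeholders.)

```ts
// Rough sketch: one subscription per service worker registration. An
// origin can register many workers by varying the scope, and each
// registration's subscribe() call yields its own endpoint URL.
// "/sw.js" and the applicationServerKey value are placeholders.
async function subscribeForScope(
  scope: string,
  applicationServerKey: Uint8Array,
): Promise<PushSubscription> {
  const registration = await navigator.serviceWorker.register("/sw.js", { scope });
  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey,
  });
  // Distinct registrations get distinct opaque endpoints from the push service.
  console.log(scope, subscription.endpoint);
  return subscription;
}
```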
14:33 | <annevk> | It seems that that only self-identifies schemas, but yeah, I guess something like that is what I mean. And yeah, it's clear all the validation still has to happen. |
14:34 | <annevk> | But what about a website? A website can have many service workers (infinite, in fact) which means it could single-handedly DOS? That'd be bad, no? |
14:36 | <jrconlin> | By DOS, what are you intending to DOS? Our push server? The UA? |
14:38 | <annevk> | Not sure, maybe it doesn't really matter as it's all behind a permission anyway. But might be tricky for the end user to clean up if it goes bad and it's not anticipated. |
14:39 | <jrconlin> | Push currently handles a ludicrous number of messages a minute. We are very free with 503 messages if we see a source generating a lot of messages, and even more generous with the backoff messages if we see some service ignoring the 503s. |
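(A sketch of what that looks like from the sending side: an application server should treat a 503 from the push service as "back off", honouring Retry-After when present. The attempt count and delays below are arbitrary choices for illustration, and the Web Push request headers are left to the caller.)

```ts
// Illustrative only: POST an (already-encrypted) Web Push message and back
// off when the push service returns 503, as described above.
async function postWithBackoff(
  endpoint: string,
  body: BufferSource,
  headers: Record<string, string>,
): Promise<Response> {
  for (let attempt = 0; attempt < 5; attempt++) {
    const response = await fetch(endpoint, { method: "POST", headers, body });
    if (response.status !== 503) return response;

    const retryAfter = Number(response.headers.get("Retry-After"));
    const delayMs = retryAfter > 0
      ? retryAfter * 1000       // push service told us how long to wait
      : 2 ** attempt * 1000;    // otherwise exponential fallback
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("push service kept returning 503; giving up");
}
```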
14:39 | <jrconlin> | Ah, so you're thinking more on the UA side. |
14:41 | <annevk> | Yeah, I guess so. Services already have to scale and deal with abuse so the additional angle of many registrations for a single origin prolly doesn't hurt them much. |
14:47 | <jrconlin> | Ok, so here are a few odd scenarios that could happen: |
14:49 | <jrconlin> | I mean, in a perfectly frictionless network filled with spherical cows I can see this being a problem, but in reality I think there will be a number of other factors at play. At most, the user uninstalls or resets their browser. Subscriptions are generally not synced by any of the browser publishers for A LOT of reasons, so recovering won't bring the problem back unless they do the same thing again. |
14:50 | <annevk> | Yeah makes sense. I'm gonna ask around if we do anything, but I agree that things are prolly ok for the majority of cases. Thanks for the help! |
14:50 | <jrconlin> | I also don't mean to dismiss your concern. It's very valid. FWIW, there are a few things I've seen in my monitoring that are sketchy as hell and I'd love to know more of what the heck those connections are doing. |
14:51 | <jrconlin> | That said, PLEASE ask more questions like this. Poking at the protocol is really, really important and I absolutely welcome folk doing that. |
14:53 | <jrconlin> | (Honestly? I'd be fine if the UA were to limit the number of push subscriptions to something, even if it were some arbitrary number like 256.) Like I said, the bulk of the work is more on the UA than the backend server. |
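(A trivial sketch of that kind of cap as a UA-internal policy check run before minting a new subscription; the limit, the function name, and the storage lookup are all hypothetical, not from any spec.)

```ts
// Hypothetical UA-side policy: refuse new subscriptions once an origin
// already holds some arbitrary number of them, as suggested above.
const MAX_PUSH_SUBSCRIPTIONS_PER_ORIGIN = 256;

function mayCreateSubscription(
  origin: string,
  countSubscriptionsForOrigin: (origin: string) => number,
): boolean {
  return countSubscriptionsForOrigin(origin) < MAX_PUSH_SUBSCRIPTIONS_PER_ORIGIN;
}
```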
16:13 | <caitp> | is this a reasonable place to talk about a possible incongruity with web reality in webidl and/or dom? I asked ms2ger a long-ish question, but he's out sick. It's entirely possible that my mental math is just wrong here |
16:18 | <caitp> | Copying most of my question here, just in case it is an accurate read of the spec algorithms not reflecting the behaviour of most browsers.
Can you explain the logic in https://github.com/web-platform-tests/wpt/blob/master/dom/collections/HTMLCollection-supported-property-names.html#L114-L115 which results in this test being the expected behaviour? My read of this is: it's a legacy platform object (https://webidl.spec.whatwg.org/#js-legacy-platform-objects) with no unforgeables, so that simplifies things a lot.
114: define a property via https://webidl.spec.whatwg.org/#legacy-platform-object-defineownproperty. P is a string and O supports named properties, we aren't a [Global] interface, and P ("new-id2") is not unforgeable, so we're in step 2.1 with creating being true. O does not implement [LegacyOverrideBuiltIns], so skip to step 3. O does not implement [Global], so Desc.[[Configurable]] changes from false to true -- so we define an own configurable property with a value of 5.
121: "new-id2" becomes a supported property name of O.
124: LegacyPlatformObjectGetOwnProperty(O, P, false). In step 2.1 the named property visibility algorithm returns false (it's a supported property name, but we also have an own property), so we return OrdinaryGetOwnProperty(O, P) -- a descriptor with a [[Value]] of 5 and a [[Configurable]] of true, because of the steps on line 114.
126: [[Delete]] of a legacy platform object (https://webidl.spec.whatwg.org/#legacy-platform-object-delete). The named property visibility algorithm returns false again for the reasons listed above, and O has an own property P. Step 3.1: the property is configurable, because of the steps on line 114. Step 3.2: deletion is successful, the property is removed from O, and we return true. This is NOT what the test is expecting to happen (see line 133).
129: delete again; this time the named property visibility algorithm returns true, and we have no named property deleter, so we return false. Being in strict mode this time, we throw an exception (so this assertion would be successful).
133: we try to repeat the steps from line 124. However, because deletion was successful on line 126, this assertion fails (at least in my read-through).
However, I think all the major browsers are passing the test. So does this mean the spec is wrong, or am I just misreading this in a dumb way? |
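(Not the actual WPT test, but a rough TypeScript sketch of the sequence being walked through above, using document.images as the HTMLCollection O and "new-id2" as P; the line-number comments refer to the test lines cited above, and the expected outcomes follow this particular reading of the spec, which is the part in question.)

```ts
// Rough re-creation of the steps discussed above (illustrative only).
// O = document.images, P = "new-id2".
const collection: HTMLCollection = document.images;

// ~line 114: [[DefineOwnProperty]] while "new-id2" is not yet a supported
// property name (creating is true); per the walkthrough, [[Configurable]]
// is forced from false to true.
Object.defineProperty(collection, "new-id2", { value: 5 });

// ~line 121: make "new-id2" a supported property name by giving an <img>
// in the collection that id.
const img = document.createElement("img");
img.id = "new-id2";
document.body.appendChild(img);

// ~line 124: [[GetOwnProperty]] -- the own property shadows the named
// property, so this should show value 5, configurable true.
console.log(Object.getOwnPropertyDescriptor(collection, "new-id2"));

// ~line 126: [[Delete]] -- in the reading above, the own configurable
// property is removed and delete returns true, which is where this
// reading diverges from what the test asserts.
console.log(delete (collection as any)["new-id2"]);

// ~line 129: deleting again now targets the visible named property; with
// no named property deleter the spec returns false, which throws in
// strict mode (e.g. inside a module).
try {
  delete (collection as any)["new-id2"];
} catch (e) {
  console.log("second delete threw:", e);
}

// ~line 133: repeat [[GetOwnProperty]] -- in this reading the own
// property is gone, so only the named property (the element) remains.
console.log(Object.getOwnPropertyDescriptor(collection, "new-id2"));
```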