> 1) What's the measured connection/socket footprint in KB per active connection?
Very low, on the order of a couple of kB per connection. We have stress-tested up to 180k concurrently active connections on low-end hardware (2 cores, 4 GB RAM).
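As a sanity check on those figures, here is a back-of-envelope calculation: even if all of the 4 GB in that stress test had gone to connection state (user space plus kernel buffers), the per-connection ceiling would still be small:

```python
# Back-of-envelope check on the stress-test figures quoted above:
# 180k connections on a 4 GB box. Even if ALL memory went to connections,
# the per-connection ceiling (user space + kernel buffers) would be:
ram_bytes = 4 * 1024**3
connections = 180_000
ceiling_kb = ram_bytes / connections / 1024
print(f"{ceiling_kb:.1f} KB/connection upper bound")  # ≈ 23.3
```

The actual per-connection footprint is far below that ceiling, since the OS, Python runtime, and application logic consume most of the memory.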
I don't have the exact numbers at hand right now, and the specifics depend on the OS and on kernel parameters such as TCP window/buffer sizes.
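For reference, "tuning" here usually means raising file-descriptor limits and keeping default TCP buffers small, so per-connection kernel memory stays low. On Linux, the relevant sysctl knobs look like this (values are purely illustrative, not recommendations):

```
# /etc/sysctl.conf (illustrative values only)
fs.file-max = 1000000                    # system-wide file descriptor limit
net.ipv4.tcp_rmem = 4096 4096 16777216   # receive buffer: min / default / max
net.ipv4.tcp_wmem = 4096 4096 16777216   # send buffer: min / default / max
net.core.somaxconn = 8192                # listen/accept backlog
```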
> 2) Is there any easy way to extend Autobahn to use a particular
> back-end (e.g. Redis) for PubSub? (scalability / load-balanced situation)
Not right now, but there are plans.
In general, the AutobahnPython message broker, running on a tuned kernel and PyPy, is already very scalable vertically, to something like 200k connections. In many scenarios, this is more than enough (not every site needs to serve millions of users).
Then, if you have requirements to scale to _millions_ of concurrently active connections, a horizontal scale-out architecture will be needed.
This can of course be done. Whether Redis would be the right tool, or even whether scaling out at the PubSub layer would be the right approach, I'm not sure.
For example, the first limit of a single instance is the number of TCP connections carrying WebSocket that it can sanely terminate.
I can imagine a scale-out architecture with a cluster of frontend "WebSocket concentrators" that do nothing but terminate massive numbers of incoming TCP/TLS/WebSocket connections and forward data to a _single_ PubSub broker over one TCP connection each. The broker then only serves
maybe hundreds of TCP connections (non-TLS, possibly binary WebSocket) incoming
from the WebSocket concentrators.
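To make the fan-in idea concrete, here is a toy sketch using plain stdlib TCP streams instead of real WebSocket/TLS termination. All names, ports, and the newline-delimited message format are made up for illustration; a real concentrator would speak WebSocket to clients:

```python
import asyncio

# Toy sketch of the concentrator fan-in pattern: many client connections
# are terminated by a "concentrator", which multiplexes all their traffic
# onto a SINGLE upstream TCP connection to the broker.

async def run_demo():
    received = []

    # Broker side: reads newline-delimited messages from one upstream pipe.
    async def broker_conn(reader, writer):
        while line := await reader.readline():
            received.append(line.decode().strip())
        writer.close()

    broker = await asyncio.start_server(broker_conn, "127.0.0.1", 0)
    broker_port = broker.sockets[0].getsockname()[1]

    # Concentrator holds exactly ONE connection to the broker.
    up_reader, up_writer = await asyncio.open_connection("127.0.0.1", broker_port)

    # Concentrator side: terminates many client connections, forwards
    # every message over the single upstream connection.
    async def client_conn(reader, writer):
        while line := await reader.readline():
            up_writer.write(line)          # fan-in: one upstream pipe
            await up_writer.drain()
        writer.close()

    conc = await asyncio.start_server(client_conn, "127.0.0.1", 0)
    conc_port = conc.sockets[0].getsockname()[1]

    # Simulate a handful of clients publishing one message each.
    for i in range(5):
        r, w = await asyncio.open_connection("127.0.0.1", conc_port)
        w.write(f"msg-{i}\n".encode())
        await w.drain()
        w.close()

    await asyncio.sleep(0.2)               # let messages propagate
    up_writer.close()
    broker.close()
    conc.close()
    return sorted(received)

messages = asyncio.run(run_demo())
```

The key property the sketch demonstrates: however many clients connect, the broker only ever sees one TCP connection per concentrator.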
Keep in mind that nowadays a single mid-range machine can easily saturate a 10GbE link. Also keep in mind that you will need a load balancer
in front of the scale-out concentrator cluster, likely operating at L3.
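To put the 10GbE figure in perspective, here is a rough throughput ceiling, assuming a hypothetical 500-byte average message and ignoring framing and TCP overhead:

```python
# Rough ceiling on message throughput over a single 10GbE link.
# The 500-byte average message size is an assumption for illustration.
link_bits_per_s = 10 * 10**9
msg_bytes = 500
msgs_per_s = link_bits_per_s / 8 / msg_bytes
print(f"{msgs_per_s / 1e6:.1f}M messages/sec")  # ≈ 2.5M
```

In other words, for small messages, the broker's network link is unlikely to be the first bottleneck; connection termination and CPU usually are.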
Anyway, there would be a lot more to say. If you have a commercial project, we (Tavendo) can surely build something for you!