I’d like to be able to utilize additional cores on our instances. What is the best way for us to utilize more cores in our use case (i.e. mostly routing messages and occasionally establishing lots of new connections after restarting another service)?
Proxy workers can off-load (i.e. scale out to multiple cores or boxes) all of these portions of the WAMP routing workload:
- accepting and authenticating incoming WAMP connections from clients
- supporting multiple WAMP transports, including WebSocket and RawSocket
- supporting all WAMP serializers, including JSON, CBOR, MessagePack and FlatBuffers
- terminating TLS and off-loading encryption
- serving Web services, such as static files, redirects, HTML templates and so on
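To make this concrete, here is a minimal sketch of what a node configuration with a proxy worker can look like. The endpoint ports, the connection name `conn1` and the backend address are made up for illustration; the exact schema is in the Crossbar.io docs:

```json
{
   "workers": [
      {
         "type": "proxy",
         "transports": [
            {
               "type": "web",
               "endpoint": {"type": "tcp", "port": 8080},
               "paths": {
                  "ws": {"type": "websocket", "serializers": ["cbor", "json"]}
               }
            }
         ],
         "connections": {
            "conn1": {
               "transport": {
                  "type": "rawsocket",
                  "endpoint": {"type": "tcp", "host": "127.0.0.1", "port": 9000},
                  "serializer": "cbor"
               }
            }
         },
         "routes": {
            "realm1": {"anonymous": "conn1"}
         }
      }
   ]
}
```

The proxy worker accepts and authenticates client connections on its own transports and forwards WAMP traffic to a backend router worker over the configured connection.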
We’ve recently added some docs for these new features.
So if, for example, your clients connect over WAMP-WebSocket-JSON-TLS, you can already scale substantially using proxy workers alone.
Here is an example where we measured a test setup for a customer:
This test ran load clients on one machine, where the clients connect over RawSocket-TCP (no TLS, CBOR serialization) and call a WAMP procedure with 256 random bytes as the (single positional) argument.
The backend procedure called runs on the testee machine and simply returns the 256 random bytes it received as the call argument.
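The backend procedure in this test is essentially an echo. A minimal standalone sketch (the function name `echo` is made up; in the real setup this would be registered as a WAMP procedure, e.g. via Autobahn|Python):

```python
import os


def echo(payload: bytes) -> bytes:
    # The backend procedure simply returns the bytes it was called with.
    return payload


# The load clients call the procedure with 256 random bytes as the
# single positional argument:
args = os.urandom(256)
result = echo(args)
```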
- CrossbarFX configuration: 8 router workers, 16 proxy workers (proxy-to-router worker ratio of 2:1)
- 128 client connections were used (#routers × #proxies = 8 × 16 = 128)
- more than 150,000 WAMP calls/sec (@ 256 bytes/call) are performed by the load clients
- traffic runs over a real network (AWS internal) with almost 1Gb/s WAMP client (up+down) traffic
- CrossbarFX consumes 12 CPU cores and 6GB RAM
- the test was run constantly at full load for more than an hour with zero errors
- memory consumption remained constant, the testee machine stable
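As a rough sanity check on those figures, the payload-only traffic can be computed as follows (protocol overhead from WAMP framing, serialization and TCP/IP headers comes on top, which is why the measured wire traffic approaches 1 Gb/s):

```python
# Back-of-envelope for the load test above (payload bytes only).
calls_per_sec = 150_000
payload_bytes = 256

# Each call carries its payload to the router (up) and back (down):
payload_mbps = calls_per_sec * payload_bytes * 8 * 2 / 1e6

# One client connection per router/proxy worker pair:
connections = 8 * 16

print(payload_mbps, connections)
```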
Besides proxy workers, the other element of clustering is indeed “router-to-router” links (aka “rlinks”).
These are used to scale the actual core routing for a single realm beyond a single worker process. This is not yet officially announced, but we’ve recently made RPC fully work (in addition to PubSub) over rlinks.
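For a rough idea of the shape, an rlink is configured per realm on a router worker, pointing at the peer router. This is a hypothetical sketch only (the host/port and the exact keys are assumptions; check the docs for the authoritative schema):

```json
{
   "type": "router",
   "realms": [
      {
         "name": "realm1",
         "rlinks": [
            {
               "realm": "realm1",
               "transport": {
                  "type": "rawsocket",
                  "endpoint": {"type": "tcp", "host": "10.0.0.2", "port": 9000},
                  "serializer": "cbor"
               }
            }
         ]
      }
   ]
}
```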
Both proxy workers and rlinks are part of Crossbar.io OSS and fully usable “as is”, that is, with manually written node configuration files. Crossbar.io FX adds a master node that can automatically orchestrate, manage and monitor setups with many nodes / workers.
We will publicly announce all of this very soon, and will also publish the complete source code for Crossbar.io FX (on GitHub).