All calls run asynchronously and concurrently: there is no serialization enforced by Crossbar.io. If your WAMP callee can process multiple calls in parallel (e.g. because it does CPU-intensive processing on a background thread / CPU core), it will process multiple calls in parallel. Results also come back to clients (callers) asynchronously, and potentially out of order: e.g. if call1 is issued before call2, call2 might come back first when the processing for call1 took longer.
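To illustrate the out-of-order behavior, here is a minimal asyncio sketch (it does not use Crossbar.io itself; the sleeps stand in for callee processing time, and the names are illustrative):

```python
import asyncio

async def slow_procedure(call_id: str, duration: float) -> str:
    # Stands in for a remote callee that takes `duration` seconds to process.
    await asyncio.sleep(duration)
    return call_id

async def main() -> list[str]:
    # call1 is issued first but takes longer, so call2's result arrives first.
    call1 = asyncio.create_task(slow_procedure("call1", 0.2))
    call2 = asyncio.create_task(slow_procedure("call2", 0.1))
    results = []
    for fut in asyncio.as_completed([call1, call2]):
        results.append(await fut)
    return results

order = asyncio.run(main())
print(order)  # call2 completes before call1
```

The caller issues both calls without waiting, and collects results in completion order rather than submission order.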
If you use shared registrations, you can have multiple workers that each register a given procedure, and Crossbar.io will load-balance across those workers (according to a configurable policy). This means you can scale out your workers across multiple CPU cores and even whole machines.
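In Autobahn|Python, a callee opts into a shared registration via RegisterOptions (e.g. invoke='roundrobin'). As a toy model of what the round-robin policy does, here is a sketch (the dispatcher class and worker lambdas are illustrative, not Crossbar.io API):

```python
from itertools import cycle

class RoundRobinDispatcher:
    """Toy model of the 'roundrobin' invocation policy for shared
    registrations: each incoming call goes to the next callee in turn."""

    def __init__(self, callees):
        self._next = cycle(callees)

    def call(self, *args):
        callee = next(self._next)
        return callee(*args)

# Two workers sharing one procedure registration.
dispatcher = RoundRobinDispatcher([
    lambda x: ("worker1", x * 2),
    lambda x: ("worker2", x * 2),
])

# Four calls alternate between the two workers.
results = [dispatcher.call(i) for i in range(4)]
```

In the real system, the router does this dispatching; the workers themselves just register the same URI with the shared-registration option set.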
There is no throttling: to be practically useful, any throttling would need to impose back-pressure on the original callers. If the callers are not throttled in turn, throttling at the router doesn't buy you much. Throttling callers by failing their calls would be one way to do this.
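You can, however, impose back-pressure on the caller side yourself, e.g. by capping the number of in-flight calls with a semaphore. A minimal asyncio sketch (fake_call stands in for session.call(...) in a real WAMP client; the limit of 2 is illustrative):

```python
import asyncio

MAX_IN_FLIGHT = 2  # illustrative limit
peak = 0           # highest number of calls in flight at once
current = 0

async def fake_call(i: int) -> int:
    # Stands in for session.call("com.example.proc", i) in a real client.
    global peak, current
    current += 1
    peak = max(peak, current)
    await asyncio.sleep(0.01)
    current -= 1
    return i

async def throttled_call(sem: asyncio.Semaphore, i: int) -> int:
    # The semaphore caps outstanding calls, so back-pressure is
    # applied at the caller instead of at the router.
    async with sem:
        return await fake_call(i)

async def main() -> list[int]:
    sem = asyncio.Semaphore(MAX_IN_FLIGHT)
    return await asyncio.gather(*(throttled_call(sem, i) for i in range(10)))

results = asyncio.run(main())
```

All ten calls still complete, but at most two are ever outstanding at the same time.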
On Saturday, March 14, 2015 at 16:10:05 UTC+1, Rejo wrote:
As far as I dig into WAMP/Crossbar.io, it seems more and more promising for one project I'm working on. But there is information I could not find in the documentation:
In the case of multiple (almost) simultaneous calls (from multiple clients) to the same registered URI (a single worker taking a few/many seconds to process the data):
- are the calls run one after the other (like in a queue with a single "processor")?
- are they submitted asynchronously, i.e. without waiting for the answer of one call before submitting the next, with the answers also arriving asynchronously (possibly in a different order than submitted)?
- in that case, is there a way to limit the number of calls simultaneously sent to the worker (like in AMQP)?