multiple simultaneous RPC calls to the same URI

#1

Hello,

As far as I dig into WAMP/Crossbar.io, it seems more and more promising for a project I’m working on :slight_smile: But there is some information I could not find in the documentation:

In the case of multiple (almost) simultaneous calls (from multiple clients) to the same registered URI (a single worker taking a few/many seconds to process the data):

  • are the calls run one after the other (like in a queue with a single “processor”)?
  • are they submitted asynchronously, i.e. the router doesn’t wait for the answer of one call before submitting the next one, and the answers come back asynchronously too (possibly in a different order than submitted)?
  • in that case, is there a way to limit the number of calls simultaneously sent to the worker (as in AMQP)?

#2

Just to add some information: the worker is not blocking (it is an @coroutine and yields the result), for example making a complex asynchronous request to a DB, like I can do with Tornado.
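For illustration, here is a minimal, self-contained sketch of what I mean by a non-blocking worker, using plain asyncio (not the actual Autobahn API): because the handler awaits during the simulated DB query, several overlapping calls are processed concurrently instead of queuing up:

```python
import asyncio
import time

async def lookup(item_id):
    # Simulate a non-blocking DB query: the coroutine yields
    # control to the event loop while it "waits" for the result.
    await asyncio.sleep(0.1)
    return {"id": item_id, "value": item_id * 2}

async def main():
    start = time.monotonic()
    # Three "simultaneous calls": they interleave on one event loop.
    results = await asyncio.gather(lookup(1), lookup(2), lookup(3))
    elapsed = time.monotonic() - start
    # All three complete in roughly 0.1 s total, not 0.3 s,
    # because none of them blocks the loop.
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 2))
```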

···

On Saturday, March 14, 2015 at 16:10:05 UTC+1, Rejo wrote:


#3

Hi Remi!


In the case of multiple (almost) simultaneous calls (from multiple clients) to the same registered URI (a single worker taking a few/many seconds to process the data):

  • are the calls run one after the other (like in a queue with a single “processor”)?
  • are they submitted asynchronously, i.e. the router doesn’t wait for the answer of one call before submitting the next one, and the answers come back asynchronously too (possibly in a different order than submitted)?

Calls run asynchronously. This means there is no ordering guarantee for the results either; e.g. a long-running database query may yield its result after a short lookup of a single value has already returned.

  • in that case, is there a way to limit the number of calls simultaneously sent to the worker (as in AMQP)?

Crossbar.io does not offer any throttling at the moment.

Regards,

Alex


#4

Hi Rejo,

all calls run asynchronously and concurrently: there is no serialization enforced by Crossbar.io. If your WAMP callee can process multiple calls in parallel (e.g. because it does things like CPU-intensive processing on a background thread / CPU core), it will process multiple calls in parallel. Results come back to the clients (callers) asynchronously as well, and potentially out of order (e.g. with call1 issued before call2, call2 might come back first when processing call1 took longer).
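That out-of-order behaviour is easy to reproduce with plain asyncio (just an illustration, not the Autobahn API): the call issued first finishes last when its processing takes longer:

```python
import asyncio

async def call(name, duration):
    # Simulate an RPC whose processing takes `duration` seconds.
    await asyncio.sleep(duration)
    return name

async def main():
    # call1 is issued before call2, but takes longer to process.
    tasks = [asyncio.create_task(call("call1", 0.2)),
             asyncio.create_task(call("call2", 0.05))]
    finished_order = []
    for fut in asyncio.as_completed(tasks):
        finished_order.append(await fut)
    return finished_order

order = asyncio.run(main())
print(order)  # call2 comes back first
```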

If you use shared registrations, you can have multiple workers each register a given procedure, and Crossbar.io will load-balance (according to a configurable policy) across those workers. This means you can scale out your workers across multiple CPU cores and even complete machines.
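A rough sketch of what round-robin dispatch across such workers looks like (a plain-Python toy model of the router, not Crossbar.io itself; in Autobahn the policy would be requested per registration, e.g. via `RegisterOptions(invoke='roundrobin')` — treat that exact spelling as an assumption):

```python
import itertools

class RoundRobinRouter:
    """Toy model of a router dispatching calls to a shared registration."""

    def __init__(self, workers):
        # Cycle endlessly over the registered workers.
        self._cycle = itertools.cycle(workers)

    def call(self, procedure, *args):
        # Pick the next worker in round-robin order and invoke it.
        worker = next(self._cycle)
        return worker, procedure(*args)

def square(x):
    return x * x

router = RoundRobinRouter(["worker-a", "worker-b"])
dispatched = [router.call(square, n) for n in range(4)]
print(dispatched)
```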

There is no throttling: to be practically useful, any throttling would need to impose back-pressure on the original callers. If callers are not throttled in turn, throttling at the router doesn’t buy you much. Throttling callers by failing their calls would be one way to do it.
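Caller-side throttling can be sketched with an `asyncio.Semaphore` (again plain asyncio standing in for the real call API): at most `MAX_IN_FLIGHT` calls are in flight at any time:

```python
import asyncio

MAX_IN_FLIGHT = 2
in_flight = 0
peak = 0

async def fake_call(i, sem):
    global in_flight, peak
    async with sem:                  # wait for a free slot
        in_flight += 1
        peak = max(peak, in_flight)  # record observed concurrency
        await asyncio.sleep(0.01)    # simulated RPC round trip
        in_flight -= 1
    return i

async def main():
    sem = asyncio.Semaphore(MAX_IN_FLIGHT)
    # Issue six calls; the semaphore caps how many run concurrently.
    return await asyncio.gather(*(fake_call(i, sem) for i in range(6)))

results = asyncio.run(main())
print(results, "peak concurrency:", peak)
```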

Cheers,
/Tobias

···

On Saturday, March 14, 2015 at 16:10:05 UTC+1, Rejo wrote:


#5

Thanks Alex and Tobias,

I’ll make a POC with access to a MongoDB database. The next version of Motor (the asynchronous Python driver for MongoDB) seems to be able to deal with asyncio… If it works, I’ll post an example here.

Best regards.

···

On Sunday, March 15, 2015 at 14:44:03 UTC+1, Tobias Oberstein wrote:


#6

Thanks Alex and Tobias,

I'll make a POC with access to a MongoDB database. The next version of Motor (the asynchronous Python driver for MongoDB) seems to be able to deal with asyncio...

That looks good. As soon as the asyncio support in Motor is there, using it with AutobahnPython to create WAMP app components in Python that hook up to MongoDB should be very easy.

If it works, I'll post an example here

Oh yes, please!

We have an expanding set of Programming Guides at

http://crossbar.io/docs/Programming-Guides/

that cover use with various frameworks, stacks and technologies.

···

On March 15, 2015 at 15:48, Rejo wrote:
