I'm trying to evaluate Crossbar at work; we are currently designing a
new multi-process software architecture for our embedded systems (X86 &
Great! That's exactly one of the use cases we designed Crossbar.io for.
Two weeks ago, I installed the Crossbar packages on Win7 + VMware with a
Linux Ubuntu 14.04 image, and it ran smoothly.
The memory footprint for a demo based on the hello:cpp template was ~10MB
for the controller and an additional ~10MB RAM for the router. C++ workers
< 1MB RAM. The memory footprint was stable even with additional C++
C++ workers (actually, any non-native worker, i.e. non-Python) are just "run as is".
That means: the memory overhead of Crossbar.io for running such a worker is just a couple of KB in the controller process.
Over the last few days, I've installed the latest version of Crossbar
(0.9.7-3 with Twisted 14.0) on a Linux Ubuntu 14.04 x86 target in order
to measure the latencies.
During the test, I checked the memory footprint, and now it uses
~25MB RAM for the controller and ~25MB for the router!
Mmh. This sounds a little dubious to be honest;)
Try checking memory when just starting a Python interactive prompt.
I get something like 35MB VSS and 5MB RSS on 14.04/64-bit. That is _Python_ without anything else.
VSS is "virtual memory", and practically nearly meaningless: a process can have (almost) any amount of _virtual_ memory "consumed".
RSS is the resident memory consumption of the process. That is closer to "actual" memory use. However, it doesn't take into account sharing of pages across processes.
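To make the distinction concrete, here is a minimal, Linux-only sketch (it assumes a /proc filesystem) that reads both numbers from /proc/[pid]/status; to inspect a Crossbar controller or router, pass its PID instead of "self":

```python
# Read VmSize (virtual, "VSS") and VmRSS (resident) for a process from /proc.
# Linux-only sketch; values in /proc/[pid]/status are reported in kB.
def memory_kb(pid="self"):
    sizes = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, value = line.split(":")
                sizes[key] = int(value.split()[0])
    return sizes

if __name__ == "__main__":
    mem = memory_kb()
    print(f"VSS: {mem['VmSize']} kB, RSS: {mem['VmRSS']} kB")
```

You'll typically see VmSize far exceed VmRSS, which is exactly why quoting VSS numbers overstates "memory use".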
What you can also try, however, is building Python _shared_:
./configure --prefix=$HOME/python278 --enable-shared
This will build Python as a SO that is shared between all processes using that Python.
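A quick way to check whether a given interpreter was built that way is the `Py_ENABLE_SHARED` build-configuration variable (1 for a shared libpython, 0 for a static one):

```python
# Check whether this Python interpreter was built with --enable-shared.
import sysconfig

shared = sysconfig.get_config_var("Py_ENABLE_SHARED")
print("shared libpython" if shared == 1 else "static libpython")
```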
Do you think it has increased with the latest modifications of Crossbar
or Twisted 14.0? Or is it a regression?
Neither, I'd say. How did you measure?
50MB is definitely not possible for my ARM equipment, since it is fitted
with only 128MB RAM... that's why 20MB for controller + router would be
Measuring memory use on an x86-64 host when the actual device is 32-bit ARM (I guess): not recommended ;) For 2 reasons:
- "measuring" _actual_ memory consumption is tricky
- the difference in architecture can make a huge difference (64-bit vs 32-bit)
For these reasons, I'd strongly encourage you to actually test this stuff on your real hardware device. Fire up Crossbar with 10 Python workers. _Test_ what happens.
Could you have a look and tell me whether it is normal or not? It is very
important for me to know if this memory footprint can be decreased.
Crossbar.io looks awesome and I would like to use it !
Sure. Let's track that down. For a start, please tell us how you "measure" memory consumption.
To be precise: I use CPython, not PyPy (PyPy uses even more memory).
While it is true in general that PyPy uses more memory, I wouldn't rule it out from the beginning. _Test_ it. On an ARM board like the Pi, it runs very well.
Btw, here are latency numbers:
Note: above was still done with plain AutobahnPython with a test router, but it'll be very similar for Crossbar.io.
Initial latency was about 40ms (!) for 1 RPC call with the AutobahnCpp
client. I had to configure the Boost socket with the "tcpNoDelay" option
in the AutobahnCpp example in order to get down to between 1 and 2ms. Maybe
the template could be updated accordingly.
socket.setTcpNoDelay(true); // to add just after the connection
You mean adding that?
It's a tradeoff between CPU load and latency. Disabling Nagle will result in lower latency, but higher CPU load (roughly): it reduces the chances of coalescing outgoing bytes written by the app in different calls to socket.send() - more packets, hence higher CPU.
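For reference, the setTcpNoDelay() call above is from the poster's AutobahnCpp code; the same socket option (TCP_NODELAY, which disables Nagle) can be sketched on a plain Python TCP socket like this:

```python
# Disable Nagle's algorithm (TCP_NODELAY) on a TCP socket and verify it took.
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # ephemeral port on loopback
listener.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Non-zero means small writes go out immediately instead of being coalesced.
print(client.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))

client.close()
listener.close()
```

With the option set, each small RPC frame is sent immediately rather than waiting for Nagle's coalescing timer - which is exactly where the 40ms-to-2ms improvement comes from.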
On 26.07.2014 02:32, Julien Martin wrote:
Thanks in advance for your help.