Autobahn Websockets bumps and bruises

#1

ENVIRONMENT:

Python 2.7

Autobahn 0.8.8

Twisted 13.2.0

Django 1.5.3

I’m currently using supervisord/twistd to run a websocket service alongside a supervisord/gunicorn RESTful service. I’m seeing some very odd behavior where Autobahn no longer accepts websocket connections. This happens randomly throughout the week (3-4 times) and requires a restart of the websocket service. I checked whether it was related to the number of websockets using the service at a given time, but the most I have seen connected is around 30-40 connections. I’ve looked through the logs generated by Twisted as well as Autobahn but have not been able to track down any errors. Below is my general setup. The websocket factory uses the basic implementation of a WebSocketFactory, and the same goes for the WebSocketResource. However, I do store the client connections in a dictionary object so I can send the right messages to the right group of people (roughly as sketched below). My question is: from the outside, does anyone see anything odd about the implementation in my _run_ws.py file, or has anyone experienced anything like I’ve mentioned?
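A simplified sketch of that grouping pattern (not my exact WSFactory code; the group lookup is illustrative):

from autobahn.twisted.websocket import WebSocketServerProtocol, WebSocketServerFactory

class MyProtocol(WebSocketServerProtocol):

    def onOpen(self):
        # the real code derives the group from the connection request
        self.factory.register(self, group='default')

    def onClose(self, wasClean, code, reason):
        self.factory.unregister(self)

class MyWebsocketFactory(WebSocketServerFactory):

    def __init__(self, *args, **kwargs):
        WebSocketServerFactory.__init__(self, *args, **kwargs)
        self.clients = {}  # group name -> list of connected protocols

    def register(self, client, group):
        self.clients.setdefault(group, []).append(client)

    def unregister(self, client):
        for members in self.clients.values():
            if client in members:
                members.remove(client)

    def send_to_group(self, group, payload):
        # push the same payload to every connection registered under the group
        for client in self.clients.get(group, []):
            client.sendMessage(payload)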

In addition to this, I run the service on a small AWS EC2 instance, and whenever I send a group of packets outbound through 12 unique client.send() commands I notice that the CPU spikes to over 30%. This seems a little high for only 12 connections. Is this generally normal?

Lastly, I’ve noticed that some of the autobahn examples send messages in different ways (https://github.com/tavendo/AutobahnPython/blob/master/examples/twisted/websocket/broadcast/server.py): in one place it uses c.sendMessage() and in another it uses c.sendPreparedMessage(). How are these different from c.send()?
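For reference, the linked example broadcasts in roughly the following way (condensed from that file; here self is the factory, and self.clients and msg are illustrative):

# sendMessage() serializes and frames the payload once per connection:
for c in self.clients:
    c.sendMessage(msg)

# prepareMessage() does the WebSocket framing once, so the identical
# frame bytes can be re-sent cheaply to many connections:
preparedMsg = self.prepareMessage(msg)
for c in self.clients:
    c.sendPreparedMessage(preparedMsg)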

Apache sits in front of Twisted using the configuration below. Maybe some of you have experienced the disconnect issue through the use of this proxy?

LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so

ProxyPass /ws/ ws://localhost:9000/ws/ disablereuse=On flushpackets=on keepalive=On

Sorry for the lengthy message. I'm happy to supply additional detail about my environment if need be, to help track down why websockets no longer connect, as well as the CPU spike.

APPLICATION FILE _run_ws.py:

import sys, os, logging, datetime, threading, time, json, re
from os.path import abspath, dirname, basename, join

sys.path.append(abspath(dirname(__file__) + '../'))

from autobahn.twisted.resource import WebSocketResource, WSGIRootResource
from twisted.application import internet, service
from twisted.internet import reactor
from twisted.python.threadpool import ThreadPool
from twisted.web.server import Site
from twisted.web.static import File
from twisted.web.wsgi import WSGIResource
from twisted.python import log

# Environment setup for your Django project files:
os.environ['DJANGO_SETTINGS_MODULE'] = 'myapp.config.settings'
from django.core.handlers.wsgi import WSGIHandler
from django.conf import settings            # needed for settings.WS below

import WSFactory                            # the app's own factory/protocol module (import assumed)

class ThreadPoolService(service.Service):

    def __init__(self, pool):
        self.pool = pool

    def startService(self):
        service.Service.startService(self)
        self.pool.start()

    def stopService(self):
        service.Service.stopService(self)
        self.pool.stop()

# create a Twisted Web resource for our WebSocket server
ws_factory = WSFactory.MyWebsocketFactory("ws://{0}:{1}".format('localhost', 9000),
                                          debug=False, debugCodePaths=False)
ws_factory.protocol = WSFactory.MyProtocol
ws_factory.setProtocolOptions(allowHixie76=True)
ws_resource = WebSocketResource(ws_factory)

application = service.Application('twisted-websockets')
multi = service.MultiService()
pool = ThreadPool()
tps = ThreadPoolService(pool)
tps.setServiceParent(multi)                 # attach the pool service so it actually starts/stops
resource = WSGIResource(reactor, tps.pool, WSGIHandler())
root = WSGIRootResource(resource, {'ws': ws_resource})

main_site = Site(root)
internet.TCPServer(settings.WS['server_port'], main_site).setServiceParent(multi)
multi.setServiceParent(application)

STARTING COMMAND

[program:ws]

command=twistd -n -y _run_ws.py --pidfile=ws.pid

; command=twistd --nodaemon -ny _run_ws.py --pidfile=ws.pid

process_name=%(program_name)s ; process_name expr (default %(program_name)s)

numprocs=1                    ; number of processes copies to start (def 1)

; directory=%(here)s/ws                ; directory to cwd to before exec (def no cwd)

directory=%(here)s/ws

autostart=false                ; start at supervisord start (default: true)

autorestart=unexpected        ; whether/when to restart (default: unexpected)

startsecs=1                   ; number of secs prog must stay running (def. 1)

startretries=3                ; max # of serial start failures (default 3)

exitcodes=0,2                 ; 'expected' exit codes for process (default 0,2)

stopsignal=QUIT               ; signal used to kill process (default TERM)

stopwaitsecs=10               ; max num secs to wait b4 SIGKILL (default 10)

stopasgroup=false             ; send stop signal to the UNIX process group (default false)

killasgroup=false             ; SIGKILL the UNIX process group (def false)

stdout_logfile=%(here)s/logs/ws.stdout.log        ; stdout log path, NONE for none; default AUTO

stdout_logfile_maxbytes=1MB   ; max # logfile bytes b4 rotation (default 50MB)

stdout_capture_maxbytes=1MB   ; number of bytes in 'capturemode' (default 0)

stderr_logfile=%(here)s/logs/ws.stderr.log        ; stderr log path, NONE for none; default AUTO

stderr_logfile_maxbytes=1MB   ; max # logfile bytes b4 rotation (default 50MB)

stderr_capture_maxbytes=1MB   ; number of bytes in 'capturemode' (default 0)

#2

Hi Patrick,

some quick notes (sorry, I’m a bit short on time right now):

  1. CPU load: EC2 small has very low CPU power (are you using “new gen” or old?) => use PyPy - more steam for the buck
  2. Apache? are you kidding? why? :wink: kick it. unneeded. which is doing the CPU load? Twisted or Apache?
  3. rgd. the issue of Autobahn no longer accepting connections: if you can replicate this without Apache in front and without using the thread pool (a direct-connect probe sketch follows), I might shell out time to have a more detailed look. I haven’t seen this before with Autobahn - which of course doesn’t mean there couldn’t be a bug somewhere
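For point 3, a direct probe that bypasses Apache could look like this (a sketch; host, port and path are taken from the ProxyPass line above):

from autobahn.twisted.websocket import WebSocketClientProtocol, \
    WebSocketClientFactory, connectWS
from twisted.internet import reactor

class ProbeProtocol(WebSocketClientProtocol):

    def onOpen(self):
        print("handshake OK - the server is still accepting connections")
        self.sendClose()

    def onClose(self, wasClean, code, reason):
        print("closed: wasClean=%s code=%s reason=%s" % (wasClean, code, reason))
        reactor.stop()

# connect straight to the Twisted port, skipping the Apache proxy
factory = WebSocketClientFactory("ws://localhost:9000/ws/")
factory.protocol = ProbeProtocol
connectWS(factory)
reactor.run()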

Hope this helps,
/Tobias


#3

> 2. Apache? are you kidding? why? :wink: kick it. unneeded. which is doing the CPU load? Twisted or Apache?

one more note: to give you some numbers rgd. performance

AutobahnPython can do:

"""
1,000 PubSub events/sec with 32+ bytes payload to 1,000 subscribers with average latency of 25 ms at a CPU load of 65%.

or

28 Mbit/sec net outgoing
"""

On a _RaspberryPi_!

And this isn't raw WebSocket, but full flavored WAMP PubSub.

http://tavendo.com/blog/post/autobahn-pi-benchmark/

Any kind of EC2 small should have lots more power than a Pi ..


#4

Great response! Thank you!

  1. Totally agree on the CPU side. We are using m1’s currently, so old gen. But will rebuild to a new gen today and test further. I have not used PyPy yet. Think it will make that much of a difference?

  2. In regards to Apache, yes, it needs to go away :). However we need to use SSL for our websocket implementation, so Apache seemed like the best way to get that implemented. In addition, we have multiple websocket servers running at the moment and use an Apache balancer in front of the system (on a small instance) to handle delegation. It’s not ideal, but it seemed to be the best approach to handle websocket distribution in an AWS setting. If you have another idea I’m all ears :). The CPU spike is specific to Twisted; I watch it via top as it occurs on the Linux box.

  3. I’m in the process of building up a use case to test what might be happening here. At this point there is no rhyme or reason to why this is happening. I want Apache gone so it’s a straight shot to websockets in the long run, removing any other moving parts. You mention removing the thread pool; by looking at the code I submitted, how would I go about doing that (see the sketch after this list)? If I am able to replicate the issue I will surely let you know so we can get it fixed.
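For reference, one way to drop the custom ThreadPoolService is to hand WSGIResource the reactor's own thread pool, which Twisted starts and stops by itself (a sketch against the _run_ws.py above; everything else stays the same):

from twisted.internet import reactor
from twisted.web.wsgi import WSGIResource
from django.core.handlers.wsgi import WSGIHandler

# no ThreadPoolService or explicit ThreadPool needed: the reactor's pool
# is started with the reactor and stopped cleanly at shutdown
resource = WSGIResource(reactor, reactor.getThreadPool(), WSGIHandler())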

Thank you again

-Pat


#5

Wow! That’s awesome. On a Pi even! That’s amazing. I’ve been wanting to get my hands on a Pi and do similar work via XMPP standards. Very, very cool stuff!!


#6

> Great response! Thank you!
>
> 1. Totally agree on the CPU side. We are using m1's currently, so old gen. But will rebuild to a new gen today and test further. I have not used PyPy yet. Think it will make that much of a difference?

PyPy is the way to go. It is way faster (we are talking 2-10x, depending as always).

> 2) In regards to Apache, yes, it needs to go away :). However we need to use SSL for our websocket implementation so Apache seemed like the best

FWIW, AutobahnPython of course can terminate TLS itself. And it won't be slower than Apache at that, since both pretty much do the TLS work inside OpenSSL.
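Terminating TLS in the setup from the first post could look like this (a sketch; the key/cert paths are placeholders):

from twisted.application import internet
from twisted.internet import ssl

# serve the same Site over wss:// by letting Twisted terminate TLS
context_factory = ssl.DefaultOpenSSLContextFactory('server.key', 'server.crt')
internet.SSLServer(9000, main_site, context_factory).setServiceParent(multi)

The factory URL in _run_ws.py would then use wss:// instead of ws://.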

Anyway: if you need any kind of "frontend" stuff, please consider Nginx or HAProxy. Apache is (IMHO) a dead technology. Given the former two, I have trouble seeing any use case ..

> way to get that implemented. In addition we have multiple websocket servers running at the moment and use an Apache balancer in front of the system (on a small instance) to handle delegation. It's not ideal, but seemed to be the best approach to handle websocket distribution in an AWS setting. If you have another idea I'm all ears :). The CPU spike is

Route53 doing L3 balancing ("sticky" source IP/port)

See:
http://stackoverflow.com/questions/12526265/loadbalancing-web-sockets/12526857#12526857

No need for anything but Route 53 and AutobahnPython to scale out WebSocket to multiple nodes.

> specific to Twisted. I watch it via top as it occurs on the linux box.

Guess "top" doesn't aggregate threads? probably check htop and ctrl-h or something ..

> 3) I'm in the process of building up a use case to test what might be happening here. At this point there is no rhyme or reason to why this is happening. I want Apache gone so it's a straight shot to websockets in the long run, to remove any other moving parts. You mention removing the thread pool. By looking at the code I submitted, how would I go about doing that? If I am able to replicate the issue I will surely let you

Sorry, I'm currently drowning in stuff ;(

> know so we can get it fixed.
>
> Thank you again
> -Pat

np
/tobias


#7

> Route53 doing L3 balancing ("sticky" source IP/port)

EBS, that is, not Route 53 .. it's getting late


#8

Thank you very much for the responses! We will look into removing Apache from the mix as well as look into the other aspects you have recommended.

I totally get being overloaded with stuff to do :). No worries. Thank you again!


#9

So some more feedback on this: the time when Autobahn becomes unavailable is when the server starts sending 500 responses back to the browser, simply stating there is an internal error. I’m guessing it’s the WSGI resource that’s assigned to the /ws/ endpoint below. Is there a good way I can troubleshoot this, or is there a recommended secondary approach?

application = service.Application('twisted-websockets')
multi = service.MultiService()
pool = ThreadPool()
tps = ThreadPoolService(pool)
resource = WSGIResource(reactor, tps.pool, WSGIHandler())
root = WSGIRootResource(resource, {'ws': ws_resource})
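One low-risk way to see what those 500s actually are is to start Twisted's log observer in _run_ws.py, since twisted.web writes the traceback of a failing request to the log (a sketch; logging to stdout so supervisord's ws.stdout.log captures it):

import sys
from twisted.python import log

# emit Twisted log events, including twisted.web error tracebacks for
# requests that end in a 500, to stdout (captured by supervisord)
log.startLogging(sys.stdout, setStdout=False)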


#10

I may have jumped the gun on this one. Although there is a 500 error, it looks like it was coming through Apache. Figuring out what the error is, now that it’s not from Autobahn, is another story.


#11

Hi Patrick,

> I may have jumped the gun on this one. Although there is a 500 error, it looks like it was coming through Apache. Figuring out what the error is, now that it's not from Autobahn, is another story.

Alright. Hope you get it tracked down! Or go with Nginx as you indicated in your other post. That is supposed to work with WebSocket in a robust and scalable way.

/Tobias
