When building a real-time service it’s vital to have a high-performance, scalable proxy that actually works with WebSockets. There are many flavors, but which one is actually the best tool for the job in terms of raw performance?
The following technologies were tested:
- http-proxy, version: 0.10.0
- nginx, version: 1.3.15 (development release)
- HAProxy, version: 1.5-dev18 (development release)
- Nothing, just the plain echo server that was used as a control test.
3 different, separate servers were used for testing. All these servers are hosted at Joyent.
- Proxy, a 512MB Ubuntu server. This is the server where all the proxy servers are installed. image: sdc:jpc:ubuntu-12.04:2.4.0
- WebSocketServer, a 512MB Node.js smart machine that ran our WebSocket echo server. The server is written in Node.js and spread across multiple cores using the cluster module. image: sdc:sdc:nodejs:1.4.0
- Thor, another 512MB Node.js smart machine with the same specs as above. This was the server where we generated the load from. Thor is a WebSocket load generation tool which we’ve developed. It’s released as open source and available at http://github.com/observing/thor
Configuring the Proxy server
The Proxy server was just a clean, bare-bones Ubuntu 12.04 server. These are the steps that were taken to configure it and install all the dependencies. To ensure that everything is up to date we first have to run:

apt-get update
The following dependencies were installed on the system:
- git for access to the GitHub repositories.
- build-essential for compiling the proxies from source; most of the proxies only recently got support for WebSockets or HTTPS.
- libssl-dev, needed for HTTPS support.
- libev-dev, libev is required for stud, and stud is awesome.

apt-get install git build-essential libssl-dev libev-dev
Node.js is required for the http-proxy. While it runs on the latest Node.js version, these tests were executed under 0.8.19 to ensure compatibility of all dependencies. It was installed through GitHub:

git clone git://github.com/joyent/node.git
cd node
git checkout v0.8.19
./configure && make && make install
This also installed the npm binary on the system, so we can install the dependencies of this project. Run npm install . in the root of this repository and http-proxy and all its dependencies are installed automatically.
Nginx is already a widely deployed server. It supports proxying to different back-end servers, but it did not support WebSockets. This was only recently added to the development branch of Nginx. Therefore we installed the latest development version and compiled it from source:

tar xzvf nginx-1.3.15.tar.gz
cd nginx-1.3.15
./configure --with-http_spdy_module --with-http_ssl_module
As you can see from the options above we’ve included SSL, SPDY and configured some other settings. This yielded the following configuration summary:
```
Configuration summary
  + PCRE library is not used
  + using system OpenSSL library
  + md5: using OpenSSL library
  + sha1: using OpenSSL library
  + using system zlib library

  nginx path prefix: "/usr/local/nginx"
  nginx binary file: "/usr/local/sbin"
  nginx configuration prefix: "/etc/nginx"
  nginx configuration file: "/etc/nginx/nginx.conf"
  nginx pid file: "/var/run/nginx.pid"
  nginx error log file: "/var/log/nginx/error.log"
  nginx http access log file: "/var/log/nginx/access.log"
  nginx http client request body temporary files: "client_body_temp"
  nginx http proxy temporary files: "proxy_temp"
  nginx http fastcgi temporary files: "fastcgi_temp"
  nginx http uwsgi temporary files: "uwsgi_temp"
  nginx http scgi temporary files: "scgi_temp"
```
After this it’s just a simple make away:

make
make install
HAProxy was already able to proxy WebSockets in tcp mode, but it’s now also possible to do so in http mode. HAProxy also got support for SSL termination. So again, we need to install the development branch:
tar xzvf haproxy-1.5-dev18.tar.gz
cd haproxy-1.5-dev18
make TARGET=linux26 USE_OPENSSL=1
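For reference, here is a minimal haproxy.cfg sketch of the kind of setup described in this post; the bind address, backend address, and timeout values are placeholder assumptions, not the configuration used in the benchmark:

```
# Sketch: proxy WebSocket traffic in http mode.
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    timeout tunnel  1h      # keeps long-lived WebSocket tunnels open

frontend websockets
    bind *:8080             # placeholder listen address
    default_backend echo

backend echo
    server websocketserver 10.0.0.10:8000   # placeholder backend
```

The tunnel timeout is the important WebSocket-specific knob: once the connection is upgraded, HAProxy treats it as a tunnel and would otherwise drop it after the regular client/server timeouts.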
While HAProxy is capable of terminating SSL itself, it’s common practice to have stud in front of HAProxy for SSL offloading. So this is something we want to test as well.
git clone git://github.com/bumptech/stud.git
cd stud && make && make install
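A stud configuration is a simple key = value file. A sketch of what an SSL-offloading stud.conf in front of HAProxy could look like; all addresses and paths here are placeholder assumptions:

```
frontend = "[*]:443"                      # accept TLS connections here
backend = "[127.0.0.1]:8080"              # forward plaintext to HAProxy
pem-file = "/etc/ssl/private/server.pem"  # placeholder certificate + key
workers = 1
```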
Now that everything is installed we need to install the configuration files. For Nginx you can copy & paste the nginx.conf from the root of this repository to /etc/nginx/nginx.conf. All the other proxies can be configured on the fly.
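The essential part of a WebSocket-capable nginx.conf is forwarding the Upgrade and Connection headers to the back end, as documented for nginx 1.3.13+. A minimal sketch, with a placeholder backend address rather than the one used in the benchmark:

```
http {
    # Map the client's Upgrade header to the Connection header we send
    # upstream: "upgrade" for WebSocket handshakes, "close" otherwise.
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 8080;

        location / {
            proxy_pass http://127.0.0.1:8000;   # placeholder backend
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
```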
After all the proxies are installed we need to do some socket tuning. This information was generously stolen from the internets. Open /etc/sysctl.conf and set the following values:
```
# General gigabit tuning:
net.core.somaxconn = 16384
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_syncookies = 1

# This gives the kernel more memory for TCP,
# which you need with many (100k+) open socket connections
net.ipv4.tcp_mem = 50576 64768 98152
net.core.netdev_max_backlog = 2500
```
There are 2 different tests executed:
- Load testing the proxies without SSL. This purely tests the performance of WebSocket proxying.
- Load testing the proxies with SSL. Nobody should be running unsecured WebSockets, as they have really bad connectivity in browsers, but this adds the overhead of SSL termination to the proxy.
In addition to the different tests we’re also testing with different amounts of connections:
And to keep the results equal: before each test the WebSocketServer is reset and the Proxy re-initiated. Thor will hammer the Proxy server with x amount of connections at a concurrency of 100. For each established connection one single UTF-8 message is sent and received. After the message is received the connection is closed.
stud --config stud.conf
haproxy -f ./haproxy.cfg
FLAVOR=http node http-proxy.js
FLAVOR=http node index.js
http-proxy lives up to its name: it proxies requests, and does it quite fast. But as it’s built on top of Node.js, it’s quite heavy on memory. Just a simple Node process starts with 12MB of memory; for the 10K requests it took 70MB. The overhead of the HTTP proxy was about 5 seconds compared to the control test. The HTTPS test was the slowest of all, but that was expected, as Node.js sucks at SSL. Not to mention that it will grind your event loop to a halt when it’s under severe stress.
I had high hopes for Nginx and it did not let me down. It had a peak memory of 10MB and it was really fast. The first time I tested Nginx it had horrible performance; Node was even faster at SSL than Nginx. I felt like a failure, I genuinely sucked at configuring Nginx. But after some quick tips from some friends it turned out to be a one-line change in the config: I had the wrong ciphers configured. After some quick tweaking and a confirmation using openssl s_client -connect server:port it was all good and Nginx used RC4 by default, which is really fast.
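The cipher fix was along these lines; this is an illustrative snippet in the style of a 2013-era nginx config, not the exact line from the benchmark’s nginx.conf:

```
# Prefer the fast RC4 cipher (illustrative values, not the exact
# benchmark configuration).
ssl_ciphers RC4:HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
```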
Up next was HAProxy. It has the same performance profile as Nginx but is lighter on memory: it only required 7MB. The biggest difference showed when we tested with HTTPS. It was really slow and nowhere near the performance of Nginx. Hopefully this will be resolved, as it’s a development branch we are testing. When we put stud in front of it, it gets closer to the performance of Nginx.

HAProxy is a great, flexible proxy, really easy to extend and build upon. If you deploy it in production I advise running stud in front of it to take care of the SSL offloading.
Nginx and HAProxy were really close; the difference is almost not significant enough to say that one is faster or better than the other. But if you look at it from an operations standpoint, it’s easier to deploy and manage a single process.
All test results are available at: