now that i have three different software load balancers installed (Balance, Crossroads, and Pen), i want to evaluate their relative performance. benchmarking a single web server isn’t difficult using tools like ab (ApacheBench), but benchmarking a load balanced cluster is somewhat different. since most load balancers support stickiness, all the requests from a single source will be directed to a single back-end server. thus, i’ll need to run the benchmark from several different sources simultaneously, or i’m really just testing one server with something in the way. fortunately, i have three machines on different IP addresses sitting idle.
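driving all three source machines at once is easy to script; a rough sketch in python that builds the ssh + ab command line for each source (the host names and the target URL here are placeholders, not my real machines):

```python
# build the ab (ApacheBench) command lines to run on each traffic source.
# "src1"–"src3" and the URL below are stand-ins, not the actual test hosts.

def build_ab_commands(sources, requests, concurrency, url):
    """Return one ssh+ab command line per source machine."""
    cmds = []
    for host in sources:
        ab = f"ab -n {requests} -c {concurrency} {url}"
        cmds.append(f"ssh {host} '{ab}'")
    return cmds

for cmd in build_ab_commands(["src1", "src2", "src3"],
                             10000, 1, "http://balancer/index.html"):
    print(cmd)
```

launching each command in the background (or from three terminals) gets all the sources hammering the balancer at roughly the same moment.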

my first test is 10,000 requests for a static HTML page (2866 bytes). this test was run against a single apache server in the pool and against each of the software load balancers with two back-end servers, from one source and from three simultaneously.

handler       single source   three sources
------------  --------------  --------------
apache only   21.659 sec      33.822 sec
balance       106.794 sec     failed
crossroads    37.729 sec      failed
pen           39.112 sec      failed

the single apache server actually performed the best, easily beating any of the software load balancers in raw throughput. the test from three sources is effectively a mild denial-of-service attack, and none of the software load balancers could handle it. each of them failed and stopped accepting connections well before 10,000 requests were completed.
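converting the single-source totals into requests per second makes the gap concrete (arithmetic only, using the numbers from the table above):

```python
# test 1, single source: 10,000 requests divided by total seconds
totals = {
    "apache only": 21.659,
    "balance": 106.794,
    "crossroads": 37.729,
    "pen": 39.112,
}
for handler, seconds in totals.items():
    print(f"{handler}: {10000 / seconds:.1f} req/s")
# apache alone manages ~461.7 req/s, while balance drops to ~93.6 req/s
```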

the second test is 20 requests for a PHP script which performs exactly 1 second of mathematics and then returns the results. in this case, almost all the load is on the back-end servers, and there was negligible difference in results between the four front-ends.
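the actual script is PHP, but the shape of the workload is easy to sketch; a python stand-in that spins on arithmetic for close to one second of CPU time before returning:

```python
import time

def busy_second(duration=1.0):
    """Spin on arithmetic for roughly `duration` seconds,
    approximating the CPU-bound test script."""
    deadline = time.perf_counter() + duration
    total = 0
    while time.perf_counter() < deadline:
        total += sum(i * i for i in range(1000))
    return total

busy_second()
```

with a workload like this, the front-end is doing almost nothing; the back-end CPU is the limiting resource.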

the third test is 400 requests for the PHP script, but issuing 20 concurrent requests from each source at a time. this generates significant load on the back-end servers, and it is the first test where having multiple back-ends shows any improvement.

handler       single source   three sources
------------  --------------  --------------
apache only   23.493 sec      31.097 sec
balance       22.820 sec      26.191 sec
crossroads    34.199 sec      40.355 sec
pen           24.721 sec      28.365 sec

the fourth test is a monster: 400 requests for the PHP script, with 100 concurrent requests from each source at a time.

handler       single source   three sources
------------  --------------  --------------
apache only   14.912 sec      22.604 sec
balance       10.355 sec      18.909 sec
crossroads    failed          failed
pen           15.219 sec      failed
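at 100 concurrent requests per source (300 in flight across three sources), balance finally beats the single apache server. the speedup falls straight out of the table (arithmetic only):

```python
# test 4 totals in seconds, taken from the table above
apache_single, balance_single = 14.912, 10.355
apache_three, balance_three = 22.604, 18.909

print(f"single source speedup: {apache_single / balance_single:.2f}x")
print(f"three source speedup:  {apache_three / balance_three:.2f}x")
# roughly 1.44x from one source and 1.20x from three
```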

these results suggest that a software load balancer might be an option for putting more capacity and resilience into a script-heavy website, but clearly shouldn’t be chosen for performance. the single apache server performed better than my small cluster in nearly every test, and much better in a few cases. the total meltdown of the software load balancers in difficult situations is of particular concern. the results for the cluster might improve with more back-end servers, but the software load balancer itself seems to be the bottleneck.

of the three software load balancers, the simpler Balance and Pen outperformed Crossroads in general. interestingly, Balance fared spectacularly poorly against heavy traffic in the first test, but very well against a different sort of heavy traffic in the fourth.