
Preparing for Massive Load with OpenStack

This entry was posted on Feb 25 2013

In November last year I again updated and hosted the website for the NAB Australian Football League Draft Tracker, and flew up to the Gold Coast to sit in on the event and make sure it all went smoothly. The website (http://drafttracker.afl.com.au/) was built as a live tracker so the public could watch the picks as they happened.

It was designed to be lightweight for both server and browser: clients pulled in all of the site assets on the initial page load and then polled tiny JSON files every ten seconds for updates [1]. When the admin added drafted players as they happened, the records were updated in the database via PHP, which then wrote out static JSON files for clients to pull down and update the page. NGINX was used as the webserver. All of this let the server get through the busy night with minimal effort.
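The admin-side flow was essentially "update the database, then regenerate the static file". A minimal sketch of that idea in PHP is below; the table, column and file names are illustrative, not the actual code from the site.

<?php
// Sketch of the admin update path: record the pick in the database, then
// regenerate the static JSON file that clients poll every ten seconds.
// Table, column and file names are illustrative only.

$pdo = new PDO('mysql:host=localhost;dbname=draft', 'user', 'secret');

function recordPick($pdo, $pickNo, $playerId)
{
    $stmt = $pdo->prepare(
        'INSERT INTO picks (pick_no, player_id, picked_at) VALUES (?, ?, NOW())'
    );
    $stmt->execute(array($pickNo, $playerId));
}

function regeneratePicksJson($pdo, $path)
{
    $rows = $pdo->query(
        'SELECT pick_no, player_id, picked_at FROM picks ORDER BY pick_no'
    )->fetchAll(PDO::FETCH_ASSOC);

    // Write to a temp file and rename so clients never see a half-written file.
    $tmp = $path . '.tmp';
    file_put_contents($tmp, json_encode($rows));
    rename($tmp, $path);
}

recordPick($pdo, 1, 12345);
regeneratePicksJson($pdo, '/var/www/drafttracker/picks.json');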

However, the trade period weeks earlier had shown that online interest in the AFL had lifted significantly, so I decided to prepare further to make sure load would not be a problem. As I host on Rackspace's OpenStack-based cloud, I could take advantage of their versatile system to easily build a solution for the potential demand. I created an image of the server and made four new server instances from it to act as slaves. I then modified the site so that updates on the master copied (via rsync) any changed JSON files to the slaves. With a few clicks in the same interface I created a load balancer, added the four slaves behind it, and finally pointed the domain name at the load balancer's IP address. Another benefit of this design was that the site administrator made updates from an instance that took no public traffic, so admin work was never hindered by load.
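The master-to-slave copy was just a push of whichever JSON files had changed. Something along these lines would do it; the hostnames and paths are hypothetical, and key-based SSH access is assumed so rsync can run non-interactively.

<?php
// Push regenerated JSON files from the master to each slave after an update.
// Hostnames and paths are hypothetical; key-based SSH access is assumed.

$slaves  = array('slave1.example.com', 'slave2.example.com',
                 'slave3.example.com', 'slave4.example.com');
$docroot = '/var/www/drafttracker/';

foreach ($slaves as $slave) {
    // -a preserves attributes, -z compresses; only top-level *.json files are sent.
    $cmd = "rsync -az --include='*.json' --exclude='*' $docroot $slave:$docroot";
    $out = array();
    exec($cmd, $out, $status);
    if ($status !== 0) {
        error_log("rsync to $slave failed with exit status $status");
    }
}

In practice the same thing could be a one-line shell script fired after the JSON regeneration; the point is simply that only the handful of changed files ever travels between the instances.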

The draft ran for about 1.5 hours and saw 100,000 unique visitors, each of whom polled the site every ten seconds. Monitoring the servers showed that my solution was complete overkill and that the one server would probably have been enough. But it's better to plan for the worst, and it was a great experience putting the solution together.
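For a rough sense of why it was overkill: even treating every unique visitor as simultaneously connected, the polling design keeps the numbers small. The concurrency and file-size figures below are assumptions for illustration, not measurements from the night.

<?php
// Back-of-envelope load estimate. Peak concurrency and JSON payload size
// are assumptions for illustration, not figures measured on the night.

$peakConcurrentClients = 100000; // worst case: every unique visitor at once
$pollIntervalSeconds   = 10;     // from the site design
$jsonSizeBytes         = 2048;   // assume roughly 2 KB per update file

$requestsPerSecond   = $peakConcurrentClients / $pollIntervalSeconds;
$bandwidthMbitPerSec = $requestsPerSecond * $jsonSizeBytes * 8 / 1e6;

printf("Peak: %d req/s, ~%.0f Mbit/s of static JSON\n",
       $requestsPerSecond, $bandwidthMbitPerSec);
// Prints: Peak: 10000 req/s, ~164 Mbit/s of static JSON

Ten thousand requests per second for small static files is the sort of load a single NGINX instance is generally able to serve, which lines up with what the monitoring showed.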

COSTING

Each of the four slaves ran with 512MB of memory, which costs $0.03 per hour, or $0.15 per hour in total including the master. The load balancer costs a base of $0.015 per hour and scales up with the number of connections. So for the 1.5 hours the whole set-up would have cost just a few dollars. Of course I had it running for quite a few days beforehand, but even so the overall cost was negligible.
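Put as a quick sum (leaving out the load balancer's per-connection charges, since the exact per-connection rate isn't given above):

<?php
// Cost of the set-up for the 1.5-hour draft itself. The load balancer's
// per-connection charges are left out as those exact rates aren't listed.

$instances    = 5;      // master plus four slaves
$instanceRate = 0.03;   // dollars per hour for a 512MB instance
$lbBaseRate   = 0.015;  // dollars per hour, load balancer base charge
$hours        = 1.5;

$compute = $instances * $instanceRate * $hours;
$lbBase  = $lbBaseRate * $hours;
printf("Compute: \$%.2f, LB base: \$%.2f, total: \$%.2f\n",
       $compute, $lbBase, $compute + $lbBase);
// Prints: Compute: $0.23, LB base: $0.02, total: $0.25

The gap between that and "a few dollars" would be the connection-scaled load balancer charges, plus the days the instances ran before the night itself.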

[1] The site was designed before the days of NodeJS, and WebSockets were not a solution for older browsers.
