Cross Posted from my personal blog.
When I launched Techie Buzz, I started out with shared hosting from Dreamhost, which I got for a steal at $40 a year. Over time, however, I had to move the site to a Virtual Private Server (VPS).
In the initial days, LAMP (Linux/Apache/MySQL/PHP) suited me well on the server, but over time Apache literally killed me. That is when I decided to move my website to an Nginx (pronounced "Engine X") setup.
All said and done, I had a great run with Nginx, but then my 2GB setup crapped out under traffic. I increased it to 4GB and things worked fine for a while.
Then one fine day traffic spiked to thousands of visitors a minute, and the site crashed so often that it nearly made me cry. I even upgraded the server to 8GB of memory, but it didn't hold up.
In all my time dealing with servers, I have helped several people set up their own but never revealed my own setup. So here is the secret of how Techie Buzz runs.
There are a few key ingredients that make this server setup successful. I will list them out below.
- Nginx and PHP FastCGI
- Memcached (no point using this if you only have one server)
- NFS (One file system rules it all)
- W3 Total Cache
These four things (plus several other secret ingredients) are core to the setup at Techie Buzz, as they allow for scalability without making the system cry. For starters, here is a diagram of how the servers at Techie Buzz are set up.
Though the configuration in the picture above is outdated, the technology we use is still the same. We have a multi-server setup, made up of one host server, where all the requests arrive, and several proxy servers which serve users.
When a user visits Techie Buzz, they first land on the host server, which then passes the request on to one of the proxy servers we have set up. We can add or remove as many proxy servers as we want within minutes, based on the traffic we get. Even so, we can currently handle more than 3,000 users a minute without adding new proxy web servers.
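The host-server piece of this can be sketched as an Nginx upstream block. To be clear, this is an illustrative sketch and not the actual Techie Buzz configuration; the server addresses and domain here are placeholders.

```nginx
# Illustrative sketch only -- addresses and names are hypothetical.
upstream proxy_pool {
    server 10.0.0.11;   # proxy server 1
    server 10.0.0.12;   # proxy server 2
    # Add or remove servers here and reload Nginx to scale out.
}

server {
    listen 80;
    server_name example.com;

    location / {
        # Hand the request off to one of the proxy servers.
        proxy_pass http://proxy_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

By default Nginx round-robins requests across the servers in the upstream block, which is what makes adding or removing a proxy server a minutes-long job.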
This makes the setup highly scalable and allows us to grow as the traffic does.
We use memcached as a core component of our setup to store cached objects so that we don't hit the database for every request. MySQL is not built for that kind of load, and without a cache the site would die.
An added advantage of memcached is that it is a perfect fit for caching and sharing objects across a multi-tiered server setup. We cache an object in memcached once, and hundreds of servers can access it without the cache having to live locally on any of them. Think of it as centralized storage for objects.
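As a rough sketch of what that looks like on the wire, here is a session in memcached's plain-text protocol (the key name and value are made up for illustration):

```
set tb:object:42 0 300 13      <- key, flags, 300s TTL, 13-byte value
Hello, world!
STORED

get tb:object:42               <- any server in the pool can ask
VALUE tb:object:42 0 13
Hello, world!
END
```

Because every web server talks to the same memcached instance, the `get` succeeds no matter which proxy server issues it.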
Another important part of the setup is NFS (Network File System) which allows us to share the same files over multiple systems.
When you have multiple servers, it does not make sense to keep separate copies of the same files on all of them. The problem shows up when you have to update even a single file: if every server uses its local file system, you have to push that one change to each of them. Now imagine you have 100 servers; the process becomes dreadful.
Thankfully, NFS allows us to share and use common files across multiple servers. This means that if we change one file, the change is reflected across all the servers without having to deploy it separately. Bliss.
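To sketch how such a share is wired up (the paths, hostname and subnet below are made-up examples, not our actual layout): the file server exports the web root, and each proxy server mounts it.

```
# /etc/exports on the file server (illustrative path and subnet)
/var/www    10.0.0.0/24(rw,sync,no_subtree_check)

# /etc/fstab entry on each proxy server (illustrative hostname)
fileserver:/var/www  /var/www  nfs  defaults,noatime  0  0
```

After `exportfs -ra` on the file server and `mount /var/www` on the clients, a file edited once is instantly visible everywhere.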
In addition to all of that, we also make use of an HTML page cache in the form of W3 Total Cache, and of course WordPress.
Additionally, I have written several shell scripts which run on the individual servers and check server status every two minutes. If a script finds that the server is not responding well, it restarts the core processes automatically. Another script runs frequently to make sure CPU and memory usage stay at acceptable levels, and reboots the server if required; thanks to the optimizations I have done, it has hardly ever had to.
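A stripped-down sketch of what such a watchdog might look like (the threshold, the way usage is gathered and the restart command are all illustrative, not the actual scripts):

```shell
#!/bin/sh
# Hypothetical watchdog sketch; imagine cron running this every two minutes.

# Decide whether usage has crossed the limit; echoes "restart" or "ok".
check_usage() {
    used=$1      # current usage, percent
    limit=$2     # acceptable ceiling, percent
    if [ "$used" -ge "$limit" ]; then
        echo restart
    else
        echo ok
    fi
}

# In a real script, something like this would feed it live numbers:
#   mem_used=$(free | awk '/Mem:/ {printf "%d", $3/$2*100}')
#   [ "$(check_usage "$mem_used" 90)" = "restart" ] && /etc/init.d/nginx restart
```

The decision logic is kept in a small function so the thresholds can be tuned in one place per server.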
There are several other shell scripts I have written to keep things running smoothly. One backs up the MySQL database every 4 hours and emails it as an attachment to several addresses. Another takes a snapshot of the WordPress directory every week and emails out a copy, and so on.
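The database backup can be sketched roughly like this; the database name, filename scheme, paths and recipient address are placeholders, not the real ones:

```shell
#!/bin/sh
# Hypothetical backup sketch; imagine cron running this every 4 hours.

# Build a timestamped dump filename, e.g. techiebuzz-2011-07-18-0400.sql.gz
dump_name() {
    echo "techiebuzz-$1.sql.gz"      # $1 = timestamp string
}

# In a real script, the cron job would do something like:
#   FILE=$(dump_name "$(date +%F-%H%M)")
#   mysqldump --defaults-file="$HOME/.my.cnf" techiebuzz | gzip > "/backups/$FILE"
#   mutt -a "/backups/$FILE" -s "DB backup" -- admin@example.com < /dev/null
```

Timestamped filenames mean old dumps are never overwritten, so the mailbox doubles as a crude backup history.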
Other than that, I use SVN for themes, plugins and so on, so that I have a copy in the cloud. This is in turn replicated to several online storage services (Dropbox, Windows Live and SugarSync) through my local PC.
The database and files are also mirrored to several other servers using rsync, so I have multiple copies of everything. All in all, it is an almost foolproof setup and backup.
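The mirroring can be pictured as a cron entry along these lines; the schedule, hosts and paths are made up for illustration:

```
# crontab fragment (illustrative hosts and paths)
# Mirror the web files and the latest DB dumps to another server.
0 */6 * * *   rsync -az --delete /var/www/  backup@mirror1.example.com:/srv/mirror/www/
30 */6 * * *  rsync -az /backups/           backup@mirror1.example.com:/srv/mirror/backups/
```

Because rsync only transfers what has changed, these runs stay cheap even as the site grows.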
Our servers have always been powered by Ubuntu. We have used all the available releases, including Hardy, Karmic and Lucid; our current setup runs Natty.
Our servers have been hosted at Dreamhost, Slicehost and, currently, Linode. Linode (earlier Slicehost) powers our non-static content, while our static content is served by Dreamhost and accelerated by a CDN from MaxCDN (who are our sponsors).
Although there is nothing unique about our setup, I take pride in having created a highly scalable environment that is easy to set up and move across any network. It took me only 10 hours to move from Slicehost to Linode, and most of that time was spent transferring files, so you can imagine how simple and easy the setup is.