Load balancers? We don’t need no load balancers! (Part 1)

November 17, 2016
By Ben Gordon

As the CTO of a technology startup, one of the most important decisions you make is where and how to host the systems that support your product. Depending on your traffic forecasts at launch, the most common options are cloud or managed hosting. These are typically the best choices as they provide the most flexibility in terms of commitment and cost. If you are only serving a few million requests per day, or your application can take advantage of a content delivery network, the economics work pretty well and you can easily add capacity as needed.

At nToggle, we are in the business of AdTech traffic optimization. Our flagship product is an algorithmic smart router that provides our customers with a stream of concentrated real-time HTTP bid requests that match their customers’ demand. Since we knew up front that we would be serving more than a few million requests per day (more like a few million requests per second), we could not just deploy the CAPEX with the mindset of “if we build it, they will come”.

We started to prototype our product in the cloud, and once it was ready for production and we had signed our first customer, we moved to a managed service. Bandwidth is expensive in the cloud and we push a lot of bits! Managed hosting worked well until our inbound traffic regularly set off our provider’s denial-of-service mitigation. The business case was clear: the time had come to build our own cage. The benefit was having direct access to peering exchanges and various transit providers with “large pipes” to support our bandwidth requirements. We would also have full control over our network infrastructure and a more predictable cost basis.

Like nToggle, the majority of Internet-scale applications communicate using OSI Layer 7 protocols (e.g., HTTP). To achieve scalability, applications are typically distributed, redundant, and opaque to their clients. This “distributed” architecture allows services to be deployed behind a load balancer, where they can be accessed at one or more IP addresses. One of the benefits of using a load balancer is “horizontal scale”: you can accommodate traffic growth and gracefully handle failure simply by adding more instances of the service.
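The “just add more instances” idea can be sketched in a few lines of Go. The `pool` type, the round-robin strategy, and the backend addresses below are our own invention for illustration; a real load balancer layers health checks, weighting, and connection management on top of a selection strategy like this one.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// pool is a hypothetical set of service instances reachable behind a
// single address. Horizontal scaling is just appending more backends.
type pool struct {
	backends []string
	next     uint64
}

// pick returns the next backend in round-robin order; the atomic
// counter keeps the choice safe across concurrent requests.
func (p *pool) pick() string {
	n := atomic.AddUint64(&p.next, 1)
	return p.backends[(n-1)%uint64(len(p.backends))]
}

func main() {
	p := &pool{backends: []string{"10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"}}
	for i := 0; i < 6; i++ {
		fmt.Println(p.pick()) // cycles through the three backends twice
	}
}
```

Grow from three instances to thirty and the loop is unchanged; that indirection between the client-facing address and the backend set is the whole point.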

A Layer 7 load balancer can distribute traffic using information extracted from the properties of the HTTP request, while Layer 4 load balancing relies on address information extracted from TCP/IP headers. It is likely that your system uses a load balancer which distributes traffic at Layer 4 or Layer 7. Whether your system is in AWS using an ELB, or in a data center using network hardware such as an F5 BIG-IP or Citrix NetScaler, or software like HAProxy or NGINX, there is a cost (CAPEX/OPEX) to deploying load balancers into your infrastructure to distribute your HTTP traffic.

In most instances the cost associated with deploying a load balancer into your infrastructure is well worth the investment. High-end Layer 7 switches can handle one million requests per second of L7 traffic, or five hundred thousand L4 connections per second. A Layer 4-7 switch with these specs can cost upwards of $125,000. Considering that redundant hardware is a requirement for nToggle, we would need to purchase 2N of these devices, or go with software load balancers.

So back to our cost basis: the hardware load balancers would come in at a quarter of a million dollars, which is obviously a big expense for a startup. Servers for the software load balancers would be roughly half that cost, and this was before budgeting for switches, racks, PDUs, and, most importantly, the servers for our system. (In a managed service or cloud environment, these costs are baked into the cost of each server.) This was simply not going to work, so we needed to come up with another way to balance traffic across our system. BGP to the rescue!
