Three-Tier Network Architecture with High-Availability

What's a three-tier network architecture?

Learn more about the specifics of this design by reading the Simple Three-Tier Network Architecture page.

A high-availability (HA) architecture is one that aims to prolong the uptime of systems, allowing resources to stay available and services to operate over long periods despite errors and/or high loads. This type of design puts an emphasis on redundancy, ensuring that resources can fail over during downtime and/or be served with load balancing.

In the following sections, we'll discuss one way of setting up TORO Integrate on top of a high-availability architecture featuring failover and load balancing. Paired with the right services, this can provide a very stable infrastructure.

Implementation

There are plenty of ways to achieve high availability. The model implementation in this document is just one of many. This particular model runs on top of a three-tier network architecture, a design that lends itself well to redundancy.

TORO Cloud and HA

The model set-up we'll be discussing is the same set-up employed by TORO Integrate instances running on TORO Cloud.

Under the three-tier network architecture, the following tiers will be present in our set-up (whose scopes are defined by closed dashed orange lines in the diagram below):

  • Tier 1, the web or presentation tier

    This tier is where the web servers are located; they are in charge of distributing traffic. In this implementation, we'll be using NGINX instances to act as load balancers and reverse proxies. These NGINX instances will also serve as the network's first line of defense, or demilitarized zone (DMZ).

  • Tier 2, the application tier

    This is the layer where the applications are deployed. In our set-up, we'll deploy the different applications required by TORO Integrate (ZooKeeper, Solr, ActiveMQ), as well as TORO Integrate itself, across multiple application servers. This redundancy ensures availability.

  • Tier 3, the data tier

    This tier contains our database servers, which will be configured in a master-slave set-up.

To provide a more concrete example, we'll further discuss this set-up in terms of Amazon Web Services (AWS)1.

AWS terms ahead!

As you read on, you'll encounter plenty of AWS concepts. To learn more about AWS, please refer to their documentation.

A diagram illustrating TORO Integrate on top of a highly-available, three-tier network architecture

Different resources require different AWS services for hosting2. Even so, all resources in our set-up will be anchored to the same AWS region3.

AWS regions and Availability Zones

In AWS, you may host your resources in multiple locations worldwide4. These locations are composed of regions and Availability Zones (AZ). Each region is a separate geographic area, completely independent from other regions. The closer the region (where your resources reside) is to the client, the faster data can be served. Each region has multiple, isolated locations known as Availability Zones, connected through low-latency links.
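As a quick illustration, here's a minimal boto3 sketch that lists the Availability Zones visible to a client bound to a given region (the region name below is an illustrative assumption, not one this set-up prescribes):

```python
# List the Availability Zones of an assumed region with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-1")  # assumed region

for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])
```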

For tiers one and two, we shall provision two Availability Zones (B and C), each of which replicates the services of the other5. The same goes for the data tier, whose RDS instances use multi-AZ deployments. In this scenario, we'll likewise assume two AZs for the RDS instances: one zone containing the master RDS and the other containing the slave RDS. Having redundant deployments of the same service or resource paves the way for load balancing and failover, and having multiple, duplicate AZs ensures that not all copies are down when a specific Availability Zone goes down.

To manage all resources under a certain AWS region, we shall provision an Amazon VPC6. The VPC puts all services (including those belonging to different Availability Zones) under a unified scope, which makes it easier to configure the load-balancing and failover capabilities of the system. Work under the VPC includes tasks such as configuring which EC2 instances share loads and how, sharing data across servers for consistency, implementing security, and defining how servers behave in the event of a fatal error.
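To make this concrete, below is a minimal, hedged boto3 sketch that provisions a VPC and one subnet per tier in each of the two Availability Zones. The region, CIDR blocks, and AZ names are illustrative assumptions, not values prescribed by this set-up.

```python
# A sketch of provisioning the VPC and per-AZ, per-tier subnets.
# All CIDR blocks and AZ identifiers are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-1")  # assumed region

# One VPC to put every tier (across both AZs) under a unified scope
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# One subnet per tier in each Availability Zone (B and C)
subnets = {}
for az_suffix, base in (("b", 0), ("c", 100)):
    az = f"ap-southeast-1{az_suffix}"
    for tier, offset in (("web", 1), ("app", 2), ("data", 3)):
        subnet = ec2.create_subnet(
            VpcId=vpc_id,
            CidrBlock=f"10.0.{base + offset}.0/24",
            AvailabilityZone=az,
        )
        subnets[(tier, az)] = subnet["Subnet"]["SubnetId"]
```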

In addition to the Amazon services mentioned above, this set-up will also make use of Amazon EFS for shared file storage.

Automatically configure EC2 resources

You can use AWS OpsWorks to automate the configuration of your Amazon EC2 resources, like those described in this set-up.
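For instance, a stack for this architecture could be created through the OpsWorks API; the boto3 sketch below uses hypothetical names and placeholder ARNs.

```python
# A hedged sketch of creating an AWS OpsWorks stack to automate EC2
# configuration. All names and ARNs are hypothetical placeholders.
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")  # assumed endpoint

stack = opsworks.create_stack(
    Name="toro-integrate-ha",  # hypothetical stack name
    Region="ap-southeast-1",   # assumed target region
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    DefaultInstanceProfileArn="arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role",
)
print(stack["StackId"])
```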

To summarize, our model set-up will use the following AWS resources:

Tier     Amazon Service and Instance Type    Number of Resources
Tier 1   EC2, t2.micro                       2
Tier 2   EC2, t2.micro                       10
Tier 3   RDS, db.t2.micro                    2

Of course, you may have to scale up depending on your business needs. For shifting requirements, you may want to look at Amazon's scale-on-demand offerings.
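If you do opt for scale-on-demand, one possible approach (an illustration, not something this set-up prescribes) is an Auto Scaling group for the Tier 2 application servers. The sketch below assumes a pre-existing launch template and uses hypothetical subnet IDs.

```python
# A hedged sketch of an Auto Scaling group spanning both application
# subnets. The launch template and subnet IDs are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="toro-integrate-app-asg",               # hypothetical
    LaunchTemplate={"LaunchTemplateName": "toro-app-template"},  # assumed to exist
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-appB-example,subnet-appC-example",  # one subnet per AZ
)
```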


Now that we have gone through how all components of the architecture would work together, let us discuss each tier in detail:

Tier 1

In Tier 1, we will provision two NGINX instances, one running on each of the Availability Zones B and C. In this set-up, only one of these NGINX instances runs at a time; the other is intended to take over only if the primary server goes down.

As mentioned, NGINX will act as a reverse proxy and a load balancer. It will distribute traffic between the application servers in Tier 2, across all Availability Zones. This tier is also the face of your network and will act as an extra layer of security. Learn how to configure TORO Integrate on top of NGINX in Running TORO Integrate with NGINX and TLS.
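The page linked above covers the NGINX configuration itself. As for the takeover behavior described earlier, one common pattern (an assumption on our part, not something this document prescribes) is to monitor the primary instance and re-associate an Elastic IP with the standby when the primary stops responding. A minimal boto3 sketch:

```python
# A hedged sketch of failing over between the two NGINX instances by
# re-pointing an Elastic IP at the standby. The instance ID, allocation
# ID, and health check URL are hypothetical placeholders.
import urllib.request

import boto3

STANDBY_ID = "i-0standby0example000"        # hypothetical standby NGINX instance
EIP_ALLOCATION_ID = "eipalloc-0example000"  # hypothetical Elastic IP allocation

def primary_is_healthy(url="http://10.0.1.10/health", timeout=3):
    """Return True if the primary NGINX answers its (assumed) health endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def fail_over_if_needed():
    if primary_is_healthy():
        return
    ec2 = boto3.client("ec2")
    # Point the public Elastic IP at the standby NGINX instance instead
    ec2.associate_address(
        InstanceId=STANDBY_ID,
        AllocationId=EIP_ALLOCATION_ID,
        AllowReassociation=True,
    )
```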

Tier 2

Tier 2 comprises this architecture's application servers. In particular, they are to host the following applications:

  • TORO Integrate
  • ZooKeeper
  • Solr
  • ActiveMQ

Each application will have multiple servers; though costly, this ensures availability. These applications require file storage, which we shall provide by mounting Amazon EFS. This enables us to provision a common file system for applications with multiple instances.
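A minimal boto3 sketch of provisioning that shared file system follows; the creation token and subnet IDs are hypothetical placeholders.

```python
# A sketch of creating the shared EFS file system and exposing one mount
# target per application subnet, so Tier 2 instances in both AZs see the
# same files. Subnet IDs are hypothetical placeholders.
import boto3

efs = boto3.client("efs")

fs_id = efs.create_file_system(CreationToken="toro-integrate-shared-fs")["FileSystemId"]

# One mount target per Availability Zone
for subnet_id in ("subnet-appB-example", "subnet-appC-example"):
    efs.create_mount_target(FileSystemId=fs_id, SubnetId=subnet_id)
```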

Setting up ActiveMQ, SolrCloud, and ZooKeeper

Check out the dedicated pages to learn how to configure ActiveMQ, SolrCloud7, and ZooKeeper.

Tier 3

Tier 3 comprises the databases. In our set-up, we will have two Amazon RDS instances deployed in multiple Availability Zones. These instances will have a master-slave set-up; at any time, only one accepts writes (the master) while the other serves as a standby replica (the slave). Two instances cannot write at the same time because of the possibility of conflicts; we have to segregate the roles to ensure consistency in data. In the event that the master RDS instance crashes, the slave RDS instance will be promoted to master.
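A hedged boto3 sketch of provisioning such a database follows. With MultiAZ=True, RDS maintains the standby replica in another Availability Zone and promotes it automatically on failure; the identifier, engine, and credentials below are illustrative assumptions.

```python
# A sketch of creating the Tier 3 database as a Multi-AZ RDS instance.
# The name, engine, and credentials are illustrative placeholders.
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="toro-integrate-db",  # hypothetical identifier
    DBInstanceClass="db.t2.micro",             # matches the summary table above
    Engine="mysql",                            # assumed engine
    MasterUsername="admin",                    # placeholder credentials
    MasterUserPassword="change-me-please",
    AllocatedStorage=20,
    MultiAZ=True,  # provisions a standby in a second Availability Zone
)
```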


  1. Cloud service providers other than AWS may be used. 

  2. Amazon RDS for relational databases; Amazon EC2 for the NGINX servers, ZooKeeper, Solr, ActiveMQ, and TORO Integrate; EFS for file storage. 

  3. Depending on your business needs, one or multiple AWS regions may be required. This specific set-up assumes regions are independent and do not require interaction with each other. Inter-regional data or service sharing is possible although this will require a slightly modified architecture. 

  4. Not all services are tied to a specific region or AZ; such services are said to be global. 

  5. Although not identical in number. 

  6. Whose scope (at least, according to the intended architecture) is limited to the resources of the region it is configured to manage. 

  7. This page is an introduction to SolrCloud and an overview of the requirements you would have to fulfill if you want to use TORO Integrate with SolrCloud (instead of embedded Solr). The following pages discuss how to configure SolrCloud and ZooKeeper.