A 'production environment' refers to a setting where your organization's services are made available to your intended clients. Typically, code is not moved to a production environment until it is ready and deemed stable, to avoid unwanted and unexpected behavior.
As with most applications, high availability and scalability are desired in production instances of TORO Integrate. There are many setups that can help you achieve these properties, like the three-tier network design featuring high availability. In this document, however, we'll discuss an alternative three-tier network architecture on AWS, illustrated below, that could work for organizations looking for a slightly simpler and more budget-friendly approach.
The purpose of going over an actual implementation is to give you an overview of the resources often required in a production environment, so you can better understand, and hopefully conceptualize, the production environment you want for your own TORO Integrate instance.
You may substitute, scale, or change the recommended concepts and ideas as you see fit.
A three-tier network architecture is suitable for production because it modularizes components, making it easy to replace or update parts without compromising the entire system. AWS was chosen as the cloud provider for its popularity, strong community support, and reasonable scalability options.
In our setup, the infrastructure will be hosted in a specific AWS Region and contained in a virtual private cloud (VPC). A VPC is an isolated section of AWS where you can define resources and configurations for your network. The architecture will span two Availability Zones (AZs): Availability Zone B and Availability Zone C. Each of these AZs contains three tiers; namely, the web tier, the application tier, and the database tier. When configured correctly, this setup ensures reliability, availability, scalability, and security.
| Tier | Name | Number of Resources | AWS Service and Instance Type | Purpose |
|------|------|---------------------|-------------------------------|---------|
| Tier 1 | Web Tier | 2 | EC2, t2.medium | This tier is where the web servers are located. Instances in this tier distribute traffic using NGINX, which acts as a load balancer and a reverse proxy. It also serves as the first line of defense, or demilitarized zone (DMZ), of the network. |
| Tier 2 | Application Tier | 2 | EC2, m4.xlarge | This is the layer where the applications are deployed. In our production scenario, there are two application servers for TORO Integrate. Other application servers, such as a standalone Solr server, may also be configured here. |
| Tier 3 | Database Tier | 2 | RDS, r4.large | This tier contains the database servers: a master (writer) and a slave (reader). Should the master go down, the slave can be promoted to take over as the main database server. |
The first tier includes two instances for NGINX, featuring redundancy and fail-over capabilities. NGINX will serve as a reverse proxy and a load-balancer for TORO Integrate.
It will be used to proxy requests from the client to the application servers. This adds a layer of security, as it prevents users from directly accessing the actual application servers.
How do I run TORO Integrate with NGINX?
Check out this step-by-step guide on how to configure a TORO Integrate instance to work with NGINX and TLS.
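As a rough illustration, an NGINX configuration for this tier could combine an `upstream` block (load balancing) with `proxy_pass` (reverse proxying). This is a minimal sketch, not the full guide's configuration; the hostnames, private IP addresses, port, and certificate paths below are all hypothetical placeholders.

```nginx
# Hypothetical upstream pool of the two Tier 2 application servers.
upstream integrate_backend {
    server 10.0.2.10:8080;   # application server in Availability Zone B (placeholder)
    server 10.0.3.10:8080;   # application server in Availability Zone C (placeholder)
}

server {
    listen 443 ssl;
    server_name integrate.example.com;              # placeholder hostname

    ssl_certificate     /etc/nginx/tls/fullchain.pem;   # placeholder paths
    ssl_certificate_key /etc/nginx/tls/privkey.pem;

    location / {
        # Reverse-proxy client requests to the application tier.
        proxy_pass http://integrate_backend;
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

If one application server becomes unreachable, NGINX will route requests to the remaining upstream server, which is what gives this tier its fail-over behavior.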
The second tier includes two application servers which both run TORO Integrate. This tier was configured this way for the same reasons as the resources on Tier 1 – redundancy and fail-over capabilities.
For production environments, it is recommended to use a standalone Solr or SolrCloud instance, which would also be configured in Tier 2. Having an independent instance is particularly useful when you have multiple TORO Integrate instances connected to a single instance of Solr.
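For reference, starting Solr on its own Tier 2 instance could look like the following sketch, assuming Solr is already installed on the server; the port and ZooKeeper hostnames are placeholders.

```shell
# Sketch: start a standalone Solr instance on its own server (port is an example).
./bin/solr start -p 8983

# Alternatively, start in SolrCloud mode against an external ZooKeeper
# ensemble (hypothetical internal hostnames):
./bin/solr start -c -z zk1.internal:2181,zk2.internal:2181,zk3.internal:2181
```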
The third tier includes the database instances that your TORO Integrate instances will connect to. This tier is composed of two AWS RDS instances that have a master-slave relationship, which also feature fail-over capabilities.
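One way to provision the reader and to perform a fail-over promotion is sketched below with the AWS CLI. The instance identifiers are hypothetical, and this assumes the AWS CLI is installed and configured with the appropriate credentials.

```shell
# Sketch: create a read replica (the reader) from the master RDS instance.
# "integrate-db-master" and "integrate-db-reader" are placeholder identifiers.
aws rds create-db-instance-read-replica \
    --db-instance-identifier integrate-db-reader \
    --source-db-instance-identifier integrate-db-master \
    --db-instance-class db.r4.large

# During a fail-over, promote the reader to a standalone (writable) instance:
aws rds promote-read-replica \
    --db-instance-identifier integrate-db-reader
```

After promotion, the application tier's database connection settings would need to point at the promoted instance's endpoint.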
A network file system (NFS) share will be required in this setup so that application configuration files and Integrate packages can be shared across servers. In this example, we'll be using the AWS Elastic File System (EFS) service. There will be a common data source, or mount, for the servers to refer to. During unexpected downtime on any of the servers, the backup server taking over will seamlessly have the necessary files to serve clients, users, and applications.
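Mounting the shared EFS file system on each application server could look like the sketch below, assuming an NFS client is installed on the instances. The file system ID (`fs-12345678`), Region, and mount point are placeholders; the mount options follow AWS's recommendations for EFS over NFSv4.1.

```shell
# Sketch: mount the shared EFS file system on an application server.
# fs-12345678 and ap-southeast-1 are placeholders for your EFS ID and Region.
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 \
    -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-12345678.efs.ap-southeast-1.amazonaws.com:/ /mnt/efs

# To persist the mount across reboots, add a matching entry to /etc/fstab:
echo "fs-12345678.efs.ap-southeast-1.amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0" \
    | sudo tee -a /etc/fstab
```

With every server mounting the same path, configuration files and Integrate packages written by one server are immediately visible to the others.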