Production Environment


The production environment for TORO Integrate is usually required to be highly available and scalable. In this document, we walk through some of the considerations for a production environment using a sample environment deployed on AWS.

The purpose is to give an overview of the resources often required in a production environment, to help you conceptualize or configure your own production environment.

You may substitute, scale, or change the recommended concepts and ideas as you see fit.

Below is a diagram for our sample production environment.

Production Environment Diagram

The illustration shows one possible architecture for a production environment in AWS, wherein the infrastructure is hosted in a specified AWS Region and contained in a Virtual Private Cloud (VPC). A VPC is an isolated section of AWS in which you can define the resources and configurations for your network.

The architecture is also divided across two Availability Zones (AZs): Availability Zone B and Availability Zone C. Within these AZs are three tiers, namely the web tier, the application tier, and the database tier.

This setup is intended to ensure reliability, availability, scalability, and security when configured correctly.

| Tier | Name | Number of Resources | AWS Resources | Purpose |
| ---- | ---- | ------------------- | ------------- | ------- |
| Tier 1 | Web Tier | 2 | EC2 - t2.medium | This tier is where the web servers are located. Instances in this tier are in charge of distributing traffic using NGINX, which acts as a load balancer and a reverse proxy. It also serves as a first line of defense, or demilitarized zone (DMZ), in the network. |
| Tier 2 | Application Tier | 2 | EC2 - m4.xlarge | This is the layer where the applications are deployed. In this production scenario, there are two application servers. This is where you will deploy your TORO Integrate instances; you may also add servers for a standalone Solr instance in this tier. |
| Tier 3 | Database Tier | 2 | RDS - r4.large | This tier includes the database servers: a master (writer) and a slave (reader) database. This relationship ensures that if the master goes down, the slave will automatically be promoted to the main database server. |

Tier 1

The first tier includes two instances of NGINX for redundancy and failover capability. NGINX serves as a reverse proxy and a load balancer for TORO Integrate.

It proxies requests from clients to the application servers. This adds a layer of security, as it prevents users from accessing the application servers directly.

For further information on configuring NGINX, check out this step-by-step guide on how to configure TORO Integrate instances with NGINX and TLS.
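As an illustration, a reverse-proxy and load-balancing setup along these lines could be expressed in NGINX configuration. This is a minimal sketch, not the configuration from the guide above; the upstream addresses, ports, hostname, and certificate paths are all placeholders:

```nginx
# Hypothetical upstream pool of the two Tier 2 application servers.
upstream toro_integrate {
    server 10.0.2.10:8080;   # application server A (placeholder address)
    server 10.0.3.10:8080;   # application server B (placeholder address)
}

server {
    listen 443 ssl;
    server_name integrate.example.com;           # placeholder hostname

    ssl_certificate     /etc/nginx/tls/fullchain.pem;   # placeholder paths
    ssl_certificate_key /etc/nginx/tls/privkey.pem;

    location / {
        # Forward client requests to the upstream pool, preserving
        # the original host and client address for the application.
        proxy_pass http://toro_integrate;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

With this shape, NGINX distributes requests round-robin across the two application servers by default, and clients never connect to the application servers directly.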

Tier 2

The second tier includes two application servers on which TORO Integrate is deployed. This tier is configured this way for the same reason as the resources in Tier 1: redundancy and failover capability. You can learn more about the different server deployments here.

Standalone Solr

For this scenario, it is recommended that you run a standalone Solr or SolrCloud instance, configured in Tier 2. This is useful when you have multiple TORO Integrate instances connected to a single instance of Solr.
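As a rough sketch of what standing up a shared Solr instance might look like, the commands below assume Solr is already installed under `/opt/solr` on a Tier 2 server; the install path, port, and collection name are assumptions for illustration, not values from this guide:

```
# Start Solr in SolrCloud mode on the default port (placeholder install path).
/opt/solr/bin/solr start -c -p 8983

# Create a collection for the TORO Integrate instances to share
# (collection name and shard/replica counts are placeholders).
/opt/solr/bin/solr create -c integrate -shards 1 -replicationFactor 2
```

Each TORO Integrate instance would then be pointed at this shared Solr endpoint rather than running an embedded index of its own.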

Tier 3

The third tier includes the database instances that your TORO Integrate instances will connect to. This tier is composed of two RDS instances in a master-slave relationship, which also provides failover capability.

You can learn more about configuring the database connections for your TORO Integrate instances here, including how to connect to your preferred database.
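To give a concrete sense of how the reader instance might be provisioned, a read replica can be created from the master with the AWS CLI. This is a sketch; the instance identifiers are placeholders, and in practice you would also consider RDS Multi-AZ deployments for automatic failover:

```
# Create a read replica of the master database (identifiers are placeholders).
aws rds create-db-instance-read-replica \
    --db-instance-identifier integrate-db-reader \
    --source-db-instance-identifier integrate-db-master

# If the master becomes unavailable, the replica can be promoted
# to a standalone (writable) instance.
aws rds promote-read-replica \
    --db-instance-identifier integrate-db-reader
```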

File Systems

A network file share is required in this setup so that application configuration files and TORO Integrate packages can be shared between servers. In this example we use Amazon Elastic File System (Amazon EFS), which provides a common data source, or mount point, for the servers to refer to. If unexpected downtime occurs on one of the servers, the server taking over will seamlessly have the necessary files to serve the clients, users, and applications at hand.
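As an illustration of the shared mount, each application server could mount the EFS file system using the Amazon EFS mount helper. This is a sketch assuming an Amazon Linux instance; the file system ID and mount point are placeholders:

```
# Install the EFS mount helper (package name applies to Amazon Linux).
sudo yum install -y amazon-efs-utils

# Mount the shared file system (placeholder file system ID).
sudo mkdir -p /mnt/efs
sudo mount -t efs -o tls fs-12345678:/ /mnt/efs
```

To make the mount persist across reboots, a corresponding line could be added to `/etc/fstab`, e.g. `fs-12345678:/ /mnt/efs efs _netdev,tls 0 0`. Both servers then see the same configuration files and packages at `/mnt/efs`.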