
Simple

Introduction

A Three Tier Architecture is composed of a Presentation Tier, an Application Tier, and a Data Tier.

A Three Tier Architecture is a client-server based architecture in which the user interface, data access, computer data storage, and different logical processes are developed and administered as independent modules on separate platforms.

It is commonly used for web applications, software, or services and involves servers and clients.

Servers are programs that run constantly in the background and exchange information with remote users.

Clients, on the other hand, are programs or users that exchange information with servers.

Three Tier Architecture

The diagram above is an example of a simple three tier architecture setup with TORO Integrate.

It is hosted in Amazon Web Services (AWS) [1]. The infrastructure and its tiers are contained in a VPC [2] belonging to a chosen region [3] and configured in an availability zone [4]. The servers in this diagram take the form of Amazon EC2 instances [5], which are configurable virtual servers you can use to host your applications and services. The instances in Tier 1 and Tier 2 are configured with Amazon Elastic File System (EFS) [6].

This page discusses the three tiers of this architecture, the steps for deploying a simple three tier architecture, and recommended configurations you can follow to set up your own.

Summary of the Three Tiers

  1. Presentation Layer (Client)

    Receives input, receives and displays output.

  2. Application Layer (Server)

    Processes the logic and makes any necessary calculations.

  3. Data Layer (Server)

    Stores and manages the data.

Advantages of a Three Tier Architecture

  • Logical segregation of data
  • Redundancy of services in the Application Layer, which improves server availability
    • Multiple application servers in the second tier decrease the likelihood of downtime because services can easily be interchanged, depending on how you configure your devices
  • Improved scalability and flexibility in terms of:
    • Easier migrations of specific instances or servers
    • Easier additions or upgrades
    • Easier removals or replacements
  • Enhanced security
    • The Application and Data layers can be restricted to only those internal network resources that require access

Disadvantages of a Three Tier Architecture

  • More complex architecture
  • Requires technical expertise to set up and manage
  • Takes more time to set up and configure

Configuring a Three Tier Architecture

The setup procedure discussed below is an example only and assumes that the deployment will be on Linux-based servers; however, a three tier architecture can equally be applied to a Windows network. The configuration can be customised for your own requirements and/or budget.

To simplify larger deployments on AWS we have made available a CloudFormation template. Simply download and run the template to configure a highly scalable and redundant network configuration within minutes.

Sequencing

We will be covering the tiers in this order: Application Tier, Presentation Tier, and Data Tier. The Application Tier needs to be configured before the Presentation Tier so that its endpoints are already available for configuration and testing when we set up the first tier.

Application Tier

The Application Tier is the logical tier from which the Presentation Tier pulls data. It controls application functionality by performing detailed processing. This is where your TORO Integrate instances will be deployed.

To support scalability and redundancy you will need a minimum of two Tier 2 servers. Within these servers, you will need to install your TORO Integrate instances. You may refer to the prerequisites and licenses pages, and then to the pages that follow, to choose your preferred installation method.

You can also configure separate servers to run ActiveMQ [7] and Solr [8]. For maximum scalability, these applications should be installed on separate servers and configured with either a hot spare (ActiveMQ) or a cluster (SolrCloud).

Presentation Tier

The Presentation Tier occupies the top level and displays information related to services available on a website. It is the tier that communicates with the Application Tier and presents data to the users. We recommend using NGINX as a proxy server for the Presentation Tier.

A proxy server is an intermediary server that forwards requests for content from clients to servers across the internet. It allows a client to make an indirect network connection to a network service.

A client connects to the proxy server, then requests a connection, file, or other available resources from a server. The proxy provides the resource either by connecting to the specified server or by serving it from a cache.

There are many types of proxy servers which include Open, Anonymous, Transparent, Distorting, Reverse Proxies, and more, but we will be focusing on Reverse Proxies for this topic.

Reverse Proxies serve the server rather than the client. They provide an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers.

Reverse proxies also have the ability to mask the existence or the characteristics of an origin server or servers which can protect the identity of these servers and act as an additional line of defense against security attacks. They can also distribute the load of incoming requests to multiple servers, compress inbound and outbound data, and use a cache for commonly requested content.

NGINX is an open source reverse proxy server. NGINX can be used to enhance performance, scalability, and availability. It can also be used for URL rewriting and SSL termination.

Nginx Diagram

In our three tier architecture NGINX will be the agent or mediator that delivers, manages, and accepts data to and from the user and the server, whilst protecting and masking the application servers.

Install

NGINX is fairly easy to install on Linux systems. For this example, we will assume that you are using a machine with CentOS 7 installed.

Step 1: Add the EPEL repository

You will need to add the CentOS 7 EPEL repository. Open your terminal and enter the following command:

sudo yum install epel-release

Step 2: Install NGINX

Now, you can install NGINX by using this command:

sudo yum install -y nginx

Step 3: Enable and Start NGINX

Enable and start the NGINX service with this command:

sudo systemctl enable nginx && sudo systemctl start nginx

Step 4: Allow HTTP/s Requests

If your firewall is running, you must execute these commands to allow http and https requests:

sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --reload

Step 5: Verify

Proceed to verify if you can access your web server by opening your web browser and typing in your IP address in the address bar.

NGINX Tips

If you encounter any difficulties in accessing your NGINX server, try checking the SELinux [10] configuration of your machine. You may check it by using the command:

getenforce

Set it to permissive if it is set to enforcing by executing:

setenforce 0

For installation of NGINX in other Linux distributions or operating systems or other methods of installing NGINX, you can refer to their installation page.

Configure

The next step is to create the configuration files for your application servers in the Application tier.

Step 1: Create NGINX Directories

Proceed to the directory of NGINX and create the directories sites-available and sites-enabled:

cd ${nginxPath}

Begin to create the directories by typing:

mkdir sites-available && mkdir sites-enabled

Step 2: Create Configuration Files

Proceed by creating your own configuration file by using your text editor of choice.

vi ${nginxPath}/sites-available/${WebServiceName}.conf

Step 3: Setting Configurations

This is an example configuration you may want to check out.

upstream ${WebServiceName} {
    server 192.168.21.56:8983 fail_timeout=0;
}

server {
    listen      80;
    server_name ${WebServiceName}.com;

    access_log /var/log/nginx/${WebServiceName}_access.log;
    error_log  /var/log/nginx/${WebServiceName}_error.log;

    location / {
        proxy_pass http://${WebServiceName};
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 240;
        proxy_send_timeout 240;
        proxy_read_timeout 240;
    }
}

You may apply these settings when you are just starting out, but here are other parameters you may want to take note of.

| Term | Definition |
| --- | --- |
| upstream | Defines a group of servers that can be referenced by the proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, and memcached_pass directives. Servers can listen on different ports, and servers listening on TCP and UNIX-domain sockets can be mixed. The benefit of the upstream block is that you can configure more than one server/port/service as upstream and distribute the traffic across them. |
| server | Defines the address or identifiers of a server. The address may be a domain name or IP address, optionally with a port number; if no port is specified, port 80 is used. |
| fail_timeout | Sets the time during which the specified number of unsuccessful attempts to communicate with the server must happen for the server to be considered unavailable. |
| listen | Sets the address and port for IP, or the path for a UNIX-domain socket, on which the server will accept requests. |
| access_log | Sets the path, format, and configuration for a buffered log write. |
| error_log | Sets the path where error logs are written. |
| location | Sets configuration depending on a request URI. A location can be defined either by a prefix string or by a regular expression, and is used to match expressions and create rules for them. The matching is performed against a normalized URI, after decoding the text encoded in the "%XX" form, resolving references to relative path components "." and "..", and possibly compressing two or more adjacent slashes into a single slash. |
| return | Stops processing and returns the specified code to the client. |
| proxy_pass | Sets the protocol and address of a proxied server, and an optional URI to which a location should be mapped. "http" or "https" can be specified as the protocol. |
| proxy_set_header | Allows redefining or appending fields to the request header passed to the proxied server. The value can contain text, variables, and their combinations. |
| proxy_redirect | Sets the text that should be changed in the "Location" and "Refresh" header fields of a proxied server response. |
| proxy_connect_timeout | Defines a timeout for establishing a connection with a proxied server. It should be noted that this timeout cannot usually exceed 75 seconds. |
| proxy_send_timeout | Sets a timeout for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request. If the proxied server does not receive anything within this time, the connection is closed. |
| proxy_read_timeout | Defines a timeout for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response. If the proxied server does not transmit anything within this time, the connection is closed. |
| ssl | Enables the HTTPS protocol for the given virtual server. |
| ssl_certificate | Specifies a file with the certificate in the PEM format for the given virtual server. |
| ssl_certificate_key | Specifies a file with the secret key in the PEM format for the given virtual server. |
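As a sketch only, the fragment below combines several of the parameters above: it balances requests across two hypothetical Tier 2 application servers and terminates SSL at the proxy. All IP addresses, ports, domain names, and certificate paths are placeholders for your own values.

```nginx
# Hypothetical: two TORO Integrate application servers behind one upstream.
upstream integrate_cluster {
    server 192.168.21.56:8080 fail_timeout=0;
    server 192.168.21.57:8080 fail_timeout=0;
}

server {
    listen 443 ssl;
    server_name example.com;

    # Placeholder certificate and key paths.
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # Requests are distributed round-robin across the upstream servers.
        proxy_pass http://integrate_cluster;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With both servers declared in the upstream block, NGINX distributes requests round-robin by default; if one Tier 2 server becomes unavailable, traffic continues through the other, which is the redundancy advantage discussed earlier.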

Step 4: Create Symbolic Links

After drafting your specified configuration files for the services you have in Tier 2, you may proceed to create a symbolic link for these configuration files.

ln -s ${nginxPath}/sites-available/${WebServiceName}.conf ${nginxPath}/sites-enabled/${WebServiceName}.conf
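If you maintain several configuration files in sites-available, the loop below is a sketch of how to enable all of them at once. The NGINX_PATH variable is an assumption; it defaults to a throwaway demo directory here so the script can be tried safely, and you would point it at your real NGINX directory on a server.

```shell
# Sketch: enable every site by symlinking sites-available into sites-enabled,
# skipping links that already exist. NGINX_PATH is a placeholder; it defaults
# to a temporary demo directory so this can be run without touching /etc/nginx.
NGINX_PATH="${NGINX_PATH:-$(mktemp -d)}"
mkdir -p "$NGINX_PATH/sites-available" "$NGINX_PATH/sites-enabled"

# Demo configuration file so the loop has something to link.
touch "$NGINX_PATH/sites-available/demo.conf"

for conf in "$NGINX_PATH"/sites-available/*.conf; do
    name=$(basename "$conf")
    target="$NGINX_PATH/sites-enabled/$name"
    # ln -s fails if the link already exists, so check first.
    [ -e "$target" ] || ln -s "$conf" "$target"
done

ls "$NGINX_PATH/sites-enabled"
```

Because existing links are skipped, the loop is safe to re-run after adding new configuration files.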

Considerations

  • If you would like to test this within your network, it is suggested that you first modify your host file to test if you can access your web services locally.
  • You need to register your domain if you’d like your web service to be available publicly.


Data Tier

The Data Tier houses the database servers where information is stored and retrieved. This tier can be accessed through the Application Layer and consists of data access components to aid in resource sharing.

If deploying on AWS, you may deploy an RDS instance [11] or an EC2 instance with MySQL [12], or any other database permitted by your license.

For now, we will assume that you are using a machine with the CentOS 7 operating system.

You can proceed to execute the following steps to install MySQL on your machine.

Prerequisites

  • wget

Install

Step 1: Retrieve MySQL Repository

Retrieve the MySQL repository via wget, then update. This will enable MySQL to be installed via yum.

wget https://dev.mysql.com/get/mysql57-community-release-el7-11.noarch.rpm

Then proceed to type:

sudo rpm -ivh mysql57-community-release-el7-11.noarch.rpm

Followed by this yum command:

sudo yum update

You can check for the latest MySQL Yum repository here.

Step 2: Install MySQL

Install MySQL via yum.

sudo yum install mysql-server

Step 3: Enable and start MySQL

Enable and start MySQL. Enabling the service lets MySQL run on startup; you can exchange enable with disable if you decide to revert this in the future. The start command, on the other hand, starts the service immediately.

sudo systemctl enable mysqld && sudo systemctl start mysqld

Step 4: Secure your MySQL server

The following command addresses security concerns by hardening your MySQL server. You will be asked to change your MySQL root password, remove anonymous user accounts, disable remote logins for the root user, and remove test databases. You can answer these according to your preference, but you may want to check MySQL's Reference Manual on this program.

sudo mysql_secure_installation

If you would like to install MySQL on a machine with a different operating system, please refer to MySQL’s installation page.

You can also choose to use AWS's Relational Database Service (RDS) and deploy an RDS instance to run, administer, and scale a relational database on AWS. If you'd prefer another type of database instance, you may refer to TORO Integrate's prerequisites page to see the supported types of databases.


Configuring a Network File Share

Shared resources such as configuration files and TORO Integrate packages should be written to a network file share. The instructions below explain how to configure an NFS server and an NFS client on your network. Alternatively, if you are configuring your three tier architecture on AWS, you may like to use AWS's Elastic File System (EFS) service instead.

Setting up an NFS Server and Client

NFS, or Network File System, is a protocol that involves a server and a client. NFS enables you to mount shared directories and directly access files within a network.

Below are steps which will instruct you on how to configure an NFS Server and Client setup for CentOS 7.

Configurations for the NFS Server

Step 1: Determine the NFS Server

Determine the instance which will act as the NFS server for all the files you’d like to be universal in your network.

Step 2: Install NFS packages

Up next, we will install the NFS packages with yum.

yum install nfs-utils

nfs-utils package

The nfs-utils package contains the tools for both the server and the client which are necessary for the kernel's NFS abilities.

Step 3: Create or determine the directory that will be shared for your NFS setup.

For this setup, we recommend creating or targeting a directory at /datastore, but you may always choose differently or create another directory according to your preference.

mkdir /datastore

If this directory already exists, you may skip this step.

Step 4: Permissions of NFS Directory

Once you have created or determined the NFS directory from the NFS server, you can now set the permissions of the folder using the chmod and chown commands.

chmod -R 755 /datastore

After modifying the directory permissions, proceed to specify the owner of the directory via:

chown toro-admin:toro-admin /datastore

Step 5: Start and Enable the Services

After the packages have been installed and the directories have been determined, created, and configured, you may now proceed to enable the services to set up the NFS server.

systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap
systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap

Step 6: Sharing the NFS over the network

For this portion we will now share the NFS directory to enable clients to access it. First, open the /etc/exports file and edit it with your preferred text editor.

vi /etc/exports

Next, input the following:

/datastore  <nfs-client-ip>(rw,sync,no_root_squash,no_all_squash)

Note: The server and client must be able to ping one another.
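The export entry above can also be scripted. The sketch below appends the entry only if the directory is not already exported, so it is safe to re-run. EXPORTS_FILE and CLIENT_IP are assumptions: the file defaults to a temporary copy for safe experimentation, and on a real NFS server you would set it to /etc/exports and use your actual client address.

```shell
# Sketch: idempotently add an NFS export entry.
# EXPORTS_FILE defaults to a temp file; use /etc/exports on a real server.
# CLIENT_IP is a placeholder for your NFS client's address.
EXPORTS_FILE="${EXPORTS_FILE:-$(mktemp)}"
SHARE_DIR="/datastore"
CLIENT_IP="192.168.21.60"

# Append only when the directory is not already exported.
if ! grep -q "^$SHARE_DIR[[:space:]]" "$EXPORTS_FILE"; then
    echo "$SHARE_DIR  $CLIENT_IP(rw,sync,no_root_squash,no_all_squash)" >> "$EXPORTS_FILE"
fi

cat "$EXPORTS_FILE"
```

Because the grep guard checks for an existing entry first, running the script twice still leaves a single export line for /datastore.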

Step 7: Start the NFS service

You can now start the nfs service for your server.

systemctl restart nfs-server

Step 8: Configuring the Firewall for CentOS 7

firewall-cmd --permanent --zone=public --add-service=nfs
firewall-cmd --permanent --zone=public --add-service=mountd
firewall-cmd --permanent --zone=public --add-service=rpc-bind
firewall-cmd --reload

Setting up the NFS Client

After you have set up the NFS Server, it is now time to set up the client side.

Step 1: Install NFS packages

Similar to the first steps in the server side, you should install the NFS packages with yum.

yum install nfs-utils

Step 2: Create the NFS Directory mount point/s

mkdir /datastore

Step 3: Mount the directory

mount -t nfs <nfs-server-ip>:/datastore /datastore

Step 4: Verify NFS mount

df -kh

You will see a list of file systems, and in that list should be the NFS mount you configured earlier.

After verifying that your mounts exist, you may now proceed to populate your NFS directory.
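The verification can also be scripted. The helper below is a sketch: it checks whether a given path appears as an NFS mount by reading /proc/mounts, and accepts an alternative mounts-format file as a second argument so it can be tried without a live NFS mount. The sample server IP is a placeholder.

```shell
# Sketch: succeed (exit 0) if the given path is mounted over NFS.
# Reads /proc/mounts by default; an optional second argument points at
# another mounts-format file, which makes the check easy to try offline.
is_nfs_mount() {
    awk -v mp="$1" '$2 == mp && $3 ~ /^nfs/ { found = 1 } END { exit !found }' "${2:-/proc/mounts}"
}

# Try it against a sample mounts file (server IP and paths are placeholders).
sample=$(mktemp)
echo "192.168.21.50:/datastore /datastore nfs4 rw,relatime 0 0" > "$sample"

if is_nfs_mount /datastore "$sample"; then
    echo "/datastore is an NFS mount"
fi
```

On a real client you would call is_nfs_mount /datastore with no second argument so it inspects the live /proc/mounts.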

Setting up a Permanent NFS Mount

By default, you will have to remount all your NFS directories after every reboot. To make sure they are available on boot, you can follow these steps:

Step 1: Modify the fstab file

Open the fstab file with your text editor of choice.

vi /etc/fstab

Step 2: Add the configuration to the file

Add the following line to the file to have the mount point automatically configured after reboot.

<nfs-server-ip>:/datastore /datastore  nfs   defaults 0 0

Conclusion

Above is a simple overview of what a Three Tier Architecture is, what each tier represents, how to set up each tier, and how all these tiers come together. If you liked this setup, you may be interested in checking out our High-Availability Design page, which describes a more complex yet sophisticated and reliable infrastructure setup for TORO Integrate.


  1. Amazon Web Services, often abbreviated as AWS, is a cloud service provider that offers services including compute power, database storage, and more. 

  2. Amazon Virtual Private Cloud (Amazon VPC) is a service which enables you to provision an isolated section of AWS, where you can create and assign AWS resources, as well as manage and modify your network configurations and more. 

  3. AWS Regions are separate geographic areas which contain Availability Zones. 

  4. Availability Zones are isolated locations within data center regions from which public cloud services operate and originate. 

  5. Amazon EC2 instances are virtual servers in Amazon's Elastic Compute Cloud wherein you can host your applications. 

  6. Amazon Elastic File System is a service that provides scalable elastic storage capacity that can be used or mounted on EC2 instances to accommodate processes. 

  7. ActiveMQ is a message-oriented middleware that lets two applications or components communicate with each other. It also processes all the messages in queues to make sure that the interaction between the two applications is reliably executed. 

  8. SolrCloud is a mode of running Solr, an open source search platform. It is capable of index replication, fail-over, load balancing, and distributed queries with the help of ZooKeeper [9]. 

  9. ZooKeeper is an open source server application that can reliably coordinate distributed processes, maintain and manage configuration information, naming conventions, and synchronization for distributed cluster environments. 

  10. SELinux stands for Security-Enhanced Linux. It is a module of the Linux kernel that you can configure to manage and ensure security and access control. 

  11. Amazon RDS instances are relational databases hosted, operated, and set up in the cloud. 

  12. MySQL is an open source database management system.