
Deploying TORO Integrate in a Docker Swarm Cluster

Docker swarm is a clustering tool for Docker containers. By deploying TORO Integrate in a Docker swarm cluster, you can manage your Docker instances with less effort. Easy scaling of the number of TORO Integrate containers, automatic recovery of failed instances, and network load balancing are just some of the benefits of using Docker in swarm mode. Since swarm mode is built into the Docker Engine, setup is fast and easy, and you no longer need to set up a third-party application to manage your cluster.

In this guide, we'll assume that you already know how to set up a Docker swarm cluster. Since we'd like the cluster to have a shared file system, knowing how to set up an NFS server is also required. We're only going to configure a simple topology, and note that this is not a recommended setup for a production environment. The goal of this guide is to give you an idea of how to deploy TORO Integrate in a Docker swarm cluster. To learn how to set up a production-ready Docker swarm cluster for TORO Integrate, please see Configuration Scenarios.
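
As a quick refresher, a two-node swarm like the one used in this guide can be created with the commands below. The IP address and join token are placeholders; substitute your own values.

# On the manager node (Server 1), initialize swarm mode.
docker swarm init --advertise-addr 192.168.1.10

# On the worker node (Server 2), join the cluster using the token
# printed by the command above.
docker swarm join --token <worker-join-token> 192.168.1.10:2377

# Back on the manager, confirm that both nodes are part of the cluster.
docker node ls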

We recommend reading the simple deployment guide first

If you haven't read the simple deployment guide, you should read it first before proceeding with this guide.

Topology

The diagram below illustrates our simple infrastructure:

Simple Docker Swarm Topology

In this guide, a directory on the NFS server called /datastore will be shared between the Docker servers (Server 1 and Server 2). It is mounted as /datastore on both Docker nodes.
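
A minimal sketch of how the share could be set up is shown below, assuming the NFS server's address is 192.168.1.5 and both Docker nodes sit on the 192.168.1.0/24 network; adjust the addresses and export options for your environment.

# On the NFS server: export /datastore to the Docker nodes.
echo "/datastore 192.168.1.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra

# On each Docker node (Server 1 and Server 2): mount the share.
mkdir -p /datastore
mount -t nfs 192.168.1.5:/datastore /datastore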

Behaviors

Before proceeding, here are some behaviors you should be aware of when using Docker swarm:

  • Docker swarm doesn't guarantee that your requests will always be served by the same server. This means endpoints that rely on a session stored in the application itself might not work as expected.
  • TORO Integrate requires a license for each virtual machine used in this setup. You'll see the license setup page in the browser upon starting TORO Integrate for the first time.
  • Schedulers will be executed once per package replica, so with packages replicated across two instances, each scheduler will run twice.
  • This setup will not work if you're using the embedded Hypersonic database, as it only allows one machine to use the database at a time.
  • In production, this setup must be done with external applications: an external Solr server, an external ActiveMQ broker, and external database sources.

Procedure

In this setup, we will create the required data directories in the NFS server's /datastore directory so that all data files and directories are available across all servers. By doing so, we no longer need to worry about which Docker node TORO Integrate's container will be deployed on, as its data will be available on any of the servers.

  1. Create the data directories. Make sure you have created them in the shared file system (a quick way to verify this from both nodes is shown after the command).

    mkdir -p /datastore/apps/toro-integrate/{data,packages,logs,.java,code}
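
    Because /datastore is an NFS mount, files created on one node should be visible on the other. A quick way to confirm this, purely as a sanity check, is to list the new directories from each Docker node and compare the output:

    # Run on both Server 1 and Server 2; the output should be identical.
    ls -la /datastore/apps/toro-integrate/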
    
  2. Once the directories have been created, we can provision TORO Integrate as a Docker service. Execute the command below to start a new Docker service. Don't forget to change the values of the environment variables (an example of setting them follows the best-practice note below).

    docker service create \
    -p :8080 \
    -p :8443 \
    --restart-condition on-failure --restart-max-attempts 5 \
    --env JAVA_XMS=${JVM_XMS_MEMORY}m --env JAVA_XMX=${JVM_XMX_MEMORY}m \
    --mount type=bind,source=/etc/localtime,destination=/etc/localtime,readonly \
    --mount type=bind,source=/datastore/apps/toro-integrate/data,destination=/data/data \
    --mount type=bind,source=/datastore/apps/toro-integrate/packages,destination=/data/packages \
    --mount type=bind,source=/datastore/apps/toro-integrate/code,destination=/data/code \
    --mount type=bind,source=/datastore/apps/toro-integrate/logs,destination=/data/logs \
    --mount type=bind,source=${HOME}/.java,destination=/root/.java \
    --name toro-integrate \
    toroio/integrate
    

    Best practice

    It's good practice to mount /etc/localtime as read-only in the container to ensure that the host and container time zones stay in sync.
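
    The JVM_XMS_MEMORY and JVM_XMX_MEMORY variables above are placeholders for the JVM heap sizes in megabytes (the command appends the m suffix for you). One way to supply them, shown here with example values only, is to export them in the shell before running docker service create:

    # Example values only; size the heap according to your own servers.
    export JVM_XMS_MEMORY=1024
    export JVM_XMX_MEMORY=2048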

  3. Verify that the service has been created and replicated. Run the command below to show all the Docker services.

    # Output of `docker service ls`.
    ID              NAME              MODE          REPLICAS    IMAGE               PORTS
    sogsv6m99rw8    toro-integrate    replicated    1/1         toroio/integrate    *:30004->8080/tcp,*:30005->8443/tcp
    

    In the REPLICAS column, the value should be 1/1, which means one task has been created for this service. To get more details about your service, you can run the command below:

    docker service ps toro-integrate
    
    ID             NAME               IMAGE              NODE      DESIRED STATE   CURRENT STATE           ERROR   PORTS
    sadi53w0h73z   toro-integrate.1   toroio/integrate   docker2   Running         Running 3 minutes ago
    

    As seen in the NODE column, the task has been created on the docker2 server.
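
    If you need more detail, such as the configured mounts, restart policy, and published ports, you can also inspect the service directly:

    docker service inspect --pretty toro-integrate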

  4. Scale the service to two tasks. By executing the command below, Docker will create an additional instance of TORO Integrate and automatically load balance requests directed to our application.

    docker service scale toro-integrate=2
    
    toro-integrate scaled to 2
    overall progress: 2 out of 2 tasks
    1/2: running   [==================================================>]
    2/2: running   [==================================================>]
    verify: Service converged
    

    This setup is good if you have stateless API endpoints that must always be ready for high traffic. Check the service again to see where the instances have been deployed:

    docker service ps toro-integrate
    
    ID             NAME               IMAGE              NODE      DESIRED STATE   CURRENT STATE            ERROR               PORTS
    sadi53w0h73z   toro-integrate.1   toroio/integrate   docker2   Running         Running 15 minutes ago
    j29lbr3dumul   toro-integrate.2   toroio/integrate   docker1   Running         Running 4 minutes ago
    

    In this example, the second instance has been deployed on the docker1 server. Docker should automatically choose the best server on which to deploy the second instance of TORO Integrate.

To access TORO Integrate, you can use any of the Docker servers' IP addresses with the mapped port. Docker should automatically route you to a server running TORO Integrate.
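
For example, with the published ports shown earlier (30004 for HTTP), a request to either node should reach TORO Integrate through the swarm routing mesh. The IP addresses below are placeholders for your own Docker nodes, and the port will differ depending on what docker service ls reports for your service:

# Either node works; the routing mesh forwards the request to a running task.
curl -I http://192.168.1.10:30004/
curl -I http://192.168.1.11:30004/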

Upgrading to the Latest Version

With a simple Docker setup, you usually just need to pull the new image from the Docker registry and then recreate the container. In a swarm setup, however, it's even easier: you only need to tell your Docker service to force-update all of its tasks.

Execute the command below to tell Docker to update all the tasks of the service:

docker service update --force toro-integrate

Upon command execution, all nodes should automatically download the newest release from Docker Hub and recreate all of the service's tasks.

Good to know: Tips from TORO

  • As always, it's best to read the upgrade notes of each release before upgrading your instances.
  • Docker swarm supports different deployment strategies, such as rolling updates, to lessen service interruption; see Docker's documentation on service updates to learn more. A sketch of one such strategy follows this list.
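
As a sketch of one such strategy, the command below, using the service created earlier, configures a rolling update that replaces one task at a time, waits 30 seconds between tasks, and rolls back automatically if the update fails; the values are examples only.

# Update one task at a time, pausing 30 seconds between tasks,
# and roll back automatically if the update fails.
docker service update \
  --update-parallelism 1 \
  --update-delay 30s \
  --update-failure-action rollback \
  --force toro-integrate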

That's it! We hope this will help you get started on deploying TORO Integrate as a Docker service.