Migrating from bare metal servers to AWS

If you are looking to migrate data from your local Linux servers to AWS EC2 instances, this guide is for you. The steps required for this type of migration are split into sections and are discussed in order.

Prerequisites

  • Successfully deploy the infrastructure using the CloudFormation template. The Linux instances deployed on AWS are where the data will be migrated to.
  • Ensure that you have the credentials needed to access the AWS EC2 instance(s).
  • Make sure that you have root access on both the source and destination server instances.
  • Your SSH ports should be set to the default (port 22).
  • You must have screen installed (yum install -y screen).
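
The prerequisites above can be spot-checked before you begin. The snippet below is an illustrative sketch, not part of any migration tooling; the `need` helper is a name invented here:

```shell
# Illustrative pre-flight check: report whether a required command is available.
need() {
  command -v "$1" >/dev/null 2>&1 && echo "$1: ok" || echo "$1: missing"
}

need screen   # keeps a persistent session during the long migration
need rsync    # used later by the migration script
need ssh      # required for the password-less SSH setup

# The migration needs root on both ends; warn if this shell is not root.
[ "$(id -u)" -eq 0 ] || echo "warning: not running as root"
```

Run the same checks on both the source and destination servers.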

Password-less SSH

You need to set up temporary password-less SSH so that your source server can connect to the destination server. You can achieve this using the following instructions:

  1. Log in to your local server and generate a public/private key pair if you don’t have one already.

    ssh-keygen -t rsa
  2. Use SSH to create a .ssh directory on the destination server. This directory may already exist; if you are sure it does, you may skip this step.

    ssh user@EC2IP mkdir -p .ssh
  3. Finally, append your local server’s public key to the authorized_keys file on the EC2 server.

    cat .ssh/id_rsa.pub | ssh user@EC2IP 'cat >> .ssh/authorized_keys'
  4. Verify password-less SSH by connecting to the destination server.

    ssh user@EC2IP
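
If the connection still prompts for a password, the usual culprit is permissions on the destination: sshd ignores keys when the .ssh directory or authorized_keys file is too permissive. The snippet below is a local sketch of the expected layout and modes, using a throwaway directory and a dummy key rather than a real server:

```shell
# Local sketch only: demonstrates the .ssh layout and permissions sshd
# expects. The directory and key below are throwaway demo values.
demo=$(mktemp -d)
mkdir -p "$demo/.ssh"
chmod 700 "$demo/.ssh"                  # .ssh must not be group/world writable
printf '%s\n' "ssh-rsa AAAAB3...demo user@host" >> "$demo/.ssh/authorized_keys"
chmod 600 "$demo/.ssh/authorized_keys"  # authorized_keys must be private
ls -ld "$demo/.ssh" "$demo/.ssh/authorized_keys"
```

On the real destination server, the equivalent fix is `chmod 700 ~/.ssh` and `chmod 600 ~/.ssh/authorized_keys`.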

Script-assisted migration

To help us with migrating resources, we will be running a script.

It is recommended to install screen on the source server so that we can maintain a persistent session during this potentially long process. You may use your package manager to perform the installation.

  1. We start by launching a screen session with the command:

    screen -S OCSMigration
  2. In case you get disconnected, you can run this command to resume the session after logging back in:

    screen -r
  3. Retrieve the script via:

  4. Make sure it is executable by running the following command:

    chmod +x
  5. Once you have retrieved the script, execute it by running:

    bash ./

    This script will:

    1. Prompt you for the IP address of the target server.
    2. Ask you which directories you would like to migrate. It will ask you for the path of each of the following directories:

      • /data
      • /jdbc-pool
      • /logs
      • /packages
      • /system-tmp
      • /code
      • /tmp
    3. Afterwards, the script will attempt to connect to the destination server as root.

    4. It will then ask you to specify your organization name. This name forms part of the path that was created for you during the CloudFormation setup, which follows this syntax: /datastore/clients/${organizationName}/apps/integrate/${organizationName}1/assets.
    5. It will also ask you to verify the path and ensure that it exists on the destination server, since the script has no way to check whether the directory exists.
    6. After authentication, the script will attempt to set up key-based authentication between the source and destination servers. Make sure that your SSH port is 22, at least for the duration of the migration.
    7. The script will attempt to verify the installation of rsync on both servers.
    8. It will run rsync twice: first to migrate the data, then a second time as a final sweep to catch any missed files or files that changed during the first sync.

    After the process has ended, verify that the files have been migrated successfully. Once verified, you can use this data for your TORO Integrate instance.