Measuring TORO Integrate's web service performance

This page explains the performance impact of Monitor and Tracker through a performance test. Given a simple RESTful web service, we will measure throughput as the number of transactions processed per second when (a) both Monitor and Tracker are on, (b) Monitor is on but Tracker is off, (c) Monitor is off but Tracker is on, and (d) both Monitor and Tracker are off.

This test was conducted on a TORO Integrate v3.0 instance.

Previous and future releases may produce different results.

Test environment

A controlled, isolated environment is vital to prevent unrelated external factors from affecting the results. For this test, TORO Integrate and its family of related services are configured as follows:

JVM

Three virtual machines were provisioned on AWS for this test: one each for TORO Integrate, ActiveMQ, and Solr. These VMs were deployed through Amazon's EC2 service and all use c3.xlarge instances.

Specification       Value
CPU                 4 cores
Storage             40 GB SSD
RAM                 7.5 GB
Operating System    Amazon Linux

TORO Integrate's JVM will use the default JVM configuration provided out-of-the-box by the TORO Integrate start-up script, except for the Java heap size: both the -Xms and -Xmx parameters will be set to 2G.
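
For illustration, the snippet below shows how such heap settings might be passed; the variable name here is a hypothetical example and may differ from what the actual TORO Integrate start-up script reads.

# Hypothetical example; the actual variable read by the TORO Integrate
# start-up script may differ.
JAVA_OPTS="$JAVA_OPTS -Xms2G -Xmx2G"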

Tomcat

We will be using the default Tomcat configuration; no parameters will be modified for the Tomcat container.

Core applications

As mentioned earlier, ActiveMQ and Solr will be configured as stand-alone instances on independent virtual machines. Both instances will be configured with a 3 GB JVM heap size, leaving the remaining configuration options at their default values.
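
As a rough sketch, a 3 GB heap might be configured as shown below; the exact variable names depend on the ActiveMQ and Solr versions and how they are launched, so treat them as assumptions rather than the exact configuration used in this test.

# ActiveMQ (hypothetical env setting)
ACTIVEMQ_OPTS_MEMORY="-Xms3G -Xmx3G"

# Solr (hypothetical solr.in.sh setting)
SOLR_HEAP="3g"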

TORO Integrate will likewise sit on its own VM. It will contain only one Integrate package, which in turn contains the script exposing the RESTful web service that our performance measurement tool will consume later on.

We will not be using an external database for this test; therefore, all core databases will reside in an embedded HSQLDB database.

Procedure

In our performance test, we will send requests to a RESTful web service exposed by TORO Integrate using a widely-used benchmarking tool, Apache Bench. The RESTful web service is exposed via Groovy code; it simply accepts GET requests with no required parameters and returns a "Hello, world!" JSON response. We will hit the aforementioned endpoint as much as possible whilst:

  • Both Tracker and Monitor are on
  • Monitor is on while Tracker is off
  • Monitor is off while Tracker is on
  • Both Tracker and Monitor are off

Turning off Monitor

Unlike Tracker, which can be turned off through application properties, Monitor is only turned off internally under certain conditions, such as the absence of Monitor rules.
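
For illustration only, disabling Tracker through application properties might look like the line below; the property name is a hypothetical placeholder, so consult the TORO Integrate documentation for the actual setting.

# Hypothetical property name shown for illustration
tracker.enabled=false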

Below is the RESTful web service-exposing script:

import io.toro.integrate.core.api.APIResponse
import org.springframework.web.bind.annotation.*

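// Exposes GET /api/test/sayHello, returning a "Hello, world!" JSON response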
@RestController
@RequestMapping(value = 'test', produces = ['application/json', 'application/xml'])
class Test {

    @RequestMapping(value = 'sayHello', method = [RequestMethod.GET])
    APIResponse sayHello() {
        new APIResponse('Hello, world!')
    }
}

Meanwhile, the expected response of this web service is a 52-byte JSON payload with the following content:

{
    "result": "OK",
    "message": "Hello, world!"
}
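
Before benchmarking, the endpoint can be sanity-checked with an ordinary HTTP client. The curl call below is an illustrative example, not part of the original procedure, using the same URL that Apache Bench targets:

curl -H 'Accept: application/json' http://localhost/api/test/sayHello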

The load-generating client, Apache Bench, will reside on the same machine as TORO Integrate. Each of the four test cases mentioned earlier will execute the steps laid out below.

  • Request URL: GET /api/test/sayHello
  • Concurrent users: 150
  • Number of requests: 20,000
  • Benchmark invocation iteration: 5
  • Apache Bench command, where -k enables HTTP keep-alive, -n sets the
    total number of requests, and -c sets the concurrency level:

    ab -k -n 20000 -c 150 http://localhost/api/test/sayHello


The steps of this test are:

  1. Ensure the TORO Integrate instance is freshly started without error.
  2. Run a single invocation of the aforementioned Apache Bench command.
  3. Ensure that no errors occur in TORO Integrate and that the invocation reports 0 errors and no non-2xx responses. If there are errors, halt the benchmarking test and fix the issue.
  4. Wait for TORO Integrate to go back to its idle state after each test case; idle meaning all ActiveMQ messages of Tracker and Monitor have been de-queued by TORO Integrate between invocations. If Tracker and/or Monitor is turned off, wait a few seconds before running the next test case.
  5. Record the requests-per-second result, otherwise known as throughput.
  6. Repeat from step #2 until the throughput result stabilizes, that is, until it stops changing significantly between invocations. The goal here is to let Tomcat arrive at a state where everything is well initialized; this might take a few ab invocations. It is at this optimal runtime state that we officially start benchmarking. A sketch automating this warm-up and measurement loop appears after this procedure.
  7. Repeat steps #2 to #5 until you have successfully recorded five consecutive and stable ab invocation throughput results.

It is important to ensure that TORO Integrate is at a state where all components are already initialized, as would be the case in a production environment. All ab invocation results from non-optimal states should be discarded.
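
The snippet below is a minimal Groovy sketch of the warm-up and measurement loop in steps #2 to #7. It assumes ab is on the PATH and uses an illustrative 2% stabilization threshold; it is not part of the original test harness.

// Hypothetical harness; the 2% threshold and parsing logic are
// illustrative assumptions.
def runAb = {
    def proc = ['ab', '-k', '-n', '20000', '-c', '150',
                'http://localhost/api/test/sayHello'].execute()
    def output = proc.inputStream.text  // ab's full report
    proc.waitFor()
    // ab prints a line like: "Requests per second:    950.12 [#/sec] (mean)"
    def line = output.readLines().find { it.startsWith('Requests per second') }
    new BigDecimal((line =~ /[\d.]+/)[0])
}

// Step #6: warm up until two consecutive runs differ by less than 2%.
def previous = runAb()
while (true) {
    def current = runAb()
    def stable = (current - previous).abs() / previous < 0.02
    previous = current
    if (stable) break
}

// Step #7: record five consecutive stabilized results, then average them.
def samples = (1..5).collect { runAb() }
println "Average throughput: ${samples.sum() / samples.size()} requests/second"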

Results

As discussed earlier, throughput will be measured to determine TORO Integrate's performance per test case. We will average five consecutive throughput results to ensure the fairness of the results. From running this test, our team obtained the following:

Results in table

Results in graph

The table and chart above show the minimum, maximum, and average throughput per test case scenario across five consecutive, stabilized iterations.

Factors

There are several other factors that directly affect the overall performance of TORO Integrate aside from Tracker and Monitor. Tuning the JVM and Tomcat server also plays a major role in increasing the overall throughput of your web services.

Conclusion

Although very useful, Tracker and Monitor's indexing of data causes significant performance overhead; our findings show that turning these features off results in higher web service throughput.

Oftentimes, turning Monitor off is not an option. Hence, it may be more useful to compare the throughput results when (a) both Monitor and Tracker are on against (b) when Monitor is on while Tracker is disabled.

In our test above, (b) turning off Tracker whilst keeping Monitor enabled produced 32.47% more throughput than (a) leaving both Tracker and Monitor enabled; in other words, the average throughput of case (b) was roughly 1.32 times that of case (a).