
Benchmarks for a RESTful Web Service

The following benchmark tests will focus on the impact that the Monitor and Tracker search indexes have on performance.

The Monitor search index is used to index the metadata associated with every HTTP request that invokes a service. This data is typically used to limit the number of requests a user can make against an API and/or to create billing reports for API usage.

The Tracker search index is used to index the payload of every request received by an endpoint. The Tracker index is very useful for auditing transactions, troubleshooting errors, and resubmitting failed transactions.

Although very useful, these indexes introduce significant performance overhead. In this article we will evaluate the overhead these indexes create for a simple RESTful web service. The benchmark will measure the throughput of the web service in transactions per second.

Test Environment & Instance Configuration

For this performance benchmarking, the test environment is as follows:

TORO Integrate

TORO Integrate was configured with a single Integrate Package containing the test script exposed as a RESTful web service for our benchmarking tool to consume.

ActiveMQ and Solr were configured as standalone instances on independent virtual machines (VMs). Both instances were given a 3GB JVM heap, with the rest of the configuration left at its defaults.
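For reference, a 3GB heap can typically be set through each product's environment file. The snippet below is a sketch only; the exact file locations and variable names (bin/env for ActiveMQ, bin/solr.in.sh for Solr) should be verified against the versions in use.

# ActiveMQ (bin/env): set the broker's JVM heap
ACTIVEMQ_OPTS_MEMORY="-Xms3g -Xmx3g"

# Solr (bin/solr.in.sh): set the Solr JVM heap
SOLR_HEAP="3g"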

The database used was the default embedded database, HSQLDB.

Virtual Machine Instance

Three VMs were provisioned at AWS for the test, one each for TORO Integrate, ActiveMQ, and Solr. Each VM was an Amazon EC2 compute-optimized c3.xlarge instance.

Specification      Value
CPU                4 cores
Storage            40GB SSD
RAM                7.5GB
Operating System   Amazon Linux

Tomcat Configuration

In this test, all of Tomcat's configuration is left at its defaults.

JVM Configuration

In this test, the JVM options use the default configuration provided by the TORO Integrate startup script, except for the JVM heap size: -Xmx and -Xms are both set to 2G.
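Assuming the startup script reads extra JVM options from a standard environment variable (the exact variable name depends on how TORO Integrate is launched, so treat this as a sketch), the change amounts to:

# Sketch only: append the heap settings to the JVM options picked up by the startup script
JAVA_OPTS="$JAVA_OPTS -Xms2g -Xmx2g"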

Test Scenario

For this performance benchmarking, we'll hit a RESTful web service exposed by TORO Integrate using Apache Bench, a widely used benchmarking tool. The service invokes a simple Groovy script that returns a "Hello World" JSON response.

In this performance benchmark, we will test the following four cases:

  • Tracker feature turned on while Monitor feature turned off
  • Tracker feature turned off while Monitor feature turned off
  • Tracker feature turned off while Monitor feature turned on
  • Both Tracker and Monitor feature turned on

Note

Unlike Tracker, which can be turned off through Application Properties, Monitor is off by default when no monitoring rules are set.
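For illustration only, disabling Tracker through Application Properties would look something like the line below; the property key shown here is hypothetical, so check the TORO Integrate property reference for the actual key.

# Hypothetical property key, shown for illustration only
tracker.enabled=false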

Test Script

import io.toro.integrate.core.api.APIResponse
import org.springframework.web.bind.annotation.*

// Exposes GET /api/examples-services/sayHello and returns a "Hello World" APIResponse
@RestController
@RequestMapping(value = 'examples-services', produces = ['application/json', 'application/xml'])
public class Bambi {

    @RequestMapping(value = 'sayHello', method = [RequestMethod.GET])
    public APIResponse helloworld() {
        return new APIResponse('Hello World')
    }
}

Response

The REST service response is a 52-byte JSON payload with the following content:

{
    "result": "OK",
    "message": "Hello World"
}
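Before benchmarking, it is worth confirming that the service responds as expected. A quick check with curl (host and port assumed to match the ab command used in the test procedure below):

curl -i http://localhost/api/examples-services/sayHello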

Test Procedure

In this test, the load-generating client, Apache Bench, resides on the same machine as TORO Integrate. The four test cases will each execute the steps laid out in this test procedure.

Request URL: GET /api/examples-services/sayHello

Concurrent Users: 150

Number of requests: 20000

Benchmark invocation iterations: 5

Apache Bench Command: ab -k -n 20000 -c 150 http://localhost/api/examples-services/sayHello

Steps:

  1. Ensure the TORO Integrate instance is freshly started without errors.
  2. Run a single invocation of the ab command.
  3. Ensure that no errors occur in TORO Integrate and that the invocation reports 0 errors and 0 non-2xx responses. If there are errors, halt the benchmarking test and fix the issue.
  4. Wait for TORO Integrate to return to its idle state after each test case. By idle state, we mean all ActiveMQ messages from Tracker and Monitor have been dequeued by TORO Integrate between invocations. If the Tracker and Monitor features are turned off, simply wait a few seconds before running the next test case.
  5. Record the requests-per-second result, also known as throughput.
  6. Go back to step 2 and repeat until the throughput stabilizes, that is, until it stops changing significantly between invocations. The goal is to let Tomcat reach a state where everything is fully initialized, which may take a few ab invocations. Only at this optimal runtime state does the official benchmarking begin.
  7. Once the throughput has stabilized, start recording the throughput of each ab invocation. If you already have 5 consecutive stabilized results, you are done; otherwise continue with steps 8 to 11 until you do.
  8. Run a single invocation of the ab command.
  9. Ensure that no errors occur in TORO Integrate and that the invocation reports 0 errors and 0 non-2xx responses. If there are errors, halt the benchmarking test and fix the issue.
  10. Wait for TORO Integrate to return to its idle state, as in step 4.
  11. Go back to step 8 until you have 5 consecutive stabilized throughput results (a scripted sketch of this loop follows the list).
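Steps 8 to 11 lend themselves to a small wrapper script. The sketch below assumes the host and port from the ab command above, and uses a fixed sleep as a stand-in for waiting until all Tracker and Monitor messages have been dequeued from ActiveMQ; adjust the wait to match what you observe on your instance.

#!/bin/sh
# Run 5 recorded ab invocations, extracting the throughput line from each report.
URL="http://localhost/api/examples-services/sayHello"
for i in 1 2 3 4 5; do
    ab -k -n 20000 -c 150 "$URL" | grep "Requests per second"
    sleep 30   # stand-in for waiting until TORO Integrate returns to its idle state
done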

Test Results and Observation

To correctly benchmark the performance of TORO Integrate, we ensure that the instance is in the state a production server would be in, with every component of TORO Integrate initialized. All ab invocation results recorded before this optimal state is reached are discarded. As discussed earlier, throughput is measured to determine the performance of each test scenario. Five throughput results are recorded per scenario and, to ensure fairness, averaged into a final value.

The data shown here are the average, minimum, and maximum throughput each test scenario achieved across the 5 ab invocations during its optimal state.
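Computing the average, minimum, and maximum of the five recorded values is straightforward; for example, with the results stored one requests-per-second figure per line in a file (results.txt is a hypothetical name):

awk 'NR==1 {min=max=$1} {sum+=$1; if ($1<min) min=$1; if ($1>max) max=$1} END {printf "avg=%.2f min=%.2f max=%.2f\n", sum/NR, min, max}' results.txt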

[Graph: throughput results per test scenario]

[Table: throughput results per test scenario]

Note

There are several other factors aside from Tracker and Monitor that directly affect the overall performance of TORO Integrate. Tuning the JVM and Tomcat configurations also plays a major role in improving the overall throughput of your web service.

Conclusion

All the tests concluded without any errors. The results make it clear that turning off Tracker and Monitor resulted in higher throughput.

Given that API management, whether for throttling or for monetizing an API, may require Monitor to be turned on, it may be more useful to focus on the impact of Tracker alone. In this case, two test scenario results are worth comparing:

  • Tracker Off & Monitor on
  • Tracker & Monitor on

To conclude, turning off Tracker increased throughput in this test case by 32.48%. However, this comes at the cost of disabling the Tracker feature, which is typically used for auditing transactions, troubleshooting failed transactions, resubmitting failed transactions, and creating reports.
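For reference, the percentage increase is computed from the averaged throughput figures as (throughput with Tracker off − throughput with Tracker on) ÷ throughput with Tracker on × 100.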