Performance testing

Delays

There are several delay options available that are useful in performance testing.

HTTP fixed delays

Configurable via the UI. Just change the value of "Response delay" on the HTTP mapping form.
[Screenshot: Edit mocked response delay]
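
The same fixed delay can also be set directly in the mapping file. A minimal sketch, assuming the WireMock-style fixedDelayMilliseconds response attribute used by Traffic Parrot HTTP mappings (the URL, body and 2000ms value are illustrative):

{
    "request": {
        "method": "GET",
        "url": "/some/endpoint"
    },
    "response": {
        "status": 200,
        "body": "Hello",
        "fixedDelayMilliseconds": 2000
    }
}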

HTTP random delays

Configure your mapping file to contain:
...
    "response": {
        "status": 200,
        "delayDistribution": {
            "type": "lognormal",
            "median": 70,
            "sigma": 0.3
        }
    }
...
The median is the midpoint of the delay distribution in milliseconds, and sigma controls how long the tail of slower responses is (larger values produce a longer tail).

JMS delays

JMS delays are available; see the documentation for more details.

Native IBM® MQ delays

Native IBM® MQ delays are available; see the documentation for more details.
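
Fixed delays for Native IBM® MQ mappings are configured via the fixedDelayMilliseconds attribute of the mapping response, as shown in the full mapping example under Native IBM® MQ tuning below. A minimal sketch of the relevant response fragment, using an illustrative 100ms delay:

"response" : {
  "destination" : {
    "name" : "DEV.QUEUE.2",
    "type" : "QUEUE"
  },
  "ibmMqResponseTransformerClassName" : "NO_TRANSFORMER",
  "format" : "MQFMT_STRING",
  "text" : "",
  "fixedDelayMilliseconds" : 100
}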

Performance profile

The performance profile configures Traffic Parrot to respond faster than it does with the default settings.

To enable the performance profile:
  1. use trafficparrot.performance.properties
  2. configure only the extensions and helpers that are used by your mappings (all of them are disabled by default)
  3. disable INFO logs by setting log4j to WARN or ERROR only (see the sketch below)
  4. configure the operating system
  5. Optional: run a cluster of Traffic Parrot instances behind a load balancer
  6. Optional: if you have any issues, contact us
Please note that in performance mode all mappings are loaded into memory from disk on startup, so if you need to change mapping files directly on disk, do not do it while running in performance mode.
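
For step 3, here is a minimal sketch of a log4j 1.x style properties file that logs WARN and above only; the appender name and layout are illustrative assumptions, so adjust them to match the logging configuration shipped with your installation:

# Log WARN and above only (appender details are illustrative)
log4j.rootLogger=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %p %c - %m%n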

Native IBM® MQ tuning

Here is a list of configuration parameters for tuning Traffic Parrot Native IBM® MQ.

Traffic Parrot properties:
  • trafficparrot.virtualservice.mapping.cache.milliseconds - how long to cache the mapping files in memory, in milliseconds (the default of 0 means no caching)
  • trafficparrot.virtualservice.mapping.cache.populate.on.startup - whether the mapping cache should be pre-loaded on startup
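For example, illustrative trafficparrot.properties entries that cache mappings for 60 seconds and pre-load the cache on startup (the values are examples only):
trafficparrot.virtualservice.mapping.cache.milliseconds=60000
trafficparrot.virtualservice.mapping.cache.populate.on.startup=true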
Native IBM® MQ connection attributes:
  • readConnectionsToOpen - how many read connections to open to the queue manager via the specified channel
  • writeConnectionsToOpen - how many write connections to open to the queue manager via the specified channel
For example:
{
  "connectionId": "1",
  "connectionName": "Local Docker MQ 9",
  "connectionData": {
    "ibmMqVersion": "IBM_MQ_9",
    "hostname": "localhost",
    "port": 1414,
    "queueManager": "QM1",
    "channel": "DEV.APP.SVRCONN",
    "username": "app",
    "password": "",
    "useMQCSPAuthenticationMode": false,
    "readConnectionsToOpen": 5,
    "writeConnectionsToOpen": 5
  }
},
Native IBM® MQ attributes:
  • receiveThreads - how many threads to use for the given mapping to receive messages
  • sendThreads - how many threads to use for the given mapping to send messages
For example:
{
  "mappingId" : "3bc18f0b-9d95-4af1-a2f8-848210b2d8a1",
  "request" : {
    "destination" : {
      "name" : "DEV.QUEUE.1",
      "type" : "QUEUE"
    },
    "bodyMatcher" : {
      "anything" : "anything"
    }
  },
  "response" : {
    "destination" : {
      "name" : "DEV.QUEUE.2",
      "type" : "QUEUE"
    },
    "ibmMqResponseTransformerClassName" : "NO_TRANSFORMER",
    "format" : "MQFMT_STRING",
    "text" : "",
    "fixedDelayMilliseconds" : 0
  },
  "receiveThreads" : 5,
  "sendThreads" : 5
}

Low performance?

If you are observing lower than expected performance, please contact us.

Please keep in mind that Traffic Parrot performance depends on:
  • speed of the hardware or VM: the slower the hardware or VM, the slower Traffic Parrot
  • Java version: the older the version, the lower the performance
  • number of mappings: the more mappings there are, the lower the performance
  • complexity of dynamic responses: the higher the complexity, the lower the performance
  • complexity of the request matching: an "any" matcher will be much faster than a complex one such as a regexp or JSONPath matcher
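To illustrate the last point, compare a mapping that matches only on the URL with one that also evaluates a regular expression and a JSONPath expression against every request body. The fragments below assume the WireMock-style JSON request matchers used by Traffic Parrot HTTP mappings; the URL, expressions and field name are illustrative:

"request": {
    "method": "POST",
    "url": "/orders"
}

is considerably cheaper to evaluate than:

"request": {
    "method": "POST",
    "url": "/orders",
    "bodyPatterns": [
        { "matches": ".*orderId.*" },
        { "matchesJsonPath": "$.orderId" }
    ]
}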

Example performance benchmarks Native IBM® MQ

Summary

This benchmark demonstrates how hardware resources, network parameters and the complexity of virtual services (mocks/stubs) impact Traffic Parrot version 5.12.0 performance in a few sample scenarios. In one example, we show how throughput improves from 6,000 TPS to 20,000 TPS after increasing the hardware resources and network capacity. In another example, we show that the complexity of the virtual services (mocks/stubs) makes the difference between 1,000 TPS and 6,000 TPS when running on the same hardware and network. Download the benchmark PDF or view the results in a web browser below.

Benchmark results

Each test setup below describes the request to response mappings (transactions defined in the virtual service) and the queues and queue managers used. Each setup was run on three environments: 4 vCPUs, HDD, 6GB heap, 10 Gb/s network; 16 vCPUs, SSD, 12GB heap, 10 Gb/s network; and 16 vCPUs, SSD, 12GB heap, 30 Gb/s network. Processing latency covers reading the request message, constructing the response message and writing the response message.

Test setup 1: 20 XML mappings, 100ms fixed delay, dynamic (2 XPaths), message size 490B, 1 send thread per queue, 1 receive thread per queue, 5 read connections per QM, 5 write connections per QM, non-transactional, non-persistent, 20 queues, 4 queue managers, 10,000,000 test transactions.
  • 4 vCPUs, HDD, 6GB heap, 10 Gb/s network: 6,022 t/s, 99% under 50.00ms, 95% under 20.00ms
  • 16 vCPUs, SSD, 12GB heap, 10 Gb/s network: 14,984 t/s, 99% under 40.00ms, 95% under 30.00ms
  • 16 vCPUs, SSD, 12GB heap, 30 Gb/s network: 21,541 t/s, 99% under 30.00ms, 95% under 20.00ms

Test setup 2: 20 XML mappings, no delay, dynamic (2 XPaths), message size 490B, 1 send thread per queue, 1 receive thread per queue, 5 read connections per QM, 5 write connections per QM, non-transactional, non-persistent, 20 queues, 4 queue managers, 10,000,000 test transactions.
  • 4 vCPUs, HDD, 6GB heap, 10 Gb/s network: 5,751 t/s, 99% under 30.00ms, 95% under 20.00ms
  • 16 vCPUs, SSD, 12GB heap, 10 Gb/s network: 13,425 t/s, 99% under 50.00ms, 95% under 30.00ms
  • 16 vCPUs, SSD, 12GB heap, 30 Gb/s network: 19,321 t/s, 99% under 30.00ms, 95% under 20.00ms

Test setup 3: 15 XML mappings, fixed delays 100ms to 200ms, dynamic (1 to 29 XPaths per message), message size 500B to 57kB, 1-4 send threads depending on the queue, 1-4 receive threads depending on the queue, 18 read connections per QM, 18 write connections per QM, non-transactional, non-persistent, 15 queues, 2 queue managers, 3,080,000 test transactions.
  • 4 vCPUs, HDD, 6GB heap, 10 Gb/s network: 1,276 t/s, 99% under 10.00ms, 95% under 10.00ms
  • 16 vCPUs, SSD, 12GB heap, 10 Gb/s network: 4,180 t/s, 99% under 10.00ms, 95% under 10.00ms
  • 16 vCPUs, SSD, 12GB heap, 30 Gb/s network: 4,472 t/s, 99% under 10.00ms, 95% under 10.00ms

Environment configuration 4 vCPUs - 6GB heap - 10 Gb/s network

Testing Traffic Parrot version 5.12.0-RC1

IBM MQ 9.1.1.0

TP running on
  • GCP VM type n2-standard-4 (4 vCPUs, 16 GB memory)
  • Standard persistent disk
  • “europe-west2-c” GCP data center
TP configuration
  • Logging level = WARN
  • -Xmx6144m
Network
  • Using external IPs (10 Gbits/sec tested with iperf)
Four IBM MQ queue managers each of them running on
  • GCP VM type n1-standard-1 (1 vCPU, 3.75 GB memory)
  • Standard persistent disk
  • “europe-west2-c” GCP data center

Environment configuration 16 vCPUs - 12GB heap - 10 Gb/s network

Testing Traffic Parrot version 5.12.0-RC1

IBM MQ 9.1.1.0

TP running on
  • GCP VM type c2-standard-16 (16 vCPUs, 64 GB memory)
  • SSD persistent disk
  • “us-central1-a” GCP data center
TP configuration
  • Logging level = WARN
  • -Xmx12g
Network
  • Using external IPs (10 Gbits/sec tested with iperf)
Four IBM MQ queue managers each of them running on
  • GCP VM type n2-highcpu-4 (4 vCPUs, 4 GB memory)
  • SSD persistent disk
  • “us-central1-a” GCP data center

Environment configuration 16 vCPUs - 12GB heap - 30 Gb/s network

Same as “16 vCPUs - 12GB heap - 10 Gb/s network” above but:

Network
  • Using internal IPs (30 Gb/s tested with iperf)

Example performance benchmarks JMS and HTTP

Test setup

Here is a Traffic Parrot 3.10.0 performance test result to give you a rough idea of what you can expect.

The test was performed on a Hetzner EX41S-SSD server:
  • Intel® Core™ i7-6700 Quad-Core Skylake incl. Hyper-Threading Technology
  • 64 GB DDR4 RAM
  • SSD
The test was performed on:
  • Ubuntu-1604-xenial-64-minimal 4.10.0-35-generic
  • Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
The test was performed with these settings:
  • Performance profile enabled
  • WARN log level
  • Internal ActiveMQ broker for JMS

The test suite and Traffic Parrot were both running on the same server, so they shared the same server resources, but there was no network latency.

If these numbers are not enough for you, please contact us and we will suggest a setup that will allow for higher performance.

Test results

Each result below lists the protocol tested, the number of parallel users, the total number of transactions, the mappings, the total test duration, the transaction throughput and the average transaction time.

HTTP, 1,000 parallel users, 100,000 requests, 22 mappings total:
  • 2 mappings with a urlEqualTo matcher and static response body, both accessed by users
  • 20 mappings that are not accessed by users
The test took 48s to run. Throughput for 1,000 concurrent users was 2,099 t/s, which results in an average of 0.5 ms/t.

HTTP, 100 parallel users, 100,000 requests, 22 mappings total:
  • 2 mappings with a urlEqualTo matcher and static response body, both accessed by users
  • 20 mappings that are not accessed by users
The test took 29s to run. Throughput for 100 concurrent users was 3,428 t/s, which results in an average of 0.3 ms/t.

HTTP, 1,000 parallel users, 100,000 requests, 102 mappings total:
  • 1 mapping with a urlEqualTo matcher and dynamic response body, accessed by users
  • 1 mapping with urlPathEqualTo, withQueryParam and withRequestBody equalToJson matching, accessed by users
  • 100 mappings that are not accessed by users
The test took 136s to run. Throughput for 1,000 concurrent users was 734 t/s, which results in an average of 1.4 ms/t.

HTTP, 100 parallel users, 100,000 requests, 102 mappings total:
  • 1 mapping with a urlEqualTo matcher and dynamic response body, accessed by users
  • 1 mapping with urlPathEqualTo, withQueryParam and withRequestBody equalToJson matching, accessed by users
  • 100 mappings that are not accessed by users
The test took 104s to run. Throughput for 100 concurrent users was 963 t/s, which results in an average of 1 ms/t.

JMS, 1,000 parallel users, 300,000 messages (req, rsp, fake), 52 mappings total:
  • 2 mappings with a urlEqualTo body matcher, accessed by users
  • 50 mappings that are not accessed by users
The test took 378s to run. Throughput for 1,000 concurrent users was 792 trans/s (264 task/s), which results in an average of 1.2 ms/trans (3.8 ms/task).

JMS, 100 parallel users, 30,000 messages (req, rsp, fake), 52 mappings total:
  • 2 mappings with a urlEqualTo body matcher, accessed by users
  • 50 mappings that are not accessed by users
The test took 35s to run. Throughput for 100 concurrent users was 858 trans/s (286 task/s), which results in an average of 1.2 ms/trans (3.5 ms/task).
