MPT: messaging performance tool
MPT is a tool for running performance tests against messaging systems. The current development version supports the AMQP and STOMP messaging protocols; support for MQTT and OpenWire is planned. Test data is saved in CSV format and can be exported to an ElasticSearch database, which allows it to be visualized using the Messaging Performance UI.
Dependencies:
- gcc or clang
- litestomp (optional) for STOMP support
- iperf (optional, recommended for checking network performance before running a test)
Disk Space: the clients may generate a lot of data, depending on how many messages are sent per second. On my baseline system (two servers with a Quad-Core AMD Opteron 2376 @ 8x 2.3GHz) on a gigabit network, it generates around 1 GB of data per hour while transferring around 66,000 messages per second.
Supported Platforms:
- Linux: x86 and x86_64
- OS X: x86
Broker Settings: ActiveMQ
ActiveMQ may need to have the inactivity monitor disabled. This can be done by adding the following setting to the transport connector definition in conf/activemq.xml:
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600&amp;transport.useInactivityMonitor=false"/>
Usage - Performance Tool:
Here's an example of how to run a 10-minute load test with 4 concurrent senders, 4 concurrent receivers, and 256 bytes of data per message:
mpt-runner.sh -l /tmp/log -b amqp://<amqp server>:5672/<queue name> -d 10 -p 4 -s 256 -n "sample test"
It's possible to have more complex deployment scenarios by running the sender and receiver separately. In this case, you have to run the test steps manually:
Run the receiver (the controller will print the PID; take note of it):
mpt-receiver -b amqp://<amqp server>:5672/<queue name> --log-level=stat -d 10 -p 4 --logdir=/tmp/log --daemon
Run the sender (the controller will print the PID; take note of it):
mpt-sender -b amqp://<amqp server>:5672/<queue name> --log-level=stat -d 10 -p 4 --logdir=/tmp/log --daemon
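The two manual steps above can be wrapped in a small script. This is a minimal sketch (a dry run: it only prints the commands rather than executing them, and the broker URL, queue name, log directory, and test parameters are placeholder assumptions):

```shell
#!/bin/sh
# Minimal sketch for running the receiver and sender manually.
# Dry run: the commands are printed rather than executed.
# BROKER_URL, LOGDIR, DURATION and PARALLEL are placeholder assumptions.
BROKER_URL="amqp://localhost:5672/test.queue"
LOGDIR="/tmp/log"
DURATION=10   # test duration in minutes
PARALLEL=4    # concurrent senders/receivers

# Start the receiver first so that no messages are lost, then the sender.
RECEIVER_CMD="mpt-receiver -b $BROKER_URL --log-level=stat -d $DURATION -p $PARALLEL --logdir=$LOGDIR --daemon"
SENDER_CMD="mpt-sender -b $BROKER_URL --log-level=stat -d $DURATION -p $PARALLEL --logdir=$LOGDIR --daemon"

echo "$RECEIVER_CMD"
echo "$SENDER_CMD"
```

Removing the quotes and the echo lines (i.e., invoking the commands directly) would execute the actual test.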
This is all that is required for running very simple tests. For larger test scenarios or integration with other tools, please keep reading the sections below.
Usage - ElasticSearch Integration:
The performance data generated by this tool can be exported to an ElasticSearch database so that it can be visualized in the Messaging Performance UI.
Overall, the integration is pretty simple and requires only two modifications to the basic configuration. First, you need to configure ElasticSearch to allow Cross-Origin Resource Sharing (CORS). To do so, add the following to the configuration file config/elasticsearch.yml:
http.cors.enabled: true
http.cors.allow-origin: "*"
It may also be necessary to configure ElasticSearch to listen on all interfaces (or, at least, the desired one) so that it can be reached from other hosts. You can do so by setting the network.host parameter in the same file.
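Taken together, the changes to config/elasticsearch.yml might look like the following sketch (binding to 0.0.0.0 is an assumption; use whatever interface fits your setup):

```yaml
# Allow the Messaging Performance UI to query ElasticSearch (CORS)
http.cors.enabled: true
http.cors.allow-origin: "*"

# Listen on all interfaces (assumption: adjust to the desired interface)
network.host: 0.0.0.0
```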
At the moment, the integration neither implements nor requires any security configuration; therefore, it is highly recommended to set up adequate security mechanisms on top of any publicly accessible ElasticSearch instance.
Usage - Runner:
Dealing with the synchronization and parameters of the performance clients can be daunting, though, therefore a runner script is available to simplify execution. Before using the runner, it's advisable to create configuration files for both the application and the test scenario. These files should be simple and self-explanatory, since their settings match the names of the test parameters.
mpt-runner.sh -l /path/to/log -t 5000 -b <protocol>://<host>:<port>/queue/<queue> -d 5 -p 1 -s 32 -C /path/to/mpt-loader.conf -T /path/to/stomp-small-test.conf -R 002
Attention: the runner requires an ElasticSearch database. This behavior will be changed in the future.
Usage - Loader:
These steps are handled automatically by the runner; however, in case you need to run them manually, this is how it works:
The first step is to register the Software Under Test in the database:
mpt-loader.py --register \
    --sut-key "activemq" \
    --sut-name "ActiveMQ" \
    --sut-version "5.13.3" \
    --url http://localhost:9200/
The second step is to record the test information. This part should contain the details about the test (messaging parameters, hardware information, etc.):
mpt-loader.py --testinfo --url http://localhost:9200 \
    --test-run "001" \
    --sut-key "activemq" \
    --test-start-time "2016-07-11 14:48:03" \
    --test-duration 5m \
    --broker-sys-os-version 24 \
    --broker-sys-os-name Fedora \
    --broker-sys-os-type Linux \
    --test-comment "Small local test on Fedora 24" \
    --test-result-comment "Run ok, no comments" \
    --broker-sys-info "Thinkpad T450/12Gb RAM/521gb SSD" \
    --consumer-sys-info "Thinkpad T450/12Gb RAM/521gb SSD" \
    --producer-sys-info "Thinkpad T450/12Gb RAM/521gb SSD" \
    --msg-endpoint-type "queue"
After the SUT and test information is recorded, the test result information can be loaded in 3 steps:
Load sender throughput data:
mpt-loader.py --load throughput \
    --url http://localhost:9200 \
    --sut-name "ActiveMQ" \
    --sut-key activemq \
    --sut-version "5.13.3" \
    --test-run "001" \
    --msg-direction sender \
    --filename /path/to/sender-throughput-<pid>.csv
Load receiver latency data:
mpt-loader.py --load latency \
    --url http://localhost:9200 \
    --sut-name "ActiveMQ" \
    --sut-key activemq \
    --sut-version "5.13.3" \
    --test-run "001" \
    --msg-direction receiver \
    --filename /path/to/receiver-latency-<pid>.csv
Load receiver throughput data:
mpt-loader.py --load throughput \
    --url http://localhost:9200 \
    --sut-name "ActiveMQ" \
    --sut-key activemq \
    --sut-version "5.13.3" \
    --test-run "001" \
    --msg-direction receiver \
    --filename /path/to/receiver-throughput-<pid>.csv
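Since the three load steps differ only in the data kind and the message direction, they can be scripted. A minimal sketch follows (a dry run: the commands are printed, not executed; the URL, test-run id, and data directory are assumptions):

```shell
#!/bin/sh
# Prints the three mpt-loader.py invocations (sender throughput,
# receiver latency, receiver throughput). Dry run: nothing is executed.
# URL, RUN and DATADIR are placeholder assumptions; <pid> stands for the
# client PID embedded in the CSV file names.
print_load_cmds() {
    URL="http://localhost:9200"
    RUN="001"
    DATADIR="/tmp/log"
    for spec in "throughput sender" "latency receiver" "throughput receiver"; do
        set -- $spec   # $1 = data kind, $2 = message direction
        echo "mpt-loader.py --load $1 --url $URL" \
             "--sut-name ActiveMQ --sut-key activemq --sut-version 5.13.3" \
             "--test-run $RUN --msg-direction $2" \
             "--filename $DATADIR/$2-$1-<pid>.csv"
    done
}
print_load_cmds
```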
Most of the test parameters can be saved to a configuration file and passed to the loader (i.e., using --config and --config-test). The loader uses two separate configuration files: one contains the application configuration (e.g., the ElasticSearch URL) and the other contains the test parameters. For example:
mpt-loader.py --load latency \
    --config /path/to/application/config \
    --config-test /path/to/test/config \
    --test-run 001 \
    --msg-direction receiver \
    --filename /path/to/receiver-latency-<pid>.csv
The binaries provide configuration samples that can be modified:
- Loader configuration: /usr/share/mpt/mpt-loader.conf
- Test case configuration: /usr/share/mpt/sample-test-case.conf
The configuration parameters are the same as passed from the command line.
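For illustration only, an application configuration might look like the following hypothetical sketch. It is based solely on the statement above that the keys match the command-line parameters; check the shipped sample in /usr/share/mpt/mpt-loader.conf for the authoritative key names:

```ini
# Hypothetical sketch of an mpt-loader.py application configuration;
# the key mirrors the --url command-line parameter.
url=http://localhost:9200
```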
Binaries:
Binaries for this tool, for Fedora, CentOS and RHEL, can be found on my COPR at https://copr.fedorainfracloud.org/coprs/orpiske/msg-perf-tool/
Tips:
- Run the clients and the broker on different servers
- Make sure that the time is properly synchronized on all servers
- Run on a dedicated network (or, at least, avoid peak usage hours)
- Measure the network performance before running (e.g., using iperf)
- Ideally, you should send at a fixed rate instead of flooding the broker, since brokers tend to get slower as the queue size increases
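The iperf measurement mentioned in the tips can be as simple as the following sketch (a dry run: the commands are printed so the script is safe to run anywhere, and broker.example.com is a placeholder hostname):

```shell
#!/bin/sh
# Pre-test network check with iperf. Dry run: the commands are printed;
# drop the echo lines to actually run the measurement.
# broker.example.com is a placeholder for the broker host.
SERVER_CMD="iperf -s"                           # run on the broker host
CLIENT_CMD="iperf -c broker.example.com -t 30"  # run on each client host
echo "$SERVER_CMD"
echo "$CLIENT_CMD"
```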
License:
The code is licensed under the Apache License v2.