# Evaluation artifact for the emper IO runtime
We use a lot of connections in our evaluation. If you encounter issues during the connect phase of the client, make sure your `SOMAXCONN` value is big enough.
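On Linux, the limit behind `SOMAXCONN` can be inspected and raised at runtime with `sysctl`; the value below is only an example, choose one that covers your client count:

```sh
# show the current listen backlog limit
sysctl net.core.somaxconn
# raise it until the next reboot (example value)
sudo sysctl -w net.core.somaxconn=4096
```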
## Prepare
Three git submodules are included: emper, the legacy emper IO system, a Rust-based load generator, and an echo server using the IO design of memcached.
Run `git submodule update --init` to clone and check out all submodules.
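For example (the repository URL is a placeholder):

```sh
# clone the artifact together with all submodules in one step
git clone --recurse-submodules <artifact-repo-url>
# or, inside an existing checkout
git submodule update --init
```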
## Build
Requirements:
* pthread
* libevent
* rust / cargo
* golang
Simply run `make` to build all non-emper echo server implementations. All emper variants used in the evaluation are built when running `eval.py`.
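A typical build therefore only needs:

```sh
# build all non-emper echo server implementations;
# the emper variants are built later by eval.py
make
```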
## Evaluate
Run `./eval.py -l` to benchmark all included implementations on localhost.
See `./eval.py -h` for available options.
Evaluation results are written to `results/<git-desc>-<clients>+-<server-host>/<timestamp>`.
A simple matplotlib plot can be shown by running `plot_results.py results/<experiment-description>/<timestamp>`.
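A complete localhost run, from benchmark to plot, might look like this (the exact result directory depends on your git state, client set, and timestamp; how `plot_results.py` is invoked may differ on your system):

```sh
./eval.py -l          # benchmark all implementations on localhost
ls results/           # locate the directory created for this run
./plot_results.py results/<experiment-description>/<timestamp>
```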
## Evaluation across hosts
By default, `eval.py` starts the servers via SSH on `faui49big02`, so running `eval.py` on any other machine already gives you a two-host setup. The server host can be changed with the `--host` CLI argument. Additionally, client hosts can be specified with the `--clients` argument. The client commands are started via SSH on the remote hosts and synchronize the start of their echo phase using a coordinator process running on the host where `eval.py` is executed.
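For example, a run with one server host and two client hosts might be started as follows (the host names are placeholders; check `./eval.py -h` for the exact argument syntax):

```sh
# run the servers on hostA, generate load from hostB and hostC,
# and coordinate the echo phases from the local machine
./eval.py --host hostA --clients hostB hostC
```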
## Adding server implementations
All server implementations are defined in `bench/server_cmds.py`, and the standard Makefile recursively builds all Makefiles found in `servers/`.
To add a new server, place the needed source and a Makefile to build it in the `servers/` subdirectory and add the command used to run the server to `bench/server_cmds.py`, as sketched below.
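The workflow for a hypothetical new server called `myserver` could look like this (all names below are placeholders, not files shipped with the artifact):

```sh
mkdir servers/myserver
$EDITOR servers/myserver/myserver.c   # server source
$EDITOR servers/myserver/Makefile     # picked up by the recursive top-level build
$EDITOR bench/server_cmds.py          # register the command used to start the server
```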
EMPER variants are defined in `bench/emper.py`. See the documentation there for information about how EMPER variants are built and which options are available.