Commit d3151103 authored by Florian Fischer (parent 94148135)
# allocbench - benchmark tool for POSIX memory allocators
To download allocbench, run:

    git clone
## Requirements
* python3
* perf (`perf stat -d` is the default command to measure benchmark results)
* util-linux (`whereis` is used to find system installed allocators)
* (git to clone allocators in `allocators/{no_falsesharing, BA_allocators}.py`)
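A quick way to confirm these tools are available is to look them up on `PATH`. This is a minimal sketch with a hypothetical helper, not part of allocbench:

```python
import shutil

# Hypothetical helper (not part of allocbench): report which of the
# required command-line tools cannot be found on PATH.
def missing_tools(tools=("python3", "perf", "whereis", "git")):
    return [tool for tool in tools if shutil.which(tool) is None]

print(missing_tools())
```

If the printed list is non-empty, install the named packages before running allocbench.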
## Usage
    usage: [-h] [-s] [-l LOAD] [-a ALLOCATORS] [-r RUNS] [-v]

    benchmark memory allocators

    optional arguments:
      -h, --help            show this help message and exit
      -s, --save            save benchmark results in RESULTDIR
      -l LOAD, --load LOAD  load benchmark results from directory
      -a ALLOCATORS, --allocators ALLOCATORS
                            load allocator definitions from file
      -r RUNS, --runs RUNS  how often the benchmarks run
      -v, --verbose         more output
      -b BENCHMARKS [BENCHMARKS ...], --benchmarks BENCHMARKS [BENCHMARKS ...]
                            benchmarks to run
      -ns, --nosum          don't produce plots
      -rd RESULTDIR, --resultdir RESULTDIR
                            directory where all results go
      --license             print license info and exit
### Examples

    ./ -b loop

runs only the loop benchmark for some installed allocators and puts its
results in `$PWD/results/$HOSTNAME/<time>/loop`.

    ./ -a allocators/

builds all allocators used in my BA thesis and runs all default benchmarks.

    ./ -r 0 -l <path/to/saved/results>

doesn't run any benchmark; it just summarizes the loaded results.
## Benchmarks
If you want to compare allocators with your own software or add a new
benchmark, have a look at [doc/]().
## Allocators
By default tcmalloc, jemalloc, Hoard and your libc's allocator are used
if they are found and the `-a` option is not given.
For more control over the allocators used, have a look at [doc/]().
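allocbench locates system-installed allocators with `whereis`. As an illustration only, a similar lookup can be sketched with Python's `ctypes.util.find_library`; the library names below are assumptions, not taken from allocbench:

```python
from ctypes.util import find_library

# Illustrative sketch only: allocbench itself shells out to `whereis`.
# These allocator-to-library-name mappings are assumptions.
CANDIDATES = {"tcmalloc": "tcmalloc", "jemalloc": "jemalloc",
              "Hoard": "hoard", "libc": "c"}

def detect_allocators(candidates=CANDIDATES):
    """Map each allocator name to a shared-library path, or None if absent."""
    return {name: find_library(lib) for name, lib in candidates.items()}

# Keep only the allocators actually present on this system.
print({name: path for name, path in detect_allocators().items() if path})
```

Allocators whose libraries are not found are simply skipped, which matches the "if found" behaviour described above.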
## License
This program is released under GPLv3. You can find a copy of the license
    import argparse

    import src.allocators

    benchmarks = ["loop", "mysql", "falsesharing", "dj_trace", "larson"]

    parser = argparse.ArgumentParser(description="benchmark memory allocators")
    parser.add_argument("-s", "--save", help="save benchmark results in RESULTDIR", action='store_true')
    parser.add_argument("-l", "--load", help="load benchmark results from directory", type=str)
    parser.add_argument("-a", "--allocators", help="load allocator definitions from file", type=str)
    parser.add_argument("-r", "--runs", help="how often the benchmarks run", default=3, type=int)
    parser.add_argument("-v", "--verbose", help="more output", action='store_true')
    parser.add_argument("-b", "--benchmarks", help="benchmarks to run", nargs='+')
    parser.add_argument("-ns", "--nosum", help="don't produce plots", action='store_true')
    parser.add_argument("-rd", "--resultdir", help="directory where all results go", type=str)
    parser.add_argument("--license", help="print license info and exit", action='store_true')
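As a quick sanity check, a subset of the parser defined above can be exercised with a sample argument vector; the argument values here are made up for illustration:

```python
import argparse

# Rebuild a subset of the parser from the script above and parse a
# sample command line; the values are made up for illustration.
parser = argparse.ArgumentParser(description="benchmark memory allocators")
parser.add_argument("-s", "--save", help="save benchmark results in RESULTDIR", action='store_true')
parser.add_argument("-r", "--runs", help="how often the benchmarks run", default=3, type=int)
parser.add_argument("-b", "--benchmarks", help="benchmarks to run", nargs='+')
parser.add_argument("-rd", "--resultdir", help="directory where all results go", type=str)

args = parser.parse_args(["-s", "-r", "5", "-b", "loop", "larson"])
print(args.save, args.runs, args.benchmarks)
```

Note that `nargs='+'` lets `-b` accept several benchmark names, matching the `-b loop` example in the README.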