Commit cf7f1bb4 authored by Florian Fischer

update Readme

parent 89a316bb
@@ -16,20 +16,22 @@ git clone https://muhq.space/software/allocbench.git
 ## Usage

-usage: bench.py [-h] [-s] [-l LOAD] [-a ALLOCATORS] [-r RUNS] [-v]
-               [-b BENCHMARKS [BENCHMARKS ...]] [-ns] [-rd RESULTDIR]
-               [--license]
+usage: bench.py [-h] [-ds, --dont-save] [-l LOAD] [-a ALLOCATORS] [-r RUNS]
+               [-v] [-vdebug] [-b BENCHMARKS [BENCHMARKS ...]] [-ns]
+               [-rd RESULTDIR] [--license]

 benchmark memory allocators

 optional arguments:
   -h, --help            show this help message and exit
-  -s, --save            save benchmark results in RESULTDIR
+  -ds, --dont-save      don't save benchmark results in RESULTDIR
   -l LOAD, --load LOAD  load benchmark results from directory
   -a ALLOCATORS, --allocators ALLOCATORS
                         load allocator definitions from file
   -r RUNS, --runs RUNS  how often the benchmarks run
   -v, --verbose         more output
+  -vdebug, --verbose-debug
+                        debug output
   -b BENCHMARKS [BENCHMARKS ...], --benchmarks BENCHMARKS [BENCHMARKS ...]
                         benchmarks to run
   -ns, --nosum          don't produce plots
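The revised help text above maps onto a standard `argparse` parser. The following is a minimal sketch of such a parser for illustration only; the destination names, defaults, and the demo invocation are assumptions, not allocbench's actual code:

```python
import argparse

# Sketch of a parser matching the new help text; defaults are assumed.
parser = argparse.ArgumentParser(description="benchmark memory allocators")
parser.add_argument("-ds", "--dont-save", action="store_true",
                    help="don't save benchmark results in RESULTDIR")
parser.add_argument("-l", "--load", help="load benchmark results from directory")
parser.add_argument("-a", "--allocators",
                    help="load allocator definitions from file")
parser.add_argument("-r", "--runs", type=int, default=3,
                    help="how often the benchmarks run")
parser.add_argument("-v", "--verbose", action="store_true", help="more output")
parser.add_argument("-vdebug", "--verbose-debug", action="store_true",
                    help="debug output")
parser.add_argument("-b", "--benchmarks", nargs="+", help="benchmarks to run")
parser.add_argument("-ns", "--nosum", action="store_true",
                    help="don't produce plots")
parser.add_argument("-rd", "--resultdir", help="directory to store results in")
parser.add_argument("--license", action="store_true")

# Hypothetical invocation: five runs of a benchmark named "loop", debug output.
args = parser.parse_args(["-r", "5", "-b", "loop", "-vdebug"])
print(args.runs, args.benchmarks, args.verbose_debug)
```

Note that results are now saved by default and `-ds`/`--dont-save` opts out, inverting the old opt-in `-s`/`--save` flag.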
@@ -4,7 +4,7 @@ A benchmark in the context of allocbench is a command usable with exec and a
 list of all possible arguments. The command is executed and measured for each
 permutation of the specified arguments and for each allocator to test.

-Benchmarks are implemented as python objects that have a function `run(runs, verbose)`.
+Benchmarks are implemented as python objects that have a function `run(runs)`.

 Other, non-mandatory functions are:

 * load
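A benchmark object satisfying this interface could be sketched as follows. Everything except the `run(runs)` and optional `load` hooks is illustrative: the class name, the `true` command, and the argument permutations are assumptions, not allocbench's actual API:

```python
import subprocess

class LoopBenchmark:
    """Illustrative benchmark: a command plus all possible arguments."""

    def __init__(self):
        # Assumed example: a command usable with exec and one argument axis.
        self.cmd = ["true"]
        self.args = {"n": [1, 2]}
        self.results = []

    def run(self, runs):
        # Execute the command once per run and per argument permutation;
        # a real benchmark would also measure each execution.
        for _ in range(runs):
            for n in self.args["n"]:
                proc = subprocess.run(self.cmd, check=True)
                self.results.append((n, proc.returncode))

    def load(self):
        # Optional hook: load previously saved results (no-op here).
        pass

bench = LoopBenchmark()
bench.run(2)
print(len(bench.results))  # 2 runs x 2 permutations
```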