# INTsight

## Usage

See [INTspect](https://gitlab.cs.fau.de/i4/intspect), which allows you to easily
execute benchmarks.
Here's a minimal example that demonstrates how you can benchmark softirqs from
within a shell when INTsight has been injected into a kernel. INTspect
essentially does exactly this, except that it gathers some additional
information from `proc`.

``` shell
cd /sys/kernel/debug/intsight
echo > init

# Set the parameters
echo softirq > bottom_handler
echo 1000    > reps

# Execute the benchmark
echo > prepare_trigger
echo > do_trigger
echo > postprocess_trigger

# Optional: Inspect the results
head csv_results/pmccntr
cat  reps # -> 1000

# Save the results and parameters
cp -vrf . ~/intsight
```
After this, `~/intsight/csv_results` will contain the checkpoint names and
timestamps recorded.
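Once saved, these CSVs can be analyzed with standard tools. As a minimal sketch,
here is one way to compute the mean last-minus-first timestamp delta per run
with `awk`; the input is mock data standing in for a real
`csv_results/pmccntr`:

``` shell
# Mock data standing in for ~/intsight/csv_results/pmccntr
# (one line per measurement run, one column per checkpoint).
printf '100,250\n110,270\n' > pmccntr.mock

# Mean of (last column - first column) over all runs.
mean=$(awk -F, '{ sum += $NF - $1 } END { printf "%.1f\n", sum / NR }' pmccntr.mock)
echo "$mean"  # (250-100 + 270-110) / 2 = 155.0
```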

## Sysfs Interface

When injected into a kernel, INTsight provides a `debugfs` interface accessible
from user space, usually mounted in `/sys/kernel/debug/intsight`. INTspect uses
this interface to communicate a given benchmark configuration to the kernel,
trigger its execution, and finally retrieve the generated data.

The following table documents the virtual files created. When INTspect executes
a benchmark, it creates a complete copy of this folder. The following
description therefore also documents the structure of the result folders
produced by INTspect.

| File Name | Type | Description |
|-----------|------|-------------|
| init | Write-only | Initializes INTsight and creates the other files listed here |
| bottom_handler | "softirq", "tasklet", or "workqueue" | Bottom half mechanism to be benchmarked |
| reps | Integer | Number of measurement runs to perform |
| delay_ms | Integer | Delay between measurement runs in milliseconds |
| delay_type | "udelay" or "usleep_range" | Respectively use active / passive waiting between measurement runs |
| checkpoint_capacity | Integer | Maximum number of checkpoints recorded per measurement run |
| prepare_trigger | Write-only | Prepare benchmark using the current parameters |
| do_trigger | Write-only | Execute the benchmark, blocks the writing thread for at least `reps` x `delay_ms` milliseconds |
| postprocess_trigger | Write-only | Expose the results, creating the `csv_results` folder |
| csv_results/name | Read-only CSV | One line per measurement run, each line contains the checkpoint names in the encountered order for this run |
| csv_results/* | Read-only CSVs | The recorded timestamps matching the checkpoint names in `csv_results/name` |
| vmalloc_checkpoint_matrix | Read-only Boolean | Indicates whether the checkpoint buffer was small enough to be allocated using `kmalloc()`, or whether `vmalloc()` was required |

In general, the following procedure is followed when executing a benchmark:

1. Initialize INTsight by writing into `init`; this is only required once after
   booting.

2. Set the benchmark parameters by writing into `bottom_handler`, `reps`,
   `delay_ms`, `delay_type` and `checkpoint_capacity`.

3. Prepare, execute and postprocess the benchmark by writing into
   `prepare_trigger`, `do_trigger` and `postprocess_trigger`.

4. Retrieve the results by copying the `csv_results` folder. It is recommended
   to copy the whole `intsight` directory instead, since the parameter files
   from step 2 can also be read back and therefore record the benchmark's
   configuration alongside its results.
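The steps above can be collected into a small script. This is only a sketch:
the `delay_ms`, `delay_type` and destination values are illustrative choices,
and the interface directory is passed as a parameter (defaulting to the usual
debugfs mount point):

``` shell
#!/bin/sh
# Sketch of steps 1-4; parameter values other than bottom_handler and
# reps are illustrative.
run_intsight_benchmark() {
    dir=${1:-/sys/kernel/debug/intsight}  # debugfs interface
    dest=${2:-"$HOME/intsight"}           # archive destination

    # 1. Initialize (only required once after booting).
    echo > "$dir/init"

    # 2. Set the benchmark parameters.
    echo softirq      > "$dir/bottom_handler"
    echo 1000         > "$dir/reps"
    echo 10           > "$dir/delay_ms"
    echo usleep_range > "$dir/delay_type"

    # 3. Prepare, execute and postprocess; do_trigger blocks for at
    #    least reps x delay_ms milliseconds.
    echo > "$dir/prepare_trigger"
    echo > "$dir/do_trigger"
    echo > "$dir/postprocess_trigger"

    # 4. Archive the whole directory so the parameter files are kept
    #    alongside csv_results.
    cp -rf "$dir" "$dest"
}
```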

For each enabled timestamp, writing into `postprocess_trigger` creates a file in
the `csv_results` folder. Like `csv_results/name`, these files contain one line
per measurement run (the number of runs performed is set before the benchmark by
writing into `reps`). Each line in `csv_results/name` contains the checkpoint
names in the order they were encountered during that run (therefore, at most
`checkpoint_capacity` columns); the timestamp recorded at each checkpoint can be
found by opening the other files in `csv_results` and looking at the same row
and column.
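For example, the name/timestamp pairing for a single run can be reconstructed
with `awk` by reading the same row of both files. The checkpoint names and
cycle counts below are invented mock data; real files come from `csv_results`:

``` shell
# Mock csv_results with invented checkpoint names and cycle counts.
mkdir -p csv_results
printf 'enter,exit\n' > csv_results/name
printf '100,250\n'    > csv_results/pmccntr

# Print "name=timestamp" for every checkpoint of the selected run.
run=1
pairs=$(awk -F, -v run="$run" '
    NR == FNR  { if (FNR == run) split($0, names, ","); next }
    FNR == run { for (i = 1; i <= NF; i++) print names[i] "=" $i }
' csv_results/name csv_results/pmccntr)
echo "$pairs"  # enter=100, then exit=250
```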