# INTsight
    
    ## Usage
    
    
See [INTspect](https://gitlab.cs.fau.de/i4/intspect), which allows you to easily
execute benchmarks.
    
    Here's a minimal example that demonstrates how you can benchmark softirqs from
    within a shell when INTsight has been injected into a kernel. INTspect
    essentially does exactly this, except that it gathers some additional
    information from `proc`.
    
    ``` shell
    cd /sys/kernel/debug/intsight
    echo > init
    
    # Set the parameters
    echo softirq > bottom_handler
    echo 1000    > reps
    
    # Execute the benchmark
    echo > prepare_trigger
    echo > do_trigger
    echo > postprocess_trigger
    
    # Optional: Inspect the results
    head csv_results/pmccntr
    cat  reps # -> 1000
    
# Save the results and parameters
cp -vrf . ~/intsight
```
    
After this, `~/intsight/csv_results` will contain the checkpoint names and
timestamps recorded.
    
    
    ## Sysfs Interface
    
    
When injected into a kernel, INTsight provides a `debugfs` interface accessible
from user space, usually available at `/sys/kernel/debug/intsight`. INTspect
uses this interface to communicate a given benchmark configuration to the
kernel, trigger its execution, and finally retrieve the generated data.
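
If `debugfs` is not mounted yet, it can usually be mounted manually. The
commands below are a minimal sketch assuming the conventional mount point and
root privileges:

``` shell
# Mount debugfs at its conventional location (requires root)
mount -t debugfs none /sys/kernel/debug

# After writing to `init` (see below), the virtual files appear here
ls /sys/kernel/debug/intsight
```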
    
    
    The following table documents the virtual files created. When INTspect executes
    a benchmark, it creates a complete copy of this folder. The following
    description therefore also documents the structure of the result folders
    produced by INTspect.
    
| File Name | Type | Description |
|-----------|------|-------------|
| init | Write-only | Initializes INTsight and creates the other files listed here |
| bottom_handler | "softirq", "tasklet", or "workqueue" | Bottom half mechanism to be benchmarked |
| reps | Integer | Number of measurement runs to perform |
| delay_ms | Integer | Delay between measurement runs in milliseconds |
| delay_type | "udelay" or "usleep_range" | Use active (`udelay`) or passive (`usleep_range`) waiting between measurement runs |
| checkpoint_capacity | Integer | Maximum number of checkpoints recorded per measurement run |
| progress_interval | Integer | Print a progress indicator every Nth run while executing the benchmark |
| prepare_trigger | Write-only | Prepare the benchmark using the current parameters |
| do_trigger | Write-only | Execute the benchmark; blocks the writing thread for at least `reps` x `delay_ms` milliseconds |
| postprocess_trigger | Write-only | Expose the results, creating the `csv_results` folder |
| csv_results/name | Read-only CSV | One line per measurement run; each line contains the checkpoint names in the order they were encountered during that run |
| csv_results/* | Read-only CSVs | The recorded timestamps matching the checkpoint names in `csv_results/name` |
| vmalloc_checkpoint_matrix | Read-only Boolean | Indicates whether the checkpoint buffer was small enough to be allocated using `kmalloc()`, or whether `vmalloc()` was required |
| do_trigger_since_boot | Read-only Integer | Number of benchmarks executed since booting the system |
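
The parameter files can also be read back after writing, which is handy for
verifying a configuration before triggering a run. A small sketch with
arbitrary example values:

``` shell
cd /sys/kernel/debug/intsight

# Example values, not recommendations
echo usleep_range > delay_type
echo 10           > delay_ms

# Reading a parameter file returns its current value
cat delay_type            # -> usleep_range
cat delay_ms              # -> 10

# Number of benchmarks executed since boot
cat do_trigger_since_boot
```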
    
In general, the following procedure is followed when executing a benchmark (a
complete shell example is sketched after the list):
    
1. Initialize INTsight by writing into `init`; this is only required once
   after booting.
    
2. Set the benchmark parameters by writing into `bottom_handler`, `reps`,
   `delay_ms`, `delay_type`, `checkpoint_capacity`, and `progress_interval`.
    
    
    3. Prepare, execute and postprocess the benchmark by writing into
       `prepare_trigger`, `do_trigger` and `postprocess_trigger`.
    
4. Retrieve the results by copying the `csv_results` folder. It is recommended
   to copy the whole `intsight` directory, since this also includes the
   benchmark parameters (the files written in step 2 can also be read back to
   inspect their current values).
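
Putting the four steps together, a complete run might look like the following
sketch (all parameter values are merely illustrative):

``` shell
cd /sys/kernel/debug/intsight
echo > init                          # step 1, only required once after booting

# Step 2: benchmark parameters (example values)
echo softirq      > bottom_handler
echo 1000         > reps
echo 10           > delay_ms
echo usleep_range > delay_type
echo 64           > checkpoint_capacity
echo 100          > progress_interval

# Step 3: prepare, execute, postprocess
echo > prepare_trigger
echo > do_trigger                    # blocks for at least reps x delay_ms ms
echo > postprocess_trigger

# Step 4: save parameters and results
cp -vrf . ~/intsight
```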
    
    ### `csv_results`
    
For each timestamp type enabled at compile time (see the Kconfig options
`INTSIGHT_TIMESTAMP_TYPE_*`), writing into `postprocess_trigger` creates a file
in the `csv_results` folder. Like `csv_results/name`, these files contain one
line per measurement run (the number of runs performed is set before the
benchmark by writing into `reps`). Each line in `csv_results/name` contains the
checkpoint names in the order they were encountered during that run (therefore
at most `checkpoint_capacity` columns); the timestamp recorded at a given
checkpoint can be found by opening the other files in `csv_results` and looking
at the same row and column.
    
    
    __Example:__
    
Here's an example of the contents of the CSV results folder for a softirq
benchmark with 3 measurement runs (the timestamps are also a little
unrealistic). In practice, INTsight will by default record many other
checkpoints before, after, and between the `irq` and `softirq` checkpoints,
which mark the execution of the top half and the softirq requested from within
it.
    
    File `csv_results/name`:
    
    ``` csv
    irq,softirq
    irq,softirq
    irq,mix_pool_bytes,softirq
    ```
    
    File `csv_results/tsc`:
    
    ``` csv
    100,2600
    200,2750
    150,1000,3500
    ```
    
    This minimal example would be interpreted as follows:
    
- Delay between top and bottom half in the first run: 2600 - 100 = 2500 ticks,
  in the second run 2750 - 200 = 2550 ticks, and in the third 3500 - 150 = 3350
  ticks.
    
- In the third run, the kernel's entropy pool code (`mix_pool_bytes`)
  introduced an additional delay, as it was invoked between the top and bottom
  half.
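
Given these files, per-run metrics can be extracted with standard tools. For
example, a small sketch (bash, not part of INTsight or INTspect) that computes
the delay between the first and last checkpoint of each run and pairs the
names and timestamps of a single run:

``` shell
# Difference between the last and the first timestamp of every run
awk -F, '{ print $NF - $1 }' csv_results/tsc
# -> 2500
#    2550
#    3350

# Pair checkpoint names with their timestamps for the third run
paste -d, <(sed -n 3p csv_results/name | tr , '\n') \
          <(sed -n 3p csv_results/tsc  | tr , '\n')
# -> irq,150
#    mix_pool_bytes,1000
#    softirq,3500
```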