1. 30 May, 2022 9 commits
  2. 13 May, 2022 6 commits
  3. 10 May, 2022 2 commits
  4. 08 May, 2022 1 commit
  5. 04 May, 2022 1 commit
  6. 03 May, 2022 2 commits
  7. 26 Apr, 2022 3 commits
  8. 25 Apr, 2022 5 commits
  9. 24 Apr, 2022 1 commit
  10. 23 Apr, 2022 4 commits
    • Merge branch 'inc-sleep-sem-threshold' into 'master' · 2ab1777a
      Florian Schmaus authored
      increase the sleep semaphore threshold
      
      See merge request i4/manycore/emper!377
    • Merge branch 'pulse-eval' into 'master' · fca60937
      Florian Schmaus authored
      Pulse: initial pulse evaluation commit
      
      See merge request i4/manycore/emper!376
    • Merge branch 'fsearch-finer-fiber-control' into 'master' · 75a7ff00
      Florian Schmaus authored
      fsearch: add more fine grained control about the used fiber throttles
      
      See merge request i4/manycore/emper!375
    • increase the sleep semaphore threshold · c54a6bd4
      Florian Fischer authored
      Also remove the negation of the condition (!> is equivalent to <=).
      
      We currently use the sleep strategy's semaphore very greedily: we
      skip the semaphore's V() operation whenever we are sure that doing
      so does not violate our progress guarantee.
      
      When scheduling new work from within the runtime, we skip the wakeup
      if we observe nobody sleeping. This is fine in terms of progress and
      keeps the number of atomic operations on global state to a minimum.
      
      Using a threshold of 0 (skip when we observe nobody sleeping) however
      introduces a race between inserting new work and going to sleep,
      which harms latency when a worker goes to sleep without being
      notified about the new work.
      
      This race is common and can be observed in the pulse micro-benchmark.
      An emper build with a threshold of 0 shows high latency compared to
      using an io-based sleep strategy or an increased threshold.
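      The skip decision and the race described above can be sketched as
      follows. This is a minimal, hypothetical model: SleepCounter and its
      method names are illustrative, not EMPER's actual API, and a real
      strategy releases an OS semaphore rather than bumping a plain counter.
      The counter is positive when spare wakeup tokens exist and negative
      when workers sleep.

```python
class SleepCounter:
    """Hypothetical sketch of a threshold-gated wakeup (not EMPER's code)."""

    def __init__(self, threshold):
        self.value = 0          # > 0: spare tokens, < 0: sleeping workers
        self.threshold = threshold
        self.wakeups = 0        # stand-in for the semaphore's V() calls

    def notify_one(self):
        """Producer side: called after inserting new work."""
        # Skip the V() unless the counter is below the threshold.
        # threshold 0: V() only if a worker is already asleep (value < 0).
        # threshold 1: V() also when value == 0, covering a worker that is
        #              about to decrement the counter and block.
        if self.value >= self.threshold:
            return
        self.value += 1
        self.wakeups += 1

    def prepare_sleep(self):
        """Worker side: returns True if the worker must actually block."""
        old = self.value
        self.value -= 1
        return old <= 0


# The race with threshold 0: the producer observes the counter before the
# worker announces its intent to sleep, skips the V(), and the worker then
# blocks without being notified about the new work.
greedy = SleepCounter(threshold=0)
greedy.notify_one()               # observes 0 >= 0 -> skips the wakeup
assert greedy.wakeups == 0
assert greedy.prepare_sleep()     # worker blocks and misses the new work

# With an increased threshold the same observation triggers a V().
safer = SleepCounter(threshold=1)
safer.notify_one()                # 0 < 1 -> performs the V()
assert safer.wakeups == 1
assert not safer.prepare_sleep()  # token consumed; worker stays awake
```

      The extra V() costs additional atomic operations on the producer path,
      which is the overhead the fs-eval measurement below checks for.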
      
      $ build-release/eval/pulse | python -c "<calc mean>"
      Starting pulse evaluation with pulse=1, iterations=30 and utilization=80
      mean: 1721970116.425
      
      $ build-increased-sem-threshold/eval/pulse | python -c "<calc mean>"
      Starting pulse evaluation with pulse=1, iterations=30 and utilization=80
      mean: 1000023942.15
      
      $ build-pipe-release/eval/pulse | python -c "<calc mean>"
      Starting pulse evaluation with pulse=1, iterations=30 and utilization=80
      mean: 1000030557.0861111
      
      $ build-pipe-no-completer/eval/pulse | python -c "<calc mean>"
      Starting pulse evaluation with pulse=1, iterations=30 and utilization=80
      mean: 1000021514.1805556
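
      The "<calc mean>" one-liner is elided above; presumably it averages
      the per-iteration samples that the pulse binary prints. A hypothetical
      equivalent (mean_latency is an illustrative name, and the filter on
      leading digits is an assumption about the output format):

```python
def mean_latency(lines):
    """Average the numeric sample lines, skipping non-numeric headers
    such as "Starting pulse evaluation with ..."."""
    values = [float(line) for line in lines
              if line.strip() and line.split()[0][0].isdigit()]
    return sum(values) / len(values)

# Usage sketch: pipe the pulse output into this script via sys.stdin, e.g.
#   import sys; print("mean:", mean_latency(sys.stdin))
```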
      
      I could not measure any significant overhead from the additional
      atomic operations on my 16-core machine using the fs-eval on an SSD.
      
      $ ./eval.py -r 50 -i emper-vanilla emper-inc-sem-threshold emper-pipe emper-pipe-no-completer
      ...
      $ ./summarize.py results/1599f44-dirty-pasture/<date>/ -f '{target}-{median} (+- {std})'
      duration_time:u:
      emper-vanilla-0.202106189 (+- 0.016075981812486713)
      emper-inc-sem-threshold-0.2049344115 (+- 0.015348506939891596)
      emper-pipe-0.21689131 (+- 0.015198438371285145)
      emper-pipe-no-completer-0.1372724185 (+- 0.005865720218833998)
  11. 21 Apr, 2022 1 commit
  12. 14 Apr, 2022 1 commit
  13. 10 Apr, 2022 4 commits