  Dec 10, 2021
    • [CI] print the number of available CPUs · 2653df7e
      Florian Fischer authored
    • Introduce waitfree workstealing · 1c538024
      Florian Fischer authored
      Waitfree work stealing is configured with the meson option
      'waitfree_work_stealing'.
      
      The retry logic is intentionally left in the Queues and not lifted into
      the scheduler so that the value loaded by an unsuccessful CAS can be reused.
      
      Consider the following pseudocode examples:

      Retry inside the queue (the chosen approach):

      steal() -> res:
        load
      loop:
        if empty return EMPTY
        cas
        if not WAITFREE and not cas:
          goto loop
        return cas ? STOLEN : LOST_RACE

      outer():
        steal()

      Retry lifted into the scheduler:

      steal() -> res:
        load
        if empty return EMPTY
        cas
        return cas ? STOLEN : LOST_RACE

      outer():
      loop:
        res = steal()
        if not WAITFREE and res == LOST_RACE:
          goto loop
      
      In the second example the value loaded by a possibly unsuccessful CAS
      cannot be reused, and a loop of unsuccessful CAS attempts results in
      double loads.
      
      The number of retries is configurable through the template variable maxRetries:
      * maxRetries < 0: retry indefinitely
      * maxRetries >= 0: retry at most maxRetries times
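      A minimal C++ sketch of the retry policy described above, assuming a bounded
      array-backed work-stealing queue; WsQueue, StealResult, and the fixed capacity
      are illustrative assumptions, not EMPER's actual queue types:

      #include <atomic>
      #include <cstdint>

      enum class StealResult { Empty, Stolen, LostRace };

      template <typename Item, std::intmax_t maxRetries>
      class WsQueue {
        static constexpr std::uint64_t capacity = 1024;
        std::atomic<std::uint64_t> top{0};
        std::atomic<std::uint64_t> bottom{0};
        Item items[capacity];

       public:
        StealResult steal(Item& out) {
          // Single explicit load; later reloads come from failed CASes.
          std::uint64_t t = top.load(std::memory_order_acquire);
          for (std::intmax_t retry = 0;; ++retry) {
            if (t >= bottom.load(std::memory_order_acquire)) return StealResult::Empty;
            out = items[t % capacity];
            // A failed compare_exchange_weak writes the current top back into t,
            // so the next iteration reuses that value instead of issuing a
            // second explicit load.
            if (top.compare_exchange_weak(t, t + 1, std::memory_order_acq_rel))
              return StealResult::Stolen;
            // maxRetries < 0: retry indefinitely;
            // maxRetries >= 0: give up after maxRetries attempts.
            if (maxRetries >= 0 && retry >= maxRetries) return StealResult::LostRace;
          }
        }
      };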
  Oct 13, 2021
    • [meson] introduce lockless memory order and rename lockless option · 67b0c77a
      Florian Fischer authored
      The lockless algorithm can now be enabled by setting -Dio_lockless_cq=true
      and the memory ordering it uses by setting -Dio_lockless_memory_order={weak,strong}.
      
      io_lockless_memory_order=weak:
          read with acquire
          write with release
      
      io_lockless_memory_order=strong:
          read with seq_cst
          write with seq_cst
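      A hedged sketch of how the two settings could map onto std::memory_order
      constants for the CQ head accesses; the macro name IO_LOCKLESS_MEMORY_ORDER_WEAK
      and the helper functions are assumptions for illustration, not EMPER's real
      identifiers:

      #include <atomic>
      #include <cstdint>

      #ifdef IO_LOCKLESS_MEMORY_ORDER_WEAK   // -Dio_lockless_memory_order=weak
      constexpr std::memory_order cqLoadOrder = std::memory_order_acquire;
      constexpr std::memory_order cqStoreOrder = std::memory_order_release;
      #else                                  // -Dio_lockless_memory_order=strong
      constexpr std::memory_order cqLoadOrder = std::memory_order_seq_cst;
      constexpr std::memory_order cqStoreOrder = std::memory_order_seq_cst;
      #endif

      // Read the CQ head with the configured ordering.
      inline std::uint32_t loadCqHead(const std::atomic<std::uint32_t>& head) {
        return head.load(cqLoadOrder);
      }

      // Publish a new CQ head with the configured ordering.
      inline void storeCqHead(std::atomic<std::uint32_t>& head, std::uint32_t newHead) {
        head.store(newHead, cqStoreOrder);
      }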
  Oct 11, 2021
    • [IoContext] implement lockless CQ reaping · d9d350d9
      Florian Fischer authored
      TODO: think about stats and possible ring buffer pointer overflow and ABA problems.
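      A sketch of what lockless reaping can look like, assuming a ring buffer with a
      monotonically increasing head that consumers advance via compare-exchange instead
      of taking cq_lock; Cqe, CompletionQueue, and claimOne are illustrative names, not
      EMPER's API. As the commit TODO notes, pointer overflow and ABA still need
      consideration.

      #include <atomic>
      #include <cstdint>
      #include <optional>

      struct Cqe { std::uint64_t userData; std::int32_t res; };

      struct CompletionQueue {
        static constexpr std::uint32_t capacity = 1024;
        std::atomic<std::uint32_t> head{0};  // advanced by consumers (owner or stealers)
        std::atomic<std::uint32_t> tail{0};  // advanced by the producer
        Cqe entries[capacity];

        // Try to claim a single CQE without holding a lock.
        std::optional<Cqe> claimOne() {
          std::uint32_t h = head.load(std::memory_order_acquire);
          for (;;) {
            if (h == tail.load(std::memory_order_acquire)) return std::nullopt;  // empty
            Cqe cqe = entries[h % capacity];
            // Winning the CAS makes this thread the sole owner of entries[h].
            if (head.compare_exchange_weak(h, h + 1, std::memory_order_acq_rel)) return cqe;
            // The failed CAS reloaded h; retry with the fresh head.
          }
        }
      };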
    • implement IO stealing · 0abc29ad
      Florian Fischer authored
      IO stealing is analogous to work stealing and means that worker threads
      without work will try to steal IO completions (CQEs) from other workers'
      IoContexts. The work-stealing algorithm is modified to check a victim's
      CQ after finding its work queue empty.
      
      This approach, in combination with future additions (global notifications
      on IO completions and lock-free CQE consumption), is a realistic candidate
      to replace the completer thread without losing its benefits.
      
      To allow IO stealing the CQ must be synchronized, which is already the
      case through IoContext::cq_lock.
      Currently stealing workers always try to pop a single CQE (this could
      be made configurable).
      Steal attempts are recorded in the IoContext's Stats object and
      successfully stolen IO continuations in the AbstractWorkStealingWorkerStats.
      
      I moved the code transforming CQEs into continuation Fibers from
      reapCompletions into a separate function to make the rather complicated
      function more readable and thus easier to understand.
      
      Remove the default CallerEnvironment template arguments to make
      the code more explicit and prevent easy errors (not propagating
      the caller environment or forgetting the function takes a caller environment).
      
      io::Stats now need to use atomics because multiple threads may increment
      them in parallel from EMPER and the OWNER.
      And since using std::atomic<T*> in std::map is not easily possible, we
      use the compiler __atomic_* builtins.
      
      Add, adjust and fix some comments.
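      A hedged sketch of the stealing path described above: after finding a victim's
      work queue empty, the thief checks the victim's IoContext, tries to pop a single
      completion under the existing cq_lock, and records the attempt with a GCC/Clang
      __atomic builtin. Worker, tryStealIo, and the field names are illustrative, not
      EMPER's API, and the CQ is simplified to a vector of continuations.

      #include <cstdint>
      #include <mutex>
      #include <optional>
      #include <vector>

      struct Fiber;

      struct IoContext {
        std::mutex cq_lock;               // stands in for IoContext::cq_lock
        std::vector<Fiber*> completions;  // simplified stand-in for the CQ
        struct { std::uint64_t stealAttempts = 0; } stats;

        // Try to pop exactly one IO continuation on behalf of a stealing worker.
        std::optional<Fiber*> tryStealIo() {
          // Stats are updated with __atomic builtins since owner and stealers
          // may increment them in parallel.
          __atomic_fetch_add(&stats.stealAttempts, 1, __ATOMIC_RELAXED);
          std::unique_lock lock(cq_lock, std::try_to_lock);
          if (!lock || completions.empty()) return std::nullopt;
          Fiber* fiber = completions.back();
          completions.pop_back();
          return fiber;
        }
      };

      struct Worker {
        IoContext io;
        std::uint64_t stolenIoContinuations = 0;  // AbstractWorkStealingWorkerStats analogue

        Fiber* stealFrom(Worker& victim) {
          if (Fiber* fiber = stealWork(victim)) return fiber;  // regular work stealing first
          if (auto fiber = victim.io.tryStealIo()) {           // then check the victim's CQ
            ++stolenIoContinuations;
            return *fiber;
          }
          return nullptr;
        }

        Fiber* stealWork(Worker& /*victim*/) { return nullptr; }  // placeholder
      };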
  Oct 04, 2021
    • [WakeupStrategy] fix the throttle algorithm for notifications from anywhere · baedc874
      Florian Fischer authored
      The throttle algorithm had the same problem as our sleep algorithms,
      where notifications from anywhere may race with a worker going to
      sleep, resulting in lost wakeups.
      In the sleep strategy we prevent those races by preventing sleep attempts
      when notifying from anywhere.
      The throttle algorithm now does exactly the same: a notifier from anywhere
      will always set the WakeupStrategy state to notified.
      If the state was previously pending, this new approach does not differ from
      the previous behavior and a sleeping worker will be notified.
      If the state was waking, the waking worker skips its sleep when it observes
      the WakeupStrategy state as notified.
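      A minimal sketch of the state handling described above; the three states and the
      rule that a notification from anywhere unconditionally moves the state to Notified
      follow the commit text, while the enum and function names are assumptions for
      illustration:

      #include <atomic>

      enum class WakeupState { Pending, Waking, Notified };

      class ThrottleWakeupStrategy {
        std::atomic<WakeupState> state{WakeupState::Pending};

       public:
        // Called by a notifier from anywhere: always publish Notified so a racing
        // worker that is about to sleep cannot miss the wakeup.
        void notifyFromAnywhere() {
          WakeupState previous = state.exchange(WakeupState::Notified, std::memory_order_acq_rel);
          if (previous == WakeupState::Pending) {
            // Same as before: an actually sleeping worker gets woken up.
            wakeSleepingWorker();
          }
          // previous == Waking: the waking worker will observe Notified and skip its sleep.
        }

        // Called by the waking worker right before it would block.
        bool shouldSkipSleep() {
          WakeupState expected = WakeupState::Notified;
          // Consume the notification; skip sleeping if one was posted.
          return state.compare_exchange_strong(expected, WakeupState::Pending,
                                               std::memory_order_acq_rel);
        }

       private:
        static void wakeSleepingWorker() { /* e.g. futex wake / semaphore post */ }
      };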
  Sep 27, 2021
    • [log] improve timestamp scalability and increase LogBuffer size · 442ead84
      Florian Fischer authored
      std::localtime takes a global lock and is therefore not scalable and
      unsuitable for analyzing timing-sensitive bugs.
      Introduce a new option to use UTC timestamps instead. On my system this
      allows doubling the CPU load while using mmapped logging.

      Also increase the LogBuffer size from 1MB to 1GB because I saw some
      crashes where a renewed buffer was still being used.
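      A sketch of a UTC timestamp path, assuming a POSIX libc: gmtime_r skips the
      time-zone conversion that std::localtime serializes behind a global lock (whether
      gmtime_r itself is entirely lock-free depends on the libc). formatUtcTimestamp is
      an illustrative name, not EMPER's logging API.

      #include <array>
      #include <cstdio>
      #include <ctime>

      std::array<char, 32> formatUtcTimestamp() {
        std::array<char, 32> buf{};
        std::timespec ts{};
        std::timespec_get(&ts, TIME_UTC);

        std::tm tm{};
        gmtime_r(&ts.tv_sec, &tm);  // reentrant; no conversion to local time

        std::snprintf(buf.data(), buf.size(), "%04d-%02d-%02dT%02d:%02d:%02d.%06ldZ",
                      tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
                      tm.tm_hour, tm.tm_min, tm.tm_sec, ts.tv_nsec / 1000);
        return buf;
      }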