  1. Jan 21, 2022
    • Florian Fischer's avatar
      [LinuxVersion] fix version comparison · 7eb1fff6
      Florian Fischer authored
      LinuxVersion assumed that both strings have the same number of
      dot-separated components, which is obviously not always the case.
      If we cannot compare the two strings any further, they must have
      been equal so far.
      7eb1fff6
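The comparison rule from the commit above can be sketched as follows; all names (`splitVersion`, `compareVersions`) are illustrative, not EMPER's actual LinuxVersion code:

```cpp
#include <algorithm>
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Split a version string like "5.16.2" into its numeric components.
static auto splitVersion(const std::string& version) -> std::vector<long> {
	std::vector<long> parts;
	std::stringstream ss(version);
	std::string part;
	while (std::getline(ss, part, '.')) parts.push_back(std::stol(part));
	return parts;
}

// Compare two dot-separated version strings that may have a different
// number of components. Returns <0, 0, >0 like strcmp. If one string
// runs out of components first, the versions compared equal so far.
static auto compareVersions(const std::string& a, const std::string& b) -> int {
	const auto pa = splitVersion(a);
	const auto pb = splitVersion(b);
	const size_t comparable = std::min(pa.size(), pb.size());
	for (size_t i = 0; i < comparable; ++i) {
		if (pa[i] != pb[i]) return pa[i] < pb[i] ? -1 : 1;
	}
	// No more comparable components: they were equal so far.
	return 0;
}
```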
    • Florian Fischer's avatar
      fix futex usage and shrinking counter · 9d627462
      Florian Fischer authored
      * Sleeping workers decrement the semaphore count before sleeping.
        But if they are notified specifically, the semaphore counter is
        decremented excessively.
        This results in unnecessary suspensions/notifications because the
        counter is out of sync with the actual waiter count.
      * waitv expects that the futex size is specified in the futex flags
      * wake sleepers using FUTEX_PRIVATE_FLAG
      * futex_waitv returns the index of the woken futex -> wake on ret > -1
      * add debug output and asserts
      9d627462
    • Florian Fischer's avatar
      [Future] add getter for the return value · 88b015be
      Florian Fischer authored
      A getter not calling sem.wait is needed so we don't call sem.wait
      twice: once during Future::cancel() and again in Future::wait() to
      obtain the return value afterwards.
      88b015be
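A minimal sketch of the idea, using a condition variable as a stand-in for EMPER's semaphore; the names are hypothetical:

```cpp
#include <cassert>
#include <condition_variable>
#include <cstdint>
#include <mutex>

// Sketch of a Future whose wait() consumes the completion signal
// exactly once, while the new getter only reads the stored value.
class Future {
	std::mutex mutex;
	std::condition_variable cv;
	bool completed = false;
	int32_t returnValue = 0;

 public:
	void complete(int32_t ret) {
		{
			std::lock_guard<std::mutex> lock(mutex);
			returnValue = ret;
			completed = true;
		}
		cv.notify_one();
	}

	// Blocks until the Future is completed (the "sem.wait" path).
	auto wait() -> int32_t {
		std::unique_lock<std::mutex> lock(mutex);
		cv.wait(lock, [this] { return completed; });
		return returnValue;
	}

	// Getter that does NOT wait: only valid once completion was
	// already observed, e.g. after cancel() or wait() returned.
	auto getReturnValue() const -> int32_t { return returnValue; }
};
```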
    • Florian Fischer's avatar
      [CancelFutureTest] add test case using all workers · d6dad951
      Florian Fischer authored
      Our only cancellation test case in which the cancellation may have
      to happen on a specific worker uses a single fiber.
      The introduced massCancelOnDifferentWorker() test case uses
      workerCount * 5 fibers and actively tries to provoke cancellation on
      other workers.
      d6dad951
    • Florian Fischer's avatar
      fix Future cancellation and enable test case · b9d204e6
      Florian Fischer authored
      Remember the IoContext where a Future was prepared and submit
      the CancelWrapper on the correct Worker using scheduleOn.
      b9d204e6
    • Florian Fischer's avatar
      replace inbox with mpscQueue · c826ce1e
      Florian Fischer authored
      c826ce1e
    • Florian Fischer's avatar
      [PipeSleepStrategy] fix specific state and sleeper count race · e0f54d46
      Florian Fischer authored
      Introducing a lock for each specific state greatly simplifies the
      algorithm, fixes a race, and I expect it to be rather cheap.
      The fact that we have to check two conditions before sleeping,
      and prepare resources depending on them, makes the algorithm
      complex and racy.
      We skip sleeping if we were notified specifically or if the global
      sleeper count was less than 0.

      If we check our local state first and decrement the global sleeper
      count later, we could receive a notification after the decrement,
      which causes the worker to skip sleeping, making the decrement wrong
      and the whole counter unsound.

      Checking the local state first, marking ourselves as sleeping, and
      preparing a read on the specific pipe has the problem that if the
      decrement then tells us to skip sleeping, we have needlessly prepared
      sqes which we still have to submit although we are not actually
      sleeping.

      And decrementing the global count first has the same problem as the
      first variant: the decrement is wrong if we skip sleeping afterwards,
      breaking the counter.

      All this is prevented by locking the specific state while we check
      both conditions.
      e0f54d46
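The locked check can be sketched as follows. This is an illustrative model, not EMPER's code: the exact sleeper-count semantics are assumed, and a std::mutex stands in for the per-worker specific-state lock:

```cpp
#include <atomic>
#include <cassert>
#include <mutex>

// Per-worker specific sleep state, protected by its own lock so the
// specific-notification check and the global counter update happen
// atomically with respect to notifySpecific().
struct SpecificState {
	std::mutex lock;
	bool notified = false;
};

// Global sleeper count (assumed semantics: positive means workers may
// still go to sleep; a non-positive value means pending work).
static std::atomic<long> sleeperCount{0};

// Returns true if the worker may actually go to sleep.
static auto trySleep(SpecificState& state) -> bool {
	std::lock_guard<std::mutex> guard(state.lock);
	if (state.notified) {
		// Consume the specific notification and skip sleeping
		// without ever touching the global counter.
		state.notified = false;
		return false;
	}
	// Announce ourselves as sleeping. If the previous value was not
	// positive we must skip sleeping; undo the decrement so the
	// counter stays sound.
	if (sleeperCount.fetch_sub(1) <= 0) {
		sleeperCount.fetch_add(1);
		return false;
	}
	return true;
}

static void notifySpecific(SpecificState& state) {
	std::lock_guard<std::mutex> guard(state.lock);
	state.notified = true;
}
```

Because both conditions are checked under the same lock a specific notification can never slip in between the local check and the counter decrement, which is exactly the race described above.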
    • Florian Fischer's avatar
      check sqe tags before reaping · c57a23be
      Florian Fischer authored
      Change the mechanism by which EMPER achieves the invariant that only
      the OWNER of an IoContext is allowed to reap new work notifications
      from it.
      Previously we used the state of the PipeSleepStrategy, which proved
      complex and error-prone.
      Now we always check if the completions we are about to reap contain
      any new work notifications and, if so, return early without reaping
      them.
      Now the behavior of the locked reap variant equals the lock-less
      variants.
      c57a23be
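The early-return check might look like this sketch; Cqe, PointerTags, and reapCompletions are stand-ins for EMPER's io_uring types, not the actual code:

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <vector>

enum class PointerTags : uint8_t { Future = 0, NewWorkNotification = 1 };

// Simplified completion queue entry: the tag encoded in its user data
// plus an opaque payload.
struct Cqe {
	PointerTags tag;
	uint64_t data;
};

// Reap a batch of completions. A non-owner first scans the batch for
// new-work notifications and bails out without reaping anything if it
// finds one, leaving the whole batch for the owner.
static auto reapCompletions(const std::vector<Cqe>& cqes, bool isOwner)
		-> std::optional<std::vector<uint64_t>> {
	if (!isOwner) {
		for (const auto& cqe : cqes) {
			if (cqe.tag == PointerTags::NewWorkNotification) return std::nullopt;
		}
	}
	std::vector<uint64_t> reaped;
	for (const auto& cqe : cqes) reaped.push_back(cqe.data);
	return reaped;
}
```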
    • Florian Fischer's avatar
      [PipeSleepStrategy] implement notifySpecific · 1aadf7fe
      Florian Fischer authored
      Implement notifySpecific by using a worker-exclusive thread-local
      sleepState and pipe.
      The sleepState, previously called waitInflight, is no longer a member
      of IoContext; a thread_local PipeSleepStrategy::SleepState is used
      instead.
      It is safe to use a thread_local object because onNewWorkNotification
      is only called by the owner of the sleepState, since others
      (ANYWHERE, EMPER) must not reap new work notifications from a
      worker's CQ.
      
      Update the algorithm documentation and other code comments.
      1aadf7fe
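A minimal sketch of the worker-exclusive state, assuming the single-owner invariant described above (names are illustrative):

```cpp
#include <cassert>

// Each worker thread owns its SleepState exclusively, so a plain
// thread_local object with non-atomic members is sufficient.
struct SleepState {
	bool waitInflight = false;  // is a read on the specific pipe pending?
};

static thread_local SleepState sleepState;

// Only ever called by the owning worker after it reaped a new-work
// notification from its own CQ, so no synchronization is needed.
static void onNewWorkNotification() { sleepState.waitInflight = false; }
```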
    • Florian Fischer's avatar
    • Florian Fischer's avatar
      fix/improve notifySpecific using semaphores · 15efdc5c
      Florian Fischer authored
      * Using SleeperState instead of boolean flags makes the code more
        readable.
      * Don't try to notify ourselves, which resulted in an infinite loop.
      * Allocate the worker states cache line exclusive.
      * Add debug messages.
      * Back off for 1ms when notifying everyone to allow the specific worker
        to wake up.
      15efdc5c
    • Florian Fischer's avatar
      add semaphore using futex_waitv(2) supporting notify_specific · 96a846a1
      Florian Fischer authored
      The SpuriousFutex2Semaphore is able to notify a specific worker
      by using two futexes to wait on.

      One works like a normal semaphore and is used for global,
      non-specific notifications via notify() and notify_many().

      The second one, per worker, is based on a SleeperState.
      To notify a specific worker we change its SleeperState to Notified
      and call FUTEX_WAKE if needed.
      96a846a1
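The specific-notification path can be sketched with atomics alone; the actual futex_waitv(2)/FUTEX_WAKE syscalls are elided and all names are illustrative, not EMPER's SpuriousFutex2Semaphore:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Per-worker state word a sleeping worker waits on with futex_waitv(2),
// together with the global semaphore word.
enum class SleeperState : uint32_t { Running, Sleeping, Notified };

struct WorkerState {
	std::atomic<SleeperState> state{SleeperState::Running};
};

// Move the worker's state to Notified. Returns true if a FUTEX_WAKE on
// the state word is needed, i.e. the worker was actually blocked.
static auto notifySpecific(WorkerState& worker) -> bool {
	const SleeperState old = worker.state.exchange(SleeperState::Notified);
	return old == SleeperState::Sleeping;
}
```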
    • Florian Fischer's avatar
      pass a fiberHint through the onNewWork notifications · 207fba4d
      Florian Fischer authored
      The FiberHint is needed to decide in the runtime which worker to wake
      up.
      * Hint(Worker, FiberSource::inbox) -> try to notify the specific
        worker
      * Hint(FiberSource::{local,anywhereQueue}) -> notify anyone

      The first case is needed because, due to the new worker-local inbox
      queues, we must notify the worker owning the queue to prevent sleep
      locks.
      The SemaphoreSleepStrategy already has a notifySpecific
      implementation, but it is very naive and we should implement better
      ones.

      The second case is what the runtime has done before:
      its WakeupStrategy decides how many workers the SleepStrategy should
      wake up.

      Also remove the default CallerEnvironment template parameters to
      prevent errors where the CallerEnvironment was forgotten and not
      passed at a call site.
      207fba4d
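The dispatch decision described above might look like this sketch (hypothetical names, not EMPER's actual FiberHint API):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

enum class FiberSource : uint8_t { local, inbox, anywhereQueue, io };

// Hint describing where a new fiber was placed; the worker id is only
// meaningful for FiberSource::inbox.
struct FiberHint {
	FiberSource source;
	unsigned workerId;
};

// Decide whom the sleep strategy should notify: a specific worker for
// inbox fibers (only the queue's owner may run them), anyone otherwise.
static auto workerToNotify(FiberHint hint) -> std::optional<unsigned> {
	if (hint.source == FiberSource::inbox) return hint.workerId;
	return std::nullopt;
}
```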
    • Florian Fischer's avatar
      generalize fiber hints with new emper::FiberHint class · 0ea46b9c
      Florian Fischer authored
      The new class is used when the specific location of a Fiber is
      needed: it combines an emper::FiberSource with a workerid_t.
      This replaces the hints using TaggedPtrs with IoContext::PointerTags.

      IoContext::PointerTags::NewWork{Wsq,Aq} becomes
      IoContext::PointerTags::NewWorkNotification.
      0ea46b9c
    • Florian Fischer's avatar
      introduce new Scheduler::scheduleOn(fiber, workerId) function · 24993175
      Florian Fischer authored
      This function is needed to deal with worker-local resources, for
      example io_uring requests.

      Each worker now always has an MPSC inbox queue, which was already
      used in the laws scheduling strategy.
      Fibers can be scheduled to a specific worker using the new
      Scheduler::scheduleOn method.

      Since the inbox queues are now always present, we can use a single
      FiberSource enum combining AbstractWorkStealingStrategy::FiberSource
      and LawsStrategy::FiberSource.

      The laws strategy now uses the inbox queues as its priority queues,
      with the only difference that when scheduling to an inbox queue
      using Scheduler::scheduleOn, the Fiber lives only in the inbox
      queue and not simultaneously in a WSQ as well.

      Unrelated code changes made while touching the code anyway:
      * Introduce FiberSource::io, which hints that a Fiber comes from the
        worker's own CQ.
      * Strongly type the fiber's source in NextFiberResult.
      * Make all scheduler functions return std::optional<NextFiberResult>.
      * Clean up the indentation in nextFiberResultViaWorkStealing.
      24993175
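A sketch of the scheduleOn idea, with a locked std::queue standing in for the lock-free MPSC inbox queue; all names are illustrative, not EMPER's Scheduler:

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>
#include <queue>
#include <vector>

struct Fiber {
	int id;
};

// Per-worker inbox. EMPER uses a lock-free MPSC queue; a mutex-guarded
// std::queue keeps this sketch simple.
struct Inbox {
	std::mutex lock;
	std::queue<Fiber*> queue;
};

class Scheduler {
	std::vector<Inbox> inboxes;

 public:
	explicit Scheduler(size_t workerCount) : inboxes(workerCount) {}

	// Schedule a fiber on a specific worker: the fiber lives only in
	// that worker's inbox, never simultaneously in a work-stealing
	// queue. A real implementation would now notify the target worker
	// via the sleep strategy (FiberHint with FiberSource::inbox).
	void scheduleOn(Fiber& fiber, size_t workerId) {
		std::lock_guard<std::mutex> guard(inboxes[workerId].lock);
		inboxes[workerId].queue.push(&fiber);
	}

	// Called by the owning worker to drain its own inbox.
	auto nextFromInbox(size_t workerId) -> Fiber* {
		std::lock_guard<std::mutex> guard(inboxes[workerId].lock);
		if (inboxes[workerId].queue.empty()) return nullptr;
		Fiber* fiber = inboxes[workerId].queue.front();
		inboxes[workerId].queue.pop();
		return fiber;
	}
};
```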
  11. Dec 25, 2021
    • Florian Fischer's avatar
      make cancellation in all emper variants sound · 9c0f2143
      Florian Fischer authored
      * Document the data races of a future's state.
      * Get and set a Future's state only through methods. This helps to
        add possibly needed atomic operations.
      * Use atomics to get/set the cancel and prepare state in the
        IO_SINGLE_URING variant.
      * Add more IO debug messages.
      * Use the BPS of Futures with callbacks, similar to those of
        forgotten ones, to signal their preparation. The preparation marks
        the last moment where the Future is used in EMPER; after that its
        memory can be dropped.
        ATTENTION: This does not mean that the resources used by the IO
        request can be dropped. The kernel may still use a supplied buffer,
        for example.
      * Fix Future chain cancellation in the SubmitActor.
      9c0f2143
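Accessing the state only through methods makes it easy to swap in atomics where a variant needs them. A hypothetical sketch, not EMPER's actual Future state machine:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

enum class FutureState : uint8_t { New, Prepared, Submitted, Canceled };

// All reads and writes of the state go through methods, so the storage
// can be an atomic without touching any call sites.
class FutureStateHolder {
	std::atomic<FutureState> state{FutureState::New};

 public:
	auto getState() const -> FutureState {
		return state.load(std::memory_order_acquire);
	}

	void setState(FutureState s) { state.store(s, std::memory_order_release); }

	// Cancel only succeeds if the Future was not prepared yet; the CAS
	// resolves the race between cancellation and preparation.
	auto tryCancelBeforePrepare() -> bool {
		FutureState expected = FutureState::New;
		return state.compare_exchange_strong(expected, FutureState::Canceled);
	}
};
```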
  12. Dec 24, 2021
    • Florian Fischer's avatar
      properly cancel future callbacks · 95722c1b
      Florian Fischer authored
      Currently canceling Futures would never take effect because we
      issued the cancel request only with the plain pointer of the future.
      This worked more by coincidence than by design, because
      the PointerTags::Future tagged onto the submitted future pointer
      is 0.

      This does not work for callbacks, because they are tagged with a
      PointerTags != 0 and are submitted with their callback pointer
      rather than the future pointer.

      Fix this by extracting the tagging from IoContext::prepareFutureChain
      into IoContext::createFutureTag and using it when submitting a cancel
      sqe.

      Warn the user that they have to manually take care of the memory
      safety of the callback, because we cannot await the callback in
      Future::cancel.

      Add a test case to CancelFutureTest.
      95722c1b
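Pointer tagging of this kind can be sketched as follows; the tag layout and names are assumptions for illustration, not EMPER's actual createFutureTag. The cancel sqe must carry exactly the same tagged value as the original submission, otherwise the kernel finds no matching request:

```cpp
#include <cassert>
#include <cstdint>

enum class PointerTags : uint8_t { Future = 0, Callback = 1 };

// Pointers are at least 8-byte aligned, so the low three bits are free
// to carry the tag. With tag == Future (0) the tagged value equals the
// plain pointer, which is why canceling plain futures worked "by
// coincidence" before this fix.
static const uintptr_t TAG_BITS = 0x7;

static auto createFutureTag(void* ptr, PointerTags tag) -> uint64_t {
	return reinterpret_cast<uintptr_t>(ptr) | static_cast<uintptr_t>(tag);
}

static auto untag(uint64_t userData) -> void* {
	return reinterpret_cast<void*>(userData & ~TAG_BITS);
}
```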