1. 15 Aug, 2021 1 commit
  2. 14 Aug, 2021 1 commit
  3. 11 Aug, 2021 1 commit
  4. 04 Aug, 2021 2 commits
  5. 02 Aug, 2021 1 commit
    • Feature `python train.py --cache disk` (#4049) · 2d990632
      junji hashimoto authored
      
      
      * Add cache-on-disk and cache-directory to cache images on disk
      
      * Fix load_image with cache_on_disk
      
      * Add no_cache flag for load_image
      
      * Revert the parts ('logging' and a newline) that do not need to be modified
      
      * Add the assertion for shapes of cached images
      
      * Add a suffix string for cached images
      
      * Fix boundary-error of letterbox for load_mosaic
      
      * Add prefix as cache-key of cache-on-disk
      
      * Update cache-function on disk
      
      * Add psutil in requirements.txt
      
      * Update train.py
      
      * Cleanup1
      
      * Cleanup2
      
      * Skip existing npy
      
      * Include re-space
      
      * Export return character fix
      
      Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
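      A minimal sketch of the disk-caching idea above, assuming a hypothetical `load_image` helper; the actual implementation in `utils/datasets.py` differs in detail:

      ```python
      from pathlib import Path

      import cv2
      import numpy as np

      def load_image(path, cache_dir=None):
          # Decoding JPEGs dominates dataloading time; re-reading a raw .npy dump
          # of the decoded array is much cheaper on subsequent epochs.
          if cache_dir is None:
              return cv2.imread(str(path))  # no cache: decode every time

          npy = Path(cache_dir) / (Path(path).stem + '.npy')
          if npy.exists():
              return np.load(npy)  # cache hit: skip image decoding

          im = cv2.imread(str(path))
          np.save(npy, im)  # cache miss: decode once, then persist to disk
          return im
      ```

      The `psutil` dependency added above presumably helps check available resources before committing to a cache.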
  6. 31 Jul, 2021 1 commit
  7. 30 Jul, 2021 1 commit
    • Add `python train.py --freeze N` argument (#4238) · bceb57b9
      IneovaAI authored
      
      
      * Add freeze as an argument
      
      I train on different platforms, and sometimes I want to freeze some layers. Currently I have to edit the code to do so, and also keep track of how many layers I froze on each platform. Please add the number of layers to freeze as an argument in a future version (a sketch of one approach follows this entry).
      
      * Update train.py
      
      * Update train.py
      
      * Cleanup
      
      Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
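      A rough sketch of how such a freeze argument can be applied, assuming YOLOv5's `model.0.`, `model.1.`, … module naming; the prefix matching here is an assumption, not the exact merged code:

      ```python
      def freeze_layers(model, n):
          # Freeze the first n layers by disabling their gradients.
          prefixes = [f'model.{i}.' for i in range(n)]  # e.g. ['model.0.', 'model.1.']
          for name, param in model.named_parameters():
              param.requires_grad = True  # train everything by default
              if any(name.startswith(p) for p in prefixes):
                  print(f'freezing {name}')
                  param.requires_grad = False
      ```

      Frozen parameters receive no updates, so a pretrained backbone can stay fixed while only the later layers fine-tune.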
  8. 29 Jul, 2021 2 commits
  9. 28 Jul, 2021 3 commits
  10. 26 Jul, 2021 1 commit
  11. 25 Jul, 2021 1 commit
    • New CSV Logger (#4148) · 96e36a7c
      Glenn Jocher authored
      * New CSV Logger
      
      * cleanup
      
      * move batch plots into Logger
      
      * rename comment
      
      * Remove total loss from progress bar
      
      * mloss :-1 bug fix
      
      * Update plot_results()
      
      * Update plot_results()
      
      * plot_results bug fix
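      A minimal sketch of per-epoch CSV logging in the spirit of this change; the column names are hypothetical, and the real logger writes `results.csv` into the run directory:

      ```python
      from pathlib import Path

      def log_csv(file, metrics):
          # Append one epoch's metrics as a CSV row, writing a header on first use.
          file = Path(file)
          keys, vals = zip(*metrics.items())
          header = '' if file.exists() else ','.join(keys) + '\n'
          with open(file, 'a') as f:
              f.write(header + ','.join(f'{v:.5g}' for v in vals) + '\n')

      log_csv('results.csv', {'epoch': 0, 'train/box_loss': 0.08, 'metrics/mAP_0.5': 0.31})
      ```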
  12. 24 Jul, 2021 2 commits
    • Refactor train.py and val.py `loggers` (#4137) · efe60b56
      Glenn Jocher authored
      * Update loggers
      
      * Config
      
      * Update val.py
      
      * cleanup
      
      * fix1
      
      * fix2
      
      * fix3 and reformat
      
      * format sweep.py
      
      * Logger() class
      
      * cleanup
      
      * cleanup2
      
      * wandb package import fix
      
      * wandb package import fix2
      
      * txt fix
      
      * fix4
      
      * fix5
      
      * fix6
      
      * drop wandb into utils/loggers
      
      * fix 7
      
      * rename loggers/wandb_logging to loggers/wandb
      
      * Update message
      
      * Update message
      
      * Update message
      
      * cleanup
      
      * Fix x axis bug
      
      * fix rank 0 issue
      
      * cleanup
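      One way to read this refactor: a single `Loggers` object owns the CSV/TensorBoard/W&B backends, and train.py/val.py talk to it through one interface instead of touching each backend directly. A rough sketch with assumed attribute and method names:

      ```python
      class Loggers:
          # Facade collecting all logging backends behind one interface (sketch).
          def __init__(self, tb=None, wandb=None):
              self.tb = tb        # TensorBoard SummaryWriter, if enabled
              self.wandb = wandb  # wandb run object, if enabled

          def on_fit_epoch_end(self, metrics, epoch):
              # Fan one dict of scalars out to every active backend.
              if self.tb:
                  for k, v in metrics.items():
                      self.tb.add_scalar(k, v, epoch)
              if self.wandb:
                  self.wandb.log(metrics, step=epoch)
      ```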
    • Update train.py (#4136) · 63dd65e7
      Glenn Jocher authored
      * Refactor train.py
      
      * Update imports
      
      * Update imports
      
      * Update optimizer
      
      * cleanup
  13. 21 Jul, 2021 1 commit
  14. 19 Jul, 2021 1 commit
    • `val.py` refactor (#4053) · f7d85620
      Glenn Jocher authored
      
      
      * val.py refactor
      
      * cleanup
      
      * cleanup
      
      * cleanup
      
      * cleanup
      
      * save after eval
      
      * opt.imgsz bug fix
      
      * wandb refactor
      
      * dataloader to train_loader
      
      * capitalize global variables
      
      * runs/hub/exp to runs/detect/exp
      
      * refactor wandb logging
      
      * Refactor wandb operations (#4061)
      
      Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
  15. 17 Jul, 2021 1 commit
  16. 14 Jul, 2021 1 commit
  17. 08 Jul, 2021 1 commit
  18. 05 Jul, 2021 1 commit
    • Evolution commented `hyp['anchors']` fix (#3887) · 8930e22c
      Glenn Jocher authored
      Fix for the `KeyError: 'anchors'` error raised when starting hyperparameter evolution; a sketch of a guard follows the traceback below:
      ```bash
      python train.py --evolve
      ```
      
      ```bash
      Traceback (most recent call last):
        File "E:\yolov5\train.py", line 623, in <module>
          hyp[k] = max(hyp[k], v[1])  # lower limit
      KeyError: 'anchors'
      ```
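      The root cause: `anchors` is commented out in the default hyperparameter YAML, but the evolution meta dictionary still references it, so `hyp['anchors']` raises. A hedged sketch of the kind of guard that avoids the error (values illustrative):

      ```python
      hyp = {'lr0': 0.01}  # loaded from the hyp .yaml in practice; 'anchors' commented out
      meta = {'lr0': (1, 1e-5, 1e-1), 'anchors': (2, 2.0, 10.0)}  # key: (gain, min, max)

      if 'anchors' not in hyp:  # guard: the key may be commented out in the YAML
          hyp['anchors'] = 3    # fall back to a default anchor count

      for k, v in meta.items():       # clip evolved values to their bounds
          hyp[k] = max(hyp[k], v[1])  # lower limit
          hyp[k] = min(hyp[k], v[2])  # upper limit
      ```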
  19. 04 Jul, 2021 1 commit
  20. 30 Jun, 2021 1 commit
  21. 28 Jun, 2021 1 commit
    • Fix warmup `accumulate` (#3722) · 3974d725
      yellowdolphin authored
      * gradient accumulation during warmup in train.py
      
      Context:
      `accumulate` is the number of batches/gradients accumulated before the next `optimizer.step()` call.
      During warmup, it is ramped up from 1 to the final value `nbs / batch_size`.
      Although I have not seen this in other libraries, I like the idea. During warmup, while gradients are large, overly large steps are more of an issue than the gradient noise introduced by small steps.
      
      The bug:
      The condition for performing the optimizer step is wrong:
      > if ni % accumulate == 0:
      This produces irregular step sizes whenever `accumulate` is not constant. It becomes relevant when `batch_size` is small and `accumulate` changes many times during warmup.
      
      This demo also shows the proposed solution, which is to use a `>=` condition instead (see the sketch after this entry):
      https://colab.research.google.com/drive/1MA2z2eCXYB_BC5UZqgXueqL_y1Tz_XVq?usp=sharing

      Further, I propose not restricting the number of warmup iterations to >= 1000. If the user changes `hyp['warmup_epochs']`, the current restriction causes unexpected behavior, and it makes evolution unstable if this parameter were to be optimized.
      
      * replace last_opt_step tracking by do_step(ni)
      
      * add docstrings
      
      * move down nw
      
      * Update train.py
      
      * revert math import move
      
      Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
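      A minimal sketch of the proposed fix, assuming `ni` counts integrated batches and `scaler` is a `torch.cuda.amp.GradScaler`; the key change is tracking when the optimizer last stepped instead of testing divisibility:

      ```python
      last_opt_step = -1
      for ni, batch in enumerate(train_loader):  # train_loader/compute_loss are placeholders
          loss = compute_loss(batch)
          scaler.scale(loss).backward()
          if ni - last_opt_step >= accumulate:  # robust even when accumulate changes mid-warmup
              scaler.step(optimizer)  # unscale and apply the accumulated gradients
              scaler.update()
              optimizer.zero_grad()
              last_opt_step = ni
      ```

      With `%`, a change of `accumulate` at an unlucky `ni` can trigger two steps only one batch apart; with `>=`, at least `accumulate` batches always pass between steps.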
  22. 26 Jun, 2021 1 commit
  23. 25 Jun, 2021 2 commits
  24. 24 Jun, 2021 1 commit
    • Add optional dataset.yaml `path` attribute (#3753) · f79d7479
      Glenn Jocher authored
      * Add optional dataset.yaml `path` attribute
      
      @KalenMike
      
      * pass locals to python scripts
      
      * handle lists
      
      * update coco128.yaml
      
      * Capitalize first letter
      
      * add test key
      
      * finalize GlobalWheat2020.yaml
      
      * finalize objects365.yaml
      
      * finalize SKU-110K.yaml
      
      * finalize SKU-110K.yaml
      
      * finalize VisDrone.yaml
      
      * NoneType fix
      
      * update download comment
      
      * voc to VOC
      
      * update
      
      * update VOC.yaml
      
      * update VOC.yaml
      
      * remove dashes
      
      * delete get_voc.sh
      
      * force coco and coco128 to ../datasets
      
      * Capitalize Argoverse_HD.yaml
      
      * Capitalize Objects365.yaml
      
      * update Argoverse_HD.yaml
      
      * coco segments fix
      
      * VOC single-thread
      
      * update Argoverse_HD.yaml
      
      * update data_dict in test handling
      
      * create root
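      The new `path` attribute lets a dataset YAML declare a single root that the `train`/`val`/`test` entries are resolved against. A hedged sketch of that resolution logic (keys follow coco128.yaml; the repo's exact handling may differ):

      ```python
      from pathlib import Path

      import yaml

      data = yaml.safe_load("""
      path: ../datasets/coco128  # optional dataset root directory
      train: images/train2017    # relative to 'path'
      val: images/train2017
      """)

      root = Path(data.get('path', ''))  # empty root if 'path' is absent
      for k in ('train', 'val', 'test'):
          if data.get(k):  # 'test' is optional
              data[k] = str(root / data[k])  # -> '../datasets/coco128/images/train2017'
      ```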
  25. 23 Jun, 2021 2 commits
  26. 21 Jun, 2021 2 commits
  27. 20 Jun, 2021 2 commits
  28. 19 Jun, 2021 4 commits
    • Add torch DP warning (#3698) · c1af67dc
      Glenn Jocher authored
    • Eliminate `total_batch_size` variable (#3697) · b3e2f4e0
      Glenn Jocher authored
      * Eliminate `total_batch_size` variable
      
      * cleanup
      
      * Update train.py
    • Update DDP for `torch.distributed.run` with `gloo` backend (#3680) · fad27c00
      Glenn Jocher authored
      * Update DDP for `torch.distributed.run`
      
      * Add LOCAL_RANK
      
      * remove opt.local_rank
      
      * backend="gloo|nccl"
      
      * print
      
      * print
      
      * debug
      
      * debug
      
      * os.getenv
      
      * gloo
      
      * gloo
      
      * gloo
      
      * cleanup
      
      * fix getenv
      
      * cleanup
      
      * cleanup destroy
      
      * try nccl
      
      * return opt
      
      * add --local_rank
      
      * add timeout
      
      * add init_method
      
      * gloo
      
      * move destroy
      
      * move destroy
      
      * move print(opt) under if RANK
      
      * destroy only RANK 0
      
      * move destroy inside train()
      
      * restore destroy outside train()
      
      * update print(opt)
      
      * cleanup
      
      * nccl
      
      * gloo with 60 second timeout
      
      * update namespace printing
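      Piecing the bullets together: launching moves to `torch.distributed.run`, `LOCAL_RANK` comes from the environment rather than `--local_rank`, and the process group uses `nccl` when available with a `gloo` fallback and a 60-second timeout. A rough sketch of that setup, not the exact merged code:

      ```python
      import os
      from datetime import timedelta

      import torch
      import torch.distributed as dist

      LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1))  # set by torch.distributed.run

      if LOCAL_RANK != -1:  # DDP mode
          torch.cuda.set_device(LOCAL_RANK)
          backend = 'nccl' if dist.is_nccl_available() else 'gloo'
          dist.init_process_group(backend=backend, timeout=timedelta(seconds=60))
      ```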
    • Slightly modify CLI execution (#3687) · bfb2276b
      lb-desupervised authored
      
      
      * Slightly modify CLI execution
      
      This simple change makes it easier to run the primary functions of this
      repo (train/detect/test) from within Python. An object representing `opt`
      can be constructed and fed to the `main` function of each of these
      modules, rather than having to call the lower-level functions directly or
      run the module as a script, as sketched after this entry.
      
      * Update export.py
      
      Add CLI parsing update for more convenient module usage within Python.
      
      Co-authored-by: Lewis Belcher <lb@desupervised.io>
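      A hedged sketch of the pattern described above: argument parsing is split from execution so Python callers can build `opt` themselves and hand it to `main`. Function names follow the PR description, not necessarily the final code:

      ```python
      import argparse

      def parse_opt():
          parser = argparse.ArgumentParser()
          parser.add_argument('--epochs', type=int, default=300)
          parser.add_argument('--batch-size', type=int, default=16)
          return parser.parse_args()

      def main(opt):
          train(opt)  # hypothetical lower-level entry point

      if __name__ == '__main__':
          main(parse_opt())
      ```

      From Python, the CLI can then be bypassed entirely, e.g. `main(argparse.Namespace(epochs=10, batch_size=16))`.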