Metadata-Version: 2.4
Name: ultralytics
Version: 8.3.63
Summary: Ultralytics YOLO 🚀 for SOTA object detection, multi-object tracking, instance segmentation, pose estimation and image classification.
Author-email: Glenn Jocher <glenn.jocher@ultralytics.com>, Jing Qiu <jing.qiu@ultralytics.com>
Maintainer-email: Ultralytics <hello@ultralytics.com>
License: AGPL-3.0
Project-URL: Homepage, https://ultralytics.com
Project-URL: Source, https://github.com/ultralytics/ultralytics
Project-URL: Documentation, https://docs.ultralytics.com
Project-URL: Bug Reports, https://github.com/ultralytics/ultralytics/issues
Project-URL: Changelog, https://github.com/ultralytics/ultralytics/releases
Keywords: machine-learning,deep-learning,computer-vision,ML,DL,AI,YOLO,YOLOv3,YOLOv5,YOLOv8,YOLOv9,YOLOv10,YOLO11,HUB,Ultralytics
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Scientific/Engineering :: Image Recognition
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: MacOS
Classifier: Operating System :: Microsoft :: Windows
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.23.0
Requires-Dist: numpy<2.0.0; sys_platform == "darwin"
Requires-Dist: matplotlib>=3.3.0
Requires-Dist: opencv-python>=4.6.0
Requires-Dist: pillow>=7.1.2
Requires-Dist: pyyaml>=5.3.1
Requires-Dist: requests>=2.23.0
Requires-Dist: scipy>=1.4.1
Requires-Dist: torch>=1.8.0
Requires-Dist: torch!=2.4.0,>=1.8.0; sys_platform == "win32"
Requires-Dist: torchvision>=0.9.0
Requires-Dist: tqdm>=4.64.0
Requires-Dist: psutil
Requires-Dist: py-cpuinfo
Requires-Dist: pandas>=1.1.4
Requires-Dist: seaborn>=0.11.0
Requires-Dist: ultralytics-thop>=2.0.0
Provides-Extra: dev
Requires-Dist: ipython; extra == "dev"
Requires-Dist: pytest; extra == "dev"
Requires-Dist: pytest-cov; extra == "dev"
Requires-Dist: coverage[toml]; extra == "dev"
Requires-Dist: mkdocs>=1.6.0; extra == "dev"
Requires-Dist: mkdocs-material>=9.5.9; extra == "dev"
Requires-Dist: mkdocstrings[python]; extra == "dev"
Requires-Dist: mkdocs-redirects; extra == "dev"
Requires-Dist: mkdocs-ultralytics-plugin>=0.1.8; extra == "dev"
Requires-Dist: mkdocs-macros-plugin>=1.0.5; extra == "dev"
Provides-Extra: export
Requires-Dist: onnx>=1.12.0; extra == "export"
Requires-Dist: coremltools>=7.0; (platform_system != "Windows" and python_version <= "3.11") and extra == "export"
Requires-Dist: scikit-learn>=1.3.2; (platform_system != "Windows" and python_version <= "3.11") and extra == "export"
Requires-Dist: openvino>=2024.0.0; extra == "export"
Requires-Dist: tensorflow>=2.0.0; extra == "export"
Requires-Dist: tensorflowjs>=3.9.0; extra == "export"
Requires-Dist: tensorstore>=0.1.63; (platform_machine == "aarch64" and python_version >= "3.9") and extra == "export"
Requires-Dist: keras; extra == "export"
Requires-Dist: flatbuffers<100,>=23.5.26; platform_machine == "aarch64" and extra == "export"
Requires-Dist: numpy==1.23.5; platform_machine == "aarch64" and extra == "export"
Requires-Dist: h5py!=3.11.0; platform_machine == "aarch64" and extra == "export"
Provides-Extra: solutions
Requires-Dist: shapely>=2.0.0; extra == "solutions"
Requires-Dist: streamlit; extra == "solutions"
Provides-Extra: logging
Requires-Dist: comet; extra == "logging"
Requires-Dist: tensorboard>=2.13.0; extra == "logging"
Requires-Dist: dvclive>=2.12.0; extra == "logging"
Provides-Extra: extra
Requires-Dist: hub-sdk>=0.0.12; extra == "extra"
Requires-Dist: ipython; extra == "extra"
Requires-Dist: albumentations>=1.4.6; extra == "extra"
Requires-Dist: pycocotools>=2.0.7; extra == "extra"
Dynamic: license-file

<p align="center">
  <img src="assets/icon.png" width="110" style="margin-bottom: 0.2;"/>
</p>

<h2 align="center">YOLOv13: Real-Time Object Detection with Hypergraph-Enhanced Adaptive Visual Perception</h2>

<p align="center">
  <a href="https://arxiv.org/abs/2506.17733">
    <img src="https://img.shields.io/badge/arXiv-Paper-b31b1b.svg" alt="arXiv">
  </a>
  <a href="https://github.com/iMoonLab">
    <img src="https://img.shields.io/badge/iMoonLab-Homepage-blueviolet.svg" alt="iMoonLab">
  </a>
</p>

<div align="center">
  <img src="assets/framework.png">
</div>

## Updates

- 2025/07/19: The [HuggingFace Spaces Demo](https://huggingface.co/spaces/atalaydenknalbant/Yolov13) is online. Thanks to [Atalay](https://github.com/atalaydenknalbant)!
- 2025/06/27: [Converting YOLOv13](https://github.com/kaylorchen/ai_framework_demo) to Huawei Ascend (OM) and Rockchip (RKNN) formats is supported. Thanks to [kaylorchen](https://github.com/kaylorchen)!
- 2025/06/25: A [FastAPI REST API](https://github.com/iMoonLab/yolov13/tree/main/examples/YOLOv13-FastAPI-REST-API) example is supported. Thanks to [MohibShaikh](https://github.com/MohibShaikh)!
- 2025/06/24: 🔥 **The YOLOv13 paper is now available**: [🔗 YOLOv13: Real-Time Object Detection with Hypergraph-Enhanced Adaptive Visual Perception](https://arxiv.org/abs/2506.17733).
- 2025/06/24: [Android deployment](https://github.com/mpj1234/ncnn-yolov13-android/tree/main) is supported. Thanks to [mpj1234](https://github.com/mpj1234)!
- 2025/06/22: YOLOv13 model weights released.
- 2025/06/21: The YOLOv13 code has been open-sourced.

<h2>Table of Contents</h2>

- [Technical Briefing 💡](#technical-briefing-)
- [Main Results 🏆](#main-results-)
  - [1. MS COCO Benchmark](#1-ms-coco-benchmark)
  - [2. Visualizations](#2-visualizations)
- [Quick Start 🚀](#quick-start-)
  - [1. Install Dependencies](#1-install-dependencies)
  - [2. Validation](#2-validation)
  - [3. Training](#3-training)
  - [4. Prediction](#4-prediction)
  - [5. Export](#5-export)
- [Related Projects 🔗](#related-projects-)
- [Cite YOLOv13 📝](#cite-yolov13-)

## Technical Briefing 💡

**Introducing YOLOv13**, the next-generation real-time detector with cutting-edge performance and efficiency. The YOLOv13 family includes four variants: Nano, Small, Large, and X-Large, powered by:

* **HyperACE: Hypergraph-based Adaptive Correlation Enhancement** (a minimal sketch of the core idea appears at the end of this briefing)
  * Treats pixels in multi-scale feature maps as hypergraph vertices.
  * Adopts a learnable hyperedge construction module to adaptively explore high-order correlations between vertices.
  * Leverages a linear-complexity message-passing module to aggregate multi-scale features under the guidance of high-order correlations, achieving effective visual perception of complex scenarios.
* **FullPAD: Full-Pipeline Aggregation-and-Distribution Paradigm**
  * Uses HyperACE to aggregate multi-scale backbone features and extract high-order correlations in the hypergraph space.
  * Leverages three separate tunnels to forward the correlation-enhanced features to the connection between the backbone and the neck, the internal layers of the neck, and the connection between the neck and the head, respectively. In this way, YOLOv13 achieves fine-grained information flow and representational synergy across the entire pipeline.
  * Significantly improves gradient propagation and enhances detection performance.
* **Model Lightweighting via DS-based Blocks** (see the sketch just below)
  * Replaces large-kernel convolutions with blocks built on depthwise separable convolutions (DSConv, DS-Bottleneck, DS-C3k, DS-C3k2), preserving the receptive field while greatly reducing parameters and computation.
  * Achieves faster inference without sacrificing accuracy.
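
The repository defines its own DS modules; as a generic illustration (not the repository's exact code), a depthwise separable convolution factors a dense k×k convolution into a per-channel spatial convolution followed by a 1×1 pointwise channel mix:

```python
import torch
import torch.nn as nn

class DSConv(nn.Module):
    """Generic depthwise separable conv block (illustrative; the repository's
    DSConv/DS-Bottleneck variants may differ in detail)."""

    def __init__(self, c_in: int, c_out: int, k: int = 3, s: int = 1):
        super().__init__()
        # Depthwise: one k x k filter per input channel (groups=c_in).
        self.dw = nn.Conv2d(c_in, c_in, k, s, k // 2, groups=c_in, bias=False)
        # Pointwise: 1 x 1 conv mixes information across channels.
        self.pw = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pw(self.dw(x))))

x = torch.randn(1, 64, 80, 80)
print(DSConv(64, 128)(x).shape)  # torch.Size([1, 128, 80, 80])
```

For a 3×3 kernel this replaces roughly `c_in * c_out * 9` weights with `c_in * 9 + c_in * c_out`, which is where most of the parameter and FLOP savings come from.
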
> YOLOv13 seamlessly combines hypergraph computation with end-to-end information collaboration to deliver a more accurate, robust, and efficient real-time detection solution.
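
As a rough intuition for the hypergraph step, here is a hedged sketch in the spirit of soft hypergraph neural networks (not the authors' HyperACE implementation; the module name and participation design are illustrative): pixels act as vertices, each vertex learns a soft membership in a small set of hyperedges, and messages flow vertex-to-hyperedge-to-vertex in time linear in the number of vertices.

```python
import torch
import torch.nn as nn

class SoftHypergraphMessagePassing(nn.Module):
    """Illustrative soft-hyperedge message passing over a feature map.

    Pixels are vertices; a learned projection scores each vertex's soft
    participation in M hyperedges. Aggregating vertex -> hyperedge ->
    vertex costs O(N * M * C), i.e. linear in the number of vertices N.
    """

    def __init__(self, channels: int, num_hyperedges: int = 8):
        super().__init__()
        self.participation = nn.Linear(channels, num_hyperedges)
        self.update = nn.Linear(channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        v = x.flatten(2).transpose(1, 2)          # (B, N, C) vertex features
        a = self.participation(v).softmax(dim=1)  # (B, N, M) soft memberships
        e = a.transpose(1, 2) @ v                 # (B, M, C) hyperedge features
        m = a @ e                                 # (B, N, C) messages back to vertices
        out = v + self.update(m)                  # residual vertex update
        return out.transpose(1, 2).reshape(b, c, h, w)

feats = torch.randn(2, 64, 40, 40)
print(SoftHypergraphMessagePassing(64)(feats).shape)  # torch.Size([2, 64, 40, 40])
```
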
## Main Results 🏆

### 1. MS COCO Benchmark

**Table 1. Quantitative comparison with other state-of-the-art real-time object detectors on the MS COCO dataset**

| **Method** | **FLOPs (G)** | **Parameters (M)** | **AP<sub>50:95</sub><sup>val</sup>** | **AP<sub>50</sub><sup>val</sup>** | **AP<sub>75</sub><sup>val</sup>** | **Latency (ms)** |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| YOLOv6-3.0-N | 11.4 | 4.7 | 37.0 | 52.7 | – | 2.74 |
| Gold-YOLO-N | 12.1 | 5.6 | 39.6 | 55.7 | – | 2.97 |
| YOLOv8-N | 8.7 | 3.2 | 37.4 | 52.6 | 40.5 | 1.77 |
| YOLOv10-N | 6.7 | 2.3 | 38.5 | 53.8 | 41.7 | 1.84 |
| YOLO11-N | 6.5 | 2.6 | 38.6 | 54.2 | 41.6 | 1.53 |
| YOLOv12-N | 6.5 | 2.6 | 40.1 | 56.0 | 43.4 | 1.83 |
| **YOLOv13-N** | **6.4** | **2.5** | **41.6** | **57.8** | **45.1** | **1.97** |
| | | | | | | |
| YOLOv6-3.0-S | 45.3 | 18.5 | 44.3 | 61.2 | – | 3.42 |
| Gold-YOLO-S | 46.0 | 21.5 | 45.4 | 62.5 | – | 3.82 |
| YOLOv8-S | 28.6 | 11.2 | 45.0 | 61.8 | 48.7 | 2.33 |
| RT-DETR-R18 | 60.0 | 20.0 | 46.5 | 63.8 | – | 4.58 |
| RT-DETRv2-R18 | 60.0 | 20.0 | 47.9 | 64.9 | – | 4.58 |
| YOLOv9-S | 26.4 | 7.1 | 46.8 | 63.4 | 50.7 | 3.44 |
| YOLOv10-S | 21.6 | 7.2 | 46.3 | 63.0 | 50.4 | 2.53 |
| YOLO11-S | 21.5 | 9.4 | 45.8 | 62.6 | 49.8 | 2.56 |
| YOLOv12-S | 21.4 | 9.3 | 47.1 | 64.2 | 51.0 | 2.82 |
| **YOLOv13-S** | **20.8** | **9.0** | **48.0** | **65.2** | **52.0** | **2.98** |
| | | | | | | |
| YOLOv6-3.0-L | 150.7 | 59.6 | 51.8 | 69.2 | – | 9.01 |
| Gold-YOLO-L | 151.7 | 75.1 | 51.8 | 68.9 | – | 10.69 |
| YOLOv8-L | 165.2 | 43.7 | 53.0 | 69.8 | 57.7 | 8.13 |
| RT-DETR-R50 | 136.0 | 42.0 | 53.1 | 71.3 | – | 6.93 |
| RT-DETRv2-R50 | 136.0 | 42.0 | 53.4 | 71.6 | – | 6.93 |
| YOLOv9-C | 102.1 | 25.3 | 53.0 | 70.2 | 57.8 | 6.64 |
| YOLOv10-L | 120.3 | 24.4 | 53.2 | 70.1 | 57.2 | 7.31 |
| YOLO11-L | 86.9 | 25.3 | 52.3 | 69.2 | 55.7 | 6.23 |
| YOLOv12-L | 88.9 | 26.4 | 53.0 | 70.0 | 57.9 | 7.10 |
| **YOLOv13-L** | **88.4** | **27.6** | **53.4** | **70.9** | **58.1** | **8.63** |
| | | | | | | |
| YOLOv8-X | 257.8 | 68.2 | 54.0 | 71.0 | 58.8 | 12.83 |
| RT-DETR-R101 | 259.0 | 76.0 | 54.3 | 72.7 | – | 13.51 |
| RT-DETRv2-R101 | 259.0 | 76.0 | 54.3 | 72.8 | – | 13.51 |
| YOLOv10-X | 160.4 | 29.5 | 54.4 | 71.3 | 59.3 | 10.70 |
| YOLO11-X | 194.9 | 56.9 | 54.2 | 71.0 | 59.1 | 11.35 |
| YOLOv12-X | 199.0 | 59.1 | 54.4 | 71.1 | 59.3 | 12.46 |
| **YOLOv13-X** | **199.2** | **64.0** | **54.8** | **72.0** | **59.8** | **14.67** |

### 2. Visualizations

<div>
  <img src="assets/vis.png" width="100%" height="100%">
</div>

**Visualization examples of YOLOv10-N/S, YOLO11-N/S, YOLOv12-N/S, and YOLOv13-N/S.**

<div>
  <img src="assets/hyperedge.png" width="60%" height="60%">
</div>

**Representative visualization examples of adaptive hyperedges. The hyperedges in the first and second columns mainly focus on high-order interactions among foreground objects; the third column mainly focuses on high-order interactions between the background and part of the foreground. These visualizations intuitively reflect the high-order visual associations modeled by YOLOv13.**

## Quick Start 🚀

### 1. Install Dependencies

YOLOv13 supports Flash Attention acceleration. The wheel below targets CUDA 11, PyTorch 2.2, and Python 3.11 on Linux x86_64; choose the build that matches your environment.

```bash
wget https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.3/flash_attn-2.7.3+cu11torch2.2cxx11abiFALSE-cp311-cp311-linux_x86_64.whl
conda create -n yolov13 python=3.11
conda activate yolov13
pip install -r requirements.txt
pip install -e .
```

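After installation, a quick sanity check (not part of the original instructions) confirms that the package imports and that PyTorch sees the GPU:

```python
import torch
import ultralytics

print(torch.__version__, torch.cuda.is_available())  # expect True on a CUDA machine
ultralytics.checks()  # prints an Ultralytics environment summary
```
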
### 2. Validation

Pretrained weights:
[`YOLOv13-N`](https://github.com/iMoonLab/yolov13/releases/download/yolov13/yolov13n.pt) |
[`YOLOv13-S`](https://github.com/iMoonLab/yolov13/releases/download/yolov13/yolov13s.pt) |
[`YOLOv13-L`](https://github.com/iMoonLab/yolov13/releases/download/yolov13/yolov13l.pt) |
[`YOLOv13-X`](https://github.com/iMoonLab/yolov13/releases/download/yolov13/yolov13x.pt)

Use the following code to validate the YOLOv13 models on the COCO dataset. Make sure to replace `{n/s/l/x}` with the desired model scale (nano, small, large, or x-large).

```python
from ultralytics import YOLO

model = YOLO('yolov13{n/s/l/x}.pt')  # replace with the desired model scale
model.val(data='coco.yaml')  # run validation on COCO
```

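The object returned by `val()` exposes COCO-style box metrics; for example, to read the headline numbers via the standard Ultralytics attributes:

```python
from ultralytics import YOLO

model = YOLO('yolov13n.pt')
metrics = model.val(data='coco.yaml')
print(metrics.box.map)    # mAP 50:95
print(metrics.box.map50)  # mAP 50
print(metrics.box.map75)  # mAP 75
```
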
### 3. Training

Use the following code to train the YOLOv13 models. Make sure to replace `yolov13n.yaml` with the desired model configuration file path, and `coco.yaml` with your COCO dataset configuration file.

```python
from ultralytics import YOLO

model = YOLO('yolov13n.yaml')

# Train the model
results = model.train(
    data='coco.yaml',
    epochs=600,
    batch=256,
    imgsz=640,
    scale=0.5,       # S: 0.9; L: 0.9; X: 0.9
    mosaic=1.0,
    mixup=0.0,       # S: 0.05; L: 0.15; X: 0.2
    copy_paste=0.1,  # S: 0.15; L: 0.5; X: 0.6
    device="0,1,2,3",
)

# Evaluate model performance on the validation set
metrics = model.val(data='coco.yaml')

# Perform object detection on an image
results = model("path/to/your/image.jpg")
results[0].show()
```

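Training writes checkpoints under the run directory; assuming the default `runs/detect/train` location, the best weights can be reloaded directly:

```python
from ultralytics import YOLO

# Reload the best checkpoint produced by the run above (default save path).
best = YOLO('runs/detect/train/weights/best.pt')
metrics = best.val(data='coco.yaml')
```
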
### 4. Prediction

Use the following code to perform object detection with the YOLOv13 models. Make sure to replace `{n/s/l/x}` with the desired model scale.

```python
from ultralytics import YOLO

model = YOLO('yolov13{n/s/l/x}.pt')  # replace with the desired model scale
model.predict('path/to/your/image.jpg')  # point this at your own image
```

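Each element of the returned list is a `Results` object whose detections can be read as tensors via the standard Ultralytics accessors:

```python
from ultralytics import YOLO

model = YOLO('yolov13n.pt')
results = model.predict('path/to/your/image.jpg')
for r in results:
    print(r.boxes.xyxy)  # (N, 4) box corners in pixels
    print(r.boxes.conf)  # (N,) confidence scores
    print(r.boxes.cls)   # (N,) class indices
    r.save(filename='prediction.jpg')  # save the annotated image
```
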
### 5. Export

Use the following code to export the YOLOv13 models to ONNX or TensorRT format. Make sure to replace `{n/s/l/x}` with the desired model scale.

```python
from ultralytics import YOLO

model = YOLO('yolov13{n/s/l/x}.pt')  # replace with the desired model scale
model.export(format="engine", half=True)  # TensorRT; or format="onnx"
```

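The exported file can be loaded back through the same `YOLO` interface for inference (shown for ONNX; a TensorRT `.engine` file works the same way):

```python
from ultralytics import YOLO

onnx_model = YOLO('yolov13n.onnx')  # load the exported model
results = onnx_model('path/to/your/image.jpg')
results[0].show()
```
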
## Related Projects 🔗

- The code is based on [Ultralytics](https://github.com/ultralytics/ultralytics). Thanks for their excellent work!
- Other wonderful works on hypergraph computation:
  - "Hypergraph Neural Networks": [[paper](https://arxiv.org/abs/1809.09401)] [[code](https://github.com/iMoonLab/HGNN)]
  - "HGNN+: General Hypergraph Neural Networks": [[paper](https://ieeexplore.ieee.org/abstract/document/9795251)] [[code](https://github.com/iMoonLab/DeepHypergraph)]
  - "SoftHGNN: Soft Hypergraph Neural Networks for General Visual Recognition": [[paper](https://arxiv.org/abs/2505.15325)] [[code](https://github.com/Mengqi-Lei/SoftHGNN)]

## Cite YOLOv13 📝

```bibtex
@article{yolov13,
  title={YOLOv13: Real-Time Object Detection with Hypergraph-Enhanced Adaptive Visual Perception},
  author={Lei, Mengqi and Li, Siqi and Wu, Yihong and et al.},
  journal={arXiv preprint arXiv:2506.17733},
  year={2025}
}
```