Based on each GPU's performance in large-model (LLM) inference workloads, the ranking is as follows (a small lookup sketch in Python appears after the list):
1. **H800**
2. **A100**
3. **A800**
4. **L40/L40S**
5. **L20**
6. **A40**
7. **V100**
8. **A10**
9. **T4**
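
The ranking above can also be expressed as a simple lookup table, for example to check where a locally installed GPU falls. The following is a minimal illustrative sketch, not part of the original comparison; it assumes PyTorch is installed, a CUDA device is visible, and that substring matching against `torch.cuda.get_device_name()` is sufficient to identify these models.

```python
# Minimal sketch (assumption: PyTorch is installed and a CUDA device is visible).
# Maps the detected GPU model to its position in the ranking above; lower is better.
import torch

INFERENCE_RANK = {
    "H800": 1, "A100": 2, "A800": 3, "L40S": 4, "L40": 4,
    "L20": 5, "A40": 6, "V100": 7, "A10": 8, "T4": 9,
}

def rank_of_local_gpu() -> int | None:
    """Return the ranking position of the first visible CUDA device, or None if unknown."""
    if not torch.cuda.is_available():
        return None
    name = torch.cuda.get_device_name(0)  # e.g. "NVIDIA A100-SXM4-80GB"
    # Match longer model names first so e.g. "A100" is not mistaken for "A10".
    for model in sorted(INFERENCE_RANK, key=len, reverse=True):
        if model in name:
            return INFERENCE_RANK[model]
    return None

if __name__ == "__main__":
    print(rank_of_local_gpu())
```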
## References
- [Performance Comparison: NVIDIA A100, H100, and H800](https://medium.com/@KarlHavard/performance-comparison-nvidia-a100-h100-h800-04db98c58648) concludes that H100 > H800 > A100.
- [Differences between the A100 and A800](http://www.531pool.com/index.php?m=home&c=View&a=index&aid=99) concludes that A100 > A800.
- [A100 vs V100 Deep Learning Benchmarks](https://lambdalabs.com/blog/nvidia-a100-vs-v100-benchmarks) concludes that A100 > V100.
- [Comparing NVIDIA A100 and NVIDIA L40S](https://www.hpcwire.com/2023/10/30/comparing-nvidia-a100-and-nvidia-l40s-which-gpu-is-ideal-for-ai-and-graphics-intensive-workloads/) concludes that L40S ≈ A100.
- [NVIDIA A40 Deep Learning Benchmarks](https://lambdalabs.com/blog/nvidia-rtx-a40-benchmarks) concludes that the A40 40G > V100 16G.
- [NVIDIA Tesla T4 vs. V100](https://celikmustafa89.medium.com/nvidia-tesla-t4-v-s-v100-60b94ec8f564) concludes that V100 > T4.
- [Comparing NVIDIA GPUs for AI: T4 vs A10](https://www.baseten.co/blog/comparing-nvidia-gpus-for-ai-t4-vs-a10/) concludes that A10 > T4.
- [NVIDIA-A10-PCIe-vs-NVIDIA-Tesla-V100-PCIe-32-GB](https://www.topcpu.net/gpu-c/NVIDIA-A10-PCIe-vs-NVIDIA-Tesla-V100-PCIe-32-GB) offers no direct A10 vs. V100 conclusion; based on their comparisons against other GPUs, the two are treated as roughly equivalent here.
- [l20-vs-l40](https://www.topcpu.net/en/soc-c/l20-vs-l40) and [L20 introduction](https://mp.weixin.qq.com/s/h62PnywRJT0q-8_WRbVJdg) conclude that the L40S is slightly stronger than the L20.

These resources should help you choose the most suitable GPU for your specific inference workload.