NVIDIA architectures

  • Tesla
    • G80: November 8, 2006
    • GT200: June 16, 2008
  • Fermi
    • GF100: March 26, 2010
    • GF110: November 9, 2010
  • Kepler
    • GK104: March 22, 2012
    • GK110: May 14, 2013
  • Maxwell
    • GM107: February 18, 2014
    • GM204: September 18, 2014
    • GM200: March 17, 2015

NVLink was introduced with the Pascal architecture; the first GPU to support it was the Tesla P100 (GP100, 2016). On the consumer side, the first GeForce cards with NVLink were the Turing-based GeForce RTX 2080 and RTX 2080 Ti (2018). Tesla P100 GPUs with NVLink use the SXM2 mezzanine (socketed) form factor, NOT the PCIe form factor.
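As a minimal sketch (not from the original page): on a multi-GPU node you can check at runtime whether two devices have a direct peer-to-peer path using the CUDA runtime API. Note that cudaDeviceCanAccessPeer reports peer-to-peer capability in general and does not by itself distinguish NVLink from PCIe peer-to-peer; `nvidia-smi nvlink --status` reports the actual NVLink links.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Sketch: query whether GPU 0 and GPU 1 can access each other's memory directly.
// On SXM/NVLink systems this peer path runs over NVLink; on PCIe-only systems it
// falls back to PCIe peer-to-peer where supported.
int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 2) {
        printf("Need at least two GPUs to test peer access.\n");
        return 0;
    }

    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);
    printf("GPU0 -> GPU1 peer access: %s\n", canAccess01 ? "yes" : "no");
    printf("GPU1 -> GPU0 peer access: %s\n", canAccess10 ? "yes" : "no");

    // Enable the peer mapping from GPU 0 to GPU 1 if it is available.
    if (canAccess01) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);  // second argument (flags) must be 0
    }
    return 0;
}
```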

  • Pascal
    • GP104: May 27, 2016
    • GP100: April 5, 2016

Tensor Cores were first introduced with the Volta architecture. The first GPUs to feature Tensor Cores were the NVIDIA Tesla V100 (2017) and the NVIDIA Titan V, both based on the GV100 die; the Titan V was released in December 2017.
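A minimal sketch of how Tensor Cores are exposed to CUDA C++ through the WMMA API (assuming a Volta-or-newer device and compilation with, e.g., `nvcc -arch=sm_70`): one warp multiplies a single 16x16x16 FP16 tile and accumulates in FP32.

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes C = A * B for a single 16x16x16 tile on the Tensor Cores.
// a: 16x16 half (row-major), b: 16x16 half (col-major), c: 16x16 float (row-major).
__global__ void wmma_tile(const half *a, const half *b, float *c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);               // start with a zero accumulator
    wmma::load_matrix_sync(a_frag, a, 16);           // leading dimension = 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // Tensor Core multiply-accumulate
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}

// Launch with a single warp, e.g. wmma_tile<<<1, 32>>>(d_a, d_b, d_c);
```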

  • Volta
    • GV100: December 7, 2017
  • Turing
    • TU102: September 20, 2018
    • TU104: October 17, 2018
  • Ampere
    • GA100: May 14, 2020
    • GA102: September 17, 2020
  • Hopper
    • GH100: March 22, 2022
  • Ada Lovelace
    • AD102: September 20, 2022
  • Blackwell
    • GB100: announced March 18, 2024

---

Nvidia GPU models with 16GB of RAM or more, released from the Pascal architecture up to the Hopper architecture (a runtime memory-capacity query sketch follows the list):

  • Pascal Architecture (2016)
    • Tesla P100: Up to 16GB HBM2
    • Tesla P40: 24GB GDDR5
    • Tesla P4: 16GB GDDR5
  • Volta Architecture (2017)
    • Tesla V100: 16GB and 32GB HBM2 variants
    • Quadro GV100: 32GB HBM2
  • Turing Architecture (2018)
    • Quadro RTX 8000: 48GB GDDR6
    • Quadro RTX 6000: 24GB GDDR6
    • Quadro RTX 5000: 16GB GDDR6
    • Tesla T4: 16GB GDDR6
  • Ampere Architecture (2020)
    • A100: 40GB and 80GB HBM2e variants
    • A40: 48GB GDDR6
    • RTX A6000: 48GB GDDR6
    • RTX A5000: 24GB GDDR6
    • RTX A4000: 16GB GDDR6
    • A30: 24GB HBM2
    • A10: 24GB GDDR6
  • Hopper Architecture (2022)
    • H100: 80GB HBM3 (SXM5) and 80GB HBM2e (PCIe) variants

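As a hedged sketch (not from the original page), the installed memory of each visible GPU can be read back at runtime with the CUDA runtime API, which is a quick way to confirm the capacities listed above on a given machine:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Print the total device memory reported for every visible GPU.
// totalGlobalMem is in bytes; values line up with the capacities listed above,
// minus anything reserved (e.g. ECC overhead on some GDDR cards).
int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    for (int i = 0; i < deviceCount; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, %.1f GiB\n",
               i, prop.name, prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```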