900-21001-0000-000 by NVIDIA Corporation


Manufacturer NVIDIA Corporation
Manufacturer's Part Number 900-21001-0000-000
Description NVIDIA A100 Tensor Core GPU; Product Type: Graphics Card; Host Interface: PCI Express 4.0; Form Factor: Plug-in Card; Slot Space Required: Dual; Platform Supported: PC
Chipset Manufacturer: NVIDIA
Standard Memory: 40 GB
Cooler Type: Passive Cooler
Multi-GPU Technology: NVLink
Product Name: NVIDIA A100 Tensor Core GPU
Chipset Model: A100
Platform Supported: PC
Card Length: Full-length
Brand Name: NVIDIA
Memory Technology: HBM2
Form Factor: Plug-in Card
Marketing Information:

Accelerating the Most Important Work of Our Time

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and high-performance computing (HPC) to tackle the world’s toughest computing challenges. As the engine of the NVIDIA data center platform, A100 can efficiently scale to thousands of GPUs or, with NVIDIA Multi-Instance GPU (MIG) technology, be partitioned into seven GPU instances to accelerate workloads of all sizes. And third-generation Tensor Cores accelerate every precision for diverse workloads, speeding time to insight and time to market.
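
As a minimal sketch (not from this datasheet) of how a MIG-partitioned A100 looks to software, the following assumes the nvidia-ml-py (pynvml) bindings and that an administrator has already enabled MIG mode and created GPU instances; instance counts and profiles will vary by configuration.

```python
# Hedged sketch: inspect MIG partitioning on an A100 via the NVML Python
# bindings (pynvml / nvidia-ml-py). Assumes MIG mode is already enabled and
# instances have been created; this only queries, it does not configure.
import pynvml

pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)            # first physical GPU
    current, pending = pynvml.nvmlDeviceGetMigMode(gpu)   # current/pending MIG mode
    enabled = current == pynvml.NVML_DEVICE_MIG_ENABLE
    print("MIG mode:", "enabled" if enabled else "disabled")

    # Walk the MIG slots; an A100 exposes at most seven GPU instances.
    for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
        except pynvml.NVMLError:
            continue                                       # slot not populated
        print(f"MIG instance {i}: UUID {pynvml.nvmlDeviceGetUUID(mig)}")
finally:
    pynvml.nvmlShutdown()
```

Each populated MIG instance appears to applications as its own device, which is what allows a single A100 to serve several smaller workloads in parallel.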

The Most Powerful End-to-End AI and HPC Data Center Platform

A100 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from NGC. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to deliver real-world results and deploy solutions into production at scale.

Deep Learning Training

AI models are exploding in complexity as they take on next-level challenges such as accurate conversational AI and deep recommender systems. Training them requires massive compute power and scalability.

NVIDIA A100’s third-generation Tensor Cores with Tensor Float 32 (TF32) precision provide up to 20X higher performance over the prior generation with zero code changes and an additional 2X boost with automatic mixed precision and FP16. When combined with third-generation NVIDIA® NVLink®, NVIDIA NVSwitch, PCIe Gen4, NVIDIA Mellanox InfiniBand, and the NVIDIA Magnum IO software SDK, it’s possible to scale to thousands of A100 GPUs. This means that large AI models like BERT can be trained in just 37 minutes on a cluster of 1,024 A100s, offering unprecedented performance and scalability.
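
To make the two acceleration paths above concrete, here is a minimal, hedged PyTorch sketch (not taken from NVIDIA's materials): TF32 is applied by the hardware and libraries to ordinary FP32 math, while automatic mixed precision casts eligible operations to FP16 with loss scaling. The model, tensor shapes, and training loop are placeholders, not a benchmark.

```python
# Hedged sketch: TF32 ("zero code changes") and automatic mixed precision
# (FP16) training paths, shown with PyTorch on a CUDA device such as an A100.
import torch

# Opt in to TF32 for float32 matmuls and cuDNN convolutions
# (defaults vary by PyTorch version).
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

model = torch.nn.Linear(1024, 1024).cuda()            # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()                  # loss scaling for FP16

for _ in range(10):                                   # placeholder training loop
    x = torch.randn(64, 1024, device="cuda")
    target = torch.randn(64, 1024, device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                   # run eligible ops in FP16
        loss = torch.nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()                     # scaled backward pass
    scaler.step(optimizer)
    scaler.update()
```

The first path needs no model changes at all; the second wraps the forward pass in autocast and adds a gradient scaler, which is where the additional mixed-precision speedup comes from.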

NVIDIA’s training leadership was demonstrated in MLPerf 0.6, the first industry-wide benchmark for AI training.

Product Type: Graphics Card
Host Interface: PCI Express 4.0
Card Height: Full-height
Slot Space Required: Dual