900-2G503-0000-000 by NVIDIA Corporation


Manufacturer NVIDIA Corporation
Manufacturer's Part Number 900-2G503-0000-000
Description NVIDIA Tesla V100 Graphic Card; Product Type: Graphic Card; Form Factor: SXM2; Cooler Type: Passive Cooler; Platform Supported: PC. The full marketing description appears under Marketing Information below.
SPECIFICATIONS
Chipset Manufacturer: NVIDIA
Standard Memory: 16 GB
Cooler Type: Passive Cooler
Product Name: Tesla V100 Graphic Card
Chipset Model: V100
Platform Supported: PC
Brand Name: NVIDIA
Memory Technology: HBM2
Form Factor: SXM2
Chipset Line: Tesla
Marketing Information: The Most Advanced Data Center GPU Ever Built.
NVIDIA® Tesla® V100 is the world’s most advanced data center GPU, built to accelerate AI, HPC, and graphics. Powered by NVIDIA Volta™, the latest GPU architecture, Tesla V100 offers the performance of up to 100 CPUs in a single GPU, enabling data scientists, researchers, and engineers to tackle challenges that were once thought impossible.

GROUNDBREAKING INNOVATIONS

VOLTA ARCHITECTURE
By pairing CUDA Cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU servers for traditional HPC and Deep Learning.

TENSOR CORE
Equipped with 640 Tensor Cores, Tesla V100 delivers 120 TeraFLOPS of deep learning performance. That’s 12X Tensor FLOPS for DL Training, and 6X Tensor FLOPS for DL Inference when compared to NVIDIA Pascal™ GPUs.
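The 120 TeraFLOPS figure can be roughly reproduced from the core count. A minimal back-of-envelope sketch, assuming each Tensor Core performs 64 fused multiply-adds (128 floating-point operations) per clock at a boost clock of about 1.53 GHz; neither the per-core rate nor the clock speed is stated on this page:

```python
# Back-of-envelope check of the quoted deep-learning TFLOPS figure.
# Assumptions (not from this spec sheet): 64 FMA ops (= 128 FLOPs)
# per Tensor Core per clock, ~1.53 GHz boost clock.
TENSOR_CORES = 640
FLOPS_PER_CORE_PER_CLOCK = 128   # 64 FMA x 2 FLOPs each
BOOST_CLOCK_HZ = 1.53e9          # assumed boost clock

tflops = TENSOR_CORES * FLOPS_PER_CORE_PER_CLOCK * BOOST_CLOCK_HZ / 1e12
print(f"{tflops:.0f} TFLOPS")    # ~125 TFLOPS, in line with the quoted 120
```

The result lands slightly above 120 because marketing figures are typically quoted at a conservative clock.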

NEXT GENERATION NVLINK
NVIDIA NVLink in Tesla V100 delivers 2X higher throughput compared to the previous generation. Up to eight Tesla V100 accelerators can be interconnected at up to 300 GB/s to unleash the highest application performance possible on a single server.
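The 300 GB/s aggregate figure follows from the per-link bandwidth. A quick check, assuming 6 NVLink links per V100 at 25 GB/s in each direction; the link count and per-link rate are assumptions not given on this page:

```python
# Aggregate NVLink bandwidth per Tesla V100 (SXM2).
# Assumptions (not from this spec sheet): 6 links, 25 GB/s per direction.
LINKS = 6
GB_S_PER_DIRECTION = 25
DIRECTIONS = 2  # bandwidth is quoted bidirectionally

total_gb_s = LINKS * GB_S_PER_DIRECTION * DIRECTIONS
print(total_gb_s)  # 300 GB/s, matching the quoted interconnect bandwidth
```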
Product Type: Graphic Card
API Supported: OpenACC, OpenCL, DirectCompute