株式会社オブライト
Infrastructure | 2026-05-17

NVIDIA H100

Also known as: NVIDIA H100 / H100 GPU / エヌビディアH100

NVIDIA's Hopper-generation datacenter GPU. One of the most widely used GPUs for LLM training and inference, featuring Tensor Cores and high-speed NVLink for multi-GPU communication in cloud and on-premises AI infrastructure.


Overview

H100 is a primary GPU for training and serving frontier-scale LLMs. In the major clouds it is offered through instance families such as AWS P5, GCP A3, and Azure ND H100 v5, and NVIDIA NIM inference microservices ship containers optimized for its architecture.
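When sizing an H100 deployment for LLM serving, a first-pass question is how many 80 GB GPUs are needed just to hold the model weights. The sketch below is a back-of-envelope estimate, not a deployment tool; the 80 GB figure matches the shipping H100 SXM/PCIe parts, while the 1.2x overhead factor for activations, KV cache, and framework buffers is an illustrative assumption.

```python
import math

H100_MEM_GB = 80  # HBM capacity of the standard 80 GB H100

def gpus_needed(params_billions: float, bytes_per_param: int = 2,
                overhead: float = 1.2) -> int:
    """Minimum H100 count to hold the weights in memory.

    bytes_per_param=2 corresponds to fp16/bf16 weights.
    `overhead` pads for activations, KV cache, and runtime buffers;
    real deployments should be profiled, this is only a rough estimate.
    """
    weights_gb = params_billions * bytes_per_param  # 1e9 params x bytes / 1e9
    total_gb = weights_gb * overhead
    return max(1, math.ceil(total_gb / H100_MEM_GB))

print(gpus_needed(70))  # a 70B-parameter model in bf16 -> 3
print(gpus_needed(7))   # a 7B-parameter model fits on one GPU -> 1
```

In practice a multi-GPU estimate like the 70B case above is where NVLink matters: tensor-parallel shards exchange activations every layer, so inter-GPU bandwidth, not just capacity, sets the serving throughput.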

AI Agent Infrastructure

NVIDIA's enterprise AI agent platform, NeMo, assumes H100-class clusters for high-throughput inference. See the NVIDIA NeMo agent platform guide.
