株式会社オブライト
AI Model | 2026-05-17

DeepSeek

Also known as: DeepSeek / DeepSeek V4 / DeepSeek R1

Open-weights LLMs from China's DeepSeek. DeepSeek V4 pairs a 1.6T-parameter mixture-of-experts (MoE) architecture with a 1M-token context window, and its ultra-low training cost drew global attention.


Overview

DeepSeek V4 Preview is released as open weights with a 1M-token context window. Its reported training cost, a fraction of GPT-4's, sparked global debate about the economics of AI development. See: DeepSeek V4 release.
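The MoE design mentioned above keeps inference cheap by activating only a few experts per token. The snippet below is a minimal, illustrative sketch of top-k expert routing in NumPy; the function names, dimensions, and k=2 choice are assumptions for illustration, not DeepSeek's actual architecture.

```python
import numpy as np

def top_k_gate(scores: np.ndarray, k: int = 2):
    """Pick the k highest-scoring experts and softmax-normalise their scores."""
    top = np.argsort(scores)[-k:][::-1]           # indices of the k best experts
    weights = np.exp(scores[top] - scores[top].max())
    return top, weights / weights.sum()

def moe_forward(x, experts, router_w, k: int = 2):
    """Route input x to k experts; the output is their weighted combination."""
    scores = router_w @ x                         # one routing score per expert
    idx, w = top_k_gate(scores, k)
    return sum(wi * experts[i](x) for i, wi in zip(idx, w))

# Toy setup: 4 linear "experts" over 8-dimensional inputs (hypothetical sizes).
rng = np.random.default_rng(0)
d, n_experts = 8, 4
experts = [(lambda W: (lambda v: W @ v))(rng.normal(size=(d, d)))
           for _ in range(n_experts)]
router_w = rng.normal(size=(n_experts, d))
y = moe_forward(rng.normal(size=d), experts, router_w, k=2)
print(y.shape)  # each token only pays for 2 of the 4 experts
```

The same principle is what lets a model with a very large total parameter count run with the per-token cost of a much smaller dense model.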

R1 reasoning model

DeepSeek R1 specialises in maths and coding via chain-of-thought reinforcement learning, and it is seeing growing adoption as a cost-efficient local alternative to Claude.
