Foundation Model
Also known as: Foundation Model / 基盤モデル (Japanese)
A large model pre-trained on broad data that can be adapted to many downstream tasks via fine-tuning or prompting. The category includes LLMs, vision models, audio models, and multimodal systems.
Overview
The term 'foundation model' was coined in 2021 by researchers at Stanford HAI's Center for Research on Foundation Models in the report 'On the Opportunities and Risks of Foundation Models', to describe large models pre-trained on broad data that can be adapted to many tasks with minimal additional training. The category covers text (LLMs), image (e.g., diffusion models such as Stable Diffusion), audio (e.g., OpenAI's Whisper), and multimodal systems.
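To make the "minimal additional training" point concrete, the sketch below reuses a pre-trained model for a new classification task purely at inference time, with no fine-tuning at all. It assumes the Hugging Face `transformers` library and the publicly available `facebook/bart-large-mnli` checkpoint, neither of which is prescribed by this article; any compatible checkpoint would serve the same illustration.

```python
# Minimal sketch: adapting a pre-trained model to a downstream task
# at inference time (zero-shot), with no additional training.
# Assumes the Hugging Face `transformers` library is installed; the model
# name is just one publicly available example checkpoint.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The quarterly report shows a 12% increase in cloud revenue.",
    candidate_labels=["finance", "sports", "weather"],  # the task is defined here, not at training time
)
print(result["labels"][0])  # top-ranked label, e.g. "finance"
```

The same pre-trained weights can be pointed at a different task simply by changing the candidate labels or the prompt, which is what "adapted with minimal additional training" means in practice.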
Business implication
Foundation models remove the need to train models from scratch for each application. Businesses build task-specific applications on top of them via retrieval-augmented generation (RAG) or fine-tuning, making high-capability AI accessible to SMBs without massive compute budgets.
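As an illustration of the RAG pattern mentioned above, the sketch below retrieves the documents most relevant to a question and packs them into a prompt for a foundation model. The keyword-overlap retriever, the sample documents, and the final model call are all simplifying assumptions for illustration; a real deployment would typically use embedding-based search and whichever model API the business has chosen.

```python
# Minimal RAG sketch on top of a foundation model.
# The retriever is a toy keyword-overlap scorer; production systems
# usually use embedding search. Sending the prompt to the model is
# left as a placeholder, since the API depends on the chosen provider.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Compose retrieved context and the user question into one prompt."""
    context_block = "\n".join(f"- {doc}" for doc in context)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context_block}\n"
        f"Question: {query}\n"
    )

if __name__ == "__main__":
    docs = [
        "Our warranty covers hardware defects for 24 months.",
        "Support hours are 9:00-17:00 JST on weekdays.",
        "Foundation models are pre-trained on broad data.",
    ]
    question = "How long does the warranty cover hardware defects?"
    prompt = build_prompt(question, retrieve(question, docs))
    print(prompt)  # In practice, send this prompt to the foundation model's API.
```

Because the foundation model itself stays frozen in this pattern, the business maintains only its document store and prompts rather than a training pipeline.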