株式会社オブライト
AI | 2026-05-17

Fine-tuning

Also known as: Fine-tuning / ファインチューニング / 微調整

Additional training of a pre-trained foundation model on task- or domain-specific data to specialize its behavior or style. LoRA and QLoRA have made fine-tuning accessible without full parameter updates.


Overview

Fine-tuning continues training a pre-trained foundation model (GPT, Llama, Qwen, etc.) on a domain-specific or task-specific dataset to improve accuracy or output consistency. Full fine-tuning updates all parameters; parameter-efficient methods like LoRA and QLoRA update only a small adapter, making the process feasible on consumer hardware.
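The low-rank idea behind LoRA can be sketched in a few lines: the frozen weight matrix W is augmented by a trainable update (alpha / r) * B @ A, where A and B have far fewer parameters than W. The shapes and values below are illustrative, not from any specific model; a minimal sketch in NumPy, not a training loop:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """Linear layer with a LoRA adapter.

    W (d_out x d_in) stays frozen; only A (r x d_in) and B (d_out x r)
    would be trained. The effective weight is W + (alpha / r) * B @ A.
    """
    return x @ (W + (alpha / r) * (B @ A)).T

# Hypothetical sizes for illustration only
d_in, d_out, r, alpha = 8, 4, 2, 16
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01  # small random init
B = np.zeros((d_out, r))                   # zero init: adapter starts as a no-op
x = rng.standard_normal((1, d_in))

# With B = 0 the adapted layer matches the frozen layer exactly,
# so training starts from the pre-trained model's behavior.
assert np.allclose(lora_forward(x, W, A, B, alpha, r), x @ W.T)
```

Note the parameter savings: A and B together hold r * (d_in + d_out) values versus d_in * d_out for W, which is what makes the method feasible on consumer hardware at realistic model sizes.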

Fine-tuning vs RAG

Fine-tuning excels at internalizing writing style, tone, and domain vocabulary. RAG is better for retrieving up-to-date factual knowledge. Production systems often combine both: RAG for retrieval and fine-tuning for style and domain alignment.
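The combined pattern can be sketched as: retrieve supporting documents at query time, then pass them as context to the (fine-tuned) model. The toy word-overlap retriever and prompt template below are assumptions for illustration; a real system would use a vector index and an actual model call:

```python
def retrieve(query, documents, k=1):
    """Toy retriever: rank documents by word overlap with the query.
    A production system would use embeddings and a vector index instead."""
    q = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Assemble a grounded prompt; the fine-tuned model would then
    supply the style and domain vocabulary when generating the answer."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "LoRA adds low-rank adapter matrices to frozen weights.",
    "RAG retrieves documents at query time to ground answers.",
]
prompt = build_prompt("How does RAG ground answers?", docs)
```

The division of labor matches the paragraph above: retrieval supplies up-to-date facts in the prompt, while fine-tuning shapes how the model phrases the response.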
