株式会社オブライト
AI · 2026-05-17

Distillation (Knowledge Distillation)

Also known as: Distillation / Knowledge Distillation / 知識蒸留

Training a smaller 'student' model to mimic the output distribution of a larger 'teacher' model, compressing the teacher's capabilities into a lighter-weight model suited to edge deployment or cost reduction.


Overview

Knowledge distillation, introduced by Hinton et al. in 2015, trains a small 'student' model on soft labels (the probability distributions) produced by a large 'teacher' model, rather than on hard one-hot labels alone. In the LLM era, distillation transfers reasoning capabilities from frontier models to small language models (SLMs). The DeepSeek R1 distilled model family is a well-known recent example.
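The classic soft-label objective can be sketched as follows: a minimal NumPy illustration (not any particular library's API) of the Hinton-style loss, which blends a temperature-softened KL term against the teacher's distribution with ordinary cross-entropy on the ground-truth labels. The function names and the choice of `T` and `alpha` are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T produces softer distributions.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-label term: KL(teacher || student) at temperature T,
    # scaled by T^2 to keep gradient magnitudes comparable (Hinton et al., 2015).
    p = softmax(teacher_logits, T)
    log_q = np.log(softmax(student_logits, T))
    kl = (p * (np.log(p) - log_q)).sum(axis=-1).mean() * T * T
    # Hard-label term: standard cross-entropy on the true labels.
    log_q1 = np.log(softmax(student_logits))
    ce = -log_q1[np.arange(len(labels)), labels].mean()
    return alpha * kl + (1 - alpha) * ce
```

When the student's logits already match the teacher's, the KL term vanishes and only the cross-entropy term remains; as the distributions diverge, the loss grows.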

Business application

Distilling the responses of a frontier model (e.g., GPT-4o, Claude Opus) into a small local model can deliver near-equivalent performance on specific tasks while eliminating ongoing API costs — an effective strategy for on-premises deployment and cost optimization.
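In practice, the first step is collecting teacher outputs for your task prompts as a supervised fine-tuning dataset. A minimal sketch, assuming a hypothetical `ask_teacher` callable that wraps whichever frontier-model API you use; the JSONL field names are illustrative, and a real pipeline would add deduplication and quality filtering:

```python
import json

def build_distill_dataset(prompts, ask_teacher, out_path="distill.jsonl"):
    # Write one (prompt, teacher completion) pair per line, in JSONL,
    # ready for supervised fine-tuning of a small local student model.
    with open(out_path, "w", encoding="utf-8") as f:
        for p in prompts:
            record = {"prompt": p, "completion": ask_teacher(p)}
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return out_path
```

The same loop works with any teacher: for a quick dry run, pass a stub function instead of a live API client.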

Related Columns

AI
Small Language Models Are the Star of 2026: Why SMBs Should Adopt SLMs Now and How to Get Started
Gartner has named Domain-Specific Language Models a top strategic technology trend for 2026. Small Language Models (SLMs) are transforming AI adoption for SMBs with lower costs, higher accuracy for specific tasks, and zero data leakage risk. This guide covers benefits, leading models, practical use cases, and step-by-step adoption.
AI
Hybrid AI Strategy Guide — Achieving 50% Cost Reduction with Cloud API + Local LLM [2026]
A practical guide to reducing AI operational costs by over 50% with a hybrid AI strategy combining cloud APIs and local LLMs. Learn optimal architecture design and implementation steps using local models like Qwen 3.5 and DeepSeek R1 with Claude, GPT, and Gemini.
AI
AI API Cost Optimization in the Pay-Per-Use Era — Smart Strategies for Claude, GPT, Gemini & Local LLMs [2026]
Comprehensive guide to AI API cost optimization in the pay-per-use era. Covers Claude, GPT, Gemini pricing comparisons, 5 reduction techniques including prompt caching, batch APIs, local LLM hybrid operations, monthly cost simulations, and ROI calculation methods.
AI
Gemma 4 Complete Guide — Features, System Requirements & Ollama Setup [2026]
Complete guide to Google Gemma 4 (released April 2, 2026): 4 model variants (E2B/E4B/26B MoE/31B Dense), Apache 2.0 license, system requirements, multimodal capabilities, AIME 89% benchmark, 140+ languages, and step-by-step Ollama installation and setup instructions.

