RAG (Retrieval-Augmented Generation)
Also known as: Retrieval-Augmented Generation / 検索拡張生成
A technique that retrieves relevant information from an external knowledge base and grounds an LLM's response in it. It is the mainstream approach for connecting LLMs to up-to-date or proprietary data.
Overview
RAG retrieves relevant material from an external source, such as a vector DB, search engine, or internal wiki, and injects it into the LLM's prompt before generation. Because it grounds the model in fresh or proprietary data without retraining, RAG has become the dominant pattern for enterprise LLM deployments.
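A minimal sketch of the retrieve-then-inject flow described above. The corpus, document texts, and prompt template are hypothetical, and the bag-of-words "embedding" stands in for the neural embedding model and vector DB a real deployment would use; only the pipeline shape (embed query, rank documents by similarity, prepend the top hits to the prompt) is the point.

```python
import math
import re
from collections import Counter

# Hypothetical in-memory corpus standing in for an external knowledge base.
DOCS = [
    "The refund window for Acme Pro is 30 days from purchase.",
    "Acme Pro supports SSO via SAML and OIDC.",
    "The Paris office opened in 2021.",
]

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real system would call an embedding model.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank every document against the query and keep the top k.
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, k: int = 1) -> str:
    # Inject the retrieved passages ahead of the question, so the LLM
    # answers from the supplied context rather than from parametric memory.
    context = "\n".join(retrieve(query, k))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

print(build_prompt("What is the refund window for Acme Pro?"))
```

In production the retrieval step is typically a nearest-neighbor query against a vector index, but the generation side is unchanged: the LLM only ever sees the assembled prompt.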
Common use cases
Internal knowledge Q&A, customer-support bots, product-doc search, legal research, and medical literature search. Google AI Overviews and the web-search features in Claude and ChatGPT are also applications of the RAG pattern. See Building a RAG knowledge base with OpenClaw for a practical implementation guide.