株式会社オブライト
2026-05-17

Alignment

Also known as: AI alignment (Japanese: AIアライメント / AI整合)

The research field and engineering practice of ensuring that AI systems act in accordance with human intentions, values, and ethics. RLHF, DPO, and Constitutional AI are its primary technical implementations in current LLM training.


Overview

Alignment encompasses both technical work — training LLMs to be helpful, harmless, and honest via RLHF, DPO, and Constitutional AI — and broader research on long-term AI safety. Anthropic was founded with AI alignment as its central mission.
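Of the techniques named above, DPO has the simplest mathematical core: it trains the policy to widen the log-probability gap between a human-preferred response and a rejected one, relative to a frozen reference model. The sketch below is illustrative only, assuming scalar summed log-probabilities per response; the function and argument names are mine, not from any particular library.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Illustrative DPO loss for a single preference pair.

    Inputs are summed log-probabilities of the chosen and rejected
    responses under the policy being trained and a frozen reference model.
    beta controls how strongly deviations from the reference are scaled.
    """
    # Implicit reward margin: how much more the policy favours the chosen
    # response (relative to the reference) than it favours the rejected one.
    margin = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # Negative log-sigmoid of the margin: near zero when the policy already
    # ranks the chosen response clearly above the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy and reference agree exactly, the margin is zero and the loss is log 2; as the policy learns to prefer the chosen response, the loss falls toward zero. RLHF reaches a similar goal indirectly, by first fitting a reward model to the same preference pairs and then optimizing the policy against it.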

Regulatory context

The EU AI Act, the US Executive Order on AI, and Japan's AI Safety Guidelines each, in differing forms, require organizations to address the alignment and safety of the AI systems they deploy. For businesses deploying LLMs, verifying that a model provider's alignment policies are consistent with the intended use case is part of responsible AI governance.
