GENIAC Project and Japan's AI Strategy — The Future of Domestic LLMs Shown by Rakuten AI 3.0
Rakuten AI 3.0 was born from the GENIAC project led by METI and NEDO. It features a 700-billion-parameter MoE architecture, achieves a score of 8.88 on the Japanese MT-Bench, and is released under the Apache 2.0 license. This article examines the strategic importance of domestic LLMs in Japan's AI industrial policy and the significance of data sovereignty.
The Full Picture of the GENIAC Project — National AI Strategy Led by METI and NEDO
GENIAC (Generative AI Accelerator Challenge) is a core project of Japan's AI industrial policy, promoted by the Ministry of Economy, Trade and Industry (METI) and the New Energy and Industrial Technology Development Organization (NEDO). Launched in earnest in fiscal year 2023, this initiative aims to break away from dependence on foreign large language models (LLMs) and establish Japan's own technological foundation. Rakuten AI 3.0, developed by Rakuten Group as a result of the GENIAC project, is an LLM with approximately 700 billion parameters built on a Mixture of Experts (MoE) architecture, scheduled for release in spring 2026. It achieved a high score of 8.88 on the Japanese MT-Bench, surpassing GPT-4o. Its release under the Apache 2.0 license creates an environment where domestic companies and research institutions can freely utilize the model.
The Overall Picture of Japan's AI Industrial Policy — Data Sovereignty and Technological Independence
The Japanese government positions ensuring independence in AI technology as a critical policy issue. Currently, there is heavy reliance on frontier models developed by US companies such as ChatGPT (OpenAI), Claude (Anthropic), and Gemini (Google), raising concerns from the perspectives of data sovereignty, security, and industrial competitiveness. The GENIAC project is designed as a strategic response to these challenges, comprehensively promoting domestic LLM development support, computing resource infrastructure, human resource development, and regulatory framework construction. In developing Rakuten AI 3.0, Rakuten utilized its in-house multi-node GPU cluster and high-quality training data specialized for Japanese, achieving Japanese language processing performance difficult to realize with foreign models.
Rakuten AI 3.0's MoE Architecture — Balancing Efficiency and Cost Reduction
Despite being a large-scale model with approximately 700 billion parameters, Rakuten AI 3.0 activates only about 40 billion parameters per token through the adoption of a Mixture of Experts (MoE) architecture. Specifically, it consists of 8 specialized experts and 1 always-active shared expert, with a mechanism that dynamically selects the optimal expert based on input content. This design achieves up to 90% cost reduction compared to third-party frontier models while maintaining high inference performance. It demonstrates particularly excellent performance in tasks such as text generation, code generation, and document analysis/extraction. Through training specialized for Japanese, the accuracy of nuance and context understanding has been significantly improved.
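The expert-routing mechanism described above can be illustrated with a toy sketch. This is not Rakuten AI 3.0's actual implementation, whose internals are not public; it is a minimal numpy illustration assuming a common MoE design (a softmax gating network with top-k selection, plus an always-active shared expert), with illustrative sizes and a top-k value chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 16          # hidden dimension (toy size; the real model is far larger)
N_EXPERTS = 8   # specialized experts, as described for Rakuten AI 3.0
TOP_K = 2       # experts activated per token (illustrative; the real value is not public)

# Each expert, and the shared expert, is reduced to a single linear map here.
experts = [rng.normal(size=(D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
shared_expert = rng.normal(size=(D, D)) / np.sqrt(D)
router = rng.normal(size=(D, N_EXPERTS)) / np.sqrt(D)  # gating network

def moe_layer(x):
    """Route one token vector through its top-k experts plus the shared expert."""
    logits = x @ router
    top = np.argsort(logits)[-TOP_K:]             # indices of the k highest-scoring experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                      # softmax over the selected experts only
    out = shared_expert @ x                       # shared expert always contributes
    for w, i in zip(weights, top):
        out += w * (experts[i] @ x)               # sparse weighted mixture
    return out, top

token = rng.normal(size=D)
out, chosen = moe_layer(token)
# Only TOP_K of N_EXPERTS expert weight matrices are touched per token --
# the same sparsity that lets a ~700B-parameter model activate only ~40B.
print(f"activated experts: {sorted(chosen.tolist())} of {N_EXPERTS}")
```

Because the router picks different experts for different tokens, total parameter count grows with the number of experts while per-token compute stays roughly constant; this is the source of the efficiency gains the paragraph describes.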
Breaking Free from Foreign Model Dependence — Advantages in Security and Customizability
While large-scale foreign models like GPT-4o, Claude, and Gemini offer high performance, they pose risks in terms of data handling, privacy, and service continuity. Particularly for companies handling sensitive business data or personal information, depending on foreign cloud APIs raises challenges of data sovereignty and compliance. Because Rakuten AI 3.0 is open-sourced under the Apache 2.0 license, companies can deploy the model in their own environments and operate it under their full control. The license also makes fine-tuning and Japanese-specific domain adaptation straightforward, enabling the development of specialized applications in fields such as government, healthcare, legal, and education. Within the Rakuten ecosystem, the model is integrated into various services through the Rakuten AI Gateway, and practical deployment is advancing.
Social Significance of Domestic LLMs — Deployment in Government, Healthcare, Legal, and Education Sectors
The development of domestic LLMs matters not merely as a technical achievement but as social infrastructure. In the government sector, models capable of accurately understanding Japanese official documents and laws are required for the digitization of citizen services and the automation of document processing. In healthcare, analysis of electronic medical records, diagnostic support, and summarization of medical literature demand understanding of specialized terminology and context. In the legal field, efficiency gains are anticipated in contract review, case law search, and legal research. In education, individualized learning support, report grading, and multilingual teaching material generation become possible. Open-source models like Rakuten AI 3.0 serve as a foundation for building highly specialized AI assistants through custom fine-tuning in these fields.
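The kind of domain-specific fine-tuning mentioned above is commonly done with parameter-efficient methods such as LoRA, which train a small low-rank update on top of frozen pretrained weights. The numpy sketch below illustrates that idea on a single toy layer; the layer size, rank, and scaling factor are illustrative assumptions, not values from Rakuten AI 3.0.

```python
import numpy as np

rng = np.random.default_rng(1)

D_OUT, D_IN = 256, 256   # toy layer size (real LLM layers are much larger)
RANK = 8                 # LoRA rank (illustrative choice)
ALPHA = 16               # LoRA scaling factor (illustrative choice)

W = rng.normal(size=(D_OUT, D_IN))        # frozen pretrained weight
A = rng.normal(size=(RANK, D_IN)) * 0.01  # trainable low-rank factor
B = np.zeros((D_OUT, RANK))               # zero-initialized, so the adapter starts as a no-op

def adapted_forward(x):
    """Forward pass with the low-rank update: (W + (alpha/r) * B @ A) @ x."""
    return W @ x + (ALPHA / RANK) * (B @ (A @ x))

# Only A and B would be trained; W stays frozen.
full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.3%}")  # 6.250% with these toy sizes
```

Because only the small A and B matrices are updated, a hospital, law firm, or school could adapt an open-weight base model to its own documents at a fraction of the cost of full fine-tuning, while the Apache 2.0 license permits keeping both data and adapted weights entirely on-premises.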
Strategic Importance of Data Sovereignty and Domestic AI — Competitive Advantage from Technological Independence
Data sovereignty refers to the right to keep a nation's data under its own laws and control, and it is becoming increasingly important in the AI era. When depending on foreign models, training data and user inputs may be sent to overseas servers, making where data resides and how it is used opaque. Domestic models like Rakuten AI 3.0, developed and operated within Japan, ensure transparency in data governance and facilitate compliance with regulations such as the GDPR and Japan's Act on the Protection of Personal Information. Technological independence also brings strategic advantages in international AI competition. Through open-sourcing, domestic startups and research institutions can build new services on top of the model, activating the entire AI industry ecosystem. METI and NEDO support the growth of this ecosystem through the provision of computing resources, funding, and human resource development programs.
The Future of Domestic LLMs and AI Implementation Support — Expansion from Shinagawa, Tokyo
The release of Rakuten AI 3.0 represents a significant milestone in Japan's AI industrial policy, with expectations for accelerated practical implementation and widespread adoption of domestic LLMs. For companies to introduce such advanced AI technologies and achieve business process optimization and new service development, specialized knowledge and support are essential. Oflight Inc., based in Shinagawa Ward, Tokyo, provides AI implementation support and consulting services centered in Shinagawa, Minato, Shibuya, Setagaya, Meguro, and Ota wards. Through comprehensive support tailored to corporate needs, including domestic LLM utilization strategy formulation, fine-tuning support, and secure deployment construction, we promote AI adoption in Japan. Let us maximize the achievements of the GENIAC project and co-create a competitive future together.