株式会社オブライト
Column
Open-Source LLM
Articles tagged "Open-Source LLM" (オープンソースLLM)
7 articles
AI
2026-04-24
DeepSeek V4 Preview Released — 1.6T MoE / 1M-Token Context Open-Weight Model [April 2026]
Overview of DeepSeek V4 Preview, released on April 24, 2026: two open-weight Mixture-of-Experts variants (V4-Pro at 1.6T total / 49B active and V4-Flash at 284B / 13B), 1-million-token context, weights on Hugging Face, and rollout via API and chat — based on official information.
DeepSeek V4
Open-Source LLM
MoE
AI
2026-04-24
Kimi K2.6 Goes GA — Preview Lifted, SWE-Bench Pro 58.6 / HLE 54.0 Open-Source Frontrunner [April 2026]
Moonshot AI's Kimi K2.6 reached general availability on April 21, 2026. Reported benchmarks: SWE-Bench Pro 58.6 and HLE-Full (with tools) 54.0, leading the open-weight class. Available across Kimi.com, the Kimi app, the API, and Kimi Code CLI; weights on Hugging Face under a Modified MIT License. This is a follow-up update to our earlier comprehensive guide.
Kimi K2.6
Moonshot AI
Open-Source LLM
AI
2026-04-24
Qwen 3.6-27B Released — Dense 27B Leads Agentic Coding, 40 tok/s on RTX 3090 [April 2026]
Qwen 3.6-27B Dense from Alibaba's Qwen Team, released April 22, 2026: 77.2 on SWE-bench Verified, 59.3 on Terminal-Bench 2.0 (matching Claude 4.5 Opus), 262K-to-1M context, Apache 2.0 license, and 40 tok/s on an RTX 3090 with Q4_K_M — summarized from official sources.
Qwen 3.6
Alibaba
Open-Source LLM
AI
2026-04-22
Kimi K2.6 Complete Guide — 13-Hour Long-Horizon Coding, 300 Parallel Agents & HLE 54.0 Open-Source SOTA [April 2026]
Kimi K2.6, released by Moonshot AI on April 20, 2026, achieves open-source SOTA with HLE w/tools 54.0 and SWE-Bench Pro 58.6, surpassing GPT-5.4 and Claude Opus 4.6. Complete guide covering 13-hour long-horizon coding, 300 parallel agent swarms, and OpenClaw integration.
Kimi K2.6
Moonshot AI
Agent Swarm
AI
2026-04-10
GLM-5.1 Complete Guide — #1 SWE-bench Pro Open-Source LLM [April 2026]
GLM-5.1 by Z.ai (released April 7, 2026) is the first open-source LLM to top SWE-bench Pro at 58.4%, surpassing GPT-5.4 (57.7%) and Claude Opus 4.6 (57.3%). This guide covers its 744B/40B-active MoE architecture, MIT license, 8-hour autonomous task capability, and setup via Ollama.
GLM-5.1
Z.ai
SWE-bench
AI
2026-04-10
MiniMax M2.5 Complete Guide — Lightning Attention Achieves 80.2% SWE-bench [2026]
MiniMax M2.5 achieves 80.2% on SWE-bench Verified using proprietary Lightning Attention in a 230B MoE model. Full breakdown of architecture, benchmarks, license terms, and setup instructions.
MiniMax M2.5
SWE-bench
Lightning Attention
AI
2026-03-17
SMB AI Adoption Strategy with Rakuten AI 3.0: Achieving Up to 90% Cost Reduction
AI adoption strategies for SMBs built on Rakuten AI 3.0's Apache 2.0 license and Hugging Face release, with cost reductions of up to 90%. Covers practical ways to apply its Japanese-language strengths in business operations, plus potential integration with Rakuten AI Gateway.
Rakuten AI 3.0
SMB
AI Adoption