株式会社オブライト

Articles tagged "ローカルAI"

8 articles

AI · 2026-04-04
Building a Claude Replacement with Qwen 3.5-9B — Practical Migration Guide [2026]
A practical migration guide to building a Claude replacement using Qwen 3.5-9B. Apache 2.0 license, 262K context, runs on 16GB RAM. Complete coverage from Ollama setup to API migration, prompt conversion, and cost comparison.
Qwen 3.5 · Claude replacement · Ollama
AI · 2026-04-03
Gemma 4 Hardware Requirements & System Specs — VRAM, GPU & Memory Guide for Local AI [2026]
Detailed Gemma 4 hardware requirements and system specs for local AI: VRAM/RAM requirements per variant (E2B 4GB, E4B 6GB, 26B MoE 16GB, 31B Dense 24GB+), recommended GPUs (RTX 3060/4070/4090, A100/H100), Apple Silicon (M1-M4) performance, quantization (Q4_K_M), and budget-based configurations from $0 to $5000.
Gemma 4 · Hardware · GPU
AI · 2026-03-16
Complete Guide to Ollama × OpenClaw — Building Multi-Model AI Agents on Mac mini
By combining Ollama and OpenClaw, you can build AI agents on a Mac mini that dynamically switch between multiple LLMs. This article walks through the practical steps in detail, from Ollama installation and model management to OpenClaw integration configuration and performance comparison. We show how to build a local AI infrastructure suited to SMBs and startups, particularly those in Shinagawa, Minato, Shibuya, Setagaya, Meguro, and Ota wards.
Ollama · OpenClaw · Mac mini
AI · 2026-03-13
Qwen3.5-9B × OpenClaw — Complete Guide to Building AI Agents on Mac mini
A comprehensive guide to building high-performance AI agents with Qwen3.5-9B using Mac mini M4 and OpenClaw. Covers hardware requirements, LINE/Slack/Discord integration, and performance benchmarks.
Qwen3.5-9B · OpenClaw · Mac mini
AI · 2026-03-04
Qwen3.5-9B Complete Guide: Run on Ollama with Just 5GB — Features, Benchmarks & Use Cases
Comprehensive guide to Qwen3.5-9B: Ollama setup instructions, hybrid Gated DeltaNet + Sparse MoE architecture, 262K context window, GPQA 81.7 and IFBench 76.5 (beating GPT-5.2's 75.4), comparison with GPT-4o-mini and Claude Haiku, and practical business use cases. Runs on just 5GB RAM.
Qwen3.5 · SLM · Small Language Models
AI · 2026-03-04
Qwen3.5-9B Local Setup Guide: Step-by-Step Installation on Mac, Windows & Linux
A complete step-by-step guide to installing Qwen3.5-9B locally on Mac, Windows, and Linux. Covers setup via Ollama, llama.cpp, and vLLM, along with quantization options (GGUF Q4/Q5/Q8), GPU acceleration (CUDA/Metal), Docker deployment, API server configuration, performance tuning, and troubleshooting. Practical instructions for developers and IT administrators looking to run SLMs on-premises.
Qwen3.5 · Local AI · Environment Setup
AI · 2026-03-04
Qwen3.5-9B Cost Optimization: Cloud API vs Local Deployment TCO Analysis
A thorough total cost of ownership (TCO) comparison between running Qwen3.5-9B as a local AI and using cloud APIs. Covers hardware costs, electricity, maintenance, break-even analysis, and ROI calculation frameworks for SMBs in Shinagawa, Minato, and Shibuya.
Qwen3.5 · Cost Optimization · TCO
AI · 2026-02-28
Small Language Models Are the Star of 2026: Why SMBs Should Adopt SLMs Now and How to Get Started
Gartner has named Domain-Specific Language Models a top strategic technology trend for 2026. Small Language Models (SLMs) are transforming AI adoption for SMBs with lower costs, higher accuracy for specific tasks, and zero data leakage risk. This guide covers benefits, leading models, practical use cases, and step-by-step adoption.
SLM · Small Language Models · Local AI