Oflight Inc.
AI | 2026-03-16

Getting Started with AI Agent Development using Ollama and OpenClaw — A Step-by-Step Beginner's Guide

Build your own AI agent easily with Mac mini M4, Ollama, and OpenClaw. This beginner-friendly guide walks you through everything from Homebrew installation to model downloads and LINE/Slack integration, step by step. Supporting Tokyo-area businesses in Shinagawa, Minato, Shibuya, and beyond with AI agent implementation.


Introduction: Why Local LLM × OpenClaw?

In recent years, cloud-based AI services like ChatGPT, Claude, and Gemini have become widespread. However, when enterprises and individuals prioritize privacy, cost, and customizability, locally-running large language models (LLMs) are gaining attention. Ollama is an open-source tool that makes it easy to run LLMs locally on macOS, Linux, and Windows, offering a rich model library including Llama 3, Qwen, Gemma, Mistral, Phi, CodeLlama, and DeepSeek. On the other hand, OpenClaw is an open-source AI agent platform that runs multi-agent systems on Mac mini and integrates with multiple messaging channels such as LINE, Slack, Discord, WhatsApp, Telegram, and iMessage. In this article, we will provide a step-by-step guide for beginners to start AI agent development using Ollama + OpenClaw, covering everything from environment setup to testing and troubleshooting.

Prerequisites: Required Environment and Hardware

The recommended environment for this guide is Mac mini M4 (Apple Silicon) running macOS Sequoia or later. Mac mini M4, equipped with 16GB or more unified memory and a Neural Engine optimized for AI inference, delivers sufficient performance for running local LLMs. Using the latest macOS version ensures compatibility with Ollama and OpenClaw. An internet connection is required for initial model downloads and Homebrew package installations. Storage capacity should be at least 50GB free, as model files can range from several GB to tens of GB. If you are comfortable with basic Terminal.app operations, you can follow this guide smoothly.

Step 1: Installing Homebrew

Homebrew is a package manager for macOS that simplifies the installation of tools like Ollama. If Homebrew is not yet installed, open Terminal and run the following command:

```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```

After installation completes, run `brew --version` to confirm the version is displayed. On Apple Silicon, Homebrew is installed to `/opt/homebrew`, so add the following line to your shell configuration file (`~/.zshrc` or similar) to add it to your PATH:

```bash
eval "$(/opt/homebrew/bin/brew shellenv)"
```

After updating the configuration, restart Terminal or run `source ~/.zshrc` to apply the changes.

Step 2: Installing Ollama and Downloading Your First Model

Once Homebrew is ready, install Ollama by running the following command in Terminal:

```bash
brew install ollama
```

After installation, start the Ollama server (leave this Terminal window running):

```bash
ollama serve
```

This command launches a REST API server on `localhost:11434` (it also exposes an OpenAI-compatible API). Open a new Terminal window and download your first model. For example, to use the lightweight yet capable "Llama 3.2 3B", run:

```bash
ollama pull llama3.2:3b
```

The download may take anywhere from a few minutes to several tens of minutes. Once complete, run `ollama list` to view installed models. To verify Ollama is working correctly, run `ollama run llama3.2:3b` to start an interactive chat in Terminal. Type `/bye` to exit.
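Beyond the interactive chat, you can talk to the server programmatically. A minimal Python sketch of a one-shot request to Ollama's `/api/generate` endpoint (it assumes `ollama serve` is running and `llama3.2:3b` has been pulled; the payload builder is kept separate so it can be inspected without a live server):

```python
import json
import urllib.request

OLLAMA_BASE_URL = "http://localhost:11434"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.
    stream=False returns one JSON object instead of NDJSON chunks."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a one-shot prompt to the local Ollama server and return the reply text."""
    body = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{OLLAMA_BASE_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running server):
#   print(generate("llama3.2:3b", "Say hello in one short sentence."))
```

This is the same endpoint OpenClaw will call on your behalf once configured, so it doubles as a quick connectivity check.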

Step 3: Installing OpenClaw and Basic Configuration

OpenClaw is cloned from GitHub and installed locally. In Terminal, navigate to your working directory and run:

```bash
git clone https://github.com/oflight-inc/openclaw.git
cd openclaw
npm install
```

After installing dependencies, create a `.env` configuration file. Copy `.env.example` from the repository and edit it:

```bash
cp .env.example .env
open .env
```

In the `.env` file, review and edit the following key settings:

- `OLLAMA_BASE_URL=http://localhost:11434`: Ollama endpoint
- `DEFAULT_MODEL=llama3.2:3b`: Default model name
- `AGENT_NAME=MyFirstAgent`: Agent name (optional)

OpenClaw's basic configuration is now complete. Next, configure integration with messaging channels.
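OpenClaw reads `.env` through its own configuration loader, but it helps to know how such `KEY=VALUE` files are interpreted. A minimal parser sketch in Python (for illustration only, not OpenClaw's actual code; the keys mirror the example above):

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, ignoring blank lines and # comments."""
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:  # skip malformed lines that have no '='
            env[key.strip()] = value.strip()
    return env

sample = """
# Ollama connection
OLLAMA_BASE_URL=http://localhost:11434
DEFAULT_MODEL=llama3.2:3b
AGENT_NAME=MyFirstAgent
"""
config = parse_env(sample)
```

One practical takeaway: values are taken literally, so quoting or trailing whitespace in a `.env` line can silently change a URL or model name.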

Step 4: Integrating with LINE Channel

OpenClaw supports multiple channels including LINE, Slack, Discord, WhatsApp, Telegram, and iMessage. Here, we'll use LINE, widely used in Japan, as an example. First, visit LINE Developers (https://developers.line.biz/) and create a provider and a Messaging API channel. After creating the channel, go to the "Messaging API settings" tab and retrieve the following:

- Channel access token (long-lived)
- Webhook URL (to be configured later)

Next, add the following to OpenClaw's `.env` file:

```bash
LINE_CHANNEL_ACCESS_TOKEN=your_line_channel_access_token
LINE_CHANNEL_SECRET=your_line_channel_secret
```

Start OpenClaw:

```bash
npm run dev
```

OpenClaw runs on `http://localhost:3000` by default. Return to the LINE Developers console and set the Webhook URL to `https://your-public-url/api/line/webhook` (LINE requires an HTTPS endpoint; for local development, use a tunneling service such as ngrok to obtain a public URL). Enable the webhook and disable the default auto-reply messages. Now, when users send messages on LINE, OpenClaw will respond using Ollama's LLM.
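The channel secret in `.env` is what authenticates incoming webhooks: LINE's documented scheme signs the raw request body with HMAC-SHA256 and sends the Base64 digest in the `X-Line-Signature` header. OpenClaw presumably performs this check internally; the Python sketch below shows what the verification involves, which is useful when debugging webhook failures:

```python
import base64
import hashlib
import hmac

def line_signature(channel_secret: str, body: bytes) -> str:
    """Compute the X-Line-Signature value: Base64(HMAC-SHA256(secret, body))."""
    digest = hmac.new(channel_secret.encode("utf-8"), body, hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

def verify_line_request(channel_secret: str, body: bytes, signature: str) -> bool:
    """Constant-time comparison against the signature LINE sent with the request."""
    return hmac.compare_digest(line_signature(channel_secret, body), signature)
```

Note that the signature is computed over the raw body bytes, so any middleware that re-serializes the JSON before verification will break the check.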

Step 5: Integrating with Slack Channel (Optional)

Slack integration is also popular. Visit the Slack API site (https://api.slack.com/) and create a new app. Under "OAuth & Permissions", add the following Bot Token Scopes: `chat:write`, `channels:history`, `groups:history`, `im:history`, `mpim:history` (the `*:history` scopes are what allow the bot to receive the corresponding message events). Install the app to your workspace and retrieve the Bot User OAuth Token. Add the following to `.env`:

```bash
SLACK_BOT_TOKEN=xoxb-your-slack-bot-token
SLACK_SIGNING_SECRET=your-slack-signing-secret
```

Enable "Event Subscriptions" and set the Request URL to `https://your-public-url/api/slack/events`. Subscribe to the `message.channels`, `message.groups`, `message.im`, and `message.mpim` bot events. Restart OpenClaw and invite the bot to a Slack channel (`/invite @YourBot`) to start using your agent on Slack.
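As with LINE, the signing secret exists to authenticate incoming requests. Slack's documented scheme concatenates `v0`, the `X-Slack-Request-Timestamp` header, and the raw body, then sends `v0=` plus the HMAC-SHA256 hex digest in the `X-Slack-Signature` header. OpenClaw presumably verifies this internally; a Python sketch of the check, including the replay-protection step Slack recommends:

```python
import hashlib
import hmac
import time

def slack_signature(signing_secret: str, timestamp: str, body: str) -> str:
    """Compute Slack's request signature: 'v0=' + HMAC-SHA256 over 'v0:timestamp:body'."""
    basestring = f"v0:{timestamp}:{body}"
    digest = hmac.new(signing_secret.encode(), basestring.encode(), hashlib.sha256).hexdigest()
    return f"v0={digest}"

def verify_slack_request(signing_secret: str, timestamp: str, body: str,
                         signature: str, max_age: int = 300) -> bool:
    """Reject requests older than max_age seconds (replay protection),
    then compare signatures in constant time."""
    if abs(time.time() - int(timestamp)) > max_age:
        return False
    return hmac.compare_digest(slack_signature(signing_secret, timestamp, body), signature)
```

If the Request URL verification in the Slack console fails, a mismatched signing secret in `.env` is one of the first things to rule out.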

Step 6: Testing and Verifying Agent Operation

Once LINE or Slack integration is complete, test the agent by sending a message. For LINE, add the bot as a friend and send "Hello" in the chat. Within a few seconds, you should receive a response from the LLM running on Ollama. If no response arrives, check OpenClaw's Terminal logs for error messages. Common causes include incorrect Webhook URL, token typos, Ollama not running, or model not downloaded. Verify the model is listed with `ollama list` and the API responds with `curl http://localhost:11434/api/tags`. Once working properly, try complex questions, switching between Japanese and English, code generation requests, and more to explore your agent's capabilities.
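The `ollama list` and `curl http://localhost:11434/api/tags` checks above can be scripted into a single health check. A small Python sketch (it assumes the `/api/tags` response has Ollama's usual shape, a top-level `models` list whose entries carry a `name` field):

```python
import json
import urllib.request

def model_installed(tags_json: dict, model: str) -> bool:
    """Return True if `model` appears in a /api/tags listing.
    Matches an exact name like 'llama3.2:3b' or a bare name plus any tag."""
    names = [m.get("name", "") for m in tags_json.get("models", [])]
    return any(n == model or n.startswith(model + ":") for n in names)

def check_ollama(base_url: str = "http://localhost:11434",
                 model: str = "llama3.2:3b") -> bool:
    """Query the local Ollama server and verify the expected model is pulled.
    Raises URLError if the server is not running at all."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return model_installed(json.loads(resp.read()), model)
```

Running `check_ollama()` before starting OpenClaw separates "Ollama is down" from "model not pulled", the two most common silent-failure causes.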

Common Troubleshooting Tips

Here are common issues beginners encounter and their solutions:

**1. Ollama won't start**: If port 11434 is already in use when running `ollama serve`, another process may be occupying the port. Check with `lsof -i :11434` and terminate the offending process.

**2. Model download is slow**: Ollama models range from several GB to tens of GB, and download speed depends on your network and the time of day. Use a wired connection or download during off-peak hours.

**3. OpenClaw won't start**: If `npm install` fails with errors, check your Node.js version; Node.js 18 or later is recommended. Check with `node --version` and switch versions using nvm if needed.

**4. LINE bot doesn't respond**: If webhook verification fails in the LINE Developers console, confirm that the public URL is set correctly and that OpenClaw is running. When using ngrok, the URL changes on each restart and must be updated in the console.

**5. Responses are slow or time out**: If your Mac mini M4 runs short of memory or you are using a large model, inference can take much longer. Start with lightweight models (3B-7B parameters) and switch models as needed.

Next Steps: Multi-Agent and Tool Integration

Once your basic agent is working, explore OpenClaw's more powerful features: multi-agent routing and tool integration. OpenClaw allows you to define multiple agents (e.g., customer support, technical Q&A, internal FAQ) and automatically route user questions to the appropriate agent. By adding tools (Function Calling) such as external API calls, database queries, web scraping, and calendar integration, you can build practical AI agents that go beyond simple chatbots. OpenClaw's documentation (https://github.com/oflight-inc/openclaw) includes tool definition samples and multi-agent best practices. Additionally, the security sandbox feature enables fine-grained control over agent access to external resources, making it suitable for enterprise environments.
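To make the routing idea concrete, here is a deliberately simplified sketch. OpenClaw's actual router is configured through its own agent definitions, and production systems typically classify with an LLM rather than keywords; the agent names and keywords below are made up for the example:

```python
# Hypothetical keyword-based router. The control flow (match -> dispatch ->
# fallback) is the same shape a multi-agent platform uses, even when the
# matching step is replaced by an LLM classifier.
AGENT_KEYWORDS = {
    "customer_support": ["refund", "order", "delivery", "complaint"],
    "technical_qa": ["error", "install", "api", "bug"],
    "internal_faq": ["vacation", "expense", "payroll"],
}

def route(question: str, default: str = "general") -> str:
    """Pick the first agent whose keywords appear in the question, else fall back."""
    q = question.lower()
    for agent, keywords in AGENT_KEYWORDS.items():
        if any(k in q for k in keywords):
            return agent
    return default
```

The fallback agent matters in practice: without it, questions that match nothing either fail silently or land on an arbitrary agent.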

Conclusion: AI Agent Implementation Support by Oflight Inc.

In this article, we provided a beginner-friendly, step-by-step guide to AI agent development using Ollama + OpenClaw. By combining Mac mini M4 with local LLMs, you can build highly customizable AI agents while protecting privacy and reducing costs. Oflight Inc., headquartered in Shinagawa-ku, Tokyo, provides comprehensive support for businesses in Shinagawa, Minato, Shibuya, Setagaya, Meguro, Ota, and across the Tokyo area, covering OpenClaw setup, custom agent development, multi-channel integration, tool integration, and ongoing maintenance. We propose optimal AI solutions tailored to your needs, including internal FAQ bots, customer support automation, sales assistance agents, and technical support bots. If you're considering implementation, please feel free to contact us. Let's take the first step in AI agent development together.
