Wondering how to install OpenClaw? You are in the right place. In this guide, we walk you through the process of setting up OpenClaw on both macOS and Windows, with all the steps you need to get your AI assistant up and running today!
What Is OpenClaw AI?
OpenClaw is not a traditional model-training framework. It is a self-hosted gateway: running on your computer or server, it connects chat applications like Telegram, WhatsApp, Discord, and iMessage to AI assistants. Its flexible architecture also makes it easy to integrate into existing software environments.
This article focuses on two key tasks: completing the installation on Windows/macOS and confirming the Gateway is running successfully. It then covers the shortest viable path to integrating chat platforms and configuring models.
Preparations Before the Installation of OpenClaw
Before you start the deployment process, make sure your system meets the following requirements:
- Node.js: Node 22 or later (if not installed, the installation script handles it automatically)
- WSL2: For Windows users
- OpenClaw.app: Recommended for macOS users
- Homebrew (Mac only): A package manager simplifying software installation on macOS.
- Ollama (optional): For running local large language models (e.g., DeepSeek or MiniMax) without cloud costs.
Note: Running local models through the Ollama connection requires higher hardware specifications on your device. If your hardware is limited, it is usually better to pick a cost-effective online API model instead; refer to the API model recommendation list below.
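Before running the installer, you can confirm the Node.js prerequisite yourself. A minimal sketch (the version-parsing helper is our own, not part of OpenClaw):

```shell
# Quick prerequisite check (sketch). Prints the installed Node.js major
# version, if any; OpenClaw wants Node 22 or later.
ver_major() { v=${1#v}; echo "${v%%.*}"; }   # "v22.3.0" -> "22"

if command -v node >/dev/null 2>&1; then
  major=$(ver_major "$(node --version)")
  if [ "$major" -ge 22 ]; then
    echo "Node $major found -- good to go"
  else
    echo "Node $major is too old; the installer will fetch Node 22+"
  fi
else
  echo "Node.js not found; the installer will handle it"
fi
```

If this prints a warning, you can still proceed: as noted above, the install script installs Node automatically.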
How to Set up OpenClaw on Windows
Follow these steps to successfully deploy OpenClaw on a Windows 11 device:
Step 1: Run PowerShell as Administrator
Start by launching PowerShell with administrative privileges. This is important because certain system-level scripts require higher-level access.
Step 2: Set Execution Policy
By default, Windows may block the execution of scripts. To allow scripts to run, execute the following command in PowerShell and type “Y” to confirm the changes:
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
Step 3: Use the One-Line Installer
Copy and paste the installation command:
iwr -useb https://openclaw.ai/install.ps1 | iex
This script will automatically detect your operating system and install Node.js if it’s missing.
Note: Windows Defender or your firewall might occasionally flag Node apps, so ensure your firewall isn’t blocking localhost connections if you run into pairing issues.
Step 4: Complete the Onboarding Process
Once the installation finishes, you will be prompted to complete the onboarding process in the terminal. Here, you’ll need to:
- Acknowledge the security consent for running AI agents on your system.
- Choose “Quick Start” mode for faster setup.
- Select a model. If you’re using Ollama, you can skip model selection for now or manually enter a dummy model to finish the setup.
How to Set up OpenClaw on macOS
Deploying OpenClaw on macOS is a seamless experience for both Apple Silicon (M1, M2, M3, M4) and Intel chip users:
Step 1: Open Terminal
Open the Terminal app on your Mac by using Spotlight Search.
Step 2: Execute the Curl Command
Run the one-line curl installer command:
curl -fsSL https://openclaw.ai/install.sh | bash
It will also check for and install Homebrew and Node.js if they are not already installed.
Step 3: Understand the Security Prompts
Mac users will encounter security prompts regarding the “Least Privilege” principle. Navigate the prompt using the arrow keys and select Yes to proceed.
Step 4: Choose Your Interface
OpenClaw offers both a Terminal User Interface (TUI) and a Web-based interface. For a streamlined experience, you can opt to “Hatch in TUI” to interact directly with your AI assistant via the terminal.
Connecting OpenClaw with Ollama (Both Platforms)
To run your models locally without cloud dependency, you’ll need to integrate OpenClaw with Ollama:
Step 1: Install Ollama
Download and install Ollama from its official website. Ollama is necessary for running models like Deepseek and Minimax locally.
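Once Ollama is installed, you can confirm it is on your PATH and pull a model. A sketch -- the model tag below (`deepseek-r1`) is an example; check the Ollama library for the exact tags you want:

```shell
# Sketch: verify Ollama is installed and (optionally) pull a local model.
have() { command -v "$1" >/dev/null 2>&1; }

if have ollama; then
  echo "Ollama found: $(ollama --version 2>/dev/null)"
  # Uncomment to download a model (several GB):
  # ollama pull deepseek-r1
else
  echo "Ollama not installed -- download it from https://ollama.com"
fi
```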
Step 2: Launch OpenClaw via Ollama
To start OpenClaw with a specific model, use the following command:
openclaw --model Minimax
Choose a model, such as Deepseek or Minimax, for download and initialization.
Step 3: Ensure Only One Gateway is Running
If you encounter any port errors, make sure no other OpenClaw gateway is running. Stop any existing instances with:
openclaw stop
Accessing the Web Control Panel
After the gateway starts, you can access the OpenClaw graphical interface via a browser:
Step 1: Obtain the Token
Open the configuration file located at ~/.openclaw/openclaw.json in your user home directory to find the dedicated gateway token. Note that this file is monitored by the Gateway, and many changes take effect automatically. Exercise caution when modifying it.
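If you prefer not to open the file by hand, you can scrape the token from the command line. A sketch -- the key name "token" is an assumption; inspect your `openclaw.json` to confirm where the token actually lives:

```shell
# Sketch: pull the gateway token out of ~/.openclaw/openclaw.json.
# NOTE: the "token" key name is assumed, not confirmed by the docs.
read_token() {
  # crude JSON scrape; if you have jq installed, prefer:
  #   jq -r '.. | .token? // empty' "$1"
  grep -o '"token"[[:space:]]*:[[:space:]]*"[^"]*"' "$1" \
    | head -n1 | sed 's/.*"token"[[:space:]]*:[[:space:]]*"//; s/"$//'
}
# usage: read_token ~/.openclaw/openclaw.json
```

Remember the caution above: the Gateway watches this file, so read it rather than edit it unless you know what you are changing.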
Step 2: Open Browser
Access http://127.0.0.1:18789 in your browser to enter the OpenClaw local control panel. If authentication is enabled, enter the token in the UI.
Step 3: Select AI Models
Choose a service provider (e.g., Anthropic or OpenAI). Using OAuth typically offers a smoother experience than configuring raw API keys. You can also set interaction channels for the AI agent (e.g., Telegram, Discord, WhatsApp).
Important Note: OpenClaw will warn that the AI agent will execute real commands on your device. Proceed with caution!
Quick Post-Installation Verification
If you’ve strictly followed the installation steps above but remain uncertain whether it’s properly installed or functioning correctly, use these steps for quick verification.
Step 1: Check Service Status
Run the following commands sequentially in the terminal:
openclaw gateway status
openclaw status
openclaw health
Under normal conditions, you should see results similar to:
- gateway: running
- status: running
- health: healthy
As long as the service status is running and health is healthy, the core components have successfully started.
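The three checks above can be rolled into a single pass/fail summary. A minimal sketch, assuming the `openclaw` CLI from the installer is on your PATH:

```shell
# Sketch: summarize the three OpenClaw checks into one result.
ok() { printf '%s\n' "$1" | grep -qi "$2"; }   # case-insensitive match

verify_openclaw() {
  gw=$(openclaw gateway status 2>/dev/null)
  st=$(openclaw status 2>/dev/null)
  hl=$(openclaw health 2>/dev/null)
  if ok "$gw" running && ok "$st" running && ok "$hl" healthy; then
    echo "OpenClaw core components are up"
  else
    echo "check failed -- run each command individually to see which"
    return 1
  fi
}
# usage: verify_openclaw
```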
Step 2: Access the Local Control Interface
Open the following URL in your browser:
http://127.0.0.1:18789/
If the Control UI management interface loads successfully, it indicates:
- The Gateway is running
- Local port listening is normal
- OpenClaw is operational
This method is more intuitive than digging through the configuration file for tokens, and it is better suited for new users to quickly verify the installation.
Troubleshooting Common Issues
If you cannot access the page:
- Re-confirm that openclaw status shows running
- Check whether your local firewall is blocking port 18789
- Restart the service:
openclaw restart
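To narrow down a connection failure, check whether anything is actually listening on the gateway port. A sketch using `lsof`, which ships with macOS and most Linux distros (on Windows, run it inside WSL2):

```shell
# Troubleshooting sketch: see what (if anything) owns a TCP port.
check_port() {
  if lsof -iTCP:"$1" -sTCP:LISTEN >/dev/null 2>&1; then
    echo "port $1 is in use:"
    lsof -iTCP:"$1" -sTCP:LISTEN
  else
    echo "nothing listening on port $1 -- try: openclaw restart"
  fi
}
# usage: check_port 18789
```

If another process owns port 18789, stop it (or the duplicate gateway) before restarting OpenClaw.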
Quick Verification Criteria
Installation is confirmed successful if all three conditions are met:
- openclaw status displays "running"
- openclaw health displays "healthy"
- Browser can open http://127.0.0.1:18789/ normally
Completing these steps confirms OpenClaw is successfully installed and running properly.
How to Connect OpenClaw to Telegram, WhatsApp, and More
After correctly configuring the API in OpenClaw, the next step is to integrate it with your preferred chat platform. Below is a complete step-by-step guide using Telegram and WhatsApp as examples to help you complete the final setup.
Step 1: Prepare Your Bot or Application Credentials
Before linking OpenClaw to any platform, you need to create a bot or app on that platform.
For Telegram
- Open Telegram and search for BotFather.
- Start a chat and send /newbot
- Follow the instructions to:
- Name your bot
- Create a username
- Copy the generated Bot Token.
You will use this token inside OpenClaw.
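Before pasting the token into OpenClaw, you can verify it works by calling the official Telegram Bot API `getMe` method, which returns the bot's identity as JSON. A sketch (the URL-building helper is our own):

```shell
# Sketch: build the Telegram Bot API URL for a given token and method.
bot_api_url() { printf 'https://api.telegram.org/bot%s/%s' "$1" "$2"; }

# usage (needs network access):
#   curl -s "$(bot_api_url "$BOT_TOKEN" getMe)"
# a valid token returns {"ok":true,...}; an invalid one {"ok":false,...}
```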
For WhatsApp
WhatsApp integration usually requires:
- WhatsApp Business API
- A third-party service provider
- Webhook configuration
Make sure you obtain:
- API Key
- Webhook URL
- Phone number ID (if required)
Step 2: Configure Webhook or API in OpenClaw
Now go back to OpenClaw.
- Open Platform Integration Settings
- Select your target platform (Telegram, Discord, WhatsApp, etc.)
- Paste your:
- Bot Token
- Webhook URL (if required)
- Channel or Chat ID
If webhook configuration is required:
- Set your webhook URL to: https://your-server-address/webhook
- Make sure your server supports HTTPS.
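A quick way to catch the most common webhook mistake -- a plain-http URL -- before pasting it into OpenClaw. A sketch; most chat platforms (Telegram included) reject non-HTTPS webhooks:

```shell
# Sketch: sanity-check a webhook URL before configuring it.
is_https() { case "$1" in https://*) return 0;; *) return 1;; esac; }

check_webhook() {
  if is_https "$1"; then
    echo "looks ok: $1"
    # reachability test (needs network): curl -fsS -o /dev/null "$1"
  else
    echo "rejected: webhook must use https ($1)"
    return 1
  fi
}
# usage: check_webhook https://your-server-address/webhook
```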
Step 3: Enable Message Routing
Inside OpenClaw:
- Go to Message Routing / Automation
- Choose:
- Default Model
- Trigger Rules (e.g., respond to all messages or only when mentioned)
- Save settings.
This ensures messages from your chat platform are forwarded to your selected API model.
Step 4: Test Your Bot
Go to your chosen platform:
- Send a message to your bot
- Wait for the response
If configured correctly:
- Message > OpenClaw > API > Response > Chat Platform
You should receive a reply within a few seconds.
Step 5: Customize Behavior (Optional but Recommended)
To improve user experience:
1. Set a System Prompt
Example:
You are a helpful AI assistant. Respond clearly and concisely.
2. Set Rate Limits
Prevent spam by limiting:
- Requests per minute
- Token usage per conversation
3. Add Command Triggers
For example:
- /help
- /reset
- /summary
This makes your bot feel more professional.
Step 6: Monitor Logs and Usage
To ensure stability:
- Check API usage limits
- Monitor error logs
- Watch for:
- 401 errors (invalid key)
- 429 errors (rate limit)
- Timeout errors
Keeping logs clean ensures long-term stability.
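A simple way to watch for the error codes above is to grep your gateway logs periodically. A sketch -- the log path in the usage line is an assumption; point it at wherever your gateway actually writes logs:

```shell
# Sketch: count common API errors in a log file.
count_errors() { n=$(grep -c "$2" "$1" 2>/dev/null); echo "${n:-0}"; }

scan_log() {
  for code in 401 429 timeout; do
    echo "$code: $(count_errors "$1" "$code")"
  done
}
# usage (log path assumed): scan_log ~/.openclaw/logs/gateway.log
```

A sudden spike in 429s usually means you need rate limits (Step 5), while repeated 401s mean the API key or bot token needs rotating.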
Step 7: Final Check Before You Start
Before going live, make sure everything is properly configured: confirm that your API key is valid, your bot token is correct, the webhook is active and reachable, the appropriate model is selected, and message routing is fully enabled.
Once all these elements are verified and working together smoothly, your OpenClaw integration should be ready to operate reliably across your chosen chat platform.
How A Reliable VPN Enhances OpenClaw Setup
When deploying OpenClaw, especially for accessing international AI models, a VPN plays a crucial role:
Accessing International AI Models
If you’re using OpenClaw in regions with network restrictions like China, LightningX VPN is a great solution. It has over 2,000 server nodes worldwide, allowing you to bypass geo-blocks and access international AI models like OpenAI (ChatGPT), Google Gemini, Midjourney, and Stable Diffusion. With strong encryption and high privacy standards, it ensures secure API calls to overseas AI models.
Ensuring Smooth AI Execution
For OpenClaw’s performance, LightningX VPN provides fast, stable connections with no speed limits, reducing latency and preventing interruptions. It also unblocks global social media and streaming platforms, making it easy to access external data sources.
Plus, it supports Windows, macOS, Linux, and more, ensuring seamless operation across devices. For users outside restricted regions, VPNs can also improve overall performance by unblocking websites and services required during the AI agent’s execution.

AI API Model Selection List (Low-Cost Priority Ranking)
| Vendor | Representative Model | Official Website | Price (Input / Output) | Combined Price (Input + Output) | Core Advantages | Remarks |
|---|---|---|---|---|---|---|
| Qwen (Ali) | qwen-turbo | dashscope.aliyun.com | $0.05 / $0.20 | $0.25 | Absolutely low price; Chinese-friendly | Ideal for budget-sensitive projects |
| OpenAI | gpt-5-nano | developers.openai.com | $0.05 / $0.40 | $0.45 | Most mature ecosystem; best compatibility | Extremely low-cost entry point |
| GLM (Z.AI) | GLM-4.7-FlashX | bigmodel.cn / docs.z.ai | $0.07 / $0.40 | $0.47 | Chinese-friendly; lightweight Agent | Friendly for domestic migration |
| Google | Gemini 2.5 Flash-Lite | ai.google.dev | $0.10 / $0.40 | $0.50 | Low latency; multimodal | Configuration details are slightly more complex |
| xAI | grok-4-1-fast-reasoning | x.ai/api | $0.20 / $0.50 | $0.70 | Strong tool calls | Tools are charged separately |
| DeepSeek | deepseek-chat | platform.deepseek.com | $0.27 / $1.10 | $1.37 | Good balance of quality and cost | Based on cache miss |
| Qwen (Ali) | qwen-plus (non-thinking) | dashscope.aliyun.com | $0.40 / $1.20 | $1.60 | More balanced; stronger Chinese support | The thinking mode is more expensive |
| OpenAI | gpt-5-mini | developers.openai.com | $0.25 / $2.00 | $2.25 | More stable and powerful | Default model option available |
| GLM (Z.AI) | GLM-4.7 | bigmodel.cn / docs.z.ai | $0.60 / $2.20 | $2.80 | Stronger for multi-step execution | More expensive than FlashX |
| Anthropic | Claude Haiku 4.5 | platform.claude.com | $1.00 / $5.00 | $6.00 | Good quality, fast speed | Expensive for a low-cost list |
Top3 Recommendations for OpenClaw API Integration
TOP 1: Qwen qwen-turbo: The absolute low-price leader, friendly for Chinese scenarios.
TOP 2: DeepSeek deepseek-chat: The most balanced, inexpensive, compatible with OpenAI, easy to integrate, and output price significantly lower than most mid-to-high-end models.
TOP 3: OpenAI gpt-5-nano: Ultra-low price + the most standard API ecosystem, the easiest to manage.
Additional Suggestion: If you care about quality/cost balance rather than the absolute lowest price, consider DeepSeek's deepseek-chat as your primary default model.
Final Words
Setting up OpenClaw on both Mac and Windows is straightforward when following the right steps. This guide has provided you with everything you need to set up OpenClaw on your device, as well as recommended AI models to enhance the experience. Start deploying your AI assistant today!