Integration · 2026-03-09 · 3 min read
Tags: openclaw, AI agent, custom API, tutorial

How to use MoleAPI as the AI backend for OpenClaw

OpenClaw is a trending open-source AI agent that supports custom API endpoints. Here's how to point it at MoleAPI to access GPT-4o, Claude, Gemini, and more through a single key.


OpenClaw has been gaining a lot of attention lately. It's an open-source, locally-running AI personal assistant that automates tasks like file management, email handling, and messaging notifications — across macOS, Windows, and Linux.

One of its standout features: full support for custom AI model API providers. You're not locked into any single vendor. Any OpenAI-compatible endpoint can be plugged in. MoleAPI fits this model exactly.

Why connect MoleAPI to OpenClaw

Without a gateway, using OpenClaw with multiple AI models means juggling separate API keys from OpenAI, Anthropic, Google, and others — each with different pricing, sign-up requirements, and geographic availability.

MoleAPI gives you a single unified endpoint:

  • One key for all models: GPT-4o, Claude Sonnet 4.5, Gemini 2.0, all through the same base URL
  • Pay-as-you-go: No need to pre-load credits at multiple vendors; one account covers everything
  • Switch models without changing code: Change one parameter in the OpenClaw config and you're on a different model
  • New models on day one: When providers ship new models, MoleAPI adds them quickly

Setup

Step 1: Get your MoleAPI key

Go to home.moleapi.com, create an account, and generate an API key. New accounts get free credits to start with.

You'll need two things:

  • API Key: a string like sk-xxxxxxxx
  • Base URL: https://api.moleapi.com/v1
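With those two values in hand, you can sanity-check the key before touching OpenClaw at all. Here is a minimal sketch using only Python's standard library; it builds an OpenAI-style chat-completions request (the payload shape follows the OpenAI API format, which MoleAPI mirrors). Sending the request is left commented out, since it would spend a few tokens:

```python
import json
import urllib.request

BASE_URL = "https://api.moleapi.com/v1"
API_KEY = "sk-your-moleapi-key"  # replace with your real key

def chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request.

    Because MoleAPI is a single gateway, switching models means
    changing only the `model` field -- nothing else in the call.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Identical call shape for every model behind the gateway:
req_gpt = chat_request("gpt-4o", "Say hello.")
req_claude = chat_request("claude-sonnet-4-5", "Say hello.")
# To actually send one:
# with urllib.request.urlopen(req_gpt) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The point to notice is that both requests are byte-for-byte identical except for the model name, which is what makes the "switch models without changing code" claim above work.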

Step 2: Add MoleAPI as a custom provider in OpenClaw

Open your OpenClaw config file (usually at ~/.openclaw/openclaw.json) and add a provider entry under models.providers:

{
  "models": {
    "providers": [
      {
        "name": "MoleAPI",
        "baseUrl": "https://api.moleapi.com/v1",
        "apiKey": "sk-your-moleapi-key",
        "api": "openai-completions",
        "models": [
          {
            "id": "gpt-4o",
            "contextWindow": 128000,
            "maxTokens": 16384,
            "reasoning": false
          },
          {
            "id": "claude-sonnet-4-5",
            "contextWindow": 200000,
            "maxTokens": 8192,
            "reasoning": false
          },
          {
            "id": "gemini-2.0-flash",
            "contextWindow": 1048576,
            "maxTokens": 8192,
            "reasoning": false
          }
        ]
      }
    ]
  }
}

Alternatively, the openclaw onboard command walks you through adding a custom provider interactively.
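Before restarting OpenClaw, it's worth confirming the edited file is still valid JSON and that each provider entry carries the fields shown above. This small checker is a sketch, not an official OpenClaw tool; the required-field list is taken from the example config, so adjust it if your OpenClaw version expects a different schema:

```python
import json
from pathlib import Path

# Fields every provider entry in the example config carries.
REQUIRED_PROVIDER_FIELDS = {"name", "baseUrl", "apiKey", "models"}

def check_openclaw_config(path: str) -> list:
    """Parse an openclaw.json file and report providers with missing fields.

    Returns an empty list when every provider entry looks complete.
    Raises json.JSONDecodeError if the file is not valid JSON.
    """
    config = json.loads(Path(path).expanduser().read_text())
    problems = []
    for provider in config.get("models", {}).get("providers", []):
        missing = REQUIRED_PROVIDER_FIELDS - provider.keys()
        if missing:
            name = provider.get("name", "?")
            problems.append(f"{name}: missing {sorted(missing)}")
    return problems

# Usage:
# problems = check_openclaw_config("~/.openclaw/openclaw.json")
# if problems: print("\n".join(problems))
```

A trailing comma or a missing quote in hand-edited JSON is the most common reason a freshly added provider silently fails to appear, and `json.loads` will point at the exact offending line.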

Step 3: Select the model in OpenClaw

Once configured, the models you defined will appear in OpenClaw's model picker. Select one, and all automated tasks will use it.

Choosing the right model

Different OpenClaw tasks benefit from different models:

  • General conversation / file tasks: gpt-4o (fast, well-rounded)
  • Long documents / email summaries: claude-sonnet-4-5 (massive context window)
  • High-frequency automation: gemini-2.0-flash (very low latency)
  • Reasoning / code execution: o3-mini (dedicated reasoning, cost-effective)

See the full list of models MoleAPI supports at /models.
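If you also script against MoleAPI directly, outside of OpenClaw, those recommendations can be captured as a small routing map. The task names and the helper are illustrative; the model IDs match the config shown earlier:

```python
# Illustrative task-to-model routing, mirroring the recommendations above.
MODEL_BY_TASK = {
    "chat": "gpt-4o",                  # general conversation / file tasks
    "summarize": "claude-sonnet-4-5",  # long documents / email summaries
    "automation": "gemini-2.0-flash",  # high-frequency, latency-sensitive
    "reasoning": "o3-mini",            # reasoning / code execution
}

def pick_model(task: str) -> str:
    """Return a recommended model ID for a task, defaulting to gpt-4o."""
    return MODEL_BY_TASK.get(task, "gpt-4o")
```

Because every model sits behind the same endpoint and key, this kind of per-task routing costs nothing to set up; with separate vendor accounts it would mean juggling several credentials.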

What OpenClaw can do with a capable backend

Once you've pointed OpenClaw at MoleAPI, you can run:

  • Email automation: sort inboxes, draft replies, extract key information
  • File workflows: organize downloads, rename batches, process documents
  • Messaging integrations: monitor Slack, Discord, Telegram and respond or forward based on rules
  • Task triggers: kick off AI workflows from calendar events or file changes

Everything runs locally. Your data stays on your machine. Only the inference call goes to the model provider — through MoleAPI.

How this compares to direct provider APIs

Direct integration with OpenAI or Anthropic works fine. But if you want to experiment across models, keep billing centralized, or just reduce the credential management overhead, a unified gateway is the cleaner path.

Next steps