Can @moltbot Run Jobs on a Specific OpenClaw Worker Node?
One important yet easily forgotten feature is the master-worker model in OpenClaw (previously Clawdbot, then Moltbot). If OpenClaw supported only a single worker node, it would be essentially the same as what I built for Claude Code plugins for Slack: bi-directional communication between a target channel and the Claude Code environment.
By default, @moltbot does not automatically know about worker-gpu-01.
You must explicitly connect chat → OpenClaw Gateway → node routing.
This post explains exactly when it works, how it works, and what must be configured.
The Required Execution Chain
For this to work, the following pipeline must exist:
Slack / WeChat (@moltbot)
↓
Moltbot service
↓
OpenClaw Gateway (master)
↓
OpenClaw Node (worker-gpu-01)
If any link is missing, the job will not run on the worker.
What Happens When You Type a Command in Chat
Example chat message:
@moltbot run training on worker-gpu-01
This does not directly execute anything on the GPU machine.
Instead, the following steps occur.
Step-by-Step Execution Flow
1. Moltbot Receives the Message
- Moltbot parses the intent (run training)
- Determines this is an agent task, not a local shell command
At this point, nothing is executed yet.
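As a rough illustration, the parsing step above might look like the following sketch. The message format, the regex, and the `parse_intent` helper are all assumptions for illustration, not Moltbot's actual parser:

```python
import re

# Hypothetical node-name pattern; real deployments would match their own
# naming convention for worker nodes.
NODE_RE = re.compile(r"\bon\s+(worker-[\w-]+)\b")

def parse_intent(message):
    """Extract a coarse intent and an optional target node from a chat message."""
    target = NODE_RE.search(message)
    return {
        "intent": "agent_task",  # classified as an agent task, not a local shell command
        "text": message,
        "node": target.group(1) if target else None,
    }

print(parse_intent("@moltbot run training on worker-gpu-01"))
```

Note that this step only classifies the message; nothing executes here.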
2. Moltbot Forwards the Request to OpenClaw Gateway
Moltbot forwards:
- User intent
- Conversation context
- Requested target node (worker-gpu-01)
Moltbot acts purely as a front-end trigger.
3. OpenClaw Gateway Translates Intent → Tool Call
The Gateway:
- Verifies worker-gpu-01 is connected and approved
- Converts the intent into a routed tool invocation, e.g.:
{
"tool": "system.run",
"host": "node",
"node": "worker-gpu-01",
"cmd": "python train.py"
}
- Routes this request to the target node
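The Gateway's verify-then-route step can be sketched like this. The `connected_nodes` registry and `route` function are illustrative names, not OpenClaw internals; only the tool-call shape comes from the JSON example in this post:

```python
# Hypothetical in-memory view of nodes the Gateway has approved.
connected_nodes = {"worker-gpu-01": "healthy"}

def route(node, cmd):
    """Verify the target node, then build the routed tool invocation."""
    if node not in connected_nodes:
        raise ValueError(f"unknown or offline node: {node}")
    return {
        "tool": "system.run",  # tool name as in the example invocation
        "host": "node",
        "node": node,
        "cmd": cmd,
    }

print(route("worker-gpu-01", "python train.py"))
```

The key property: the Gateway refuses to build an invocation for a node it does not already know about.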
4. Worker Node Executes Locally
On worker-gpu-01:
- The node receives the tool call
- Executes the command using its own GPU, filesystem, and environment
- Streams status and results back to the Gateway
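A minimal sketch of the worker-side behavior, assuming the node runs the command in a local shell. A real node would stream each line back over its Gateway connection rather than collecting a list:

```python
import subprocess

def execute(cmd):
    """Run a command locally, streaming output line by line; return (exit_code, lines)."""
    proc = subprocess.Popen(
        cmd,
        shell=True,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    lines = []
    for line in proc.stdout:  # each line is available as the job produces it
        lines.append(line.rstrip("\n"))
    return proc.wait(), lines

code, logs = execute("echo training step 1")
print(code, logs)
```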
5. Results Flow Back to Chat
- Node → Gateway (status, logs, exit code)
- Gateway → Moltbot
- Moltbot → chat response
From the user’s perspective:
“I asked in chat, and a GPU machine started working.”
Required Configuration Checklist
✅ 1. Moltbot Must Be Connected to the Same Gateway
- Moltbot must have API / socket access to the OpenClaw Gateway
- Moltbot never talks to nodes directly
If Moltbot only runs local scripts → ❌ node routing will not work.
✅ 2. Node Names Must Exist and Be Stable
The Gateway must already list the node:
openclaw node list
Expected output:
worker-gpu-01 connected healthy
If the node name doesn’t exist → the request fails.
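A pre-flight check is easy to script against that listing. The `openclaw node list` output format here is taken from the example above; treat the column layout as an assumption:

```python
def node_is_listed(name, listing):
    """Return True if `name` appears as the first column of any listing line."""
    return any(
        line.split()[:1] == [name]
        for line in listing.splitlines()
        if line.strip()
    )

# In practice you would capture the real CLI output, e.g. via
# subprocess.run(["openclaw", "node", "list"], capture_output=True, text=True).stdout
listing = "worker-gpu-01 connected healthy"
print(node_is_listed("worker-gpu-01", listing))
```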
✅ 3. Chat Intent Must Map to a Routable Action
Commands that usually work:
- run training on worker-gpu-01
- execute job on gpu node
- use worker-gpu-01 to train model
Commands that fail unless you add routing logic:
- “run this somewhere”
- “use GPU please”
Most teams add a small routing layer to:
- detect node names
- attach node=worker-gpu-01 automatically
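Such a routing layer can be very small. This sketch detects an explicit node name and attaches it to the request, falling back to a default GPU node; the pattern and default name are assumptions for illustration:

```python
import re

NODE_PATTERN = re.compile(r"\b(worker-[\w-]+)\b")
DEFAULT_GPU_NODE = "worker-gpu-01"  # hypothetical fallback for vague requests

def attach_node(message, request):
    """Attach node=... to the request, using an explicit name if one is present."""
    match = NODE_PATTERN.search(message)
    request["node"] = match.group(1) if match else DEFAULT_GPU_NODE
    return request

print(attach_node("run training on worker-gpu-01", {"cmd": "python train.py"}))
print(attach_node("use GPU please", {"cmd": "python train.py"}))
```

With this in place, even "use GPU please" becomes routable instead of failing.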
✅ 4. Gateway Policy Must Allow Remote Execution
If the Gateway:
- disables system.run on nodes
- or requires manual approval
Then Moltbot-triggered jobs may pause waiting for approval.
This is a policy choice, not a bug.
What Does Not Work (Common Misconceptions)
❌ Moltbot talking directly to worker machines
❌ Moltbot bypassing the Gateway
❌ Moltbot starting jobs if the node is offline
❌ Automatic load balancing without routing logic
Moltbot is not a scheduler by default — it is a messenger and trigger.
Making This Feel Natural in Chat
Most teams standardize on one of these patterns:
1. Explicit Node Targeting
@moltbot run train.py on worker-gpu-01
2. Role-Based Routing
@moltbot run training (gpu)
Gateway selects a GPU-capable node automatically.
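Role-based selection reduces to picking any healthy node that advertises the requested capability. The registry shape below is hypothetical:

```python
# Hypothetical capability registry the Gateway might maintain.
NODES = {
    "worker-gpu-01": {"caps": {"gpu"}, "status": "healthy"},
    "worker-cpu-01": {"caps": {"cpu"}, "status": "healthy"},
}

def pick_node(capability):
    """Return the first healthy node with the given capability, or None."""
    for name, info in NODES.items():
        if capability in info["caps"] and info["status"] == "healthy":
            return name
    return None

print(pick_node("gpu"))
```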
3. Named Jobs
@moltbot start experiment exp42
Where exp42 maps to:
- predefined command
- predefined node type
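The named-job pattern is just a lookup table. The command and config path below are invented for illustration; only the `exp42` name comes from the example above:

```python
# Hypothetical job registry: chat token -> predefined command and node type.
JOBS = {
    "exp42": {"cmd": "python train.py", "node_type": "gpu"},
}

def resolve_job(name):
    """Map a named job to its predefined command and node type."""
    job = JOBS.get(name)
    if job is None:
        raise KeyError(f"unknown job: {name}")
    return job

print(resolve_job("exp42"))
```

This keeps arbitrary commands out of chat entirely: users can only trigger jobs that someone has already defined.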
Final Takeaway
Yes, you can trigger jobs on specific OpenClaw worker nodes from chat — but only when Moltbot is correctly wired to the OpenClaw Gateway, and the Gateway performs explicit node routing.
Natural next steps from here:
- design a chat command grammar
- add GPU / CPU auto-routing
- publish a "safe lab policy" that prevents jobs from running on the master