Proven patterns for tool orchestration, data pipelines, conditional workflows, agent definitions, lifecycle hooks for observability, and checkpoint/resume for long-running LLM workflows.
Ralph Loop Pattern
Autonomous coding loop based on the Ralph Wiggum technique by Geoff Huntley. Each iteration: reload context from disk, pick one task, execute it, test, commit on green. The loop is the outer process; the LLM decides what to work on.
>> ralph.mld - autonomous coding agent loop
import { @claudePoll } from "@mlld/claude-poll"
var @tools = "Read,Write,Edit,Glob,Grep,Bash(git:*),Bash(npm:*)"
>> Cheap model picks the most important task from current plan
exe llm @pickTask(plan, specs) = [
let @prompt = `Given this plan and these specs, identify the SINGLE most
important next task. Search before assuming something isn't implemented.
Return JSON: { "task": "...", "type": "implement|fix|test", "files": [...] }
<plan>
@plan
</plan>
<specs>
@specs
</specs>
IMPORTANT: Write your JSON response to @mx.outPath using the Write tool.`
@claudePoll(@prompt, "haiku", ".", "Read,Glob,Grep,Write", @mx.outPath)
=> <@mx.outPath>? | @parse.llm
]
>> Worker executes the task with full agent capabilities
exe llm @doTask(task, specs) = [
let @prompt = `# Task
@task.task
## Specs
@specs
Implement this task. Search the codebase before assuming anything is
not implemented. After implementing, run tests for just this change.
IMPORTANT: Write your result JSON to @mx.outPath using the Write tool.`
@claudePoll(@prompt, "sonnet", ".", @tools, @mx.outPath)
=> <@mx.outPath>?
]
>> Validate with tests
exe @test() = [
let @out = sh { npm test 2>&1 }
=> { pass: @out.exitCode == 0, output: @out }
]
>> The loop
loop(endless) [
>> Fresh context every iteration — the context IS the history
let @plan = <fix_plan.md>
when @plan.trim() == "" => done "complete"
let @specs = <specs/*.md>
>> One task per loop — trust the LLM to pick what matters
let @task = @pickTask(@plan, @specs)
let @result = @doTask(@task, @specs)
>> Test backpressure — only commit what passes
let @check = @test()
when @check.pass => run sh { git commit -am "@task.task" && git push }
continue
]
Core principles:
- One task per loop — Each iteration picks a single task and executes it. Narrowing scope keeps context usage low and outcomes predictable.
- Fresh context from disk — Plan and specs reload every iteration. No chat history carried forward. The filesystem is the state.
- Test backpressure — Tests gate commits. Failing iterations aren't fatal; the next iteration sees the current state and adapts.
- LLM picks the work — The cheap classifier decides priority. The orchestrator doesn't encode task selection logic.
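Stripped of mlld specifics, one iteration of the loop can be sketched in plain Python. This is an illustration of the pattern, not mlld internals; the callables stand in for the LLM calls and shell steps in the script above:

```python
from pathlib import Path

def ralph_iteration(root, pick_task, do_task, run_tests, commit):
    """One Ralph iteration: fresh context from disk, one task, test-gated commit.

    pick_task/do_task stand in for the LLM calls; run_tests/commit for the
    shell steps. State lives entirely on disk, so each call starts clean."""
    plan = (root / "fix_plan.md").read_text()
    if not plan.strip():
        return "complete"          # empty plan means the run is finished
    specs = "\n".join(p.read_text() for p in sorted((root / "specs").glob("*.md")))
    task = pick_task(plan, specs)  # cheap model chooses a single task
    do_task(task, specs)           # worker model implements it
    if run_tests():                # test backpressure: commit only on green
        commit(task["task"])
    return "continue"
```

Note that a failed test run still returns "continue": the iteration is wasted, not fatal, and the next pass sees whatever state the worker left behind.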
Crash recovery — The llm label on @pickTask and @doTask enables automatic caching. If the loop crashes mid-iteration, re-running the script replays completed LLM calls from cache.
mlld run ralph # auto-resumes via cache
mlld run ralph --resume @doTask # re-run all worker calls
mlld run ralph --new # fresh run, clear cache
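The resume behavior rests on caching each labeled LLM call by its inputs, so a re-run replays completed calls instead of paying for them again. A minimal sketch of that idea (this is not mlld's actual cache format, just the shape of the mechanism):

```python
import hashlib, json
from pathlib import Path

def cached_llm_call(cache_dir, name, call, *args):
    """Replay a completed LLM call from disk if its inputs match a prior run.

    Keyed on the call name plus a hash of the arguments; a crash between
    calls loses nothing already cached."""
    key = hashlib.sha256(json.dumps([name, args]).encode()).hexdigest()
    path = Path(cache_dir) / f"{key}.json"
    if path.exists():                      # crash recovery: replay cached result
        return json.loads(path.read_text())
    result = call(*args)                   # cache miss: run the real call
    path.write_text(json.dumps(result))
    return result
```

Under this scheme, `--resume @doTask` corresponds to deleting only that call's cache entries, and `--new` to clearing the cache directory.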
Hook telemetry:
hook @progress after op:loop = [
log `iteration @mx.loop.iteration`
]
With pacing — Add a delay between iterations to avoid hammering APIs:
loop(endless, 5s) [
...
]
With a cap — Limit total iterations:
loop(50) [
...
]
Guarded Tool Export
Expose mlld functions as MCP tools with fixed context parameters and security guards. The agent sees a narrow interface; guards enforce what data can flow through.
Define the function, guard, and tool collection:
exe @searchIssues(org: string, repo: string, query: string) = cmd {
gh issue list -R @org/@repo --search "@query" --json number,title
} with { description: "Search GitHub issues" }
guard @noSecrets before op:exe = when [
@input.any.mx.labels.includes("secret") => deny "Secret data cannot flow to tools"
* => allow
]
var tools @agentTools = {
searchIssues: {
mlld: @searchIssues,
bind: { org: "mlld-lang", repo: "mlld" },
expose: ["query"],
description: "Search mlld issues by keyword"
}
}
export { @searchIssues, @agentTools }
The agent sees one parameter (query). The bound org and repo are invisible and fixed. The guard blocks any call carrying secret-labeled data.
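The reshaping that bind and expose perform can be sketched generically. The names here are hypothetical stand-ins, not mlld internals: bound parameters are merged in server-side, and only the exposed parameters appear in the tool's schema:

```python
def reshape_tool(fn, bind, expose):
    """Return a tool whose schema shows only the exposed parameters;
    bound parameters are fixed server-side and invisible to the agent."""
    def tool(**agent_args):
        unknown = set(agent_args) - set(expose)
        if unknown:                        # agent cannot override bound values
            raise ValueError(f"unexpected parameters: {unknown}")
        return fn(**bind, **agent_args)    # merge fixed + agent-supplied args
    tool.schema = {"parameters": list(expose)}
    return tool

def search_issues(org, repo, query):       # stands in for the gh-backed exe
    return f"searching {org}/{repo} for {query!r}"

search = reshape_tool(search_issues,
                      bind={"org": "mlld-lang", "repo": "mlld"},
                      expose=["query"])
```

The key property is that an agent supplying its own `org` is rejected rather than silently honored; the narrow interface is enforced, not merely advertised.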
Serve it:
mlld mcp tools.mld --tools-collection @agentTools
The --tools-collection flag tells the MCP server to use the reshaped tool definitions instead of raw exports.
Give it to an agent:
Point any MCP client at the command. For Claude Code:
{
"mcpServers": {
"my-tools": {
"command": "npx",
"args": ["mlld", "mcp", "tools.mld", "--tools-collection", "@agentTools"]
}
}
}
Add operation labels for policy:
var tools @agentTools = {
searchIssues: {
mlld: @searchIssues,
bind: { org: "mlld-lang", repo: "mlld" },
expose: ["query"],
labels: ["read-only"],
description: "Search mlld issues"
},
createIssue: {
mlld: @createIssue,
bind: { org: "mlld-lang", repo: "mlld" },
expose: ["title", "body"],
labels: ["destructive"],
description: "Create an mlld issue"
}
}
Guards can then check @mx.op.labels.includes("destructive") to block or require approval for write operations. See mcp-guards for after-guard patterns that validate tool outputs.
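A label-gated guard of that shape can be sketched outside mlld as follows; the approval callback is a hypothetical placeholder for whatever policy check or human prompt you wire in:

```python
def guard_destructive(op_labels, tool_call, approve):
    """Allow read-only operations; require approval for destructive ones."""
    if "destructive" in op_labels:
        if not approve():              # e.g. prompt a human or check policy
            raise PermissionError("destructive operation denied")
    return tool_call()                 # read-only (or approved) calls proceed
```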
Prose Execution
The prose {} syntax runs content through an LLM-interpreted DSL. By default it uses OpenProse, but any custom interpreter can be configured.
What is Prose Execution?
Prose execution invokes skills that an LLM interprets at runtime. Unlike run js {} which executes deterministically, prose {} sends content to an LLM with specific skills enabled. This enables complex multi-agent workflows defined in a domain-specific language.
Setup
- Install the OpenProse plugin in Claude Code:
  /plugin marketplace add git@github.com:openprose/prose.git
  /plugin install open-prose@prose
- Restart Claude Code and boot OpenProse:
  /prose-boot
- Skills will prompt for approval on first use.
Basic Usage
import { @opus } from @mlld/prose
exe @research(topic) = prose:@opus {
session "Research @topic"
agent researcher { model: sonnet, skills: [web-search] }
researcher: find current information about @topic
output findings
}
run @research("quantum computing trends")
Key Concepts
session - Names the workflow for context
agent - Defines an agent with model and skills
loop until - Iterates with semantic exit conditions:
exe @refine(draft) = prose:@opus {
session "Refine document"
loop until **the draft meets publication standards** {
critique @draft
revise based on critique
}
}
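A semantic exit condition amounts to letting a model judge when to stop, usually with a hard cap as a safety net. Roughly (`judge` and `revise` are stand-ins for LLM calls):

```python
def refine_until(draft, revise, judge, max_rounds=10):
    """Revise a draft until a model-judged condition holds or a cap is hit."""
    for _ in range(max_rounds):
        if judge(draft):        # LLM answers: does the draft meet the bar?
            break
        draft = revise(draft)   # otherwise critique-and-revise again
    return draft
```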
parallel - Run tasks concurrently:
exe @gather(topics) = prose:@opus {
session "Research multiple topics"
parallel for each topic in @topics {
research topic
}
combine results
}
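The parallel block corresponds to fanning independent tasks out concurrently and combining the results afterward, e.g. with a thread pool (a generic sketch, with `research` standing in for the agent call):

```python
from concurrent.futures import ThreadPoolExecutor

def gather(topics, research):
    """Run one research task per topic concurrently, then combine results."""
    with ThreadPoolExecutor() as pool:
        findings = list(pool.map(research, topics))  # preserves topic order
    return "\n".join(findings)                       # the combine step
```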
Template Files
For complex workflows, use external files:
exe @workflow(ctx) = prose:@opus "./workflow.prose"
exe @workflow(ctx) = prose:@opus "./workflow.prose.att" >> ATT interpolation
Custom Interpreters
Use any LLM-interpreted DSL by configuring different skills:
import { @claude } from @mlld/claude
>> Create a custom model executor
exe @myModel(prompt) = @claude(@prompt, "opus", @root)
>> Configure with custom skills
var @myDSL = {
model: @myModel,
skills: ["my-custom:boot", "my-custom:run"]
}
exe @process(data) = prose:@myDSL {
>> Your custom DSL syntax here
analyze @data
output result
}
The skill determines how the LLM interprets the prose content. OpenProse is one implementation; you can create your own DSL skills or use other prose interpreters.
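In essence, a prose interpreter is just a model executor paired with the skills that tell the model how to read the block. A minimal dispatch sketch, with hypothetical names and brace-style interpolation standing in for mlld's @-variables:

```python
def run_prose(interpreter, content, **variables):
    """Send prose content to the configured model with the configured skills.

    interpreter = {"model": callable, "skills": [...]}; the model callable
    is whatever executor the interpreter config points at."""
    prompt = content.format(**variables)   # interpolate variables into the DSL text
    return interpreter["model"](prompt, skills=interpreter["skills"])
```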
OpenProse Requirements
For OpenProse specifically:
- Claude Code with Opus (only model that reliably interprets OpenProse syntax)
- OpenProse skills approved:
open-prose:prose-boot,open-prose:prose-compile,open-prose:prose-run
See mlld howto exe-prose for syntax details. OpenProse docs: https://prose.md