Configuration includes project settings, environment variables, frontmatter, SDK modes, and checkpoint/resume for caching LLM call results.
Config Files
mlld uses dual configuration:
- `mlld-config.json` - Your project settings (edit manually)
- `mlld-lock.json` - Auto-generated locks (don't edit)
mlld validate warning suppression lives in mlld-config.json:
{
"validate": {
"suppressWarnings": ["exe-parameter-shadowing"]
}
}
Use suppression when a warning is intentional and reviewed.
Paths and URLs
Paths can be literal, interpolated, or resolver-based.
var @dir = "./docs"
var @userFile = "data/@username/profile.json"
var @template = 'templates/@var.html' >> literal '@'
>> URLs as sources
show <https://raw.githubusercontent.com/org/repo/main/README.md>
var @remote = <https://example.com/README.md>
Environment Variables
Allow env vars in config, then import via @input.
mlld-lock.json:
{
"security": {
"allowedEnv": ["MLLD_NODE_ENV", "MLLD_API_KEY", "MLLD_GITHUB_TOKEN"]
}
}
Usage:
import { @MLLD_NODE_ENV, @MLLD_API_KEY } from @input
show `Running in @MLLD_NODE_ENV`
All env vars must be prefixed with MLLD_.
policy
A policy object combines all security configuration into a single declaration.
policy @p = {
defaults: {
rules: [
"no-secret-exfil",
"no-sensitive-exfil",
"no-untrusted-destructive",
"no-untrusted-privileged"
]
},
operations: {
exfil: ["net:w"],
destructive: ["fs:w"],
privileged: ["sys:admin"]
},
auth: {
claude: "ANTHROPIC_API_KEY"
},
capabilities: {
allow: ["cmd:git:*"],
danger: ["@keychain"]
}
}
defaults sets baseline behavior. rules enables built-in security rules that block dangerous label-to-operation flows. unlabeled optionally auto-labels all data that has no user-assigned labels -- set to "untrusted" to treat unlabeled data as untrusted, or "trusted" to treat it as trusted. This is opt-in; without it, unlabeled data has no trust label.
operations groups semantic exe labels under risk categories. You label functions with what they DO (net:w, fs:w), and policy classifies those as risk types (exfil, destructive). This is the two-step pattern -- see policy-operations.
auth defines caller-side credential mappings for using auth:name. It accepts short form ("API_KEY") and object form ({ from, as }). Policy auth composes with standalone auth; caller policy entries override same-name module bindings.
capabilities controls what operations are allowed at all. allow whitelists command patterns. danger marks capabilities that require explicit opt-in.
danger: ["@keychain"] is required for keychain sources declared in policy.auth. Standalone top-level auth declarations do not require danger.
needs declarations are module requirement checks. They do not replace capability policy rules.
Export/import: Share policies across scripts:
export { @p }
>> In another file
import policy @p from "./policies.mld"
Policies compose with union() -- combine multiple config objects into one policy. The most restrictive rules win.
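A minimal sketch of union() composition; the policy names and exact call form are illustrative, since only the union() helper itself is named here:

```mlld
>> Two illustrative policy layers
policy @team = { capabilities: { allow: ["cmd:git:*", "cmd:echo:*"] } }
policy @project = { capabilities: { allow: ["cmd:echo:*"], deny: ["sh"] } }

>> union() merges the config objects; the most restrictive rules win,
>> so the effective allow list is the overlap and sh stays denied
policy @merged = union(@team, @project)
```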
Policy Capabilities
The capabilities object controls what operations can run.
policy @p = {
capabilities: {
allow: ["cmd:git:*", "cmd:npm:*", "fs:r:**", "fs:w:@root/tmp/**"],
danger: ["@keychain", "fs:r:~/.ssh/*"],
deny: ["sh"]
}
}
run cmd { git status }
Tool restrictions:
| Pattern | Matches |
|---|---|
| `cmd:git:*` | `git` with any subcommands |
| `cmd:npm:install:*` | `npm install` with any args |
| `sh` | Shell access |
Command allow/deny patterns evaluate against the interpolated command text, including @var substitutions.
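A sketch of how interpolation interacts with pattern matching (the variable names are illustrative):

```mlld
policy @p = { capabilities: { allow: ["cmd:git:status"] } }

var @sub = "status"
>> The command text evaluates to "git status", which matches cmd:git:status
run cmd { git @sub } with { policy: @p }

>> var @sub = "push" would evaluate to "git push" and be blocked
```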
Filesystem patterns:
| Pattern | Access |
|---|---|
| `fs:r:**` | Read any path |
| `fs:w:@root/tmp/**` | Write under tmp (implies read) |
| `fs:r:~/.config/*` | Read home config files |
Flat syntax (shorthand):
policy @p = {
allow: ["cmd:echo:*", "fs:r:**"],
deny: { sh: true, network: true }
}
Both forms are equivalent. The nested form (capabilities: { ... }) is more explicit; the flat form places allow/deny at the top level as shorthand.
How allow and danger interact:
allow and danger are two independent gates. allow is the general whitelist: it controls whether an operation is permitted at all. danger is a separate opt-in gate for sensitive operations that mlld considers inherently risky — reading SSH keys, force-pushing, running sudo, accessing the keychain, and similar. Both gates must pass for an operation to proceed.
mlld ships with a built-in default danger list (defined in core/policy/danger.ts) covering credential files, destructive commands, and security-bypass flags. When an operation matches the default danger list, policy blocks it unless the policy's danger array explicitly includes a matching pattern. This check runs independently of allow — an operation that matches allow but falls on the danger list is still blocked.
policy @p = {
allow: ["cmd:git:*", "fs:r:**"],
deny: ["sh"]
}
>> allow matches cmd:git:* — but git push --force is on the
>> default danger list. Without danger: ["cmd:git:push:*:--force"],
>> this is blocked with "Dangerous capability requires allow.danger".
run cmd { git push origin main --force }
To unblock it, add the matching pattern to danger:
policy @p = {
allow: ["cmd:git:*", "fs:r:**"],
danger: ["cmd:git:push:*:--force"],
deny: ["sh"]
}
The same double-gate applies to filesystem access. allow: ["fs:r:**"] permits reading all files, but reading ~/.ssh/id_rsa still requires danger: ["fs:r:~/.ssh/*"] because that path matches the default danger list.
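The filesystem double-gate can be sketched as follows (the danger pattern matches the one described above; the read target is illustrative):

```mlld
policy @p = {
  capabilities: {
    allow: ["fs:r:**"],        >> general read permission
    danger: ["fs:r:~/.ssh/*"]  >> explicit opt-in for the risky path
  }
}

>> Without the danger entry, this read is blocked even though fs:r:** allows it
var @key = <~/.ssh/id_rsa>
```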
Danger list: Operations matching danger require explicit opt-in. Without danger: ["@keychain"], keychain access is blocked even if other rules allow it.
Keychain allow/deny patterns live under policy.keychain and match service/account paths (with {projectname} from mlld-config.json).
Common mistakes:
- `tools` in env config enforces runtime tool access (`Bash` for shell commands, tool names for MCP calls); `capabilities.deny` handles command-pattern policy rules (for example `cmd:git:push`)
- Keychain access requires both `danger: ["@keychain"]` in capabilities AND `projectname` in `mlld-config.json`
- `no-secret-exfil` doesn't block `show`/`log` — add label flow rules for `op:show` and `op:log` (see policy-auth)
See policy-auth for credential flow, env-config for environment restrictions.
Operation Risk Labels
Classify operations by risk using the two-step pattern: label exe functions with semantic labels describing WHAT they do, then map those to risk categories in policy.
>> Step 1: Semantic labels describe the operation
exe net:w @postToSlack(msg) = run cmd { slack-cli "@msg" }
exe fs:w @deleteFile(path) = run cmd { rm -rf "@path" }
>> Step 2: Policy groups semantic labels under risk categories
policy @p = {
defaults: { rules: ["no-secret-exfil", "no-untrusted-destructive"] },
operations: {
exfil: ["net:w"],
destructive: ["fs:w"]
}
}
Now secret data cannot flow to @postToSlack (exfil rule) and untrusted data cannot flow to @deleteFile (destructive rule).
Why two steps?
- Reusability: Many functions share the same semantic label (`net:w` applies to Slack, email, webhooks). Changing the risk classification of `net:w` updates all of them at once.
- Flexibility: The same exe definition works under different policies. A dev policy might allow `net:w`; a production policy classifies it as `exfil`.
- Composability: Semantic labels are stable across teams and libraries. Risk classifications are a policy decision, not a code decision.
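The flexibility point can be sketched as one exe definition evaluated under two policies (names are illustrative):

```mlld
exe net:w @notify(msg) = run cmd { slack-cli "@msg" }

>> Dev policy: net:w is not mapped to a risk category, so no exfil rule fires
policy @dev = { defaults: { rules: ["no-secret-exfil"] } }

>> Prod policy: the same exe is now classified as exfil
policy @prod = {
  defaults: { rules: ["no-secret-exfil"] },
  operations: { exfil: ["net:w"] }
}
```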
Risk categories:
| Category | Meaning |
|---|---|
| `exfil` | Sends data outside the system |
| `destructive` | Deletes or modifies data irreversibly |
| `privileged` | Requires elevated permissions |
Multiple labels: Combine when an operation has multiple risks:
exe net:w, fs:w @exportAndDelete(data) = run cmd { backup_and_delete "@data" }
policy @p = {
operations: { exfil: ["net:w"], destructive: ["fs:w"] }
}
Alternative -- direct risk labeling: You can label exe functions directly with risk categories, skipping the mapping step:
exe exfil @sendToServer(data) = run cmd { curl -d "@data" https://api.example.com }
exe destructive @deleteFile(path) = run cmd { rm -rf "@path" }
This is simpler but couples exe definitions to risk categories. The two-step pattern is preferred for maintainability.
Complete example:
policy @p = {
defaults: { rules: ["no-secret-exfil"] },
operations: { exfil: ["net:w"] }
}
var secret @patientRecords = <clinic/patients.csv>
exe net:w @post(data) = run cmd { curl -d "@data" https://api.example.com }
show @post(@patientRecords)
Error: Rule 'no-secret-exfil': label 'secret' cannot flow to 'exfil'
Policy Label Flow Rules
The labels block in policy defines which data labels can flow to which operations.
policy @p = {
labels: {
secret: {
deny: ["op:cmd", "op:show", "net:w"]
},
"src:mcp": {
deny: ["op:cmd:git:push", "op:cmd:git:reset", "destructive"],
allow: ["op:cmd:git:status", "op:cmd:git:log"]
}
}
}
Deny/allow targets are operation labels -- both auto-applied (op:cmd, op:show) and user-declared (net:w, destructive, safe).
Prefix matching: A deny on op:cmd:git blocks all git subcommands (op:cmd:git:push, op:cmd:git:reset, etc.).
Most-specific-wins: When deny covers a prefix but allow covers a more specific path, the specific rule wins. Given deny: ["op:cmd:git"] and allow: ["op:cmd:git:status"], git status is allowed but git push is blocked.
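A minimal sketch of most-specific-wins using the src:mcp label from the example above:

```mlld
policy @p = {
  labels: {
    "src:mcp": {
      deny: ["op:cmd:git"],          >> prefix deny covers all git subcommands
      allow: ["op:cmd:git:status"]   >> more specific allow carves out status
    }
  }
}
>> For src:mcp-labeled data: git status is allowed, git push is blocked
```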
Label-flow policy evaluates declared labels and taint labels (src:*, dir:*) attached to values.
Built-in rules vs. explicit deny lists: For common protection patterns, use defaults.rules with built-in rules like no-secret-exfil instead of writing explicit deny lists. See policy-operations for the two-step classification pattern where semantic labels (e.g., net:w) are mapped to risk categories (e.g., exfil) via policy.operations.
In composed policies: Label deny/allow rules from all composed policy layers merge via union. A deny on secret → op:cmd from ANY layer blocks that flow in the merged policy. See policy-composition for merge rules.
Complete denial example:
policy @p = {
labels: {
secret: { deny: ["op:show"] }
}
}
var secret @customerList = <internal/customers.csv>
show @customerList
Error: Label 'secret' cannot flow to 'op:show' -- the policy blocks secret-labeled data from reaching show.
See labels-sensitivity for declaring labels, labels-source-auto for source label rules.
Policy Composition
Multiple policies compose automatically when imported or declared.
>> Team policy allows echo and git
/policy @p1 = { capabilities: { allow: ["cmd:echo:*", "cmd:git:*"] } }
>> Project policy allows echo and node
/policy @p2 = { capabilities: { allow: ["cmd:echo:*", "cmd:node:*"] } }
>> Effective: only echo (intersection of both policies)
/run { echo "allowed by both" }
Import pattern:
/import policy @baseline from "./baseline.mld"
/import policy @company from "./company.mld"
/policy @localPolicy = { deny: { sh: true } }
Composition rules:
| Field | Rule | Effect |
|---|---|---|
| `allow` | Intersection | Must be allowed by ALL policies |
| `deny` | Union | Denied by ANY policy |
| `danger` | Intersection | Must be opted into by ALL |
| `limits` | Minimum | Most restrictive wins |
Note: If allow lists have no overlap, the intersection is empty and all operations are blocked. Ensure shared baseline commands appear in all layers.
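A sketch of the empty-intersection pitfall (policy names are illustrative):

```mlld
>> No overlap between the two allow lists
policy @team = { capabilities: { allow: ["cmd:git:*"] } }
policy @project = { capabilities: { allow: ["cmd:node:*"] } }

>> Effective allow = intersection = empty: every command is blocked.
>> Adding a shared entry like "cmd:echo:*" to both layers restores a baseline.
```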
Profile selection considers the composed policy. The first profile whose `requires` entries all pass is selected:
/policy @p = { deny: { sh: true } }
/profiles {
full: { requires: { sh } },
readonly: { requires: { } }
}
>> Selects "readonly" because sh is denied
/show @mx.profile
Label deny rules and auth configs from all layers merge via union — a deny on secret → op:cmd from ANY layer blocks that flow in the merged policy.
See security-policies for basic definition, policy-capabilities for capability syntax, policy-label-flow for label rules.
Policy Auth
The using auth:* clause injects credentials as environment variables through sealed paths.
Why sealed paths matter: injected credentials bypass string interpolation. They are set at process env level and do not pass through prompt-controlled template text.
auth @brave = "BRAVE_API_KEY"
policy @p = {
auth: {
claude: { from: "keychain", as: "ANTHROPIC_API_KEY" },
github: { from: "env:GH_TOKEN", as: "GH_TOKEN" },
brave: "BRAVE_API_KEY"
}
}
run cmd { claude -p "hello" } using auth:claude with { policy: @p }
Standalone auth and policy.auth use the same mapping shape. Use policy.auth when callers need to remap module auth names.
Config forms
| Field | Purpose |
|---|---|
| `from` | Source: `"keychain:path"`, `"keychain"`, or `"env:VAR"` |
| `as` | Target environment variable name |
Short form examples:
auth @brave = "BRAVE_API_KEY"
policy @p = {
auth: {
brave: "BRAVE_API_KEY",
claude: { from: "keychain", as: "ANTHROPIC_API_KEY" }
}
}
Expansion rules:
- `"BRAVE_API_KEY"` -> `{ from: "keychain:mlld-env-{projectname}/BRAVE_API_KEY", as: "BRAVE_API_KEY" }`
- `{ from: "keychain", as: "ANTHROPIC_API_KEY" }` -> `{ from: "keychain:mlld-env-{projectname}/ANTHROPIC_API_KEY", as: "ANTHROPIC_API_KEY" }`
Resolution order
For using auth:name, mlld resolves in this order:
1. Auth captured on the executable where it was defined
2. Caller `policy.auth`
3. Caller standalone `auth`
Caller bindings override same-name captured bindings.
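A hedged sketch of the override behavior (the command and binding names are illustrative):

```mlld
>> Module binds svc to the keychain via standalone auth
auth @svc = "SERVICE_API_KEY"
exe @ping() = run cmd { svc-cli ping } using auth:svc

>> Caller remaps the same name to an env-sourced credential;
>> the caller's policy.auth entry wins over the captured binding
policy @p = { auth: { svc: { from: "env:STAGING_KEY", as: "SERVICE_API_KEY" } } }
run cmd { svc-cli ping } using auth:svc with { policy: @p }
```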
Keychain behavior
Keychain paths use service/account and support {projectname} from mlld-config.json.
Resolution for from: "keychain:...":
1. Read keychain entry
2. If missing, read `process.env[as]`
3. If both missing, throw
Unsupported provider schemes (for example op://...) fail with an explicit error.
policy.keychain.allow and policy.keychain.deny still gate keychain access.
danger: ["@keychain"] is required for policy.auth keychain sources. Standalone auth declares keychain intent directly and does not require danger.
Linux keychain access uses secret-tool (libsecret). Ensure secret-tool is on PATH.
policy @p = {
auth: {
claude: { from: "keychain:mlld-env-{projectname}/claude", as: "ANTHROPIC_API_KEY" }
},
keychain: {
allow: ["mlld-env-{projectname}/*"],
deny: ["system/*"]
},
capabilities: { danger: ["@keychain"] }
}
run cmd { claude -p "hello" } using auth:claude with { policy: @p }
Label flow checks for using auth:*
Auth injection keeps secrets out of command strings, but policy label flow checks still apply to env injection. Secrets injected via using auth:* are treated as secret input for policy checks, and using @var as ENV uses the variable's labels.
policy @p = {
auth: { api: { from: "env:SECRET", as: "API_KEY" } },
labels: { secret: { deny: ["exfil"] } }
}
>> BLOCKED: secret flows to exfil-labeled operation
exe exfil @send() = run cmd { curl -H "Auth: $API_KEY" ... } using auth:api
show @send()
Explicit variable injection
var secret @token = "computed-value"
run cmd { tool } using @token as TOOL_KEY
Direct keychain access in templates/commands is blocked; use auth or policy.auth with using auth:* instead.
Note: no-secret-exfil blocks secrets flowing through exfil-labeled operations. To also block direct show or log of secrets, add label flow rules:
policy @p = {
labels: { secret: { deny: ["op:show", "op:log"] } }
}
env
Environments are mlld's primitive for execution contexts. They encapsulate credentials, isolation, capabilities, and state.
var @sandbox = {
provider: "@mlld/env-docker", >> Docker container for process isolation
fs: { read: [".:/app"], write: ["/tmp"] }, >> Mount host . as /app, allow writes to /tmp
net: "none" >> No network access
}
env @sandbox [
run cmd { npm test }
]
Why environments matter for security:
- Credential isolation - Auth injected via sealed paths, not exposed as strings
- Capability restriction - Limit what tools and operations agents can use
- Blast radius - Contain failures within environment boundaries
Environments are values:
var @task = "Review code"
var @cfg = { auth: "claude", tools: ["Read", "Write"] }
var @readonly = { ...@cfg, tools: ["Read"] }
env @readonly [ run cmd { claude -p @task } ]
Compute, compose, and pass environments like any other value.
Use object spread for plain object derivation. The with { ... } clause is env-directive config syntax (for env @cfg with { ... } [ ... ]).
For enforcement boundaries (what mlld enforces locally vs what requires a sandbox provider), see the table in env-config.
Providers add isolation:
| Provider | Isolation | Use Case |
|---|---|---|
| (none) | Local execution | Dev with specific auth |
| `@mlld/env-docker` | Container | Process isolation |
| `@mlld/env-sprites` | Cloud sandbox | Full isolation + state |
Without a provider, commands run locally with specified credentials.
Complete sandbox example:
Combine environment config with policy to restrict an agent:
policy @p = {
capabilities: {
allow: ["cmd:claude:*"], >> Only allow claude commands
deny: ["sh"] >> Block shell access
}
}
var @sandbox = {
tools: ["Read", "Write", "Bash"], >> Allow Read/Write plus command execution
mcps: [] >> Block MCP servers in this block
}
env @sandbox [
run cmd { claude -p "Analyze code" }
]
For a complete working example with Docker isolation, credentials, and guards, see sandbox-demo in llm/run/j2bd/security/impl/sandbox-demo.mld.
Reading order: env-config for configuration fields, env-blocks for scoped execution, policy-capabilities for restrictions, policy-auth for credentials.
Environment Directive
The env directive creates scoped execution contexts that combine process isolation, credential management, and capability control.
For concepts and configuration details, see env-overview, env-config, and env-blocks.
Sandboxed execution with credentials:
var @sandbox = {
provider: "@mlld/env-docker",
fs: { read: [".:/app"], write: ["/tmp"] },
net: "none",
tools: ["Read", "Bash"],
mcps: []
}
env @sandbox [
run cmd { claude -p "Analyze the codebase" } using auth:claude
]
The provider runs commands in a Docker container. fs restricts filesystem mounts, net blocks network access, tools limits runtime tool availability, and mcps: [] blocks MCP servers. Credentials flow through sealed paths via using auth:* — never interpolated into command strings.
Local execution with different auth:
var @cfg = { auth: "claude-alt" }
env @cfg [
run cmd { claude -p @task } using auth:claude-alt
]
Without a provider, commands run locally. Use this for credential rotation across calls (e.g., multiple API keys to avoid per-account rate limits).
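A rotation sketch under that pattern, assuming two credentials named claude-a and claude-b are already bound via policy or standalone auth:

```mlld
var @envA = { auth: "claude-a" }
var @envB = { auth: "claude-b" }

>> Alternate credentials across calls to spread per-account rate limits
env @envA [ run cmd { claude -p "first batch" } using auth:claude-a ]
env @envB [ run cmd { claude -p "second batch" } using auth:claude-b ]
```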
Config fields:
| Field | Purpose |
|---|---|
| `provider` | Isolation provider (`"@mlld/env-docker"`, `"@mlld/env-sprites"`) |
| `auth` | Authentication reference from policy |
| `tools` | Runtime tool allowlist |
| `mcps` | MCP server allowlist (`[]` blocks all) |
| `fs` | Filesystem access (passed to provider) |
| `net` | Network restrictions (passed to provider) |
| `limits` | Resource limits (passed to provider) |
| `profile` | Explicit profile selection |
| `profiles` | Profile definitions for policy-based selection |
Capability attenuation with with:
var @sandbox = {
provider: "@mlld/env-docker",
tools: ["Read", "Write", "Bash"]
}
env @sandbox with { tools: ["Read"] } [
>> Only Read is available here
run cmd { claude -p @task }
]
with derives a restricted child inline. Children can only narrow parent capabilities, never extend them.
Tool scope formats:
env @config with { tools: ["read", "write"] } [...]
env @config with { tools: "read, write" } [...]
env @config with { tools: "*" } [...]
var @subset = { read: @readTool, write: @writeTool }
env @config with { tools: @subset } [...]
Profile selection:
var @cfg = {
profiles: {
full: { requires: { sh: true } },
readonly: { requires: {} }
}
}
env @cfg with { profile: "readonly" } [
run cmd { claude -p @task }
]
When no profile is specified, the first profile whose requirements are satisfied by the active policy is selected.
Return values:
var @result = env @config [
let @data = run cmd { fetch-data }
=> @data
]
Scoped environment:
The env block creates a child environment. Variables defined inside don't leak out, but the block can access parent scope variables.
var @input = "test"
env @config [
let @processed = @input | @transform
=> @processed
]
Environment Configuration
Environment configuration objects control isolation, credentials, and resource limits.
var @sandbox = {
provider: "@mlld/env-docker",
fs: { read: [".:/app"], write: ["/tmp"] },
net: "none",
limits: { mem: "512m", cpu: 1.0, timeout: 30000 }
}
env @sandbox [
run cmd { npm test }
]
Configuration fields:
| Field | Values | Purpose |
|---|---|---|
| `provider` | `"@mlld/env-docker"`, etc. | Isolation provider |
| `fs` | `{ read: [...], write: [...] }` | Filesystem access |
| `net` | `"none"`, `"host"`, `"limited"` | Network restrictions |
| `limits` | `{ mem, cpu, timeout }` | Resource limits |
| `auth` | `"credential-name"` | Auth reference from policy |
| `tools` | `["Read", "Write", "Bash"]` | Runtime tool allowlist for commands and MCP tools |
| `mcps` | `[]`, `[server-config]` | Runtime MCP server allowlist |
Important: tools and mcps enforce runtime access inside env blocks.
| Field | Enforced locally by mlld? | Notes |
|---|---|---|
| `tools` | Yes | mlld restricts available tools |
| `mcps` | Yes | mlld restricts available MCP servers |
| `fs` | No - requires container provider | mlld passes config but cannot enforce filesystem restrictions without a sandbox |
| `net` | No - requires container provider | mlld passes config but cannot enforce network restrictions without a sandbox |
| `limits` | No - requires container provider | mlld passes config but cannot enforce resource limits without a sandbox |
- Include `Bash` in `tools` to allow `run cmd`, `run sh`, and shell-backed command executables.
- Set `mcps: []` to block all MCP tool calls, or list servers to allow specific MCP sources.
- Use `capabilities.deny` for command-pattern policy rules (for example `cmd:git:push`).
Advanced: MCP configuration via @mcpConfig():
Define an @mcpConfig() function to provide profile-based MCP server configuration:
var @cfg = {
profiles: {
full: { requires: { sh: true } },
readonly: { requires: {} }
}
}
exe @mcpConfig() = when [
@mx.profile == "full" => {
servers: [{ command: "mcp-server", tools: "*" }]
}
@mx.profile == "readonly" => {
servers: [{ command: "mcp-server", tools: ["list", "get"] }]
}
* => { servers: [] }
]
env @cfg with { profile: "readonly" } [
show @list()
]
The function is called when an env block spawns, with @mx.profile set from the with { profile } clause. When no profile is specified, the first profile whose requirements are satisfied by the active policy is selected. Explicit with { profile: "name" } overrides this automatic selection.
Compose with with:
var @readonly = @sandbox with { fs: { read: [".:/app"], write: [] } }
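A usage sketch of the derived config, assuming @sandbox carries a container provider (as in the example at the top of this section) so the fs restriction is actually enforced:

```mlld
env @readonly [
  >> Reads succeed against the mounted /app; writes are blocked by the provider
  run cmd { cat /app/README.md }
]
```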
See env-overview for concepts, env-directive for block syntax.
Environment Blocks
Execute directives within a scoped environment using env @config [ ... ].
var @sandbox = { tools: ["Read", "Write", "Bash"] }
env @sandbox [
run cmd { echo "inside sandbox" }
]
The environment is active only within the block and released on exit.
Return values:
var @config = { tools: ["Read", "Write"] }
var @result = env @config [
=> "completed"
]
show @result
Use => to return a value from the block.
Inline derivation with with:
var @sandbox = { tools: ["Read", "Write", "Bash"] }
var @result = env @sandbox with { tools: ["Read"] } [
=> "read-only mode"
]
show @result
Derives a restricted environment inline without naming it.
Named child environments:
var @sandbox = { tools: ["Read", "Write", "Bash"] }
var @readOnly = { ...@sandbox, tools: ["Read"] }
env @readOnly [
run cmd { cat README.md }
]
Child environments can only restrict parent capabilities, never extend them.
Notes:
- Directives inside blocks use bare syntax (no `/` prefix)
- Environment resources are released when the block exits
- `with { ... }` is env directive config syntax (`env @cfg with { ... } [ ... ]`), not a general object-modifier expression
- See env-overview for concepts, env-config for configuration fields
Auth
Use auth to declare credentials at module scope without requiring callers to import policy objects.
Standalone auth
auth @brave = "BRAVE_API_KEY"
exe @search(q) = js { /* uses process.env.BRAVE_API_KEY */ } using auth:brave
Short form expands to:
- `from: "keychain:mlld-env-{projectname}/BRAVE_API_KEY"`
- `as: "BRAVE_API_KEY"`
- runtime resolution: keychain first, then `process.env.BRAVE_API_KEY`
Long forms
auth @brave = { from: "keychain", as: "BRAVE_API_KEY" }
auth @brave = { from: "keychain:custom-service/custom-account", as: "BRAVE_API_KEY" }
auth @brave = { from: "env:SOME_OTHER_VAR", as: "BRAVE_API_KEY" }
from: "keychain" expands to keychain:mlld-env-{projectname}/<as>.
Unknown provider schemes (for example op://...) fail with a clear error until provider support is added.
Policy composition
policy.auth still works and accepts the same short/long forms:
policy @p = {
auth: {
brave: "BRAVE_API_KEY",
claude: { from: "keychain", as: "ANTHROPIC_API_KEY" }
}
}
Resolution order for using auth:name:
1. Auth captured on the executable where it was defined
2. Caller `policy.auth`
3. Caller standalone `auth`
Caller definitions override same-name module auth.
Keychain CLI
mlld keychain add BRAVE_API_KEY
mlld keychain get BRAVE_API_KEY
mlld keychain list
mlld keychain rm BRAVE_API_KEY
mlld keychain import .env
Entries are stored as service=mlld-env-{projectname} / account=<name>.