Secure LLM scripting.
Finally.
Agents and orchestrators in code you can actually read.
secure?
Prompt injection isn't an LLM problem,
it's an infrastructure problem.
mlld tracks where data came from and enforces where it can go, at the runtime level. The LLM doesn't get a vote.
No magic. No proprietary pixie dust. Just classic security principles applied to a new problem. mlld's primitives help you do the work of securing your stuff.
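The principle above is classic taint tracking: label data by its source, and let the runtime, not the model, decide what labeled data may do. Here's a minimal sketch of that idea in plain Python. This is purely illustrative; `Tainted`, `llm_output`, and `run_shell` are hypothetical names, not mlld's actual API.

```python
# Illustrative sketch of runtime taint enforcement (NOT mlld's real API).

class Tainted(str):
    """A string that carries a label naming its untrusted source."""
    def __new__(cls, value, source):
        obj = super().__new__(cls, value)
        obj.source = source
        return obj

def llm_output(text):
    # Anything a model produces is labeled untrusted by default.
    return Tainted(text, source="llm")

def run_shell(command):
    # The runtime checks the label before a sensitive sink runs.
    # The model cannot talk its way past this check.
    if isinstance(command, Tainted):
        raise PermissionError(
            f"refusing to exec {command.source}-tainted data"
        )
    print(f"executing: {command}")

run_shell("ls -la")                    # developer-authored: allowed
try:
    run_shell(llm_output("rm -rf /"))  # model-authored: blocked
except PermissionError as err:
    print(err)
```

The point is where the check lives: in the interpreter's plumbing, not in a prompt asking the model to behave.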
llm scripting?
If you've experienced the pain,
you know what you need it to be.
Tired of repeating yourself
"I'd do a lot more with LLMs if constantly assembling and re-assembling context wasn't such a chore."
Tired of wrong tools for the job
"I just want to script LLMs. Don't give me a chat app or an uber-agent or a magic black box. Give me a unix pipe."
Tired of shipping without guardrails
"I can't ship LLM workflows because I can't secure them. Everyone handwaves 'defense in depth' and nobody has auditable tooling for it."