# Architecture

## The intent pipeline
Every Zenus command flows through the same pipeline:
```
User input (natural language)
        ↓
Intent Translation (LLM + context)
        ↓
IntentIR validation (typed schema, Pydantic)
        ↓
Safety checks (risk assessment, confirmation policy)
        ↓
Plan analysis (dependency graph, failure history)
        ↓
Parallel Executor (ThreadPoolExecutor)
        ↓
Tool dispatch (FileOps, SystemOps, GitOps, ...)
        ↓
Action Tracker (transaction recording)
        ↓
Knowledge Graph ingestion
        ↓
Execution summary
```
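The flow above can be sketched as a chain of stage functions, each consuming the previous stage's output. The stand-in stages below are illustrative placeholders for this sketch, not Zenus internals.

```python
def translate(text):
    # Stand-in for LLM intent translation: text -> structured intent.
    return {"goal": text, "steps": []}

def validate(ir):
    # Stand-in for IntentIR schema validation.
    if "goal" not in ir or "steps" not in ir:
        raise ValueError("invalid IntentIR")
    return ir

def execute(ir):
    # Stand-in for tool dispatch plus the execution summary.
    return {"summary": f"done: {ir['goal']}"}

def run_pipeline(user_input, stages):
    """Thread the input through each stage in order."""
    result = user_input
    for stage in stages:
        result = stage(result)
    return result

# run_pipeline("list files", [translate, validate, execute])
```

Each real stage is more involved, but the contract is the same: every stage receives a validated structure and produces one for the next stage.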
## IntentIR — the safety contract
IntentIR is a Pydantic model that sits between the LLM and your system. The LLM never produces shell commands directly — it produces a structured JSON object that is validated, then executed by typed tool methods.
Key fields:
```python
from pydantic import BaseModel

class IntentIR(BaseModel):
    goal: str                      # human-readable goal
    steps: list[Step]              # typed, validated action sequence
    is_question: bool              # Q&A short-circuit
    search_provider: str | None    # "web" | "llm" | null
    search_category: str | None    # sports | tech | academic | news | general
    cannot_answer: bool            # model knows it can't answer
```
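The validation step can be illustrated with a stdlib-only stand-in. The real pipeline validates with Pydantic, but the shape of the check is the same: reject anything that is not JSON, then reject anything missing a required typed field. `validate_intent` and `REQUIRED` are names invented for this sketch.

```python
import json

# Required top-level fields and their expected Python types.
REQUIRED = {"goal": str, "steps": list, "is_question": bool, "cannot_answer": bool}

def validate_intent(raw: str) -> dict:
    obj = json.loads(raw)  # non-JSON LLM output is rejected here
    for field, typ in REQUIRED.items():
        if not isinstance(obj.get(field), typ):
            raise ValueError(f"invalid or missing field: {field}")
    return obj
```

Because the LLM's output must survive this gate before anything executes, a malformed or truncated response fails closed instead of reaching the tool layer.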
Each Step has a risk level (0–3). The orchestrator enforces that any step with risk ≥ 2 has requires_confirmation = True, regardless of what the LLM returns.
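The enforcement rule can be sketched as a post-processing pass over the validated steps. The `Step` dataclass and `enforce_confirmation` name here are illustrative, not the actual Zenus types.

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str
    risk: int                          # 0-3, as in the real schema
    requires_confirmation: bool = False

def enforce_confirmation(steps: list[Step]) -> list[Step]:
    """Force confirmation for risk >= 2, whatever the LLM returned."""
    for step in steps:
        if step.risk >= 2:
            step.requires_confirmation = True
    return steps
```

The point of doing this in the orchestrator rather than the prompt is that the policy holds even when the model sets the flag incorrectly.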
## Memory layers
| Layer | Lifetime | Storage | Purpose |
|---|---|---|---|
| Session memory | Current process | RAM | Conversation context |
| World model | Persistent | `~/.zenus/world_model.json` | System facts, preferences |
| Intent history | Permanent | `~/.zenus/history/` | Audit trail |
| Failure patterns | Persistent | `~/.zenus/failures.db` | Failure learning |
| Intent cache | Process + disk | `~/.zenus/cache/` | LRU + 1h TTL, skip LLM |
| Knowledge graph | Persistent | `~/.zenus/knowledge_graph.json` | Entity relationships |
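The intent cache's "LRU + 1h TTL" behavior can be sketched with an `OrderedDict`; this `TTLCache` is a minimal illustration, not the actual Zenus cache.

```python
import time
from collections import OrderedDict

class TTLCache:
    def __init__(self, max_size=128, ttl=3600.0):
        self.max_size, self.ttl = max_size, ttl
        self._data = OrderedDict()  # key -> (value, inserted_at)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, t = entry
        if time.monotonic() - t > self.ttl:  # older than the TTL: miss
            del self._data[key]
            return None
        self._data.move_to_end(key)          # refresh LRU position
        return value

    def put(self, key, value):
        self._data[key] = (value, time.monotonic())
        self._data.move_to_end(key)
        if len(self._data) > self.max_size:  # evict least-recently-used
            self._data.popitem(last=False)
```

A hit on this cache skips the LLM call entirely, which is why repeated or scripted commands resolve instantly.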
## Parallel execution
The DependencyAnalyzer builds a DAG from the IntentIR steps. Steps with no dependencies between them are submitted to a ThreadPoolExecutor concurrently; steps that depend on an earlier step's output run only after it completes. This gives an automatic 2–5× speedup on batch operations, with no user configuration.
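A minimal sketch of this wave-by-wave DAG execution, assuming steps are callables and dependencies are given as a name-to-prerequisites map; the names here are illustrative, not the DependencyAnalyzer API.

```python
from concurrent.futures import ThreadPoolExecutor

def run_dag(steps, deps):
    """steps: {name: callable}; deps: {name: set of prerequisite names}."""
    done, results = set(), {}
    with ThreadPoolExecutor() as pool:
        while len(done) < len(steps):
            # Every step whose prerequisites are all satisfied runs now,
            # concurrently with the other ready steps.
            ready = [n for n in steps
                     if n not in done and deps.get(n, set()) <= done]
            if not ready:
                raise ValueError("dependency cycle detected")
            futures = {n: pool.submit(steps[n]) for n in ready}
            for name, fut in futures.items():
                results[name] = fut.result()
                done.add(name)
    return results
```

Independent steps (say, three unrelated file operations) land in the same wave and run in parallel, while a step that consumes their output waits for the next wave.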
## Intelligence modules
- Failure Analyzer — queries `~/.zenus/failures.db` before each execution; warns if a matching failure exists.
- Tree of Thoughts — for high-stakes steps (risk ≥ 2), generates 3 candidate plans and evaluates them by confidence, risk, and speed.
- Self-Reflection — critiques the plan, checks assumptions, revises if needed.
- Goal Inference — detects 11 goal types (deploy, debug, migrate, security…) and adds implied steps (backups, tests, verification).
- Multi-Agent — spawns ResearcherAgent, PlannerAgent, ExecutorAgent, ValidatorAgent for complex tasks.
- Prompt Evolution — auto-tunes system prompts based on success rate; A/B tests variants.
- Model Router — routes simple tasks to fast/cheap models (DeepSeek), complex tasks to powerful models (Claude).
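The Model Router's routing decision can be sketched as a complexity heuristic over the validated intent. The threshold and model names below are illustrative placeholders, not Zenus's actual routing logic.

```python
def route_model(intent: dict) -> str:
    """Pick a model tier from the intent's size and risk profile."""
    steps = intent.get("steps", [])
    max_risk = max((s.get("risk", 0) for s in steps), default=0)
    # Short, low-risk intents go to the fast/cheap model; anything with
    # many steps or a high-risk step goes to the powerful model.
    if len(steps) <= 2 and max_risk < 2:
        return "deepseek-chat"
    return "claude-sonnet"
```

Routing on the validated IntentIR rather than the raw prompt means the decision is based on what will actually execute, not on how verbose the user was.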
## Package structure
```
zenus/
├── packages/
│   ├── core/           zenus-core — orchestrator, tools, brain, memory, safety
│   ├── cli/            zenus-cli — CLI entry point, argument routing, REPL
│   ├── tui/            zenus-tui — terminal dashboard
│   ├── voice/          zenus-voice — STT, wake word, TTS, pipeline
│   └── visualization/  zenus-visualization — charts, tables, diffs
```
## Long-term direction
Today Zenus is a Python layer on Linux:
```
User → Python App (Zenus) → Linux → Hardware
```
The long-term target moves the AI intent layer closer to the hardware:
```
User → Python AI Layer → Custom OS Services (Rust/C++) → Hardware
```
The Python layer preserves the full AI/ML ecosystem. The lower layer provides tighter control over scheduling, memory, and security policies without general-purpose OS overhead.
See the Roadmap for the phased plan.