Features
Shipped, user-visible capabilities.
CLI
Interactive chat and one-shot run/skill commands with a persistent daemon that starts automatically and manages its own lifecycle. Sessions can be resumed by ID prefix or picked from history.
- Model picker that queries provider APIs for available models
- Fuzzy search and autocomplete for file paths, sessions, commands, and skills
- File and directory attachments via `@path`
- Slash commands and skill invocation
- Engineering skills for structured workflows (plan, build, review)
- Configurable locale
- Multi-line input
- Custom terminal renderer with React reconciler and structured output
- Auto-update on startup with progress UI
- Update flags to force or skip auto-update (`--update`, `--no-update`)
- One-line install script
Agent execution
Single-pass lifecycle with four phases: `resolve → prepare → generate → finalize`
The model runs once, effects apply inline, and the lifecycle completes. Explicit completion signals (done, no_op, blocked) let the caller distinguish outcomes.
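The single-pass lifecycle can be sketched as follows. This is an illustrative outline only: the phase and signal names (`resolve`, `prepare`, `generate`, `finalize`, `done`, `no_op`, `blocked`) come from the text above, while the `Task`, `LifecycleHooks`, and `runOnce` names are hypothetical.

```typescript
// Explicit completion signals let the caller distinguish outcomes.
type Outcome = "done" | "no_op" | "blocked";

interface Task {
  prompt: string;
}

// Hypothetical hook shape for the four phases.
interface LifecycleHooks {
  resolve: (task: Task) => Task;           // resolve inputs, attachments, config
  prepare: (task: Task) => Task;           // build context, apply pre-call effects
  generate: (task: Task) => Outcome;       // single model pass; effects apply inline
  finalize: (outcome: Outcome) => Outcome; // post-call effects, persistence
}

// The model runs once and the lifecycle completes; there is no retry loop here.
function runOnce(task: Task, hooks: LifecycleHooks): Outcome {
  const resolved = hooks.resolve(task);
  const prepared = hooks.prepare(resolved);
  const outcome = hooks.generate(prepared);
  return hooks.finalize(outcome);
}
```

A caller would branch on the returned `Outcome` rather than inspecting model text, which is the point of explicit completion signals.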
- Pre/post-tool-call effect pipeline (auto-install deps, format, lint)
- Workspace profile detection with auto-detected install, lint, format, and test commands
- Configurable model reasoning level (low, medium, high) with provider-specific mapping
- Multi-provider support (OpenAI, Anthropic, Google, Vercel)
- Provider rate limit awareness with sliding window pacing and exponential backoff
- Proactive token budgeting with system prompt reservation and priority-based allocation
- Step budget enforcement for cost protection
- Two-tier result cache for read-only and search tools with cross-task persistence
- Streaming progress output with real-time token usage
- Inline task checklist for multi-step tasks
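Sliding-window pacing and exponential backoff, as mentioned in the rate-limit bullet above, can be sketched like this. The class name, the request limits, and the base/cap values are assumptions for illustration, not the tool's real API.

```typescript
// Illustrative sketch: delay requests once a per-window budget is spent.
class SlidingWindowPacer {
  private timestamps: number[] = [];
  constructor(private maxRequests: number, private windowMs: number) {}

  // Milliseconds to wait before the next request may be sent (0 = send now).
  delayFor(now: number): number {
    // Drop timestamps that have slid out of the window.
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length < this.maxRequests) return 0;
    // Otherwise wait until the oldest request exits the window.
    return this.timestamps[0] + this.windowMs - now;
  }

  record(now: number): void {
    this.timestamps.push(now);
  }
}

// Capped exponential backoff for retrying after a provider rate-limit error.
function backoffMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```

Pacing avoids hitting the limit in the first place; backoff handles the case where the provider rejects a request anyway.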
Tools
- Find/search/read files with gitignore awareness
- Edit/create/delete files
- AST-based structural code editing with workspace-wide scope
- Git status/diff/log/show/add/commit
- Shell and test execution
- Web search/fetch
Memory
On-demand memory toolkit (memory-search, memory-add, memory-remove) with three-scope persistent storage (session, project, user). Memory is not injected into the system prompt — the model searches for relevant context when it needs it.
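The three-scope, search-on-demand model can be sketched as below. The scope names come from the text; the `MemoryStore` class and its substring search are illustrative assumptions (the real toolkit ranks semantically, per the bullets that follow).

```typescript
// Scope names from the text; everything else is a hypothetical sketch.
type Scope = "session" | "project" | "user";

class MemoryStore {
  private entries: { scope: Scope; text: string }[] = [];

  add(scope: Scope, text: string): void {
    this.entries.push({ scope, text });
  }

  remove(scope: Scope, text: string): void {
    this.entries = this.entries.filter(e => !(e.scope === scope && e.text === text));
  }

  // Naive substring search across all scopes; nothing is pushed into the
  // system prompt -- the model calls search when it needs context.
  search(query: string): { scope: Scope; text: string }[] {
    return this.entries.filter(e => e.text.toLowerCase().includes(query.toLowerCase()));
  }
}
```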
- Automatic observation via distiller with `@observe` directives
- Semantic recall with embeddings and cosine similarity ranking
- Hybrid retrieval scoring (cosine similarity + TF-IDF token overlap)
- Topic tags on observations for filtered recall
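Hybrid retrieval scoring of the kind described above can be sketched by blending embedding cosine similarity with an IDF-weighted token-overlap score. The 0.7/0.3 blend weight and all function names are illustrative assumptions.

```typescript
// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// TF-IDF-style overlap: query tokens weighted by inverse document frequency.
function tokenOverlap(query: string[], doc: string[], idf: Map<string, number>): number {
  const docSet = new Set(doc);
  let score = 0, total = 0;
  for (const tok of query) {
    const w = idf.get(tok) ?? 1;
    total += w;
    if (docSet.has(tok)) score += w;
  }
  return total ? score / total : 0;
}

// Blend the two signals; alpha = 0.7 is an assumed weight, not the tool's.
function hybridScore(cos: number, overlap: number, alpha = 0.7): number {
  return alpha * cos + (1 - alpha) * overlap;
}
```

The token-overlap term catches exact identifiers and rare terms that embeddings can blur, while cosine similarity catches paraphrases.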
Safety and control
- Workspace sandbox boundary enforcement for filesystem access
- Cooperative interruption and queued message handling
Diagnostics
- Lifecycle trace with SQLite-backed indexed queries
- Structured logs with level, session, and time filtering
- Token usage reporting with prompt breakdown per turn
- Status command with JSON output
- Scoped debug logging with wildcard tag matching
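Wildcard tag matching for scoped debug logging can be sketched as below. The tag syntax (`scope:*`) is an assumption modeled on common namespace-style debug loggers, not necessarily this tool's exact format.

```typescript
// Compile a wildcard pattern ("agent:*") into a regex and test a tag
// against it, e.g. "agent:*" matches "agent:tools" but not "cli:input".
function tagMatches(pattern: string, tag: string): boolean {
  // Escape regex metacharacters in the literal parts, then turn "*" into ".*".
  const re = new RegExp(
    "^" +
      pattern
        .split("*")
        .map(p => p.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"))
        .join(".*") +
      "$"
  );
  return re.test(tag);
}
```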
Feature-flagged
Implemented but gated behind feature flags. See Configuration for setup.
- AGENTS.md sync (`syncAgents`) — sync AGENTS.md into project memory for on-demand recall
- Undo checkpoints (`undoCheckpoints`) — session-level undo via write-tool checkpoints
- Parallel workspaces (`parallelWorkspaces`) — git worktree management and workspace-scoped sessions
- Cloud sync (`cloudSync`) — portable memory and sessions across machines with EdDSA JWT auth