# Configuration
Configuration is loaded from user scope and project scope, with project values overriding user values.
Paths:

- User: `~/.acolyte/config.toml`
- Project: `<cwd>/.acolyte/config.toml`
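Since project values win, a setup like the following resolves `model` to the project's value (illustrative values only):

```toml
# ~/.acolyte/config.toml (user scope)
model = "gpt-5-mini"
locale = "en"
```

```toml
# <cwd>/.acolyte/config.toml (project scope, takes precedence)
model = "anthropic/claude-sonnet-4"
```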
## Common commands

```shell
acolyte config list
acolyte config set model gpt-5-mini
acolyte config set locale en
acolyte config set --project logFormat json
acolyte config unset openaiBaseUrl
```
## Vercel AI Gateway (recommended)
The fastest way to get started. The Vercel AI Gateway provides unified access to 20+ providers with a single API key.
```shell
acolyte init vercel
acolyte config set model anthropic/claude-sonnet-4
```
When a direct provider key (e.g. `ANTHROPIC_API_KEY`) is also set, Acolyte prefers the direct connection. When it is missing, requests fall back to the gateway automatically; no prefix or config change is needed.
```shell
# Explicitly target a provider only available through the gateway
acolyte config set model vercel/xai/grok-4.1

# Override the gateway base URL
acolyte config set vercelBaseUrl https://custom-gateway.example.com/v1
```
## Provider base URLs
Each provider has a configurable base URL with a sensible default:
- `openaiBaseUrl`: OpenAI API base (default: `https://api.openai.com/v1`). Set to a local endpoint for OpenAI-compatible providers (Ollama, vLLM, etc.).
- `anthropicBaseUrl`: Anthropic API base (default: `https://api.anthropic.com/v1`). Must end with `/v1`.
- `googleBaseUrl`: Google AI API base (default: `https://generativelanguage.googleapis.com`).
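Written out as TOML, the defaults above look like this (normally you only set the ones you change):

```toml
openaiBaseUrl = "https://api.openai.com/v1"
anthropicBaseUrl = "https://api.anthropic.com/v1"
googleBaseUrl = "https://generativelanguage.googleapis.com"
```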
## Local models
Configure an OpenAI-compatible local endpoint directly in project config, then set the model explicitly:
```shell
acolyte config set --project openaiBaseUrl http://localhost:11434/v1
ollama pull <model>
acolyte config set --project model openai-compatible/<model>
```
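After the commands above, the project config contains the equivalent of the following (illustrative; `<model>` stands in for the model you pulled):

```toml
openaiBaseUrl = "http://localhost:11434/v1"
model = "openai-compatible/<model>"
```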
## Localization

- `locale`: active UI language (defaults to `en`).
- English messages are defined in `src/i18n/en.ts`. Additional locales are loaded from `src/i18n/locales/*.json` at startup.
## Logging

`logFormat`: log output format (`logfmt` | `json`, default: `logfmt`).

`logfmt` emits one `key=value` line per entry:

```
2026-03-20T12:00:00.000Z level=info msg="request started" model=gpt-5-mini
```

`json` emits one JSON object per line with typed fields:

```json
{"ts":"2026-03-20T12:00:00.000Z","level":"info","msg":"request started","model":"gpt-5-mini"}
```

```shell
acolyte config set logFormat json
```
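Both formats carry the same fields. A minimal Python sketch (not part of Acolyte) showing how each sample line above parses into the same dictionary shape:

```python
import json
import re

logfmt_line = '2026-03-20T12:00:00.000Z level=info msg="request started" model=gpt-5-mini'
json_line = '{"ts":"2026-03-20T12:00:00.000Z","level":"info","msg":"request started","model":"gpt-5-mini"}'

def parse_logfmt(line: str) -> dict:
    # The leading token is the timestamp; the rest are key=value pairs,
    # where values are either bare tokens or double-quoted strings.
    ts, rest = line.split(" ", 1)
    pairs = re.findall(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)', rest)
    return {"ts": ts, **{k: v.strip('"') for k, v in pairs}}

# Both lines decode to the same record
assert parse_logfmt(logfmt_line) == json.loads(json_line)
```

The `json` format is the easier target for log shippers, since every line is valid JSON on its own.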
## Feature flags

Feature flags are opt-in toggles for experimental behavior, configured under `[features]` in `config.toml`.

Enable via TOML:

```toml
[features]
syncAgents = true
```

Enable via CLI:

```shell
acolyte config set features.syncAgents true
```
### Available flags

| Flag | Description |
|---|---|
| `syncAgents` | Sync AGENTS.md into a deterministic project memory record (`mem_agentsmd`). The model recalls it via memory-search instead of prompt injection. |
| `undoCheckpoints` | Write tools create undo checkpoints under `.acolyte/undo/<sessionId>/`. The model can list and restore via `undo-list` and `undo-restore`. |
| `parallelWorkspaces` | Enable `/workspaces` chat commands for managing git worktrees and workspace-scoped sessions. |
| `cloudSync` | Use the cloud API for memory and session storage. Requires `acolyte login`. |
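Flags are independent, so a project config can enable any combination (illustrative):

```toml
[features]
syncAgents = true
undoCheckpoints = true
```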
## All settable keys

| Key | Description |
|---|---|
| `port` | daemon server port (default: `6767`) |
| `locale` | UI language (default: `en`) |
| `model` | active model identifier |
| `temperature` | generation temperature (0.0 to 2.0) |
| `reasoning` | reasoning level for supported models (`low`, `medium`, `high`) |
| `openaiBaseUrl` | OpenAI API base URL |
| `anthropicBaseUrl` | Anthropic API base URL |
| `googleBaseUrl` | Google AI API base URL |
| `vercelBaseUrl` | Vercel AI Gateway base URL |
| `logFormat` | log output format (`logfmt` or `json`) |
| `embeddingModel` | embedding model for semantic recall |
| `distillModel` | model used for memory distillation |
| `replyTimeoutMs` | max reply wait time in ms (min 1000, default 180000) |
| `features.syncAgents` | opt-in: sync AGENTS.md to project memory and omit it from prompt |
| `features.undoCheckpoints` | opt-in: capture write-tool undo checkpoints |
| `features.parallelWorkspaces` | opt-in: enable `/workspaces` chat commands |
| `features.cloudSync` | opt-in: use cloud API for memory and session storage |
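As an end-to-end illustration, a project `.acolyte/config.toml` combining several of the keys above (values here are examples, not recommendations):

```toml
model = "anthropic/claude-sonnet-4"
temperature = 0.2
reasoning = "medium"
logFormat = "json"
replyTimeoutMs = 120000

[features]
undoCheckpoints = true
```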