
Configuration

Configuration is loaded from user scope and project scope, with project values overriding user values.

Paths:

  • User: ~/.acolyte/config.toml
  • Project: <cwd>/.acolyte/config.toml
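For example, the two scopes might look like this (values here are illustrative; the merge rule is as documented above, with project keys winning):

```toml
# ~/.acolyte/config.toml (user scope)
model = "gpt-5-mini"
logFormat = "logfmt"

# <cwd>/.acolyte/config.toml (project scope)
# Overrides logFormat for this project only; model still comes from user scope.
logFormat = "json"
```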

Common commands

acolyte config list
acolyte config set model gpt-5-mini
acolyte config set locale en
acolyte config set --project logFormat json
acolyte config unset openaiBaseUrl

Vercel AI Gateway

The fastest way to get started: the Vercel AI Gateway provides unified access to 20+ providers with a single API key.

acolyte init vercel
acolyte config set model anthropic/claude-sonnet-4

When a direct provider key is also set (e.g. ANTHROPIC_API_KEY), Acolyte prefers the direct connection. When the direct key is missing, requests fall back to the gateway automatically, with no prefix or config change needed.
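The documented preference order can be sketched as follows (function and names are illustrative, not Acolyte's actual internals):

```typescript
// Sketch of the documented routing rule: a direct provider key wins
// when present; otherwise requests fall back to the Vercel AI Gateway.
function resolveEndpoint(env: Record<string, string | undefined>): "direct" | "gateway" {
  // Example direct key from the docs; other providers would check their own key.
  if (env["ANTHROPIC_API_KEY"]) return "direct";
  return "gateway"; // automatic fallback, no prefix or config change
}
```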

# Explicitly target a provider only available through the gateway
acolyte config set model vercel/xai/grok-4.1

# Override the gateway base URL
acolyte config set vercelBaseUrl https://custom-gateway.example.com/v1

Provider base URLs

Each provider has a configurable base URL with a sensible default:

  • openaiBaseUrl: OpenAI API base (default: https://api.openai.com/v1). Set to a local endpoint for OpenAI-compatible providers (Ollama, vLLM, etc.).
  • anthropicBaseUrl: Anthropic API base (default: https://api.anthropic.com/v1). Must end with /v1.
  • googleBaseUrl: Google AI API base (default: https://generativelanguage.googleapis.com).
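The defaults and the /v1 constraint above can be captured in a small sketch (an illustrative helper, not part of Acolyte itself):

```typescript
// Defaults mirroring the list above.
const defaultBaseUrls: Record<string, string> = {
  openaiBaseUrl: "https://api.openai.com/v1",
  anthropicBaseUrl: "https://api.anthropic.com/v1",
  googleBaseUrl: "https://generativelanguage.googleapis.com",
};

// Documented constraint: the Anthropic base URL must end with /v1.
function isValidAnthropicBaseUrl(url: string): boolean {
  return url.endsWith("/v1");
}
```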

Local models

Configure an OpenAI-compatible local endpoint directly in project config, then set the model explicitly:

acolyte config set --project openaiBaseUrl http://localhost:11434/v1
ollama pull <model>
acolyte config set --project model openai-compatible/<model>
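Assuming an otherwise empty project file, the two config commands above produce (the `<model>` placeholder is kept as-is):

```toml
# <cwd>/.acolyte/config.toml
openaiBaseUrl = "http://localhost:11434/v1"
model = "openai-compatible/<model>"
```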

Localization

  • locale: active UI language (defaults to en).
  • English messages are defined in src/i18n/en.ts. Additional locales are loaded from src/i18n/locales/*.json at startup.
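A minimal sketch of a lookup under this layout, assuming missing keys fall back to the built-in English messages (that per-key fallback is an assumption, not documented behavior):

```typescript
// Stand-ins for src/i18n/en.ts and src/i18n/locales/*.json (illustrative data).
const en: Record<string, string> = { greeting: "Hello" };
const locales: Record<string, Record<string, string>> = {
  fr: { greeting: "Bonjour" },
};

// Resolve a message: active locale first, then English, then the key itself.
function t(locale: string, key: string): string {
  return locales[locale]?.[key] ?? en[key] ?? key;
}
```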

Logging

  • logFormat: log output format (logfmt | json, default: logfmt).

logfmt emits one key=value line per entry:

2026-03-20T12:00:00.000Z level=info msg="request started" model=gpt-5-mini

json emits one JSON object per line with typed fields:

{"ts":"2026-03-20T12:00:00.000Z","level":"info","msg":"request started","model":"gpt-5-mini"}

Switch formats with:

acolyte config set logFormat json
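The two formats above can be sketched as one formatter (illustrative, not Acolyte's actual logger):

```typescript
type Entry = { ts: string; level: string; msg: string; model?: string };

// Render an entry as either one JSON object per line or one logfmt line.
function formatLog(e: Entry, logFormat: "logfmt" | "json"): string {
  if (logFormat === "json") return JSON.stringify(e);
  // logfmt: key=value pairs, quoting values that contain spaces.
  const { ts, ...rest } = e;
  const pairs: string[] = [];
  for (const [k, v] of Object.entries(rest)) {
    if (v === undefined) continue;
    const s = String(v);
    pairs.push(s.includes(" ") ? `${k}="${s}"` : `${k}=${s}`);
  }
  return `${ts} ${pairs.join(" ")}`;
}
```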

Feature flags

Feature flags are opt-in toggles for experimental behavior, configured under [features] in config.toml.

Enable via TOML:

[features]
syncAgents = true

Enable via CLI:

acolyte config set features.syncAgents true

Available flags

  • syncAgents: Sync AGENTS.md into a deterministic project memory record (mem_agentsmd). The model recalls it via memory-search instead of prompt injection.
  • undoCheckpoints: Write tools create undo checkpoints under .acolyte/undo/<sessionId>/. The model can list and restore via undo-list and undo-restore.
  • parallelWorkspaces: Enable /workspaces chat commands for managing git worktrees and workspace-scoped sessions.
  • cloudSync: Use the cloud API for memory and session storage. Requires acolyte login.

All settable keys

  • port: daemon server port (default: 6767)
  • locale: UI language (default: en)
  • model: active model identifier
  • temperature: generation temperature (0.0 to 2.0)
  • reasoning: reasoning level for supported models (low, medium, high)
  • openaiBaseUrl: OpenAI API base URL
  • anthropicBaseUrl: Anthropic API base URL
  • googleBaseUrl: Google AI API base URL
  • vercelBaseUrl: Vercel AI Gateway base URL
  • logFormat: log output format (logfmt or json)
  • embeddingModel: embedding model for semantic recall
  • distillModel: model used for memory distillation
  • replyTimeoutMs: max reply wait time in ms (min 1000, default 180000)
  • features.syncAgents: opt-in: sync AGENTS.md to project memory and omit it from the prompt
  • features.undoCheckpoints: opt-in: capture write-tool undo checkpoints
  • features.parallelWorkspaces: opt-in: enable /workspaces chat commands
  • features.cloudSync: opt-in: use cloud API for memory and session storage
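The replyTimeoutMs bounds above can be sketched as a normalizer (illustrative; whether Acolyte clamps low values or rejects them is an assumption):

```typescript
// Documented bounds for replyTimeoutMs: minimum 1000 ms, default 180000 ms.
function normalizeReplyTimeoutMs(value?: number): number {
  if (value === undefined) return 180000; // documented default
  return Math.max(value, 1000); // enforce documented minimum (clamping is assumed)
}
```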