# Custom LLM Models
Junie CLI supports custom models defined via JSON profiles. This feature allows you to integrate with local providers (e.g., Ollama), enterprise proxies, or any LLM endpoint that follows the supported API formats.
## Configuration

### Location and Discovery
By default, custom models are discovered from JSON files located in:
- User scope: `$JUNIE_HOME/models/*.json`
- Project scope: `.junie/models/*.json`
The filename (without the `.json` extension) is used as the profile identifier.
You can control where Junie searches for custom models using the following command-line options:
| Option | Default | Description |
|---|---|---|
| — | — | Enable or disable adding custom models from default locations (per user / per project). |
| — | — | Additional folders where custom models should be found. Can be specified multiple times. |
### Profile Structure
A custom model profile consists of a top-level configuration and two optional model roles:

- Top-level properties: serve as the default configuration (base URL, API key, API type, and extra headers) for the models in the profile.
- `primaryModel`: the model used for main reasoning and code generation tasks.
- `fasterModel`: the model used for internal helper tasks such as summarizing context or classifying tasks.
If `primaryModel` or `fasterModel` is not explicitly defined, it inherits the top-level properties.
### Merging Logic
Overrides in `primaryModel` or `fasterModel` are merged with the top-level defaults:

- Simple fields (ID, base URL, API key, API type) are replaced by the override if present.
- Headers (`extraHeaders`) are merged: headers defined in the override are added to the top-level headers.
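The merge rules above can be sketched as follows. This is an illustrative helper, not Junie's actual implementation, and the property names (`baseUrl`, `apiKey`, `apiType`, `extraHeaders`) are assumptions based on the fields described in this section:

```python
def merge_model(defaults: dict, override: dict) -> dict:
    """Merge a model-role override onto the top-level profile defaults."""
    merged = dict(defaults)
    # Simple fields: the override replaces the default when present.
    for field in ("id", "baseUrl", "apiKey", "apiType"):
        if field in override:
            merged[field] = override[field]
    # extraHeaders: override headers are added on top of the defaults.
    merged["extraHeaders"] = {
        **defaults.get("extraHeaders", {}),
        **override.get("extraHeaders", {}),
    }
    return merged

defaults = {
    "baseUrl": "http://localhost:11434/v1",
    "apiType": "OpenAICompletion",
    "extraHeaders": {"X-Team": "dev"},
}
override = {"id": "qwen2.5-coder:7b", "extraHeaders": {"X-Trace": "on"}}
print(merge_model(defaults, override))
```

Note that the base URL and API type survive unchanged, while the override's header is added alongside the top-level one rather than replacing it.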
## Supported API Types
Junie supports the following API formats for custom models:
- `OpenAICompletion`
- `OpenAIResponses`
- `Google`
- `Anthropic`
## Example Profile
Below is an example of a profile named `local-ollama.json`:
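A minimal sketch of what such a profile could look like, assuming a local Ollama endpoint. The property names (`baseUrl`, `apiType`, `extraHeaders`, `id`) are assumptions derived from the fields described above, not a confirmed schema:

```json
{
  "baseUrl": "http://localhost:11434/v1",
  "apiType": "OpenAICompletion",
  "extraHeaders": { "X-Example": "demo" },
  "primaryModel": { "id": "qwen2.5-coder:7b" },
  "fasterModel": { "id": "qwen2.5-coder:1.5b" }
}
```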
In this example, the primary model will be qwen2.5-coder:7b, and the faster model will be qwen2.5-coder:1.5b. Both will use the same base URL, API type, and extra headers.
## Using Custom Models
Once a profile is created, you can select it using the `/model` command or the `--model` flag. Custom models are identified by a `custom:` prefix followed by the profile ID:
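For instance, assuming the CLI executable is invoked as `junie` and a profile file named `local-ollama.json` exists, selection could look like:

```
# Via the command-line flag when starting the CLI
junie --model custom:local-ollama

# Or inside an interactive session
/model custom:local-ollama
```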
In the interactive TUI, custom models appear in the model selection list after the built-in providers.