OpenCode (AI)
Configure OpenCode for all AI features in Copilot
OpenCode Integration
Copilot uses OpenCode for all AI-backed features: code analysis, progress detection, error detection, PR descriptions, and the copilot agent. OpenRouter has been removed in favor of OpenCode.
Why OpenCode
- Multi-provider: OpenCode supports 75+ LLM providers (Anthropic, OpenAI, Gemini, local models via Ollama, etc.), so you can switch between paid and free models without changing code.
- Single backend: One server (OpenCode) handles API keys and model routing; this action only needs the server URL and model name.
- Consistent API: The same configuration works for GitHub Actions and local CLI.
Requirements
- The OpenCode server must be running and reachable (e.g. `http://localhost:4096` or your deployed URL).
- Model in `provider/model` format (e.g. `opencode/kimi-k2.5-free`, `anthropic/claude-3-5-sonnet`).
- API keys are configured on the OpenCode server (not in this action). OpenCode reads them from environment variables (and optionally from `~/.local/share/opencode/auth.json` if you use the `/connect` command in the TUI). When the action starts the server with `opencode-start-server: true`, it passes the job's `env` to the OpenCode process, so any provider key you set in the workflow is available to OpenCode.
How OpenCode expects provider credentials
OpenCode reads provider API keys and options from environment variables (and optionally from `~/.local/share/opencode/auth.json` when using the `/connect` command in the TUI).
In GitHub Actions with `opencode-start-server: true`, the action starts a headless OpenCode server (`opencode serve`). There is no TUI and `/connect` is not available during the run. Credentials must be provided only via environment variables in the job's `env` (e.g. from secrets). The action passes the job's `env` to the OpenCode process, so any variable you set in the workflow is available to OpenCode.
You can also reference env vars in an `opencode.json` config via `{env:VAR_NAME}` (see OpenCode Config – Variables).
Provider credentials reference
The following tables list the environment variables OpenCode uses for each provider, as documented in OpenCode Providers. Set the ones required by your chosen provider in the job's `env` (e.g. GitHub Actions `env:` or secrets).
Single API-key providers
| Provider | Environment variable | Notes |
|---|---|---|
| OpenAI | OPENAI_API_KEY | |
| Anthropic | ANTHROPIC_API_KEY | |
| OpenRouter | OPENROUTER_API_KEY | |
| OpenCode Zen | (via /connect or API key in TUI) | opencode.ai/auth |
| Groq | GROQ_API_KEY | |
| DeepSeek | (via /connect; store key in auth or config) | |
| 302.AI | (via /connect) | |
| Baseten | (via /connect) | |
| Cerebras | (via /connect) | |
| Cloudflare AI Gateway | See multi-var below | |
| Cortecs | (via /connect) | |
| Deep Infra | (via /connect) | |
| Fireworks AI | (via /connect) | |
| Helicone | (via /connect) | |
| Hugging Face | (via /connect) | |
| IO.NET | (via /connect) | |
| Moonshot AI | (via /connect) | |
| MiniMax | (via /connect) | |
| Nebius Token Factory | (via /connect) | |
| Ollama Cloud | (via /connect) | |
Many of the "via /connect" providers also accept an API key from config using `{env:PROVIDER_API_KEY}` in `opencode.json`; the exact env name may follow the provider's SDK (e.g. `FIREWORKS_API_KEY`). For CI with `opencode-start-server: true`, prefer setting the key in the job's `env` and, if needed, defining the provider in `opencode.json` with `"apiKey": "{env:YOUR_SECRET_ENV}"` (see the sketch below). If a provider cannot be configured via env and you need `/connect`, see "If you need /connect or providers not exposed via env" above.
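As a sketch of that pattern, here is a hypothetical provider entry in `opencode.json` that reads the key from the job's `env`. The provider name, npm package, `baseURL`, and the placement of `apiKey` under `options` are illustrative assumptions (they follow the openai-compatible pattern used in the Ollama example below); only the `{env:...}` syntax and the `apiKey` field come from the paragraph above:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "fireworks": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Fireworks AI",
      "options": {
        "baseURL": "https://api.fireworks.ai/inference/v1",
        "apiKey": "{env:FIREWORKS_API_KEY}"
      }
    }
  }
}
```

Then set `FIREWORKS_API_KEY` in the workflow `env` (e.g. from a secret) so the server started by the action can read it.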
Multi-variable / special auth
| Provider | Environment variables | Notes |
|---|---|---|
| Amazon Bedrock | AWS_ACCESS_KEY_ID + AWS_SECRET_ACCESS_KEY, or AWS_PROFILE, or AWS_BEARER_TOKEN_BEDROCK; optional: AWS_REGION | IAM, profile, or Bedrock bearer token. Docs. |
| Azure OpenAI | API key via /connect or config; AZURE_RESOURCE_NAME | Resource name is part of the endpoint URL. |
| Azure Cognitive Services | API key via /connect or config; AZURE_COGNITIVE_SERVICES_RESOURCE_NAME | |
| Cloudflare AI Gateway | CLOUDFLARE_ACCOUNT_ID, CLOUDFLARE_GATEWAY_ID, CLOUDFLARE_API_TOKEN | |
| Google Vertex AI | GOOGLE_CLOUD_PROJECT; optional: VERTEX_LOCATION, GOOGLE_APPLICATION_CREDENTIALS (path to service account JSON) | Or use gcloud auth application-default login. |
| GitLab Duo | GITLAB_TOKEN (Personal Access Token); optional: GITLAB_INSTANCE_URL, GITLAB_AI_GATEWAY_URL | Self-hosted: set GITLAB_INSTANCE_URL. |
Local / no API key
| Provider | Configuration | Notes |
|---|---|---|
| Ollama | opencode.json with options.baseURL (e.g. http://localhost:11434/v1) | No env key; local server. |
| LM Studio | Same: baseURL (e.g. http://127.0.0.1:1234/v1) | |
| llama.cpp (llama-server) | Same: baseURL (e.g. http://127.0.0.1:8080/v1) | |
Using local providers (Ollama, LM Studio, etc.) with this action
This action does not require API keys or block any provider. Local providers (Ollama, LM Studio, llama.cpp) work the same way: you set `opencode-model` to `provider/model-id` (e.g. `ollama/llama2`, or the model ID you use in LM Studio). No credentials are needed.
- With `opencode-start-server: true`: The action starts OpenCode with the repo as the working directory, so OpenCode loads config from the project (e.g. `opencode.json` in the repo root). Add a `provider` entry for Ollama/LM Studio/llama.cpp with the right `baseURL` in `opencode.json`. The runner must have the local server (Ollama, LM Studio, etc.) already running and reachable at that URL, for example on a self-hosted runner where Ollama/LM Studio is installed, or in a job that starts the local server in a previous step and then runs this action (see the workflow sketch after the config example below).
- With your own OpenCode server (`opencode-server-url`): Point the action at a server that already has the local provider configured in its OpenCode config. No API key is required for that provider.
Example `opencode.json` in the repo (for `opencode-start-server: true` with Ollama on the runner):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": { "baseURL": "http://localhost:11434/v1" },
      "models": { "llama2": { "name": "Llama 2" } }
    }
  }
}
```
Then set opencode-model: 'ollama/llama2' (or the model ID you defined). See OpenCode Providers – Ollama for details.
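A minimal workflow sketch for the "start the local server in a previous step" approach. The Ollama install URL and the `ollama serve` / `ollama pull` commands are standard Ollama usage, but the model choice and the sleep are illustrative assumptions, not part of this action:

```yaml
jobs:
  copilot:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Start Ollama
        run: |
          # Assumption: install Ollama on the runner and make the model available locally
          curl -fsSL https://ollama.com/install.sh | sh
          ollama serve > /tmp/ollama.log 2>&1 &   # no-op if the installer already started it
          sleep 5                                  # give the server a moment to come up
          ollama pull llama2
      - uses: vypdev/copilot@master
        with:
          opencode-start-server: true
          opencode-model: 'ollama/llama2'
```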
Example: GitHub Actions with multiple providers
Set the variables for the provider you use in the workflow env. Only one primary provider is needed for the model you choose:
```yaml
- uses: vypdev/copilot@master
  env:
    # Option A: Anthropic
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
    # Option B: OpenAI
    # OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
    # Option C: OpenRouter (many models)
    # OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
    # Option D: Amazon Bedrock
    # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    # AWS_REGION: us-east-1
  with:
    opencode-start-server: true
    opencode-model: 'anthropic/claude-3-5-sonnet'
```
For the full list of providers and any new env var names, see the official OpenCode Providers documentation.
GitHub Action inputs
| Input | Description | Default |
|---|---|---|
| opencode-server-url | OpenCode server URL | http://localhost:4096 |
| opencode-model | Model in provider/model format | opencode/kimi-k2.5-free |
| opencode-start-server | If true, the action starts an OpenCode server at the beginning of the job and stops it when the job ends. No need to install or run OpenCode yourself. Pass provider API keys via the job's env (see How OpenCode expects provider credentials). | true |
Example (using your own OpenCode server):
```yaml
- uses: vypdev/copilot@master
  with:
    opencode-server-url: 'http://your-opencode-host:4096'
    opencode-model: 'anthropic/claude-3-5-sonnet'
```
Example (action starts and stops OpenCode for you; no separate server needed). Set the provider API key in `env` using the variable name OpenCode expects (e.g. `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, or `OPENROUTER_API_KEY`):

```yaml
- uses: vypdev/copilot@master
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }} # or OPENAI_API_KEY, OPENROUTER_API_KEY, etc.
  with:
    opencode-start-server: true
    opencode-model: 'anthropic/claude-3-5-sonnet'
```
Environment variables (CLI / local)
- `OPENCODE_SERVER_URL` – OpenCode server URL.
- `OPENCODE_MODEL` – Model in `provider/model` format.
CLI options
- `--opencode-server-url <url>` – Override the OpenCode server URL.
- `--opencode-model <model>` – Override the model.

For the `copilot` command:

- `--output <format>` – Output format: `text` (default) or `json`. Use `json` to get `{ response, sessionId, diff }` for programmatic use.
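A quick local sketch, assuming the CLI entry point is invoked as `copilot` (the binary name here is an assumption; adapt it to however you run the CLI) and an OpenCode server is already running on the default URL:

```bash
# Option 1: configure via environment variables
export OPENCODE_SERVER_URL="http://localhost:4096"
export OPENCODE_MODEL="opencode/kimi-k2.5-free"

# Option 2: override per run and capture machine-readable output
# (writes { response, sessionId, diff } as JSON for programmatic use)
copilot --opencode-model 'anthropic/claude-3-5-sonnet' --output json > result.json
```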
Running OpenCode
- GitHub Actions – managed server (easiest): Set `opencode-start-server: true`. The action will start an OpenCode server at the beginning of the job (`npx opencode-ai serve` on port 4096), wait until it is healthy, run the rest of the job using that server, and stop the server when the job ends. You do not need to install or run OpenCode yourself. Set the provider credentials in the job's `env` using the variable names OpenCode expects (see Provider credentials reference above).
- Local / self-hosted: Install OpenCode and run the server, e.g. `npx opencode-ai serve` or `opencode serve --port 4096`.
- CI with your own server: Run OpenCode in a job (e.g. in a container or as a service) and set `opencode-server-url`, or point `opencode-server-url` to a shared OpenCode instance your org hosts. A sketch follows below.
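A minimal sketch of the "own server in CI" pattern, starting OpenCode in a previous step and pointing the action at it. Only `npx opencode-ai serve`, the port, and `opencode-server-url` come from this page; the readiness loop is an illustrative assumption:

```yaml
jobs:
  copilot:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Start OpenCode server
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          npx opencode-ai serve --port 4096 &
          # Assumption: poll until the server accepts connections
          for i in $(seq 1 30); do
            curl -s http://localhost:4096 >/dev/null 2>&1 && break
            sleep 2
          done
      - uses: vypdev/copilot@master
        with:
          opencode-server-url: 'http://localhost:4096'
          opencode-model: 'anthropic/claude-3-5-sonnet'
```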
Features using OpenCode
- AI pull request description – Generates PR descriptions from issue and diff.
- Think / reasoning – Deep code analysis and change proposals (OpenCode Plan agent).
- Comment translation – Automatically translates issue and PR review comments to the configured locale (e.g. English, Spanish) when they are written in another language. Uses the `issues-locale` and `pull-requests-locale` inputs.
- Check progress – Progress detection from branch vs issue description (OpenCode Plan agent).
- Bugbot (potential problems) – Analyzes branch vs base and posts findings as comments on the issue and review comments on the PR; updates issue comments and marks PR review threads as resolved when the model reports fixes. Runs on push or via single action / CLI. Configure with `bugbot-severity` (minimum severity: `info`, `low`, `medium`, `high`) and `ai-ignore-files` (paths to exclude).
- Bugbot autofix – When you comment on an issue or PR asking to fix one or more reported findings (e.g. "fix it", "fix all"), OpenCode decides which findings to fix, applies changes in the workspace, and runs the verify commands you set in `bugbot-fix-verify-commands` (e.g. build, test, lint); the action commits and pushes if all pass. Only organization members or the repo owner can trigger it. Requires OpenCode running from the repo (e.g. `opencode-start-server: true`). See Features → Bugbot autofix and Troubleshooting → Bugbot autofix.
- Do user request – When you comment asking to perform any change in the repo (e.g. "add a test", "refactor this"), OpenCode applies the changes, runs the same verify commands, and the action commits and pushes. Same permission as Bugbot autofix (org member or repo owner).
- Copilot – Code analysis and manipulation agent (OpenCode Build agent).
- Recommend steps – Suggests implementation steps from the issue description (OpenCode Plan agent).
All of these use the same OpenCode server and model configuration.
How Think works on issue comments
When someone comments on an issue or PR review, OpenCode can reply with AI-generated answers (Think feature). The trigger depends on the issue type:
- Issues labeled `question` or `help`: OpenCode responds to any comment on the issue. No mention required: if the user needs help, simply add a comment and OpenCode will answer.
- Other issues: You must mention the bot user (the user of the PAT) in the comment, e.g. `@your-bot-user how do I configure X?` Only then does OpenCode respond.
This lets question/help issues behave as a support channel where users can ask without knowing the bot's username.
How comment translation works
When someone comments on an issue or PR review, the action checks if the text is in the configured locale (`issues-locale` for issues, `pull-requests-locale` for PRs). If the comment is in another language, OpenCode translates it and updates the comment with the translation (appending the original text below). A hidden marker prevents re-translating the same comment. To force a new translation, delete the comment and post again.
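A small configuration sketch. Passing locale names as plain strings is an assumption based on the "e.g. English, Spanish" wording above; check the inputs documentation for the exact accepted values:

```yaml
- uses: vypdev/copilot@master
  with:
    opencode-start-server: true
    opencode-model: 'anthropic/claude-3-5-sonnet'
    issues-locale: 'English'         # assumption: locale given by name
    pull-requests-locale: 'English'
```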
How "check progress" works (e.g. "Progress 30%" in the issue)
Progress is updated automatically on every push: when the commit (push) workflow runs (e.g. on: push), the action computes size and progress from the branch diff, updates the progress label on the issue, and applies the same label to any open PRs for that branch. No separate "check progress" workflow is required.
You can also run the progress check on demand with `single-action: check_progress_action` and `single-action-issue: <number>` (or the CLI `check-progress -i <number>`); a workflow sketch follows after the list. The flow is:
- Trigger – Either the push workflow (automatic) or a workflow that sets the single action to check progress (or the CLI with an issue number).
- Data gathering – The action reads the issue description, finds the branch (from the push or by searching for a branch linked to the issue), and gets the diff of that branch vs the development branch (e.g. `develop`). The OpenCode agent runs in the workspace and computes the diff itself.
- AI analysis (OpenCode) – The OpenCode Plan agent compares what was requested in the issue vs what is in the diff and returns `progress` (0–100) and `summary`.
- Result – The progress label is set on the issue (and on any open PRs for that branch). When run from a workflow, PublishResultUseCase posts a comment on the issue with the percentage and summary.
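A sketch of the on-demand trigger, assuming a manually dispatched workflow with an issue-number input (`workflow_dispatch` and the input name are illustrative; `single-action` and `single-action-issue` come from this page):

```yaml
on:
  workflow_dispatch:
    inputs:
      issue:
        description: 'Issue number to check'
        required: true
jobs:
  check-progress:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: vypdev/copilot@master
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        with:
          opencode-start-server: true
          opencode-model: 'anthropic/claude-3-5-sonnet'
          single-action: check_progress_action
          single-action-issue: ${{ github.event.inputs.issue }}
```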
How Bugbot works (potential problems)
Bugbot runs when the push (commit) workflow runs, or on demand via the single action `detect_potential_problems_action` (with `single-action-issue`) or the CLI `detect-potential-problems -i <issue>`.
- Trigger – Push to a branch linked to an issue, or a workflow/CLI run with the single action and issue number.
- Analysis – OpenCode Plan compares the branch diff vs the base and returns a list of findings (title, severity, file, line, description). It also receives previously reported findings (from issue and PR comments) and can mark some as resolved.
- Issue – New findings are posted as comments on the issue; when a finding is resolved, the corresponding comment is updated (e.g. "Resolved").
- Pull request – For each finding, the action posts a review comment on the PR at the right file/line. When OpenCode reports a finding as resolved, the action marks that review thread as resolved.
- Config – Use `bugbot-severity` (e.g. `medium`) so only findings at or above that severity are posted; use `ai-ignore-files` to exclude paths from analysis and reporting.
Bugbot autofix: From an issue comment or PR review comment, you can ask the bot to fix one or more findings, in any language (e.g. "fix it", "fix all", or the Spanish "arregla las vulnerabilidades", "fix the vulnerabilities"). OpenCode interprets your comment, applies fixes in the workspace, and the action runs `bugbot-fix-verify-commands` (e.g. build, test, lint); if all pass and there are changes, it commits and pushes and marks those findings as resolved. On issue comments, the action resolves the branch from an open PR that references the issue. Workflows that run on `issue_comment` or `pull_request_review_comment` need `contents: write` so the action can push (see the permissions sketch below). See Troubleshooting → Bugbot autofix.
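A minimal sketch of such a workflow. Only `contents: write` is stated on this page; the extra permissions and the newline-separated format of `bugbot-fix-verify-commands` are assumptions, so check the action's input docs:

```yaml
on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
permissions:
  contents: write       # required so the action can commit and push fixes
  issues: write         # assumption: needed to update issue comments
  pull-requests: write  # assumption: needed to resolve review threads
jobs:
  autofix:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: vypdev/copilot@master
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        with:
          opencode-start-server: true
          opencode-model: 'anthropic/claude-3-5-sonnet'
          bugbot-fix-verify-commands: |
            npm run build
            npm test
            npm run lint
```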
See Issues → Bugbot and Pull Requests → Bugbot for more.
Can we avoid opencode-server-url and use a "master" OpenCode server?
Current situation
- OpenCode is open-source and self-hosted: there is no public "master" server or official opencode.ai cloud API for this action to point to by default.
- The server used by the action is the one you (or your org) configure in `opencode-server-url`.
Options
- Keep `opencode-server-url` (recommended) – For local/CLI, the default `http://localhost:4096` works without setting a URL. For GitHub Actions you need an OpenCode server reachable from the runner, so you must set the URL (or use an org default via secret).
- Shared organization server – Deploy OpenCode on an internal or cloud server with your provider API keys. In the workflow, pass that host as `opencode-server-url` (e.g. via secret). One "master" server for all repos that use the action.
- Managed server inside the job – The action supports `opencode-start-server: true`: it starts OpenCode at the beginning of the job (`npx opencode-ai serve`), waits until it is ready, runs the flow, and stops the server at the end. No need to install or run OpenCode manually; only provider API keys as secrets. Heavier (startup time, first-time download of `opencode-ai`) but no external server required.
Summary – You need some OpenCode server (yours or shared). You can use one server for many repos (org "master") and keep the default http://localhost:4096 for local development while requiring opencode-server-url (or an org default) in CI.
Model format
Use `provider/model` as in OpenCode's config, for example:

- `opencode/kimi-k2.5-free` (free, Kimi K2.5)
- `openai/gpt-4o-mini`
- `openai/gpt-4o`
- `anthropic/claude-3-5-sonnet-20241022`
- `google/gemini-2.0-flash`
Check OpenCode's docs or /config/providers on your server for the exact model IDs.
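A quick way to inspect what your server exposes; the endpoint path comes from the line above, while piping through `jq` and the response shape are assumptions:

```bash
# List the providers and model IDs known to the running OpenCode server
curl -s http://localhost:4096/config/providers | jq .
```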
Troubleshooting
- "Missing required AI configuration": Set
opencode-server-urlandopencode-model(or env vars). - Connection errors: Ensure the OpenCode server is running and reachable from the runner (network/firewall, correct URL and port).
- Auth errors: Ensure the provider API key is set in the environment with the name OpenCode expects (e.g.
OPENAI_API_KEY,ANTHROPIC_API_KEY,OPENROUTER_API_KEY). When usingopencode-start-server: true, pass it via the job'senv. See OpenCode Providers for other providers.