One OpenCode session. Many skills. Every step from orientation to commit — documented, composable, and in git.
Each workflow step is a skill — a markdown file the agent reads on demand. Some fire every session; others are reached for as needed.
Skills are plain markdown. The agent reads one only when the task calls for it.
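As a sketch of the shape (the frontmatter fields, name, and steps here are illustrative, not a verbatim skill from this setup):

```markdown
---
name: orient
description: Run at session start to decide what deserves focus right now.
---

# Orient

1. Query open MRs, issues, and calendar in parallel.
2. Cross-reference results; drop duplicate work items.
3. Present 1-4 items, scaled to reported energy.
```

The `description` is what the agent matches against when deciding whether to load the file at all.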
Before touching any code, I ask: what actually needs my focus right now?
The skill runs several queries in parallel before forming any view:
MRs and issues are cross-referenced so the same work item isn't shown twice.
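The cross-referencing step can be as small as a jq de-duplication over the merged query results. A hypothetical sketch, assuming each item carries the work-item ID it belongs to:

```shell
#!/bin/sh
# Hypothetical: merge MR and issue query results, keep one entry per work item.
mrs='[{"id":"ENG-42","title":"Fix login","source":"mr"}]'
issues='[{"id":"ENG-42","title":"Fix login","source":"issue"},
         {"id":"ENG-51","title":"Add audit log","source":"issue"}]'

# Slurp both arrays, concatenate, and keep one entry per id.
printf '%s\n%s\n' "$mrs" "$issues" | jq -s 'add | unique_by(.id)'
# -> two items: ENG-42 appears once, ENG-51 once
```

The real queries would come from the tracker's CLI or API; only the de-duplication is shown here.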
Energy is multidimensional — cognitive, social, sensory, emotional. The two biggest session signals are:
Session duration alone is a weak signal — a long autonomous session can be low load, while a short, highly interactive one can be high load. The table is a starting point, not a formula.
Low — top 1 work + top 1 personal item, no action recommended unless time-sensitive
Moderate — 2–3 items, mix of work and personal, prefer quick wins
Full — 3–4 items, includes longer-horizon work worth starting
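The tiers above can be sketched as a tiny routing function; the function name and recommendation strings are invented for illustration:

```shell
#!/bin/sh
# Hypothetical sketch: map reported energy level to what gets surfaced.
recommend() {
  case "$1" in
    low)      echo "1 work + 1 personal; act only if time-sensitive" ;;
    moderate) echo "2-3 items, mixed; prefer quick wins" ;;
    full)     echo "3-4 items, include longer-horizon work" ;;
    *)        echo "unknown energy level: $1" >&2; return 1 ;;
  esac
}

recommend moderate   # prints "2-3 items, mixed; prefer quick wins"
```

In practice this lives as prose in the skill file, not code; the point is that the mapping is explicit rather than left to the model's mood.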
Before making a decision or writing a ticket, I need to know what's already been decided, by whom, and when. The conversations skill searches four sources in order of fidelity.
People profiles and distilled decisions — synthesized by minutes as part of its post-processing. Fast, structured, low noise.
minutes search — full-text across recordings and Slack digests, also processed by minutes.
Real-time for things too recent to be ingested — today's threads, DMs, channel messages. Also covers community/support channels.
gws
Email threads, tickets, procurement — anything not in Slack or meetings.
Before writing a plan or ticket, I use structured frameworks to make sure I'm solving the right problem.
The skill routes automatically based on the situation:
Root cause unclear, recurring issue, or needs reframing
Multiple valid options, calibrating effort, prioritising
Why does this system behave this way?
When obvious approaches are exhausted
Say "use inversion" or "cynefin" to go straight to a framework. Otherwise describe the situation and the skill picks the right category.
Each framework file has multiple models — it picks the most relevant and applies it to your actual context, not just a generic description.
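An illustrative (not verbatim) routing section from such a skill might read:

```markdown
## Routing

| Situation                                       | Framework file  |
| ----------------------------------------------- | --------------- |
| Root cause unclear, recurring, needs reframing  | problem-framing |
| Multiple valid options, calibrating effort      | decision-making |
| "Why does this system behave this way?"         | systems         |
| Obvious approaches exhausted                    | creativity      |

Named framework ("use inversion", "cynefin") -> load that file directly.
```

The file names in the right column are hypothetical; the mechanism is what matters: a plain table the model reads before picking a model to apply.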
Implementation without a written artifact is a gamble. The writing skill routes to the right template for what's needed:
A ticket is for the implementer. A PRD is for alignment. A project update is for stakeholders. The writing skill enforces that separation.
Load before any change spanning multiple files or involving design decisions.
Self-review and the commit approval gate live in commit, not here — so plan stays lean.
After each session — capture non-obvious discoveries before context is lost.
Session continuity across compaction and handoffs.
Maintains .opencode/context-log.md — updated at session start, after each commit, and on compaction.
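A sketch of what the log's shape might be — the entries, branch, and hash below are invented for illustration:

```markdown
## 2025-06-01 session start
- Branch: feature/billing-retry (resuming from yesterday's handoff)
- Open question: is retry backoff capped at 5 attempts or 5 minutes?

## After commit abc1234
- Decided: cap at 5 attempts; backoff doubles from 2s
- Gotcha: the retry worker silently swallows non-HTTP errors
```

Because it's updated at fixed points (session start, each commit, compaction), a fresh session or a post-compaction agent can rebuild state by reading one file.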
The review skill is a coordinator — it never reviews the diff itself. It classifies, dispatches specialists in parallel, handles escalations, then merges findings.
Three always dispatched, three conditional:
Specialists return findings and escalations. Follow-up agents handle escalations — their own escalations are discarded to prevent loops.
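The dispatch-and-merge shape can be sketched in shell; the specialists here are stand-in stubs writing to files, not the real subagents:

```shell
#!/bin/sh
# Stub specialists: each writes its findings to its own file, in parallel.
review_security()    { echo "security: ok" > /tmp/f_security; }
review_performance() { echo "performance: N+1 query in loader" > /tmp/f_performance; }
review_style()       { echo "style: ok" > /tmp/f_style; }

review_security & review_performance & review_style &
wait                            # coordinator blocks until every specialist returns

cat /tmp/f_security /tmp/f_performance /tmp/f_style   # merge findings
```

The escalation rule maps onto this shape naturally: follow-up agents get dispatched the same way, but their own escalation output is dropped instead of re-dispatched, which is what breaks the loop.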
If the diff touches views, templates, controllers, or UI — the qa skill runs automatically via Firefox DevTools before findings are surfaced.
Works on staged changes, a commit, a branch, or an MR URL. On MRs: checks out the branch, reads prior review history, posts inline comments via the GitLab API.
My earlier setup wrapped every data source in a persistent MCP server — which meant curating, maintaining, and keeping each wrapper in sync with APIs that keep changing.
context7 and firefox-devtools remain — both expose capabilities that genuinely can't be replicated with a CLI call. That's the bar: if a CLI or documented API exists, a skill beats an MCP.
minutes search, gws gmail, Slack curl — documented CLIs and APIs, called directly.
The issues capability calls the Linear GraphQL API with graphqurl — no wrapper, no server.
gh, jq, rg — the agent already knows these; skills tell it when and how.

A skill is plain markdown. When an API changes, update two lines. No server to redeploy, no wrapper to maintain, no schema to keep in sync. The agent reads the skill, picks the right tool, and calls it directly.
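As an illustration of how thin that layer is, a skill can embed the exact invocation inline. The query and env var below are assumptions for the sketch, not the real skill's contents:

```markdown
## Fetch my open issues

Call Linear's GraphQL API directly with graphqurl:

    gq https://api.linear.app/graphql \
      -H "Authorization: $LINEAR_API_KEY" \
      -q 'query { issues { nodes { identifier title } } }'

If the API changes, edit the query above. Nothing else to redeploy.
```

That indented block is the entire "integration": a documented endpoint, a documented CLI, and two lines of markdown telling the agent when to use them.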
A skill is not a prompt. It's a workflow — with routing logic, gates, subagent delegation, and direct tool calls. The model reads it like a runbook and executes it.
After each session, non-obvious discoveries are extracted into AGENTS.md or new skill files. Every session makes the environment a little smarter. Managed by chezmoi — every improvement is a git commit.
One chezmoi apply deploys the entire AI environment — skills, config, binaries, and services.
Skills are opt-in by trigger.
The agent reads a skill only when the task matches its description. No context bloat — each skill is loaded on demand, not pre-loaded for every session.
OpenCode runs as a persistent web service on port 4096. Managed by a macOS LaunchAgent — automatically restarted on apply.
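A minimal LaunchAgent of that shape — the label, binary path, and arguments are invented for illustration, not copied from this setup:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.example.opencode</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/opencode</string>
    <string>serve</string>
    <string>--port</string>
    <string>4096</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
</dict>
</plist>
```

`KeepAlive` is what gives the "automatically restarted" behavior: launchd relaunches the process if it exits.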
opencode.json is validated against its schema before the service restarts. Bad JSON → apply fails loud, service stays up.
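The validate-before-restart gate can be sketched with jq as a stand-in for full schema validation; a real setup would check against the published schema, but the fail-loud shape is the same:

```shell
#!/bin/sh
# Gate: refuse to touch the running service if the new config is not valid JSON.
validate_config() {
  jq empty "$1" 2>/dev/null || { echo "invalid config: $1" >&2; return 1; }
}

echo '{"theme": "dark"}' > /tmp/opencode-good.json
echo '{"theme": '        > /tmp/opencode-bad.json

validate_config /tmp/opencode-good.json && echo "ok to restart"
validate_config /tmp/opencode-bad.json  || echo "apply fails loud, service stays up"
```

The key property: the validation runs before the service is touched, so a broken edit never takes the running instance down with it.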
OpenClaw (by @steipete) is a genuinely compelling alternative. A personal AI assistant that runs on your machine, accessible from WhatsApp, Telegram, iMessage, or Discord — 24/7.
Two LaunchAgents, always running, managed by chezmoi:
chezmoi apply. Every workflow covered.
Every skill lives in dotfiles/dot_agents/skills/, deployed to ~/.agents/skills/.
The agent already knows how to use documented tools. Skills tell it when, in what order, and with what guardrails. chezmoi keeps the whole thing in git — reproducible, auditable, portable.
athal7/dotfiles · managed by chezmoi · shipped with chezmoi apply