April 2026

My AI Workflow

One OpenCode session. Many skills. Every step from orientation to commit — documented, composable, and in git.

The Workflow

One session, many skills

Each workflow step is a skill — a markdown file the agent reads on demand. Some fire every session; others are reached for as needed.

Every session:
  🧠 attention → 💬 conversations → 📋 plan → 🔴🟢 tdd → 🔍 review → 🚀 commit + push

Reach for when needed:
  🗺️ thinking-tools · 📐 writing · 🏛️ architecture · 🧑‍🔬 expert agent · 📚 learn

Skills are plain markdown. The agent reads one only when the task calls for it.

Step 1 — Orientation

Start with attention

Before touching any code, I ask: what actually needs my focus right now?

The skill runs several queries in parallel before forming any view:

  • OpenCode session DB — user message count, peak concurrent sessions
  • Calendar — what's coming up, how much gap until next event
  • Reminders — overdue, today, undated
  • Chat — recent mentions waiting on me (last 8h)
  • Merge requests — review requests, needs-action, conflicts, CI failures
  • Issues — In Progress and unstarted work assigned to me

MRs and issues are cross-referenced, so the same work item isn't shown twice.
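The fan-out can be sketched in shell — the placeholder query function below stands in for the real sqlite3, calendar, reminders, chat, and MR commands, which are specific to my setup:

```shell
# Sketch of the parallel fan-out: every source is queried in the
# background, and no view forms until all of them have returned.
# "query" is a placeholder for one real source-specific command.
out=$(mktemp -d)
query() { echo "[$1] $2" > "$out/$1"; }

query sessions "user messages: 42"  &
query calendar "next event in 90m"  &
query mrs      "2 awaiting review"  &
wait    # block until every source has answered

cat "$out"/sessions "$out"/calendar "$out"/mrs
```

The point is the wait: the skill gathers all signals first, then reasons over them once.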

Energy accounting

Energy is multidimensional — cognitive, social, sensory, emotional. The two biggest session signals are:

  • User message count — each message = active decision-making. <30 available, 30–80 moderate, 80–150 caution, >150 low
  • Peak concurrent sessions — a peak of 3–4 shifts the estimate one level toward Low; 5+ shifts it two

Session duration is a weak signal — a long autonomous session is low load; a short highly-interactive one is high load. The table is a starting point, not a formula.
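As a rough sketch of that heuristic — the thresholds are the ones from the table above, and as noted they're a starting point, not a formula:

```shell
# Map user-message count to an energy level, then shift it toward
# Low based on peak concurrent sessions. Levels: 0 available,
# 1 moderate, 2 caution, 3 low.
energy_level() {
  msgs=$1; peak=$2
  if   [ "$msgs" -lt 30 ];  then level=0   # <30: available
  elif [ "$msgs" -le 80 ];  then level=1   # 30-80: moderate
  elif [ "$msgs" -le 150 ]; then level=2   # 80-150: caution
  else                           level=3   # >150: low
  fi
  if   [ "$peak" -ge 5 ]; then level=$((level + 2))   # 5+ sessions: +2
  elif [ "$peak" -ge 3 ]; then level=$((level + 1))   # 3-4 sessions: +1
  fi
  [ "$level" -gt 3 ] && level=3            # clamp at low
  case $level in
    0) echo available ;;
    1) echo moderate ;;
    2) echo caution ;;
    3) echo low ;;
  esac
}

energy_level 45 2   # moderate
energy_level 45 5   # same count, high concurrency: low
```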

Output tuned to energy

Low — top 1 work + top 1 personal item, no action recommended unless time-sensitive

Moderate — 2–3 items, mix of work and personal, prefer quick wins

Full — 3–4 items, includes longer-horizon work worth starting

Step 1b — Context before acting

Gather signal with conversations

Before making a decision or writing a ticket, I need to know what's already been decided, by whom, and when. The conversations skill searches four sources in order of fidelity.

1. Knowledge base — people profiles and distilled decisions, synthesized by minutes as part of its post-processing. Fast, structured, low noise.

2. Meeting transcripts — minutes search: full-text across recordings and Slack digests, also processed by minutes.

3. Slack (live search) — real-time for things too recent to be ingested: today's threads, DMs, channel messages. Also covers community/support channels.

4. Gmail (gws) — email threads, tickets, procurement: anything not in Slack or meetings.

Decision guide

  • What has someone committed to? — KB → meetings
  • What was decided last week? — KB → minutes search
  • What did someone say this morning? — Slack search
  • Support ticket or email thread? — Gmail (gws)
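The guide can be read as a tiny routing function — the patterns below are illustrative stand-ins, not the skill's real matching logic:

```shell
# Toy router mirroring the decision guide: keyword patterns pick
# the source to search first. Patterns are illustrative only.
route_query() {
  case $1 in
    *committed*)               echo "KB → meetings" ;;
    *decided*)                 echo "KB → minutes search" ;;
    *"this morning"*|*today*)  echo "Slack search" ;;
    *ticket*|*email*)          echo "Gmail (gws)" ;;
    *)                         echo "KB first, then widen" ;;
  esac
}

route_query "what was decided last week"      # KB → minutes search
route_query "what did Sam say this morning"   # Slack search
```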
The KB and transcripts are both minutes output — the same pipeline at different fidelity levels. Search before you plan.

When needed — Clarity

Frame the problem with thinking-tools

Before writing a plan or ticket, I use structured frameworks to make sure I'm solving the right problem.

The skill routes automatically based on the situation:

Problem framing

Root cause unclear, recurring issue, or needs reframing

Decision making

Multiple valid options, calibrating effort, prioritising

Systems thinking

Why does this system behave this way?

Creative problem solving

When obvious approaches are exhausted

How it works

Say "use inversion" or "cynefin" to go straight to a framework. Otherwise describe the situation and the skill picks the right category.

Each framework file has multiple models — it picks the most relevant and applies it to your actual context, not just a generic description.

  # Example invocation
  "I keep solving the same bug in different places"
    → routes to problem.md
    → applies Five Whys to find the root cause
    → recommends a systemic fix, not a patch

When needed — Intent

Capture intent with writing

Implementation without a written artifact is a gamble. The writing skill routes to the right template for what's needed:

  • Ticket / Issue — a scoped unit of work: bug, feature, task
  • PRD — defining a feature before implementation
  • ADR — an architecture decision plus the rejected alternatives
  • Project update — status comms to stakeholders

Core principles
  • Lead with outcomes, not tasks — why before what
  • Specific scope — ambiguity becomes scope creep
  • Compress ruthlessly — every word earns its place
  • One artifact, one audience — never mix them

A ticket is for the implementer. A PRD is for alignment. A project update is for stakeholders. The writing skill enforces that separation.

Before implementing

Three skills for three moments

plan — load before any change spanning multiple files or involving design decisions.

  1. Research — explore codebase, git history, issues
  2. Write plan — files changing, approach, risks
  3. Present & STOP — wait for explicit approval before implementing

Self-review and commit approval gate live in commit, not here — so plan stays lean.

learn — after each session, capture non-obvious discoveries before context is lost:

  • Hidden file relationships & execution paths
  • API/tool quirks and workarounds
  • Debugging breakthroughs

A discovery goes to AGENTS.md if it's needed every session, or into a skill if it's only needed situationally.

context-log — session continuity across compaction and handoffs.

Maintains .opencode/context-log.md — updated at session start, after each commit, and on compaction.

  • Long sessions where review/QA need the full history
  • Context survives history summarization
  • Another agent can pick up mid-session
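A minimal sketch of a post-commit checkpoint append — the entry format here is an assumption; the skill defines the real structure:

```shell
# Append a timestamped checkpoint to the context log so a later
# agent (or a post-compaction session) can reconstruct state.
log=.opencode/context-log.md
mkdir -p "$(dirname "$log")"
{
  echo "## $(date -u +%Y-%m-%dT%H:%MZ) — post-commit checkpoint"
  echo "- commit: $(git rev-parse --short HEAD 2>/dev/null || echo n/a)"
  echo "- state: tests green, review pending"
  echo
} >> "$log"
tail -n 4 "$log"
```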
Each skill triggers at a specific moment — keeping instructions focused means the agent actually follows them.

Quality gate

Multi-specialist review

The review skill is a coordinator — it never reviews the diff itself. It classifies, dispatches specialists in parallel, handles escalations, then merges findings.

Three always dispatched, three conditional:

  • Correctness — always
  • Completeness — always
  • Maintainability — always
  • Performance — DB, loops, migrations
  • Security — auth, params, cookies, env
  • Conventions — ORM, caching, new models

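The dispatch might be sketched like this — the file-pattern triggers and the placeholder subagent call are illustrative, not the skill's real classification rules:

```shell
# Coordinator sketch: always-on specialists plus conditional ones
# chosen from the diff, all dispatched in parallel, then merged.
files=$(git diff --name-only HEAD 2>/dev/null || true)
specialists="correctness completeness maintainability"   # always
if echo "$files" | grep -qiE 'migrat|\.sql'; then
  specialists="$specialists performance"
fi
if echo "$files" | grep -qiE 'auth|cookie|\.env'; then
  specialists="$specialists security"
fi

findings=$(mktemp -d)
for s in $specialists; do
  # Placeholder for dispatching a specialist subagent on the diff.
  echo "[$s] no blocking findings" > "$findings/$s" &
done
wait
cat "$findings"/*    # the coordinator merges the parallel results
```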
Escalation loop

Specialists return findings and escalations. Follow-up agents handle the escalations; any escalations those follow-ups raise in turn are discarded, to prevent infinite loops.

Auto-QA

If the diff touches views, templates, controllers, or UI — the qa skill runs automatically via Firefox DevTools before findings are surfaced.

Merge request reviews

Works on staged changes, a single commit, a branch, or an MR URL. On MRs it checks out the branch, reads the prior review history, and posts inline comments via the GitHub API.

A change of approach

From MCP servers to skills

My earlier setup wrapped every data source in a persistent MCP server — which meant curating, maintaining, and keeping each wrapper in sync with APIs that keep changing.

The MCP tax
  • Every source needs a server process — ports, LaunchAgents, restart logic
  • Wrappers go stale when the underlying API changes — you maintain two surfaces
  • All context loaded all the time — no scoping to what the task actually needs
  • Hard to inspect, hard to version, hard to trust

MCPs that stayed

context7 and firefox-devtools remain — both expose capabilities that genuinely can't be replicated with a CLI call. That's the bar: if a CLI or documented API exists, a skill beats an MCP.

Skills use the API directly
  • conversations runs minutes search, gws gmail, Slack curl — documented CLIs and APIs directly
  • issues capability calls the Linear GraphQL API with graphqurl — no wrapper, no server
  • gh, jq, rg — the agent already knows these; skills tell it when and how
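For example, the Linear call is just an HTTP POST plus jq — sketched here with curl rather than graphqurl; the query shape and the LINEAR_API_KEY variable are assumptions, shown against a canned response:

```shell
# Parse a Linear GraphQL response into "IDENTIFIER  TITLE" lines.
parse_issues() {
  jq -r '.data.viewer.assignedIssues.nodes[] | "\(.identifier)  \(.title)"'
}

# Live call (needs a real API key):
#   curl -s https://api.linear.app/graphql \
#     -H "Authorization: $LINEAR_API_KEY" \
#     -H "Content-Type: application/json" \
#     -d '{"query":"{ viewer { assignedIssues { nodes { identifier title } } } }"}' \
#   | parse_issues

# Canned response showing the shape:
echo '{"data":{"viewer":{"assignedIssues":{"nodes":[{"identifier":"ENG-42","title":"Fix login"}]}}}}' \
  | parse_issues
```

No server process, no wrapper schema — just the documented API and two standard tools.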

The flexibility win

A skill is plain markdown. When an API changes, update two lines. No server to redeploy, no wrapper to maintain, no schema to keep in sync. The agent reads the skill, picks the right tool, and calls it directly.

Skills don't replace the API. They tell the agent how to use it.

The Pattern

Any repeatable workflow can become a skill

A skill is not a prompt. It's a workflow — with routing logic, gates, subagent delegation, and direct tool calls. The model reads it like a runbook and executes it.

Anatomy
  ---
  name: my-skill
  description: one sentence — when to invoke me
  ---

  Step 1: gather context (read files, run CLIs)
  Step 2: reason (route to sub-skill or framework)
  Step 3: act (spawn agents, call APIs, write artifacts)
  Step 4: gate (stop and verify before proceeding)

Skills compose
  • plan presents intent and gates on approval before implementing
  • commit runs tests + self-review + approval gate before every commit
  • review auto-triggers qa when UI is touched
  • attention dispatches sessions into other repos via the agent capability

Skills compound

After each session, non-obvious discoveries are extracted into AGENTS.md or new skill files. Every session makes the environment a little smarter. Managed by chezmoi — every improvement is a git commit.

The Infrastructure

chezmoi manages everything

One chezmoi apply deploys the entire AI environment — skills, config, binaries, and services.

  # dotfiles/
  dot_agents/skills/            ← edit here
    attention/SKILL.md          → ~/.agents/skills/
    plan/SKILL.md
    review/SKILL.md
    ...
  dot_config/opencode/
    opencode.json               ← MCPs, model, permissions
    AGENTS.md.tmpl              ← agent instructions (templated)
  .chezmoidata/packages.yaml    ← single package registry
    brews: [ripgrep, gh ...]
    github_releases: [...]
  .chezmoiscripts/              ← run on every apply
    brew bundle
    launchagent reload
    opencode restart

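One detail worth knowing: on apply, chezmoi renames source files by mapping the dot_ prefix to a leading dot. A toy version of that rule — illustrative, not chezmoi itself:

```shell
# Sketch of chezmoi's source-to-target naming: dot_ becomes "."
# at the start of the path and after every slash.
target_path() {
  echo "$1" | sed -e 's|^dot_|.|' -e 's|/dot_|/.|g'
}

target_path dot_agents/skills/attention/SKILL.md
# → .agents/skills/attention/SKILL.md
```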
Key insight

Skills are opt-in by trigger.

The agent reads a skill only when the task matches its description. No context bloat — each skill is loaded on demand, not pre-loaded for every session.

LaunchAgent

OpenCode runs as a persistent web service on port 4096. Managed by a macOS LaunchAgent — automatically restarted on apply.

Schema validation

opencode.json is validated against its schema before the service restarts. Bad JSON → apply fails loudly and the running service stays up.
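A sketch of that gate, using jq as a stand-in well-formedness check (the real setup validates against the opencode.json schema):

```shell
# Gate: only restart the service if the config parses.
validate_config() { jq empty "$1" 2>/dev/null; }

cfg=$(mktemp)
echo '{"model": "anthropic/claude"}' > "$cfg"

if validate_config "$cfg"; then
  echo "config ok — safe to restart"   # e.g. launchctl kickstart here
else
  echo "config invalid — keeping current service up" >&2
fi
```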

The Alternative

The allure of OpenClaw

OpenClaw (by @steipete) is a genuinely compelling alternative. A personal AI assistant that runs on your machine, accessible from WhatsApp, Telegram, iMessage, or Discord — 24/7.

What makes it seductive
  • Proactive — heartbeats, cron jobs, it reaches out to you
  • Persistent memory across all conversations, 24/7
  • Chat-native — control it from your phone over any messenger
  • Self-modifying — it can write and hot-reload its own skills mid-conversation
  • Open source — hackable, self-hostable, community-driven skills
"Your context and skills live on YOUR computer, not a walled garden."
— @danpeguine on X
The answer: Tailscale + PWA

Two LaunchAgents, always running, managed by chezmoi:

  # opencode-web.plist
  opencode web --port 4096                  ← persistent web UI

  # tailscale-serve.plist
  tailscale serve --bg \
    --https=4096 http://127.0.0.1:4096      ← HTTPS on the tailnet, any device

Same "anywhere" pitch, different model
  • Phone, tablet, laptop — install as a PWA, works offline-ish
  • No chat intermediary — full coding UI, not a Telegram bot
  • Secured by Tailscale — no public exposure, no tokens to manage
  • Skills and config still live in git, not the agent's own memory

The Full Picture

25 skills. One chezmoi apply. Every workflow covered.

Every skill lives in dotfiles/dot_agents/skills/, deployed to ~/.agents/skills/.

  • attention — orientation
  • conversations — chat, meetings, email
  • thinking-tools — frameworks
  • writing — artifacts
  • architecture — decisions
  • plan — approval gates
  • learn — session capture
  • context-log — session continuity
  • tdd — red/green/refactor
  • review — verify + code-review
  • commit — semantic format
  • push — CI watching
  • opencode — sessions, dispatch, repair
  • chezmoi — safe apply flow
  • gh — PRs, reviews, CI
  • qa — browser automation
  • observability — logs + traces
  • elasticsearch — log queries
  • figma — design files
  • slack — messaging
  • google-docs — docs + tables
  • post-meeting — minutes cleanup
  • cleanup — worktree/DB hygiene
  • pty — background processes

The Takeaway

Skills over servers.
CLIs over wrappers.

The agent already knows how to use documented tools. Skills tell it when, in what order, and with what guardrails. chezmoi keeps the whole thing in git — reproducible, auditable, portable.

attention → orientation · conversations → context · plan → execution · review → quality

thinking-tools · writing · architecture · learn — reach for when needed