The /research skill I use for all coding
How I make sure every session has accurate and up-to-date context
I’m a heavy Claude Code user, and while I love Plan mode, I find it’s not all that thorough in its planning.
While the model certainly has an obscene amount of data in its training corpus, it doesn’t have everything, and it’s obviously missing anything recent. So I’ve found that giving a task the latest docs and the latest web search results up front tends to solve problems much more efficiently.
For docs, the skill detects whether you’ve got the Context7 MCP server installed.
There’s a lot of debate about the efficacy of Context7; the argument is that plain web searching and scraping works just fine. But I find that a substantial number of developer docs sites are extremely JavaScript-heavy or behind auth (which is certifiably insane). Context7 fills that gap much more efficiently.
At any rate, here’s the /research skill I tack on to just about any new feature or bug I’m working through. It’s geared towards Claude, but you should be able to adapt it to whatever harness you use relatively easily.
---
name: research
description: "Deep research before planning. Launches parallel agents to search docs, web, and codebase, then synthesizes findings into actionable context."
---
$ARGUMENTS
Research this thoroughly before any planning or implementation begins.
## How to research
1. **Parse intent — separate problem from proposal.** Read the input critically:
- Identify the **core problem or need** — what's broken, what's missing, what outcome does the user actually want?
- Separately identify any **proposed solutions** embedded in the input. These are hypotheses to evaluate, not instructions to follow. A user suggesting "add a config option for X" may really need "X should just work by default."
- Frame 2-4 research questions around understanding and solving the **problem**, not around implementing the proposal.
- If $ARGUMENTS is empty or unclear, use AskUserQuestion to clarify before proceeding.
2. **Validate direction.** Use AskUserQuestion to confirm your research questions and priorities with the user before launching agents. Don't assume you've correctly interpreted what matters most.
3. **Launch parallel research.** Spawn sub-agents to work simultaneously. Judge how many based on complexity — not all are always needed:
- **Codebase agent** (almost always): Grep/Glob/Read to find relevant patterns, existing implementations, related code, config, and dependencies in the current project.
- **Docs agent** (when libraries/frameworks involved): Look up documentation. Try Context7 MCP tools first (mcp__context7__resolve_library_id, mcp__context7__get_library_docs). If Context7 is unavailable, use WebSearch + WebFetch targeting official docs sites.
- **Web agent** (when the problem isn't purely local): WebSearch for similar problems, solutions, examples, blog posts, Stack Overflow answers, GitHub issues. Focus on recent and authoritative sources.
- **Dependencies agent** (when relevant): Check package versions, compatibility, breaking changes, config options. Read package.json/Gemfile/requirements.txt/etc and cross-reference with docs.
**Critical: research the problem, not the proposal.** If the input includes a proposed solution, every agent should research the underlying problem independently first. Look for simpler or more idiomatic solutions that solve the same core need. Don't anchor on the proposed approach — it may be correct, but verify rather than assume.
Each agent should return: what it found, where it found it (file paths or URLs), and key snippets.
4. **Check in.** After agents return, if anything is surprising or ambiguous, use AskUserQuestion to get the user's take before synthesizing. Don't bury uncertainty in the summary.
5. **Synthesize.** Combine all agent findings. Resolve contradictions. Identify what is confirmed vs. uncertain.
**If the input included a proposed solution:** Explicitly evaluate it. Is it the best approach, or is there a simpler way to solve the core problem? If the proposal is unnecessary, overly complex, or solves the wrong thing, say so clearly and recommend the better path. Don't include a proposed solution in your findings just because someone suggested it — only include it if it's genuinely the best answer.
6. **Stress-test the recommendation.** After you have a recommended solution, actively look for its downsides. What UX does it degrade? What edge cases does it miss? What maintenance burden does it create? What could it break? Search the codebase and docs for anything that contradicts or complicates the approach. This isn't about killing the idea — it's about surfacing risks the plan needs to account for.
## Output format
Keep it tight. No filler.
### Answer
Direct response to what was asked. Concise for simple questions, thorough when complexity demands it.
### Evidence
Code snippets, doc quotes, or data that back up the answer. Use code blocks with file paths.
### Sources
- File paths for codebase findings
- URLs for web/doc findings
### Related
Anything else discovered that the user should know — gotchas, related patterns, upcoming deprecations, alternative approaches. Skip this section if there's nothing worth mentioning.
### Downsides & Risks
What could go wrong with the recommended approach? What are the UX tradeoffs, performance implications, edge cases, maintenance burden, or user confusion it could introduce? Be specific — "this could be slow" is useless, "this adds an N+1 query on every page load" is useful. Skip this section if the solution is trivially safe.
## Then enter Plan mode
After presenting research findings, call the EnterPlanMode tool so the user flows directly into planning with all the research context available.
## Rules
- Ask questions. Don't assume. Use AskUserQuestion whenever you're unsure about scope, priorities, or interpretation.
- Be smart about what needs researching. A CSS question doesn't need dependency analysis.
- Prefer primary sources (official docs, source code) over blog posts.
- If you find conflicting information, say so and state which source you trust more.
- Never pad the output. If the answer is simple, the research output should be simple.
- The number of agents should match the problem. Don't launch 4 agents for a one-file bug.
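To get this running, the file above needs to live where your harness looks for skills. Here’s a minimal install sketch, assuming Claude Code’s personal Agent Skills layout (`~/.claude/skills/<name>/SKILL.md`); check your version’s docs if the path differs. `CLAUDE_SKILLS_DIR` is just an override hook for this script, not a Claude Code variable.

```shell
# Install /research as a personal skill so it's available in every project.
# Assumption: Claude Code reads personal skills from ~/.claude/skills/<name>/SKILL.md.
SKILL_DIR="${CLAUDE_SKILLS_DIR:-$HOME/.claude/skills}/research"
mkdir -p "$SKILL_DIR"

# Write the skill file. Replace the placeholder line with the full
# skill body from above (everything after the frontmatter).
cat > "$SKILL_DIR/SKILL.md" <<'EOF'
---
name: research
description: "Deep research before planning. Launches parallel agents to search docs, web, and codebase, then synthesizes findings into actionable context."
---
(paste the skill body from above here)
EOF

echo "Installed: $SKILL_DIR/SKILL.md"
```

Drop it in a project’s `.claude/skills/` instead if you only want it for one repo.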


