## get_answer
One-call RAG synthesis over the wiki — retrieves, gates on confidence, and returns a cited 2–5 sentence answer with fallback file targets. The first tool to call on any code question.
Collapses search → read → reason into one round-trip. The tool retrieves the most relevant wiki pages, enriches the top hits with symbol docstrings and source excerpts, hands them to the LLM, and returns a synthesized answer with citations and a confidence label. If confidence is low, it returns ranked excerpts and fallback file paths instead — letting the agent decide whether to dig deeper.
### When to call
- Always first on any code question, before `search_codebase` or manual file exploration.
- If `confidence` is `"medium"` or `"low"`, follow up with `search_codebase` and `get_context` on the `fallback_targets` (see the sketch below).
- Works best when the question names explicit identifiers (a class, function, or module name).
### Parameters
| Prop | Type | Notes |
|---|---|---|
| `question` | `string` | Required. The natural-language question to answer. |
| `scope` | `string` | Optional. Path prefix that restricts retrieval, e.g. `"backend/"`. |
### Returns
| Field | Description |
|---|---|
| `answer` | Synthesized 2–5 sentence answer (empty if synthesis was gated) |
| `citations` | File paths the answer references |
| `confidence` | `"high"`, `"medium"`, or `"low"` |
| `fallback_targets` | Top file paths from retrieval; the agent should call `get_context` on these for verification |
| `retrieval` | Top 5 wiki hits with title, target_path, score, summary, and, for the top 2 file pages, symbols with docstrings and source excerpts |
| `note` | Context string explaining why synthesis was skipped or hedged |
| `_meta` | Timing, cache hit, answer hint |
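For orientation, a response might look like the following; every value here is illustrative, and the exact `_meta` key names are assumptions:

```python
response = {
    "answer": "Requests are authenticated by SessionMiddleware, which ...",
    "citations": ["backend/auth/middleware.py"],
    "confidence": "high",
    "fallback_targets": ["backend/auth/middleware.py", "backend/auth/tokens.py"],
    "retrieval": [
        {
            "title": "Authentication",
            "target_path": "backend/auth/middleware.py",
            "score": 0.91,
            "summary": "Session middleware and token helpers.",
            "symbols": [],  # docstrings + excerpts for the top 2 file pages
        },
    ],
    "note": "",
    "_meta": {"elapsed_ms": 840, "cache_hit": False},  # key names assumed
}
```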
### Example
get_answer("how does the authentication flow work?")
get_answer(
"how are database migrations handled?",
scope="backend/",
)Things worth knowing
- Three confidence gates keep the tool honest (sketched after this list):
  - Dominance ratio: if the top retrieval score isn't 1.2× the second, synthesis is skipped and excerpts are returned instead.
  - Hedge-phrase detection: if the LLM admits insufficiency in its own answer, `confidence` is downgraded from `"high"` to `"low"`.
  - Identifier-citation gate: if the question names symbols but none appear in the top retrieval hits, `confidence` is downgraded from `"high"` to `"medium"`.
- Question-aware symbol promotion: symbols matching identifiers extracted from the question get longer docstrings (400 chars) and a 40-line source-body excerpt, so "how does X work?" questions can be answered without hedging.
- Intersection retrieval: relational questions ("how does X talk to Y?") are split into two queries; hits appearing in both get a 2× boost.
- Caching: questions are normalized and hashed, so repeat questions hit the answer cache instantly. Hedged cached answers are bypassed for re-synthesis when the symbol pipeline is updated (a key-derivation sketch follows the list).
- No LLM provider configured? The tool falls back to retrieval-only: ranked hits plus snippets at `confidence="low"`.
`get_answer` is the highest-leverage tool in the set. Most "what does this codebase do?" or "where is X handled?" questions are answered in a single call.
## get_overview
Architecture summary, module map, entry points, ownership, hotspots, and community structure for an entire repository — the first call your agent should make on any unfamiliar codebase.
## get_context
The workhorse tool. Compact, batched context for any set of files, modules, or symbols — docs, ownership, freshness, and optional source/callers/callees/metrics/community in one call.