
we spent the weekend wiring a couple of agents to answer basic market questions, and the real blocker was not the models but the documentation shape. agents are decent at retrieval, but if the docs are not structured, they guess.
context: most crypto data APIs have docs optimized for humans. you can find the endpoint, but you have to infer auth, chain naming, and pagination rules. agents get that wrong. we ended up writing our own mini schema to keep them from mixing up chain names and filters.
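to make the "mini schema" concrete, here is a minimal sketch of what we mean. the chain names and ids below are illustrative (a real list would come from the API's own glossary); the point is that the agent resolves chain names against a fixed table and fails loudly instead of guessing:

```python
# illustrative mini schema: a fixed table of chain names -> ids.
# names/ids here are examples, not a canonical list for any specific API.
CHAINS = {
    "eth-mainnet": 1,
    "matic-mainnet": 137,
    "base-mainnet": 8453,
}

def resolve_chain(name: str) -> int:
    """fail loudly on unknown chain names instead of letting the agent guess."""
    try:
        return CHAINS[name]
    except KeyError:
        raise ValueError(f"unknown chain {name!r}; valid: {sorted(CHAINS)}")
```

wiring every tool call through something like `resolve_chain` is what stopped our agents from inventing chain names mid-answer.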
what helped: we broke our docs into four pieces and made the agent pick one at a time. first, a small glossary of chain names and ids. second, auth and rate limits in one place. third, endpoint lists with required params only. fourth, example queries stripped of commentary. once the agent had that structure, it stopped hallucinating optional params and stopped calling non-existent fields. we kept each piece under 2k tokens so retrieval stays tight.
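the "pick one piece at a time" idea plus the 2k token budget can be sketched like this. the routing keywords and the 4-chars-per-token heuristic are assumptions for illustration (in practice the agent itself picks the piece, and you would count tokens with a real tokenizer):

```python
# hedged sketch: four doc pieces, a crude token budget check, and
# trivial keyword routing standing in for the agent's own selection.
DOC_PIECES = {
    "glossary": "chain names and ids ...",
    "auth": "auth headers and rate limits ...",
    "endpoints": "endpoint list with required params only ...",
    "examples": "example queries stripped of commentary ...",
}

TOKEN_BUDGET = 2000

def approx_tokens(text: str) -> int:
    # rough heuristic (~4 chars per token); use a real tokenizer in practice
    return len(text) // 4

def pick_piece(question: str) -> str:
    q = question.lower()
    if "auth" in q or "rate limit" in q:
        return "auth"
    if "chain" in q:
        return "glossary"
    if "example" in q:
        return "examples"
    return "endpoints"

# enforce the budget so no single piece bloats the context window
for name, text in DOC_PIECES.items():
    assert approx_tokens(text) <= TOKEN_BUDGET, f"{name} over budget"
```

the win is less the routing logic and more the constraint: one small piece in context at a time, so the agent never sees optional params it could hallucinate around.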
limitation: this does not solve the question of data freshness. the agent can still give you the right query that returns stale data. we still have to cross-check with a live price feed if the question is time sensitive.
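the freshness cross-check we do is simple enough to sketch. the 60-second threshold is an arbitrary example, not a recommendation:

```python
import time

STALENESS_LIMIT_S = 60  # illustrative threshold; tune per use case

def is_stale(price_timestamp_s, now_s=None):
    """flag answers whose underlying price data is older than the limit."""
    if now_s is None:
        now_s = time.time()
    return (now_s - price_timestamp_s) > STALENESS_LIMIT_S
```

if the data behind an answer trips this check, we re-fetch from a live price feed before showing the result.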
we wrote up the structure we used and how we packaged it into four skills for GoldRush agents in one post: it is basically a template for structuring docs so agents use GoldRush APIs correctly.
https://goldrush.dev/blog/goldrush-skills-structured-knowledge-for-ai-agents/
curious how people here keep agents from mixing up chain ids or endpoint params when they answer market questions?
building ai agents for market data, docs are the bottleneck
posted by u/Jaye-Fern in r/CryptoMarkets