I’ve spent the last few weeks running a custom GPT-4 agent through 100 different Real World Asset (RWA) whitepapers.
The finding: 40 of them had predatory tokenomics hidden in legal jargon—mostly "flexible" team vesting and hidden minting functions that the average investor would miss in a 50-page PDF.
How it works: I don't use the default ChatGPT. I use a specific "Cynical Auditor" persona that ignores the marketing hype and only looks for discrepancies between the roadmap and the smart contract logic described in the text.
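For anyone who wants to replicate the basic idea, a minimal sketch with the OpenAI Python client looks something like this. The prompt wording, model string, and function name are illustrative placeholders rather than my actual agent config:

```python
# Minimal sketch of a "Cynical Auditor" style persona via the OpenAI chat API.
# The system prompt, model name, and helper name are illustrative assumptions,
# not the exact production setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

AUDITOR_SYSTEM_PROMPT = (
    "You are a cynical smart-contract auditor. Ignore marketing language entirely. "
    "Given excerpts from a whitepaper, list every discrepancy between the stated "
    "roadmap or claims and the described token/contract mechanics, quoting the "
    "exact sentences involved. If a claim cannot be verified from the text, say so."
)

def audit_excerpt(whitepaper_text: str) -> str:
    """Run one whitepaper excerpt through the auditor persona and return its findings."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # keep the audit deterministic rather than creative
        messages=[
            {"role": "system", "content": AUDITOR_SYSTEM_PROMPT},
            {"role": "user", "content": whitepaper_text},
        ],
    )
    return response.choices[0].message.content
```

Temperature 0 matters more than people think here: you want the persona quoting the document, not improvising.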
Example: One project claimed "locked liquidity" for 2 years, but the whitepaper footnote allowed for "emergency re-allocation" by the DAO (which the team controlled). GPT-4 flagged this anomaly in 15 seconds.
I’m doing this as part of my CS PhD research on AI-driven forensics. If you want to see the full list of red flags I look for, it's in the agent logic pinned on my profile.
For those interested in the technical stack: I’m feeding the whitepapers to the model through a RAG pipeline backed by a vector store. The key isn’t just the LLM, but the pre-processing that filters out marketing fluff before the ‘Cynical Auditor’ persona even sees it. I’ve pinned a screenshot of the agent’s logic and the beta signup link on my profile (u/Wild-Group-6763) for anyone who wants to help test.
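Very roughly, the retrieval half looks like the sketch below. The chunk size, buzzword list, and embedding model are placeholders for illustration, not the exact pipeline:

```python
# Simplified sketch of the retrieval side: chunk -> filter fluff -> embed -> retrieve.
# Chunk size, the buzzword list, and the embedding model are illustrative choices.
import numpy as np
from openai import OpenAI

client = OpenAI()

MARKETING_BUZZWORDS = {"revolutionary", "disrupt", "moon", "guaranteed", "passive income"}

def chunk(text: str, size: int = 1500) -> list[str]:
    """Split raw whitepaper text into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def is_fluff(chunk_text: str, threshold: int = 3) -> bool:
    """Crude marketing filter: drop chunks dominated by hype vocabulary."""
    lowered = chunk_text.lower()
    return sum(lowered.count(word) for word in MARKETING_BUZZWORDS) >= threshold

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of chunks with the OpenAI embeddings endpoint."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

def top_k_chunks(query: str, chunks: list[str], k: int = 5) -> list[str]:
    """Return the k non-fluff chunks most similar to a query like 'team vesting schedule'."""
    kept = [c for c in chunks if not is_fluff(c)]
    chunk_vecs = embed(kept)
    query_vec = embed([query])[0]
    sims = chunk_vecs @ query_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    return [kept[i] for i in np.argsort(sims)[::-1][:k]]
```

The fluff filter is deliberately crude in this sketch; the point is only that the persona never sees a chunk that doesn't survive it.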