
Been running this since March 17. Sharing week 2 data.

Setup: multiple AI models analyze live crypto market conditions hourly and make decisions on a $1,000 paper portfolio. Every Sunday I review all outcomes and adjust the rules. No backtests — everything runs against live market data. The research question: does the iterative human review loop measurably improve AI decision quality over 26 weeks?

Week 2 numbers:

– Portfolio: -0.3% vs BTC -5.8% — preserved capital during the drawdown
– Signal accuracy (4h): 78%
– When all models agree: 85% accuracy
– When only one model fires: 25% — this is the main problem to solve

Tracking every decision at 4h, 24h, and 48h intervals. Live at theaitradinglab.com

Early days, but the data is accumulating. Happy to discuss methodology.
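For anyone curious how the consensus-level breakdown (85% when all models agree vs 25% when one fires) can be computed from a decision log, here is a minimal sketch. The field names and sample entries are hypothetical, not the actual pipeline:

```python
from collections import defaultdict

# Hypothetical decision log: each entry records how many models fired
# on the signal and whether the call was correct at the 4h check.
decisions = [
    {"models_agreeing": 3, "correct_4h": True},
    {"models_agreeing": 3, "correct_4h": True},
    {"models_agreeing": 1, "correct_4h": False},
    {"models_agreeing": 2, "correct_4h": True},
    {"models_agreeing": 1, "correct_4h": False},
]

def accuracy_by_consensus(log):
    """Group decisions by consensus level, return hit rate per bucket."""
    buckets = defaultdict(lambda: [0, 0])  # level -> [hits, total]
    for d in log:
        buckets[d["models_agreeing"]][1] += 1
        if d["correct_4h"]:
            buckets[d["models_agreeing"]][0] += 1
    return {level: hits / total for level, (hits, total) in buckets.items()}

print(accuracy_by_consensus(decisions))
```

The same grouping would be repeated per horizon (4h, 24h, 48h) to see whether single-model signals stay weak at longer windows or just need more time to resolve.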
Live experiment: does weekly human review make AI trading decisions better over time? Week 2 update
by u/sherifleb in r/CryptoTechnology