On April 6 we're open-sourcing Autonet, a framework for decentralized AI model training and inference in which verification, rewards, and governance happen on-chain.

    The core thesis: alignment is an economic coordination problem, not a constraint problem. Instead of defining "aligned" centrally and baking it into models, the protocol lets communities publish their own values as semantic embeddings, and the network prices operations based on alignment with those values. Aligned work is subsidized; misaligned work pays a premium that funds the subsidies.
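    As a concrete sketch of what alignment-based pricing could look like: score an operation's embedding against a community's published values embedding, then map the score to a price multiplier below 1 (subsidy) or above 1 (premium). The function names, the cosine-similarity metric, and the linear mapping here are illustrative assumptions, not Autonet's actual pricing formula.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def price_multiplier(op_embedding: np.ndarray,
                     values_embedding: np.ndarray,
                     max_premium: float = 2.0,
                     max_subsidy: float = 0.5) -> float:
    """Map an alignment score to a price multiplier.

    Fully aligned operations (similarity -> 1) approach the subsidized
    floor; fully misaligned operations (similarity -> -1) approach the
    premium ceiling that funds the subsidies.
    """
    score = cosine_similarity(op_embedding, values_embedding)  # in [-1, 1]
    t = (1.0 - score) / 2.0  # 0 = fully aligned, 1 = fully misaligned
    return max_subsidy + t * (max_premium - max_subsidy)
```

    With the defaults above, a perfectly aligned operation pays 0.5x, an orthogonal one pays 1.25x, and a maximally misaligned one pays 2x.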

    Smart contract architecture:

    Contract                 Purpose
    Project.sol              AI project lifecycle, funding, model publishing, inference
    TaskContract.sol         Task proposal, checkpoints, commit-reveal solution commitment
    ResultsRewards.sol       Multi-coordinator Yuma voting, reward distribution, slashing
    ParticipantStaking.sol   Role-based staking (Proposer 100, Solver 50, Coordinator 500, Aggregator 1000 ATN)
    ModelShardRegistry.sol   Distributed model weights with Merkle proofs and erasure coding
    ForcedErrorRegistry.sol  Injects known-bad results to test coordinator vigilance
    AutonetDAO.sol           On-chain governance for parameter changes
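    To illustrate the idea behind ResultsRewards.sol's multi-coordinator voting: a stake-weighted median makes the consensus score robust to a minority of dishonest or careless coordinators. This is a simplified stand-in for Yuma-style consensus, not the contract's actual algorithm; the function and its inputs are hypothetical.

```python
def stake_weighted_median(scores: list[float], stakes: list[float]) -> float:
    """Stake-weighted median of coordinator quality scores.

    Each coordinator's vote counts in proportion to its stake; the
    consensus score is the point at which half the total stake lies on
    either side. A single high-stake outlier cannot move the result the
    way it would move a stake-weighted mean.
    """
    pairs = sorted(zip(scores, stakes))
    total = sum(stakes)
    cumulative = 0.0
    for score, stake in pairs:
        cumulative += stake
        if cumulative >= total / 2.0:
            return score
    return pairs[-1][0]
```

    For example, three coordinators with stakes (500, 500, 1000) voting (0.9, 0.1, 0.8) yield a consensus of 0.8: the low outlier is outweighed.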

    How training verification works:

    1. Proposer creates a training task with hidden ground truth
    2. Solver trains the model, commits a hash of the solution
    3. Ground truth is revealed, then solution is revealed (commit-reveal prevents copying)
    4. Multiple coordinators vote on result quality via Yuma consensus
    5. Rewards distributed based on quality scores
    6. Aggregator performs FedAvg on verified weight updates
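    Steps 2-3 above can be sketched as a plain hash commitment: the solver publishes only H(solution || salt) during the commit window, so nothing is copyable on-chain until the reveal. SHA-256 stands in here for the EVM's keccak256, and the helper names are illustrative, not Autonet's contract interface.

```python
import hashlib
import os

def commit(solution: bytes, salt: bytes) -> bytes:
    """Commitment to a solution: H(solution || salt).

    The random salt prevents dictionary attacks when the solution
    space is small or guessable.
    """
    return hashlib.sha256(solution + salt).digest()

def verify_reveal(commitment: bytes, solution: bytes, salt: bytes) -> bool:
    """Check that a revealed (solution, salt) pair matches the commitment."""
    return hashlib.sha256(solution + salt).digest() == commitment

# Commit phase: only the hash goes on-chain.
salt = os.urandom(32)
weights_digest = hashlib.sha256(b"serialized model weights").digest()
commitment = commit(weights_digest, salt)

# Reveal phase: anyone can verify the pair against the commitment,
# but nobody could have copied the solution before the reveal.
assert verify_reveal(commitment, weights_digest, salt)
```

    Binding comes from collision resistance of the hash; hiding comes from the salt.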

    Key governance mechanisms:

    • Constitutional constraints: Core principles (derived from the UDHR) stored on-chain. Evaluated by multi-stakeholder LLM consensus. 95% quorum for constitutional amendments.
    • Governance heartbeat: Every node runs a work engine that halts when the governance heartbeat stops. If the network's collective governance goes silent, all work ceases. This is a hard architectural constraint, not a feature flag.
    • Forced error testing: The ForcedErrorRegistry randomly injects known-bad results. If a coordinator approves them, they get slashed.
    • Forward-only evolution: No rollback mechanism. Bad governance decisions must be fixed through further governance, forcing robust processes.
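    The governance heartbeat mechanism above can be sketched as a node-side gate: work is only permitted while a recent heartbeat has been observed. The class, the `heartbeat_timeout` parameter, and its default are hypothetical illustrations, not Autonet's actual node code.

```python
import time

class WorkEngine:
    """Work engine that hard-stops when the governance heartbeat goes stale."""

    def __init__(self, heartbeat_timeout: float = 300.0):
        # Maximum tolerated gap between on-chain governance heartbeats.
        self.heartbeat_timeout = heartbeat_timeout
        self.last_heartbeat = time.monotonic()

    def on_governance_heartbeat(self) -> None:
        """Record a fresh governance heartbeat observed on-chain."""
        self.last_heartbeat = time.monotonic()

    def may_work(self) -> bool:
        """Work is permitted only while governance is demonstrably alive."""
        return (time.monotonic() - self.last_heartbeat) < self.heartbeat_timeout

    def run_task(self, task):
        # The halt is structural: there is no override flag to bypass it.
        if not self.may_work():
            raise RuntimeError("governance heartbeat stale: all work halted")
        return task()
```

    The key design point is that the check sits in the execution path itself rather than behind a configuration switch, so a silent governance layer stops every node by construction.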

    13+ Hardhat tests pass, and the orchestrator runs complete training cycles locally with real PyTorch training.

    Paper: github.com/autonet-code/whitepaper
    Code: github.com/autonet-code
    MIT License.

    Interested in technical feedback, especially on the commit-reveal verification pattern, the alignment pricing mechanism, and the constitutional governance approach.

    Open-sourcing a constitutional governance framework for decentralized AI training: on-chain verification, staking, and alignment pricing
    Posted by u/EightRice in r/CryptoTechnology



