Most AI research assumes the path to AGI runs through increasingly large LLMs. It is the dominant paradigm for an obvious reason: it has produced the most visible results.
A developer presentation I came across proposed a different direction. Rather than training a static model, it proposes a system that runs continuously and evolves. The key architectural distinction is the use of ternary logic (+1, 0, -1) rather than binary, which lets the system represent uncertainty natively rather than approximate it. The system improves through evolutionary selection rather than gradient descent.
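To make that distinction concrete, here is a toy sketch of my own (not from the presentation): ternary values with a Kleene-style "unknown" state (0), and a population of ternary genomes improved purely by mutation and selection, with no gradients anywhere.

```python
import random

TERNARY = (-1, 0, 1)  # -1: false, 0: unknown/uncertain, +1: true

def ternary_and(a, b):
    # Kleene strong conjunction: false dominates, otherwise unknown propagates.
    return min(a, b)

def ternary_or(a, b):
    # Kleene strong disjunction: true dominates, otherwise unknown propagates.
    return max(a, b)

def fitness(genome, target):
    # Toy objective: how many positions already match a target ternary pattern.
    return sum(1 for g, t in zip(genome, target) if g == t)

def evolve(target, pop_size=20, generations=200, seed=0):
    """Mutation-and-selection loop: keep the fittest half, mutate copies of them."""
    rng = random.Random(seed)
    pop = [[rng.choice(TERNARY) for _ in target] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, target), reverse=True)
        survivors = pop[: pop_size // 2]   # selection pressure, no gradient descent
        children = []
        for parent in survivors:
            child = parent[:]
            i = rng.randrange(len(child))  # single random point mutation
            child[i] = rng.choice(TERNARY)
            children.append(child)
        pop = survivors + children         # elitism: best genomes always survive
    return max(pop, key=lambda g: fitness(g, target))

best = evolve(target=[1, -1, 0, 1, 0, -1, 1, 1])
```

The min/max encoding of Kleene logic is the standard trick that makes 0 behave as "unknown" (e.g. `ternary_and(1, 0) == 0`); the evolutionary loop is deliberately minimal and stands in for whatever selection mechanism the actual system uses.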

What caught my attention is that this is not just theoretical: there is open-source code, a training dataset exceeding a terabyte, a live demo, and a research paper accepted for presentation at an IEEE venue this year.

While investigating this space, I noticed that Qubic seems to be building toward this kind of distributed, continuous AI processing, using its mining network as the compute layer.

I'm not an AI researcher, but I am curious whether people closer to this field consider continuous evolutionary architectures a serious research direction or a dead end compared with scaled transformer models.

What if AGI comes from decentralized systems rather than scaled-up LLMs?
Posted by u/ardyes in r/CryptoCurrency




    3 Comments

1. Nice_Material_2436:

      Let me guess, it’s going to come with a ‘revolutionary’ token right?

2. the11thdoubledoc:

I don’t think there are serious AI researchers left who believe the path to AGI is through LLMs alone.
