ngl, I'm getting pretty tired of the "AI is the new industrial revolution" narrative when most of the tech we see right now is just fancy autocomplete. It's fine for writing emails, but you can't exactly run a semiconductor fab or a global logistics network on a system that might decide 2+2=5 if the prompt is weird.

I was looking at some of the talking points for the upcoming Milken Conference, and it seems like the "big money" and infrastructure players are finally moving toward the idea of deterministic AI and "correctness." From a purely economic standpoint, has anyone looked at the ROI difference between probabilistic models (LLMs) and deterministic ones in high-stakes sectors?

It feels like there's this massive hidden cost of human oversight that nobody is really talking about.

If you need a human to check every single output for correctness because the model might hallucinate, does the productivity gain even exist, or is it just a lateral move? Curious if there's any actual empirical research on how logical reliability (or the lack of it) messes with the capital-labor substitution models we keep hearing about. The current "AI bubble" feels like it's totally ignoring the friction caused by unreliable logic.
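To make the oversight cost concrete, here's a toy back-of-the-envelope sketch of what I mean. Every number in it is made up purely for illustration: each task takes some generation time, a human spends `verify_time` checking the output, and an `error_rate` fraction of outputs needs rework.

```python
# Toy model (all parameters made up): does AI throughput survive the
# cost of human verification and error rework? Times are in minutes.

def effective_rate(gen_time, verify_time, error_rate, rework_time):
    """Tasks completed per hour when every output is checked by a human
    and the erroneous fraction requires rework."""
    expected_time = gen_time + verify_time + error_rate * rework_time
    return 60.0 / expected_time  # tasks per hour

human_only    = effective_rate(gen_time=30, verify_time=0,  error_rate=0.00, rework_time=0)
llm_checked   = effective_rate(gen_time=1,  verify_time=10, error_rate=0.15, rework_time=30)
deterministic = effective_rate(gen_time=2,  verify_time=0,  error_rate=0.00, rework_time=0)

print(f"human only:    {human_only:.1f} tasks/hr")    # 2.0
print(f"LLM + checker: {llm_checked:.1f} tasks/hr")   # ~3.9
print(f"deterministic: {deterministic:.1f} tasks/hr") # 30.0
```

Even with those (invented) generous numbers, the LLM only beats the human modestly, and its ceiling is set by `verify_time` rather than generation speed, while a system whose outputs don't need checking keeps the whole gain. That verification floor is exactly the friction I'm asking whether the capital-labor substitution models capture.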

How do we actually model the productivity of "hallucinating" AI vs deterministic systems in industrial econ?
by u/Italiancan in r/AskEconomics



