At many large companies, AI usage is tracked and has become a metric/target for measuring employee productivity. Combined with a weak job market for knowledge workers, performance reviews tied to AI usage, and frequent mass layoffs, this has led to substantial waste and artificial demand from employees seeking to appear productive.

    Notable examples:

    Disney/ESPN:

    • Internal dashboard to track AI token usage

    • Product and tech staff used 3.1B Claude tokens and 13.3B Cursor tokens over nine workdays, and one Claude power user invoked Claude about 460,600 times, or 51,000+ invocations per workday. Even assuming 14-hour workdays with no breaks, that's more than one invocation per SECOND, which isn't humanly possible without scripting a bot to spam requests.

    https://www.businessinsider.com/how-disney-tech-employees-are-using-ai-claude-cursor-tokens-2026-4
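    That power-user figure can be sanity-checked with simple division (the 460,600 invocations, nine workdays, and 14-hour assumption are all from the section above):

```python
# Back-of-envelope check on the Disney/ESPN Claude power user.
invocations = 460_600        # total Claude invocations by one user
workdays = 9                 # reporting window, in workdays
hours_per_day = 14           # generous assumption: 14-hour days, no breaks

per_day = invocations / workdays               # ~51,178 invocations/workday
per_second = per_day / (hours_per_day * 3600)  # 50,400 seconds in a 14h day

print(f"{per_day:,.0f} invocations/day, {per_second:.2f} per second")
```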

    Meta:

    • Goals were set for some employees on AI-tool usage, including AI code assistants and agents; related reports cite team-level targets such as 75% AI-assisted code in some groups.

    • There was a leaderboard for employees to compete on token usage. One engineer racked up 281 billion tokens in a single month. For context, according to a survey by Jellyfish, a software engineering intelligence platform, the average developer that uses AI consumes about 50 million tokens per month, and the top decile about 380 million. That's roughly 5,600x the median and about 740x the typical power user (and that's assuming the top decile aren't tokenmaxxing). https://fortune.com/2026/04/09/meta-killed-employee-ai-token-dashboard/
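    The Meta outlier's multipliers work out as follows (281 billion, the ~50M median, and the 380M 90th percentile are the figures cited above):

```python
# How the Meta leaderboard outlier compares to Jellyfish's survey numbers.
meta_user = 281_000_000_000   # 281B tokens in one month
median = 50_000_000           # ~50M tokens/month, typical AI-using dev
top_decile = 380_000_000      # ~380M tokens/month, 90th percentile

vs_median = meta_user / median          # ~5,620x the median
vs_top_decile = meta_user / top_decile  # ~740x a top-decile user

print(f"{vs_median:,.0f}x median, {vs_top_decile:,.0f}x top decile")
```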

    Google:

    Employees were told AI usage is a part of their performance reviews. Sales employees were given AI usage quotas to meet. https://www.businessinsider.com/google-employee-ai-adoption-non-technical-software-engineer-performance-review-2026-2

    Microsoft:

    BI reports an internal memo saying “using AI is no longer optional,” with managers told to include internal AI tool usage in evaluating performance. https://www.businessinsider.com/microsoft-internal-memo-using-ai-no-longer-optional-github-copilot-2025-6

    KPMG:

    • Employees report an AI usage dashboard that is easy to manipulate; employees are expected to hit a 75% usage target for AI tools.

    Estimating the impact to demand

    The impact of tokenmaxxing is difficult to quantify because there is a lack of industry-wide data on the practice, only anecdotes and general usage data.

    What we can do is look at the top decile of developers, and determine to what extent this excess usage provides tangible benefits:

    Jellyfish analyzed 12,000 developers across 200 companies and found median AI-coding usage around 51 million tokens/month, while the 90th percentile used about 380 million tokens/month. It also found that the highest-token developers used roughly 10× as many tokens for only ~2× the throughput, with median developers using about 7 million tokens per PR versus top-decile developers around 69 million tokens per PR.

    "Throughput" does not necessarily mean productivity. For example, an engineer who does not review AI-generated code may miss a bug and need to submit a second or third PR to fix it, whereas a developer who takes the time to review and test code may achieve better real-world output with fewer PRs.

    Additionally, "PRs" are not a consistent unit of measure, and the survey spans 200 companies. A PR could fix a small bug or implement a major new feature. Suppose a junior dev submits many small PRs and makes heavy use of AI because they're inexperienced and need the help, while a senior dev has fewer, more important PRs and uses AI much less. On paper, the junior developer used more tokens and looks more productive because they submitted more PRs. But in reality, the senior dev produced more value with fewer tokens.

    Regardless, based on the naive assumption that throughput = productivity, we can model waste as:

    380M actual tokens/month − 102M productive-equivalent tokens/month = ~278M wasted tokens/month
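    Spelled out, the model doubles the median's usage (because the top decile ships roughly 2x the throughput) to get a productive-equivalent baseline, and treats the rest as waste:

```python
# Naive waste model: top-decile devs get ~2x the median's throughput,
# so credit them with 2x the median's tokens and call the rest waste.
median_tokens = 51_000_000       # median usage, tokens/month
top_decile_tokens = 380_000_000  # 90th-percentile usage, tokens/month
throughput_multiple = 2          # ~2x the PR throughput of the median

productive_equiv = throughput_multiple * median_tokens  # 102M tokens/month
wasted = top_decile_tokens - productive_equiv           # 278M tokens/month

print(f"{wasted / 1e6:.0f}M wasted tokens/month per top-decile dev")
```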

    To be conservative, we can assume that some power users are legitimate: heavy usage of agentic tools, tasking agents to work on multiple features in parallel, etc. But on the other hand, 380 million tokens is just the 90th percentile. Top users burn through ridiculous amounts due to a lack of limits. One Meta user used 281 billion tokens in a month, roughly 740x the 90th-percentile figure.

    Much of this wasteful spending has been subsidized:

    Up until recently, AI coding tools were provided at a loss, to gain market share:

    • Copilot is shifting from a subsidized model to usage based billing

    • Cursor switched from subsidized to usage based billing

    • Anthropic is requiring API based billing for 3rd party tools and now has strict limits on Claude code.

    Jellyfish suggested a typical cost of $1 per 1 million tokens, which implies heavy subsidization. Frontier models cost $2-5 per 1 million input tokens and $15-30 per 1 million output tokens, though effective input costs may be somewhat lower due to prompt caching.

    If we assume the top 10% of users are wasting an average of 278 million tokens per month at $1 per million tokens, that suggests roughly $6 billion a year in waste.
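    Getting from per-developer waste to an annual dollar figure requires a population assumption the survey doesn't supply. A sketch, assuming roughly 1.8 million top-decile developers (i.e. ~18 million AI-using developers overall, which is my assumption, not a Jellyfish figure):

```python
# ASSUMPTION: ~1.8M top-decile developers. Only the per-dev waste and
# the $1 per million token price come from the figures above.
wasted_tokens_per_month = 278_000_000  # per top-decile dev, from the model
dollars_per_token = 1 / 1_000_000      # $1 per million tokens
top_decile_devs = 1_800_000            # hypothetical population

annual_waste = (wasted_tokens_per_month * dollars_per_token
                * top_decile_devs * 12)
print(f"${annual_waste / 1e9:.1f}B per year")  # ~$6.0B
```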

    This is a very conservative estimate due to lack of reliable data on usage by extreme outlier tokenmaxxers. Actual waste is likely much higher, possibly as high as $20 Billion annualized.

    "Tokenmaxxing" – How AI demand is inflated by deliberately wasteful & subsidized usage. At least $6 Billion+ a year in waste
    by u/skilliard7 in r/stocks



    Posted by skilliard7

    8 Comments

    1. MirthandMystery on

      This feels like a rigged flywheel. If AI agents are used to amplify token usage who will audit that?
      This is like Musk saying he has hundreds of millions of active users, but under the hood you find troves of bots.. he’s goosing the numbers, ripping off advertisers.

    2. creepy_doll on

      Could that power user just be a dude using multi agent flows? Potentially running several of them.

      I do think it’s dumb and wasteful (I only ever use a single agent and keep an active back and forth going so I can catch it being dumb early), but I can see how stupid metrics could drive one to do something like this guy did.

    3. Due-Brush-530 on

      In one sense, I could see how it’s viewed as “waste”, but I actually feel like the waste is necessary at this stage. Nobody has used these tools before, so there is a necessity to experiment and try different prompts and executions. And stress tests on what it’s being used for. It’s not something you can just walk in and execute on yet. So token usage is high. But you need to use tokens in order to learn how to implement, and you need to use tokens to help train it to be useful.

      At least that’s my perspective from working at a company that has high expectations for everyone to use the tools. So far, my department has found several uses in our field that have already started to alleviate a lot of the crappy parts of our roles so that we can focus on the other stuff we actually get paid to do.

    4. They’re hoping the actual usage needs catch up before theyre found out
