The $10 Billion Wall: When AI Found Its “AbShaper Moment”

A few days ago, I read an academic paper that immediately reminded me of my old AbShaper (https://www.linkedin.com/pulse/o-efeito-abshaper-verdade-inconveniente-sobre-projetos-deivid-bitti-wyzuf). Researchers from renowned universities have confirmed something that those of us on the front lines of AI projects already suspected: there is an invisible wall, and Big Tech companies are literally building nuclear power plants to try to get over it.

The Brutal Mathematics of Reality

The study revealed numbers that would give any CFO sleepless nights: to improve the performance of an LLM by a mere 10%, it would be necessary to increase the computing power by 10 billion times. Yes, you read that right. It's not 10x or 100x. It's 10,000,000,000x.

It's like someone telling you, "Look, to get 10% more defined abs with the AbShaper, you'll need to do not 100, but 10 billion repetitions." At that point, even the most optimistic buyer would throw in the towel.
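Where does a number like that come from? Scaling laws relate model loss to compute through a power law with a tiny exponent, and inverting a tiny exponent makes the required compute explode. Here is a back-of-the-envelope sketch; the exponent is an illustrative assumption chosen to land near the paper's headline figure, not a value taken from the paper:

```python
# Back-of-the-envelope: if loss follows a power law in compute,
# L(C) = a * C**(-alpha), how much more compute buys a 10% lower loss?
# alpha = 0.0046 is an illustrative assumption, not the paper's fitted value.

alpha = 0.0046          # hypothetical scaling exponent (tiny, as in LLM scaling fits)
improvement = 0.10      # target: 10% relative reduction in loss

# L(k*C) / L(C) = k**(-alpha) = 1 - improvement  =>  k = (1 - improvement)**(-1/alpha)
k = (1 - improvement) ** (-1 / alpha)
print(f"Compute multiplier needed: {k:.2e}")  # ~8.9e+09, roughly ten billion times
```

The point is not the exact exponent; it's that when the exponent is tiny, every extra point of quality costs orders of magnitude more compute.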

The Parallel with Our Business Reality

At Flexa Cloud, after hundreds of delivered projects, I see this pattern repeating itself in a less dramatic but equally revealing way:

  • Project A: Client invested R$500 expecting a 200% ROI. Actual result? 15% (still positive, but far from expectations).
  • Project B: We increased the dataset from 100GB to 1TB. Accuracy improvement? From 82% to 84%.
  • Project C: We tripled the computing power. Reduction in response time? 20%.

The lesson? More is not always better. Sometimes it's just more expensive.

The “GPT-5 Will Solve Everything” Syndrome

Recently, a client came to me and said, "Deivid, when GPT-5 comes out, our problems will be over, right?" I had to have the same difficult conversation I once had with my 16-year-old self about the AbShaper.

GPT-4.5 costs 30 times more than GPT-4 to operate. The improvements? Marginal on quantitative tasks. It's like buying an AbShaper Pro Max Titanium Edition for $3,000 when the real problem is that you don't want to do sit-ups.

The Truths We Need to Face

1. The Problem of Spurious Correlations

The paper revealed something fascinating: the larger the dataset, the more "statistical garbage" is generated. It's like doing 10,000 sit-ups incorrectly: you don't get a six-pack, you get back pain.

In a recent project, we discovered that our model had "learned" that sales increased when it rained. Why? Because, by chance, rainy days in the training data coincided with sales spikes. Correlation? Yes. Causality? Zero.
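You can reproduce this trap with pure noise. The sketch below generates completely random "signals" and shows that, on a small sample, some of them will correlate strongly with sales by sheer chance; all names and numbers here are synthetic:

```python
import numpy as np

# Spurious correlations from pure noise: none of these "signals" (rain,
# holidays, moon phase...) has any causal link to sales by construction.
rng = np.random.default_rng(42)

sales = rng.normal(100, 10, size=30)        # 30 days of sales, pure noise
signals = rng.normal(size=(1000, 30))       # 1,000 unrelated random signals

corrs = np.array([np.corrcoef(s, sales)[0, 1] for s in signals])
print(f"Strongest spurious correlation: {np.abs(corrs).max():.2f}")
# Typically around 0.6 on 30 points: strong enough for a model to "learn" it.
```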

2. The Law of Diminishing Returns is Relentless

  • Doubling the data = 7-9% improvement
  • Doubling the processing power = 5% improvement
  • Doubling the investment = half the expected ROI
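To see how quickly the curve flattens, here is a toy diminishing-returns model; the functional form and constants are made up for illustration, not fitted to our projects:

```python
# Toy diminishing-returns curve: each doubling of data buys a smaller gain.
# The saturating form and its constants are illustrative assumptions.
def accuracy(data_gb: float) -> float:
    return 100 - 30 * data_gb ** -0.15      # saturates toward 100%

data = 100.0  # starting dataset size in GB
for _ in range(4):
    gain = accuracy(data * 2) - accuracy(data)
    print(f"{data:>6.0f} GB -> {data * 2:>6.0f} GB: +{gain:.2f} points")
    data *= 2
# Each doubling gains about 10% less than the previous one.
```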

3. Energy Is Not Infinite

Big Tech is literally reopening nuclear power plants. Meanwhile, our brains run on about 20 watts, less than a dim incandescent bulb. Something is fundamentally wrong with this equation.

What Really Works (Spoiler: It's Not Magic)

After all these projects, I've identified what really moves the needle:

1. Specificity Over Generality

Instead of trying to create the "model that solves everything," we created the "model that solves YOUR specific problem." One customer reduced service time by 40% with a model trained only on THEIR tickets, not on all the knowledge on the internet.
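For a concrete sense of what "trained only on THEIR tickets" can look like, here is a minimal sketch with scikit-learn; the tickets and labels are hypothetical stand-ins, and a real project would use thousands of the customer's own labeled examples:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical examples standing in for the customer's OWN tickets.
tickets = [
    "invoice was charged twice this month",
    "cannot log in after password reset",
    "need to upgrade my plan to enterprise",
    "app crashes when exporting the report",
]
labels = ["billing", "access", "sales", "bug"]

# A small, specific model: TF-IDF features plus logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(tickets, labels)

print(model.predict(["I was billed twice for the same invoice"]))  # expect ['billing']
```

No internet-scale knowledge involved, and that's precisely why it works on THEIR problem.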

2. Hybrid Intelligence

The best project we delivered combined:

  • AI for initial screening (90% accuracy)
  • Humans for complex cases (10% of cases)
  • Continuous feedback loop

Result? 99.5% final accuracy at 70% lower cost.
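A minimal sketch of that triage pattern, assuming a model that returns a label with a confidence score; the threshold and names are illustrative and would be tuned per project:

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # illustrative: tune it against your own error costs

@dataclass
class Prediction:
    label: str
    confidence: float

def triage(case: str, predict) -> str:
    """Route a case: the AI handles confident calls, humans get the rest."""
    pred = predict(case)
    if pred.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto:{pred.label}"   # the easy ~90% of volume
    return "human_review"             # the hard cases, logged for retraining

# Usage with a stub model (a real one would be your classifier or an LLM call):
stub = lambda text: Prediction("refund", 0.97 if "refund" in text else 0.55)
print(triage("customer asks for a refund", stub))       # auto:refund
print(triage("ambiguous multi-issue complaint", stub))  # human_review
```

The feedback loop is the part teams skip: every human_review decision becomes a new labeled example.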

3. Data Quality > Data Quantity

A clean 1GB dataset outperforms a messy 1TB dataset. Every time. No exceptions.
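In practice, "clean" means boring, disciplined steps rather than heroics. A small pandas sketch with hypothetical columns:

```python
import pandas as pd

# Hypothetical raw dataset; the cleaning steps matter more than the columns.
df = pd.DataFrame({
    "ticket_id": [1, 1, 2, 3, 4],
    "text": ["refund pls", "refund pls", None, "login broken", "login broken!!!"],
    "label": ["billing", "billing", "access", "access", "access"],
})

clean = (
    df.drop_duplicates(subset=["ticket_id"])  # exact duplicates teach nothing new
      .dropna(subset=["text"])                # empty rows only add noise
      .assign(text=lambda d: d["text"].str.lower().str.strip())
)
print(f"{len(df)} raw rows -> {len(clean)} clean rows")  # 5 -> 3
```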

The Difficult Conversation with the Customer

Today, when a client comes in excited about the “infinite possibilities of AI,” I have a presentation I call the “Reality Moment”:

  • Slide 1: “Yes, AI is transformative.”
  • Slide 2: “No, it won’t perform miracles.”
  • Slide 3: “Here’s the real work needed.”
  • Slide 4: “Here are the realistic results.”
  • Slide 5: “Still interested?”

Surprisingly, the customers who stick around after that conversation are the ones who get the best results.

The Sustainable Path

The future is not in ever-bigger models, but in:

  1. Deep Understanding of the Problem: before choosing the tool, understand the work. Map processes, not technologies.
  2. Customized Solutions: a small, well-instrumented model > a generic giant model. Focus on your specific use case.
  3. Realistic Expectations: a 10-20% ROI in the first year is GREAT. Continuous improvement > instant revolution.
  4. Smart Investment: spend on data quality, not quantity. Invest in people who understand the business AND the technology.

Conclusion: The AbShaper Is Still There

My AbShaper still sits in my mother's attic, reminding me that there's no such thing as a free lunch. Researchers have now mathematically confirmed what experience had already taught us: endless scaling is not the answer.

True intelligence—artificial or otherwise—lies in recognizing limits and working smarter within them. We don't need nuclear power plants. We need clarity about what we want to solve and a willingness to do the hard work.

How are you thinking differently about your AI projects? Are you chasing the next best model or building the right solution for your specific problem?

Do as some of our customers, such as Voith Group, FEBRABAN, Dengo Chocolates, ADCOS Group, GE HealthCare, and many others, have done: hire an AI executive immersion and discover how we actually turn ideas into projects with real, measurable returns.

FEBRABAN case study, released at FEBRABAN TECH.

Article link: https://arxiv.org/pdf/2507.19703
