
The AI pricing dilemma every industry must solve: Insights from AI decision-makers
Orb recently brought together a diverse group of leaders from marketing automation platforms, automotive software, fintech, telecom operations, and more. Many were from industries where AI adoption is relatively new, yet each had already started deploying AI agents in their products or operations.
This mix is exactly why we wanted the conversation. It reflects a reality we see every day in our own customer base: AI-driven workflows are no longer confined to technology companies that were “born AI.” They’re embedding themselves into products and processes that have existed for decades and creating a new set of monetization challenges.
What emerged from this discussion is something we call the AI pricing dilemma.
What is the AI pricing dilemma?
The AI pricing dilemma can be summed up like this:
- Costs to deliver AI-powered products are volatile and unpredictable.
- Customers still want pricing that is stable and easy to understand.
- The underlying economic model shifts from fixed costs and high margins to variable costs and tighter margins.
- The value AI delivers increases over time, but quantifying and charging for that improvement is difficult.
This creates a tension that every participant at our roundtable, regardless of their industry, currently faces:
- Technical realities push companies toward usage-based, flexible models.
- Buyer expectations push them toward simple, predictable pricing.
The complexity problem: When customers can’t predict their bill
A theme that came up early and often was the challenge of making usage-based models work in practice.
Credits introduce uncertainty
Credit-based systems—where customers buy a set number of “credits” for AI-powered functions—seem straightforward. Yet, in reality, customers struggle to know how many they will need.
“Our customers are asking how many credits do I need? What does one credit do for me? How do I use it? How many should I buy upfront? Can I buy one later? There's a lot of questions around just how these should work.”
One participant from a SaaS marketing automation company described how customers often hesitate to buy credits at all because they cannot predict their usage. Those who do buy tend to burn through them quickly and return for more. While this is good for short-term revenue, it creates uncertainty and, at times, frustration.
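To see why that estimate is so hard to make, here's a minimal sketch in Python. The actions and credit rates are entirely hypothetical, not any vendor's actual schedule; the point is how much the answer moves with small changes in the usage forecast.

```python
# Hypothetical credit costs per AI action (illustrative only).
CREDITS_PER_ACTION = {
    "draft_email": 1,
    "summarize_thread": 2,
    "generate_campaign": 10,
}

def estimate_monthly_credits(expected_usage: dict[str, int]) -> int:
    """Estimate credits needed from a forecast of actions per month."""
    return sum(CREDITS_PER_ACTION[action] * count
               for action, count in expected_usage.items())

# The estimate swings widely between two plausible forecasts.
light_month = {"draft_email": 200, "summarize_thread": 50, "generate_campaign": 5}
heavy_month = {"draft_email": 800, "summarize_thread": 300, "generate_campaign": 40}

print(estimate_monthly_credits(light_month))   # 350 credits
print(estimate_monthly_credits(heavy_month))   # 1800 credits
```

Two reasonable forecasts for the same customer differ by roughly 5x, which is exactly the uncertainty the participant above described.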
Token-based pricing creates a moving target
The cost to complete the same task can vary significantly between AI models. Even a minor change in the underlying infrastructure can cause token usage—and thus costs—to spike. This unpredictability is a headache for customers who need to budget and for vendors who need to explain price changes.
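To make that volatility concrete, here's a small sketch with placeholder per-token rates and token counts (none of them real provider prices). The same customer-facing task is priced under three stand-in models, before and after a prompt or infrastructure change doubles the tokens it consumes.

```python
def task_cost(input_tokens: int, output_tokens: int,
              input_rate: float, output_rate: float) -> float:
    """Cost of one task, with rates expressed in dollars per 1,000 tokens."""
    return (input_tokens / 1000) * input_rate + (output_tokens / 1000) * output_rate

# Hypothetical rates (dollars per 1K tokens) for three stand-in models.
models = {
    "model_a": {"input_rate": 0.01,   "output_rate": 0.03},
    "model_b": {"input_rate": 0.003,  "output_rate": 0.015},
    "model_c": {"input_rate": 0.0005, "output_rate": 0.0015},
}

# The same task, before and after an infrastructure change doubles its token usage.
before = {"input_tokens": 3000, "output_tokens": 800}
after = {"input_tokens": 6000, "output_tokens": 1600}

for name, rates in models.items():
    print(name, round(task_cost(**before, **rates), 4),
          round(task_cost(**after, **rates), 4))
```

The per-task cost spans more than an order of magnitude across models, and doubles again when token usage spikes, which is the moving target customers and vendors both have to budget around.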
Non-technical buyers speak a different language
For non-technical buyers, the complexity is even greater. Auto dealerships, call centers, and heavy-equipment companies don’t think in tokens or inference costs. Their procurement teams want pricing they can tie to tangible business metrics: number of employees, number of transactions, or number of assets under management.
“You talk about end users like call centers or automotive dealerships. How much complexity can you offer to them before they get overwhelmed and they lose interest?”
As one participant put it bluntly, “I have 50 employees. How much does it cost?” If your pricing model requires the buyer to learn a new technical concept before they can approve a purchase, you’ve created friction in the sales process.
Margins under pressure: When volatility becomes the norm
Traditional software businesses assume gross margins of 85% to 90%, with hardware and compute costs treated as rounding errors compared to license revenue. That assumption no longer holds for AI-driven products.
“It's such a competitive playing field that it's not clear whether optimizing for that marginal profit is the right thing to do right now. OpenAI tomorrow turns around and their models are 10x cheaper, so it totally changes what sort of capability you can offer.”
Several leaders described how compute costs now have a real, direct impact on profitability. Model pricing from providers like OpenAI, Anthropic, or Cohere can change with little notice. Hardware costs fluctuate with each new Nvidia release cycle. A product that was profitable can suddenly see its margins evaporate.
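A rough back-of-the-envelope calculation, using made-up numbers, shows how quickly this happens: a $50-per-month seat whose cost to serve moves from negligible hosting to a material, variable inference bill.

```python
def gross_margin(price: float, cost_to_serve: float) -> float:
    """Gross margin as a fraction of price."""
    return (price - cost_to_serve) / price

seat_price = 50.0  # hypothetical monthly price per seat

# Traditional SaaS: hosting is a rounding error.
print(gross_margin(seat_price, 5.0))    # 0.90 -> the familiar ~90% margin

# AI feature shipped: inference adds a real, variable cost per seat.
print(gross_margin(seat_price, 22.0))   # 0.56

# A provider price change or heavier usage pushes inference cost up further.
print(gross_margin(seat_price, 35.0))   # 0.30
```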
Enterprise deal structures are changing in response. Multi-year flat-fee agreements are giving way to committed-spend “burn-down” models, where the customer commits to a certain amount of spend, usage is tracked closely, and the commitment is consumed over time. This limits the vendor’s risk if the customer’s usage skyrockets, but it also changes how revenue is recognized and how predictable that revenue is.
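Here's a minimal sketch of how a burn-down commitment behaves, again with hypothetical figures: the customer commits to a total spend, metered usage is priced each month, and the balance is drawn down until overages kick in.

```python
def burn_down(commitment: float, monthly_usage_spend: list[float]) -> None:
    """Draw monthly usage spend against a committed-spend balance."""
    balance = commitment
    for month, spend in enumerate(monthly_usage_spend, start=1):
        drawn = min(spend, balance)
        overage = spend - drawn          # billed on top once the commitment is gone
        balance -= drawn
        print(f"Month {month}: drew ${drawn:,.0f}, "
              f"overage ${overage:,.0f}, remaining ${balance:,.0f}")

# Hypothetical $120K annual commitment, consumed unevenly as usage grows.
burn_down(120_000, [6_000, 9_000, 14_000, 20_000, 28_000, 45_000])
```

The vendor's downside is capped at the commitment, but the timing of the revenue now depends entirely on the customer's actual consumption curve.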
Some companies now adjust prices six or seven times a month to reflect changes in their own costs. While customers often push back on price increases, they increasingly expect price decreases when model or hardware costs fall. This feels less like traditional software pricing and more like ad tech or commodities trading.
Volatility is no longer a temporary condition. It’s a structural part of the AI economy. Pricing models have to be built with this in mind from the start.
The search for value alignment: Speaking the customer’s language
If usage-based and cost-plus models create too much complexity for customers, the logical alternative is to price the way customers already think about value.
“We're replacing a whole lot of functionality that has huge value to the enterprise. So we've got to think about it differently, and think about it as what's the value we're bringing? And then how do we charge a percentage of that value that covers the cost of the model? That different approach is where I think we're headed, because otherwise it's just a race to the bottom.”
Two models are gaining traction:
- “Human replacement” pricing: The AI agent is billed at the equivalent cost of a human worker. For a call center, the price might be benchmarked against an offshore agent’s hourly rate. This makes the cost instantly understandable for procurement teams and executives.
- Outcome-based pricing: Customers pay only for successful outcomes—such as a resolved support case, a completed transaction, or a verified data entry. This aligns incentives between vendor and customer, but it’s hard to operationalize. Defining “success” is not always straightforward. Tracking outcomes can require significant back-office work. And from an accounting standpoint, recognizing revenue can be complex.
Both aim to make the price reflect perceived value. However, the operational complexity can be as high as, or higher than, the technical complexity of usage-based billing.
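To show what the human-replacement benchmark looks like in practice, here's a sketch with hypothetical figures: the AI agent's per-conversation price is anchored to an offshore agent's loaded hourly rate, then checked against the vendor's own inference cost.

```python
# All figures are hypothetical, for illustration only.
human_hourly_rate = 9.00        # offshore agent, loaded cost per hour
human_conversations_per_hour = 6
human_cost_per_conversation = human_hourly_rate / human_conversations_per_hour  # $1.50

discount_vs_human = 0.40        # price the agent at a 40% discount to the human benchmark
price_per_conversation = human_cost_per_conversation * (1 - discount_vs_human)  # $0.90

inference_cost_per_conversation = 0.22   # vendor's own model + infrastructure cost

margin = (price_per_conversation - inference_cost_per_conversation) / price_per_conversation
print(f"Price per conversation: ${price_per_conversation:.2f}")
print(f"Gross margin: {margin:.0%}")     # ~76%, until model costs move
```

The buyer sees a number framed in terms they already budget for; the vendor still has to watch that last line, because the inference cost underneath it is the volatile part.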
When the metrics don’t match the model
We also discussed how companies measure and report performance in this new environment. ARR (annual recurring revenue) remains the default metric for boards and investors, but it often does not tell the whole story for AI-driven businesses.
Some companies now report ARR alongside “annualized usage” to capture the revenue above contracted minimums. For many AI-native businesses, usage is two to three times higher than the minimum commitment, which means a large portion of revenue is invisible if you look only at ARR.
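A simple illustration with made-up figures shows how large that blind spot can be: a customer on a $100K contracted minimum whose recent usage, annualized, runs at roughly two and a half times the commitment.

```python
contracted_minimum_arr = 100_000          # hypothetical annual commitment

# Metered usage revenue from the trailing three months.
trailing_quarter_usage = [18_000, 21_000, 24_000]
annualized_usage = sum(trailing_quarter_usage) * 4   # simple run-rate annualization

print(f"ARR (contracted):  ${contracted_minimum_arr:,}")
print(f"Annualized usage:  ${annualized_usage:,}")
print(f"Revenue invisible to ARR alone: ${annualized_usage - contracted_minimum_arr:,}")
```

On ARR alone this account reports $100K; the run-rate view shows a relationship two and a half times that size.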
There is no industry standard yet for how AI usage revenue should be defined or reported. This lack of standardization can make it harder for companies to communicate their growth story—and for investors to evaluate it accurately.
Why the AI pricing dilemma is now a universal challenge
What made our roundtable so compelling was not just the depth of the challenges discussed, but the fact that they were shared across such different industries.
A database company, an automotive software provider, and a fintech lender may serve different markets, but they’re now wrestling with the same set of problems:
- Pricing models confuse non-technical buyers.
- Margins are under pressure from volatile costs.
- Pricing needs to tie to perceived value without creating operational headaches.
- Revenue reporting doesn’t fully capture growth.
The AI pricing dilemma is no longer a problem for a niche group of AI startups. It’s a universal business issue for any company integrating AI into its products or processes.
Solving the AI pricing dilemma
At Orb, we work with both AI-native companies and long-established enterprises facing these pressures. We see firsthand how hard it is to balance technical cost drivers with customer expectations. Our platform is designed to:
- Support dynamic, transparent pricing that can adapt quickly to volatile cost inputs.
- Translate technical usage metrics into business-friendly value narratives that resonate with non-technical buyers.
- Combine ARR with usage analytics so leadership teams and investors can see the full revenue picture.
- Benchmark AI costs against human labor in ways that make the value case easy to understand and defend.
In short, we give companies the tools they need to navigate the AI pricing dilemma without sacrificing customer trust or financial predictability.
Turn these insights into action
If you’re wrestling with these questions in your business, you’re not alone. The challenges that once seemed unique to AI-native startups are now showing up in every industry.
We believe the best solutions will come from sharing ideas and learning from how others are approaching the problem. That’s why we hosted this roundtable—and why we’re inviting more leaders to join the conversation.
Let’s talk about what’s working, what’s not, and how to design monetization strategies that can handle real-world volatility while still delivering a pricing experience your customers can understand and trust.
