Beyond the hype: Strategies for effective AI value realization

Alvaro Morales

The potential value GenAI products can generate is well known. However, realizing that value remains elusive for most organizations.

Our first edition of the Engineering the Bottom Line event series recently brought together OpenAI Head of Growth Alison Harmon, Lamini Co-founder and CTO Greg Diamos, Pinecone CTO Ram Sriharsha, and Orb CTO Kshitij Grover to share strategies for effectively delivering enterprise value, moderated by Grace Ge of Amplify Partners.

Each panelist covered a crucial layer of the AI stack: the foundational model layer, training and inference, the data and database layer, and billing and pricing. The speakers then shared the measurable business outcomes their AI solutions and infrastructure provide, especially in the context of bringing their organizations upmarket.

Below, you’ll find a summary of the discussion, which also dove into AI’s potential to address enterprise-grade pain points and use cases, successful upmarket adoption of AI applications, emerging business models, and strategic pricing.

Optimizing for profit in AI

Grace kicked off the discussion by addressing the billion-dollar elephant in the room: with so much capital invested in AI across industries, where are the profits? Where can companies expect to find the most success with AI, and why?

Creativity creates value

Alison Harmon, Head of Growth at OpenAI, noted that companies used to approach OpenAI with board mandates to invest specific amounts in AI, then use it for basic tasks like summarization.

However, she’s seen that clients truly commit when they get creative with how they wield AI. “I could list you dozens of companies that have come to us and said, ‘My best-performing AI product is what someone came up with in a hackathon,’” she told the audience.

The proof is in the pudding, or in this case, the performance: AI products that are creative and unique perform better. The companies that excel will embrace creativity and understand that the AI market is so new that no one can yet know what AI success will look like in their industry.

Alison noted this usually isn’t what companies want to hear. They often want a playbook, but sometimes the best strategy is to give your staff creative control and be willing to pivot or persevere depending on what you hear from customers.

Greg Diamos, Lamini co-founder and CTO, also observed the importance of creativity and imagination when it comes to developing AI: “LLMs actually enable completely new applications, and we need a community of builders to build them and experiment with them. And it's so much easier than the last generation of AI.” 

Speed to ship and scale

Greg believes teams can build and launch their creative ideas much faster today. The last generation of AI involved deep learning, which meant enormous infrastructure needs and significant data labeling.

“It was just so hard to build anything,” Greg stated. “With an LLM [large language model], you can write a prompt and deploy it in production in 10 minutes.” 

Additionally, Pinecone CTO Ram Sriharsha mentioned some well-known hurdles in AI, including hallucination and grounding. Those problems are being addressed, but finding use cases that aren’t hindered by those blockers helps teams ship quickly and effectively.

Finally, starting small and choosing a stack that can scale over time makes it easier to ship updates and keep your product relevant. Ram explained, “I cannot tell you how many scenarios where customers came to us. They started really small, [then] within three [to] six months, they had 10x to 100x their workload. So, having these stacks that are able to grow 10x, 100x, even 1,000x is really important in this time and space.” 

Control pricing friction

Pricing is a source of friction in AI, and Orb CTO and co-founder Kshitij Grover said you have to determine how much friction you want your users and customers to experience.

In the experimentation phase, teams may want to provide new AI features for free to measure adoption and usage so they can gauge whether it provides value for their customers.

However, organizations must eventually buckle down on pricing as they consider profits and margins. According to Kshitij, companies must look at usage through a use-case and adoption lens and understand how customers utilize the platform and the costs they incur. Once a business understands this, it can start making more concrete decisions, such as pricing tiers and model selection.
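For example, a team might start with back-of-the-envelope margin math like the minimal sketch below. Every price and usage number here is hypothetical, not a figure from the panel:

```python
# Illustrative margin math for an AI feature; all numbers are made up.
MODEL_COST_PER_1K_TOKENS = 0.002   # assumed blended inference cost (USD)
PRICE_PER_SEAT = 20.00             # assumed monthly price per seat (USD)

def monthly_margin(tokens_per_seat: int) -> float:
    """Return gross margin per seat for a given monthly token volume."""
    cost = tokens_per_seat / 1000 * MODEL_COST_PER_1K_TOKENS
    return (PRICE_PER_SEAT - cost) / PRICE_PER_SEAT

# A light user and a power user can land on very different margins,
# which is what makes flat per-seat pricing risky for AI features.
for tokens in (500_000, 5_000_000, 20_000_000):
    print(f"{tokens:>12,} tokens/seat -> margin {monthly_margin(tokens):.0%}")
```

Even this crude model makes the panel's point visible: a heavy user can quietly flip a healthy margin negative, which is why usage needs to be understood before tiers are set.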

Open-source versus closed-source AI models

There’s an ongoing debate about open and closed generative AI models. Alison asserted, “We’re in an early phase of [the] market where competition is good.”

OpenAI was able to learn more about its customers by watching their reactions to open-source models. They observed whether customers liked them because of their size, speed, go-to-market strategy, fine-tuning provisions, or some other factor. “[W]e're in a place where there's going to be a lot of models that have a lot of different strengths,” Alison said.

As for OpenAI, the goal is to be the most intelligent, capable, and safest option. However, the company also encourages developers to experiment with different versions and classes of its models, and in keeping with that spirit, Alison welcomes the competition in the open-source space: “I think there's some things they do really well, and I love learning from that.” 

Greg added, “The pace has been relentless.” When Lamini started working with open models a year ago, the best one on the market was trained by high school students. Today, they see new models emerging daily.

Meanwhile, according to Ram, “The gap between open and closed models is slowly closing.” He also noted that evaluating those models is challenging because the evaluation criteria are often already known to the models themselves. “There is still work to be done,” he admitted. “But it's fascinating for me to see how far open source models have progressed.” 

Ram also stated that companies should evaluate using the model and data together. Instead of thinking “open source versus closed source,” consider how your application can achieve better-grounded results.

To wrap up the topic, Kshitij pointed out that the variety of AI models lets companies choose how they price different AI options. “Having multiple models available … gives you different ways to think about monetizing the experiences in your application,” he asserted. For example, you may have a free version backed by a smaller AI model and a paid version backed by a more sophisticated model. Today’s ecosystem of developer tools allows users to switch easily between options depending on what they want to pay and do.
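As a minimal sketch of that pattern, a tier-to-model lookup is often all it takes. The tier names and model choices below are illustrative assumptions, not recommendations from the panel, and the sketch assumes the official openai Python SDK:

```python
from openai import OpenAI  # assumes the official openai Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical tier-to-model mapping: a cheaper model for free users,
# a more capable one for paying customers.
MODEL_BY_TIER = {
    "free": "gpt-4o-mini",
    "pro": "gpt-4o",
}

def answer(tier: str, question: str) -> str:
    """Route the request to the model that matches the customer's plan."""
    response = client.chat.completions.create(
        model=MODEL_BY_TIER.get(tier, "gpt-4o-mini"),
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```

Because the model is just a parameter, the same request path can serve both tiers, and the pricing decision becomes a configuration choice rather than an architectural one.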

OpenAI’s move to enterprise

An LLM is enterprise-ready when it combines data privacy, functionality, and cost control.

Interestingly, Alison reported that her company is seeing more enterprise requests for customization, such as the ability to structure data a certain way or tweak AI models to behave in specific ways. OpenAI has seen significant demand for custom models that can map to unique data architectures and provide an individualized product experience.

Model training and data ownership

In the current AI cycle, inference and training are both prominent. Greg even said Lamini treats the two as coupled because “LLMs are a data derivative. You take the data, train the model on it, and then produce the weights, which are a derivative of the data.” 

According to Greg, enterprise customers want to own all of their data and every derivative of it, including the model weights resulting from training with that data. Adding your own data to a model creates a new one, which you then need to be able to use and call.

“...we defined the categories of training separately from the category of inference…. [I]n hindsight, I feel like it was a mistake because I've never seen a real application in my life that wasn't—it’s machine learning. The whole point of it is learning. I've never seen an application or a real production workload that was only training or only inference.” — Greg Diamos, Lamini co-founder and CTO
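To make that coupling concrete, here is a minimal sketch using OpenAI's fine-tuning API as a stand-in (Lamini's own API differs): training on your data yields a new model identifier, which you then call for inference like any other model. The file ID and model names are placeholders.

```python
from openai import OpenAI

client = OpenAI()

# Training: fine-tune a base model on your own data
# (the training file ID here is a placeholder).
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",   # previously uploaded JSONL training data
    model="gpt-4o-mini-2024-07-18",
)

# In practice you'd poll until the job's status is "succeeded".
job = client.fine_tuning.jobs.retrieve(job.id)
custom_model = job.fine_tuned_model  # the new weights: a derivative of your data

# Inference: you call the weights your data produced, so training and
# inference are two halves of the same loop.
completion = client.chat.completions.create(
    model=custom_model,
    messages=[{"role": "user", "content": "Hello"}],
)
```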

Native vector database benefits

When Pinecone first launched, they were the only commercial vector database offering. But today, many new start-ups have also released their own offerings. Grace asked Ram about the arguments for using a native vector database, and why Pinecone specifically.

Ram discussed how the industry has a few categories of vector databases: traditional databases that add vectors as a data type and call themselves vector databases, and native vector databases such as Pinecone, a distinction Pinecone’s blog explores further.

Ram emphasized the need to treat vector search workloads as fundamentally different from just another data type in a database. Traditional databases are built to handle input/output (IO)-intensive tasks.

However, vector search layered on top of a traditional database piggybacks on that IO-oriented infrastructure, which leaves it poorly suited to the CPU-intensive workload vector search actually requires.

“Why do you need Pinecone? If you need a database that's enterprise-ready, secure, SaaS-native, and can support massive scale, then there are really no other options.” — Ram Sriharsha, Pinecone CTO

The massive-scale element in particular has evolved in two directions simultaneously over the past few years: from millions of vectors to billions per index, and from tens or hundreds of partitions (what Pinecone calls namespaces) to thousands or even millions.
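As a minimal sketch of that partitioning model, assuming the current Pinecone Python SDK, with the index name, namespace, and vector dimension as placeholders:

```python
from pinecone import Pinecone  # assumes the pinecone Python SDK (v3+)

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("example-index")  # placeholder index name

# Namespaces partition a single index, e.g. one per tenant, so a query
# only touches that tenant's slice of the vectors.
index.upsert(
    vectors=[{"id": "doc-1", "values": [0.1] * 1536}],  # dim must match the index
    namespace="tenant-42",
)

results = index.query(
    vector=[0.1] * 1536,
    top_k=5,
    namespace="tenant-42",  # scope the search to a single partition
)
```

Scaling to thousands or millions of namespaces is what lets one index serve many isolated tenants instead of provisioning a database per customer.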

Pinecone’s focus on effective unit economics

Pinecone has worked with enterprise users to improve their unit economics. They introduced a serverless offering when they saw that some of their customers were paying for underutilized infrastructure that was running 24/7.

Ram said that Gong, one of their customers, “would have ended up paying significantly more than we wanted them to for their workload, and the first version of serverless we built was specifically for solving that use case.”

Gong had a labeling use case with on-demand queries: they pay when they run a query but don’t need the system up and running all the time, which resulted in massive cost savings.

“With the serverless architecture, the way we reduced costs was in part, by using object storage. We came up with a new indexing technique that can leverage cold storage, but at the same time have very low P99 [latencies].” — Ram Sriharsha, Pinecone CTO

AI is changing companies’ revenue models

The industry is still trying to figure out how to charge for AI. To illustrate this, Kshitij considered two prominent business cases for pricing: one type is an infrastructure company, such as OpenAI or Pinecone, which is AI-native; the other type is a SaaS company that started out with a non-AI product and later added AI to its offering.

Companies in the first scenario often have an easier time monetizing AI because their customers are already used to infrastructure spending and are familiar with usage-based pricing. These companies also usually show customers’ spend as it accrues, keeping that data visible and available.

In the second situation, companies that try to add AI to their existing offering (like Slack, Notion, or GitHub) struggle to monetize the technology and find the right pricing fit. Their customers may be more familiar with seat-based pricing, and sometimes, the AI feature or product ends up being more expensive per seat than the original SaaS.

“...[I]t's hard for a lot of these application companies because it's not just a matter of adding a usage-based component,” Kshitij detailed. “[Adding AI] changes your whole go-to-market DNA. You have to do this big change management exercise with your customers to … train them to think about accruals in an unpredictable way.” 

Keep pricing and packaging flexible

Updating your pricing structure is challenging for many reasons. One major hurdle is training your customers on the new pricing. Beyond that, the billing infrastructure itself is often complex and intertwined with the product in easily overlooked ways.

Orb addresses the need to keep pricing flexible. We help companies take the usage data their applications generate, price and charge for it, generate invoices, and handle subscription management.

As Orb CTO and co-founder Kshitij explained, “What Orb does is it allows you to set up that pricing. It allows you to version and migrate customers to new versions of pricing. It's built to be really flexible, both in the actual billing logic but also in how you send that data to Orb and determine … how you're going to aggregate it and what you're actually going to charge for.” 
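As a rough sketch of that flow, assuming Orb's HTTP event ingestion endpoint, with the event name, customer ID, and properties as placeholders:

```python
import uuid
from datetime import datetime, timezone

import requests  # third-party HTTP client

# Report one usage event to Orb's ingestion endpoint; Orb aggregates
# these into whatever you've configured as the billable metric.
event = {
    "event_name": "tokens_generated",        # placeholder metric name
    "external_customer_id": "customer-123",  # placeholder customer ID
    "idempotency_key": str(uuid.uuid4()),    # guards against double-counting
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "properties": {"model": "gpt-4o-mini", "tokens": 1532},
}

response = requests.post(
    "https://api.withorb.com/v1/ingest",
    headers={"Authorization": "Bearer YOUR_ORB_API_KEY"},
    json={"events": [event]},
)
response.raise_for_status()
```

Because pricing logic lives in Orb rather than in the application, repricing the same event stream (per token, per request, or per tier) does not require changing the instrumentation.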

Pricing is fluid, especially at this stage of the AI market. As new models emerge and pricing changes across different infrastructure layers, companies will need an effective and reliable way to adjust pricing as necessary to protect their margins and optimize for profit.

Posted: May 20, 2024
Category: Conversations
