OpenAI hostages: AI bulls concerned over ChatGPT dependency – Paul J. Davies

In the rapidly evolving landscape of generative AI, OpenAI has emerged as a front-runner, captivating enterprises with the promise of significant productivity boosts and cost savings through tools like ChatGPT. However, the dependency on a few dominant AI models raises concerns about potential lock-in scenarios, similar to those experienced with cloud services. As OpenAI’s technology becomes integral to business operations, the spectre of increased fees looms, prompting businesses to ponder their future strategies amidst growing reliance on these powerful AI tools.


By Paul J. Davies

Most people were barely aware that OpenAI existed 18 months ago. Now, ChatGPT is among the most desired tools for businesses everywhere. But as company chiefs dream about the productivity gains and cost savings to be had, many fear becoming dependent on one of the few major generative artificial intelligence models in a way that may prove hard to escape.

OpenAI has been quick to capture some big names as enterprise customers. Its website lists companies such as PricewaterhouseCoopers, Amgen Inc., JetBlue Airways Corp., Riot Games Inc. and Klarna Bank AB. Last year, I wrote about Morgan Stanley becoming one of the first big banks to roll out an internal chatbot based on OpenAI’s model.

OpenAI told Bloomberg News in January it had 260 corporate customers, while this month it said it had 600,000 enterprise users, although it didn't specify how many individual companies it had signed up. Its rivals at Alphabet Inc., Meta Platforms Inc., Anthropic and others are playing catch-up.

What's fascinating is how much value some businesses think they already get from these tools — which raises the question of why the AI vendors are charging prices that capture only a fraction of that value. So I've been asking companies and investors when they think OpenAI and others will start to jack up their fees, and how businesses will respond.

The problem of getting locked in to a key vendor is very familiar. Most people I’ve spoken to likened the coming GenAI boom to the expansion of cloud-computing services: After a period of strong early growth, the costs began to rise and companies looking to diversify their suppliers found they had programming and data management problems to overcome, as well as contracts to renegotiate. 

No one was willing to speak on the record about how this might play out with OpenAI and its rivals, partly because they didn’t want to risk upsetting nascent relationships with these companies, but also in truth because they just don’t really know what the answers might be yet. So what follows is informed speculation, or at least a rough guide to the potential issues. 

In terms of value, the highest multiple I heard was from a growth-stage investor who said one of his companies generated productivity gains and cost savings worth 10 times the annual fees it was paying to OpenAI. Another user wouldn’t give a figure, but simply said the economic value was many times the cost.

Charging little is a standard move in the Silicon Valley playbook as startups pursue a rapid land grab in an effort to become as ubiquitous as possible in their target market. For a large language model, there’s an extra advantage: The more people use it, the faster its capabilities improve. And for OpenAI and Google’s Gemini, there’s the additional benefit that AI use drives demand for computing power in the cloud services of Microsoft Corp. and Alphabet. They get two bites of the revenue cherry.

Most people I’ve asked expect that prices will start rising once GenAI models become embedded in their customers’ ways of doing things. But the questions, then, are by how much and what — if anything — will users be able to do about it? Will it be harder to switch your AI model to a new vendor than it has been to change cloud providers, or to move from Apple Inc. to Google’s Android?

The truth is no one knows. A lot depends on how these models develop, and whether the makers try to lock customers in or make it easy to move between providers. It will also depend on what a company is using the AI for: The more important it is to the business and the more critical it is that the outputs are accurate, the harder and costlier it could be to substitute.

OpenAI has three tiers for enterprise customers. The first is web access to ChatGPT at $60 per person each month, for general research or writing quick first drafts of things like marketing materials. Switching this for another product is simple. The second is access to the model through an interface on which to build applications; this is more akin to using cloud services in terms of the technical challenges switching might involve.

The third tier involves working closely with OpenAI to get a private version of the foundation model and then train, or fine-tune, it for a specific and likely more critical job. This is where the fear of lock-in to a single vendor is greatest, especially in highly regulated activities such as finance. The time and potential cost involved in starting again with a new model could be much greater.

The pricing even of the most sophisticated tier is being held down in part by open-source AI models, which are free to access. However, there’s a gamble in using these because companies have to invest much more in making them usable, while hoping the open-source community keeps pace with the better-funded closed-model companies.

Some think changing models will be cheap and easy. Partners at Andreessen Horowitz recently surveyed 70 AI decision makers at large companies and concluded that most are designing applications in a way that should make switching possible. One of the users I spoke with, who works at a large financial group, anticipates that future models will be so good that you'll be able to just point them at a data set or task and they will perform well with little if any tuning.
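The design the survey respondents describe can be sketched as a thin abstraction layer: the application codes against a neutral interface, and the concrete vendor sits behind it. The class and function names below are illustrative only, not any vendor's actual SDK — real clients would wrap the respective provider libraries.

```python
# Minimal sketch of vendor abstraction: business logic depends on a neutral
# interface, so the model provider can be swapped without rewriting the app.
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """Neutral interface the application codes against."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class VendorAClient(ChatModel):
    # In practice this would wrap one provider's SDK; stubbed here.
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"


class VendorBClient(ChatModel):
    # A second, interchangeable backend.
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"


def draft_marketing_copy(model: ChatModel, product: str) -> str:
    # Business logic sees only the interface, never a specific vendor.
    return model.complete(f"Write a one-line tagline for {product}")


# Switching vendors is then a one-line change at the point of construction:
print(draft_marketing_copy(VendorAClient(), "widgets"))
print(draft_marketing_copy(VendorBClient(), "widgets"))
```

Under this pattern the switching cost is confined to writing one new adapter class, rather than touching every place the model is called — which is precisely the distinction between the second tier above and deeper fine-tuning integrations.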

However, a user at a different large financial group had the opposite view. It's not the training that takes the time; it's the evaluation for efficacy and accuracy, which takes months of work and will likely continue to do so. Rushing out a new tool without proper testing can lead to errors like New York City's MyCity chatbot, which gave local businesses wrong answers and law-breaking advice. That would be disastrous for any regulated or business-critical activity.

Still, GenAI companies might never need to push up pricing. Computing capacity should keep getting cheaper as chips continue to improve. And if GPT-type tools become as ubiquitous as the companies behind them hope, the market will be so large that very low pricing will still generate fantastic revenue. 

OpenAI and its peers are likely to end up with enormous market power and look poised to become systemically important suppliers in the way that cloud services are already. As companies rush to get their own versions going, they should do everything they can to plan for that day — and avoid being held hostage by a single provider.


© 2024 Bloomberg L.P.
