How 100 Enterprise CIOs Are Building and Buying Gen AI in 2025

By Sarah Wang, Shangda Xu, Justin Kahl, and Tugce Erten

Just over a year ago, we highlighted 16 changes to the way enterprises approached building and buying gen AI. Since then, the landscape has continued to evolve quickly—so we revisited our conversations with over two dozen enterprise buyers and surveyed 100 CIOs across 15 industries to help founders understand how these leaders are using, buying, and budgeting for gen AI in 2025 and beyond.

Even in a field where the only constant is change, the gen AI market structure has evolved significantly beyond our expectations since we ran our last survey over a year ago. 

  • Enterprise AI budgets grew beyond already high forecasts and graduated from pilot programs and innovation funds to recurring line items in core IT and business unit budgets. 
  • Organizations are much more sophisticated at mixing and matching multiple models to optimize across both performance and cost. OpenAI, Google, and Anthropic took dominant overall market share in our survey while Meta and Mistral were popular among open source options. 
  • Procurement now mirrors traditional software buying—with more rigorous evaluations, hosting considerations, and benchmark scrutiny—while increasingly complex AI workflows are driving higher switching costs. 
  • Meanwhile, the AI app landscape has matured: off-the-shelf solutions are eclipsing custom builds and rewarding AI-native, third-party applications.

Read the full report here


More from the a16z Growth Compendium


16 Changes to the Way Enterprises Are Building and Buying Generative AI in 2024

By Sarah Wang and Shangda Xu

Generative AI took the consumer landscape by storm in 2023, reaching over a billion dollars of consumer spend in record time. In 2024, we believe the revenue opportunity will be multiples larger in the enterprise.

Last year, while consumers spent hours chatting with new AI companions or making images and videos with diffusion models, most enterprise engagement with gen AI seemed limited to a handful of obvious use cases and shipping “GPT-wrapper” products as new SKUs. Some naysayers doubted that gen AI could scale into the enterprise at all. Aren’t we stuck with the same three use cases? Can these startups actually make any money? Isn’t this all hype?

Over the past couple of months, we’ve spoken with dozens of Fortune 500 and top enterprise leaders, and surveyed 70 more, to understand how they’re using, buying, and budgeting for generative AI. We were shocked by how significantly the resourcing and attitudes toward gen AI had changed over the last six months.

Though these leaders still have some reservations about deploying generative AI, they’re also nearly tripling their budgets, expanding the number of use cases that are deployed on smaller open-source models, and transitioning more workloads from early experimentation into production. 

This is a massive opportunity for founders. We believe that AI startups that 1) build for enterprises’ AI-centric strategic initiatives while anticipating their pain points, and 2) move from a services-heavy approach to building scalable products will capture this new wave of investment and carve out significant market share.

Read here


Connect with a16z Growth

Want to stay informed on the latest from a16z Growth? Subscribe here to receive these LinkedIn newsletters regularly and check out our company building & scaling compendium here.


Disclosures

You are receiving this newsletter because you previously opted in; if you would like to opt out of future newsletters, you can unsubscribe at any time.

This newsletter is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. This newsletter may link to other websites and certain information contained herein has been obtained from third-party sources. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation.

References to any companies, securities, or digital assets are for illustrative purposes only and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund which should be read in their entirety.) Past performance is not indicative of future results.

Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Content in this newsletter speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see disclosures for additional important information.


