In Part I, we examined what the purpose of AI is: a hyper-intelligent machine that is an agent for human improvement, or a replacement for commoditized and mediocre human labor? After that post, a founder friend sent me a great video of Steve Jobs that reiterated something to me: contra the typical engineering mindset, the point of technology is the problems it solves, not the advance in the technology itself. Many people building LLM startups start with the tech first and look for a problem; generational companies start by solving problems and work backwards to whatever technology is useful.
PART II
If they don’t develop an AGI, do current LLM providers have the right approach for solving enterprise use cases?
Will cheap open-source models that are systematically applied win over “best-in-breed” foundation models?
The conditional clause opening the first question is a crucial assumption to flag here because it sets the scene for the second question. I think the way most VCs look at the space is: “This might be the next paradigm shift of the 2020s, like cloud and mobile in the last two decades. How do I invest in as many companies as possible with exposure to this trend?” followed by investing in copywriting and PowerPoint tools powered by “Generative AI” built on top of OpenAI’s APIs. It’s just another tech trend where you can make a market map and see which companies are hot in the latest YC batch or spinning out of OpenAI.
Meanwhile, Sam Altman and many others building AI technology have fears that it could be an existential risk to all of human life in the extreme case and deeply displacing to human society in the base case.
Why does this distinction in how people are approaching the space matter?
Because I think the purpose for which something is built informs everything about it: the speed of iteration, the specific choices at every technological layer, and whether a company chooses to in-house or outsource various aspects of deployment.
I don’t know Sam Altman personally, but I think building an AGI is more important to him and the engineers at OpenAI than finding “enterprise use cases” and scaling revenue in the meantime.
Do they want revenue? Of course.
Do they want interesting applications to be built on top of them? Certainly.
But do I think that it’s the fundamental North Star of the company and their raison d’être? No.
As a reminder, OpenAI was founded as a nonprofit research lab. The logical explanation for why they might not care about enterprise use cases or very specific problems in the short term is that they believe once they have an AGI that replaces most economic work, they will make tons of money and there’s no need to worry about the revenue model. If anything, they’re more concerned about the redistribution of resources if they make AGI happen than about hitting yearly revenue targets.
There’s also the subtler, massive shot at glory: if they’re the first to do it, OpenAI (and Sam especially) will go down in the history books as the creators of something humans could only dream of building for decades.
It’s the subtext in all of their blog posts around safety and alignment; they truly believe they are working on the most important technology ever created, one that might fundamentally alter all of existence.
Put yourself in their shoes. Even a five percent shot at what might be eternal glory matters a helluva lot more than whether you build better infra for note-taking apps. You need to have some plausible use cases, cool demos, and the occasional Jasper breakout story to continue to attract funding from Microsoft and others, but the real prize at the end is AGI.
That’s why I think they’ve chosen to be an API-first company and offload enterprise deployment to partners like Bain and others. The assumption is that the technology is so good today that even the management consultants will be able to figure something out. I doubt many people at OpenAI really want to go do the dirty work of selling and implementing their models into Coca-Cola or wherever.
So if OpenAI and the other would-be creators of AGI at foundation model companies like Anthropic care less about mundane things like “revenue” and “GTM motion,” it seems reasonable to ask whether they have the right approach for deploying LLMs into the enterprise should they fail to create an AGI.
My TL;DR is likely no.
Much of my thinking here is informed by this article by Ben Van Roo and a few other conversations with founders. The key takeaway is that a handful of things stand in the way of widespread enterprise deployment of a foundation model like GPT-4:
Factual accuracy
Capturing updated information and news
Contextual corporate knowledge
Privacy and security issues
Cost of compute
The first two are well understood by this point. Depending on the workflow, constantly fact-checking the model’s output may create more friction than the time it saves. As mentioned in my previous article, this is why I expect most of these LLMs to slot into workflows where errors don’t materially disrupt a user’s progress and are easy to evaluate and fix. Meanwhile, adding new information to a pre-trained model is a solvable problem, but it comes at the cost of continually retraining or augmenting an already expensive model.
A potentially valuable application for LLMs will be providing company-specific contextual knowledge by looking across internal data stores. That requires direct plugins into those stores, which opens up the privacy and data question. For GPT-3, any data you give to OpenAI can be used to fine-tune the model, but the model and weights stay with OpenAI. Will most corporations want sensitive data flowing into OpenAI’s model on a one-way street? Could competitors use the very model you’re helping train to bypass you along certain vectors? Does handing all this data to OpenAI pass the privacy and security sniff test for any CISO at a big enterprise?
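To make “looking across internal data stores” concrete, here’s a minimal sketch of the retrieve-then-prompt pattern that most of these internal-knowledge products boil down to. The retriever is a toy TF-IDF ranker from scikit-learn, and the documents, query, and prompt template are all hypothetical:

```python
# Minimal retrieve-then-prompt sketch (hypothetical corpus and query).
# The retrieval step runs entirely in-house; only the assembled prompt
# would ever be sent to a model, which is exactly where the privacy
# question bites.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

internal_docs = [
    "Q3 sales in EMEA grew 12% driven by the enterprise tier.",
    "The refund policy allows returns within 30 days of purchase.",
    "On-call rotation for the payments team changes every Monday.",
]

query = "What is our refund policy?"

# Rank internal documents by similarity to the query.
vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(internal_docs + [query])
scores = cosine_similarity(vectors[-1], vectors[:-1]).flatten()
best_doc = internal_docs[scores.argmax()]

# Assemble the context-stuffed prompt. If this goes to a hosted API,
# the internal document leaves the building; if the model is local, it doesn't.
prompt = f"Context: {best_doc}\n\nQuestion: {query}\nAnswer:"
print(prompt)
```

The punchline is in the last step: if that assembled prompt goes to a hosted API, your internal documents go with it; if the model runs locally, they never leave.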
To those questions, the answer is likely no, which means that GPT-4 and others would have to be deployed on company servers. As Ben mentions in his article, while the costs of GPT-4 are unknown, a locally hosted instance could require up to 500 GPUs running simultaneously for inference. How many enterprises are willing to pay that cost for a privately hosted instance, particularly while actual use cases are still being figured out?
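For a rough sense of scale, here’s a back-of-the-envelope calculation. The 500-GPU figure is Ben’s upper bound; the ~$3/hour per A100-class GPU is my own assumption for on-demand cloud pricing:

```python
# Back-of-the-envelope cost of a privately hosted frontier-model instance.
# Assumptions (mine, not OpenAI's): 500 GPUs reserved around the clock,
# ~$3/hour per A100-class GPU on-demand from a cloud provider.
gpus = 500
usd_per_gpu_hour = 3.0
hours_per_year = 24 * 365

annual_cost = gpus * usd_per_gpu_hour * hours_per_year
print(f"~${annual_cost / 1e6:.1f}M per year")  # ~$13.1M per year
```

Even if those assumptions are off by a factor of two or three in either direction, that’s a board-level line item for a technology whose use cases are still being figured out.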
All of this leads me to the general conclusion that while OpenAI and other foundation model providers will likely produce the highest-performing models that most closely mimic an “artificial general intelligence,” they will not be widely deployed into the enterprise, particularly over the next few years. There are just too many issues that will create friction in an already-slow implementation process. And perhaps just as importantly, I don’t think this is what the people at these foundation model companies are maniacally focused on: they’re trying to build a machine God, not increase corporate efficiency.
So does this mean foundation models are a complete headfake that will be just as quickly forgotten as the corporate “web3 strategies” of a couple of years ago?
My short answer is no, with a caveat. I think what we’ll see instead is corporations using open-source, significantly cheaper models that they can deploy locally while feeding them internal, private corporate data. They can then use those models to solve specific business problems like summarization of internal knowledge, customer service, and enterprise search. The caveat is that every couple of years, the foundation model providers will launch a new model that blows away the open-source models in performance, and there will be noise about moving over; but within a short time, open-source will catch up. That lag will be short enough that corporations will be happy to wait for the cheaper models.
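For flavor, here’s roughly what the cheap-and-local path can look like today: a minimal sketch using Hugging Face’s transformers library. The model choice and the memo are illustrative; the point is that the weights download once and inference runs on company hardware:

```python
# Summarizing internal text with a locally run open-source model.
# The weights are downloaded once and inference happens on company
# hardware, so no corporate data is sent to a third-party API.
from transformers import pipeline

# Illustrative model choice; any local summarization checkpoint works.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

internal_memo = (
    "The platform team migrated the billing service to the new cluster "
    "over the weekend. Latency dropped 40% and two legacy cron jobs were "
    "retired. Remaining work: decommission the old database replicas."
)

summary = summarizer(internal_memo, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```

A model like this won’t match GPT-4 on open-ended tasks, but for a bounded job like internal summarization it’s often good enough, and the privacy and cost trade-offs run entirely in its favor.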
Opportunities for startups will be in providing the infrastructure and operations for deploying open-source models internally within the enterprise. Some early examples of companies in and around this space include Dust, Langchain, Yurts, and a few other stealth startups yet to be announced, but it’s still very much early days. If you’re a venture investor, there’s a big question of how much of this will require a services layer versus a pure software business model, but there should be a couple of large companies built here.
Ultimately, the key takeaway is that I don’t expect these expensive foundation models, which are being trained for a very different goal, to become widely adopted in actual enterprise use cases in the short term beyond a few flashy announcements and partnerships.
We’ll talk more about where we see opportunities for startups versus incumbents in the next piece, but let’s recap the major points here:
OpenAI and other foundation model providers are looking to build an AGI, not enterprise software infrastructure.
The choices they’re making to build an AGI don’t necessarily provide the best foundations for deploying LLMs into the enterprise, at least in the short term. Specifically, there are problems around factual accuracy, updated knowledge, contextual private corporate knowledge, privacy and security, and cost of compute that may not be resolved by better, larger models, and in some cases definitely won’t be.
Even though they won’t be best-in-breed and will continually be playing catch-up, cheaper open-source models will be the most common form factor for deploying LLMs into the enterprise. Startup opportunities will be around providing the infrastructure and operations layer for these models.
Next time, we’ll dive into the startup vs incumbent opportunity (who wins, Figma or a new startup? Microsoft or Tome?) and expand upon what startups and verticalized approaches make sense given our views so far.
Thanks to Jungwon Byun, Blake Eastman, John Dulin, and Ben Van Roo for reading earlier versions of these articles and their valuable feedback. And once again, if you think I’m terribly wrong or missing something, please let me know. Cunningham’s Law is my friend.