The AI Moment, Part IV: Lightning Round
Random thoughts on AI and its impact on the coming decades
To wrap up our AI series, we’re going to do a lightning round of a few random thoughts I’ve had thinking about AI and its impact on society and business over the coming decades. Originally, I wanted to take a look at what it would mean if we had so-called “AGI,” but this term has become so amorphous and difficult to parse that it’s hard to write a treatise without first establishing what people mean. A God-like AI that we are trying desperately to control or “align” differs quite a bit from a software program or service that augments human intelligence and productivity. It’s hard to encapsulate the different aspects of the conversation into a coherent piece on Substack, so I’m going to leave this topic for in-person conversations and debates.
So without further ado, let’s do a lightning round:
It’s human nature to make God in our own image. Three thousand years ago, the most powerful gods of mythology were not “super-intelligent” ones but ones of might and strength, hurling lightning bolts and waging war. In 2023, in a post-industrial knowledge economy, the God we are most afraid of is now one wielding intelligence.
Eliezer Yudkowsky, the leading AI doomer, envisions AGI as a smarter, “more perfect” version of himself: an ideal Bayesian reasoner utilizing von Neumann decision theory to remake the world and civilization the way it sees fit.
Ever since Eve was tempted in the garden, humans have wanted to be “like God, knowing good and evil” (Genesis 3:5). The promise (and fear) of AGI writ large is that we will have infinite intelligence like God.
Scientists tend to break down human action into biology and socialization. It’s not surprising that the current belief is that AGI can be created by machine biology (i.e., computer programming/LLMs) and socialization (RLHF).
However, most people behave in the world as if people have a soul or consciousness that can act out of free will and choose to buck biology or socialization. Do machines need a similar ‘divine spark’ to become conscious and act out of free will?
RLHF is a net negative drag for creating hyper-intelligent machines. I posited this possibility back in Part I, but the ongoing neutering of GPT-4 is really proving this. It flattens a machine with incredible potential into a mid corporate employee: one could argue socialization does the same to us.
In the long run, which AI model you’re using in most business contexts will be a similarly important question to “what cloud provider are you using?”
ChatGPT might be the Starlink of OpenAI: an intermediate revenue-generating step to support the longer-term goal of AGI[1], but not necessarily a core part of the technology that will get us there. (h/t Aashay Sanghvi who came up with this take originally)
Companies of the future will be smaller in terms of headcount. Tech companies grew into behemoths tasked with employing the overproduced elite, but a combination of higher interest rates and AI may curtail that phenomenon.
Successful companies will be able to build for smaller markets and with smaller teams because of cheap AI tools. The leverage of AI means you can build custom-made, well-designed products without the cost of human capital. Niching down will be a viable business strategy.
Venture capital will return to being a cottage industry. Most VC funds will die. Markets that were previously served somewhat mediocrely by horizontal venture-backed companies can be won by smaller, non-venture-backed companies doing low seven figures of revenue rather than hundreds of millions or billions.[2]
As we increasingly hand over more aspects of work to AI systems, we will have to deal with the increasing risk of adversarial attacks on these systems. One example is that slight alterations in X-ray images in a “way that is imperceptible to humans” can cause a “(deep neural) network to change its classification from say 99 percent confidence that the image shows no cancer to 99 percent confidence that cancer is present.”[3] Cybersecurity and internal controls will be critical.
This illustrates the black-box nature of these models: we don’t fully understand how they arrive at their predictions, which leaves a large surface area for attacks. In general, the adversarial and complex nature of the real world is always underestimated by people who fall in love with the purity of models, experiments, and backtesting.
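The mechanics behind those X-ray attacks can be sketched with a toy model. This is a hedged illustration, not a real medical classifier: the “image” is random noise and the “model” is a single linear layer, all numbers are synthetic, but the same gradient-sign trick (FGSM, from the adversarial-examples literature) is what flips real deep networks:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d = 28 * 28                          # a flattened 28x28 "scan" (synthetic)
w = rng.normal(size=d)               # frozen weights of a toy linear classifier

# Build a synthetic image the model confidently scores as benign:
# project a random image onto the hyperplane where the logit is -4.6,
# i.e. roughly 1% predicted probability of "cancer present."
b = rng.uniform(0.0, 1.0, size=d)
x = b - ((w @ b + 4.6) / (w @ w)) * w
p_before = sigmoid(w @ x)            # ~0.01

# FGSM-style step: nudge every pixel by a tiny eps in the direction that
# raises the logit. For a linear model the gradient of the logit with
# respect to x is exactly w, so the sign of the gradient is sign(w).
eps = 0.015                          # ~1.5% of the pixel range
x_adv = x + eps * np.sign(w)
p_after = sigmoid(w @ x_adv)         # ~0.99

print(f"before: {p_before:.3f}  after: {p_after:.3f}")
```

No single pixel moves more than 1.5% of its range, which is why the two images would look identical to a human; the tiny nudges all point the same way as the model’s gradient, so they add up across hundreds of pixels and swing the prediction from one extreme to the other.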
This lack of understanding of how a model arrives at its answers is also why a lot of the “Wow, GPT-4 did this!!!” papers have failed to replicate under any level of rigor. Melanie Mitchell has some good writing on this.
Refinement culture is going to get even worse with AI-generated content.
The concept of AI as a “rational utility maximizing agent” is an anthropomorphism (and not a good one at that, as not even humans are rational utility maximizing agents).[4]
Evolutionary psychology, game theory, and decision theory fail to predict basic human behavior. Using these as frameworks for how a machine should or will operate might not be the ideal setup even if some people fall in love with the formality of these systems.[5]
Extrapolating the past isn't necessarily indicative of the future. Past performance isn't indicative of future returns, as they say about the stock market.[6] Looking to the past and saying "Hey, previous technology worked out alright for us!" isn't an argument from first principles. The track record does raise the bar of evidence for why this particular thing is so different, but it's not an argument in and of itself.
Marc Andreessen recently made the argument that there is nothing to fear from increased technology and automation by citing employment numbers and wage growth. While wage stagnation in the US is somewhat debatable[7], the employment numbers are definitely fake. Ten percent of American men between the ages of 25 and 54 have simply dropped out of the workforce and aren’t looking for work, compared to two percent half a century ago. Additionally, being ‘employed’ as a gig worker (aka itinerant butler for the upper-middle class) with variable pay and without healthcare or retirement benefits is hardly the same as holding a stable factory job fifty years ago.[8]
More broadly, outside the US, wage and job growth in the rest of the world post-Volcker shock in 1980 is pretty questionable. Per capita GDP has barely grown in countries as far-flung as Brazil, Ivory Coast, Guatemala, etc. France and other European countries have youth unemployment rates hovering in the 20-40% range. Even in India, 90% of the jobs created over the past three decades have been in the “informal sector”[9] rather than the IT industry. The remarkable China story has obfuscated mediocrity and stagnation everywhere else.
Tech, finance, and other sectors of the “knowledge economy” have never been able to absorb large amounts of labor like agriculture and manufacturing. Past a certain point, additional software engineers and bloated teams actually reduce productivity. Tech and finance run on linchpin or Pareto models of talent where elite talent is necessary. Comparatively, agriculture and manufacturing can make even the marginal worker (or “C” player in tech worker parlance) productive.
If AI is the apotheosis of a knowledge economy, the nuanced position is that this is both good and bad for labor. Despite much belief and many platitudes to the contrary, talent in a knowledge economy is not normally distributed, or even evenly distributed. There will likely be an increasing bifurcation in society between those who can participate in a knowledge economy supercharged by AI and those who cannot.
For those who cannot:
Best case: yoga teacher or creator
Base case: retail or service worker
Bear case: unemployed in parents’ basement and/or potentially addicted to drugs
We need a solution to avoid this. Some have floated UBI, but that once again mistakes humans for “homo economicus” rather than “homo sapiens.” We need meaning and purpose in life.
Manufacturing and industrialization are essentially the only way developing countries have been able to become developed so far. Countries that have tried to skip this stage or prematurely de-industrialized and de-agrarianized face the problem of itinerant, underemployed workers and decomplexified economies where wealth is controlled by a small, mostly corrupt elite class. This describes the majority of Latin America and Africa currently. The way AI can save the world is if it helps create or augment so much human knowledge and intelligence that it can be redirected toward production and building in the physical world, augmented also by machines. Mass employment in well-paying and productive jobs becomes possible. We discover what to do, what to build, and what to create so that we have a new Industrial Revolution on the scale of the first.
That’s all for now. Some of these thoughts are still developing so feel free to push back and share with me what you’re thinking. Despite some of the concerns I’ve laid out, I’m still long-term bullish on the opportunity for AI to help create wealth in aggregate. It’s not God, but a tool that we can use to steer away from stagnation and decay if used the right way.
Thanks for reading and if you’re a founder building an AI-powered company (my current interests were previously laid out in Parts II and III, but always happy to hear a pitch and what I’m missing), please reach out at pratyush [at] susaventures [dot] com
Or spacefaring civilization in SpaceX’s case.
These will be great businesses, but venture is the wrong financial instrument for them. As Chris Paik said, “Venture capital is a tool purpose built for companies capable of explosive value creation in compressed periods of time.” There are a lot fewer of those companies and opportunities than was believed in the ZIRP era.
Melanie Mitchell, “Artificial Intelligence: A Guide For Thinking Humans”
A digression, but something I’ve been thinking about a lot since I read Grace Kasten’s piece on Homo Economicus vs Homo Sapiens, where she pointed out that the most valuable companies post-Internet appeal to our sapiens nature, not economicus. This is why consumer companies, when they work, are larger than enterprise ones. However, that also makes them harder to build because the consumer is more flighty and harder to plan for. Appealing to our "truer" nature is why the “7 deadly sins” framework works: it's more emblematic of us than the "rational" mindset. It’s why Apple extracts an order of magnitude more profits than Android.
From Harmless AI’s book “Anti-Yudkowsky”: William Blake says “that formal systems (natural religions) always have some imaginative work being done to generate the system in the first place. Systems are borne at a given moment because there is an event happening in the world to generate the spark of insight which allows a system to be formalized. But after the system's formalization, man forgets the images and desires that swept thru his intellect to generate the system and imagines the system was present before time began, as it becomes a tautology, and all tautologies hold in all possible worlds.” Examples include ideas like Darwinian ethics, Marxism, Freudian psychology, Jungian psychology, the Christian philosophy of human rights and the value of the weak and poor, etc: frameworks and ideas that originated in the minds of men and were then taken to be tautologies or universal truths or ideas that were there "from the beginning." We even go back and look at primitive man and analyze them through that lens when it would make little sense to them. Their actual worldview is completely alien to us. One must remember that these ideas and frameworks we’ve created are usually products of their time, and they do not necessarily hold in all possible worlds across all possible times.
More frivolous examples like this in VC are ideas like founder-market fit, $1m ARR is a good Series A metric, Rule of 40, etc. These are all ideas that originated in the minds of men and then took on the framework of "this is how great companies are built or found" as if it's a universal truth or natural law. They are none of those things: nobody in the 1990s was looking for FMF as a way to assess companies. These are not tautologies or universal truths, but ideas that we came up with and use as proxy heuristics. "Society then cantilevers over them" as Chris Paik says. Passive investing is another one. It's done well for 40 years so we think "well, it must just be a truth of the universe that this or investing in real estate is the best way to build wealth." However, these ideas were born of events that happened around the world that generated the spark of insight for system formalization: for passive, the 40+ year bull run and the active managers whose investing helped make the market more accurate; for homeownership, the boomers, population growth, and housing restrictions. It's not something that holds in all possible worlds, but a system that arises of its own time. Often, the recognition of these as excellent frameworks to describe the past is the very thing that then kills their accuracy to predict the future.
I expect we'll see many more LTCM-style (or bigger) blowups from supposedly "10-sigma unlikely" events as people use models to construct portfolios or get financial advice.
Your starting point matters quite a bit and you’ll get a big difference if you start in 1990 versus 1973.
It’s fair to argue that a middle class with stable jobs and good benefits is more of a historical anomaly than norm.
The informal sector refers to what Dutch sociologist Jan Breman calls “wage hunters and gatherers”: “the unlicensed taxi drivers, roadside fruit peddlers, freelance porters, squeegee men and women, bidi rollers, beggars, rag pickers, clothing resellers, small-time scammers and thieves, bazaar porters, and general-purpose unskilled jobbers who constitute the majority of the populations of cities everywhere from Kabul to Kabinda to Managua.”