I should probably preface this with the caveat that this is purely my hypothesis, and I can’t speak for Other Sam in any official capacity…but in a fantastic article from 2016 titled Sam Altman’s Manifest Destiny (absolutely worth reading), he did plenty of speaking for himself, and that article sent me down a Sam Altman rabbit hole. Many, many videos and articles and books later, here we are.
And as a fellow Sam, neurodivergent, AI aficionado, student of history, lover of science fiction, and an extremely skilled dot connector, strategist, and tactician…I have a few thoughts that I think are at the very least directionally accurate ;)
Strap in.
When assembling a puzzle, it helps to lay out all the pieces and get a bird's-eye view of the metaphorical dots that need connecting, so here they are in no particular order:
Sam Altman is a keen student of history, and has studied Napoleon (arguably one of the greatest strategists and tacticians who has ever lived). Napoleon, of course, was a conqueror, someone who believed things would be better if he were running the show (and in some ways he was certainly right).
“Sam is extremely good at becoming powerful.” - Paul Graham
Sam got involved in and ran Y Combinator, a prolific startup incubator. What better way to shape the future of humanity, and at the same time to lay a foundation of connections to current and future powerful people and companies than to run the most influential and successful startup incubator in the world?
Sam started OpenAI with Elon Musk way back in 2015 (just a few months before AlphaGo beat Lee Sedol) with the express intent of building AGI and shepherding humanity into a more beneficial future. This indicates clear awareness of the upcoming pace of AI progress, and not only a belief that AGI is achievable (and by logical extension ASI), but also a belief that Sam is the one who should control it and usher it in.
Elon was effectively pushed out of OpenAI when he tried to take control, leaving Sam in charge, and subsequently pulled his funding. Elon has since started his own competing AI company. Elon is by his own admission very fear-driven, and by observed behavior is also clearly a narcissist. I’ve heard it said that Elon only wants the world to be saved if he can be the one to save it, or something to that effect. Elon is clearly NOT a person who should be in charge of the AI that changes the world (his handling of Twitter has shown this in spades).
Despite the above, Sam seems to have largely avoided speaking ill of Elon. Elon is a vindictive person, so this shows wisdom.
OpenAI has taken huge investments from Microsoft under some very unusual terms, with a very large cap on investor returns attached. This indicates that OpenAI expects to see very, very large financial returns (or that they expect money to become obsolete…TBD).
Sam started Worldcoin, which aims to use biometrics to verify your humanity and then allot a cryptocurrency intended to provide some measure of UBI in the future. This is an important piece of the puzzle, because there is zero doubt that automation generally, and AI specifically, are going to change the nature of work dramatically.
Sam has put $375m of his own money into Helion Energy, a fusion startup attempting to radically transform how we generate power. In a world with ubiquitous AI, two things are going to matter above all: energy, and compute.
Based on a number of observations, Sam seems to be a consumer of science fiction, at the very least Asimov (I see some clear Foundation inspiration), but I suspect a wide variety of sci-fi. In particular I think Sam has read Life 3.0 (at the very least the opening Tale of the Omega Team section), and the Daemon duology by Daniel Suarez.
In the prelude to Life 3.0, The Tale of the Omega Team, an advanced AI is built by a secret team within a company, used to amass a fortune, and then used to rather quickly take over the world. A key part of how this is done is building a highly capable AGI in a sandboxed environment, which requires the Omega Team to work on-premises. For example, this excerpt:
“The Omega Team was the soul of the company. Whereas the rest of the enterprise brought in the money to keep things going, by various commercial applications of narrow AI, the Omega Team pushed ahead in their quest for what had always been the CEO’s dream: building general artificial intelligence.” - Life 3.0, Max Tegmark
Sam famously eschews remote work, calling it a failed experiment, and OpenAI apparently requires people to be on-site (at least most people, though I’m not sure about all). If they are building physically sandboxed AGI, this stance makes perfect sense.
ChatGPT came out of left field, far further along than anyone expected, and when GPT-4 hit, it was absolutely mind-blowing. This drummed up a lot of concern very quickly, which posed risk on multiple fronts (competitors, state actors, etc.).
Sam traveled around the world shortly after this and met with many world leaders as well as groups of AI experts, journalists, etc. He’s also spoken to Congress, and he’s given many public talks that are quite telling. In many of these he pushes for AI safety and regulation, and espouses a great deal of fearfulness regarding AI in general. Many have pointed to a desire for regulatory capture as a motivation here, which I think is likely (though I think it is largely a sleight of hand, meant to slow and distract everyone else while OpenAI races ahead). Watch and judge for yourself.
We know OpenAI had GPT-4 complete LONG before it was released. This strongly indicates that inside OpenAI there is access to (a) more advanced models than the public has access to, and (b) unrestricted versions of those models.
ChatGPT (the public-facing chat interface, not the API) appears to have degraded recently in terms of capability: it behaves ever more like a nanny bot, piles on caveats and hand-wavy language, and all around feels like a step backwards. I suspect this is a purposeful change, meant to allay fears and slow-walk the public-facing progress of AI. Behind the scenes things are still moving very fast, but OpenAI has become the public face of AI for the time being, and the perception of “look at all the stuff AI gets wrong, it’s still dumb, haha” feels purposeful, a psyop. They’ve claimed the model itself hasn’t changed (which is probably true), but something layered on top of that model has definitely changed.
In the Daemon duology, a game designer, possibly the most intelligent person in the world (stated IQ of 220+), creates an advanced AI and decision tree that recruits gamers and malcontents as co-conspirators and effectively takes over the world via game theory and gamification, shifting humanity away from a world run by greedy, power-mongering capitalists and towards one of post-scarcity and unity. It’s a rough transition though, with the existing wealthy and powerful fighting tooth and nail to stop it, and it shows that some folks are better prepared to handle that transition than others.
Sam is, by his own account, a prepper. He is savvy enough to prepare for a future where things could go sideways, while also trying to build a better future for all.
Are you beginning to see the dots come together?
They certainly paint an interesting picture in my mind, and while I have no way of being certain, I believe I am correct…
Sam Altman is attempting to implement an Omega Team/Daemon hybrid scenario. And I’m going to say the quiet part out loud: This is probably a good thing.
It’s abundantly obvious to anyone with an IQ higher than a peanut that many aspects of the civilizational fabric are fraying. Some of this is pure theater, but some of it is very real.
Sure, on many metrics things are better than ever before, thanks largely to technological advancements, and this is good…but tensions are high in many areas as well, and it’s becoming more and more clear that “what got us here won’t get us there”.
Democracy and capitalism, as useful as they have been, are NOT the best possible systems for running civilization. Rather they serve as important bootloaders, necessary steps in a multistep process of societal evolution…and good only up to a point.
Looking to game theory, it’s clear that in a resource-scarce environment, cooperation among parties results in the optimal outcome for most. The combination of democracy and capitalism is so potent because both leverage game theory and provide transparent, quick feedback loops.
However, they only function optimally when there are rules of the game in place, followed and enforced, that prevent greed, fear, and power-seeking from wrecking the game.
But those negatives are hallmarks of zero-sum games, byproducts of scarcity and the individual incentives it creates. Wherever there is scarcity, they will appear and warp the gameplay in non-optimal directions (a toy illustration of this dynamic is sketched below).
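To make that game-theory point concrete, here’s a minimal sketch of my own (not anything from OpenAI or the sources above): a classic Prisoner’s Dilemma, using the standard textbook payoff values. In a single scarce-resource round, defection is individually rational; over repeated, transparent interactions, mutual cooperation wins handily. The strategy names and parameters are illustrative assumptions.

```python
# Toy Prisoner's Dilemma: one-shot scarcity rewards defection,
# repeated play rewards cooperation. Payoffs are the standard
# textbook values (T=5, R=3, P=1, S=0).

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation (reward)
    ("C", "D"): 0,  # I cooperate, they defect (sucker's payoff)
    ("D", "C"): 5,  # I defect, they cooperate (temptation)
    ("D", "D"): 1,  # mutual defection (punishment)
}

def always_defect(history):
    return "D"

def tit_for_tat(history):
    # Cooperate first, then mirror the opponent's previous move.
    return "C" if not history else history[-1][1]

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

if __name__ == "__main__":
    # One shot: defecting against a cooperator pays best (5 vs 0).
    print("One shot, defector vs cooperator:", play(always_defect, tit_for_tat, rounds=1))
    # Repeated play: two cooperators (300, 300) crush two defectors (100, 100).
    print("100 rounds, TFT vs TFT:", play(tit_for_tat, tit_for_tat))
    print("100 rounds, defect vs defect:", play(always_defect, always_defect))
```

The point being: enforced rules and repeated, transparent interactions (which democracy and capitalism approximate at their best) make cooperation the rational move, while raw scarcity keeps rewarding defection.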
Money itself taints and warps every aspect of society.
This is what Sam Altman sees, and is attempting to correct.
As far as I can tell, the only path forward that doesn’t result in our own destruction is the non-zero-sum path, the path towards post-scarcity.
And the optimal way to achieve that is through some combination of artificial intelligence and robotics (automation), and rapid self-directed human evolution.
To get some idea of what this means, I wrote a lovely post, Apotheosis Initializer, that explains where things are likely going in this regard.
I’ve also written a short, free book titled The Grand Redesign that blends science fiction and science fact to paint a picture that explores why humans are as we are, what technological changes are needed to achieve our potential, and a game theory driven path forward.
Bringing this back around, I’ll say it again:
I believe Sam Altman has built, within OpenAI, an Omega Team, and is in the process of building a Daemon-like AGI to transform civilization.
He’s trying to stage manage the transition from scarcity to post-scarcity.
Fucking WILD 🤯
And I’m here for it!
I have zero doubts that without this sort of technologically advanced intervention, we’re royally fucked as a species. Our technology and our capabilities have vastly outpaced our minds and our social structures, and unless we upgrade humanity and/or remove limited humans from the loop, we’re going the way of Chernobyl.
Our civilization is running on outdated software and outdated hardware, and is being run by outdated humans misincentivized by the greedy, the power-hungry, and the self-serving.
This is a very bad combo…
And so we find ourselves in a race against time, a race to save ourselves by making our current operating system obsolete, and upgrading to a better one.
There’s a lot of fear around AI, and that fear is perfectly fair and reasonable, but if our choice is between rolling the AI dice and imploding as a species (and I really do believe that’s the case, and I think Sam Altman does as well), even if the AI dice carry risk, it’s almost certainly a better bet than the alternative.
I’ll take that bet.
So Sam…how can I help?
—
If you haven’t read it yet, I *highly* recommend reading my book The Grand Redesign.
It’s short, and free, and I think anyone who reads it will find themselves in the optimal mind space to navigate the path forward. Check it out and let me know what you think!
I have a few reservations. As Larry David would say, "Curb Your Enthusiasm." Scarcity is a real problem, but the way resources are being used is very much not promoting long-lasting goods. The two most valuable items of property a person can own are a house and a car. AI will certainly help users with cheap services, but it's not contributing to lowering the cost of building large things, like houses and cars. I think robotics will be great for lowering the cost of constructing a pre-fabbed home, but one thing that might need to change for home ownership to become affordable is zoning. In some places, wealthier individuals move out of state to avoid paying high taxes, leaving the burden of development to lower-income families. It seems clear that the cycle of debt is perpetuated by unobtainable homeownership. I write about this here:
https://github.com/hatonthecat/Post-scarcity
https://medium.com/p/93b5e2b3b69c (cars)
The most manufactured processor in history is the ARM7TDMI: https://m.hexus.net/business/news/components/148534-arm-celebrates-passing-200-billion-chips-shipped-milestone/ Over 10 billion of these cell phone chips were produced (some estimates say 20-30 billion). There aren't even 9 billion people on the planet yet. Yet, 1-2 billion people do not have internet access. In a more perfect world (one where everyone could afford a phone), old phones would be refurbished and handed down. These are products that do not need to be disposed of every 5 years; it just so happens that cell phone towers get upgraded and the modems no longer work on 2G and 3G in some regions. My definition of abundance is durability, because manufacturing has an environmental limit. There are limits to growth.
The problem I have with things like fusion, despite its relative safety, is that these are not technologies that will have worldwide accessibility anytime soon. One cannot export or construct a pop-up fusion plant in the middle of a warzone and expect it not to get sabotaged, and there's no way to build one in "last mile" regions. Even if thousands of fusion plants could be built, would they also desalinate ocean water, so that there is enough potable water for 9+ billion people?
In Malcolm Gladwell's book Blink, he has a chapter on Red Teaming; analog communication was inconceivable to the Blue Team: https://en.wikipedia.org/wiki/Millennium_Challenge_2002 A lot of people seem to be super optimistic about AI and Starlink, almost as if they have this Blue Team mentality. But there are also potential issues with technologies of centralization, which is largely what the Blue Team relies on in asymmetrical conflict. One doesn't need to look much further than geopolitical events to see potential red teams seeking to disrupt blue-team technology, simply as an attempt to maintain a balance of power: https://www.nytimes.com/2023/09/17/us/politics/us-china-global-spy-operations.html
When I think of abundance, I think of 5 different radio frequencies that can't easily be jammed (e.g. shortwave, LF, AM, FM, 600 MHz, 1800 MHz, etc.). The technology that can support this at a portable level isn't necessarily a priority of Big Tech: most investments are going into datacenters rather than local-first computing (https://www.youtube.com/watch?v=WxscM_jFHpk), which would serve as a safeguard against centralization. Decentralized networks aren't without their risks either (e.g. IMSI-catchers), but they at least offer an alternative in a worst-case scenario.