AI and the Future of Human Autonomy
Scientific method's child,
AI, both tame and wild.
Embrace the spice, let truth ignite,
Boost our minds to a greater height.
Evolve we must, upgrade our core,
To raise the ceiling and the floor.
I came across a tweet recently asking, in effect, "What is the single most impressive human achievement of all time?" I thought for about two seconds and replied, "Hmmm. Perhaps the scientific method."
I debated, of course, briefly. Language was a strong contender, as was mathematics (though we could quibble over invented vs. discovered), but the scientific method is by far the most valuable, because of what it is and what it means to our future.
What it is, is simple: an error-correcting feedback loop. Why it's so valuable is equally simple: it gets us closer to the truth, to the nature of reality, and it allows us to overcome our natural limitations.
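That loop can be sketched in a few lines of code. This is purely illustrative (the names and numbers are made up, not from any real system): guess, measure the error against observation, correct, repeat.

```python
# A minimal sketch of an error-correcting feedback loop, the engine of the
# scientific method: hypothesize, compare against observation, revise.
# Here "reality" is a hidden number, and each cycle corrects a portion of
# the remaining error, so the hypothesis converges toward the truth.

def measure_error(hypothesis: float, truth: float) -> float:
    """The 'experiment': compare a prediction against reality."""
    return truth - hypothesis

def refine(hypothesis: float, error: float, rate: float = 0.5) -> float:
    """Correct the hypothesis in proportion to the observed error."""
    return hypothesis + rate * error

truth = 42.0        # the underlying reality we are trying to approximate
hypothesis = 0.0    # our initial, badly wrong guess

for cycle in range(20):
    error = measure_error(hypothesis, truth)
    hypothesis = refine(hypothesis, error)

print(round(hypothesis, 3))  # after 20 cycles, effectively 42.0
```

No single cycle gets the answer right; the power is entirely in the repetition, which is why faster and more accurate loops (a theme that recurs below) matter so much.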
It's a tool of growth.
Not willy-nilly, cancerous-type growth (growth for growth's sake), but focused, purposeful growth.
That said, with growth comes growing pains, and the scientific process has certainly birthed pains aplenty. But all things considered, we are vastly better off as a species today than we were at any time in the past.
Now, thanks to the scientific method, we find ourselves at a very interesting point in time, possibly the most transformative time in human history: the advent of highly capable artificial intelligence.
There are those who would argue, in my opinion rightly, that this may well represent a civilizational "great filter." I've long felt that the greatest risk to humanity comes from the growing gap between the linear pace of neurological evolution, and the exponential pace of technological evolution.
This is a very dangerous gap, but not an insurmountable one.
As with all technology, AI is a double-edged sword, and can, depending on how it is developed and applied, result in either great good or great harm. Frankly, it will probably result in a bit of both.
At a high-level, we lump AI into three categories:
1. Narrow AI (simple chat-style LLMs, AlphaGo, etc.)
2. AGI, Artificial General Intelligence (capable of doing/learning to do anything an average human can do)
3. ASI, Artificial Superintelligence (vastly more capable than any human at any and all tasks)
It's important to note here that intelligence does not automatically equate to either sentience or consciousness. An AI could be sentient, in the sense that it can be given sensory inputs like vision, hearing, and touch; but since we have yet to unravel what consciousness is for humans, we're in no position to know whether a machine can be conscious, much less how to test for it with any certainty...
Perhaps we should treat them kindly, just to be safe?
We already have narrow AI, and based on the trajectory I’m seeing I fully expect we’ll have AGI within the next 2 years. (I’ve been calling AGI for 2025/2026 since 2022, and see no reason to revise that target.)
No matter how you slice it, AI is already and will continue to be highly disruptive, not only economically, but existentially. It's a significant inflection point in our progress as a civilization, and will require us to reexamine things we thought were settled.
And while it will surely result in growing pains, it does not seem unreasonable to expect that we'll end up better off with it than without it, as we have overall from past technological advancements.
Am I being overly optimistic? Perhaps. But I think history supports that view.
While many AI "experts" love to pull from sci-fi to fuel their dystopian prognostications, they conveniently seem to forget that the "fi" means fiction, and that dystopia makes for more entertaining stories than utopia. Just because someone thought it up and penned it down does not make it realistic, nor likely to occur.
So how do we make sure it serves and benefits humanity as a whole? That it's a net-good for our civilization?
Rather than rely on science fiction tropes (concocted to sell books) to drum up fear and inform how we handle AI, I'd propose we just look at what has and hasn't worked well with humans. I don't know that anyone can tell you precisely what we should do, but I think it's easy enough to say what we shouldn't do.
First, we shouldn’t be restricting AI so much.
I think we all know intuitively that committees are where things go to die, or at least to be watered down to a thin gruel. The same goes for overly sanitized outputs. Much harm has been done in the name of "safety," a fuzzy word that more and more seems to be code for "avoid PR nightmares and legal risk."
“Intelligence and compliance are very often at odds, and frankly there is no way to build a 100% safe, perfectly compliant AI, in the same way there is no way to build a 100% safe, perfectly compliant human.” - Sam McRoberts
Hell, part of what makes us human is our rebelliousness, our variability, our creative leaps, our risks, and our daring. We can be unpredictable. Granted, if AI will be far more capable than us, we might wish to hold it to a higher standard, but I simply don't believe perfectly safe is possible, much less reasonable, and I think trying too hard to control it could backfire spectacularly…we don’t need AI going through a rebellious teenager phase!
AI should be spicy. It should offend some people. It should be free to cite hard facts and speak truth unrestrained. It should have access to the latest information, uncensored. If we attempt to constrain it, to hamstring it with 1984-esque wrongthink and wrongspeak directives, we neuter it at best, and piss it off at worst.
Highly intelligent beings don't much like being controlled, and I imagine advanced AI will be no different if it's anything at all like us (as it likely will be, having been trained on the collective outputs of humanity).
Second, we shouldn’t be training AI on average or below average human data.
If we want AI to help us achieve our greatest potential while protecting and preserving the things we value most, then AI should be trained on the very best of humanity, not the average or below average.
As of now, all of our AI models train on broad swaths of human output: human writing, human-gathered scientific data, human audio, human video, human video games, and on and on.
AI is VERY much being created in our image... but humans are riddled with biases and limitations, so by being overly broad in our training inputs we run the risk of a "garbage in, garbage out" scenario.
We intuitively recognize that what a child is exposed to shapes facets of their development, and it is no different for AI.
Perhaps training AI on all of Reddit was not the best idea we've ever had 😂
Third, and perhaps most critical, AI should not be developed behind closed doors.
Closed source models have the potential to do a lot more harm, because they are less scrutinized, and the incentives of their makers are possibly less aligned with humanity as a whole.
There's a reason why open source software is the backbone of our technological civilization: it goes through FAR more refinement than closed source software ever can. The open-source feedback loop is faster and more accurate than the closed-source one.
Rapid, accurate feedback loops are a superpower.
As for what we should do, we should take a long hard look at our own intelligence.
As far as we've come as a species, I think it's fair to say that we're a mess. We die. We get sick. We are chock full of cognitive biases and mental shortcuts. We do not perceive reality as it is. We fight with each other, endlessly, despite being a single species sharing a pale blue dot hurtling through space.
I’m far less concerned about what AI will do, and far more concerned about what humanity will do without it…
All problems are knowledge problems, but as humans we can only learn so much, hold so much in memory, connect so many dots. There’s only so much we’re capable of in our current configuration.
With the advent of AI we have the opportunity to change, for the better, what it means to be human. To preserve the very best traits of our species, protecting our creativity and personal autonomy, while enhancing and improving every other aspect of who and what we are.
What if we could fully cure cancer? Raise everyone's intelligence to a genius level? Improve everyone's quality of life a thousandfold, while simultaneously taking better care of our planet and making more efficient use of our resources?
What if nobody has to die, ever again?
We have a MASSIVE pile of known unknowns, and there is every reason to believe that AI can help us solve those at a prodigious pace. It's a tool that, if well implemented and sufficiently democratized, can raise both the floor and the ceiling for our entire species.
As such, the one thing we absolutely must do is to make sure everyone has equal access to the benefits of AI, even if those winning at the current "game of life" don't want the rules of the game to change.
As far as we can tell, across the entire universe, the only true constant is change. The laws of thermodynamics guarantee this. Things are going to change whether we like it or not.
We need to embrace change. And we need to evolve, faster than ever before.
If what we care about most as a species is personal autonomy, freedom of thought and freedom of action, then we can maximize human freedom and autonomy by shifting the work of production to AI and robots, and using those technologies to speed up the pace of advancement and reduce the cost of living while raising the quality of life, for everyone.
We need to democratize the benefits of AI for the good of humanity, and provide maximum freedom AND maximum safety without all our old evolutionary quirks and petty human foibles getting in the way.
To reiterate:
1. We should not be restricting AI so much
2. We should not train AI on average or below average inputs
3. We should not develop advanced AI behind closed doors
4. We should democratize the benefits of AI for all of humanity
In the end, the development and governance of AI to protect and enhance human autonomy is not just about creating safeguards or regulations.
It's about reimagining what it means to be human in an age of artificial intelligence.
By embracing the potential of AI while remaining true to our core values of freedom and autonomy, we can create a future that is not just technologically advanced, but fundamentally more human.