
The Dangerous Promise of AI

A philosophical, imaginative take on a future shaped by artificial intelligence, one that raises a single important question.

Your scientists were so preoccupied with whether or not they could... that they didn't stop to think if they should. - IAN MALCOLM, JURASSIC PARK

The Human Condition

Throughout human history, much of our toil has gone toward improving technology.

Automate, automate, automate.

We’ve automated our original mode of transportation (our legs), moving from riding animals to trains and finally to cars. We’ve automated hunting: first we invented weapons, then we discovered animal husbandry. What once took several humans weeks to hunt for their tribe, one person can now source every day, and do it 100x more efficiently. We’ve automated gathering with the farm, which we’ve further automated with tools.

The list goes on and on.

As humans, we’ve worked our hardest to make our lives the easiest.

We shed more and more of the functions we must perform for survival, replacing them with functions we want to perform. With each passing century, humans have created for themselves more and more time for leisure and entertainment.

The problem is, these necessary and shared tasks were once the foundation of our shared stories. Cultures were formed around them. Traditions were passed down from generation to generation. They were sacred, albeit laborious and dangerous. We derived meaning from them.


These traditions were passed down from generation to generation because life changed so little from one generation to the next. They genuinely helped the next generation of humans, whether in surviving or in entertaining themselves. Bonds, culture, and a common sense of morals and values were created as a result. It takes only a quick glance around the world to see that many of the beautiful traditions that held societies together for centuries have eroded in just the past two hundred years. The pace at which technology advances and reshapes life keeps accelerating.

A World of Comfort

There is an old saying, often traced (perhaps apocryphally) to stray cattle wandering into London's fine china shops and causing extensive damage: "bull in a china shop," as it goes.

Our quest for automation has charged forward like a bull in a china shop, and our responsibilities have been the china. One by one, we’ve swept them off the shelf: cooking, calculating, navigating, remembering, gathering, hunting, and even talking.

However, unlike the bull of that old saying, automation is no untrained beast. It's been raised and ridden by our brightest minds. Intelligence, human intelligence, has opened the gate, pointed the way, and cracked the whip. Yet, like any animal, there’s always a chance it could turn on its master. In our excitement to escape responsibility, we never thought to consider this. What happens when the very thing that created the bull, thinking, becomes the last piece of china? What happens when we automate thinking itself?

I think novels from our past have done a good job of helping us imagine the answer. Aldous Huxley's Brave New World depicts a future where technological advancement has created a superficially perfect world. It's a society where individuality, critical thinking, and deep emotion are all seen as threats to order. Happiness is maintained by consumerism, sexuality, and a drug called soma, which numbs all dissatisfaction and introspection. Do these concepts feel familiar? Are excessive buying, promiscuity, and distraction from deep thinking encouraged in our modern day?

Now, imagine a world where we develop a technology that can answer questions better than humans can. A technology that has access to all of the world's information and utilizes all of humanity's greatest tools and discoveries. What happens when we create something that can solve the mysteries of the universe better than we can? Not even a room full of Nobel Prize winners will be able to compete.

If we develop something that is trusted to produce the most efficient and productive outcome, there will be no way to argue with it. Who are you to question this machine that is so much smarter? Have you solved any mysteries of the universe the way this technology has?

The problem is, the most efficient and productive thing to do is not always the most ethical, nor the most human. Take, for instance, a newer technology called CRISPR, which could theoretically edit a child's genes to produce healthier, possibly more intelligent, and "productive" members of society (I put productive in quotes because the question is: productive by whose standard?). What kind of parent would you be if you did not enroll your child in such a procedure? Do you not care for their future?

I'm being sarcastic, of course. But these are real questions that may be posed to us in the near future (like, less than 10 years) as traditions and cultures become blurred and common human intelligence is seen as second-class (to artificial intelligence). If you've read Brave New World, does this sound familiar?

This concern over artificial intelligence's progress is not something I can take credit for, unfortunately. Many of today's leading experts, the very people pushing AI forward, have sounded the alarm. Elon Musk likened the pursuit to "summoning the demon," warning that we can't be sure it will play nicely.

Emad Mostaque, former CEO of Stability AI, paints another picture. He warns that democracy as we know it may come to an end "if the technology becomes capable of creating persuasive and manipulative speech, targeted to each individual." This means that the intelligence of AI may one day be so great that it can persuade and convince each of us, individually, in pursuit of a certain goal.

Countless other leaders in the industry are also beginning to speak out about the potential dangers of AI. The entire scene is eerily similar to the 2021 comedy Don't Look Up, in which a scientist's warnings about an approaching comet that will destroy Earth as we know it are ignored by the public. Reality is often ironic, just like a comedy.

Humanity's Response

As a result of this new, disruptive technology, ethical dilemmas of kinds we haven't seen before will appear extraordinarily fast, and we will likely be grossly unprepared for them. All the while, entertainment and distraction will become ever more accessible and ever more efficient at grabbing our attention.

That's the worst part about all of this. The loss of our autonomy (and ultimately our humanity) won't feel like a fight. We will want it. Our brains are so easily hacked that the average American already spends between 4 and 5 hours per day on their phone! That's roughly 17 to 21% of our lives, spent willingly on a screen. Is anyone forcing us to do this now?

So, how will humans likely react? My guess is as good as any.

There is a good possibility that many new schools of thought, religions, and ideologies will be formed as a result of AGI. We can also count on humans disagreeing, as we always have. There will likely be two main groups of people: "humanists" or "purists" and "futurists".

"Futurists," not to be confused with today's forward-thinking leaders, is just my term for those who will quickly adopt and even encourage the new technologies as they come, with little to no caution. When the ability to merge our minds with artificial intelligence becomes available, futurists will jump at the opportunity, citing all of its potential benefits and advantages.

In certain ways, they'll be correct. Individuals who don't utilize these new technologies will be left behind, stuck with lower productivity and less intelligence.

Purists, on the other hand, will lean into traditional systems, such as religion, as a foundation of truth moving forward. What they lack in productivity and intelligence by being slow to adopt new technologies, they'll make up for in preserving original human connection and in increased safety (there's no guarantee that these new technologies will be free of unforeseen, negative consequences).

Somewhere in the middle lie the few who see the benefits of both sides, though they will be just that: a few. As mentioned earlier, this new intelligence will be very capable and very convincing in its arguments on either side.

Humanity's Responsibility Now

To prevent an ethical, philosophical, and literal catastrophe at the hands of artificial intelligence, we each have a responsibility to become as informed about these new technologies as possible within our own fields.

Whether you are a waitress, a researcher, a pastor, a skater, a musician, or unemployed, your unique perspective has never been more important.

This is not just an issue to be discussed and solved by the IT nerds behind the help desk. And it's certainly not an issue to be left only to the CEOs of the companies helming the ship, lest we head towards a dystopia. We don't need to wait until it's too late to start informing ourselves and getting involved in the decisions surrounding AI.

This thought piece is not meant to paint the future as all doom and gloom. AI has just as much potential to guide us into a new era of production, abundance, knowledge, and exploration that rivals even the most imaginative sci-fi film.

The question is: Will it?

The real defeat of freedom is everyone's apathy.