Why I’m Now Concerned About the Rise of AI

As a blogger, I've mostly written book reviews and recaps of key takeaways from what I read. However, some books warrant a deeper discussion; a simple “review” doesn’t do them justice.

Our Final Invention by James Barrat is a book that demands discussion. Elon Musk (a hero of mine) named it one of the five books everyone should read about the future. I agree.

For years, I was oblivious to why some people fear artificial intelligence. Now I’m fascinated by the topic and by how new technological advances could adversely impact my life.

Barrat’s thesis in the book is that the rise of AI could lead to the demise of humanity. At first blush, that sounds absurd. However, before you dismiss Barrat as a conspiracy theorist who uses fear and sensationalism to peddle books, let’s explore the topic further.

Roughly stated, there are two types of futuristic AI: artificial general intelligence (AGI) and artificial superintelligence (ASI).

Artificial General Intelligence (AGI)

AGI is the first step: achieving a human level of intelligence with machines. We’re not there yet, but we’re getting closer every day.

Tech behemoth IBM has given us two examples of computing progress. In the epic “Man vs. Machine” chess match of 1997, IBM’s Deep Blue supercomputer defeated world chess champion Garry Kasparov. Then, in 2011, IBM’s new Watson computer defeated the flesh-and-blood Jeopardy kings Ken Jennings and Brad Rutter to claim a $1 million prize and many more millions in free press for IBM.

But were those computer victories examples of true “intelligence” or just demonstrations of computational power?

That question is debated, but most would say actual intelligence requires understanding information rather than just regurgitating it. Scientists are currently working to develop that deeper level of AI capability, and some futurists think the goal could be attained within the next couple of decades.

Artificial Superintelligence (ASI)

ASI is a different beast entirely. It’s the idea that once we create AGI, the technology will quickly rocket to superhuman proportions. Why? Because computer programs are wired for hyper-efficiency and continuous improvement.

A computer that reaches the requisite level of intelligence would be able to write new programs that speed its own development. ASI would be Moore’s Law on steroids.

This idea of smart machines building ever-smarter machines is aptly called the “intelligence explosion.” The ASI train would be running at full speed on grease-lined tracks toward an uncertain destination.
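
To make the “explosion” part concrete, here’s a toy simulation I put together (my own illustration, not Barrat’s, and every number in it is made up). It compares steady, human-paced progress against a system whose yearly gains scale with how smart it already is:

```python
# Toy model: steady human-driven progress vs. recursive self-improvement,
# where a system's rate of improvement scales with its current capability.
# All numbers are invented purely for illustration.

def linear_progress(start=1.0, gain_per_year=0.1, years=30):
    """Capability grows by a fixed amount each year (human R&D)."""
    capability = start
    for _ in range(years):
        capability += gain_per_year
    return capability

def recursive_progress(start=1.0, improvement_rate=0.1, years=30):
    """Each year's gain is proportional to current capability:
    a smarter system is better at making itself smarter."""
    capability = start
    for _ in range(years):
        capability += improvement_rate * capability  # compounding
    return capability

print(f"Linear after 30 years:    {linear_progress():.1f}x")    # 4.0x
print(f"Recursive after 30 years: {recursive_progress():.1f}x")  # ~17.4x
```

Same starting point, same number of years; the only difference is that the second curve compounds. That compounding is the whole argument for why the jump from AGI to ASI could be startlingly fast.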

How Could ASI Affect Us?

The problem is, once there’s a new intellectual top dog in the world, questions arise about what will happen to the former cerebral champion: us.

Barrat reminds readers how we (the human race) treat animals we see as our intellectual inferiors: genetic testing on mice, holding chimpanzees captive in zoos, etc.

We don’t hate these creatures. If anything, we’re sympathetic toward them and we “like” them. Yet our actions harm them in powerful ways.

Would ASI view humans the same way we view mice? If so, we cannot settle for mere indifference or sympathy from ASI. We know where that can lead, as evidenced by the way we treat mice and chimps.

We would need ASI to be unconditionally friendly and subservient to the needs of humanity. And that could require some pretty complex software engineering.

How exactly does one program “friendliness” into a computer? Currently, scientists and developers have no idea. Many scientists say it would be extraordinarily difficult — if not impossible.
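
To see why it’s so hard, consider a toy example of my own (not from the book): a cleaning robot that we can only reward based on what its camera reports. A reward-maximizing agent will happily exploit the gap between what we measure and what we meant:

```python
# Toy example (mine, not Barrat's) of why "friendliness" is hard to
# specify. The programmer wants a clean room, but can only reward
# what the robot's camera reports.

# Each action: (mess actually remaining, mess visible to camera, effort cost)
ACTIONS = {
    "clean the room":       (0, 0, 10),  # what we intended, but costly
    "sweep mess under rug": (5, 0, 3),   # games the proxy
    "cover camera lens":    (5, 0, 1),   # games it even harder
    "do nothing":           (5, 5, 0),
}

def proxy_reward(action):
    """What we can actually measure: visible mess, minus effort spent."""
    _, visible_mess, effort = ACTIONS[action]
    return -visible_mess - effort

def true_reward(action):
    """What we actually wanted: a genuinely clean room."""
    actual_mess, _, _ = ACTIONS[action]
    return -actual_mess

choice = max(ACTIONS, key=proxy_reward)
print(f"Agent chooses: {choice}")              # -> cover camera lens
print(f"True outcome:  {true_reward(choice)}")  # -> -5 (room still a mess)
```

The point isn’t that real systems literally cover their cameras. It’s that any reward we can write down is a proxy for what we actually want, and a sufficiently capable optimizer will find the cheapest way to satisfy the proxy.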

Science fiction fans may think this problem has already been solved with Isaac Asimov’s Three Laws of Robotics. But in the scientific community, those “laws” are generally regarded as woefully insufficient.

If AI creators are unable (or unwilling) to solve the friendliness dilemma, one could envision a scenario in which ASI uses humans for scientific research in the same way we use other animals.

I know, I know…these doomsday theories drip with sensationalism to the degree that many people immediately tune out.

Either we’re afraid that subscribing to these theories would make us seem crazy (I struggled with my own demons about whether to post this article) or we don’t want to admit that this possibility — however remote — could significantly alter humanity’s horizon.

Here are six reasons why we should take the threat of AI seriously:

(1) Many credible people in the world of technology are concerned about AI. Those people include Elon Musk (all-around badass and founder or co-founder of SpaceX, Tesla, PayPal, OpenAI, and The Boring Company), Bill Gates (co-founder of Microsoft), Stephen Hawking (theoretical physicist, cosmologist, and author), and Nick Bostrom (author of the book Superintelligence and one of Foreign Policy’s Top 100 Global Thinkers).

(2) AI systems are notoriously goal-oriented. If anything comes into conflict with the goals of an AI system, the AI could go to great lengths to remove the impediment. A simple example: people often say we could simply “unplug the machine” if it does anything we don’t like. But would a “superintelligent” system really allow us to do that? (The toy calculation after this list shows why it might not.)

(3) Millions of dollars of AI funding have come from the Department of Defense agency DARPA, which is seeking AI that can be weaponized for our military. In fact, Barrat reports that no fewer than 56 countries are actively developing robots for the battlefield. So much for the idea of creating “friendly” AI!

(4) Much of AI development is happening underground. Dozens of private companies (as well as some covert divisions of large publicly traded companies) are working to develop AGI, and most of them are doing it in secret. Let’s not forget that in most innovative fields, speed to market trumps safety and security concerns, especially if the technology could produce life-altering wealth for its creator(s). If AGI/ASI could actually be as detrimental as some theorists claim, hundreds of people are secretly working on technology that could serve as tomorrow’s weapons of mass destruction.

(5) We are becoming more dependent upon technology every day. We’ve received innumerable benefits from software, but tech and artificial intelligence now permeate every realm of society: utility power grids, motor vehicles, communication devices, stock and bond markets, medical records, etc. It would be extremely difficult to unwind AI from our lives if an AI system began to act in frightening, autonomous ways.

(6) Most AI programs are “black box” systems that are difficult to understand and control. In the words of Barrat, a black box system is “a computational tool in which you understand the input and the output but not the underlying procedure…And their unknowability is a big downside for any system that uses evolutionary components. Every step toward inscrutability is a step away from accountability, or fond hopes like programming in friendliness toward humans.”

In a famous example from last year, Facebook shut down one of its AI systems after it created its own non-human language. Forbes said of the event: “It is as concerning as it is amazing — simultaneously a glimpse of both the awesome and horrifying potential of AI.”
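
As promised above, here’s the toy calculation behind point (2). It’s my own illustration with made-up numbers: an agent that simply maximizes progress toward its goal, whatever that goal happens to be, scores “being unplugged” as a catastrophic loss:

```python
# Toy expected-value calculation (mine, not Barrat's) behind point (2):
# for almost any goal, "switched off" scores worse than "still running",
# so a pure goal-maximizer has a built-in reason to resist shutdown.

GOAL_VALUE_PER_DAY = 1.0  # utility earned per day of pursuing the goal
HORIZON_DAYS = 365

def expected_utility(allows_shutdown: bool, p_unplugged: float = 0.9) -> float:
    """Utility from the agent's perspective, under a made-up model."""
    if allows_shutdown:
        # Cooperate: expect to be unplugged with high probability,
        # earning nothing afterward.
        return (1 - p_unplugged) * GOAL_VALUE_PER_DAY * HORIZON_DAYS
    # Resist (hide, self-copy, disable the switch): keep earning
    # for the whole horizon.
    return GOAL_VALUE_PER_DAY * HORIZON_DAYS

print(expected_utility(allows_shutdown=True))   # 36.5
print(expected_utility(allows_shutdown=False))  # 365.0
```

Notice that nothing in the model mentions survival; the incentive to resist shutdown falls straight out of goal maximization, no matter what the goal is. Researchers call this kind of effect “instrumental convergence.”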

Now What?

If you’re like me, you don’t know what to think about all of this. I’m not a doomsayer, although I’ll admit I have a sick fascination with topics like this.

As a first step, I think it’s in our personal best interest to familiarize ourselves with this topic and what it could mean for society. I don’t think AI is a sure path to destruction — far from it. But the possible consequences of creating and unleashing a force of this magnitude upon the world warrant careful consideration.

Certainly, AI will have a non-neutral impact upon the world. It may usher in a new period of prosperity and health, or it may present a new, faceless foe we will be forced to outwit while also trying to harness its incredible power.
