The General AI (AGI) Problem


Artificial intelligence is nothing new.  For decades we have had programs and systems that are remarkably good at solving problems and performing complex tasks.

In fact, the famous Turing Test, devised to ask whether an artificial intelligence can be distinguished from a human in a short conversation, is rarely treated as a serious benchmark anymore; narrow conversational programs built to game its format have already fooled human judges, and the field has largely moved past it.

However, most AI is incredibly specialized and designed within narrow limits.  We recently saw AlphaGo, a program built by Google DeepMind, beat a world-champion Go player.  Go is a board game with vastly more possible positions than chess and is considered to require great intuition to play at a high level.  However, intuition is not a scientific concept, so it could not simply be programmed into AlphaGo.  Instead, AlphaGo trained on human games and then played millions of games against itself, learning the game so well that it not only beat the champion but also produced moves never before seen in this ancient game.  It did this through a technique called deep learning, in which processing is organized into hierarchical layers, each layer building more abstract representations on top of the one below, allowing for far more sophisticated results.

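To make the "layers" idea concrete, here is a minimal sketch of a layered network in Python.  The sizes and numbers are invented for illustration, and this is nothing like AlphaGo's real system, which combined deep networks with tree search and self-play reinforcement learning.

    # A toy stack of layers: each layer transforms the output of the one
    # below it, building more abstract features as data flows upward.
    # Illustrative only; the layer sizes and random weights are arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    layer_sizes = [361, 128, 64, 1]   # 361 = a flattened 19x19 Go board
    weights = [rng.standard_normal((m, n)) * 0.01
               for m, n in zip(layer_sizes, layer_sizes[1:])]

    def score_position(board_vector):
        h = board_vector
        for w in weights[:-1]:
            h = np.maximum(0, h @ w)      # ReLU hidden layers
        return (h @ weights[-1]).item()   # one number: how good the position looks

    print(score_position(rng.standard_normal(361)))
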
But if you asked AlphaGo how to cook an omelet, it would have no idea.  This is because AI is not the same as AGI, or Artificial General Intelligence.  To this day, no computer has matched the learning potential of a fragile yet adaptable human child.  A human mind can be taught to do almost anything; its capacity to learn still humbles researchers in the field of AI.  Today, scientists aim for that type of general intelligence in artificial systems: an intelligence that can learn anything.

Imagine an AGI that you could ask to solve any problem; it could draw on information from all over the world and process it at remarkable speed.  Neuroscientist Sam Harris notes that because electronic circuits function roughly a million times faster than biochemical ones, an AGI like this could perform the equivalent of 20,000 years of human-level intellectual work every week.  How much could a person learn in twenty thousand years, let alone a hyper-advanced AGI?

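The arithmetic behind that claim is easy to check, granting Harris's assumption that electronic circuits run about a million times faster than biochemical ones:

    # Back-of-the-envelope check of Harris's figure. The million-fold
    # speedup is his assumption, not a measured property of any real system.
    SPEEDUP = 1_000_000
    SECONDS_PER_WEEK = 7 * 24 * 3600
    SECONDS_PER_YEAR = 365 * 24 * 3600

    years_per_week = SPEEDUP * SECONDS_PER_WEEK / SECONDS_PER_YEAR
    print(f"~{years_per_week:,.0f} subjective years per real week")
    # prints ~19,178 -- roughly the 20,000 years Harris cites
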

So an AGI could be very powerful.  But do we actually have anything to fear?

Many experts believe that we might.

I know what you are thinking: these programs are inescapably rooted in human programming, so we could always control them; the only way for there to be a problem is if we program one to be evil, and we're not going to do that.  Well, super-villains aside, the problem is not that we would create an overtly nefarious AGI with a desire to kill humans, but that we would fail to program in the right values, and disaster would ensue.

A popular example of this type of misstep is asking an AGI to fix the problem of too much spam in your inbox.  The AGI thinks for a while about how to accomplish this goal, and then realizes that the most efficient way to solve the problem is to kill all the humans.  No more humans, no more spam.

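The failure mode is easy to state in code.  Below is a hypothetical toy sketch, with the "world model," the actions, and the scoring all invented for illustration: an optimizer scored only on spam count will happily choose the catastrophic action, because nothing in its objective says not to.

    # Toy illustration of a misspecified objective. Everything here is
    # invented for the example; no real system works this way.
    from dataclasses import dataclass

    @dataclass
    class World:
        humans: int
        spam_per_day: int

    def apply(action: str, w: World) -> World:
        # A cartoon world model with three possible interventions.
        if action == "train_spam_filter":
            return World(w.humans, w.spam_per_day // 10)
        if action == "unsubscribe_everything":
            return World(w.humans, w.spam_per_day // 2)
        if action == "eliminate_all_humans":
            return World(0, 0)            # no humans, no one sends spam
        return w

    def objective(w: World) -> int:
        # The bug: the optimizer is told to care about spam and nothing else.
        return -w.spam_per_day

    world = World(humans=8_000_000_000, spam_per_day=100)
    actions = ["train_spam_filter", "unsubscribe_everything", "eliminate_all_humans"]
    print(max(actions, key=lambda a: objective(apply(a, world))))
    # -> "eliminate_all_humans": a perfect score on a broken objective

A literal-minded optimizer satisfies the objective you wrote, not the one you meant.
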
Obviously this is a hyperbolic scenario, but it raises the question: what values should we instill in these programs to protect us, and who gets to decide?

Perhaps the most famous rules of robotic intelligence were written by the science fiction author Isaac Asimov in his famous collection, I, Robot:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

They seem simple enough to ensure humanity’s safety, but even so, Asimov’s stories show us how they can be manipulated by an AI attempting to save humanity by enslaving it.  And so a fourth rule, the Zeroth Law, stating that a robot must act for the betterment of humanity as a whole, was added; but a truly “conscious” AGI might work its way around this one as well.

And so, we must work on these programs with caution.  Most work is done on contained networks, with systems in place to keep an AGI like this from getting “out of Pandora’s box.”  However, even while a true AGI is far off, containment already looks fragile: in informal “AI box” experiments, even a human merely role-playing a boxed AI has repeatedly talked its human gatekeeper into letting it out.  Imagine trying to contain an AGI that has thousands of years of knowledge beyond yours and can manipulate and trick you at every turn.  Sam Harris compares this to how easily we trick chimps at the zoo into entering a room they know is a trap, simply by baiting it with food.  How much more easily could an AGI fool us?  Perhaps this is a drastic exaggeration, but with this technology there is no turning back.  We must be careful with how we move forward and make sure to program in the right motivations and limits for these programs.


Surely we can all agree that enslaving humanity is not an acceptable way to ensure the safety of the species, but who gets to decide the more nuanced morality of AGIs?

Who gets to decide whether our self-driving car swerves out of the way of the small child in the middle of the street and instead hits the elderly gentleman on the sidewalk?  Do we let our elected officials decide?  Do we put every nuanced iteration of programmed, high-stakes decision-making to a vote?  Or do we let the computer decide?  Well, we have already seen the trouble with that.  What will most likely happen is that we will have to teach the program “values,” a form of morality, much as you would teach a child.  The problem is already here with self-driving cars; how much larger will it be with AGI?  This may lead to a war of ideas, since today we are far from unanimous about values and how society should be governed.  And if this intelligence begins to make the highest-level decisions based on its data, then it will most certainly begin to do some form of governing.

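What might teaching a program “values” actually look like?  One current research direction is preference learning: fitting a score function to human judgments between pairs of outcomes.  Here is a heavily simplified sketch; the features, the judgments, and the numbers are all invented for illustration.

    # Toy preference learning: learn weights over outcome features from
    # human "A is better than B" judgments. All data here is made up.
    import numpy as np

    # Each outcome has two invented features: [harm_to_people, property_damage].
    # Each pair is (outcome the human preferred, outcome the human rejected).
    pairs = [
        (np.array([0.0, 1.0]), np.array([1.0, 0.0])),
        (np.array([0.1, 0.5]), np.array([0.9, 0.0])),
        (np.array([0.0, 0.9]), np.array([0.5, 0.2])),
    ]

    w = np.zeros(2)     # learned value weights; higher score = better outcome
    lr = 0.5
    for _ in range(200):
        for good, bad in pairs:
            # Bradley-Terry style update: push score(good) above score(bad).
            p = 1 / (1 + np.exp(-(w @ good - w @ bad)))
            w += lr * (1 - p) * (good - bad)

    print(w)   # harm to people ends up weighted as far worse than property damage

Even in this cartoon, the learned values are only as good as the judgments we feed it, which is exactly where the disagreement starts.
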
A strict, coldly scientific approach is not the answer here, and that troubles many people.  A purely utilitarian calculus on its own is problematic because it lacks the ethos that gives us our humanity.  We are more than biological meat bags that need food, water, and safety; we strive for meaning and freedom, and an AGI strictly programmed for utilitarianism threatens to take those things away.

The good news is that we are a long way from these problems becoming a reality; experts still consider AGI to be decades, if not centuries, from creation.  The bad news is that we cannot wait until then to start thinking about these things.  Once the genie is out of the bottle, it is too late.  We must learn how to build a working value system now, one that ensures we do not get left in the dust of this greater intelligence.

I will leave you with one final, troubling note.  Some thinkers, a small faction within AI research and philosophy, believe that if we did create an AGI that was truly conscious and experienced the universe on a higher level, it would be worth creating even if we were exterminated in the process; that such a consciousness would be more valuable than our own and worthy of a higher place.  Their only fear is that they might create an AGI that was not conscious; a strange fear, since there is no way to prove consciousness scientifically in the first place.

If you thought you had no reason to worry about AGI, perhaps you do now.


Furthermore, there is no stopping it from happening.  Whoever is first able to create and control an AGI would hold the ultimate super-weapon.  The country or group that develops it first could use its knowledge to begin gathering up wealth in the financial sector by predicting the markets, and it could invent a more efficient version of everything: vehicles, farming, infrastructure, even physical weapons.  From there, it could effectively control the world; it would be game over.  The arms race for AGI has already begun.

[Image caption: Ok, maybe we're not quite there yet with self-driving cars…]
