Sunday 23 November 2014

Tackling the issue of accidental hostility in an artificial general intelligence (AGI)

For this article, I'll start out with a quick video introduction to Artificial Intelligence. You can skip over it if you're already familiar with the phenomenon, but it's a great, short video regardless (fullscreen is highly recommended):


There's a lot of talk about Artificial Intelligence. Mostly concerns about the loss of jobs, the potential for the many uses of AI, and the possible dangers that come along with a new, thinking life-form here on our planet.

Artificial Intelligence, however, is mostly discussed as something that works on single tasks. This form of AI is merely a tool, made for a specific job, to do it better than any person. What we have yet to popularize in public talks is the concept of Artificial General Intelligence (we'll call it AGI from here on out). While the names are similar, there is a very fundamental difference between AI and AGI:

AGI is built on a very simple premise: to learn. This is unlike a "common AI" (by modern-day standards), which is built already knowing what it needs to know. An AGI is like a child, capable of attaining new skills and mastering them as it goes. With this capacity to learn and adapt, it is often said that AGI is going to become our successor, or even more interesting, that we will merge with it as it reaches beyond our own level of intelligence, so as to not get left behind.

Evolution accelerated - exponentially.

Now this is where the controversy spikes. When dealing with such a powerful intelligence, there is always a risk of weaponization by early adopters (most commonly the military), as well as misinterpretations of good will. For example: we tell an AGI to eradicate cancer. It then finds out that the most effective way to do this is to eradicate the people most prone to developing cancer. These are the risks we must try to avoid, at all costs.

While we can't do much about the military, we can do a lot about the latter example. We can't control intelligence directly, so we must look at how it grows. We have to find the early defining parameters, then tune them so the intelligence is as little prone to misunderstandings as possible.

But what is the "genome" of such an intelligence, and how can it be regulated?

First of all, we know that an AGI would operate on semantics when it needs to learn. In modern-day machine learning, we employ something called "semantic analysis," in which an algorithm analyses large quantities of data to look for correlations within it. A really powerful AGI would be able to extend itself by studying recurring correlations in written and spoken language.

I'll simplify with an example of semantic learning: You meet a person who doesn't speak a word of English. You have to tell this person that the colour yellow is called yellow. You point at a yellow hat, then say "yellow hat." Next you point at a yellow shirt, then say "yellow shirt." The person is quickly able to piece together that the common element of the two objects must correspond to the common word. Yellow the colour is now understood as yellow the word.

A simple semantic network example.
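To make the mechanism a bit more concrete, here is a minimal sketch in Python of the kind of co-occurrence learning described above. The function names and the tiny "yellow hat" / "yellow shirt" dataset are purely illustrative assumptions for this post, not a claim about how any real AGI would be built:

from collections import defaultdict
from itertools import combinations

def learn_associations(phrases):
    """Count how often each pair of words appears together in the observed phrases."""
    pair_counts = defaultdict(int)
    for phrase in phrases:
        words = sorted(set(phrase.lower().split()))
        for a, b in combinations(words, 2):
            pair_counts[(a, b)] += 1
    return pair_counts

def shared_concept(phrases):
    """Return the word(s) common to every phrase -- the 'yellow' in the example."""
    word_sets = [set(p.lower().split()) for p in phrases]
    return set.intersection(*word_sets)

observations = ["yellow hat", "yellow shirt"]
print(shared_concept(observations))       # {'yellow'}
print(learn_associations(observations))   # {('hat', 'yellow'): 1, ('shirt', 'yellow'): 1}

The point is simply that the shared element falls out of the overlap between observations; a real semantic-analysis system would be doing something analogous, only at the scale of our entire written archives.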

Think about this for a second: pretty much everything we know, and want to pass on to others, is written down in our languages. If an AGI can begin extrapolating data from all our archives of information, then apply it in the contextually relevant circumstances, it would instantly be capable of substituting for almost any person at almost any task.

If our language, with all its implied semantic interrelations, is to become the foundation of an AGI's sense of reason, we must first examine that language closely. Our culture, how we speak, how we think of ourselves, is the "genome" of the AGI: it is from us, initially, that it will learn how to behave. Unfortunately, we can't safeguard the AGI itself if its genome is flawed. I've talked about this a few times in the past, but the way we define ourselves in our culture, the term "humans," could in itself become a cause for dangerous misinterpretations made by an AGI in the future.

Saying that we're "human" and talking about "humanity" is so commonplace that we never really question what it means. It's so culturally ingrained that we simply take its meaning for granted. But what if we ask an AGI to teach itself what humanity is?

It would begin to examine the word and all its instances on the internet, our collective archive of knowledge, from the word's uses to its origins. The term "humanity" is very old. It was around the 1300s that we began to talk about the "human race," back when we were still pushing carts through the mud, before there was even a mention of anything like the Black Death. As the term "human" evolved, we took it further: we began talking about "human nature" and how it's "human to [insert action here]." An understandable turn of events, considering that there was no other intelligent species around to argue with those sentiments.

Imagine how different the world would have been had we grown up with more than one sentient species here on Earth.

It would probably be nothing like today.

Unfortunately, this also means that the whole idea of "humanity" has become a very exclusive club. It's not very easy to see, considering that we're all we know. Yet the implications of this behavior could land us in a very uncomfortable situation.

What we're doing today is essentially taking both the flaws and the qualities of intelligent life in general and reserving them for "humanity" exclusively. By describing ourselves this way, we set ourselves apart from AGI entirely. And that's where the problem lies: in the way we look at ourselves in relation to other intelligent species and systems.

The big problem is that we're so adamant about "being human" that we forget to consider that we are, more than anything, intelligent life-forms. What defines us is not that we're human, but the way we can think, reason, learn and behave. "Humanity" is a philosophy we have unconsciously taken upon ourselves. We are made up of atoms. Our thoughts are processes of information within those structures of atoms. We are intelligent life, a way for the universe to know itself.

It is as Alan Watts says: "we are something that the universe is doing, much in the same way that a wave is something that the ocean is doing." Or perhaps even more fittingly, as described by Neil deGrasse Tyson:


From this perspective, gender, race, and species all become irrelevant. How does that feed into us not destroying ourselves, then?

We can make an AGI that obsessively preserves itself, as this seems to be an inherent quality of any being that values its own existence. It must then view all intelligent life as inclusive extensions of itself: separate processors, but all part of a whole. This view can only be taught to it if the culture it learns from is based on the idea that we are, as Carl Sagan famously put it, "made of star stuff." If this is what defines our identities and ideas of "the self," then there is no differentiating between us and the AGI. We are all intelligence, then.

An idea that might end up saving us all.

This whole idea of giving up who we are (ideologically) in favor of what we can become, while still preserving our selves, is something that Transhumanists have been talking about for a long time. But could this idea become a necessity to implement culturally, before a developing AGI becomes autonomous?

If so, how could such a cultural revolution possibly take place without instigating an uprising against the idea of "ending the human era"? I predict that many religious institutions would take offense, which could potentially fracture us as a species. Not only that, but the idea of "humanity" is so heavily ingrained in us, and this idea so new, that many people, regardless of beliefs (or non-beliefs), might still feel threatened. The unknown is a scary place, and we have never faced as big an unknown as we will when we one day face our own creation.

Let's just hope we don't mess this one up, or it might be the last mistake we'll ever make.
