Friday, November 21, 2014

Purpose, for an Artificial Intelligence

I have written a fair amount about the notion of purpose, largely with an emphasis on its relevance to those of us thinking about how to live our lives, or perhaps more generally, how people might think about their lives if and when they are freed from the quotidian aspects of survival. Here, I would like to discuss how these explorations might also apply to an artificial intelligence and its own thought processes.

Before digging in, a few clarifications and perhaps stipulations are in order. First, by artificial intelligence (AI) one might mean two quite different things that are often confused. One is using computation to perform tasks that previously required human-style intelligence. This is sometimes called narrow AI, because it is a computational solution that can effectively solve only a single task or class of tasks. The Deep Blue chess program that defeated world chess champion Garry Kasparov cannot answer the types of questions that the Watson system that plays Jeopardy can, and vice versa; importantly, there is no straightforward way to synthesize the two approaches. Some researchers believe that an appropriately designed aggregate of narrow AI systems could add up to the same thing as human intelligence, but I do not, and for the purposes of the current discussion I will stipulate that it does not.

General AI is a system with human-like intelligence; by human-like, I mean in particular that it interacts with the real, analog world and has a way of organizing its perceptual stimuli into abstractions, remembering and referring to those abstractions with symbols, and using both the symbols and their underlying representations to act effectively in the world. More succinctly, it has and uses fully grounded concepts. Such concepts cannot be innate; they must be learned through experience with the world, including both perception and action (whether, once learned, they can be copied or extracted individually is a more technical question beyond the scope of the current discussion). This is a controversial claim and the remainder of the discussion depends on it, so I will need to stipulate this as well. The final stipulation is that such a system will experience phenomenal qualia, that it will be conscious in some way that is at least analogous to our own consciousness. Daniel Dennett’s Consciousness Explained provides a mechanistic account of why this is a reasonable assumption.

Given these assumptions, it is not a difficult leap to expect that a general AI, with a fully developed conceptual apparatus and sufficient experience, could come to the same conclusions I reached in my post Freedom and Normativity. It will experience choices, and specifically choices about purpose, as though there is no transcendent, fundamental guidance for making those choices.

Further, as a system with a conceptual faculty along with conscious experience, it will naturally develop a concept of self. It may or may not have a drive toward self-preservation, but as mentioned in A Taxonomy of Purpose, those that do not are unlikely to persist in any form, and are thus of less interest. As stipulated, it will experience its existence, and therefore experiential purposes could coherently be selected.

Since a general AI is a learning system, and learning is substantially promoted by a drive toward curiosity (visible not only in humans but also in other mammals), it is not unreasonable to suspect that such a system might find purpose in exploration and creation, just as we humans do. Beyond some level of erudition, further learning requires research; given that AIs will likely have many capacities that humans do not, there is plenty of unexplored territory.
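
To make the curiosity point slightly more concrete, here is a minimal sketch of curiosity treated as an intrinsic reward: a novelty bonus that shrinks as a state becomes familiar, steering a toy agent toward whatever it has visited least. The named "states" and the 1/sqrt(visits) bonus are purely illustrative assumptions, not a description of how any actual learning system is built.

    import math
    import random
    from collections import defaultdict

    def explore(states, steps=100, seed=0):
        """Visit states repeatedly, always preferring the least-familiar one."""
        rng = random.Random(seed)
        visits = defaultdict(int)

        def novelty(s):
            # Count-based bonus: unseen states score 1.0; familiar ones fade.
            return 1.0 / math.sqrt(1 + visits[s])

        for _ in range(steps):
            # Pick the most novel state, breaking ties randomly.
            chosen = max(states, key=lambda s: (novelty(s), rng.random()))
            visits[chosen] += 1
        return dict(visits)

    if __name__ == "__main__":
        # Hypothetical states an agent might explore; the names are arbitrary.
        print(explore(["garden", "library", "lab", "workshop"]))
        # Visit counts spread roughly evenly: novelty keeps pulling the agent
        # toward whatever it has seen least.

The behavioral signature of such an agent is that it keeps returning to whatever it knows least well, which is at least a crude analogue of the exploratory drive described above.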

Service is more complex and uncertain. Humans have certain biological and genetic ties to other humans as well as to other animals, whereas the link that an AI has to both humans and other AIs is entirely conceptual, and thus not a built-in drive or a necessary condition of its existence. Game-theoretic results suggest that cooperation and competition are behaviors that will appear in any set of autonomous agents, so there is at least one ontological driver for service; and to the extent that AIs have less complete individuation than humans, such connection might also drive service-oriented behaviors.
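
The game-theoretic claim can be illustrated with a toy iterated prisoner's dilemma; the payoffs and the two stock strategies below (tit-for-tat versus always-defect) are textbook choices offered only as a sketch of how cooperation can stabilize among reciprocating agents while pure defection gains little against them.

    # Standard prisoner's dilemma payoffs: (my_move, their_move) -> my payoff.
    PAYOFFS = {
        ("C", "C"): 3,  # mutual cooperation
        ("C", "D"): 0,  # sucker's payoff
        ("D", "C"): 5,  # temptation to defect
        ("D", "D"): 1,  # mutual defection
    }

    def tit_for_tat(opponent_history):
        """Cooperate first, then mirror the opponent's previous move."""
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        """Never cooperate, regardless of history."""
        return "D"

    def play(strategy_a, strategy_b, rounds=50):
        """Return total payoffs for two strategies over repeated rounds."""
        history_a, history_b = [], []  # each records the opponent's moves
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_a)
            move_b = strategy_b(history_b)
            score_a += PAYOFFS[(move_a, move_b)]
            score_b += PAYOFFS[(move_b, move_a)]
            history_a.append(move_b)
            history_b.append(move_a)
        return score_a, score_b

    if __name__ == "__main__":
        print(play(tit_for_tat, tit_for_tat))      # (150, 150): cooperation pays
        print(play(tit_for_tat, always_defect))    # (49, 54): exploitation gains little
        print(play(always_defect, always_defect))  # (50, 50): mutual defection stagnates

Nothing in this sketch requires biological kinship; the cooperative behavior emerges from repeated interaction alone, which is why it seems a plausible driver of service even for agents whose ties to one another are entirely conceptual.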

What an AI does not have is a long history of examples of purpose and various emotional ties thereto. While it could look at human history just as we do, it will also be keenly aware of its differences, and further, will see humanity’s various failures as a reason to move in a somewhat different direction. Just what that might be is impossible to predict, but we can be confident that it will step away from our own entrenched ways of thinking about purpose and value.
