
14.2.2 Can intelligent machines be constructed?

Is it possible for a machine to be intelligent? If one discounts the mystical, then the answer to this question is certainly yes, since humans are themselves biological “machines” – machines that are self-aware.[2] Knowing that intelligent machines exist does not answer the question of whether humans can construct them. While most computer scientists think that constructing machines capable of intelligent behavior is possible, it is conceivable that we humans may simply be too dumb to ever figure out how to build an intelligent machine.

So, how should we go about trying to construct an artificial intelligence? There are two general approaches taken by AI researchers in their attempts to automate intelligent behavior. One approach is often called machine learning while the other approach goes by the name symbolic AI.

Symbolic AI focuses on developing systems composed of explicit rules that manipulate data in ways designed to produce seemingly intelligent behavior. The underlying rules are designed by people and encoded into computer programs. Symbolic AI is the “classic” approach to artificial intelligence. Much of the early success in the field, in areas such as game playing, automated reasoning, and expert systems, resulted from this approach. Symbolic AI has been less successful at automating lower-level behaviors such as natural (human) language understanding and computer vision.

As opposed to traditional symbolic AI systems in which humans try to figure out the rules underlying a process and then encode those rules as computer programs, machine learning systems are based on the idea of designing a system that learns by example. While a symbolic approach to character recognition might attempt to generate an explicit list of the features that make a capital “A” an “A” – such as the angle of the two lines that form the outer shape of the letter, the position of the cross bar, etc. – a machine learning approach would instead focus on constructing a system that could be shown many different examples of the letter “A” and then discover for itself the features that make an “A” an “A” as opposed to a “B”.
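
To make the contrast concrete, here is a minimal sketch of the two approaches applied to a toy version of the character-recognition problem. It is an illustration only, not a real recognition system: letters are assumed to be tiny 5×5 grids of 0s and 1s, the “features” checked by the hand-written rule are made up, and the “learning” is nothing more than remembering labeled examples and picking the closest match.

    # --- Symbolic approach: a person writes the rules explicitly ---
    def looks_like_A(pixels):
        # Hand-coded (and deliberately oversimplified) rule: assume an 'A' has a
        # mostly filled middle row (the cross bar) and an empty bottom-center pixel.
        middle_row = pixels[10:15]     # row 2 of the flattened 5x5 grid
        bottom_center = pixels[22]     # center pixel of the bottom row
        return sum(middle_row) >= 3 and bottom_center == 0

    # --- Machine learning approach: the system is shown labeled examples ---
    def train(examples):
        # "Training" here is simply remembering the labeled (grid, label) pairs.
        return list(examples)

    def classify(model, pixels):
        # Label a new grid with the label of the most similar stored example,
        # where similarity is the number of matching pixels (nearest neighbor).
        def similarity(example):
            grid, _label = example
            return sum(p == q for p, q in zip(grid, pixels))
        _best_grid, best_label = max(model, key=similarity)
        return best_label

    # Usage: model = train([(grid_of_an_A, "A"), (grid_of_a_B, "B"), ...])
    #        classify(model, some_new_grid)

The division of labor is the point: in the first function a person has decided what makes an “A” an “A”; in the second, that knowledge comes from the examples themselves.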

Some approaches to machine learning are biologically inspired, such as artificial neural networks, which model certain aspects of the structure and function of biological neural systems (brains). Other approaches are based on more abstract mathematical models, such as Bayesian networks. In recent years, the machine learning approach, when paired with the vast quantities of data now available over the Internet and today’s faster processors, has yielded significant progress in areas such as speech recognition, natural (human) language translation, and computer vision.
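
To give a flavor of what “biologically inspired” means, the following sketch shows a single artificial neuron of the simplest kind (a perceptron): it forms a weighted sum of its inputs and “fires” if that sum crosses a threshold, and it learns by nudging its weights whenever it gets a labeled example wrong. The specific numbers below (learning rate, number of passes, the AND example) are arbitrary choices for the illustration; real neural networks connect many such units in layers.

    def neuron_output(weights, bias, inputs):
        # Weighted sum of the inputs followed by a threshold, loosely analogous
        # to a biological neuron "firing" once its stimulation is high enough.
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if total > 0 else 0

    def train_neuron(examples, n_inputs, rate=0.1, passes=50):
        # Classic perceptron learning rule: nudge the weights and bias whenever
        # the neuron's answer disagrees with a labeled example.
        weights, bias = [0.0] * n_inputs, 0.0
        for _ in range(passes):
            for inputs, target in examples:
                error = target - neuron_output(weights, bias, inputs)
                weights = [w + rate * error * x for w, x in zip(weights, inputs)]
                bias += rate * error
        return weights, bias

    # Example: learning the logical AND function from four labeled examples.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias = train_neuron(examples, n_inputs=2)
    print([neuron_output(weights, bias, x) for x, _ in examples])   # prints [0, 0, 0, 1]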

Whether or not these approaches (symbolic AI and machine learning) have a shot at achieving intelligent behavior depends on the truth of an underlying assumption known as the physical symbol system hypothesis. The physical symbol system hypothesis states that physical symbol systems are capable of generating intelligent behavior; in other words, intelligent behavior can be encoded into such a system. A physical symbol system is a collection of physical patterns (such as written characters or electrical charges), called symbols, together with a set of processes (or rules) for manipulating those symbols (e.g., creating, modifying, deleting, and reordering them). A computer is a real-world device that can be used to implement physical symbol systems, since computers are capable of both storing symbols (generally represented as strings of 0’s and 1’s) and manipulating those symbols under program control.
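
For a concrete (if trivial) example of symbol manipulation under explicit rules, here is a tiny, made-up physical symbol system. Its symbols are the characters “|” and “+”, and two rewrite rules are enough to perform unary addition; the particular rules and the example input are our own choices for the illustration. The system attaches no meaning to the strokes, it just mechanically rewrites patterns.

    RULES = [
        ("|+", "+|"),   # move one stroke from the left operand across the '+'
        ("+",  ""),     # once the left operand is used up, erase the '+'
    ]

    def rewrite(expression):
        # Repeatedly apply the first rule whose pattern appears in the string,
        # until no rule applies. The system has no notion of numbers or addition;
        # it only shuffles symbols according to the rules.
        changed = True
        while changed:
            changed = False
            for pattern, replacement in RULES:
                if pattern in expression:
                    expression = expression.replace(pattern, replacement, 1)
                    changed = True
                    break
        return expression

    print(rewrite("||+|||"))   # prints "|||||"  (2 + 3 = 5, in unary)

Trivial as it is, this is the kind of system the hypothesis is about: the claim is that, suitably scaled up, nothing more than symbols and rules for manipulating them is needed to produce intelligent behavior.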

While most AI researchers believe the physical symbol system hypothesis to be true, it is only a hypothesis, not a proven fact. Some philosophers, on the other hand, are not convinced and believe that intelligent behavior will never be achieved by computers or any other physical symbol system. Others believe that although the development of machines that act in an intelligent manner is possible, or even likely, such systems will never be truly intelligent. In other words, these philosophers think that while it is possible that a machine may at some point be able to “mimic” intelligent behavior so as to fool people into thinking it is intelligent, it will never really “be” intelligent – just a good fake.

These two schools of thought go by the names weak AI and strong AI. Weak AI refers to machines that behave in an intelligent manner, with no claim made as to whether or not the underlying system is truly intelligent. Strong AI refers to machines that not only act in an intelligent manner, but actually are intelligent.

At first glance, the debate over weak versus strong AI may appear pointless. After all, the debate is not over how the systems behave, but over the subtler question of whether machines can really possess self-awareness.

The answer that we humans settle on to this rather esoteric question will eventually have an enormous impact on how we treat machines that display intelligent behavior.[3] If we view them as elaborate devices, then they will have no rights – one cannot be cruel to a microwave oven, after all. If, on the other hand, we choose to view them as true intelligences, then humans will be faced with a large number of rather thorny moral issues: Is it possible to be cruel to an AI? Should ownership of an AI be allowed, or would it be equivalent to slavery? What if we could construct AIs so that they enjoyed doing what we wanted? Wouldn’t that solve the moral problems associated with owning them? Well, the vast majority of humans have decided that slavery is morally repugnant irrespective of whether or not the slave might claim to be happy with his or her station in life. Shouldn’t the same moral arguments apply to all truly intelligent beings?

If computer scientists ever succeed in the goal of creating machines that seem to act in an intelligent manner, this debate over AI rights is likely to become extremely heated – as the economic and social consequences could be enormous – perhaps even more heated than the current debate over abortion that rages in the US. In general, one’s opinion on abortion is primarily determined by whether one believes “personhood” begins at conception, at birth, or somewhere in between.[4] This question has no definitive answer, as “personhood” is not a rigidly definable concept and, in fact, has varied between different societies and over time. Some societies and religions view personhood as beginning at conception. Other societies have defined personhood as beginning at various times following birth, such as in ancient Sparta, where newborns were examined by local elders and, if found wanting, were thrown over a cliff. Today such a practice sounds barbaric, but to those living in ancient Sparta the newborn wasn’t truly a Spartan until the local elders pronounced the child to be one.

Just as there is no definitive test to determine when “personhood” begins, there is no such test to determine whether an AI is truly intelligent (strong AI), and therefore presumably deserving of rights, or whether it simply “mimics” intelligent behavior (weak AI). This point is succinctly made in the film “2001: A Space Odyssey” when a BBC reporter asks astronaut Dave Bowman whether HAL has genuine emotions. Dave responds: “Well, he acts like he has genuine emotions. Of course, he’s programmed that way to make it easier for us to talk to him. But as to whether or not he has real feelings is something I don’t think anyone can truthfully answer.”


Footnotes

[2]  Many humans believe in some sort of “vital essence,” “spirit,” or “soul.” However, those beliefs are untestable, in the scientific sense of the word, and are excluded from consideration here.

[3]  Assuming, for the moment, that such machines can ever be constructed.

[4]  The often-repeated “Life begins at conception” meme misses the point. Of course a zygote, blastocyst, embryo, and fetus are alive – as are egg cells and sperm cells even before conception. Humans have no problem with the death of many kinds of living things, especially microscopic ones. We do, however, have a problem with humans killing humans. So the key question is not when life begins, but rather when personhood begins.
