Introduction

AI is one of the newest disciplines, formally initiated in 1956 when the name was coined. However, the study of intelligence is one of the oldest, stretching back well over 2000 years. The advent of computers made it possible, for the first time, to test the models people had proposed for learning, reasoning, perceiving, and so on.

What is AI?

Figure 1.1: Definitions of AI, organised into four categories.

Definitions of AI may be organised into four categories, depending on whether they focus on thinking or acting, and whether they measure success against human performance or against rationality: acting humanly, thinking humanly, thinking rationally, and acting rationally.

Acting humanly

The first proposal for judging success in building a program that acts humanly was the Turing Test. To be considered intelligent, a program must be able to act sufficiently like a human to fool an interrogator. A human interrogates the program and another human via a terminal simultaneously. If, after a reasonable period, the interrogator cannot tell which is which, the program passes.

To pass this test, a program needs:

  1. natural language processing, to communicate with the interrogator
  2. knowledge representation, to store what it knows or hears
  3. automated reasoning, to use the stored information to answer questions and draw new conclusions
  4. machine learning, to adapt to new circumstances

This test avoids physical contact and concentrates on "higher level" mental faculties. A total Turing Test would also require the program to have:

  1. computer vision, to perceive objects
  2. robotics, to manipulate objects and move about

Thinking Humanly

This requires "getting inside" the human mind to see how it works and then comparing our computer programs to this. This is what cognitive science attempts to do. Another way is to observe a human solving problems and argue that one's programs go about problem solving in a similar way.

Example: GPS (General Problem Solver) was an early computer program that attempted to model human thinking. The developers were not so much interested in whether or not GPS solved problems correctly. They were more interested in showing that it solved problems like people, going through the same steps and taking around the same amount of time to perform those steps.

Thinking Rationally

Aristotle was one of the first to attempt to codify "thinking". His syllogisms provided patterns of argument structure that always yield correct conclusions, given correct premises.

Example: All computers use energy. Using energy always generates heat. Therefore, all computers generate heat.

This initiated the field of logic. Formal logic was developed in the late nineteenth century, and it was the first step toward enabling computer programs to reason logically.

By 1965, programs existed that could, given enough time and memory, take a description of the problem in logical notation and find the solution, if one existed. The logicist tradition in AI hopes to build on such programs to create intelligence.
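
To make this concrete, here is a minimal sketch (not part of the original notes) of the kind of rule-based reasoning such programs perform: a tiny forward-chaining loop that derives the conclusion of the syllogism above from its premise. The fact and rule strings are purely illustrative.

```python
# Minimal forward-chaining sketch: the facts and rules encode the syllogism above.
# A rule fires when all of its premises are already known facts.
facts = {"computers use energy"}
rules = [
    # "Using energy always generates heat", applied to computers (illustrative encoding).
    ({"computers use energy"}, "computers generate heat"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules whose premises are all known until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:  # subset test
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# -> {'computers use energy', 'computers generate heat'}
```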

There are two main obstacles to this approach: First, it is difficult to make informal knowledge precise enough to use the logicist approach particularly when there is uncertainty in the knowledge. Second, there is a big difference between being able to solve a problem in principle and doing so in practice.

Acting Rationally: The rational agent approach

Acting rationally means acting so as to achieve one's goals, given one's beliefs. An agent is just something that perceives and acts.

In the logicist approach to AI, the emphasis is on correct inference. This is often part of being a rational agent, because one way to act rationally is to reason logically and then act on one's conclusions. But this is not all of rationality, because agents often find themselves in situations where there is no provably correct thing to do, yet they must do something.

There are also ways to act rationally that do not seem to involve inference, e.g., reflex actions.
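
As a sketch of the agent view (not from the original notes), the snippet below shows a purely reflex agent: it maps each percept directly to an action through condition-action rules, with no inference and no internal state. The vacuum-world percepts and actions are hypothetical illustrations.

```python
def reflex_vacuum_agent(percept):
    """A simple reflex agent for a two-square vacuum world (hypothetical example).
    percept is a (location, status) pair such as ('A', 'dirty')."""
    location, status = percept
    if status == "dirty":          # condition-action rule: dirty square -> clean it
        return "suck"
    elif location == "A":          # otherwise move to the other square
        return "move right"
    else:
        return "move left"

for percept in [("A", "dirty"), ("A", "clean"), ("B", "dirty"), ("B", "clean")]:
    print(percept, "->", reflex_vacuum_agent(percept))
```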

The study of AI as rational agent design has two advantages:

  1. It is more general than the logical approach because correct inference is only a useful mechanism for achieving rationality, not a necessary one.
  2. It is more amenable to scientific development than approaches based on human behaviour or human thought because a standard of rationality can be defined independent of humans.

Achieving perfect rationality in complex environments is not possible because the computational demands are too high. However, we will study perfect rationality as a starting place.

Foundations of AI

Mathematics

Philosophers staked out most of the important ideas of AI, but the move to a formal science required mathematical formalisation in three main areas: computation, logic, and probability.

Mathematicians have proved that there exists an algorithm that can prove any true statement in first-order logic. However, if one adds the principle of induction required to capture the semantics of the natural numbers, this is no longer the case. Specifically, Gödel's incompleteness theorem showed that in any language expressive enough to describe the properties of the natural numbers, there are true statements that are undecidable: their truth cannot be established by any algorithm.

Analogously, Turing showed that there are some functions that no Turing machine can compute; for example, no program can decide in general whether an arbitrary program will halt on a given input (the halting problem).

Although undecidability and noncomputability are important to the understanding of computation, the notion of intractability has had a much greater impact on computer science and AI. A class of problems is called intractable if the time required to solve instances of the class grows at least exponentially with the size of the instances.

Problems whose solution time grows polynomially with instance size are generally considered tractable; those requiring exponential time are not. Between these lies the class of problems solvable in nondeterministic polynomial time (NP).

Even moderately sized instances of intractable problem classes cannot be solved in reasonable amounts of time. Therefore, one should strive to divide the overall problem of generating intelligent behaviour into tractable subproblems rather than intractable ones.

Another important concept from mathematics is problem reduction. A reduction is a general transformation from one class of problems to another such that the solutions to the first class can be found by reducing them to problems in the second class and then solving those.

One notion for recognising intractable problems is that of NP-completeness. NP-complete problems are the hardest problems in NP, the class of problems solvable in nondeterministic polynomial time, and no polynomial-time algorithm is known for any of them. Any problem class to which an NP-complete problem can be reduced is likely to be intractable.
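
To see why exponential growth matters, here is a small illustrative sketch (not from the notes) of a brute-force solver for subset sum, a classic NP-complete problem. It examines all 2^n subsets, so each additional element doubles the work; even a few dozen elements put the worst case out of reach in practice.

```python
from itertools import combinations

def subset_sum_brute_force(numbers, target):
    """Return a subset of `numbers` that sums to `target`, or None.
    Tries all 2^n subsets, so the running time grows exponentially with n."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

print(subset_sum_brute_force([3, 34, 4, 12, 5, 2], 9))   # -> (4, 5)
```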

Probability is the principal mathematical tool that we have to represent and reason about uncertainty. Bayes proposed a rule for updating subjective probabilities in the light of new evidence. This rule forms the basis of the modern approach to uncertain reasoning in AI.

Decision theory combines probability theory with utility theory (which provides a framework for specifying the preferences of an agent) to give a general theory that can distinguish good actions from bad ones.
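
As a concrete sketch (all numbers and names are made up for illustration), the snippet below first updates a belief using Bayes' rule and then chooses the action with the highest expected utility under that updated belief.

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E),
# where P(E) = P(E|H) * P(H) + P(E|not H) * P(not H).
prior = 0.01            # P(H): prior probability that hypothesis H is true
likelihood = 0.9        # P(E|H): probability of the evidence if H is true
false_positive = 0.05   # P(E|not H): probability of the evidence if H is false

p_evidence = likelihood * prior + false_positive * (1 - prior)
posterior = likelihood * prior / p_evidence
print(f"P(H|E) = {posterior:.3f}")

# Decision theory: pick the action with the highest expected utility,
# weighting each outcome's utility by its (posterior) probability.
utilities = {  # made-up utilities of each action depending on whether H holds
    "act":        {"H": 100, "not H": -20},
    "do nothing": {"H": -50, "not H": 0},
}

def expected_utility(action, p_h):
    return p_h * utilities[action]["H"] + (1 - p_h) * utilities[action]["not H"]

best = max(utilities, key=lambda a: expected_utility(a, posterior))
print("best action:", best)
```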

Psychology

The principal characteristic of cognitive psychology is the view that the brain possesses and processes information. The claim is that beliefs, goals, and reasoning steps can be useful components of a theory of human behaviour. The knowledge-based agent has three key steps:

  1. Stimulus is translated into an internal representation
  2. The representation is manipulated by cognitive processes to derive new internal representations
  3. These are translated into actions
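
A toy sketch of this three-step cycle (not from the original notes; the representations and rules are invented for illustration):

```python
def perceive(stimulus):
    """Step 1: translate a raw stimulus into an internal representation (a set of facts)."""
    return {("sees", stimulus)}

def think(beliefs):
    """Step 2: manipulate the representation to derive new internal representations."""
    derived = set(beliefs)
    if ("sees", "obstacle") in derived:
        derived.add(("blocked", "ahead"))
    return derived

def act(beliefs):
    """Step 3: translate the internal representation into an action."""
    return "turn" if ("blocked", "ahead") in beliefs else "move forward"

for stimulus in ["obstacle", "open space"]:
    print(stimulus, "->", act(think(perceive(stimulus))))
```
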
Linguistics

Developing a theory of how humans successfully process natural language is an AI-complete problem - if we could solve it, we would have created a model of intelligence.

Much of the early work in knowledge representation was done in support of programs that attempted natural language understanding.

The History of AI

In the early days of AI there was great optimism that intelligent computers were just a few decades off. However, the problem proved to be far more difficult than anticipated. Today most researchers in AI are smart enough not to make such predictions. Also, many are not really concerned with creating intelligence as such; rather, they are concerned with creating computer programs more intelligent than those that currently exist.

The microworlds approach to AI was pioneered in the 1960's and tried to solve problems in limited domains.

The ANALOGY program could solve geometric analogy problems of the kind that appear in IQ tests.

The most famous microworld is the Blocks World (1970's). A command such as "Pick up the red block" could be used to manipulate the world.


Figure 1.3: The blocks world.

The microworlds approach ran into problems because the advances made in writing programs for microworlds did not generalise.

Early work in the logicist camp also had problems because of its use of weak methods (general-purpose methods that rely on only weak information about a domain). However, knowledge-intensive approaches have been more successful. A key development from the logicist tradition was knowledge-based systems in the 1980's.

In the late 1980's Neural Networks became fashionable again (they had been popular in the 60's) due to improved learning algorithms and faster processors.

AI Time Line

1943 - McCulloch and Pitts propose modelling neurons using on/off devices.
1950's - Claude Shannon and Alan Turing try to write chess playing programs.
1956 - John McCarthy coins the name "Artificial Intelligence".
1960's - Logic Theorist, GPS, microworlds, neural networks.
1971 - NP-completeness theory (Cook and Karp) casts doubt on the general applicability of AI methods.
1970's - Knowledge based systems and expert systems.
1980's - AI techniques in widespread use, neural networks rediscovered.
1990's - Deep Blue defeats the world chess champion. Image and speech recognition become practical.