Classical or conventional programming has dominated AI, as it has all other forms of computer use. It is characterized by the process of first deriving an algorithmic procedure for achieving some desired goal within the accepted constraints and then casting this algorithm into a machine-executable form [i.e., the classical task of writing (or coding) a program]. Typically, the elements of the program are the conceptual objects of the problem [e.g., a small array of storage locations might be labeled short-term memory (STM)], and the algorithmic mechanisms programmed will directly implement the conceptualizations of the way the model works (e.g., subparts of the program using STM will be direct implementations of storing and removing objects from the STM object in the program). In this classical usage of computers, the speed and accuracy of the computer are used to perform a series of actions that could, in principle, be performed by hand with a pencil and paper. The main drawback from an AI perspective is that the programmer must decide in advance exactly what detailed sequence of actions will constitute the AI system.
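The classical style described above can be sketched in a few lines of Python. This is an illustrative example only (the class name, capacity, and displacement policy are assumptions, not from any particular cognitive model): the conceptual object STM becomes a small fixed-capacity buffer, and the model's store and remove operations are implemented directly on it.

```python
class ShortTermMemory:
    """The conceptual STM object, implemented directly as a small buffer."""

    def __init__(self, capacity=3):
        self.capacity = capacity   # fixed-size store, as in the STM example above
        self.items = []

    def store(self, item):
        # Storing into a full STM displaces the oldest item: a direct
        # implementation of one conceptualization of how the model works.
        if len(self.items) >= self.capacity:
            self.items.pop(0)
        self.items.append(item)

    def remove(self, item):
        if item in self.items:
            self.items.remove(item)


stm = ShortTermMemory(capacity=3)
for symbol in ["cat", "dog", "fish", "bird"]:
    stm.store(symbol)
print(stm.items)  # the oldest item, "cat", has been displaced
```

Note that every behavior of the system, including the displacement policy, had to be decided in advance by the programmer, which is exactly the drawback noted above.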
Most programming languages are procedural languages; that is, they permit the programmer to specify sequences of actions, procedures, that the computer will then execute. Several programming languages have a particular association with AI. One such language is LISP, which dominated the early decades of AI. LISP serves the needs of AI by being a general symbol-manipulation language (rather than a numeric computation language such as FORTRAN); it is also particularly useful for AI system development because it demands a minimum of predefined conditions on the computational objects, which permits maximum freedom for changing program structure in order to explore an AI model by observation of the computational consequences. This flexibility comes at the cost of automatic error checking, however, and can lead to fragile programs that may also be large and slow to execute. It is thus not a favored language and method for mainstream computing, in which the emphasis is on fast and reliable computation of predefined procedures.
Early in the history of AI it was realized that the complexity of AI models could be reduced by dividing the computational procedures and the objects they operate on into two distinct parts: (i) a list of rules that capture the knowledge of some domain and (ii) the mechanisms for operating on these rules. The former part became known as the knowledge base and the latter as the inference engine or control regime. The following might be a simple rule in a chess-playing system: IF a potential exchange of pieces will result in the opponent losing the more valuable piece THEN do the exchange of pieces.
The general form of such rules is IF <condition> THEN <action>. When the <condition> is met, the <action> is executed, which generally results in new conditions being met and/or answers being generated. The control regime might be that the first rule whose condition is satisfied is executed, with this simple control repeated until some goal is achieved or no rules can be executed. This strategy for building AI systems became inextricably linked with the AI subarea of expert systems.
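The rule scheme and the simple control regime just described can be sketched as follows. This is a minimal illustration, not a real expert system: the knowledge base is a list of (condition, action) rules, loosely modeled on the chess-exchange rule above, and the inference engine repeatedly fires the first rule whose condition is satisfied until a goal holds or no rule applies. All names and values here are hypothetical.

```python
# Working memory: facts about a hypothetical position.
facts = {"opponent_piece_value": 9, "own_piece_value": 5}

def exchange_favorable(f):
    # IF a potential exchange loses the opponent the more valuable piece...
    return f["opponent_piece_value"] > f["own_piece_value"] and "exchanged" not in f

def do_exchange(f):
    # ...THEN do the exchange of pieces.
    f["exchanged"] = True

# The knowledge base: a list of IF-THEN rules.
rules = [(exchange_favorable, do_exchange)]

# The inference engine: fire the first satisfied rule, repeat until the
# goal is achieved or no rule can be executed.
def run(rules, facts, goal):
    while not goal(facts):
        for condition, action in rules:
            if condition(facts):
                action(facts)
                break
        else:
            break  # no rule fired: halt without reaching the goal
    return facts

run(rules, facts, goal=lambda f: "exchanged" in f)
print(facts.get("exchanged"))
```

Because the rules and the control loop are separate, the knowledge base can be extended without touching the inference engine, which is the point of the division.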
Development of the knowledge-base and inference-engine division was formalized in the idea of logic programming. The basis for logical reasoning, which has always been an attractive foundation on which to build AI, is a set of axioms and the mechanism of logical deduction, and this pair maps easily onto the knowledge base and inference engine, respectively. This particular development was implemented in detail in the logic programming language PROLOG, which became a popular AI language in the 1970s, especially in Europe and Japan.
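The logic-programming mapping can be sketched in Python (illustrative only; a real PROLOG system is far more general, with unification and backtracking). Here the knowledge base is a set of facts plus implications in the spirit of the PROLOG clause `mortal(X) :- human(X).`, and the inference engine is a deduction loop that derives new facts until nothing more follows.

```python
# Knowledge base: axioms (facts) and implications (rules).
facts = {("human", "socrates")}
rules = [(("human",), ("mortal",))]   # human(X) => mortal(X)

# Inference engine: apply logical deduction until a fixed point is reached.
def deduce(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (premise,), (conclusion,) in rules:
            for pred, arg in list(derived):
                if pred == premise and (conclusion, arg) not in derived:
                    derived.add((conclusion, arg))
                    changed = True
    return derived

result = deduce(facts, rules)
print(("mortal", "socrates") in result)  # deduced from the axioms
```

As in the rule-based case, the axioms can be changed independently of the deduction mechanism, which is precisely the knowledge-base/inference-engine split the paragraph describes.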