Implementing Intelligence

Principles of Synthetic Intelligence: An Architecture of Motivated Cognition

To implement human intelligence in computer systems, the challenge is to build a functional architecture that simulates the interconnected and dynamic workings of perception, motivation, emotion, cognition, and action in a changing environment. Unlike experimental psychologists, who pose specific questions about specific aspects of human functioning, those who accept the challenge of building a functional architecture of human intelligence must push for synthesis and global understanding, adopting what is often called the design stance. Those who adopt the design stance operate on the principle that knowing how to design something like human intelligence is a requirement for understanding how human intelligence works. Nevertheless, those working to design artificial intelligence (AI) systems have been somewhat marginalized from mainstream psychological science, and, as noted by Joscha Bach in his outstanding book, Principles of Synthetic Intelligence, PSI: An Architecture of Motivated Cognition, the majority of architectures designed and implemented to date have been somewhat restricted in their focus and limited in their definition of human intelligence.

For example, in Chapter 1, Bach describes Allen Newell’s Soar (State, Operator And Result; Newell, 1992), an architecture for general intelligence that focused on developing algorithms that reproduce regularities in problem-solving activity as observed in experimental psychology across multiple domains (chess playing, language, memory tasks, and even skiing). Soar works with the idea of problem spaces: intelligence is largely characterized as movement through a problem space, with a set of knowledge states, operators for state transitions, constraints on the application of operators, and control knowledge about the next applicable operator that together shape problem solving in any given situation. Soar uses three principles of operation: heuristic search for solving problems with little knowledge; a procedural method for routine tasks; and a symbolic theory of bottom-up learning that implements the Power Law of Learning.
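
To make the problem-space idea concrete, here is a minimal sketch of heuristic search through a space of states and operators. It is not Soar's actual machinery; the toy problem, operators, and heuristic are invented for illustration.

```python
import heapq

# Illustrative sketch of problem-space search (not Soar itself): states,
# operators that transform states, and a heuristic that guides the choice
# of the next applicable operator.

def heuristic(state, goal):
    """Estimated distance from state to goal (toy example: numeric gap)."""
    return abs(goal - state)

def solve(start, goal, operators):
    """Best-first search through the problem space defined by the operators."""
    frontier = [(heuristic(start, goal), start, [])]
    visited = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for name, apply_op in operators:
            nxt = apply_op(state)
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt, goal), nxt, path + [name]))
    return None

# Toy problem space: reach 14 from 1 using "double" and "add one" operators.
operators = [("double", lambda s: s * 2), ("add_one", lambda s: s + 1)]
print(solve(1, 14, operators))  # prints one operator sequence that reaches the goal (greedy, not necessarily shortest)
```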

One criticism of the Soar architecture is its limited treatment of perception and action. Other architectures, for example John Anderson’s ACT (Adaptive Character of Thought; Anderson, 1983, 1990), work to implement both perceptual and motor facilities. ACT also incorporates a model of human associative memory, which attempts to provide a descriptive language of mental content, with hierarchies of connected nodes in a semantic network. As the model developed (ACT-R; Lebiere & Anderson, 1993), it sought to implement both declarative memory, using principles of connectionism and spreading activation across nodes (or chunks), and procedural memory, using production rules that coordinate cognitive behaviour via a goal stack laid out in working memory.
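
As a rough illustration of production rules operating against a goal stack, consider the sketch below. It is not ACT-R's real syntax or engine; the chunks, rules, and counting task are invented for the example.

```python
# Illustrative sketch of a production system with a goal stack
# (invented rule and chunk names; not ACT-R's actual syntax or engine).

declarative_memory = {("count_fact", 2, 3), ("count_fact", 3, 4)}  # chunks: "3 follows 2", etc.
goal_stack = [("count_from_to", 2, 4, 2)]  # (goal type, start, end, current)

def step():
    """Fire the first production whose condition matches the top goal."""
    kind, start, end, current = goal_stack[-1]
    if current == end:                       # production: stop when the end is reached
        goal_stack.pop()
        return f"finished counting at {current}"
    for fact in declarative_memory:          # production: retrieve the next number
        if fact[0] == "count_fact" and fact[1] == current:
            goal_stack[-1] = (kind, start, end, fact[2])
            return f"counted {current} -> {fact[2]}"
    return "no production applicable"

while goal_stack:
    print(step())
```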

Thus, while Newell’s Soar was initially designed as a minimally complex architecture of general intelligence, Anderson’s ACT attempts to be an integrated theory of mind and brain (Anderson et al., 2004; Anderson, 2007). Nevertheless, Bach notes a number of limitations of the ACT architecture: knowledge in ACT-R is usually not acquired step by step by gathering experience from the environment, but is preprogrammed by the experimenter; the architecture has no representation of motivational and emotional variables; and knowledge has to exist in declarative form to influence the behaviour of the system.

Dietrich Dörner’s PSI theory (Dörner, 1999) and Joscha Bach’s MicroPSI implementation of that theory add considerably to traditional cognitive architectures by incorporating drives (e.g., hunger, affiliation needs, reduction of uncertainty), emotions, arousal, autonomous behaviour, and much more. Dörner’s PSI theory pushes for greater synthesis of multiple functions, and Bach’s MicroPSI provides an outstanding example of how it might be possible to implement an ecologically valid and diversified form of synthetic intelligence, a primary goal of the AI movement. Bach provides a very readable and reasonable account of prominent architectures (e.g., Soar, ACT, and CogAff) in Chapter 1. PSI theory and the architecture of a PSI agent are described in detail in Chapters 2–6.

A core feature of the PSI agent is the implementation of feedback loops that are instrumental in shaping adaptive behaviour and maintaining homeostatic balance in the face of a dynamic environment. PSI theory also assumes explicit symbolic representations in the form of hierarchical networks of nodes, both localist and distributed, for declarative, procedural, and tacit knowledge, with system activity modelled as modulated and directional spreading of activation within these networks. Plans, episodes, situations, and objects are described with a semantic network formalism that draws on a variety of link types.
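
A minimal sketch of spreading activation may help picture this. The nodes, link weights, and decay factor below are illustrative assumptions, not MicroPSI's actual representation.

```python
# Minimal sketch of spreading activation in a semantic network
# (toy nodes, link weights, and decay factor; not MicroPSI's implementation).

links = {
    "dog": [("animal", 0.8), ("bark", 0.6)],
    "animal": [("living_thing", 0.7)],
    "bark": [("sound", 0.5)],
}

def spread(source, steps=2, decay=0.5):
    """Propagate activation outward from a source node, attenuating per hop."""
    activation = {source: 1.0}
    frontier = {source}
    for _ in range(steps):
        reached = set()
        for node in frontier:
            for target, weight in links.get(node, []):
                gained = activation[node] * weight * decay
                if gained > activation.get(target, 0.0):
                    activation[target] = gained
                    reached.add(target)
        frontier = reached
    return activation

print(spread("dog"))  # activation fades with distance from the source node
```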

Perception is largely based on conceptual hypotheses that guide the recognition of objects, situations, and episodes. Memory derives from a situation image that is extrapolated into a branching expectation horizon, which shapes ongoing recognition and planning (working memory) and is gradually transferred into an episodic memory (the protocol); the protocol is subject to selective decay and reinforcement that ultimately yields automated behavioural routines and elements for plans (procedural memory). Plans take the form of a situation description that acts as a condition, an operator (a hierarchical action description), and an expected outcome of the operation (i.e., another situation description). Situations and operators in long-term memory may have motivational relevance, which influences retrieval and reinforcement. Both perception and operations on memory content are subject to emotional modulation.

Unlike most other architectures, PSI theory includes a specification of urges, or drives, with the activity of the system directed toward the satisfaction of a set of specific physiological, social, and cognitive urges that reflect demands of the system. A mismatch between the target value of a demand and its current value produces an urge signal, and different urges result in different motives and different patterns of behaviour. For example, physiological urges such as those for food and water are relieved by consuming resources that offset the deviation of the demand from its homeostatic balance. The cognitive urge for reduction of uncertainty gives rise to uncertainty-reduction motives and behaviours; it is served through exploration and frustrated by mismatches with expectations and/or failures to form anticipations. Pleasure and distress signals derive from changes in the demands of the system; they act as reinforcement values for the learning of behavioural procedures and episodic sequences, and they define appetitive and aversive goals.
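
To make this loop concrete, here is a minimal sketch under assumed names and numbers (not Bach's actual equations): an urge signal as the gap between a demand's target and current value, and a pleasure or distress signal as the change in that gap after an action.

```python
# Illustrative sketch of demands, urge signals, and pleasure/distress
# (invented demand names and values; not MicroPSI's actual equations).

demands = {"energy": {"target": 1.0, "current": 0.5},
           "certainty": {"target": 1.0, "current": 0.75}}

def urge(demand):
    """Urge strength = mismatch between target and current value."""
    return max(0.0, demand["target"] - demand["current"])

def consume(demand, amount):
    """Satisfy a demand; return a pleasure (+) or distress (-) signal
    proportional to the change in the mismatch."""
    before = urge(demand)
    demand["current"] = min(demand["target"], demand["current"] + amount)
    return before - urge(demand)

print(urge(demands["energy"]))            # 0.5  -> strong hunger-like urge
print(consume(demands["energy"], 0.25))   # +0.25 pleasure signal (mismatch reduced)
print(urge(demands["certainty"]))         # 0.25 -> weaker uncertainty urge
```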

Another key feature of the PSI agent is the use of modulation mechanisms that adjust the cognitive resources of the system to the environmental and internal situation. Modulators control action readiness via arousal, the stability of active behaviours via the selection threshold, the rate of orientation behaviour via the sampling rate, and the width and depth of activation spreading in perceptual processing, memory retrieval, and planning via the activation and resolution levels.

PSI theory assumes that emotion is an intrinsic aspect of cognition. Emotion is modelled in the PSI agent as a configurational setting of the cognitive modulators along with the pleasure/distress dimension and the assessment of urges. The phenomenological qualities of emotion are due to the effect of modulatory settings on perception and cognitive functioning, and to the experience of accompanying physical sensations that result from the effects of modulation settings on physiology (e.g., muscular tension, digestive functions, blood pressure).
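
As a rough illustration of the previous two paragraphs, the sketch below treats the modulators as a configuration and reads an emotion-like label off it. The modulator names follow the text, but the numeric ranges and the mapping to labels are invented for the sketch, not Bach's model.

```python
from dataclasses import dataclass

# Illustrative sketch: modulators as a configuration, with an emotion-like
# label read off the configuration (thresholds invented; not MicroPSI's model).

@dataclass
class Modulators:
    arousal: float              # action readiness
    selection_threshold: float  # stability of the currently active behaviour
    sampling_rate: float        # rate of orientation behaviour (checking the world)
    resolution_level: float     # width/depth of activation spreading

def describe(m: Modulators, pleasure: float) -> str:
    """Read an emotion-like label off a modulator configuration (toy rules)."""
    if m.arousal > 0.7 and pleasure < 0 and m.resolution_level < 0.4:
        return "anxiety-like: high arousal, negative valence, shallow processing"
    if m.arousal < 0.3 and pleasure >= 0 and m.selection_threshold > 0.6:
        return "calm concentration: low arousal, stable behaviour selection"
    return "mixed/neutral configuration"

print(describe(Modulators(0.9, 0.2, 0.8, 0.3), pleasure=-0.4))
print(describe(Modulators(0.2, 0.8, 0.3, 0.7), pleasure=0.1))
```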

PSI theory describes motives as combinations of an urge and a goal, with the goal represented as a situation that affords the satisfaction of the corresponding urge. Multiple goals may be co-active, but only one is chosen to determine behaviour, with the choice of the dominant motive depending on the strength of the urge signal and the anticipated probability of satisfying the associated urge.
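
A minimal sketch of this selection rule follows; the motive names, numbers, and the specific combination function (a simple product) are assumptions for illustration.

```python
# Illustrative motive selection: the dominant motive has the strongest product
# of urge strength and estimated probability of satisfying it
# (invented motives and numbers; PSI's exact combination rule may differ).

motives = [
    {"name": "find_food",          "urge": 0.8, "success_probability": 0.3},
    {"name": "reduce_uncertainty", "urge": 0.4, "success_probability": 0.9},
    {"name": "seek_affiliation",   "urge": 0.5, "success_probability": 0.5},
]

def select_dominant(motives):
    """Choose the motive with the highest expected value of acting on it."""
    return max(motives, key=lambda m: m["urge"] * m["success_probability"])

print(select_dominant(motives)["name"])  # 'reduce_uncertainty' (0.4 * 0.9 = 0.36)
```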

The PSI agent is capable of many different types of learning. For example, perceptual learning comprises the assimilation of existing schemas and the accommodation of new ones through hypothesis-based perception. Procedural learning depends on reinforcing the associations of actions and their preconditions with appetitive or aversive goals. Tacit knowledge (e.g., sensory-motor capabilities) may be acquired by neural learning. Abstractions are learned by evaluating and reorganizing episodic and declarative descriptions. Behaviour sequences and object/situation representations are strengthened by use, and unused associations decay and are lost once their strength falls below a certain threshold.
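
A sketch of use-based strengthening and threshold-based forgetting is given below; the learning rate, decay rate, threshold, and association names are invented for the example.

```python
# Illustrative sketch of reinforcement and decay of associations
# (learning rate, decay rate, and forgetting threshold are invented values).

associations = {"lever->food": 0.30, "light->nothing": 0.20}

LEARNING_RATE = 0.2
DECAY_RATE = 0.05
FORGET_THRESHOLD = 0.15

def reinforce(key, reward):
    """Strengthen an association in proportion to a pleasure/reward signal."""
    associations[key] = min(1.0, associations[key] + LEARNING_RATE * reward)

def decay_all():
    """Apply time-based decay to every association; forget those below threshold."""
    for key in list(associations):
        associations[key] -= DECAY_RATE
        if associations[key] < FORGET_THRESHOLD:
            del associations[key]

reinforce("lever->food", reward=1.0)  # used and rewarded: strengthened
decay_all()                           # both associations weaken; the weak one nears the threshold
decay_all()                           # the unreinforced association drops below it and is forgotten
print(associations)                   # only the reinforced association remains (strength ~0.4)
```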

The problem solving of the PSI agent is directed toward finding a path between a given situation and a goal situation. If no immediate response to a problem is available, the system falls back on a behavioural routine; if this is not successful, it attempts to construct a plan; and if planning fails, the system resorts to exploration (or switches to another motive). Problem solving is context-dependent, with contextual priming served by the associative pre-activation of mental content.
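
The fallback cascade can be sketched as follows; the strategies here are stand-in stubs for illustration, not MicroPSI's actual modules.

```python
# Illustrative sketch of PSI-style problem solving as a fallback cascade:
# try an automatic response, then a stored routine, then a constructed plan,
# and finally fall back on exploration (all strategies are toy stubs).

def solve(problem, strategies):
    """Return the first strategy result that succeeds, in order of cost."""
    for name, strategy in strategies:
        result = strategy(problem)
        if result is not None:
            return f"solved '{problem}' via {name}: {result}"
    return f"'{problem}' unsolved: explore, or switch to another motive"

strategies = [
    ("automatic response",  lambda p: "grasp" if p == "fruit in reach" else None),
    ("behavioural routine", lambda p: "climb tree" if p == "fruit on tree" else None),
    ("constructed plan",    lambda p: "fetch stick, knock fruit down" if "fruit" in p else None),
]

print(solve("fruit in reach", strategies))
print(solve("fruit behind fence", strategies))
print(solve("locked door", strategies))   # nothing applies -> exploration
```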

PSI theory characterizes language as syntactically organized symbols that designate conceptual representations, and language extends cognition by affording the categorical organization of concepts and by aiding in meta-cognition. Consciousness in PSI theory is related to the abstraction of a concept of self over experiences and the integration of that concept with sensory experience.

After describing Dörner’s implementation of PSI theory in the Island simulation and the process of translating Dörner’s theory into a system of representation suitable for implementation in the MicroPSI framework, Bach provides an account of the MicroPSI architecture and framework in Chapters 8 and 9. Some parts of this account are quite technical, but the inclusion of these technical details (including formulas for computation of data outputs) will be welcomed by those involved directly in AI design work. The book as a whole also includes many useful figures that help the reader to visualize the structure and functioning of the PSI agent.

Given its broad scope in describing the interconnected and dynamic workings of perception, motivation, emotion, cognition, and action in a changing environment, PSI theory and the MicroPSI architecture and framework are difficult to summarize, but Bach does provide a very useful summary of the main assumptions of PSI theory in Chapter 10. Overall, Bach inspires the reader to embrace the possibilities of AI, and his account of PSI theory and the MicroPSI architecture and framework provides us with an exciting and fruitful new perspective on cognitive science and the philosophy of mind.

References

Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.

Anderson, J. R. (1990). The adaptive character of thought. Hillsdale, NJ: Erlbaum.

Lebiere, C., & Anderson, J. R. (1993). A connectionist implementation of the ACT-R production system. Paper presented at the Fifteenth Annual Conference of the Cognitive Science Society.

Dörner, D. (1999). Bauplan für eine Seele [Blueprint for a soul]. Reinbek: Rowohlt.

Newell, A. (1992). Unified theories of cognition and the role of Soar. In J. A. Michon & A. Akyurek (Eds.), A tribute to Allen Newell (pp. 25-79). Dordrecht: Kluwer Academic Publishers.
