


MEANING, THEORIES AND ASSESSMENT OF. The American psychologists Charles Egerton Osgood (1916-1991) and George John Suci (1925- ), and the Canadian-born American psychologist Percy Hyman Tannenbaum (1927- ), developed a popular paper-and-pencil measurement device called the semantic differential technique that attempts to assess quantitatively the affective/connotative meaning ("signification") of words, as well as measuring attitudes towards other objects, entities, and concepts. Thus, in this approach, the theory of meaning is coterminous with its measurement (cf., Evans, 1975). The semantic differential consists of several seven-point bipolar rating scales (e.g., good-bad; active-passive; strong-weak) on which the individual rates the word, concept, or item under study. The technique led to the conclusion (via factor analysis) that there are three basic dimensions, theoretically, of affective/connotative meaning: evaluation, activity, and potency [cf., the English-born American structuralist Edward B. Titchener's (1867-1927) context theory of meaning which holds that meaning depends on the mental images associated with a specific collection or body of sensations, as in the concept of "fire;" the motor theory of meaning proposed by the American behaviorist John Broadus Watson (1878-1958) which holds that meaning consists of covert movements and motor sets or motor readiness, that is, of the tendencies toward action that are aroused partially by an object; for example, the "meaning" of the red object on the table is its naming in internal speech as "apple," plus the motor readiness to overtly pick up the object and eat it; the conceptual dependency theory - introduced by the American linguist/cognitive scientist Roger C. 
Schank (1946- ) in the area of knowledge representation - refers to the way in which meaning is represented, whereby propositions are reduced to a small number of semantic primitives, such as agents, actions, and objects, and which are interpreted according to knowledge stored as "scripts;" logotherapy theory - a type of psychotherapy developed by the Austrian psychiatrist Viktor E. Frankl (1905-1997) which focuses on the patient's "will to meaning" (rather than on a "will to power" or a "will to pleasure") and seeks to restore in the individual a sense of meaning via creative activities/experiences of art, culture, and nature, and encourages the person's self-acceptance and his/her meaningful place in the world; among the techniques here is "paradoxical intention" (or "negative practice") especially useful for treating obsessive-compulsive disorders in which the person deliberately rehearses a particular habit, behavior, or undesirable pattern of thought, with the goal of developing a less fearful attitude towards it, controlling it, and/or extinguishing it; and psycholexicology - a rarely-used term that refers to the psychological study of words and their meanings; the term purportedly was coined by G. A. Miller and P. N. Johnson-Laird in the 1970s, and remains closely related to the notion of "procedural semantics" which emphasizes the importance of perceptual and other computational operations that language users supposedly employ in determining the applicability of words]. See also ATTITUDE/ATTITUDE CHANGE, THEORIES OF; EXISTENTIAL ANALYSIS THEORY; EXISTENTIAL/PHENOMENOLOGICAL THEORIES OF ABNORMALITY; LANGUAGE ORIGINS, THEORIES OF. REFERENCES

Osgood, C. E. (1952). The nature and measurement of meaning. Psychological Bulletin, 49, 197-237.

Osgood, C. E., Suci, G. J., & Tannenbaum, P. H. (1957). The measurement of meaning. Urbana, IL: University of Illinois Press.

Frankl, V. E. (1962/1980). Man's search for meaning: An introduction to logotherapy. New York: Simon & Schuster.

Evans, R. B. (1975). The origins of Titchener's doctrine of meaning. Journal of the History of the Behavioral Sciences, 21, 334-341.

Miller, G. A., & Johnson-Laird, P. N. (1976). Language and perception. Cambridge, MA: Harvard University Press.

Schank, R. C., & Abelson, R. P. (1977). Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. Hillsdale, NJ: Erlbaum.

Narens, L. (2002). Theories of meaningfulness. Mahwah, NJ: Erlbaum.
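The scoring of a semantic differential can be sketched in a few lines: each bipolar scale receives a 1-7 rating and the ratings are averaged within the three dimensions. The assignment of scales to dimensions below is a hypothetical illustration, not Osgood, Suci, and Tannenbaum's empirical factor loadings, and the ratings are made-up data.

```python
# Illustrative sketch of semantic differential scoring. The mapping of
# bipolar scales to the three dimensions (evaluation, activity, potency)
# is hypothetical, chosen only to show the mechanics of the technique.
SCALE_DIMENSION = {
    "good-bad": "evaluation",
    "pleasant-unpleasant": "evaluation",
    "strong-weak": "potency",
    "hard-soft": "potency",
    "active-passive": "activity",
    "fast-slow": "activity",
}

def profile(ratings):
    """Average the 1-7 ratings within each dimension."""
    sums, counts = {}, {}
    for scale, rating in ratings.items():
        dim = SCALE_DIMENSION[scale]
        sums[dim] = sums.get(dim, 0) + rating
        counts[dim] = counts.get(dim, 0) + 1
    return {dim: sums[dim] / counts[dim] for dim in sums}

# One respondent's (made-up) ratings of the concept "fire":
ratings = {"good-bad": 3, "pleasant-unpleasant": 4,
           "strong-weak": 7, "hard-soft": 6,
           "active-passive": 7, "fast-slow": 6}
print(profile(ratings))
# {'evaluation': 3.5, 'potency': 6.5, 'activity': 6.5}
```

In actual use, the dimension memberships (and weights) come from factor analysis of many respondents' ratings rather than being fixed in advance.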


MEASUREMENT THEORY. The notion of measurement refers to the systematic assignment of numbers to represent quantitative aspects/attributes of events or objects. The American experimental psychologist Stanley Smith Stevens (1906-1973) proposed the following levels of measurement, each involving the assignment of numbers "according to rules": nominal scale - a discrete (rather than continuous) form of data classification where elements/items are not quantified but are merely assigned to different, often numbered, named/labeled categories (e.g., assigning individuals to "hair-color" categories based on the color of their hair); ordinal scale - data are arranged in order/ranking of magnitude but the scale possesses no standard measurement of degrees of difference between the elements/items (e.g., the rank ordering of women in a "beauty contest" on the basis of perceived attractiveness; or the medals awarded to athletes in the Olympic Games); interval scale - differences among elements/items/scores can be quantified more or less in "absolute" terms, but the zero point on the scale is fixed arbitrarily; in this scale, the "equal differences" between scores correspond to equal differences in the attribute/characteristic being measured, but there is no score corresponding to the total absence of the attribute (e.g., calendar dates where each day is 24 hours long, but there is no zero point/score representing an absence of time/days); ratio scale - differences among values of elements/items/scores can be quantified in "absolute" terms where a "fixed zero point" is specified or defined; in this scale, equal differences between scores represent equal differences in the measured attribute, and a zero score represents the complete absence of the attribute; when measurement is on a ratio scale, it is meaningful to describe a score in terms of ratios (e.g., "she is twice as old as he is," or there is a ratio of 2:1 in their ages; one's age as measured in years is a ratio 
scale measure where birth represents the "fixed zero point"). Some psychologists prefer to avoid Stevens' theoretical approach to measurement and scales for the following reasons: it overlooks a crucial defining feature of measurement, that is, its connection with quantity or magnitude; it involves "rule-governed" assignments of numbers that do not truly represent quantities or magnitudes (e.g., the assignment of telephone numbers to individuals); "naming" is merely describing and not quantifying (as in the nominal scale); there is no real/true "absolute zero" point in measurement (as implied in the ratio scale); and some psychological researchers are easily led to conclude erroneously that there is a direct relationship between level of measurement scale used and type of statistical test to be employed (cf., Gaito, 1980). Among other speculations, effects, and issues related to measurement theory and psychological measurement ("psychometrics") are: generalization theory - the use of statistical analytical techniques to estimate the extent to which the scores derived from a particular test/data collection situation are applicable beyond the specific conditions under which those data were obtained; also called external or ecological validity [cf., internal validity - the extent to which a dependent variable/measure is determined by the independent variable(s) in an experiment]; reliability theory - study of the internal consistency and stability with which a measuring device performs its intended function in an accurate fashion (e.g., getting the same results from a group of participants who take the same test, or equivalent forms of the same test, on two separate occasions under virtually the same testing conditions); scale attenuation effects - refers to a reduction in the range of scale values utilized by participants in a study and may originate from difficulties in interpreting results when participants' responses on the dependent variable are either nearly 
perfect (as in the ceiling effect) or nearly absent (as in the floor effect); basement/floor effect - refers to the inability of measuring instruments or statistical procedures to determine differences at the bottom of data when the difference between scores/data is small; ceiling effect - refers to the inability of measuring instruments or statistical procedures to determine differences at the top of data when the difference between scores/data is large, or the inability of a test to measure or discriminate above a certain point, usually because the items are too easy for some people; and the testing effect - refers to the influence that taking a test actually has on the variables/traits which the test was designed to assess, and is a major source of error in psychological testing that is likely to occur, especially, where the use of pre-tests may alter the phenomenon that is measured/tested subsequently. See also CLASSICAL TEST/MEASUREMENT THEORY; CONJOINT MEASUREMENT THEORY. REFERENCES

Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103, 677-680.

Suppes, P., & Zinnes, J. L. (1963). Basic measurement theory. In R. D. Luce, R. R. Bush, & E. Galanter (Eds.), Handbook of mathematical psychology. Vol. 1. New York: Wiley.

Gaito, J. (1980). Measurement scales and statistics: Resurgence of an old misconception. Psychological Bulletin, 87, 564-567.
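The point of Stevens' hierarchy is that each level permits the statistical operations of the levels below it plus its own. This can be sketched as a simple lookup; the operation names here are illustrative shorthand, not Stevens' own terminology.

```python
# Sketch of which descriptive operations are interpretable at each of
# Stevens' four levels of measurement. Each level inherits the operations
# permitted at the levels below it; the operation labels are shorthand.
PERMITTED = {
    "nominal":  {"count/mode"},
    "ordinal":  {"count/mode", "rank/median"},
    "interval": {"count/mode", "rank/median", "difference/mean"},
    "ratio":    {"count/mode", "rank/median", "difference/mean", "ratio"},
}

def meaningful(operation, level):
    """True if the statistic is interpretable at the given scale level."""
    return operation in PERMITTED[level]

# Ratios of calendar years (interval scale, arbitrary zero) are not
# meaningful, but ratios of ages (ratio scale, zero = birth) are:
print(meaningful("ratio", "interval"))  # False
print(meaningful("ratio", "ratio"))     # True
```

This also illustrates Gaito's (1980) caution in reverse: the lookup governs the *interpretation* of a statistic, not which statistical test may mechanically be computed on the numbers.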

MECHANISTIC THEORY. In the history of psychology and philosophy, the doctrine of mechanism is the notion that all animals, including humans, may be viewed as machines, with the added fiat that although living organisms may be complex, nevertheless they essentially are machines requiring no special, additional, or surplus principles to account for their behavior. Traditionally, the controversial mechanistic theory has been contrasted with the theory of vitalism (which holds that a "vital force," not explicable by chemical, mechanical, or physical principles, is the basic cause of life) and the theory of organicism (a version of "holism" that emphasizes the notion that the parts of living organisms are only what they are due to their contributions to the whole being). The theory of vitalism had its origins in the field of chemistry, in particular in the classification of compounds in 1675 by the French chemist Nicolas Lemery (1645-1715), and by the French chemist Antoine Laurent Lavoisier (1743-1794). In 1815, the Swedish chemist Johan J. Berzelius (1779-1848) proposed a distinction between "organic" and "inorganic" compounds which are governed by different laws; for example, organic compounds are produced under the influence of a "vital force," and are incapable of being prepared artificially. In 1828, this distinction was eclipsed when the German chemist Friedrich Wöhler (1800-1882) synthesized the organic compound "urea" from an inorganic substance. In the field of philosophy, in one case [according to the German philosopher Hans Driesch (1867-1941)], the life-force principle may take the form of "entelechies" (i.e., actualities or realizations) within living things thought to be responsible for their growth and development. 
In another case [according to the French philosopher Henri Bergson (1859-1941)], the general "life force" takes on the features of an "élan vital" (life/vital force or spurt) which rejects the type of vitalism that postulates individual "entelechies." Mechanistic theory is associated, often, with both the doctrine of determinism (which posits that all events, physical or mental, including all forms of behavior, are the result of prior causal factors) and the doctrine of materialism (hypotheses asserting that physical matter is the only ultimate reality), but must be distinguished from such allied doctrines. For example, the "mechanist" (one who denies the existence of anything such as a "soul" or "mind" in living beings) is always a "materialist," but a "materialist" is not always a "mechanist;" also, a "vitalist" may promote "materialism," but discovers in organic tissue a special type of matter whose functions may not be explained in "mechanical" terms. A "mechanist" is a "determinist," because machines are defined often as "determined entities;" however, a "determinist" may not be a "mechanist" [e.g., the Dutch philosopher Benedictus/Baruch Spinoza (1632-1677) was a "pantheist" (the belief that God is the transcendent reality of which the material universe and man are only manifestations; it involves a denial of God's personality and expresses a tendency to identify God with nature), but he subscribed, also, to a vigorous "determinism" and a denial of free will]. The origins of the psychological doctrine of mechanism lie in the mechanistic viewpoint of the world as triggered by the scientific revolution of the 17th century [e.g., the English physicist/mathematician Sir Isaac Newton (1642-1727) proposed in 1687 that the universe is a "celestial clockwork" that adheres to precise and mathematically-stated natural laws]. It was an easy, and inevitable, step to go from the "celestial clockwork" of physics to the "behavioral clockwork" of psychology. 
However, along the way, the philosophers once again made contributions to mechanistic theory. For instance, the French philosopher Rene Descartes (1596-1650) advanced a rigorous mechanical conception of nature, and proposed that animals are mere machines whose behavior is determined by the mechanical functioning of their nervous systems. For Descartes, people are considered likewise to be machines, but they also possess "free souls" (that can "think") separate from bodily deterministic mechanisms. However, functions/capabilities such as memory, perception, and imagination were viewed by Descartes as physiological phenomena that are discernible or accountable by mechanical laws. The French philosopher Julien Offray de LaMettrie (1709-1751) asserted that "man is a machine," and although he denied the existence of a "soul," he was not a "mechanist" in all respects, inasmuch as he espoused vitalism by distinguishing between inorganic and organic matter. In the 19th and 20th centuries, various scientific and theoretical obstacles to mechanistic theory were overcome slowly; for example, the development of the sensory-motor conception of nervous function overshadowed investigators' search for the "soul" in the human body, and the development of the theory of evolution - along with the discovery of the DNA molecule -

helped to explain how vital life processes may be accounted for by mechanical reproduction, transmission, and communication systems. It may be observed that mechanistic theory, today, still disturbs those individuals (and religious groups) who believe that "mechanism," by embracing determinism, works to undermine belief in "free will" and "moral responsibility." The debate continues. See also BEHAVIORIST THEORY; DESCARTES' THEORY; DETERMINISM, DOCTRINE/THEORY OF; EVOLUTIONARY THEORY; EXISTENTIAL ANALYSIS THEORY; HOBBES' PSYCHOLOGICAL THEORY; HOLISTIC THEORY; LEARNING THEORIES/LAWS; LOEB'S TROPIS-TIC THEORY; MIND-BODY THEORIES. REFERENCES

Newton, I. (1687). Philosophiae naturalis principia mathematica. London: Pepys.

Driesch, H. (1905/1914). The history and theory of vitalism. Leipzig: Engelmann.

Bergson, H. (1911). Creative evolution. New York: Holt.

Young, D. (1970). Mind, brain, and adaptation in the nineteenth century. Oxford, UK: Clarendon Press.


