Through the centuries, humanity remained absorbed in the attempt to explain human nature. The philosophers speculated. Literary giants wrote of human passions, struggles, triumphs, and tragedies. But the facts were not available; only personal opinion and guesswork. It was impossible to know for sure how we see and hear until modern science learned about light and sound waves and the way they affect nerve endings within the body. Human moods and emotions could not be analyzed until science identified the substances secreted by the human glands and the complex way the glands interact with the brain. The process of heredity could not be understood until biologists discovered the chromosomes, genes, and the chemical key to life called DNA. The influence of environment was unclear until psychologists established the facts about learning and about development from infant to adult (Sternberg, 2006).
Even today, we do not know the full story, and perhaps we never will, for human behavior is so complex that it may forever defy complete understanding. But psychologists, aided by the progress of other scientists, have found some of the answers, and they are making new discoveries all the time. The psychological experiment, and psychology itself, has come a long way since the science began. At the start, the idea of taking a scientific approach to the study of behavior required a radical shift in human thinking and the invention of brand-new techniques of study. The early psychologists lacked the tools necessary for sophisticated exploration (Buckley, 2001). Nevertheless, they provided humanity with five precursors to the development of cognitive psychology, namely, functionalism, pragmatism, behaviorism, structuralism, and associationism.
Factual: Five Antecedents to Cognitive Psychology
According to the theory of functionalism, mental events depend primarily on the networks, pathways, and interconnections of mental processes, not on the material stuff of which they are composed. Functionalists do not deny that human mental processes are a function of human brain activity. They simply throw open the criteria of mental activity to include computers, robots, or other human-made devices that exhibit the relevant processes (Levin, 2004).
The field of artificial intelligence attempts to realize the functionalist theory by duplicating human cognitive mental states in computing machinery. For some time, scientists have tried to replicate human thought processes in some kind of mechanical form. In earlier states of technology, however, the robot was little more than a wind-up toy, and it exhibited none of the internal processes that functionalists would associate with thinking. Computer technology of recent decades has provided the first viable opportunity for at least attempting to replicate human mental processes in machine form. Rather than attempting to replicate all such processes, including emotions, willful activity, and artistic sensibility, advocates of artificial intelligence focus only on the thinking process: analyzing sensory data and making judgments about it. From a functionalist perspective, the strong claim about artificial intelligence is philosophically controversial, since it holds that a computer can have human-like thoughts (Witherington and Crichton, 2007).
“Pragmatism,” William James writes, “is a method only.” Still, as a method, pragmatism assumed that human life has a purpose and that rival theories about human nature and the world need to be tested against that purpose (Goodman, 2006). According to James, there is in fact no single definition of human purpose. Instead, our understanding of human purpose is part of the activity of thinking (Zachar, 2002). Philosophical thinking arises when we want to understand things and the setting in which they exist; purpose derives its meaning from a sense of being at home in the universe (Gross, 2002).
James rejected rationalism chiefly because it was dogmatic and presumed to give conclusive answers about the world in terms that frequently left the issues of life untouched. By contrast, pragmatism “has no dogmas and no doctrines save its method” (Leigland, 2004). As a method, pragmatism takes its cue from the newly discovered facts of life. We should not accept as final any formulations in science, theology, or philosophy, but instead see them as only approximations (“Pragmatism”).
In 1913, another American, John Watson, revolutionized psychology by breaking completely with the school of introspection and founding the movement known as behaviorism. Watson declared that mental life is something that cannot be seen or measured and thus cannot be studied scientifically. Rather, he concluded that psychologists should concentrate on actions that are plainly visible. In other words, he wanted the science to study what people do, not what they think (Graham, 2000).
He considered all human behavior to be a series of actions in which a stimulus, that is, an event in the environment, produces a response, that is, an observable muscular movement or some physiological reaction, such as increased heart rate or glandular secretion, that can also be observed and measured with the proper instruments (Harzem, 2004).
All our sense organs, though they vary greatly in the sensations they produce, operate on the same basic principles. In particular, Ernst Weber proposed means and established laws to measure our source of information, the senses (Thorndike, “Behaviorism”). When receptor cells are activated by a stimulus, they set off bursts of nervous impulses. After being routed through various switching points, the receptors’ messages reach the sensory areas of the cerebral cortex, where they are translated into our conscious sensations of vision, hearing, and the rest. Our brain never actually encounters light or sound. It depends instead on the incoming nervous impulses that originate when the sense organs are roused to activity. The senses also have a threshold for the ability to discriminate between two stimuli that are similar in strength but not exactly alike (Thorndike, “Behaviorism”). The rule that the difference threshold is a fixed percentage of the original stimulus is called Weber’s Law, in honor of the physiologist who discovered it more than a century ago. In practical terms, it means this: the more intense the sensory stimulation to which the human organism is being subjected, the greater the increase in intensity required to produce a recognizable difference (Ludvig, 2003).
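Weber’s Law lends itself to a brief numeric sketch. The snippet below is a minimal illustration, not a calculation drawn from the sources cited; the Weber fraction `k = 0.02` is a hypothetical value chosen only to make the proportional relationship visible.

```python
# Weber's Law: the difference threshold (just-noticeable difference) is a
# fixed fraction k of the original stimulus intensity. The value k = 0.02
# is hypothetical, used here only to illustrate the proportionality.

def difference_threshold(intensity, k=0.02):
    """Smallest change in stimulus intensity a person can reliably detect."""
    return k * intensity

# The more intense the stimulus, the larger the change required before a
# difference becomes noticeable, in direct proportion:
for intensity in (100, 500, 1000):
    print(intensity, difference_threshold(intensity))
```

This captures the practical claim of the paragraph above: a stimulus ten times as intense requires a change ten times as large before the difference is recognizable.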
The year the science was founded is usually put at 1879, when Wilhelm Wundt opened the first psychological laboratory at Germany’s University of Leipzig (Boeree, 2000). Wundt had studied to be a physician, then instead of practicing medicine, taught as a professor of physiology. But he soon lost interest, because he was much more concerned with human consciousness than with the workings of the body. His experiments, in retrospect, seem rather trivial. For example, he and his students spent hours in the laboratory listening to the click of a metronome, sometimes set fast and sometimes set slow, sounding only a few times or many, and analyzed their conscious reactions. They decided that a rapid series of clicks produced excitement and a slow series made them relaxed, and that they had slight feelings of tension before each click and of relief afterward (Boeree, 2000).
Despite this modest beginning, the new science of psychology found an immediate and enthusiastic response. A few years later, similar acclaim came to Sir Francis Galton, one of the first British psychologists. Galton, who was interested in individual differences, invented numerous devices to test such traits as hearing, sense of smell, color vision, and ability to judge weights. Even in those early years, when psychology was just taking a few tentative steps into the vast realm of human behavior, it captured the public’s attention (“Structuralism”).
Like Wundt, most of the pioneers concentrated on an attempt to discover the nature, origins, and significance of conscious experiences. This approach came to be known as structuralism. Their chief method of investigation was introspection, or looking inward. They tried to analyze the processes that went on inside their minds, asked their subjects to do the same, and recorded their findings, as objectively as possible, for comparison with other observers (“Structuralism”).
The most prominent of the early American psychologists was William James, who came to the science from an unusual background (Boeree, 2000). Like Wundt, he studied medicine but never practiced. Indeed, he had a difficult time finding his true vocation. At one time he wanted to be an artist, then a chemist, and once he joined a zoological expedition to Brazil. In his late twenties, he suffered a mental breakdown and went through a prolonged depression in which he seriously thought of committing suicide. But he recovered, largely, he believed, through what he called an achievement of the will, and went on to become a Harvard professor and prolific writer on psychology and philosophy. James had no doubt about the mission of the new science: his definition of psychology is the study of mental life. The distinguishing feature of mental life, he felt, was that human beings constantly seek certain end results and must constantly choose among various methods of achieving them. James was interested in the broad pattern of human strivings, the cradle-to-grave progress of human beings as thinking organisms who adopt certain goals and ambitions, including spiritual ones, and struggle in various ways to attain the goals or become reconciled to failure (Boeree, 2000).
It is not by mere chance that our ideas are related to each other. There must be, David Hume says, “some bond of union, some associating quality, by which one idea naturally introduces another.” Psychologists call this the theory of associationism. Hume identified this bond not by positing a special faculty of the mind that associates one idea with another, but by observing the actual patterns of our thinking and analyzing the groupings of our ideas (Holtorf, “Associationism”).
Whenever there are certain qualities in ideas, these ideas are associated with each other. These qualities are three in number: resemblance, contiguity in time or place, and cause and effect. Associationism holds that the connections of all ideas to each other can be explained by these qualities, and Hume gave the following examples of how they work: “A picture naturally leads our thoughts to the original (resemblance): the mention of one apartment in the building naturally introduces an enquiry…concerning the others (contiguity): and if we think of a wound, we can scarcely forebear reflecting on the pain which follows it (cause and effect)” (Holtorf, “Associationism”).
There are no operations of the mind that differ in principle from one of these examples of the association of ideas. But of these, the notion of cause and effect was considered by cognitive psychologists to be the central element in knowledge. They took the position that the causal principle is the foundation upon which the validity of all knowledge depends. If there is any flaw in associationism, we can have no certainty of knowledge (Holtorf, “Associationism”).
Analytical: Plato vs. Aristotle
Plato and Aristotle were both commanding thinkers and leaders of their respective eras and fields. The two philosophers show similarities in their writing styles as well as in the subjects they chose to address.
Plato’s comprehensive treatment of knowledge was so powerful that his philosophy became one of the most influential strands in the history of Western thought. Unlike his predecessors, who focused upon single main problems, Plato brought together all the major concerns of human thought into a coherent organization of knowledge (Cooper, 1997). Aristotle, on the other hand, invented formal logic. He also invented the idea of the separate sciences. For him, there was a close connection between logic and science, inasmuch as he considered logic to be the instrument or organon with which to formulate language properly when analyzing what a science involves. Psychology as we know it today is somewhat founded on this harmonious relationship of logic and science (Large, “Aristotle”).
The foundation of Plato’s philosophy is his account of knowledge. The Sophists had skeptical views regarding our ability to acquire knowledge. Human knowledge, they believed, was grounded in social customs and the perceptions of individual people. Such “knowledge” fluctuated from one culture or person to another. Plato, though, staunchly rejected this view. He was convinced that there are unchanging and universal truths, which human reason is capable of grasping. In his dialog, The Republic, he picturesquely makes his case with the Allegory of the Cave and the metaphor of the Divided Line (Cooper, 1997).
In the Allegory of the Cave, Plato rejected the skepticism of the Sophists by arguing that there are two worlds: the dark world of the cave and the bright world of light. For Plato, knowledge was not only possible, but it was virtually infallible. What makes knowledge infallible is that it is based upon what is most real. The dramatic contrast between the shadows, reflections, and the actual objects parallels the different degrees to which human beings could be enlightened. Plato was convinced that we could discover the real objects behind all the multitude of shadows, and thereby attain true knowledge (Cooper, 1997).
In his metaphor of the Divided Line, Plato provides more detail about the levels of knowledge that we can obtain. Plato concludes his discussion of the Divided Line with the summary statement, “Now you may take, as corresponding to the four sections, these four states of mind: intelligence for the highest, thinking for the second, belief for the third, and for the last, imagining. These you may arrange as the terms in a proportion, assigning to each a degree of clearness and certainty, corresponding to the measure in which their objects possess truth and reality” (Cooper, 1997). The highest degree of reality, he argued, consists of the Forms, as compared with shadows, reflections, and even the visible objects.
Unlike Plato, who thought that to know the good was sufficient to do the good, Aristotle saw that there must be deliberate choice in addition to knowledge. Thus, Aristotle said that the “origin of moral action, that is its efficient, not its final cause, is choice, and the origin of choice is desire and reasoning with a view to an end” (Barnes, 1984). With the end in mind, Aristotle was inclined to arrive at conclusions through a deductive approach. Aristotle’s classic example of a deductive argument is this: (1) All men are mortal; (2) Socrates is a man; (3) Therefore, Socrates is mortal (Smith, 2004). The limitation of this approach is that the conclusions we draw only perpetuate whatever errors are already contained in the premises. Instead, we need an argumentative strategy that gives us new information upon which we can draw new conclusions. Induction does just this.
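Aristotle’s deductive schema can be sketched in modern terms as set membership. The snippet below is an illustrative sketch only (the names and sets are invented for the purpose), not anything drawn from the sources cited.

```python
# Aristotle's syllogism rendered as set membership (illustrative names only).
# Major premise: all men are mortal  ->  men is a subset of mortals.
# Minor premise: Socrates is a man   ->  "Socrates" is in men.
# Conclusion:    Socrates is mortal  ->  "Socrates" is in mortals.

men = {"Socrates", "Plato"}
mortals = men | {"Bucephalus"}  # every man is mortal; some mortals are not men

def deduce_mortal(individual, men, mortals):
    """Check both premises, then return the conclusion they guarantee."""
    assert men <= mortals, "major premise 'all men are mortal' must hold"
    assert individual in men, "minor premise 'individual is a man' must hold"
    return individual in mortals  # guaranteed True by the two premises

print(deduce_mortal("Socrates", men, mortals))
```

Note that the conclusion contains no information beyond what the premises already assert, which is exactly the limitation raised above and the gap induction is meant to fill.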
Plato was particularly concerned with the cognitive aspect of art, feeling that it had the effect of distorting knowledge because it was removed several steps from reality (Eskritt, et al., 2001). Aristotle, on the other hand, believing that the universal Forms exist only in particular things, felt that artists are dealing directly with the universal when they study things and translate them into art forms (Large, “Aristotle”).
Plato argued that Forms, such as Human or Table, had a separate existence. Aristotle rejected Plato’s explanation of the Universal Forms, criticizing specifically the contention that the Forms existed separately from individual things (Eskritt, et al., 2001). Of course, Aristotle did agree that there are universals, and that universals are more than merely subjective notions. Indeed, Aristotle recognized that without the theory of universals, there could be no scientific knowledge, for then there would be no way of saying something about all members of a particular class (Cooper, 1997). Aristotle was not convinced that Plato’s theory of the Forms could help us know things any better: “they help in no wise toward the knowledge of other things” (Cooper, 1997).
Aristotle believed that biology and psychology were intertwined, much more so than we would perceive them to be in modern times, and he took up the two areas of interest as a single science. The end of psychology was to determine the characteristics and real meaning of the soul or psyche. Aristotle labored to produce a cohesive definition of the soul and concluded that no single one existed (Large, “Aristotle”). For Aristotle, the soul is the definitive form of the body. Without the body, the soul could neither be nor exercise its functions. This is in sharp contrast to Plato’s explanation of the body as the prison house of the soul. In this way, Plato could describe knowledge or learning as the process of recollection of what the soul knew in its previous state. Aristotle, on the other hand, tied soul and body so closely together that with the death of the body, the soul, its organizing principle, also dies (Barnes, 1984).
Though Aristotle may have seen biology and psychology as a unified science, the lasting worth of his work in each reveals an enormous disparity between them. Aristotle’s version of psychology is rooted in conjecture that has since been abandoned as improved comprehension and technology came to light, while his work in biology was founded on competent observations, deduced with keen insight, that survived centuries of criticism (Large, “Aristotle”).
Creative: Descartes vs. Locke
By the time of such ancient Greeks as Plato and Aristotle, the science of mathematics was flourishing, physicians had learned a great deal about the human body, and philosophers took a more sophisticated view of human experience (Buckley, 2001). One puzzle that fascinated the Greeks was the human senses, our ability to see a person standing many yards away, totally unconnected in any apparent way with our own body, or to hear that person speak. One philosopher speculated that all objects must give off some kind of invisible substance that penetrates our eyes or ears, then travels to the brain. Another puzzle was human temperament. Why are some people so melancholy? Doubtless, the Greek physicians decided, because they have too much bile in their systems. Why are others so optimistic, happy, and warm-hearted? Doubtless because they have an especially rich flow of blood (Buckley, 2001).
Throughout the Middle Ages, intellectual and philosophical figures scrutinized behavior primarily from a spiritual rather than a scientific perspective. Then again, a number of philosophers of the 17th and 18th centuries provided sizeable inputs to the expansion of psychology (Sternberg, 2006). Rene Descartes is one of the most inquisitive minds in the history of psychology (Lagerspetz, 2002). Having found one piece of certain knowledge, that he exists as a thinking thing, Descartes starts to look around for more self-evident truths. He discovers that he has quite a few of them, prominent among these being the truths of mathematics and logic, and he is optimistic about his chances of developing a system of certain knowledge (Smith, 2007). Then he realizes a kink in his plan: these clear and distinct perceptions are only indubitable so long as he is attending to them. Descartes portrayed the body and mind as distinct elements that nonetheless heavily shape each other. He proposed that the transmission between body and mind happened in the pineal gland in the brain (“Rene Descartes”).
By around the year 1600, most leading thinkers of the Western world had decided that behavior was largely dictated by inborn characteristics, somehow present at the moment of birth. Babies are born with strong tendencies to be gloomy, optimistic, generous, greedy, ambitious, or lazy. Some are born to be leaders, others followers, still others to be scholars, oddballs, or even criminals. But a little later the philosopher John Locke popularized a different view, namely that a baby at birth is simply a tabula rasa, or blank tablet, on which anything at all can be written by experience and learning (Uzgalis, 2007). The two opposite views posed another puzzle: Are our lives governed by heredity or by environment?
The school of empiricism came upon the scene and was destined to alter the course and concerns of the then-emerging psychology in the form of modern philosophy (Ward, “Empiricism”). Whereas Francis Bacon aimed at the total reconstruction of all human knowledge, Locke, the founder of empiricism in Britain, aimed at the more modest objective of clearing the ground a little and removing some of the rubbish that lies in the way of knowledge. Locke hit upon a bold and original interpretation of how the mind works and, from this, described the kind and extent of knowledge we can expect from the human mind (Uzgalis, 2007). The scope of our knowledge, according to Locke, is limited to our experience. This was not a new insight, as both Bacon and Thomas Hobbes had urged before him that knowledge should be built upon observation, and to this extent they indeed could be called empiricists (Markie, 2004).
Additionally, Rene Descartes assumed that there was no problem that human reason could not solve if the correct method was employed. This was also the assumption Locke called into critical question, namely the belief that the human mind has capabilities that enable it to discover the true nature of the universe. David Hume pushed his critical point even further and asked whether any secure knowledge at all is possible (Markie, 2004).
In line with the philosophers’ concept of the inseparability of mind and body, I describe my own mind-body philosophy by looking at the knowledgeable mind, for instance, as the mind of integration and full, creative health regardless of the state of the body. It is full personhood or mature wholeness and creative adaptation, components of the highest truths and values brought into daily life. It is more than the difference between shadows and actual things described by the legendary Plato; to me, knowledge transcends shadowy, lucid, and bodily matters alike. By having a creative yet knowledgeable mind, the person becomes equipped with knowledge that is beyond the confines of the body or any learning environment. As we increase our ability to see our mind’s processes, we steadily gain control over physical events. Consequently, the original dimensions of the mind are preserved but grow proportionately in order to produce a kind of knowledge that is not skewed.
Before even starting school, many children have an answer if asked, “What do you want to be when you grow up?” One girl might dream of being a veterinarian because she loves her dog so much, while a boy might want to be a police officer because of what he has seen on television. Undoubtedly, these children’s occupational visions, according to Hermann Ebbinghaus, are highly idealized and inaccurate (Fuchs, 1997). However, it is clear that ideas about possible occupations begin early in life. As the child grows up, he or she learns more about what it means to work in a particular field. The vet-to-be, for instance, might read books about working with animals or visit the small animal clinic. When she goes to college and vet school, she learns even more about her chosen occupation. Thus, from childhood onward, she is being socialized into an occupational role (Halverson, 2004).
In line with this maturation process, Ebbinghaus described how children’s ways of thinking developed as they interacted with the world around them. Infants and young children understand the world much differently than adults do, and as they play and explore, their minds learn how to think in ways that better fit with reality. Adeptness with language facilitates the formation of concepts, which helps organize information into categories and facilitates the deep processing that creates long-lasting memory. Ebbinghaus regarded children as young logicians constructing their own particular social realities and theories of knowledge (Postman, 1968).
According to Ebbinghaus, children also develop psychologically and cognitively as their brains absorb more information and they learn how to use that information. Literally, children have to learn how to think on purpose and to process or organize all the information that comes to them from the environment. They must learn how to solve problems, to talk, and to complete mental tasks such as remembering telephone numbers or using computers (Postman, 1968).
Ebbinghaus employed children’s problem-solving tasks to aid in comprehending their mental growth and maturity (Wozniak, 1999). For example, when eight checkers are scattered before them, children may fail to keep count and report that there are probably more checkers than there are. If one trims the number down to five, they may well detect the correct number. By concentrating on the fact that they cannot keep track of eight items, one may be apt to forget that they can accomplish the task for smaller numbers. What may astonish anyone is that, upon being told that a magic bunny rearranged the checkers, the children are likely to recall bigger figures. Some people overlook the reality that children are imaginative. When telling them something of grave importance, it is therefore advisable to stay as close to the truth as possible so as to avoid magical speculations (Postman, 1968).
In any case, intellectual comprehensiveness, according to Ebbinghaus, instead of being innate or dependent on a neurologically programmed readiness, is an outcome of the child’s social interaction with the dynamics of maturation. The influence of this theoretical position has resulted in the contemporary emphasis on early childhood experience within the academic setting. In the classroom, Ebbinghaus demonstrated that, in addition to the textbook used as the principal reference for the class, a lecture should include a number of aids to help pupils understand various concepts successfully and enjoyably. Specifically, the lecture should not be carried out without visual aids, as some concepts may be confusing in the absence of visual illustration. Using examples, strategies, and integration of the concepts helps guarantee that key concepts or valuable ideas are not lost, and that they are not confused with other notions introduced by more dominant students in the class (Postman, 1968).
Unlike children at the elementary level, whenever young adults or college students learn something new, they are profiting from experience by changing their ability to adapt to their environment. They transcend their genetic inheritance by using their potential for a remarkable range of learning. Learning frees them from stereotyped automatic reactions by enabling them to develop adaptive, novel behavior sequences. They learn to predict what events tend to go together, as well as what the consequences of their actions will be. In fact, the nervous system is designed for learning, for being changed by virtually all that they experience (Halverson, 2004).
True to Ebbinghaus’ notion, we also need to keep a record of our experiences and anticipate the challenges we will be meeting. Memory is the living library of all references to our past. Much of it is available to help us deal with problems of the present or make future decisions, as long as we have access to the material that is stored there. When we do, we are able to go beyond what is given in our current experience within or beyond the classroom to become powerful information processors, that is, thinking, reasoning, judging, problem-solving, creating individuals. And when we add our unique ability for language, we can learn secondhand from the experiences of other students or fellowmen in general, and also communicate to others what’s in or on our minds (Wozniak, 1999).
To conclude, cognitive and affective psychology is indeed the manifestation of the countless aspects of human life that psychology has been trying to unearth and explore since its early beginnings. Psychology in general has even evolved into a profession in the industrial age. Especially in the United States, where psychotherapy has flourished more vigorously than anywhere else, it represents a large investment by society (Buckley, 2001). It is impossible to estimate how many millions of hours have gone into the training of psychotherapists, the practice of various treatment methods, and research into new and better methods.
Barnes, J. (1984). The Complete Works of Aristotle. Princeton: Princeton University Press.
Boeree, C. G. (2000). “Wilhelm Wundt and William James.” Retrieved November 8, 2007, from http://webspace.ship.edu/cgboer/wundtjames.html
Buckley, P. (2001). Ancient templates: The classical origins of psychoanalysis. American Journal of Psychotherapy, 55(4), 451-459. Retrieved October 15, 2007, from the ProQuest Database.
Cooper, J. M. (1997). Plato: Complete Works. Indianapolis: Hackett Publishing Company.
Eskritt, M., Lee, K., & Donald, M. (2001). The influence of symbolic literacy on memory: Testing Plato’s hypothesis. Canadian Journal of Experimental Psychology, 55(1), 39-50. Retrieved October 15, 2007, from the ProQuest Database.
Fuchs, A. H. (1997). Ebbinghaus’ contributions to psychology after 1885. The American Journal of Psychology, 110(4), 621-633. Retrieved October 27, 2007, from the EBSCOhost Database.
Goodman, R. (2006). William James. Stanford Encyclopedia of Philosophy. Retrieved November 8, 2007, from http://plato.stanford.edu/entries/james/
Graham, G. (2000). Behaviorism. Stanford Encyclopedia of Philosophy. Retrieved November 2, 2007, from http://plato.stanford.edu/entries/behaviorism/
Gross, N. (2002). Becoming a pragmatist philosopher: Status, self-concept, and intellectual choice. American Sociological Review, 67(1), 52-76. Retrieved October 23, 2007, from the ProQuest Database.
Halverson, R. (2004). Accessing, documenting, and communicating practical wisdom: The phronesis of school leadership practice. American Journal of Education, 111(1), 90-121. Retrieved October 23, 2007, from the ProQuest Database.
Harzem, P. (2004). Behaviorism for the new psychology: What was wrong with behaviorism and what is wrong with it now. Behavior and Philosophy, 32(1), 5-12. Retrieved October 23, 2007, from the ProQuest Database.
Holtorf, C. (n.d.). “Associationism.” Retrieved October 25, 2007, from https://tspace.library.utoronto.ca/citd/holtorf/3.7.html
Lagerspetz, O. (2002). Experience and consciousness in the shadow of Descartes. Philosophical Psychology, 15(1), 5-18. Retrieved October 10, 2007, from the EBSCOhost Database.
Large, W. (n.d.). Aristotle. Retrieved October 23, 2007, from http://www.arasite.org/aristotle.html
Leigland, S. (2004). Pragmatism and radical behaviorism: comments on malone (2001). Behavior and Philosophy, 32(2), 305-315. Retrieved November 2, 2007, from the ProQuest Database.
Levin, J. (2004). Functionalism. Stanford Encyclopedia of Philosophy. Retrieved November 8, 2007, from http://plato.stanford.edu/entries/functionalism/
Ludvig, E. A. (2003). Why pinker needs behaviorism: A critique of the blank slate. Behavior and Philosophy, 31, 139-143. Retrieved October 23, 2007, from the ProQuest Database.
Markie, P. (2004). Rationalism vs. empiricism. Stanford Encyclopedia of Philosophy. Retrieved October 15, 2007, from http://plato.stanford.edu/entries/rationalism-empiricism/
Postman, L. (1968). Hermann Ebbinghaus. American Psychologist, 23(3), 149-157. Retrieved November 5, 2007, from the EBSCOhost Database.
“Pragmatism.” Wikipedia, the free encyclopedia. Retrieved October 23, 2007, from http://en.wikipedia.org/wiki/Pragmatism
“Rene Descartes.” Wikipedia, the free encyclopedia. Retrieved October 10, 2007, from http://en.wikipedia.org/wiki/Ren%C3%A9_Descartes
Smith, K. (2007). Descartes’ life and works. Stanford Encyclopedia of Philosophy. Retrieved October 10, 2007, from http://plato.stanford.edu/entries/descartes-works/
Smith, R. (2004). Aristotle’s logic. Stanford Encyclopedia of Philosophy. Retrieved October 15, 2007, from http://plato.stanford.edu/entries/aristotle-logic/
Sternberg, R. J. (2006). Cognitive psychology (4th ed.). California: Thomson Wadsworth.
“Structuralism.” Wikipedia, the free encyclopedia. Retrieved October 15, 2007, from http://en.wikipedia.org/wiki/Structuralism
Thorndike, E. “Behaviorism.” Wikipedia, the free encyclopedia. Retrieved October 23, 2007, from http://en.wikipedia.org/wiki/Edward_Thorndike
Uzgalis, W. (2007). John Locke. Stanford Encyclopedia of Philosophy. Retrieved October 10, 2007, from http://plato.stanford.edu/entries/locke/
Ward, T. (n.d.). Empiricism. Retrieved October 15, 2007, from http://personal.ecu.edu/mccartyr/american/leap/empirici.htm
Witherington, D. C., & Crichton, J. A. (2007). Frameworks for understanding emotions and their development: Functionalist and dynamic systems approaches. Emotion, 7(3), 628-637. Retrieved October 23, 2007, from the EBSCOhost database.
Wozniak, R. H. (1999). Memory: Hermann Ebbinghaus. Retrieved October 23, 2007, from http://psychclassics.yorku.ca/Ebbinghaus/wozniak.htm
Zachar, P. (2002). The practical kinds model as a pragmatist theory of classification. Philosophy, Psychiatry & Psychology, 9(3), 219-227. Retrieved October 23, 2007, from the ProQuest Database.