History of Artificial Intelligence


     The interest in creating devices that imitate human behavior or act intelligently has engrossed humanity for centuries (Reingold and Nightingale 1999). Superhuman robots, semi-human computers, statues, letter-writing dolls and player pianos have appeared in science fiction as representations of human beings. The concept of intelligent programs emerged after the War, when computer scientists found that they had surplus computing power that could be combined with major advances in machine design. Newell and Simon were among the first researchers to exploit this with their Logic Theorist program, which combined accepted rules of logic with a problem-solving procedure. The program not only reproduced human proofs but in some cases produced better ones. Newell and Simon's accomplishment was followed in 1956 by John McCarthy's, who coined the term "Artificial Intelligence" (Reingold and Nightingale). The Artificial Intelligence, or AI, movement was thus born.

     Modern AI systems automate human tasks and actions, such as answering customers' questions about products, understanding and transcribing speech and recognizing faces on cameras (Wikipedia 2005). AI systems are now routinely used in businesses, hospitals and military units worldwide, as well as being built into common computer software. The aim has been to develop so-called strong AI, which would simulate complete, human-like intelligence, as exemplified by the strong AI computer HAL 9000 in the film 2001: A Space Odyssey. The original focus of AI research grew out of an experimental approach to psychology and emphasized linguistic intelligence. Other approaches include robotics and collective intelligence, which focus on active manipulation of the environment or on consensus decision-making, and which draw on biology and political science for models of how intelligent behavior is organized (Wikipedia).


     Fields of Implementation

     Natural language is one of several fields of AI (Wikipedia 2005). One interesting example of the "most human" natural language chatterbots is ALICE, which, along with its clones, uses the language AIML. AI can also be viewed as the set of computer science problems that lack good solutions at this point; examples are pattern recognition, image processing, neural networks, natural language processing, robotics and game theory (Wikipedia).

     Hubert Dreyfus, a known critic of the AI movement, observed that almost all of the work in the field fell into the areas of Language Translation, Problem-Solving and Pattern Recognition (Reingold and Nightingale 1999). Dreyfus noted that, in the first ten years of the research program, five government agencies spent more than $20 million on its development. The programs of the late 1950s could translate technical documents at an average level, and it was believed that only larger databases and more computing power were needed to make them work on less formal and more ambiguous texts. Dreyfus pointed instead to the unsuspected complexity of syntax and semantics. One problem was the inability of the programs to imitate the human capacity to use context to disambiguate the meanings of words and sentences. Over-optimism toward AI, however, was seen as much a cause of its failures as of its successes.

     Most of the accomplishments in the Problem-Solving field centered on the work of Newell, Shaw and Simon on the General Problem Solver, or GPS (Reingold and Nightingale 1999). It used abstract problem-solving rules to solve a wide range of problems. Most of these rules were heuristic and postulated to be those used by human problem solvers. Despite the over-optimism, the GPS failed not because of a simple lack of computing capacity but because of deep, inherent theoretical problems: its general problem-solving strategies were too limited.

    Pattern recognition is the last of these fields (Reingold and Nightingale 1999). Computers could handle Morse code as long as there was very little noise and the sender was either another computer or a very efficient and precise human operator. None of the early programs, however, performed satisfactorily at broader pattern recognition. Rather, they relied on ultra-specific and inflexible templates and were defeated by any significant distortion of the data. Furthermore, they were not capable of resolving ambiguity or using context.

     Neural Networks

     Neural networks are an attempt to model the functionality of the brain in both its biological and computational areas of activity (Corr 1998). They have been used for many kinds of tasks, from intrusion detection systems to computer games. Professionals who can make use of neural networks include neurobiologists and psychologists seeking to understand how the brain works; engineers and physicists recognizing patterns in noisy data; business analysts and engineers modeling data; computer scientists and mathematicians seeking an alternative model of computing and machines that can be taught rather than programmed; and the artificial intelligentsia, cognitive scientists and philosophers interested in sub-symbolic processing (Corr).

     The positive properties of neural networks, or NNs, are parallelism, capacity for adaptation, distributed memory, fault tolerance, capacity for generalization, and ease of construction (Corr 1998). NNs are naturally amenable to expression in a parallel notation and to implementation on parallel hardware. They are generally capable of learning. Their memory is distributed over many units and is therefore resistant to noise. Designers of expert systems find it difficult to formulate rules that encapsulate an expert's knowledge about a problem; NNs instead learn the rules from examples. Their capacity for generalization refers to their ability to give a satisfactory response to an input that was not part of the set of examples on which they were trained. It is the essential feature of a classification system, and it is of interest in that it closely resembles human generalization. Finally, computer simulations of small applications can be built relatively quickly. NNs have their limitations, too. They are inherently parallel but are normally simulated on sequential machines; as a consequence of this scaling problem, NNs have been used to manage only small problems. Their performance can also be sensitive to the quality and type of processing of the inputs. They cannot explain the results they obtain. And many design decisions are still not well understood (Corr).
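
     To make the learning-from-examples and generalization properties concrete, the following is a minimal sketch, not drawn from the cited sources; the toy data, network size and training settings are assumptions made purely for illustration. A tiny feed-forward network learns a rule from labelled examples and then answers correctly for a point it never saw:

```python
import numpy as np

# Toy training examples: 2-D points labelled 1 when x0 + x1 > 0, else 0.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# A small network: 2 inputs -> 8 hidden units -> 1 output; weights start random.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(3000):                      # adapt the weights to the examples
    h = sigmoid(X @ W1 + b1)                   # hidden-layer activations
    out = sigmoid(h @ W2 + b2)                 # the network's current answers
    delta2 = (out - y) / len(X)                # output error (cross-entropy gradient)
    delta1 = (delta2 @ W2.T) * h * (1 - h)     # error pushed back to the hidden layer
    W2 -= h.T @ delta2; b2 -= delta2.sum(0)
    W1 -= X.T @ delta1; b1 -= delta1.sum(0)

# Generalization: query a point the network never saw during training.
new_point = np.array([[0.4, 0.3]])
print(sigmoid(sigmoid(new_point @ W1 + b1) @ W2 + b2))  # should be close to 1
```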

     Latest Developments

      A MediMedia workshop discussed Artificial Intelligence as applied to pattern classification in medical images (Versweyveld 2000). An image analysis algorithm developer at Nergal described three current AI methods for recognizing specific features in a medical image. These approaches use artificial NNs, fuzzy logic systems and genetic algorithms to perform the classification task. The developer also discussed new ways of combining the three technologies to produce more accurate recognition results (Versweyveld).

     AI can make a computer learn from experience, allow it to build a symbolic representation of the world and derive logical and common-sense decisions (Versweyveld 2000). The first two areas have achieved good results, but AI continues to deliver only trivial, and time-consuming, decision-making performance. AI has two sides: one aims to understand and reproduce the intelligent behavior of human beings, which is bound up with philosophical questions about the nature of the mind, the regularity of organic systems, and problems of syntax and semantics. The developer focused more on the other side of AI, which addresses tasks that humans cannot do, such as managing large amounts of data and performing operations at the highest levels of precision. He emphasized that the goal of this approach is not to replace human experts with expert systems but to use computers appropriately within procedures. Computers at present cannot deal with partial or "noisy" information, but they are intended to manage non-algorithmic problems and to extract significant data from incomplete information. Computers are now expected to study several features of the same problem simultaneously. He cited the concrete example of complex classification in pattern recognition, also applied in medical imaging, such as detecting the shape of a single abnormal cell among many normal cells in an image.

     The developer introduced the three systems of neural networks, fuzzy logic and genetic algorithms (Versweyveld 2000). They share the common feature of imitating the biology of the human brain, which allows them to deal with uncertainty, chance and probability. He explained that the principal idea in artificial neural networking is to teach a computer to recognize and classify patterns through intensive training. The machine is first trained with many example images containing the specific feature. The trained ANN then extracts the feature from a previously unknown sample. The ANN consists of myriads of neurons, or variables with different values, which communicate with one another. The communication rules between synapses are tuned externally so as to reproduce the correct output for a specified input, thereby optimizing recognition. The ANN can then deal with a wholly new sample.
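
     As an illustration of that train-then-classify workflow, here is a hedged sketch using scikit-learn's small built-in digit images as a stand-in for medical images; the workshop's actual data, network architecture and training procedure are not described in the source:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 grey-scale digit images stand in for the medical images discussed above.
images, labels = load_digits(return_X_y=True)
train_x, test_x, train_y, test_y = train_test_split(
    images, labels, test_size=0.2, random_state=0)

# "Intensive training" on many labelled examples of the patterns to recognize.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(train_x, train_y)

# The trained network then classifies previously unseen samples.
print("accuracy on unseen samples:", net.score(test_x, test_y))
print("prediction for one new image:", net.predict(test_x[:1]))
```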

     Fuzzy logic approaches the human way of reasoning, according to the developer (Versweyveld 2000). Fuzzy systems divide the output into classes, using a non-binary, multi-valued logic for qualitative classification. Special algorithms process the fuzzy output without forcing sharp answers. Industries already make wide use of these systems, and they have proved to be much more efficient than conventional computing with its traditional sharp, quantitative handling and evaluation of data. This approach is particularly useful for data mining of large databases (Versweyveld).
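
     As a concrete illustration of multi-valued classification, here is a minimal sketch; the classes, breakpoints and the cell-size measurement are invented for illustration and are not taken from the workshop. An input belongs to each class to a degree between 0 and 1 rather than being forced into a single bin:

```python
def triangular(x, left, peak, right):
    """Degree of membership (0..1) in a class shaped as a triangle."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Hypothetical fuzzy classes for a cell-size measurement (arbitrary units).
classes = {
    "small":  lambda x: triangular(x, -5, 0, 10),
    "medium": lambda x: triangular(x, 5, 15, 25),
    "large":  lambda x: triangular(x, 20, 30, 45),
}

size = 8.0
memberships = {name: fn(size) for name, fn in classes.items()}
print(memberships)                       # partly "small" and partly "medium"
# A defuzzification step can still pick a single best label when one is needed:
print(max(memberships, key=memberships.get))
```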

     The third AI classification method is the genetic algorithm, which relies on the concept of evolution (Versweyveld 2000). A problem is initially approached through random solutions, each encoded as the genome of a proposed solution. All the solutions are then allowed to evolve according to their fitness to the answer by means of a mutual exchange of genes. The developer described this as a sort of sexual reproduction of solutions: by combining the genes of existing solutions, a more successful offspring can develop. He said that this algorithm is very useful for handling huge amounts of data when taking a random approach to an otherwise time-consuming problem.
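
     A minimal sketch of that evolve-by-fitness loop follows; the toy target, population size and mutation rate are assumptions made purely for illustration, not details from the source:

```python
import random
random.seed(1)

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # the "answer" the genomes evolve toward

def fitness(genome):
    # How well a proposed solution matches the answer (higher is fitter).
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # "Mutual exchange of genes": the offspring takes part of each parent.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Occasionally flip a gene to keep exploring new solutions.
    return [1 - g if random.random() < rate else g for g in genome]

# Start from random solutions, then let the fittest reproduce each generation.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(10)]
    population = parents + offspring

print(max(fitness(g) for g in population))  # typically reaches the maximum score of 10
```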

     The developer concluded that the combination of these three approaches will constitute the future evolution of AI (Versweyveld 2000). Neural networks, for instance, could be trained with the help of genetic algorithms rather than beginning from a completely random state of connections between neurons; the connections could instead be tuned by means of a genetic algorithm. All these approaches are under research for possible use in medical imaging and industrial applications (Versweyveld).
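
     One way to read that combination, sketched below under assumptions of my own (toy data, a single-neuron "network" and arbitrary GA settings), is to let a genetic algorithm evolve a network's connection weights by fitness instead of leaving them at their random starting values:

```python
import numpy as np
rng = np.random.default_rng(0)

# Toy classification data: the label is 1 when x0 + x1 > 0.
X = rng.uniform(-1, 1, size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def accuracy(weights):
    # A single neuron: weighted sum plus bias, thresholded at zero.
    scores = X @ weights[:2] + weights[2]
    return ((scores > 0) == y).mean()

# A population of random weight vectors evolved by fitness instead of backpropagation.
population = rng.normal(size=(30, 3))
for generation in range(40):
    ranked = sorted(population, key=accuracy, reverse=True)
    parents = np.array(ranked[:10])
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        mask = rng.random(3) < 0.5                                    # gene exchange
        child = np.where(mask, a, b) + rng.normal(scale=0.1, size=3)  # mutation
        children.append(child)
    population = np.vstack([parents, children])

best = max(population, key=accuracy)
print(accuracy(best))   # close to 1.0 on this simple, linearly separable task
```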

     In Kenneth M. Ford's projection of artificial intelligence, he does not envision armies of adroit robots running the world or pondering their own existence (Bower 2003). Nor does he expect them to turn out to be tiny computer-linked devices implanted in human bodies. He assumes only that the future of these machines will be confined to cognitive prostheses, still in the service of humankind. This is the concept behind the work of more than 50 scientists whom he directs at the Institute for Human and Machine Cognition, or IHMC, at the University of West Florida in Pensacola. A cognitive prosthesis is a computational tool that serves to amplify or extend a human being's thought and perception, much as eyeglasses improve vision. The difference is that a cognitive prosthesis magnifies the strength of the human intellect instead of correcting deficiencies; cognitive prostheses are thus more comparable to binoculars. This work is part of a wider discipline called human-centered computing, which endeavors to mold computer systems to accommodate human behavior rather than have human beings adapt to computers. Human-centered projects are vastly unlike traditional artificial intelligence projects, which create machines to think as people do. Ford rejects the influential Turing Test as a guiding principle for AI research. This test, which has been around for more than 50 years, holds that machine intelligence is achieved only when a computer behaves or interacts so much like a human being that it is impossible to detect the difference (Bower).

     The task of developing computers to replicate human brain functions beyond crude arithmetic computation reflects their creators' lack of knowledge of how the human brain really functions (Derbyshire 2001). Even artificial insects, such as ants, cannot be created with precision, considering insects' complex social behavior based on and drawn to scents and visual cues. For all the great deal of money poured into experiments in the field, real progress remains hardly perceptible. A computer can drive a car and look cute, but driving a car safely is a very low-level brain function; a person can do other things while driving. It is, in fact, hardly a brain function at all, as the unconscious nervous system takes over most of the load once driving has been learned. Whatever the motive behind the creation of robots, the central issue is the morality of it. Walking or talking like human beings, robots do not feel as human beings do. They cannot distinguish good from evil, yet they can develop unexpected powers until restored to their inanimate state. There may not be, as yet, much harm in some entertainment about Artificial Intelligence, but it should not delude anyone into thinking that "genuinely" intelligent machines will eventually be an important or permanent feature of the environment. All the effort on the part of well-meaning AI researchers is commendable, but the human personality still stands out, separate and irreplaceable. It will remain that way, because God created man in His image (Derbyshire). Man cannot take the role of God and create beings in his own likeness. It is not within the power of man to do so, and fervent attempts, however well-meaning the goals or refined the methods, will fail or yield disaster.

BIBLIOGRAPHY

1. Bower, Bruce. "Mind-Expanding Machines: Artificial Intelligence Meets Good Old-Fashioned Human Thought." Science News: Science Service, Inc., August 30, 2003.

2. Corr, Pat H. and David Newman. Neural Networks. Queen's University Belfast, 1998.

3. Derbyshire, John. "There Is No Substitute for Man." National Review: National Review, Inc., July 23, 2001.

4. Reingold, Eyal and Jonathan Nightingale. Artificial Neural Networks. Psy 371, 1999. Retrieved April 7, 2007 from http://psycho.utoronto.ca/~reingold/course/ai/ai.html

5. Versweyveld, Leslie. "Three New Approaches for Pattern Recognition in Medical Imaging." Medical IT News: Virtual Medical World Community, 2000.

6. Wikipedia. "Artificial Intelligence." MediaWiki, 2007. http://en.wikipedia.org/wiki/Artificial_Intelligence
