Applicable Artificial Intelligence

The term “Artificial Intelligence” was coined by John McCarthy in 1956. There are basically two ideas in the definition of Artificial Intelligence: (i) intelligence and (ii) an artificial device. There have been different views regarding the scope of Artificial Intelligence. One view is that artificial intelligence is about designing systems that are as intelligent as humans. This view involves trying to understand human thought and an effort to build machines that imitate the human thought process.

This view is the cognitive science approach to AI. Neural networks are a different paradigm for computing from von Neumann machines, which are based on the processing/memory abstraction of human information processing. Neural networks are like multiprocessor computer systems with simple processing elements, a high degree of interconnection, simple scalar messages, and adaptive interaction between elements.

The intellectual roots of AI date back to early studies of the nature of knowledge and reasoning. The dream of building a machine that imitates humans also has a very early history. Aristotle (384–322 BC) developed an informal system of syllogistic logic, which is the basis of the first formal deductive reasoning systems. Early in the 17th century, Descartes proposed that the bodies of animals are nothing more than complex machines. In 1642, Pascal made the first mechanical digital calculating machine. George Boole developed a binary algebra representing “laws of thought” in the 19th century. Charles Babbage and Ada Byron worked on programmable mechanical calculating machines. At the end of the 19th century and in the early 20th century, mathematical philosophers developed mathematical representations of logic problems.

The advent of computers provided a revolutionary advance in the ability to study intelligence. The history of neural networks can be divided into several periods:

  • First Attempts: There were some early simulations using formal logic. McCulloch and Pitts developed a Boolean circuit model of the brain in 1943, explaining how networks of neurons could compute. In 1951 Marvin Minsky and Dean Edmonds built the SNARC, the first randomly wired neural network learning machine; it used 300 vacuum tubes and a network of 40 neurons.
  • Period of Frustration and Disrepute: In 1969 Minsky and Papert demonstrated the limitations of single-layer perceptrons and generalised these limitations to multilayered systems.
  • Innovation: During this period several paradigms were developed which modern work continues to enhance. Grossberg developed Adaptive Resonance Theory (ART) networks based on biologically plausible models. Werbos developed and applied the backpropagation learning method, which is probably the best known and most widely applied neural network training method. Amari contributed to the theoretical developments. Klopf developed a basis for learning in artificial neurons based on a biological principle for neuronal learning called heterostasis.
  • Re-Emergence: Progress accelerated during the late 1970s and early 1980s in the neural network field. Several factors influenced this process: books were a major factor, while conferences, media coverage, and tutorials increased activity and helped disseminate the technology.
  • Today: Significant progress has been made in the field of neural networks, attracting a great deal of attention and funding for further research. Advancement beyond the current commercial applications appears possible, and research is advancing the field on many fronts. Chips based on neural networks are emerging, and applications to complex problems are being developed. Neural network technology is in a period of transition.
  • The Turing Test: Turing proposed an operational definition of machine intelligence: a machine counts as intelligent if a human interrogator, conversing with it, cannot reliably tell it apart from a human. Turing’s definition is very thought-provoking in itself and certainly seems, on the face of things, to be a valid test for human-level intelligence in a machine.
  • Chinese Room Argument: John R. Searle’s Chinese Room argument is a remarkable contribution to the discussion about the foundations of artificial intelligence and cognitive science, and it is often used as an introduction to the philosophy of mind. The argument tries to show that strong AI is false.

But this cannot be shown while the workings of the human mind remain unknown; how can anyone know it a priori, before any empirical tests have been carried out? This is the ingenious part of the argument. The idea is to construct a system that runs any given program and yet is not mental. If such a system exists, then strong AI is false, since no program would ever make it mental. Artificial neural networks are among the most powerful learning models.

They approximate a wide range of complex functions representing multidimensional input-output maps. A neural network is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. With their remarkable ability to derive meaning from complicated or imprecise data, neural networks can be used to extract patterns and detect trends that are too complicated to be noticed by either humans or other computer techniques. An artificial neural network is represented by a set of nodes, often arranged in layers, and a set of weighted directed links connecting them.

The nodes correspond to neurons, while the links denote synapses. The nodes are the information processing units and the links are the communication media. There is a wide variety of networks, depending on the nature of the information processing carried out at individual nodes, the topology of the links, and the algorithm for adapting the link weights. Some popular ones include:

Perceptron: It consists of a single neuron with multiple inputs and a single output. The processing of information is carried out through a transfer function which is either linear or nonlinear.

Multilayered Perceptron: It consists of an input layer, one or more hidden layers, and an output layer. Each layer contains a number of perceptrons.

The output is transmitted to the inputs of nodes in other layers through weighted links. Transmission occurs only towards the nodes of the next layer, leading to feed-forward networks.

Formula of the Perceptron: It is a step function based on a linear combination of real-valued inputs: if w0 + w1x1 + … + wnxn is above the threshold 0, it outputs 1; otherwise it outputs -1. A perceptron thus draws a hyperplane as the decision boundary over the input space, and can therefore learn only linearly separable functions.

Perceptron Learning: Learning a perceptron means finding the right values for the weight vector W. The perceptron learning algorithm is stated as:

  • Assign random values to the weight vector.
  • Apply the weight update rule to every training example.
  • Are all the training examples correctly classified? (a) Yes: quit. (b) No: go back to step 2.

There are two popular weight update rules.

They are as follows. The Perceptron Rule: for a training example x = (x1, x2, …, xn), update each weight according to wi = wi + Δwi, where Δwi = η(t − o)xi; here t is the target output, o is the output generated by the perceptron, and η is a constant known as the learning rate. The Delta Rule: this rule is used when the data are not linearly separable. The key idea is to use a gradient descent search, which reduces the error E = ½ Σi (ti − oi)², where the sum goes over all training examples. There are two differences between the perceptron rule and the delta rule. First, the perceptron rule is based on the output of a step function, whereas the delta rule uses the unthresholded linear combination of inputs directly.
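To make the perceptron rule concrete, here is a minimal sketch in Python. It assumes outputs in {+1, -1} and a bias weight w0 fed by a constant input x0 = 1; the logical-AND dataset, learning rate of 0.1, and epoch limit are illustrative choices, not part of the original text.

```python
# A minimal sketch of the perceptron rule (illustrative assumptions:
# AND dataset, eta = 0.1, epoch limit of 100).

def step(z):
    """Step activation: output 1 if the weighted sum exceeds 0, else -1."""
    return 1 if z > 0 else -1

def train_perceptron(examples, eta=0.1, epochs=100):
    """examples: list of (inputs, target) pairs with targets in {+1, -1}."""
    n = len(examples[0][0])
    w = [0.0] * (n + 1)                  # w[0] is the bias weight (input x0 = 1)
    for _ in range(epochs):
        all_correct = True
        for x, t in examples:
            o = step(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
            if o != t:                   # apply w_i <- w_i + eta * (t - o) * x_i
                all_correct = False
                w[0] += eta * (t - o)    # bias update, since x0 = 1
                for i, xi in enumerate(x):
                    w[i + 1] += eta * (t - o) * xi
        if all_correct:                  # step 3: every example classified correctly
            break
    return w

# Logical AND, a linearly separable problem.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w = train_perceptron(data)
print([step(w[0] + w[1] * x1 + w[2] * x2) for (x1, x2), _ in data])  # [-1, -1, -1, 1]
```

On linearly separable data such as AND, the loop converges and quits, matching step 3 of the algorithm above; on non-separable data such as XOR it would simply exhaust the epoch limit.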

Second, the perceptron rule converges to a consistent hypothesis provided the data are linearly separable, while the delta rule converges only in the limit but does not need the data to be linearly separable. Two limitations of the gradient descent search are that convergence may take a long time and that there is no guarantee of finding the global minimum. Back-Propagation Algorithm: this algorithm is used to train multilayered perceptrons. It learns the weights for all links in an interconnected multilayered network. Algorithm: create a network with nin input nodes, nhidden hidden nodes, and nout output nodes.
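As a sketch of how such a network can be trained, the following Python code implements backpropagation for a single hidden layer with sigmoid units, performing gradient descent on the squared error E = ½ Σi (ti − oi)² introduced above. The XOR task, layer sizes, learning rate, random seed, and iteration count are illustrative assumptions, not part of the original text.

```python
# A minimal backpropagation sketch for an nin / nhidden / nout network
# (illustrative: XOR task, 2-4-1 layout, eta = 0.5, 10000 iterations).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden, n_out = 2, 4, 1          # nin, nhidden, nout from the text
W1 = rng.normal(size=(n_in, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_out))  # hidden -> output weights
b2 = np.zeros(n_out)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

eta = 0.5                                # learning rate
for _ in range(10000):
    # Forward pass through the feed-forward network.
    H = sigmoid(X @ W1 + b1)             # hidden-layer activations
    O = sigmoid(H @ W2 + b2)             # output-layer activations
    # Backward pass: error terms for E = 1/2 * sum((T - O)**2).
    dO = (O - T) * O * (1 - O)           # output-layer error term
    dH = (dO @ W2.T) * H * (1 - H)       # error propagated back to the hidden layer
    # Gradient-descent weight updates.
    W2 -= eta * (H.T @ dO)
    b2 -= eta * dO.sum(axis=0)
    W1 -= eta * (X.T @ dH)
    b1 -= eta * dH.sum(axis=0)

print(np.round(O.ravel(), 2))            # should approach [0, 1, 1, 0]
```

Full-batch gradient descent is used here for clarity; updating after each example (stochastic gradient descent) is a common variant of the same algorithm.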
