Pattern Recognition


Explain with reference to the relevant experimental evidence the main models of pattern recognition.

An adaptation of Sperling's model of information processing explores the third process after sensory input: pattern recognition. Pattern recognition is the process by which we identify the various stimuli that have been encoded by our sensory systems. Evidence for how individuals assess stimuli comes from establishing the main theoretical models and determining the patterns that arise from the stimuli of written and spoken words and of objects. The investigation of these stimuli involves recognising written and spoken letters, how individuals recognise words made up of these letters, and the recognition and semantic processing of sentences made up of these words.


Three types of model provide experimental evidence of pattern recognition: template models, feature models, and structural models. Template theory argues that we recognise patterns by comparing them to stored representations. Template models rest on three basic assumptions: (1) memories are represented as holistic, unanalysed entities (templates); (2) an input pattern is compared to the stored representations; (3) identity is determined by selecting the template with the greatest overlap, so a given pattern is recognised by the template that matches the stimulus most closely. For example, the numbers printed on a cheque are recognised by a template system.
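The three assumptions above can be sketched as a small program. This is a minimal illustration only: the binary grids and the overlap measure are invented here for the example, not drawn from any specific template-matching system.

```python
# Illustrative sketch of template matching: patterns and templates are small
# binary grids, and identity goes to the template with the greatest overlap.
TEMPLATES = {
    "1": ["010", "110", "010", "010", "111"],
    "7": ["111", "001", "010", "010", "010"],
}

def overlap(a, b):
    """Count grid cells where the two patterns agree (the 'amount of overlap')."""
    return sum(ca == cb for ra, rb in zip(a, b) for ca, cb in zip(ra, rb))

def recognise(pattern):
    """Assumption 3: select the stored template with the greatest overlap."""
    return max(TEMPLATES, key=lambda name: overlap(TEMPLATES[name], pattern))

print(recognise(["010", "110", "010", "010", "111"]))  # -> "1"
```

The sketch also exposes the model's weakness discussed below: a slightly shifted or resized "1" would overlap poorly with the stored grid, so either the match fails or a new template must be added for every variant.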

Although effective, the template model of pattern recognition faces a number of problems. The main one is that a given object may be represented by a range of different physical patterns, and templates are intolerant of such deviations. The second problem is that an enormous number of templates would be required to cover every variant of every pattern.

Due to the shortcomings of template theory, an alternative explanation of pattern recognition was developed. This explanation, known as feature-detection theory, was designed to accommodate pattern variability by focusing on common elements across different instances of an object. The theory proposes that we break a stimulus down into its component features and then use these features to infer the identity of the stimulus; patterns are identified in terms of the set of features that define them. Supported by Gibson (1969), this model suggests that individuals learn to identify patterns by learning which particular set of features allows them to discriminate that pattern. Feature theories of pattern recognition work on three basic premises. The first is that the stored representation is a description of past inputs in terms of a list of attributes or features. The second is that inputs are broken down into a small list of constituent features, and the third is that identity is determined by selecting the feature list most similar to the input. One of the most influential feature-theory models was Selfridge's (1959) Pandemonium. Pandemonium consists of four separate layers, each composed of "demons" specialised for specific tasks. The first type, image demons, acquire a neural representation of the stimulus, which they store and pass on. The second type, feature demons, extract the features of the stimulus. The third type, cognitive demons, try to match the feature demons' output to known stimuli. The fourth type, the decision demon, decides what the pattern is based on the strength of the cognitive demons' output.
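The last two demon layers can be sketched as follows. The feature lists per letter are invented simplifications for illustration, not Selfridge's actual feature set.

```python
# Illustrative sketch of Pandemonium's cognitive and decision demons.
# Each cognitive demon knows one letter's feature list (entries invented here).
COGNITIVE_DEMONS = {
    "A": {"oblique_left", "oblique_right", "horizontal_bar"},
    "H": {"vertical_left", "vertical_right", "horizontal_bar"},
    "V": {"oblique_left", "oblique_right"},
}

def decision_demon(stimulus_features):
    """Each cognitive demon 'shouts' with strength equal to how many of its
    features the feature demons found in the input; the decision demon
    simply picks the loudest shout."""
    shouts = {letter: len(features & stimulus_features)
              for letter, features in COGNITIVE_DEMONS.items()}
    return max(shouts, key=shouts.get)

print(decision_demon({"oblique_left", "oblique_right", "horizontal_bar"}))  # -> "A"
```

Note how the input above partially activates V (two shared features) as well as fully activating A: graded, competing activation is what distinguishes this scheme from all-or-none template matching.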

How did Selfridge arrive at this pattern-recognition methodology? He demonstrated the effectiveness of Pandemonium via two different tasks:

(1) distinguishing dots and dashes in manually keyed Morse code, and (2) recognising ten different hand-printed characters.

Gibson (1969) expanded upon Selfridge's model by proposing a set of features which could be used to discriminate upper-case letters, assessed against four criteria. First, features should be critical ones that are not present in all members of the set; for example, the horizontal line in A is critical because it differentiates A from other letters. Second, the features should remain intact regardless of changes in size, perspective, or other attributes; for example, the features of B are that it has a vertical line, closed curves, an intersection, and the redundant feature of symmetry in the vertical plane. Third, the set of features defining each letter should be unique; as indicated, the features of B and A differ, each establishing independence. Fourth, the number of proposed features should be small; for example, it takes longer to discriminate P from R than G from M, because P and R share many critical features. Hubel and Wiesel (1965) sought to test the physiological basis of Gibson's theory by investigating the responses of visual cortical cells. They placed microelectrodes in the visual cortex of cats and examined the responses to different stimuli, finding that some neurons respond only to horizontal lines and others only to diagonals; similar evidence was found in monkeys by Maunsell and Newsome (1987).

Perceptual confusion studies reveal how individuals make mistakes when identifying the key characteristics of letters. Letters that share many features should be more easily confused. Kinney, Marsetta, and Showman (1966) flashed letters on a screen and asked the participants to name them.

Most of the errors were responses of letters sharing similar features with the target; for example, when the stimulus C was shown, many individuals responded with G, as both have a similar open curve and symmetry. A logical prediction of feature-detection theory is that patterns with similar features (e.g., E and F) should be mistaken for one another more often than patterns with dissimilar features (e.g., E and W); that is, the rate of perceptual confusions should reflect the number of features the stimuli have in common. This has been observed in a number of experiments (e.g., Geyer & DeWald, 1973; Garner, 1979; Townsend, 1971). Similarly, when deciding whether pairs of letters are the same or different, responses take longer if the letters share many features than if they share few. Gibson, Schapiro, and Yonas (1968) conducted a reaction-time study exploring how long it took participants to judge whether pairs of letters were the same or different; the data were analysed by hierarchical cluster analysis to reveal which letters were most confusable.
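The confusability prediction can be made concrete with a shared-feature count. The feature lists below are illustrative simplifications, not the stimulus sets used in the cited experiments.

```python
# Sketch: feature-detection theory predicts that confusion rate tracks the
# number of shared features. Feature lists are invented for illustration.
FEATURES = {
    "E": {"vertical", "horizontal_top", "horizontal_mid", "horizontal_bottom"},
    "F": {"vertical", "horizontal_top", "horizontal_mid"},
    "W": {"oblique_down", "oblique_up", "oblique_down2", "oblique_up2"},
}

def shared_features(a, b):
    """Number of features two letters have in common."""
    return len(FEATURES[a] & FEATURES[b])

# E/F share many features, E/W none, so the theory predicts E is mistaken
# for F far more often than for W, and E-vs-F judgements take longer.
print(shared_features("E", "F"), shared_features("E", "W"))  # -> 3 0
```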

Experimental evidence is thus consistent with the feature-theory model, which also has the advantage that storing features in memory is economical. Conversely, the model has a number of disadvantages as a general account of pattern recognition. It does not apply to a wide range of stimuli, the analysis of stimuli does not always begin with features, and it treats all features as equivalent.

For example, Weisstein and Harris (1974) explored how feature theory fails to account for context and expectation. They asked participants to detect a line embedded in either a briefly presented three-dimensional form (e.g., a box) or a random assortment of lines. Although feature theory suggests that the target line should always activate the same feature detectors, their results indicated that detection was best when the line was embedded in the coherent form (the box). Furthermore, feature detection suggests pattern recognition is based solely on identifying stimulus features, yet stimuli with the same features can be different patterns: the letter A consists of two oblique lines and a horizontal dash, but the same elements arranged as "/ –" are not recognised as an A.

The third type of model is the structural model. Its basic premise is to build on the feature model by accounting for the relationships between features. For example, Lockhead and Crist (1980) demonstrated that we can use relational information when recognising patterns. In one condition, children sorted a deck of cards containing the letters p and b into two piles. In the other condition, the children sorted the cards again, but this time each letter had a dot added at the same physical location, providing a relational cue that should aid letter discrimination. The children sorted the decks faster and more accurately when the dots were present.

A key prediction of structural theories is that removing relational information should reduce pattern-recognition accuracy. Biederman (1985) tested this by removing line segments from drawings of objects and asking participants to identify the image. In one condition the segments were removed from the middle of lines (relational information preserved).

In the other condition they were removed from the intersections of lines (relational information removed). Participants were most accurate when the segments were removed from the middle of the lines, i.e., in the condition preserving relational information.

Research in the shape-recognition domain asks how we recognise the many objects in our environment (e.g., chairs, telephones). Biederman (1990) investigated this question by developing a model of object perception which proposed the existence of 35 geons (simple volumes) from which all objects could be constructed, provided the relations between the volumes are specified. In his recognition-by-components (RBC) theory, the individual recognises objects in two stages: first identifying a small set of basic shapes (geons) that are grouped into objects, and then identifying edges and boundaries (regions). RBC theory assumes that simple volumes such as blocks, wedges, and cylinders, known as geometric primitives, form the component view of objects; a geon is created by combining edges. Biederman proposed that the 35 geons act as a visual object alphabet: geons make up objects, and different combinations of geons produce different objects, with initial recognition of sub-objects followed by their combinations. RBC theory suggests the relations among geons are specified in an object dictionary (for example: next-to, underneath, smaller-than), and identification of the constituent geons and their structural relations leads to object recognition.

RBC theory predicts pattern recognition by three components. (1) An object is recognised by identifying its geons; the correct spatial organisation is essential for picture recognition in humans, since different objects can be made up of the same parts, and discriminating between those objects necessarily involves sensitivity to spatial interrelations. (2) The system is robust: up to 50% of an object's geons can be deleted, and if a subset of only two or three geons is available in the correct spatial organisation, successful object recognition will still occur. (3) Colour, shading, and texture are not necessary for object recognition unless one must discriminate between similarly shaped objects (e.g., a ball and an apple); object recognition in humans is largely invariant with regard to changes in the size, position, and viewpoint of the object. Applying these components, Biederman argued that for object recognition to occur, one must determine the contour of an object in order to extract its constituent parts, and decide which edge information possesses the important characteristic of remaining invariant across viewing angles.
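The object-dictionary idea can be sketched as structural descriptions of (geon, relation, geon) triples. The dictionary entries and relation names below are invented for illustration; the point is that the same geons in different relations yield different objects.

```python
# Sketch of RBC-style recognition: an object is a set of structural triples
# (geon, relation, geon), and recognition matches the identified description
# against an object dictionary. Entries are illustrative, not Biederman's.
OBJECT_DICTIONARY = {
    "mug":    {("cylinder", "side-of", "curved_handle")},
    "bucket": {("cylinder", "top-of", "curved_handle")},
}

def recognise(structural_description):
    """Identify the object whose dictionary entry matches the geons AND
    their relations; same parts with different relations do not match."""
    for name, description in OBJECT_DICTIONARY.items():
        if description == structural_description:
            return name
    return None

# Same two geons, different spatial relation -> different objects.
print(recognise({("cylinder", "side-of", "curved_handle")}))  # -> "mug"
print(recognise({("cylinder", "top-of", "curved_handle")}))   # -> "bucket"
```

The mug/bucket pair makes component (1) concrete: both objects decompose into a cylinder plus a curved handle, so only the spatial relation between the parts distinguishes them.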

The analysis of the contours, edges, and regions of objects draws on Gestalt psychology's investigations of shape. Biederman applied these ideas and formulated the properties of boundaries that define regions: smooth continuation, cotermination, parallelism, and symmetry.

Biederman's RBC theory was initially assessed by testing its ability to account for degraded patterns. Participants were presented with degraded line drawings of objects; in some of them, both contour and relational information had been removed. Results indicated that object identification was poor when relational information (the joints) was deleted, even when the item had been seen previously.

Closer examination of Biederman's RBC theory does reveal some discrepancies. For example, global processing of an object often precedes processing of its component parts (Navon, 1977). Conversely, Cave and Kosslyn (1993) presented participants with line drawings of objects divided into parts that made geons difficult to detect; their results suggested that dividing objects into parts had very little effect on object-recognition accuracy. Two remaining concerns for RBC theory are that geons have not been tested directly and that there is still no convincing evidence for the number of geons proposed by Biederman.
