# 370 Ch. 1 & 2

Personology
Term coined by Henry Murray (1938) for the study of the person as a coherent entity
Case study
an in-depth study of one person, usually entails long period of observation (and repeated observations), and can include unstructured interviews. Pertains to normal life (real life settings observed). Very detailed and observer can dig deeper into interesting areas.
Experience Sampling
(aka diary studies) The person under study repeatedly stops and reports some aspect of current experience (often with prompting, e.g. a pager signal). Less opportunity for distortion in recalling experiences (because they are fairly current in memory). Like a case study, allows patterns of behaviour within a given person to emerge across many situations = idiographic method.
Generalizability
as a continuum. Rarely does a study have total generalizability. Often a trade-off between generalizability and the depth of understanding of a person.
Variable
a dimension along which variations exist. Go beyond case studies in order to examine people who represent a range of levels of a given variable
Correlation
Two aspects = direction (positive, negative) and strength (correlation coefficient).
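Both aspects can be seen in Pearson's r (a sketch with invented numbers: the sign gives the direction, the magnitude gives the strength):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient: sign = direction, magnitude = strength."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores for five people on two variables.
extraversion = [2, 4, 5, 7, 9]
party_hours  = [1, 3, 4, 8, 9]
r = pearson_r(extraversion, party_hours)  # positive direction, strong magnitude
```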
Statistically Significant
Correlation = a correlation that would have been that large or larger only rarely if no true relationship existed (the probability of obtaining it by chance is low). Conclusion: the relationship is real rather than a random occurrence
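The "would arise only rarely by chance" idea can be made concrete with a permutation test (a sketch with made-up data, not the only way significance is computed): shuffle one variable many times and count how often chance alone produces a correlation as large as the observed one.

```python
import random
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sqrt(sum((x - mx) ** 2 for x in xs)) *
                  sqrt(sum((y - my) ** 2 for y in ys)))

random.seed(0)  # fixed seed only so the sketch is reproducible
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 1, 4, 3, 6, 5, 8, 7]
observed = pearson_r(x, y)

# Shuffle y repeatedly; count how often a shuffled |r| matches or beats the observed one.
n_perm = 2000
hits = 0
for _ in range(n_perm):
    shuffled = y[:]
    random.shuffle(shuffled)
    if abs(pearson_r(x, shuffled)) >= abs(observed):
        hits += 1
p_value = hits / n_perm  # small p -> the correlation is "statistically significant"
```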
Clinically Significant or Practically Significant
both statistically significant and large enough to have some practical importance.
– Causality: a cause-and-effect relationship. Correlation does not establish causality (third-variable problem, directionality problem)
Experimental Method
demonstrates cause and effect. Manipulates the independent variable (the possible cause).
○ Drawbacks: uncertainty about which aspect of the manipulation was important. Usually involve events of relatively short duration.
Experimental Control
making everything exactly the same except for what you manipulate.
Random Assignment
For any variable that cannot be controlled. Randomly assign participants to levels of independent variable.
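A minimal sketch of random assignment (participant labels are hypothetical): shuffling spreads uncontrolled differences across conditions by chance.

```python
import random

random.seed(42)  # fixed seed only so the sketch is reproducible
participants = [f"P{i}" for i in range(1, 9)]
random.shuffle(participants)  # uncontrolled variables now spread across groups by chance
half = len(participants) // 2
treatment, control = participants[:half], participants[half:]
```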
Dependent Variable
the effect measured from the independent variable cause.
Descriptive Statistics
Summarize and describe a set of data (e.g. mean, standard deviation)
Inferential Statistics
researcher makes inferences about relationship between variables (probabilistic)
Multifactor Study
two (or more) variables are varied separately, which means creating all combinations of the various levels of the predictor variables
Experimental Personality Research
combining experimental procedures and individual differences
Main Effect
when a predictor variable is linked to the outcome in a systematic way, completely separate from the other predictor
Interaction
the effect of one variable differs across the two levels of the other variable. (only in multi-factor designs)
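With invented cell means for a 2 × 2 design, a main effect shows up as a difference between marginal means, and an interaction as a "difference of differences":

```python
# Hypothetical 2 x 2 cell means: rows = stress (low/high),
# columns = self-esteem (low/high); entries = mean mood score.
cells = {("low", "low"): 6, ("low", "high"): 7,
         ("high", "low"): 2, ("high", "high"): 6}

# Main effect of stress: compare the marginal (row) means.
low_stress  = (cells[("low", "low")] + cells[("low", "high")]) / 2
high_stress = (cells[("high", "low")] + cells[("high", "high")]) / 2
main_effect = low_stress - high_stress

# Interaction: does the effect of stress differ across self-esteem levels?
effect_at_low_se  = cells[("low", "low")] - cells[("high", "low")]
effect_at_high_se = cells[("low", "high")] - cells[("high", "high")]
interaction = effect_at_low_se - effect_at_high_se  # nonzero -> the effects differ
```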
Observer Ratings
An assessment in which someone else produces information about the person being assessed.
○ Through interviews, often through talking about something other than themselves, watching others’ actions, summary judgments if they know the person well
○ What does your stuff say about you? Three broad mechanisms that connect people to their spaces: 1) Identity claims – symbolic statements about who we are, 2) Feeling regulators – not intended to send messages but help us manage our emotions, 3) Behavioural residues – physical traces left in our surroundings by everyday actions.
Self-Report
An assessment in which people make ratings pertaining to themselves.
○ E.g. True-false formats, multipoint rating scale
Inventory
A personality test measuring several aspects of personality on distinct subscales
Implicit Assessment
Measuring associations between the sense of self and aspects of personality that are implicit (hard to introspect to).
○ Task that involves making judgments, the pattern of responses informs what the person is like.
○ E.g. IAT by Greenwald et al., 2008.
Subjective
A measure incorporating personal interpretation
Objective
A measure that incorporates no interpretation
Reliability
Consistency (repeatability) across repeated measurements
Error
Random influences that are incorporated into measurements; reduced by increasing reliability
Internal Reliability/Internal Consistency
Agreement among responses made to the items of a measure.
○ The more observations/items, the more likely the random error will cancel out.
○ BUT, item response theory: determining the most useful items, and the most useful response choices, for the concept being measured (increasing measurement precision while reducing the number of items). Also determines the difficulty of an item (a more difficult item distinguishes better among people high on the dimension).
§ E.g. Computerized adaptive testing (application of IRT) – next items selected from test bank are based on the person’s responses to prior items.
○ Correlations among people’s responses to the items.
Split-half Reliability
Assessing internal consistency among responses to items of a measure by splitting the items into halves and then correlating them.
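A sketch with hypothetical item responses: total the odd-numbered and even-numbered items for each person, correlate the two halves, then apply the standard Spearman-Brown formula to estimate the reliability of the full-length scale.

```python
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sqrt(sum((x - mx) ** 2 for x in xs)) *
                  sqrt(sum((y - my) ** 2 for y in ys)))

# Hypothetical responses: rows = people, columns = 6 items of one scale.
responses = [
    [4, 5, 4, 5, 4, 5],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 4, 5, 5, 5, 4],
]
# Split the items into odd/even halves and total each half per person.
odd  = [sum(row[0::2]) for row in responses]
even = [sum(row[1::2]) for row in responses]
r_halves = pearson_r(odd, even)
# Spearman-Brown correction: estimated reliability of the full-length scale.
reliability = 2 * r_halves / (1 + r_halves)
```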
Inter-rater reliability
The degree of agreement between observers of the same events
Test-retest reliability
The stability of measurements across time.
– If personality is really stable, then measures of personality should be reliable across time.
Validity
The degree to which a measure actually measures what it’s intended to measure.
If what they’re measuring isn’t what they think they’re measuring, they will draw wrong conclusions.
Operational Definition
The defining of a concept by the concrete events through which it’s measured (or manipulated)
Construct Validity
The accuracy with which a measure reflects the underlying concept. (involves all the different kinds of validity)
Criterion Validity
The degree to which the measure correlates with a separate criterion reflecting the same concept.
○ Regarded as most important way of establishing construct validity
○ Controversy: assume criterion used is perfect, when often it’s poor reflection of the construct
Predictive Validity
The degree to which the measure predicts other variables it should predict
Convergent Validity
The degree to which the measure relates to other characteristics it should, in theory, relate to.
E.g. Measure of dominance should be related (but not perfectly correlated) with qualities of leadership and shyness (inversely).
Discriminant Validity
The degree to which a scale does not measure unintended qualities.
○ Major line of defense against third-variable problem
Face Validity
The scale “looks” as if it measures what it’s supposed to measure.
○ Some believe that face-valid measures are easier to respond to
○ Researchers often refer to subtle differences in personality that can only be separated by measures high in face validity
Response Set
A biased orientation toward answering; a style of responding to items that is independent of their content.
Acquiescence
The response set of tending to say “yes” (“agree”) in response to any questions.
§ Combat by keying half the items in one direction and half in the other along the personality dimension (but negatively worded items can be difficult to understand and lead to less accurate responses)
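Scoring with half the items reverse-keyed can be sketched like this (the ratings and the choice of which items are reverse-keyed are hypothetical):

```python
# Hypothetical ratings on a 1-5 scale; items 1 and 3 (0-indexed) are reverse-keyed.
ratings = [5, 2, 4, 1]      # an acquiescent responder tends toward high values
reverse_keyed = {1, 3}

# On a 1-5 scale, 6 - r flips the scale: 1<->5, 2<->4, 3 stays 3.
scored = [6 - r if i in reverse_keyed else r
          for i, r in enumerate(ratings)]
total = sum(scored)         # yea-saying alone no longer inflates the total
```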
Social Desirability
The response set of tending to portray oneself favorably.
○ Try to word items so that social desirability isn’t salient
Rational/Theoretical Approach
The use of a theory to decide what you want to measure and then deciding how to measure it.
A measure is good when it has been shown to be reliable, predicts behavioural criteria, and has good construct validity
Empirical
The use of data instead of theory to decide what should go into the measure.
1) Does not assume that the person developing the measure knows in advance which qualities of personality exist (the data decide)
2) More pragmatic orientation to assessment, with the practical aim of sorting people into categories
Criterion Keying
The developing of a test by seeing which items distinguish between groups.
○ E.g. MMPI (true false test to assess abnormality)
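The selection logic can be sketched with invented endorsement data: an item is keyed if the groups answer it differently, regardless of what the item appears to mean.

```python
# Hypothetical true/false responses (1 = true): rows = people, columns = 3 items.
group    = [[1, 0, 1], [1, 1, 1], [1, 0, 0]]   # group of interest
controls = [[0, 0, 1], [0, 1, 1], [1, 0, 1]]   # comparison group

def endorsement(rows, item):
    """Proportion of people endorsing a given item."""
    return sum(row[item] for row in rows) / len(rows)

# Keep items whose endorsement rates differ substantially between groups,
# regardless of what the item content appears to mean.
keyed = [i for i in range(3)
         if abs(endorsement(group, i) - endorsement(controls, i)) >= 0.5]
```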