
Psychology P2 Definitions

● experiment: an investigation looking for a causal relationship in which
an independent variable is manipulated and is expected to be
responsible for changes in the dependent variable.
● independent variable: the factor under investigation in an experiment
which is manipulated to create two or more conditions (levels) and is
expected to be responsible for changes in the dependent variable.
● dependent variable: the factor in an experiment which is measured
and is expected to change under the influence of the independent
variable.
● extraneous variable: a variable which acts either randomly, affecting
the DV in all levels of the IV, or systematically, i.e. on only one level of
the IV (when it is called a confounding variable). It can obscure the
effect of the IV, making the results difficult to interpret.
● experimental condition: one or more of the situations in an experiment
which represent different levels of the IV and are compared (or
compared to a control condition).
● control condition: a level of the IV in an experiment from which the IV
is absent. It is compared to one or more experimental conditions.
● laboratory experiment: a research method in which there is an IV, a
DV and strict controls. It looks for a causal relationship and is conducted
in a setting that is not in the usual environment for the participants with
regard to the behaviour they are performing.
● experimental design: the way in which participants are allocated to
levels of the IV.
● independent measures design: an experimental design in which a
different group of participants is used for each level of the IV (condition).
● demand characteristics: features of the experimental situation which
give away the aims. They can cause participants to try to change their
behaviour, e.g. to match their beliefs about what is supposed to happen,
which reduces the validity of the study.
● random allocation: a way to reduce the effect of confounding variables
such as individual differences. Participants are put in each level of the
IV such that each person has an equal chance of being in any condition.
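Random allocation as described above can be sketched in Python (the participant labels are hypothetical; `random.shuffle` gives every person an equal chance of ending up in any condition):

```python
import random

def randomly_allocate(participants, n_conditions=2):
    """Shuffle the participants, then deal them round the conditions
    in turn, so each person has an equal chance of any condition."""
    pool = list(participants)
    random.shuffle(pool)
    return [pool[i::n_conditions] for i in range(n_conditions)]

participants = ["P1", "P2", "P3", "P4", "P5", "P6"]
condition_a, condition_b = randomly_allocate(participants)
```

Because the allocation is random, individual differences such as age or intelligence should spread roughly evenly across the conditions.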
● repeated measures design: an experimental design in which each
participant performs in every level of the IV.
● participant variables: individual differences between participants (such
as age, personality and intelligence) that could affect their behaviour in
a study. They could hide or exaggerate differences between levels of
the IV.
● order effects: practice and fatigue effects are the consequences of
participating in a study more than once, e.g. in a repeated measures
design. They cause changes in performance between conditions that
are not due to the IV, so can obscure the effect on the DV.
● practice effect: a situation where participants’ performance improves
because they experience the experimental task more than once, e.g.
due to familiarity or learning the task.
● fatigue effect: a situation where participants’ performance declines
because they have experienced an experimental task more than once,
e.g. due to boredom or tiredness.
● counterbalancing: counterbalancing is used to overcome order effects
in a repeated measures design. Each possible order of levels of the IV
is performed by a different sub-group of participants. This can be
described as an ABBA design, as half the participants do condition A
then B, and half do B then A.
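The AB/BA split above can be sketched as follows (a minimal illustration with hypothetical participant labels; real studies would usually randomise who goes into each order):

```python
def counterbalance(participants):
    """Split the sample in half: one half does condition A then B,
    the other half does B then A (the AB/BA arrangement)."""
    half = len(participants) // 2
    return {"AB": participants[:half], "BA": participants[half:]}

orders = counterbalance(["P1", "P2", "P3", "P4"])
# orders["AB"] perform A first; orders["BA"] perform B first
```

Any practice or fatigue effect then influences both conditions equally, so it cannot systematically favour one level of the IV.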
● matched pairs design: an experimental design in which participants
are arranged into pairs. Each pair is similar in ways that are important to
the study and one member of each pair performs in a different level of
the IV.
● standardisation: keeping the procedure for each participant in an
experiment (or interview) exactly the same to ensure that any
differences between participants or conditions are due to the variables
under investigation rather than differences in the way they were treated.
● reliability: the extent to which a procedure, task or measure is
consistent, for example that it would produce the same results with the
same people on each occasion.
● validity: the extent to which the researcher is testing what they claim to
be testing.
● field experiment: an investigation looking for a causal relationship in
which an independent variable is manipulated and is expected to be
responsible for changes in the dependent variable. It is conducted in the
normal environment for the participants for the behaviour being
investigated.
● generalise: apply the findings of a study more widely, e.g. to other
settings and populations.
● ecological validity: the extent to which the findings of research in one
situation would generalise to other situations. This is influenced by
whether the situation (e.g. a laboratory) represents the real world
effectively and whether the task is relevant to real life (has mundane
realism).
● natural experiment: an investigation looking for a causal relationship in
which the independent variable cannot be directly manipulated by the
experimenter. Instead they study the effect of an existing difference or
change. Since the researcher cannot manipulate the levels of the IV it is
not a true experiment.
● uncontrolled variable: a confounding variable that may not have been
identified and eliminated in an experiment, which can confuse the
results. It may be a feature of the participants or the situation.
● informed consent: knowing enough about a study to decide whether
you want to agree to participate.
● right to withdraw: a participant should know that they can remove
themselves, and their data, from the study at any time.
● privacy: participants’ emotions and physical space should not be
invaded, for example they should not be observed in situations or
places where they would not expect to be seen.
● confidentiality: participants’ results and personal information should be
kept safely and not released to anyone outside the study.
● self-report: a research method, such as a questionnaire or interview,
which obtains data by asking participants to provide information about
themselves.
● questionnaire: a research method that uses written questions.
● closed questions: questionnaire, interview or test items that produce
quantitative data. They have only a few, stated alternative responses
and no opportunity to expand on answers.
● open questions: questionnaire, interview or test items that produce
qualitative data. Participants give full and detailed answers in their own
words, i.e. no categories or choices are given.
● inter-rater reliability: the extent to which two researchers interpreting
qualitative responses in a questionnaire (or interview) will produce the
same records from the same raw data.
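One simple way to quantify inter-rater reliability is the proportion of items the two researchers coded identically (a minimal sketch with made-up category codes; published studies often use a correlation or a statistic such as Cohen's kappa instead):

```python
def percent_agreement(rater1, rater2):
    """Proportion of items the two raters coded identically."""
    matches = sum(a == b for a, b in zip(rater1, rater2))
    return matches / len(rater1)

# Two researchers coding the same ten open answers into categories
r1 = ["pos", "pos", "neg", "neu", "pos", "neg", "neg", "pos", "neu", "pos"]
r2 = ["pos", "neg", "neg", "neu", "pos", "neg", "pos", "pos", "neu", "pos"]
agreement = percent_agreement(r1, r2)  # 0.8
```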
● social desirability bias: a tendency for participants to present
themselves in the best light, e.g. by working out what a test is asking
and giving the answers that seem most socially acceptable.
● filler questions: items put into a questionnaire, interview or test to
disguise the aim of the study by hiding the important questions among
irrelevant ones so that participants are less likely to alter their behaviour
by working out the aims.
● interview: a research method using verbal questions asked directly,
e.g. face-to-face or on the telephone.
● structured interview: an interview with questions in a fixed order which
may be scripted. Consistency might also be required for the
interviewer’s posture, voice, etc. so they are standardised.
● unstructured interview: an interview in which most questions (after the
first one) depend on the respondent’s answers. A list of topics may be
given to the interviewer.
● semi-structured interview: an interview with a fixed list of open and
closed questions. The interviewer can add more questions if necessary.
● subjectivity: a personal viewpoint, which may be biased by one’s
feelings, beliefs or experiences, so may differ between individual
researchers. It is not independent of the situation.
● objectivity: an unbiased external viewpoint that is not affected by an
individual’s feelings, beliefs or experiences, so should be consistent
between different researchers.
● naturalistic observation: a study conducted by watching the
participants’ behaviour in their normal environment without interference
from the researchers in either the social or physical environment.
● controlled observation: a study conducted by watching the
participants’ behaviour in a situation in which the social or physical
environment has been manipulated by the researchers. It can be
conducted in either the participants’ normal environment or in an
artificial situation.
● unstructured observation: a study in which the observer records the
whole range of possible behaviours, which is usually confined to a pilot
stage at the beginning of a study to refine the behavioural categories to
be observed.
● structured observation: a study in which the observer records only a
limited range of behaviours.
● behavioural categories: the activities recorded in an observation. They
should be operationalised (clearly defined) and should break a
continuous stream of activity into discrete recordable events. They must
be observable actions rather than inferred states.
● inter-observer reliability: the consistency between two researchers
watching the same event, i.e. whether they will produce the same
records.
● participant observer: a researcher who watches from the perspective
of being part of the social setting.
● non-participant observer: a researcher who does not become
involved in the situation being studied, e.g. by watching through
one-way glass or by keeping apart from the social group of the
participants.
● overt observer: the role of the observer is obvious to the participants.
● covert observer: the role of the observer is not obvious, e.g. because
they are hidden or disguised.
● correlation: a research method which looks for a relationship
between two measured variables. A change in one variable is related to
a change in the other, although these changes cannot be assumed to
be causal.
● positive correlation: a relationship between two variables in which an
increase in one accompanies an increase in the other, i.e. the two
variables increase together.
● negative correlation: a relationship between two variables in which an
increase in one accompanies a decrease in the other, i.e. higher scores
on one variable correspond with lower scores on the other.
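The direction of a correlation is usually summarised by Pearson's correlation coefficient, which is positive when the variables rise together and negative when one rises as the other falls. A minimal sketch with invented revision data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two sets of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

hours_revised = [1, 2, 3, 4, 5]
test_score    = [40, 50, 55, 65, 80]   # rises with revision: positive r
errors_made   = [20, 16, 15, 10, 4]    # falls with revision: negative r

r_pos = pearson_r(hours_revised, test_score)   # close to +1
r_neg = pearson_r(hours_revised, errors_made)  # close to -1
```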
● hypothesis (plural hypotheses): a testable statement predicting a
difference between levels of the independent variable (in an experiment)
or a relationship between variables (in a correlation).
● alternative hypothesis: the testable statement which predicts a
difference or relationship between variables in a particular investigation.
● non-directional (two-tailed) hypothesis: a statement predicting only
that one variable will be related to another, e.g. that there will be a
difference in the DV between levels of the IV in an experiment or that
there will be a relationship between the measured variables in a
correlation.
● directional (one-tailed) hypothesis: a statement predicting the
direction of a relationship between variables, e.g. in an experiment
whether the levels of the IV will produce an increase or a decrease in
the DV or in a correlation whether an increase in one variable will be
linked to an increase or a decrease in another variable.
● null hypothesis: a testable statement saying that any difference or
correlation in the results is due to chance, i.e. that no pattern in the
results has arisen because of the variables being studied.
● operationalisation: the definition of variables so that they can be
accurately manipulated, measured or quantified and replicated. This
includes the IV and DV in experiments and the two measured variables
in correlations.
● situational variable: a confounding variable caused by an aspect of
the environment, e.g. the amount of light or noise.
● control: a way to keep a potential extraneous variable constant, e.g.
between levels of the IV, to ensure measured differences in the DV are
likely to be due to the IV, raising validity.
● population: the group, sharing one or more characteristics, from which
a sample is drawn.
● sample: the group of people selected to represent the population in a
study.
● sampling technique: the method used to obtain the participants for a
study from the population.
● opportunity sample: participants are chosen because they are
available, e.g. university students are selected because they are
present at the university where the research is taking place.
● volunteer (self-selected) sample: participants are invited to
participate, e.g. through advertisements via email or notices. Those who
reply become the sample.
● random sample: all members of the population (i.e. possible
participants) are allocated numbers and a fixed amount of these are
selected in an unbiased way, e.g. by taking numbers from a hat.
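The "numbers from a hat" procedure above can be sketched in Python, where `random.sample` draws the required number of members without bias (the population size here is hypothetical):

```python
import random

# Every member of the population is numbered; random.sample plays the
# part of drawing numbers from a hat, so each member has an equal
# chance of selection and nobody can be picked twice.
population = list(range(1, 101))   # hypothetical population of 100 people
sample = random.sample(population, 10)
```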
● quantitative data: numerical results about the quantity of a
psychological measure such as pulse rate or a score on an intelligence
test.
● qualitative data: descriptive, in-depth results indicating the quality of a
psychological characteristic, such as responses to open questions in
self-reports or case studies and detailed observations.
● measure of central tendency: a mathematical way to find the typical
or average score from a data set, using the mode, median or mean.
● mode: the measure of central tendency that identifies the most frequent
score(s) in a data set.
● median: the measure of central tendency that identifies the middle
score of a data set which is in rank order (smallest to largest). If there
are two numbers in the middle they are added together and divided by
two.
● mean: the measure of central tendency calculated by adding up all the
scores and dividing by the number of scores in the data set.
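The three measures of central tendency defined above can be computed directly with Python's `statistics` module (the score set is invented for illustration):

```python
import statistics

scores = [3, 5, 5, 6, 7, 8, 10]

mode_value   = statistics.mode(scores)    # 5: the most frequent score
median_value = statistics.median(scores)  # 6: the middle score in rank order
mean_value   = statistics.mean(scores)    # 44 / 7: sum divided by the count
```

With an even number of scores, `statistics.median` averages the two middle values, matching the definition given above.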
● measure of spread: a mathematical way to describe the variation or
dispersion within a data set.
● range: the difference between the biggest and smallest values in the
data set plus one (a measure of spread).
● standard deviation: a calculation of the average difference between
each score in the data set and the mean. Bigger values indicate greater
variation (a measure of spread).
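Both measures of spread can be computed for a small invented data set. Note the "plus one" convention for the range follows the definition given above, and `statistics.stdev` returns the sample standard deviation (the square root of the squared deviations from the mean divided by n − 1), a slightly more precise formula than the informal "average difference" description:

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]

value_range = max(scores) - min(scores) + 1  # "plus one" convention above
sd = statistics.stdev(scores)                # sample standard deviation
```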
● bar chart: a graph used for data in discrete categories and total or
average scores. There are gaps between the bars because the
categories on the x-axis are separate, not points on a continuous
scale.
● histogram: a graph used to illustrate continuous data, e.g. to show the
distribution of a set of scores. It has a bar for each score value, or
group of scores, along the x-axis. The y-axis shows the frequency of
each score value or group.
● scatter graph: a way to display data from a correlational study. Each
point on the graph represents one participant, plotted where their
scores on the two measured variables cross.
● normal distribution: a spread of a variable that is symmetrical about
the mean, median and mode, which coincide. The graph of the
frequency of each score or value rises gradually and symmetrically to a
maximum at this central point, so it is sometimes called a ‘bell curve’
because of its shape.
● ethical issues: problems in research that raise concerns about the
welfare of participants (or have the potential for a wider negative impact
on society).
● ethical guidelines: pieces of advice that guide psychologists to
consider the welfare of participants and wider society.
● debriefing: giving participants a full explanation of the aims and
potential consequences of the study at the end of a study so that they
leave in at least as positive a condition as they arrived.
● protection of participants: participants should not be exposed to any
greater physical or psychological risk than they would expect in their
day-to-day life.
● deception: participants should not be deliberately misinformed (lied to)
about the aim or procedure of the study. If this is unavoidable, the study
should be planned to minimise the risk of distress, and participants
should be thoroughly debriefed.
● generalisability: how widely findings apply, e.g. to other settings and
populations.
● test–retest: a way to measure the consistency of a test or task. The
test is used twice and if the participants’ two sets of scores are similar,
i.e. correlate well, it has good reliability.
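Test–retest reliability can be checked by correlating the two sets of scores, as sketched below with invented data; the 0.8 cut-off is a common rule of thumb, not a fixed standard:

```python
import math

def pearson_r(xs, ys):
    """Correlation between the two sets of test scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

first_test  = [12, 15, 9, 20, 17]
second_test = [11, 16, 10, 19, 18]   # same participants, same test, later

r = pearson_r(first_test, second_test)
reliable = r > 0.8   # common rule of thumb, not an absolute standard
```

A high positive correlation means participants kept roughly the same rank order across the two occasions, which is what "consistency" means here.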
