ClearConcept Team

How to Master A-Level Research Methods: Complete Exam Technique

A-Level Research Methods: variables, experimental design, sampling, statistics, ethics. Master exam technique for psychology and sociology.

a-level · research-methods · psychology · sociology · exam-technique


Research methods is the section that most A-Level Psychology and Sociology students under-revise. It seems technical, it feels separate from the "interesting" content, and students assume they will pick it up through exposure to studies in other topics.

That assumption is wrong. Research methods questions are highly predictable, highly specific, and carry marks that are genuinely available with focused preparation. This guide covers what you need to know, how the exam tests it, and where students most commonly lose marks.


Why Research Methods Matters

Research methods questions appear throughout the papers, not just in a dedicated section. A question about the Stanford Prison Experiment will assess your knowledge of the study and your ability to evaluate its methodology. A question asking you to design a study for a given scenario is testing your methods knowledge directly.

More importantly: research methods questions are among the most markable in the specification. Unlike extended essay questions, where marks depend partly on interpretation and argument, methods questions have clear correct answers. You either know what an independent variable is, or you do not. You either understand why random sampling produces more representative results than opportunity sampling, or you do not.

This means research methods is one of the highest-return areas to revise in the final weeks. Accurate knowledge of the core concepts, practised through application questions, can reliably secure marks that other students leave behind.


Variables: The Foundation

Almost every research methods question draws on an understanding of variables.

The independent variable is what the researcher manipulates or changes. In a study of whether background music affects concentration, the independent variable is the type of background music - perhaps silence versus instrumental music versus music with lyrics.

The dependent variable is what the researcher measures to assess the effect of the independent variable. In the same study, the dependent variable might be the score on a concentration task completed under each condition.

Extraneous variables are anything else that might affect the dependent variable and which the researcher has not intentionally introduced. Temperature, noise from outside, the time of day, or participant anxiety are all potential extraneous variables. When these are not controlled and actually do affect the outcome, they become confounding variables - they confound the results, making it impossible to know whether the effect came from the independent variable or from something else.

Operationalisation is the process of defining variables in measurable terms. "Concentration" is too vague to measure. "Score on a 10-item attention task completed in 5 minutes" is operationalised. Exam questions frequently ask you to operationalise a variable for a given scenario - practise this with different psychological constructs (anxiety, aggression, attachment security, stress) until you can do it quickly and specifically.


Research Designs

The design of a study determines how participants are assigned to conditions, and each design has specific strengths and weaknesses.

An independent groups design uses different participants in each condition. The advantage is that there are no order effects - participants who do one condition are not influenced by having done another. The disadvantage is participant variables: if the groups differ in some relevant way (average age, baseline ability, prior experience), this could confound the results.

A repeated measures design uses the same participants in all conditions. This eliminates participant variables, because the same people experience all conditions. The disadvantage is order effects: participants may improve across conditions due to practice (practice effect) or perform worse due to fatigue or boredom. Counterbalancing - splitting participants into groups that experience conditions in different orders - partially controls for this.

A matched pairs design uses different participants in each condition, but participants are matched on relevant variables before assignment. It combines some advantages of both other designs - no order effects, reduced participant variables - but matching is time-consuming and perfect matching is rarely achieved.
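The counterbalancing idea mentioned above can be made concrete with a short sketch. This is an illustration only, using the hypothetical music-and-concentration study from earlier - the condition names and participant labels are invented for the example:

```python
import random

def counterbalance(participants):
    """Split participants into two groups that experience the two
    conditions in opposite orders (the ABBA pattern), so that
    practice and fatigue effects are spread across both conditions."""
    shuffled = participants[:]
    random.shuffle(shuffled)  # random allocation to order groups
    half = len(shuffled) // 2
    orders = {}
    for p in shuffled[:half]:
        orders[p] = ["silence", "music"]   # group 1: condition A first
    for p in shuffled[half:]:
        orders[p] = ["music", "silence"]   # group 2: condition B first
    return orders

orders = counterbalance(["p1", "p2", "p3", "p4"])
```

Note that the shuffle here is random allocation (assigning participants to orders), not random sampling - a distinction the exam frequently tests.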


Sampling Methods

The target population is the group the researcher wants to draw conclusions about. The sample is the subset of that population actually studied. The method used to select the sample determines how representative it is.

Random sampling gives every member of the target population an equal chance of selection. It produces the most representative samples, but requires a complete list of the population (a sampling frame) and can be time-consuming.

Systematic sampling selects every nth person from a list. It is more practical than random sampling and produces reasonably representative results, but can introduce bias if the list has any regular pattern.

Stratified sampling divides the population into subgroups (strata) relevant to the research - for example, by age, gender, or socioeconomic background - and samples from each group in proportion to its representation in the population. It is effective but requires good population information and careful design.

Opportunity sampling uses whoever is available. It is quick and cheap but produces biased samples - whoever happens to be accessible is unlikely to represent the target population well.

Volunteer sampling uses self-selected participants who respond to a recruitment call. It is practical but introduces volunteer bias: people who choose to participate may differ systematically from those who do not.
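The differences between these sampling methods are easy to see in code. A minimal sketch, using an invented 100-person sampling frame - opportunity sampling is crudely modelled as "take whoever is nearest the front of the list":

```python
import random

population = [f"person_{i}" for i in range(100)]  # hypothetical sampling frame
sample_size = 10

# Random sampling: every member has an equal chance of selection
random_sample = random.sample(population, sample_size)

# Systematic sampling: every nth person from the list
n = len(population) // sample_size
systematic_sample = population[::n]

# Stratified sampling: sample from each subgroup in proportion to its size
strata = {"16-17": population[:60], "18-19": population[60:]}  # invented strata
stratified_sample = []
for group in strata.values():
    k = round(len(group) / len(population) * sample_size)
    stratified_sample.extend(random.sample(group, k))

# Opportunity sampling: whoever happens to be available first
opportunity_sample = population[:sample_size]
```

The stratified loop shows why that method needs good population information: the proportions (60% and 40% here) must be known before sampling can begin.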


Statistical Tests and Significance

At A-Level, you do not need to perform complex statistical calculations, but you do need to understand what statistical tests are for and when to use them.

Statistical tests allow researchers to judge whether a result is likely due to chance or whether it reflects a genuine effect. The significance level (typically p < 0.05) means a result is accepted as significant only if there is less than a 5% probability of obtaining a difference that large by chance alone, assuming no real effect exists.
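This logic of "how often could chance alone produce this?" can be demonstrated with a short simulation. The scores below are invented, and the permutation approach shown is an illustration of the underlying idea rather than one of the named A-Level tests:

```python
import random

random.seed(1)

# Hypothetical concentration scores under two conditions
group_a = [12, 15, 14, 13, 16, 11, 15, 14]   # silence
group_b = [14, 17, 16, 15, 18, 13, 17, 16]   # music
observed = sum(group_b) / len(group_b) - sum(group_a) / len(group_a)

# Shuffle the group labels many times: if there is no real effect,
# how often does chance alone produce a difference this large?
pooled = group_a + group_b
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:8], pooled[8:]
    if sum(b) / 8 - sum(a) / 8 >= observed:
        count += 1

p_value = count / trials
# If p_value < 0.05, chance alone rarely produces a difference
# this large, so the result is treated as significant.
```

A small p_value does not prove the effect is real - it only says the observed difference would be rare if nothing were going on.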

The choice of test depends on three factors: whether the design is related (repeated measures or matched pairs) or unrelated (independent groups); the level of data (nominal - categories; ordinal - rankings; interval - measured on a consistent scale); and what is being tested (difference between groups, or correlation between variables).

For most A-Level scenarios, you will be asked to identify the appropriate test rather than calculate it. Common tests include the Mann-Whitney U test (unrelated design, ordinal data, testing for difference), the Wilcoxon signed-ranks test (related design, ordinal data, testing for difference), and Spearman's rank correlation coefficient (testing for a relationship between two variables, ordinal data).

The exam question usually gives you a scenario and asks which test you would use, with justification. Practise identifying the design type and data level for different scenarios, then match these to the appropriate test.
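The matching process described above is mechanical enough to write out as a small lookup - a revision aid covering only the three tests named in this guide, not a complete decision tree for every test on the specification:

```python
def choose_test(design, data_level, testing_for):
    """Pick a statistical test from design type ('related'/'unrelated'),
    data level ('nominal'/'ordinal'/'interval'), and purpose
    ('difference'/'relationship'). Covers only the three tests above."""
    if testing_for == "relationship" and data_level == "ordinal":
        return "Spearman's rank correlation coefficient"
    if testing_for == "difference" and data_level == "ordinal":
        if design == "unrelated":
            return "Mann-Whitney U test"
        if design == "related":
            return "Wilcoxon signed-ranks test"
    return "not covered in this guide"

choose_test("unrelated", "ordinal", "difference")  # Mann-Whitney U test
```

Working through scenarios in this order - design first, then data level, then purpose - is exactly the justification structure the mark scheme rewards.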


Ethics

The British Psychological Society Code of Ethics sets the standards for research with human participants. The core principles examiners test are:

Informed consent: participants should understand what they are agreeing to before they participate. Fully informed consent is not always possible (for example, if the study involves deception), but participants should be debriefed afterwards.

Right to withdraw: participants should be able to leave the study at any time, without penalty. Milgram's prods ("you must continue") are a direct violation of this principle.

Protection from harm: participants should not be exposed to greater risk of physical or psychological harm than they would encounter in ordinary life.

Confidentiality: participants' data should be kept confidential and not identified without their agreement.

Deception: this is sometimes unavoidable in psychological research, but should be used only when necessary, and should always be followed by a full debriefing. Deception that causes distress on revelation is ethically problematic.

For each ethical principle, you should be able to: define it, explain why it matters, give an example of a study that either respected or violated it, and evaluate the trade-off between ethical standards and the scientific value of the research.


Common Exam Errors

The most frequently made errors in research methods questions:

Confusing random sampling with random allocation. Random sampling is how you select your participants. Random allocation is how you assign participants to conditions within an independent groups study. They are different things, and confusing them is a common source of lost marks.

Weak operationalisation: writing "measure anxiety" rather than "score on the GAD-7 anxiety scale" or "number of anxious behaviours recorded during a 5-minute observation period."

Not applying evaluation to the specific study in question: writing "small sample sizes reduce generalisability" as a generic statement rather than "the sample of 20 psychology students limits generalisability to the broader adult population."

Ignoring the question command: "suggest a suitable sampling method and justify your choice" requires both a method and a reason. Giving only the method earns half marks at best.


Using ClearConcept for Research Methods

ClearConcept includes research methods content and practice questions mapped to the psychology and sociology specifications. Applied questions - "design a study to test the following hypothesis" - are included alongside concept-based flashcards.

Explore ClearConcept's research methods content

