Understanding randomised controlled trials

March 18, 2024

The concept of evidence hierarchy in evaluating intervention effectiveness is elucidated, with emphasis on the randomized controlled trial (RCT) as the gold standard. Critical appraisal considerations for RCTs, including assessing methodology validity, treatment effect magnitude and precision, and research result applicability, are discussed. Key terminologies like randomization, allocation concealment, blinding, intention-to-treat analysis, p-values, and confidence intervals are also clarified.

In the initial installment of this series, evidence-based medicine (EBM) was introduced as a systematic method for resolving clinical problems by integrating the best available research evidence with clinical expertise and patient values. This article delves into the hierarchy of evidence for gauging intervention effectiveness and focuses on the randomized controlled trial (RCT), considered the benchmark for assessing intervention efficacy.

Hierarchy of Evidence

Certain research designs are better suited than others to answering questions about intervention effectiveness, an insight that led to the development of the “hierarchy of evidence.” This framework ranks evidence, providing guidance on evaluating healthcare interventions and identifying which studies carry the most weight when multiple studies examine the same question using different methodologies.

Figure 1. Hierarchy of evidence for questions about the effectiveness of an intervention or treatment

Figure 1 depicts this hierarchy, illustrating a progression from basic observational methods at the base to increasingly stringent methodologies. The pyramid shape symbolizes the escalating risk of bias inherent in study designs as one moves down the pyramid. The randomized controlled trial (RCT) is hailed for offering the most dependable evidence on intervention effectiveness, as its processes minimize the influence of confounding factors on results. Consequently, RCT findings are deemed closer to the true effect compared to other research methods. This hierarchy suggests that when seeking evidence on intervention effectiveness, well-conducted systematic reviews of RCTs with or without meta-analysis, or well-conducted RCTs, offer the most robust evidence. For instance, if the inquiry revolves around whether children with meningitis should receive corticosteroids, the most pertinent articles would be systematic reviews or RCTs.

WHAT IS A RANDOMISED CONTROLLED TRIAL?

A randomized controlled trial (RCT) is a study design where participants are randomly allocated to different clinical interventions. It is considered the most scientifically rigorous method for testing hypotheses and is widely recognized as the gold standard for evaluating intervention effectiveness. The fundamental structure of an RCT is outlined in Figure 2.

Figure 2. The basic structure of a randomised controlled trial

In an RCT, a sample from the population of interest is randomly assigned to receive one of several interventions, and these groups are then followed over a set period. Besides the different interventions, both groups are treated and monitored in the same way. At the study’s conclusion, the groups are compared based on pre-defined outcomes. For example, outcomes from the group receiving Treatment A are compared with those from the group receiving Treatment B. Since both groups are treated the same way except for the intervention, any outcome differences are attributed to the trial therapy.

WHY A RANDOMISED CONTROLLED TRIAL?

The primary goal of random assignment in a controlled trial is to prevent selection bias by distributing patient characteristics that could influence the outcome randomly between the groups, so that any difference in outcome can reasonably be attributed to the treatment. Random allocation therefore increases the likelihood that the intervention groups are comparable at baseline with respect to factors such as age, sex, disease activity, and disease duration, any of which may affect the outcome.

APPRAISING A RANDOMISED CONTROLLED TRIAL

Evaluating a randomized controlled trial (RCT) involves assessing several key aspects to determine the study’s reliability and relevance to your patient or population. When reviewing an RCT article, focusing on three crucial areas can help you make this assessment:

  • The validity of the trial methodology
  • The size and accuracy of the treatment’s effect
  • The applicability of the results to your patient or population

A list of 10 questions that may be used for the critical appraisal of an RCT across all three areas is given below.

Questions to consider when assessing an RCT

  • Did the study ask a clearly focused question?
  • Was the study an RCT, and was this design appropriate?
  • Were participants appropriately allocated to intervention and control groups?
  • Were participants, staff, and study personnel blind to participants’ study groups?
  • Were all the participants who entered the trial accounted for at its conclusion?
  • Were participants in all groups followed up and data collected in the same way?
  • Did the study have enough participants to minimise the play of chance?
  • How are the results presented and what are the main results?
  • How precise are the results?
  • Were all important outcomes considered and can the results be applied to your local population?

ASSESSING THE VALIDITY OF TRIAL METHODOLOGY

Evaluating the validity of trial methodology involves several key considerations:

Clear and Focused Research Question: The research question should be well-defined and easily understandable, even to those not specialized in the field, to clarify the study’s purpose.

Randomization: Randomization is the process of assigning participants to experimental or control groups randomly, ensuring each participant has an equal chance of being assigned to any group. This process aims to eliminate selection bias and balance known and unknown confounding factors, creating comparable groups.

Methods of Randomization: Various methods, such as using random number tables or computer programs, can be employed to assign participants randomly. However, methods like alternating assignment or assignment by birth date or hospital admission number can introduce bias.

Addressing Imbalance: In large clinical trials, simple randomization can often achieve balance between groups in terms of patient numbers and characteristics. However, in smaller studies, this balance may not be achieved. Block randomization and stratification are strategies used to help ensure balance between groups in size and patient characteristics.

Block randomisation

Block randomization is a method used to ensure an even distribution of patients across treatment groups in a trial. Participants are grouped in blocks, often of four, and each block contains an equal number of patients for each treatment arm (A and B). With a block size of four, there are six possible arrangements of two As and two Bs in each block: AABB, BBAA, ABAB, BABA, ABBA, BAAB.

To implement block randomization, a random number sequence is used to select a block, determining the order of allocation for the first four subjects. Subsequent blocks are used to allocate treatment groups to the next four patients based on the sequence specified in the next randomly selected block.
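The permuted-block procedure described above can be sketched in a few lines of Python. This is an illustrative implementation only: the block size, arm labels, and number of patients are arbitrary choices, not taken from any specific trial.

```python
import itertools
import random

def block_randomise(n_patients, block_size=4, arms=("A", "B")):
    """Allocate patients to arms using randomly selected permuted blocks.

    Every block contains an equal number of assignments to each arm,
    so group sizes never differ by more than half a block at any point.
    """
    per_arm = block_size // len(arms)
    # All distinct arrangements of one block, e.g. AABB, ABAB, ... for size 4
    blocks = sorted(set(itertools.permutations(arms * per_arm)))
    allocation = []
    while len(allocation) < n_patients:
        allocation.extend(random.choice(blocks))  # pick the next block at random
    return allocation[:n_patients]

alloc = block_randomise(12)
print(alloc)                               # e.g. ['A', 'B', 'B', 'A', ...]
print(alloc.count("A"), alloc.count("B"))  # always balanced: 6 6
```

With a block size of four and two arms, `blocks` contains exactly the six arrangements listed in the text (AABB, BBAA, ABAB, BABA, ABBA, BAAB).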

Stratification

Stratification is another method used to ensure groups are comparable in terms of important patient characteristics. While randomization helps remove selection bias, it does not guarantee similarity in all factors. Stratification involves generating separate block randomization lists for different combinations of prognostic factors. For instance, in a trial of enteral nutrition for inducing remission in active Crohn’s disease, potential stratification factors could include disease activity (measured by the pediatric Crohn’s disease activity index) and disease location (small bowel involvement). Blocks would be generated for each combination of these factors to ensure balanced allocation of patients.

Allocation concealment

Allocation concealment is a crucial technique in research to prevent selection bias. It involves concealing the allocation sequence from those assigning participants to intervention groups until the moment of assignment. This prevents researchers from consciously or unconsciously influencing which participants are assigned to a specific intervention group. For example, if the randomization sequence indicates that patient number 9 will receive treatment A, allocation concealment ensures that researchers cannot manipulate the assignment of another patient to position 9.

In a study by Schulz et al, trials with inadequate allocation concealment were found to overestimate treatment effects by approximately 41% compared to trials with proper concealment. A common method of concealing allocation is to seal each assignment in an opaque envelope. However, this method has drawbacks, and “distance” randomization is often preferred. Distance randomization involves removing the assignment sequence from those making the assignments. When a patient is recruited, the investigator contacts a central randomization service, which then issues the treatment allocation.

While an RCT is designed to eliminate selection bias, it is not foolproof. It’s important not to assume that a trial is valid just because it claims to be an RCT. Any selection bias in an RCT can invalidate the study design, making the results no more reliable than those of an observational study. Therefore, it’s crucial to ensure proper allocation concealment to maintain the integrity of the study design and the reliability of the results.

Blinding

Blinding, also known as masking, is a critical practice in clinical trials to minimize biases that may arise when participants, healthcare professionals, or researchers know which treatment group a participant is in. Such awareness can influence participants’ reporting of outcomes, healthcare professionals’ clinical judgment, and the analysis of data, leading to biased outcome assessment or to patients being managed differently based on perceived treatment effectiveness.

Blinding helps prevent these biases by ensuring that participants, healthcare professionals, and data analysts are unaware of who is receiving the experimental treatment and who is in the control group. In a single-blind study, participants are unaware of their treatment allocation, while in a double-blind study, both participants and data collectors are unaware. In rare cases, where participants, data collectors, and data evaluators are all unaware, it is referred to as a triple-blind study.

Recent studies have shown that blinding of patients and healthcare professionals can reduce bias. Trials that were not double-blinded tended to overestimate treatment effects compared to those that reported double blinding. However, it is important to note that blinding is not always feasible or appropriate, especially in studies where blinding of participants and healthcare professionals is challenging. In such cases, blinding of data analysts may still be possible to maintain the integrity of the study.

Intention to treat analysis

Intention-to-treat (ITT) analysis is a critical strategy in randomized controlled trials (RCTs) to minimize bias and maintain the validity of the study. In an RCT, randomization aims to balance known and unknown factors between treatment groups. However, some participants may not complete the study as planned due to various reasons such as misdiagnosis, non-compliance, or withdrawal. Excluding these participants from the analysis can introduce bias by altering the balance of prognostic factors between groups.

To address this potential bias, ITT analysis includes all participants who were allocated to a treatment group, regardless of whether they received the treatment as intended or completed the study. This approach reflects the real-world scenario where not all participants adhere to the protocol. By analyzing all participants according to their assigned treatment, ITT analysis maintains the integrity of the randomization process and provides a more accurate representation of the treatment effect.

According to the revised CONSORT statement for reporting RCTs, authors should clearly specify which participants are included in their analyses and provide sample sizes or denominators for all reported results. Main results should be analyzed based on ITT principles, but authors may also report additional analyses based on participants who adhered to the intended protocol (per-protocol analyses) when necessary.

Power and sample size calculation

Power in a randomized controlled trial (RCT) refers to its ability to detect a difference between groups if such a difference exists. It is influenced by various factors, including the frequency of the outcome, the size of the effect, the study design, and the sample size. To ensure that an RCT can effectively address its research question, it must have a sufficiently large sample size, with an adequate number of participants in each group.

A study with a small sample size may not be able to detect true differences in outcomes between groups, leading to potential waste of resources and ethical concerns. Unfortunately, many small studies are published without reporting their statistical power or the probability of detecting a clinically important effect if it exists. Therefore, researchers should carefully plan their studies to ensure an adequate sample size that provides a high probability of detecting even the smallest clinically significant effect.
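The interplay between significance level, power, and effect size can be illustrated with the standard normal-approximation formula for comparing two proportions. This is a sketch under textbook assumptions, and the remission rates used are purely illustrative, not from any real trial.

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate sample size per arm to detect a difference between two
    proportions with a two-sided test (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical trial: remission in 40% of controls vs 60% on treatment
print(n_per_group(0.40, 0.60))  # 95 per arm under these assumptions
```

Note how demanding 90% rather than 80% power, or seeking a smaller difference between the proportions, pushes the required sample size up sharply.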

MAGNITUDE AND SIGNIFICANCE OF TREATMENT EFFECT

After establishing the validity of a study’s methodology, the next consideration is the reliability of its results. This assessment typically involves evaluating the size of the treatment effect and the likelihood that the observed result is merely due to chance.

Treatment Effect Magnitude

The magnitude of the treatment effect refers to the extent of the effect measured. In randomized controlled trials (RCTs), treatment effects can be presented using various metrics such as absolute risk, relative risk, odds ratio, and number needed to treat. Each metric has its strengths and weaknesses, which have been recently reviewed. A larger treatment effect is generally considered more significant than a smaller one.
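All of these metrics can be derived from the same 2×2 table of events by group. The event counts below are hypothetical, chosen only to show how the measures relate to one another.

```python
def effect_measures(events_t, n_t, events_c, n_c):
    """Common expressions of a treatment effect, computed from event counts
    in the treatment (t) and control (c) groups."""
    risk_t, risk_c = events_t / n_t, events_c / n_c
    arr = risk_c - risk_t                  # absolute risk reduction
    rr = risk_t / risk_c                   # relative risk
    odds_t = events_t / (n_t - events_t)
    odds_c = events_c / (n_c - events_c)
    odds_ratio = odds_t / odds_c           # odds ratio
    nnt = 1 / arr                          # number needed to treat
    return {"ARR": arr, "RR": rr, "OR": odds_ratio, "NNT": nnt}

# Hypothetical trial: 20/100 bad outcomes on treatment vs 30/100 on control
print(effect_measures(20, 100, 30, 100))
# ARR 0.10, RR ≈ 0.67, OR ≈ 0.58, NNT 10
```

The same data can thus look quite different depending on the metric chosen (a relative risk of 0.67 sounds more impressive than an absolute risk reduction of 10%), which is why appraisal checklists ask how the results are presented.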

Statistical Significance

Statistical significance indicates the probability that the results of a study were not simply due to chance. This is often assessed using p-values and confidence intervals. These statistical measures help determine whether the observed effect is likely to be real or could have occurred by chance alone.

p-Value

The p-value is the probability of observing a difference between the two treatment groups at least as large as the one seen, if in reality there were no difference (the null hypothesis). Researchers conventionally use a significance level of 0.05: if the p-value is less than 0.05, the difference between groups is considered statistically significant, and the null hypothesis (no difference) is rejected in favor of the alternative hypothesis (a real difference). Conversely, if the p-value is greater than 0.05, the observed difference could plausibly have occurred by chance, the null hypothesis is not rejected, and the result is considered statistically non-significant.
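As an illustration, a two-sided p-value for a difference between two proportions can be computed with a pooled z-test (a normal approximation; the event counts below are hypothetical).

```python
import math
from statistics import NormalDist

def two_proportion_p(events_t, n_t, events_c, n_c):
    """Two-sided p-value for a difference between two proportions,
    using a z-test with a pooled proportion (normal approximation)."""
    p_t, p_c = events_t / n_t, events_c / n_c
    pooled = (events_t + events_c) / (n_t + n_c)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical trial: 20/100 events on treatment vs 30/100 on control
p = two_proportion_p(20, 100, 30, 100)
print(round(p, 3))  # ≈ 0.10, i.e. not significant at the 0.05 level
```

Despite a 10-percentage-point difference between the groups, 100 patients per arm is not enough here to rule out chance at the conventional 0.05 level.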

Confidence Intervals

Confidence intervals (CIs) provide an estimate of the range within which the true treatment effect for the population of interest is likely to lie. A 95% CI is commonly used: if the study were repeated many times, 95% of the intervals calculated in this way would contain the true effect. Other confidence levels, such as 90% or 99%, can also be calculated.

For mean differences, if the CI includes 0, there is no statistically significant difference between the groups. If the CI does not include 0, a statistically significant difference is indicated. For relative risk or odds ratio, if the CI includes 1, there is no statistically significant difference, but if it does not include 1, a statistically significant difference is suggested.
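This rule can be checked numerically. The sketch below computes a normal-approximation CI for a difference between two proportions (the event counts are hypothetical); whether the interval includes 0 determines statistical significance.

```python
import math
from statistics import NormalDist

def risk_difference_ci(events_t, n_t, events_c, n_c, level=0.95):
    """Confidence interval for the difference between two proportions
    (normal approximation). If the interval excludes 0, the difference
    is statistically significant at the corresponding level."""
    p_t, p_c = events_t / n_t, events_c / n_c
    diff = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    z = NormalDist().inv_cdf(0.5 + level / 2)  # 1.96 for a 95% CI
    return diff - z * se, diff + z * se

# Hypothetical trial: 20/100 events on treatment vs 30/100 on control
lo, hi = risk_difference_ci(20, 100, 30, 100)
print(round(lo, 3), round(hi, 3))  # interval includes 0 → not significant
```

The interval also conveys precision: its width shrinks as the sample sizes grow, which a p-value alone does not show.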

Confidence Intervals vs. p-Values

Confidence intervals (CIs) are more informative than p-values. CIs not only indicate statistical significance but also provide a range of plausible values for the population parameter, conveying the precision of the measured treatment effect. Authors should ideally report both p-values and CIs; if only one is reported, it should be the CI, because statistical significance can be inferred from a CI, whereas a p-value adds little additional information once the CI is known.

Clinical Significance

A statistically significant finding does not automatically imply clinical significance. Clinical significance refers to the practical value of results for patients, indicating a difference in effect size between groups that could be considered important in clinical decision-making, regardless of statistical significance. While magnitude and statistical significance are objective calculations, judgments about clinical significance are relative to the context of interest. Assessments of clinical significance should consider how patients value the benefits and potential adverse effects of an intervention.

Precision of Treatment Effect

The precision of an estimate can be understood through its confidence interval (CI). The width of the CI reflects the precision of the estimate: a wider interval indicates lower precision, while a narrower interval indicates higher precision. A wide interval suggests that more data may be needed to draw definitive conclusions about the estimate.

Applying Results to Your Own Patients

A fundamental principle of Evidence-Based Medicine (EBM) is that clinicians should evaluate whether valid study results are applicable to their individual patients. Just because there is strong evidence supporting a specific asthma treatment does not mean it is suitable for all asthma patients. Before incorporating research evidence into clinical practice, several considerations should be taken into account, as outlined below.

Do the Study Participants Reflect My Patients?

When considering a treatment studied in one population for use in another, such as a drug effective in American adults with meningitis for use in British children, it’s essential to evaluate if there are any biological, geographical, or cultural factors that could affect its efficacy in your patients.

Are the Benefits Worth the Risks?

Even if a treatment proves effective in a clinical trial, it’s crucial to weigh its potential benefits against its known or potential side effects. Additionally, consider any comorbid conditions that might influence the risk-benefit balance for an individual patient. In some cases, you may choose not to offer the treatment after discussing it with the patient or their caregivers.

Do the Patient’s Values Align with the Treatment?

Patients or caregivers should be fully informed about the treatment, and their preferences and values should be taken into consideration. It’s important to assess how they perceive the benefits and risks of the treatment.

Is the Treatment Accessible and Affordable?

Prescribing a treatment that is not available or affordable in your practice or hospital setting is impractical. Consider the availability and funding of the treatment in your area before making a decision.

CONCLUSIONS

In conclusion, while Randomized Controlled Trials (RCTs) are the gold standard for evaluating healthcare interventions, bias can occur due to flaws in trial design and management. Therefore, it is essential for readers of medical literature to be able to critically appraise RCTs, evaluating the validity of the methodology, the size and accuracy of the treatment effect, and the relevance of the results.
