Sample Size and Influencing Factors MCQs with Answers

Designing an appropriate sample size is a cornerstone of rigorous M.Pharm research. This concise quiz on “Sample size and influencing factors” focuses on the statistical and practical aspects that determine how many subjects, observations or events are needed for valid conclusions. You will encounter items on effect size, variance, confidence level, power, design effects, cluster sampling, attrition, pilot studies, and sample size adjustments for different study designs (cross-sectional, experimental, survival, and regression models). The questions emphasize conceptual understanding and application so you can confidently plan or critique sample size calculations in pharmacological and clinical research settings.

Q1. What is the primary purpose of calculating sample size in a study?

  • To ensure the study is completed quickly
  • To estimate a population parameter with desired precision and statistical power
  • To maximize the number of publications
  • To make data collection easier

Correct Answer: To estimate a population parameter with desired precision and statistical power

Q2. Which change will generally decrease the required sample size for detecting a difference between two groups?

  • Reducing the acceptable Type I error (alpha)
  • Increasing desired power (1 − beta)
  • Expecting a larger effect size
  • Increasing outcome variance

Correct Answer: Expecting a larger effect size

Q3. How does increasing the confidence level (for example from 95% to 99%) affect required sample size?

  • It decreases sample size
  • It increases sample size
  • It has no effect
  • It only affects qualitative studies

Correct Answer: It increases sample size
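To see why, consider estimating a mean to within a margin of error d: the required n is (z × σ / d)², and the z value grows as the confidence level rises. A minimal Python sketch is shown below; the SD and precision values are purely illustrative, not taken from the quiz.

```python
import math
from statistics import NormalDist

def n_for_mean(sigma, d, confidence=0.95):
    """Sample size to estimate a mean within +/- d at a given confidence level:
    n = (z_{1-alpha/2} * sigma / d)^2, rounded up."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil((z * sigma / d) ** 2)

# Illustrative values: SD = 10 units, desired precision d = 2 units
print(n_for_mean(10, 2, confidence=0.95))  # z ≈ 1.96  -> 97
print(n_for_mean(10, 2, confidence=0.99))  # z ≈ 2.576 -> 166 (higher confidence, larger n)
```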

Q4. What is the effect of greater variability (higher standard deviation) in the outcome on the sample size needed for estimating a mean?

  • Greater variability reduces required sample size
  • Greater variability increases required sample size
  • Variability does not influence sample size for means
  • Variability only matters for categorical outcomes

Correct Answer: Greater variability increases required sample size

Q5. Which of the following is NOT a direct component of the basic sample size formula for comparing two independent means?

  • Desired significance level (alpha)
  • Estimated standard deviation of outcome
  • Expected difference between group means (effect size)
  • Population census data for unrelated regions

Correct Answer: Population census data for unrelated regions
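For reference, the basic per-group formula for comparing two independent means uses only alpha, power, the outcome SD, and the expected difference: n = 2σ²(z₁₋α/₂ + z₁₋β)² / Δ². The sketch below implements this under standard normal-approximation assumptions; the numbers are illustrative only.

```python
import math
from statistics import NormalDist

def n_per_group_two_means(sigma, delta, alpha=0.05, power=0.80):
    """n per group for a two-sided comparison of two independent means:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2"""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * sigma**2 * (z_alpha + z_beta)**2 / delta**2)

# Illustrative: SD = 10, expected difference = 5, alpha = 0.05, power = 80%
print(n_per_group_two_means(sigma=10, delta=5))   # ~63 per group
print(n_per_group_two_means(sigma=10, delta=10))  # larger effect size -> ~16 per group
```

Note how a larger expected difference (Q2) or a smaller SD (Q4) shrinks n, while a stricter alpha or higher power inflates it.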

Q6. When calculating sample size for estimating a prevalence (proportion) in a cross-sectional study, which parameter is essential?

  • Expected mean and SD of a continuous outcome
  • Estimated prevalence (p) and desired precision (d)
  • Number of covariates in a regression model
  • Median survival time

Correct Answer: Estimated prevalence (p) and desired precision (d)
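The standard formula for a prevalence estimate is n = z²p(1−p)/d². A minimal sketch follows, with an illustrative prevalence and precision rather than values from any particular study.

```python
import math
from statistics import NormalDist

def n_for_prevalence(p, d, confidence=0.95):
    """n to estimate a prevalence p within absolute precision +/- d:
    n = z^2 * p * (1 - p) / d^2"""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil(z**2 * p * (1 - p) / d**2)

# Illustrative: anticipated prevalence 20%, precision +/- 5%, 95% confidence
print(n_for_prevalence(p=0.20, d=0.05))  # ~246
```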

Q7. When should the finite population correction (FPC) be applied to sample size calculations?

  • When the population is infinite
  • When the sample is less than 1% of the population
  • When the sample constitutes a sizable fraction (commonly >5%) of a finite population
  • Only for continuous outcomes

Correct Answer: When the sample constitutes a sizable fraction (commonly >5%) of a finite population
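One common form of the correction is n_adj = n / (1 + (n − 1)/N). The sketch below shows how the adjustment matters only when n is a sizable fraction of N; the population sizes are illustrative.

```python
import math

def apply_fpc(n, N):
    """Adjust an 'infinite population' sample size n for a finite population of size N:
    n_adj = n / (1 + (n - 1) / N)"""
    return math.ceil(n / (1 + (n - 1) / N))

# Illustrative: calculated n = 246 drawn from a population of 1000 (well over 5% of N)
print(apply_fpc(246, 1000))       # ~198, a noticeable reduction
# For a very large population the correction is negligible:
print(apply_fpc(246, 1_000_000))  # ~246
```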

Q8. Increasing the desired statistical power from 80% to 90% will:

  • Decrease the required sample size
  • Increase the required sample size
  • Have no effect if alpha is fixed
  • Only affect non-parametric tests

Correct Answer: Increase the required sample size
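Because n scales with (z_alpha + z_beta)², the increase can be quantified directly, as in this short illustrative sketch.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf
z_alpha = z(0.975)             # two-sided alpha = 0.05
z_80, z_90 = z(0.80), z(0.90)  # ~0.8416 and ~1.2816

# n is proportional to (z_alpha + z_beta)^2, so moving from 80% to 90% power
# inflates the required sample size by roughly:
inflation = ((z_alpha + z_90) / (z_alpha + z_80)) ** 2
print(round(inflation, 2))  # ~1.34, i.e. about a third more subjects
```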

Q9. In cluster sampling, the ‘design effect’ accounts for:

  • Loss of power due to unequal variances only
  • Correlation between observations within clusters, often using DE = 1 + (m−1)ICC
  • Type I error inflation only
  • Reduction in response rate

Correct Answer: Correlation between observations within clusters, often using DE = 1 + (m−1)ICC
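The design effect multiplies the sample size that simple random sampling would require. A brief sketch, using an illustrative cluster size and ICC:

```python
import math

def design_effect(m, icc):
    """Design effect for cluster sampling with average cluster size m and
    intracluster correlation coefficient ICC: DE = 1 + (m - 1) * ICC"""
    return 1 + (m - 1) * icc

# Illustrative: simple-random-sample size 246, clusters of 20 subjects, ICC = 0.02
n_srs = 246
de = design_effect(m=20, icc=0.02)  # 1.38
print(de, math.ceil(n_srs * de))    # inflated total n ~340
```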

Q10. What is the commonly recommended minimum number of events per variable (EPV) in logistic regression to avoid overfitting?

  • 1 event per variable
  • 5 events per variable
  • 10 events per variable
  • 100 events per variable

Correct Answer: 10 events per variable
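Applying the rule of thumb: required events = 10 × number of candidate predictors, and total subjects = events ÷ expected event rate. The numbers below are illustrative.

```python
import math

def min_sample_for_logistic(n_predictors, event_rate, epv=10):
    """Rule-of-thumb minimum sample size for logistic regression:
    at least `epv` events per candidate predictor, so
    events = epv * n_predictors and n = events / event rate (of the rarer outcome)."""
    events_needed = epv * n_predictors
    return events_needed, math.ceil(events_needed / event_rate)

# Illustrative: 8 candidate predictors, outcome occurs in 25% of subjects
print(min_sample_for_logistic(8, 0.25))  # (80 events, 320 subjects)
```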

Q11. How should you adjust the calculated sample size to allow for anticipated dropouts or non-response?

  • Subtract the expected number of dropouts from the calculated sample size
  • Multiply the calculated sample size by the expected dropout proportion
  • Divide the calculated sample size by (1 − expected dropout proportion)
  • Ignore dropouts; report completer analysis only

Correct Answer: Divide the calculated sample size by (1 − expected dropout proportion)
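A quick sketch shows why dividing, rather than simply adding a percentage, gives enough completers; the dropout rate used is illustrative.

```python
import math

def inflate_for_dropout(n_calculated, dropout_rate):
    """Inflate a calculated sample size for anticipated dropout or non-response:
    n_adjusted = n_calculated / (1 - dropout_rate)."""
    return math.ceil(n_calculated / (1 - dropout_rate))

# Illustrative: 126 subjects needed for analysis, 15% dropout expected
print(inflate_for_dropout(126, 0.15))  # enrol 149
# Simply adding 15% (126 * 1.15 ≈ 145) would leave too few completers after dropout.
```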

Q12. Why is a pilot study useful for sample size estimation?

  • It replaces the need for hypothesis testing
  • It provides preliminary estimates of variance or prevalence to inform sample size calculations
  • It always gives the final sample size directly
  • It eliminates Type I error concerns

Correct Answer: It provides preliminary estimates of variance or prevalence to inform sample size calculations

Q13. In non-inferiority or equivalence trials, which type of hypothesis test is typically used for sample size calculation?

  • One-sided test (for non-inferiority) or two one-sided tests for equivalence
  • Only two-sided superiority tests
  • Permutation tests exclusively
  • Only descriptive statistics without testing

Correct Answer: One-sided test (for non-inferiority) or two one-sided tests for equivalence
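As an illustration only, a common normal-approximation formula for a non-inferiority comparison of means (assuming the true difference is zero and δ is the non-inferiority margin) is sketched below with one-sided alpha.

```python
import math
from statistics import NormalDist

def n_non_inferiority(sigma, margin, alpha=0.025, power=0.80):
    """n per group for a non-inferiority test on means (one-sided alpha),
    assuming a true difference of zero and a non-inferiority margin delta:
    n = 2 * sigma^2 * (z_{1-alpha} + z_{1-beta})^2 / margin^2"""
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # one-sided
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * sigma**2 * (z_alpha + z_beta)**2 / margin**2)

# Illustrative: SD = 10, margin = 5, one-sided alpha = 0.025, 80% power
print(n_non_inferiority(sigma=10, margin=5))  # ~63 per group
```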

Q14. For paired (dependent) sample designs, how does increasing correlation between paired measurements affect required sample size?

  • Higher correlation increases required sample size
  • Higher correlation reduces required sample size
  • Correlation has no effect for paired designs
  • Correlation only matters for categorical paired data

Correct Answer: Higher correlation reduces required sample size
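This follows because the variance of within-pair differences is 2σ²(1 − ρ), which shrinks as the correlation ρ rises. A minimal sketch with illustrative inputs:

```python
import math
from statistics import NormalDist

def n_paired(sigma, rho, delta, alpha=0.05, power=0.80):
    """n pairs for a paired design: SD of within-pair differences is
    sqrt(2 * sigma^2 * (1 - rho)), and
    n = (z_{1-alpha/2} + z_{1-beta})^2 * var_diff / delta^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    var_diff = 2 * sigma**2 * (1 - rho)
    return math.ceil((z_alpha + z_beta)**2 * var_diff / delta**2)

# Illustrative: SD = 10, expected mean difference = 5
print(n_paired(sigma=10, rho=0.2, delta=5))  # ~51 pairs
print(n_paired(sigma=10, rho=0.8, delta=5))  # ~13 pairs: higher correlation, fewer pairs
```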

Q15. In survival analysis sample size planning, calculations are often based primarily on which quantity?

  • Total number of study sites
  • Number of observed events (e.g., deaths, failures)
  • Baseline mean of a continuous biomarker
  • Estimated prevalence of exposure

Correct Answer: Number of observed events (e.g., deaths, failures)
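A Schoenfeld-type approximation for a 1:1 randomised comparison requires d = 4(z₁₋α/₂ + z₁₋β)²/(ln HR)² events; enrolment is then scaled up by the expected event probability. The hazard ratio and event probability below are illustrative.

```python
import math
from statistics import NormalDist

def events_required(hazard_ratio, alpha=0.05, power=0.80):
    """Schoenfeld-type approximation for a 1:1 randomised survival comparison:
    required events d = 4 * (z_{1-alpha/2} + z_{1-beta})^2 / (ln HR)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(4 * (z_alpha + z_beta)**2 / math.log(hazard_ratio)**2)

# Illustrative: target hazard ratio 0.70, two-sided alpha 0.05, 80% power
d = events_required(0.70)
print(d)                    # ~247 events
print(math.ceil(d / 0.60))  # ~412 subjects if ~60% are expected to experience the event
```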

Q16. Cohen’s d is best described as:

  • A measure of internal consistency
  • A standardized mean difference used as an effect size for comparing two means
  • A non-parametric rank statistic
  • A measure of sampling error

Correct Answer: A standardized mean difference used as an effect size for comparing two means
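Cohen's d is the difference in group means divided by the pooled SD, as sketched below with hypothetical group summaries.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical group summaries: means 12 vs 9, SDs 5 and 6, n = 30 per group
print(round(cohens_d(12, 5, 30, 9, 6, 30), 2))  # ~0.54, a "medium" effect by convention
```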

Q17. Which of the following software tools is commonly used for power and sample size calculations in clinical research?

  • G*Power
  • Microsoft Word
  • ImageJ
  • BLAST

Correct Answer: G*Power

Q18. What does a Type I error (alpha) represent in hypothesis testing?

  • The probability of failing to detect a true effect (false negative)
  • The probability of detecting an effect when none exists (false positive)
  • The probability of effect size being large
  • The probability of model overfitting

Correct Answer: The probability of detecting an effect when none exists (false positive)

Q19. For the same alpha level, how does using a two-sided test compare to a one-sided test in terms of required sample size?

  • Two-sided test requires a smaller sample size than one-sided
  • Two-sided test requires a larger sample size than one-sided
  • Both require identical sample sizes
  • Two-sided tests are only used for non-inferiority

Correct Answer: Two-sided test requires a larger sample size than one-sided
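The difference comes from splitting alpha across both tails (z₁₋α/₂ instead of z₁₋α), as this short illustrative sketch shows.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf
alpha, power = 0.05, 0.80
z_one_sided = z(1 - alpha)      # ~1.645
z_two_sided = z(1 - alpha / 2)  # ~1.960
z_beta = z(power)               # ~0.842

# n is proportional to (z_alpha + z_beta)^2, so at the same alpha and power:
inflation = ((z_two_sided + z_beta) / (z_one_sided + z_beta)) ** 2
print(round(inflation, 2))  # ~1.27: the two-sided test needs about a quarter more subjects
```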

Q20. If the true effect size is smaller than the effect size assumed in the sample size calculation, the study is likely to be:

  • Overpowered, detecting differences too easily
  • Underpowered, with reduced probability of detecting the true effect
  • Unaffected, as effect size estimates do not influence power
  • Guaranteed to find statistical significance

Correct Answer: Underpowered, with reduced probability of detecting the true effect
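The loss of power can be quantified with a normal-approximation sketch: fix the sample size planned for the assumed effect, then recompute power at a smaller true effect. All numbers here are illustrative and follow on from the two-means example above.

```python
import math
from statistics import NormalDist

nd = NormalDist()

def achieved_power(n_per_group, sigma, true_delta, alpha=0.05):
    """Approximate power of a two-sample z-test on means with n_per_group subjects
    per arm when the true difference is true_delta:
    power ≈ Phi(true_delta * sqrt(n / (2 * sigma^2)) - z_{1-alpha/2})."""
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    ncp = true_delta * math.sqrt(n_per_group / (2 * sigma**2))
    return nd.cdf(ncp - z_alpha)

# Illustrative: 63 per group was planned assuming a difference of 5 (SD 10, 80% power)
print(round(achieved_power(63, 10, 5.0), 2))  # ~0.80 as planned
print(round(achieved_power(63, 10, 3.0), 2))  # ~0.39 if the true effect is only 3
```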
