**Purpose:** Nurse scientists increasingly work with quantitative data from a variety of sources, yet they typically do not have the same degree of training in theoretical statistics as an epidemiologist or statistician. Even with such training, they might still be uncertain of the best statistical approach for analyzing data under new or unknown assumptions. Statistical simulations can be beneficial but do not appear to be used frequently by nurse scientists.

To illustrate the benefits of statistical simulation studies, this presentation will provide both a basic use case and a more complex exemplar. The first case compares basic statistical tests for differences in means and is intended to demonstrate the ability of simulation to illustrate rules and concepts taught in an introductory statistics course. The second case compares more complex statistical approaches to handling missing data in a clinical dataset as a way to demonstrate the ability of simulation to guide analysis approaches in real-life applications.

**Methods:** The first demonstration compares the Student's t-test, the non-parametric Mann-Whitney U-test, and ordinary least squares regression for samples of varying sizes drawn from populations with differing effect sizes. After creating two populations (N = 100,000) with differences in means ranging from 0 to 2 times the mean of the first population, we extracted samples of sizes ranging from 6 to 400. For each combination of effect size and sample size, we conducted 1,000 "studies," in each of which we tested for a difference in means using the t-test, the U-test, and regression.
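The design above can be sketched in code. This is a minimal illustration only (the study's own code is in R; Python with NumPy/SciPy is used here, and the population mean, standard deviation, effect size, and sample size shown are hypothetical choices for demonstration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two populations of N = 100,000; the second mean is shifted by `effect`
# times the first mean (an effect of 0 would mean no true difference).
N, mean1, sd = 100_000, 10.0, 3.0
effect = 0.25                       # illustrative effect size
pop1 = rng.normal(mean1, sd, N)
pop2 = rng.normal(mean1 * (1 + effect), sd, N)

def one_study(n):
    """Draw one sample of size n per group and test for a mean difference."""
    s1 = rng.choice(pop1, n)
    s2 = rng.choice(pop2, n)
    p_t = stats.ttest_ind(s1, s2).pvalue        # Student's t-test
    p_u = stats.mannwhitneyu(s1, s2).pvalue     # Mann-Whitney U-test
    # OLS regression of the outcome on a group indicator; its slope
    # p-value serves as the third comparison.
    x = np.r_[np.zeros(n), np.ones(n)]
    y = np.r_[s1, s2]
    p_ols = stats.linregress(x, y).pvalue
    return p_t < 0.05, p_u < 0.05, p_ols < 0.05

# 1,000 "studies" at one sample size; in practice this is repeated
# over the full grid of effect sizes and sample sizes.
n_sig = np.mean([one_study(50) for _ in range(1000)], axis=0)
print(n_sig)   # proportion significant for t-test, U-test, OLS
```

Repeating this loop over the grid of effect and sample sizes yields the percentage of significant findings, and of discordant findings, for each cell.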

The second exemplar illustrates a case study in which the presenter used a simulation study to identify a preferred approach for handling missing data in a clinical dataset where some variables had substantial missingness. The simulation varied the following assumptions: (a) incidence of missingness from 1% to 60%; (b) data missing completely at random, missing at random, and missing not at random; and (c) missingness associated or not associated with the outcome. Imputation procedures included simple median imputation and multiple imputation by chained equations. Analysis approaches included both logistic regression and Cox proportional hazards regression, and sample results were pooled and compared with the true population values.
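A stripped-down sketch of this design follows. It is not the presenter's analysis (which used R with logistic/Cox models and multiple imputation by chained equations); the distribution, sample size, and missingness fractions are hypothetical, and only median imputation under missing-completely-at-random versus missing-not-at-random mechanisms is shown, to illustrate comparing pooled estimates against a known population truth:

```python
import numpy as np

rng = np.random.default_rng(7)
# Skewed "biomarker" population with a known true mean
pop = rng.lognormal(mean=1.0, sigma=0.5, size=100_000)
true_mean = pop.mean()

def median_imputed_mean(n=200, frac_missing=0.4, mnar=False):
    """One 'study': sample, impose missingness, median-impute, estimate."""
    s = rng.choice(pop, n)
    if mnar:
        # MNAR: only high values can go missing (at twice the base rate)
        miss = rng.random(n) < frac_missing * 2 * (s > np.median(s))
    else:
        # MCAR: missingness unrelated to the value
        miss = rng.random(n) < frac_missing
    x = s.copy()
    x[miss] = np.median(s[~miss])    # simple median imputation
    return x.mean()

# Pool 1,000 studies per mechanism and compare to the true value
mcar = np.mean([median_imputed_mean() for _ in range(1000)])
mnar = np.mean([median_imputed_mean(mnar=True) for _ in range(1000)])
print(true_mean, mcar, mnar)   # MNAR estimate is the more biased
```

The full study replaces the toy estimator with the regression models above and adds multiple imputation, but the skeleton of generate, mask, impute, pool, and compare is the same.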

Source code for replicating the simulation studies and resources for learning the statistical software R will be provided to interested audience members.

**Results:** In the first example, graphical results demonstrate that (a) the percentage of statistically significant findings (p < 0.05) increases with larger sample sizes and effect sizes, and (b) discordance among the statistical tests becomes negligible at larger sample sizes and effect sizes. In the second example, graphical results demonstrate that multiple imputation with a model that includes the outcome variable yielded estimates closer to the true values under most assumptions.

**Conclusion:** Statistical simulations leverage modern computing power to explore the behavior of statistical approaches under a variety of assumptions. By creating a large population in which the true values are known and then analyzing multiple samples drawn from that population, one can determine which analysis method(s) might be preferred given the assumptions one is willing to make.
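The general recipe in this conclusion reduces to a few lines. The sketch below uses hypothetical estimators (sample mean versus sample median of a normal population) purely to show the scaffold into which any analysis method can be plugged:

```python
import numpy as np

rng = np.random.default_rng(0)
# Large population where the truth is known (mean = 5 by construction)
population = rng.normal(5.0, 2.0, 100_000)
estimators = {"mean": np.mean, "median": np.median}

results = {name: [] for name in estimators}
for _ in range(1000):                     # repeated small "studies"
    sample = rng.choice(population, 30)
    for name, est in estimators.items():
        results[name].append(est(sample))

# Compare each method's average error against the known population value
for name, vals in results.items():
    print(name, round(np.mean(vals) - population.mean(), 3))
```

Swapping in different populations, missingness mechanisms, or estimators turns this scaffold into either of the two exemplars above.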