• It provides an overview of the issues and challenges that should be considered when planning the sample size in descriptive quantitative research
• It offers strategies to assist nurse researchers in justifying sample size in descriptive quantitative research
• It may assist nurse researchers in writing for publication or submitting a research proposal
Background Descriptive quantitative researchers often use surveys to collect data about a group or phenomenon. Determining the required sample size in descriptive surveys can pose a challenge, as there is no simple ‘formula’ by which to calculate an appropriate sample. However, when a sample is too small the study may fail to answer the research question, while too many responses have resource implications.
Aim To explore considerations regarding the justification of adequate sample size in descriptive quantitative research.
Discussion Several considerations may assist quantitative descriptive researchers in examining the appropriateness and justification of sample size. Response rates can guide decision-making around the proportion of the target population who respond. Additionally, consideration of any validated tools, the spread of responses and the types of analysis can guide sampling decisions.
Conclusion The strategies in this article provide a considered approach to justifying sample size in descriptive quantitative research. Factors such as response rates and analytical considerations provide a transparent means of justifying an adequate sample.
Implications for practice Providing clear justification for the sample size within descriptive quantitative research demonstrates a robust research approach and optimises resource use.
Nurse Researcher. doi: 10.7748/nr.2025.e1958
Peer review This article has been subject to external double-blind peer review and checked for plagiarism using automated software
Correspondence
Conflict of interest None declared
Mursa RA, Patterson C, McErlean G et al (2025) How many is enough? Justifying sample size in descriptive quantitative research. Nurse Researcher. doi: 10.7748/nr.2025.e1958
Open access This is an open access article distributed under the terms of the Creative Commons Attribution-Non Commercial 4.0 International (CC BY-NC 4.0) licence (see https://creativecommons.org/licenses/by-nc/4.0/), which permits others to copy and redistribute in any medium or format, remix, transform and build on this work non-commercially, provided appropriate credit is given and any changes made are indicated.
Published online: 13 March 2025
The sample size is the total number of people participating in a study. Sample size is integral to ensuring validity, generalisability and relevance (Curtis and Keeler 2023). Determining the adequacy of the sample size is key to confidently generalising research results and optimising the likelihood of statistically and clinically meaningful findings (Curtis and Keeler 2023). If the sample size is too small, the study might not be able to answer the research question, may fail to detect a real difference or may not provide a true representation of the population (Burmeister and Aitken 2012). Conversely, if the sample size is too large, the study may be more complex than it needs to be and require more resources than are necessary, ethical or feasible (Bujang 2021, Kang 2021).
In observational or experimental studies, the calculation of sample size is well established, and factors such as effect size, estimated drop-out rates and desired statistical power drive the computation of the desired sample size (Kang 2021). Indeed, there is much debate in the literature, and many papers and texts are devoted to describing how to undertake power calculations and sampling determinations for studies evaluating interventions, testing hypotheses or exploring epidemiological issues (Lachenbruch 1991, Harden and Friede 2018, Johnston et al 2019, Tam et al 2020). Additionally, existing statistical software (eg, SAS, SPSS and Stata) and many specialised programs (eg, G*Power, PASS, and Power and Precision) provide the tools and guidance to calculate an appropriate sample size (Dattalo 2009, Imankhan 2023). However, power analysis seeks to identify the number of participants required to either test a hypothesis or detect a meaningful relationship between variables (Curtis and Keeler 2023). Power analysis is therefore not appropriate for descriptive quantitative studies that seek only to describe a particular group, rather than to undertake inferential statistical analysis.
In descriptive quantitative studies, determining how many participants is enough is less clear. A search of the CINAHL database (11/1/24) did not identify any papers that explicitly discuss the calculation of sample size in studies that were non-experimental or were not cohort studies. While some authors identify that larger samples are preferred (Curtis and Keeler 2023), there is limited published guidance in deciding on an adequate sample size in descriptive quantitative research. Yet interestingly, one of the main reasons for articles being rejected for publication in peer-reviewed journals is inadequate sample size (Meyer et al 2018). Therefore, this paper seeks to address this gap by providing an overview of the issues and challenges that should be considered when planning and justifying sample size in descriptive quantitative studies.
• There is no simple formula to calculate the sample size in descriptive quantitative research
• There are three main elements to consider when examining the appropriateness of sample size in descriptive quantitative research: response rates, the use of a validated tool and analysis considerations
• Being able to openly justify how and why a sample size was determined will improve research reporting and quality
Descriptive quantitative research is a type of non-experimental study that describes a population, situation or phenomenon. These studies assist in identifying characteristics, frequencies, trends, correlations and categories (Siedlecki 2020). Descriptive studies do not test a hypothesis but rather use data collection methods such as surveys or observations to quantify and summarise, or describe, a particular group or phenomenon (Siedlecki 2020). Given this aim, it is not appropriate to seek a random sample, but rather to gather information from those who have experience with the topic of interest. This may be based on their characteristics, such as belonging to a particular professional body (eg, registered nurses) or working in a specific location (eg, rural general practice, cardiothoracic unit). For example, Kinghorn et al (2022) surveyed registered nurses working in a secure forensic mental health unit to seek their opinions and beliefs regarding transition and workforce experiences. Likewise, Smith et al (2023) and Halcomb et al (2023) explored the opinions of a range of primary healthcare professionals regarding the use of telehealth during COVID-19. Additionally, descriptive quantitative research can describe a particular patient group and their characteristics, experiences or perceptions. For example, Robinson et al (2017) surveyed residents living in manufactured home villages (what in the UK would be called mobile home communities) and sought to describe their health status and health service access.
One of the most frequently used data collection methods in descriptive quantitative research is the survey (Watson 2015). Surveys provide a tool to quantitatively collect information regarding the beliefs, preferences and attitudes of participants (Watson 2015). While surveys may be administered via mail, telephone or face-to-face, online options are becoming increasingly popular due to the ease of delivery (Stefkovics 2022). Internet-based software platforms including Survey Monkey, REDCap and Qualtrics have made online surveys increasingly simple to design and deliver. Social media and electronic mail allow invitations to participate to be widely circulated to large groups of people who may be geographically dispersed or part of a diverse population group. A further advantage of collecting data online is the ability to collect relatively large sample sizes within short time frames (Wright 2019). However, despite their allure in collecting data, a key question remains: how will you know when you have sufficient survey responses?
There are three main elements to consider when examining the appropriateness of sample size: response rates, the use of a validated tool, and analysis considerations.
Perhaps the simplest justification of optimal sample size is the response rate (Siedlecki et al 2015). A study’s response rate is the rate of participation, calculated by dividing the number of participants (numerator) by the number of people in the population (denominator). In some studies, the size of the population may be known. For example, Alshahrani et al (2018) sought the views of first-year undergraduate nursing students at a single university. With a population of 154 first-year student nurses, 58 completed the survey, so the response rate was 38%.
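As a minimal illustration (using the figures from the Alshahrani et al (2018) example above; the function name is ours, not from the source), the calculation can be sketched as:

```python
def response_rate(participants: int, population: int) -> float:
    """Response rate as a percentage: participants (numerator)
    divided by the population (denominator), multiplied by 100."""
    if population <= 0:
        raise ValueError("population must be positive")
    return participants / population * 100

# Alshahrani et al (2018): 58 of 154 first-year student nurses responded
rate = response_rate(58, 154)
print(f"{rate:.1f}%")  # 37.7%, reported as 38% when rounded
```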
When calculating a response rate, it is important to understand what constitutes a ‘good’ or acceptable rate of response. The literature describes the response rate in surveys as being dependent on a range of factors, such as the type of survey, the participant group and the method of survey delivery (Meyer et al 2022). In their meta-analysis of 1,071 published online surveys, Wu et al (2022) found an average response rate of 44.1%. This is similar to other systematic reviews, which report average response rates for online surveys of 34% (Shih and Fan 2008) and 36% (Daikeler et al 2022). Despite these averages, reported response rates range widely, from as high as 61% (Creavin et al 2011) to as low as 10.3% (Medina-Lara et al 2020).
The mode of administration can affect a survey’s response rate, with response rates to postal surveys reported to be comparatively higher than online surveys (Meyer et al 2022). In their comparison, Daikeler et al (2022) found postal surveys had a 12 percentage point higher average (48%), while Meyer et al (2022) found a 19 percentage point gap (web 46% and postal 65%). It is suggested that a range of economic, socio-cultural, and technological factors impact the appropriateness of online survey approaches for particular participant groups (Daikeler et al 2022). Beyond the impact of delivery mode, it has been noted that there has been a general decrease in rates of participation in contemporary survey research (Krieger et al 2023).
There is an assumption that a lower response rate indicates a poorer study (Shiyab et al 2023). However, Morton et al (2012) argue that response rates alone may not be sufficient evidence to judge the adequacy of sample size and quality of a study, as there is no straightforward answer to an ‘acceptable’ response rate. Indeed, there are many factors that need to be considered.
It is important that the nature of the survey is understood, given that factors such as survey fatigue and length, incentives, mode of administration and follow-up methods all impact on response rates (Shiyab et al 2023). Additionally, the characteristics of participants contribute to the likelihood of response (Meyer et al 2022). Demographic factors such as age, gender, education level, marital status, ethnicity and socio-economic status all impact the likelihood of response (Shiyab et al 2023).
A further caveat here is that to calculate the response rate the researcher needs to know the number of people within the population reached by the study. This number may be elusive in many circumstances where the population is large or dispersed. Additionally, when recruitment is undertaken via social media or professional groups, the number of people in the population who are reached by the survey advertisement may not be clear. For example, research undertaken by Halcomb et al (2022) used social media platforms (Facebook, LinkedIn, and Twitter) to recruit nurses working in primary healthcare throughout Australia, with 359 nurses completing the survey. It was not possible to calculate a response rate as the number of nurses meeting the inclusion criteria was unclear due to the large number of employing organisations. Additionally, given that dissemination occurred via social media it is not clear how many potential participants were actually reached.
A validated tool can provide some justification for sample size calculations when used as an outcome measure in a descriptive survey. For example, instrument scores of quality of life, depression or anxiety can provide measurable outcomes to use as effect sizes (Burmeister and Aitken 2012). The broader literature can indicate what expected normal values or differences can inform effect sizes (Burmeister and Aitken 2012). This is particularly useful when using the data to compare sample and population norms.
In general, the smaller the anticipated effect size, the larger the required sample size (Burmeister and Aitken 2012). With an estimated effect size, a power calculation can provide evidence for justifying an adequate sample size (Bujang 2021).
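As an illustrative sketch only (the formula and function below are our addition, not a method prescribed in the source), the widely used normal-approximation formula n = ((z₁₋α/₂ + z₁₋β) / d)² for comparing a sample mean against a known population norm can be computed with Python's standard library; dedicated software such as G*Power applies more exact calculations:

```python
import math
from statistics import NormalDist

def sample_size_one_mean(effect_size: float,
                         alpha: float = 0.05,
                         power: float = 0.80) -> int:
    """Approximate n needed to detect a standardised effect size (Cohen's d)
    when comparing a sample mean against a known population norm.
    Normal approximation: n = ((z_{1-alpha/2} + z_{1-beta}) / d) ** 2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(((z_alpha + z_beta) / effect_size) ** 2)

# A medium effect (d = 0.5) at 80% power and 5% significance
print(sample_size_one_mean(0.5))  # 32
```

Note how the formula encodes the relationship described above: halving the anticipated effect size roughly quadruples the required sample.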
While statistical experts have varying opinions about the appropriate sample size for analysis, there is limited guidance in the literature. It is generally considered important to have a sample size large enough to provide a distribution of responses across the variables in the data set, as comparisons within small groups of participants are unlikely to yield meaningful findings (Bujang 2021). To this end, the length of the survey and the number of variables it explores can impact the number of participants required (Shiyab et al 2023).
Secondly, it is important to consider the number of comparisons being undertaken. The larger the number of comparisons, the larger the sample should be to reduce the risk of Type II error (Curtis and Keeler 2023). Type II errors are false negatives: they occur when the null hypothesis is not rejected even though it is false; that is, concluding that no relationship exists when one actually does. The risk of Type II errors can be reduced by increasing the sample size (Curtis and Keeler 2023).
Some types of statistical analysis have established principles about sample size. For example, factor analysis is generally considered to require ten participants for each item of an instrument (Kline 2016). This principle was used by Halcomb et al (2022) in their study using the 28-item Brief COPE scale, resulting in a minimum sample required of 280 participants.
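This rule of thumb reduces to simple arithmetic; a sketch (the function name is ours, for illustration):

```python
def min_sample_for_factor_analysis(n_items: int, per_item: int = 10) -> int:
    """Rule-of-thumb minimum sample for factor analysis:
    roughly `per_item` participants per instrument item (Kline 2016)."""
    return n_items * per_item

# The 28-item Brief COPE scale used by Halcomb et al (2022)
print(min_sample_for_factor_analysis(28))  # 280
```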
While descriptive quantitative research is widely used in nursing and other health disciplines to describe groups and phenomena, discussions around its methodological underpinnings are limited (Han et al 2022). In this methodological paper, we highlight the importance of considering the various issues that impact sampling and sample size. To promote the rigour of descriptive quantitative research, it is important that ongoing attention is paid to its methodological concepts.
Peer-reviewed journals increasingly require authors to use reporting tools to guide the reporting of research. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist (von Elm et al 2007) is commonly applied to descriptive quantitative studies. This checklist states that authors should provide an explanation of ‘the eligibility criteria and the sources and methods of participant selection’ (von Elm et al 2007). While these criteria prompt authors to provide some detail about their sampling processes, the checklist does not specifically seek clarity about considerations of sample size. Additionally, many tools used to critically appraise quantitative descriptive studies focus solely on the selection of participants and non-responders and not all consider the justification of sample size (Moola et al 2024). The tool used by Downes et al (2016) for appraising descriptive studies includes a criterion regarding justification of sample size, but there is little explanation of appropriate justifications that could be provided.
These limitations in critical appraisal tools create challenges in capturing the quality of sampling considerations. Despite this apparent gap in reporting requirements, there is a need for researchers to be able to justify and explain each step in the research process. Having such a clear audit trail allows readers to have confidence in the process and subsequent findings.
Sample size is an integral component of the conduct and evaluation of descriptive quantitative research, with sample sizes that are too large or too small resulting in negative impacts. Given the lack of literature addressing this methodological issue, this paper seeks to open the discussion about sampling in descriptive quantitative research. Open academic debate and clear reporting of sampling processes are both important to advance the science in this area.
Nurse researchers need to consider these issues and more openly justify how and why they arrived at a particular sample size. Such transparency will improve research reporting and quality.