Methodological Sampling Considerations for Researchers Studying Journalists and Trauma

Which journalists should be studied? Under what conditions? Which research questions are well-settled, and what demands further study?

A growing number of researchers are interested in studying news professionals, particularly the impact of trauma exposure on journalists’ health (Backholm & Björkqvist, 2010; Backholm & Björkqvist, 2012; Barton & Storm, 2016; Beam & Spratt, 2009; Brown et al., 2012; Lee et al., 2018; Smith et al., 2018). This is valuable scholarship given the central role of journalism in democratic society and the numerous pressures facing today’s reporters, editors, photographers and producers. But as this field of research expands, careful consideration is needed: Which journalists should be studied? Under what conditions? Which research questions are well-settled, and what demands further study?

This fact sheet is designed for those interested in studying journalists and trauma. Understanding and improving sampling methods will foster accurate data collection that can be appropriately applied to specific news industry practices. It may also help researchers design studies that do not overly burden journalists, do not rely on or promote inaccurate assumptions, and do not use problematic sampling designs. One byproduct of better research methodology would be more trusting and willing participants for research projects focused on journalism and trauma.

Sampling Groups

Journalists work in different roles, media and beats (some of them overlapping). Each has a unique subculture and faces distinct challenges. Before beginning a study, educate yourself thoroughly about the rapidly changing news workplace, and about where your research question fits into the occupational structure of journalism and the specific and broader landscape of news professionals. Ask yourself which group is pertinent to your specific question.

Sampling is particularly important when studying journalists because of the wide array of environments, media and subject matters within the field. For example, information gathered from a sample of war correspondents may not apply to sports editors. Television videographers may have different occupational-health and craft challenges than print reporters. Daily beat reporters face different choices and conditions than long-form podcast producers; investigative reporters may employ distinct methodologies in reviewing and verifying trauma-drenched material. Local news reporters in the U.S. may face demoralizing harassment through online comments on social media, while their colleagues in other countries may routinely receive assassination threats. All of these differences are pertinent to designing research and interpreting the results.

When studying journalists, researchers must first carefully select a particular sample that will best help them answer their specific questions about different journalist populations. Second, the researcher needs to decide if the sample should represent a wide range of experiences or a narrow one. Typically, quantitative studies capture a wider range and qualitative studies capture a narrower range.

Participant Numbers

Researchers must determine how many participants they need to answer their research question. In a quantitative study, a larger sample is more likely to accurately reflect the larger population, while providing researchers with more statistical power: the probability of detecting a true difference between two groups when one exists. For example, if researchers are trying to determine whether a group of journalists working in Syria is exposed to more trauma than a group working in Iran, a study with greater statistical power will better detect small differences between the groups.

When using statistics, a priori power calculations can determine the sample size necessary to discern small, medium or large differences between groups. Researchers rely on previously published research to estimate the expected effect size – the magnitude of the difference between groups – and, from that, the number of participants required for a study. Cohen’s d, a commonly used effect size, expresses the difference between group means in standard deviation units (a standard deviation measures how spread out the data points are around the mean). Cohen’s (1988) rules of thumb suggest d = 0.2 is a small effect, d = 0.5 a medium effect, and d = 0.8 a large effect. Free resources, such as G*Power, exist to help researchers calculate sample sizes based on expected effect sizes; a sketch of the same calculation in code appears below.
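
To illustrate, the following minimal sketch reproduces the kind of a priori calculation G*Power performs, here using Python’s statsmodels library as a free alternative (an assumption of convenience; any power calculator yields the same figures). It solves for the per-group sample size needed to detect small, medium and large effects at conventional thresholds.

```python
# A priori power analysis for a two-group comparison (independent-samples
# t-test), solving for the sample size needed per group. Cohen's d is the
# difference between group means divided by the pooled standard deviation.
# Sketch only: statsmodels stands in here for G*Power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Conventional thresholds: alpha = .05 (two-sided), power = .80
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    n_per_group = analysis.solve_power(
        effect_size=d, alpha=0.05, power=0.80, alternative="two-sided"
    )
    print(f"{label} effect (d = {d}): {n_per_group:.0f} participants per group")

# Prints roughly 393 per group for d = 0.2, 64 for d = 0.5 and 26 for d = 0.8,
# showing why studies chasing small effects need far larger samples.
```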

In qualitative research, sample size guidelines vary based on the research question, data collection method, sample population and research methodology (Malterud et al., 2016). Qualitative studies are typically most effective for exploratory research – developing ideas or hypotheses about new topics – allowing researchers to better understand underlying beliefs, experiences, attitudes, behavior and interactions. One common approach is to collect and analyze data until saturation: when new data no longer provide new information, collection stops (Mason, 2010). The sketch below illustrates this stopping rule.
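
As a schematic illustration of the saturation rule, the following sketch stops data collection once several consecutive interviews yield no new thematic codes. The interview data and the three-interview threshold are hypothetical; in practice, saturation judgments are substantive, not purely mechanical.

```python
# Schematic sketch of a saturation stopping rule for qualitative coding:
# stop collecting once several consecutive interviews yield no new codes.
# (Illustrative only; data and threshold are hypothetical.)

def reached_saturation(interviews, consecutive_without_new=3):
    """Return the interview number at which saturation was reached, or None."""
    seen_codes = set()
    run_without_new = 0
    for i, codes in enumerate(interviews, start=1):
        new_codes = set(codes) - seen_codes
        seen_codes |= new_codes
        run_without_new = 0 if new_codes else run_without_new + 1
        if run_without_new >= consecutive_without_new:
            return i
    return None  # keep collecting data

# Each set holds the thematic codes identified in one interview.
interviews = [
    {"deadline pressure", "graphic imagery"},
    {"graphic imagery", "newsroom support"},
    {"newsroom support"},
    {"deadline pressure"},
    {"graphic imagery"},
]
print(reached_saturation(interviews))  # -> 5 (no new codes in interviews 3-5)
```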

Response Rates

Regardless of research methodology, researchers must account for those who choose not to participate in a study. A response rate is the ratio of individuals who complete a research study to the total number of individuals asked to participate (a worked example follows the list below). Proportionately large numbers of non-responders may suggest journalists are self-selecting for a particular reason. Low response rates may result from:

  • a lack of interest in the topic being studied;
  • a lack of relevance to respondents’ experiences;
  • respondents forgetting to complete a survey;
  • surveys that require significant effort to complete;
  • surveys timed during a particularly intense news period;
  • ineffective sampling strategies, such as using outdated contact lists or having emails flagged as spam.
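
As a worked example, the minimal sketch below (in Python, purely for illustration) computes response rates from figures that appear in this fact sheet.

```python
# Computing a response rate: usable responses divided by the number of
# people asked to participate. (Minimal sketch; figures are taken from
# examples elsewhere in this fact sheet.)

def response_rate(usable_responses: int, contacted: int) -> float:
    """Response rate as a percentage of those asked to participate."""
    return 100 * usable_responses / contacted

# The hypothetical large study described below: 5,000 asked, 1,000 respond.
print(f"{response_rate(1_000, 5_000):.0f}%")  # -> 20%

# Two rows from Table 1 for comparison:
print(f"{response_rate(85, 100):.0f}%")       # Feinstein & Nicholson (2005) -> 85%
print(f"{response_rate(875, 8_480):.0f}%")    # Newman et al. (2003) -> 10%
```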

No matter the cause, low response rates affect the interpretation of research results. If a particular subgroup of journalists systematically does not participate, the results of the study may not reflect that group’s experiences. For example, a study on journalist safety that does not include responses from minority journalists will not reflect the concerns of this group, so generalizations and interpretations about its experience cannot be made. Within a qualitative study, low response rates may suggest that a particular characteristic within a group needs to be analyzed. Within a quantitative study, fewer participants may make it more difficult to detect patterns (Groves & Peytcheva, 2008). If a study has a low response rate but a large number of participants – say, 5,000 journalists are asked to participate and 1,000 (a 20% response rate) respond – researchers can effectively study the relationships among variables. However, attempts to generalize to larger groups or populations will be difficult.

Research suggests the average response rate for generic, mailed paper surveys is 56%, and the average rate for email surveys ranges from 17% to 34%, depending on other factors such as survey length (Guo et al., 2016). Response rates for a range of journalism-focused studies are listed in Table 1. Half of the studies included had response rates lower than 33%.

Table 1

Response Rates of Studies Examining Journalists

| Study | Journalists Sampled | Country | Survey Type | Recruitment | Sample | Response Rate |
|---|---|---|---|---|---|---|
| Feinstein & Nicholson (2005) | Experienced war correspondents | United States & Britain | Online | Names randomly selected from employee rosters provided by news organizations | 100 contacted, 85 usable responses | 85% |
| Feinstein et al. (2002) | Experienced war correspondents | Multiple | Online | Contacted through CNN, BBC, Reuters and other news organizations | 169 contacted, 140 usable responses | 83% |
| Feinstein et al. (2014) | English-speaking journalists working with user-generated content | Multiple | Online | Contacted through three international news organizations | 144 contacted, 116 usable responses | 80.6% |
| Beam & Spratt (2009) | US journalists | United States | Telephone | Journalists previously interviewed for the 2002 American Journalist survey, conducted every ten years to track changes in the industry | 596 called, 400 usable responses | 67% |
| Levaot et al. (2013) | Television and print journalists | Israel | Online | Three news organizations | 58 contacted, 38 usable responses | 66% |
| Dworznik (2011) | Television news workers | United States | Online | Posted on websites used by individuals working in television | Number who saw the survey unknown; 425 responses, 280 usable | Less than 65% |
| Morales et al. (2012) | Mexican reporters and photographers, some of whom covered drug trafficking | Mexico | Online | Journalists working for news organizations in 16 of the 32 Mexican states | 253 emailed, 100 usable responses | 49.5% |
| Weidmann et al. (2008) | Journalists who covered the December 2004 Indian Ocean tsunami | Germany | Online | Contacted through news organizations | 170-190 emailed; 61 usable responses | 30-35% (authors provided a range) |
| Hatanaka et al. (2010) | Television journalists | Japan | Mail | Contacted through broadcasting companies | 1,073 distributed through news departments; 360 usable responses | 34% |
| Pyevich et al. (2003) | Newspaper journalists | United States | Online | Contacted through newspaper websites | 3,713 emailed; 906 usable responses | 24.4% |
| Backholm & Björkqvist (2010) | News journalists and photographers/camera operators employed by radio, newspapers, television and digital publications | Finland | Online | Contacted through media websites; direct contact with media organizations | 2,865 emailed, 571 usable responses | 20% |
| Novak & Davidson (2013) | British journalists who reported at the scene of dangerous events outside the British mainland | Britain | Online | Contacted through direct email and through a news organization | 50 emailed, 10 usable responses | 20% |
| Morales et al. (2014) | Mexican journalists who covered drug trafficking; Mexican journalists who covered other beats | Mexico | Online | Contacted in person, by phone and through email using information from national and local guild organizations | 938 asked to participate, 161 usable responses | 15% |
| Brown et al. (2012) | Journalists who reported on stories within the UK and abroad | Britain | Online | Contacted through a media organization and by the research team | 323 emailed, 50 usable responses | 15% |
| Smith et al. (2017) | Print and television journalists | United States | Email | Contacted through a database of journalists | 1,524 valid emails sent, 167 usable responses | 11% |
| Newman et al. (2003) | Photojournalists | United States | Mail | Contacted via the National Press Photographers Association (NPPA) magazine | 8,480 mailed; 875 usable responses | 10% |

A striking pattern emerges in Table 1: studies that included a more diverse array of journalists in their samples had lower response rates, while studies that focused on a narrowly defined set of journalists had higher response rates. Newman et al. (2003), who queried members of a US photojournalist organization by disseminating a survey through the society’s magazine, had a 10% response rate. This relatively broad study – open to any photojournalist in the United States who received the magazine – had the lowest response rate but one of the largest overall sample sizes. Conversely, Feinstein’s homogeneous samples of war correspondents working for a select set of companies with which Feinstein has research relationships (Feinstein, 2006) had the highest response rates (83% and 85%) but represented a narrow group of journalists.

Debate continues about what constitutes a good response rate (Fowler, 2014; Nulty, 2008). The United States Office of Management and Budget requires a minimum of 80% (Fowler, 2014), whereas the University of Texas (n.d.) requires 30%. If the conservative standard (80%) is applied to the studies in Table 1, only three have acceptable response rates, and all three recruited participants through specific news agencies with which the researchers had direct relationships. If the more liberal standard (30%) is applied, almost half (46%) of the surveys in Table 1 have acceptable rates.

Response Rates Matter

Response rates in existing studies of journalists and mental health suggest that current prevalence estimates of mental health concerns – including PTSD, depression and substance abuse – are likely not representative of all journalists. These prevalence studies document that mental health problems exist and provide the best available estimates of the rates of these problems, but the estimates are unreliable.

While the current understanding of the rates of mental health concerns is unreliable, sample sizes are sufficient to allow an accurate understanding of the relationships among variables in studies of journalists and traumatic stress. Thus, predictions about what might contribute to PTSD (e.g., exposure to traumatic events in one’s personal life; Backholm & Björkqvist, 2010; Newman et al., 2003), especially when replicated in other diverse samples, are more likely to be accurate (University of Texas, n.d.).

Why are Response Rates Low?

No research specifically examines why surveys of journalists yield low response rates, but comparisons to other groups with low survey response rates may be helpful. In the Department of Defense (DOD), low response rates are more common among younger and more junior employees (9% among those ages 18-24 versus 29% among those 45 and older; Miller & Aharoni, 2015). The study authors suggested the age difference might reflect the limited time junior employees have to complete surveys. Similarly, in journalism, the high demands of the job and its deadline-driven pace may prevent journalists from completing studies. The authors also attributed low response rates to the large number of surveys administered, as well as overlap between survey content (Miller & Aharoni, 2015). A similar effect may occur among journalists, as individuals who are on mailing lists or belong to certain organizations are inundated with requests to complete surveys. Elana Newman, Dart Center Research Director, believes journalists, whose job is to observe others, are resistant to being the object of study. She also believes journalists may hesitate to participate in online surveys due to concerns about surveillance, confidentiality and other security issues.

Solutions to Low Response Rates?

Recommendations to address this problem include the following (see also Table 2):

  1. Offer incentives to survey participants (Singer & Ye, 2012). None of the studies documented here reported using incentives, such as gift cards, to promote participation. General research on mailed surveys suggests that small incentives are more effective when provided at the time participation is requested, rather than larger incentives provided after a survey has been completed (Dillman, 2007). In contrast, lottery incentives have little to no effect on response rates.
  2. Provide clear explanations for why a study is important to journalists (Frohlich, 2002).
  3. Decrease the time required to complete surveys by reducing the number of questions (Frohlich, 2002), or design surveys to appear visually shorter (Dillman, 1972).
  4. Send reminder letters or emails (Edwards et al., 2002; Dillman, 1972). Dillman (1972) suggested that those using postal surveys send a thank-you postcard after one week, a letter informing non-respondents they have not been heard from at three weeks, and a replacement questionnaire at seven weeks (a scheduling sketch follows this list). A similar contact strategy can be used for email surveys (Dillman, 2007).
  5. When sending email surveys, ensure the cover letter is brief and that participants will not have to scroll down to see the first question (Dillman, 2007).
  6. When designing a survey on a web platform (e.g., Survey Monkey, Qualtrics), choose a first question that will be interesting to most respondents and easy for them to answer. Display this question on the first screen of the survey (Dillman, 2007).
  7. Do not design web-based surveys to force respondents to answer a question before they can move on to other questions (Dillman, 2007).
  8. Capitalize on existing relationships with news organizations or individuals for sampling, when that would not lead to an ethical conflict such as coercive influence.
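
For those implementing recommendation 4, the sketch below (illustrative only; the launch date is an arbitrary example) turns Dillman’s (1972) postal follow-up schedule into concrete mailing dates.

```python
# Dillman's (1972) postal follow-up schedule expressed as concrete mailing
# dates: thank-you postcard at 1 week, non-respondent letter at 3 weeks,
# replacement questionnaire at 7 weeks. (Minimal sketch; launch date arbitrary.)
from datetime import date, timedelta

def dillman_schedule(launch: date) -> dict[str, date]:
    """Follow-up contacts at 1, 3 and 7 weeks after the initial mailing."""
    return {
        "initial questionnaire": launch,
        "thank-you postcard": launch + timedelta(weeks=1),
        "non-respondent letter": launch + timedelta(weeks=3),
        "replacement questionnaire": launch + timedelta(weeks=7),
    }

for contact, when in dillman_schedule(date(2024, 1, 8)).items():
    print(f"{when:%Y-%m-%d}  {contact}")
```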

Table 2

Reasons for and Possible Solutions to Low Response Rates

| Reasons for Low Response Rates | Possible Solutions |
|---|---|
| Email surveys do not reach respondents because of spam filters or outdated email lists (Drevo, 2016; Sills & Song, 2002). | Ensure email lists are as up to date and comprehensive as possible. Do not send one email to multiple respondents at once. |
| Postal surveys are not opened by respondents. | When possible, personalize postal mail by addressing it to the specific occupants of the household (Dillman, 1972; Dillman et al., 1993). |
| Survey does not seem relevant to respondents’ needs (Frohlich, 2002). | Emphasize the potential benefits of contributing to the field (e.g., how results can be used to improve general working conditions; Frohlich, 2002), the benefit to the researcher (Edwards et al., 2002), and the benefit to society. |
| Respondents become busy and may forget to complete surveys (Miller & Aharoni, 2015). | Send reminder emails or mail (Edwards et al., 2002). |
| Survey requires significant effort from the respondent (Frohlich, 2002). | Create shorter surveys (Frohlich, 2002). Minimize overlap between different surveys (Miller & Aharoni, 2015). Design surveys to seem shorter without reducing the number of questions (Dillman, 1972). |

Summary

At this time, the sampling problems in research on journalists and mental health are notable. These problems prevent researchers from accurately estimating the percentage of journalists who are exposed to traumatic events and experience psychological symptoms. Future studies must become more sophisticated in obtaining representative samples with high response rates.

Specifically, researchers should carefully consider the characteristics of the journalists they want to recruit in order to appropriately address a study’s aims, and should note these characteristics in their final reports. Reviewing and implementing appropriate strategies to increase response rates is also recommended.

Studies that examine specific samples and newsgathering sub-groups should exercise caution in interpreting the results as representative of the entire field of journalism. Finally, published study reports should detail recruitment methods and non-response rates, and provide analyses of potential limitations in recruitment, to help future researchers develop effective recruitment strategies more expediently, avoid repeating past mistakes, and further collective knowledge and research on journalists and occupational health.


Resources

Backholm, K., & Björkqvist, K. (2010). The effects of exposure to crisis on well-being of journalists. A study on crisis-related factors predicting psychological health in a sample of Finnish journalists. Media, War & Conflict, 3, 138-151.

Backholm, K., & Björkqvist, K. (2012). The mediating effect of depression between exposure to potentially traumatic events and PTSD in news journalists. European Journal of Psychotraumatology, 3, 183-88.

Barton, A., & Storm, H. (2016). Violence and harassment against women in the news media: A global picture. International Women’s Media Foundation and the International News Safety Institute.

Beam, R. A., & Spratt, M. (2009). Managing vulnerability: job satisfaction, morale and journalists’ reactions to violence and trauma. Journalism Practice, 3, 421-438.

Brown, T., Evangeli, M., & Greenberg, N. (2012). Trauma-related guilt and posttraumatic stress among journalists. Journal of Traumatic Stress, 25, 207-210.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.

Dillman, D. A. (2007). Mail and internet surveys: The tailored design method (2nd ed.). Hoboken, NJ: Wiley.

Dillman, D. A., Sinclair, M. D., & Clark, J. R. (1993). Effects of questionnaire length, respondent-friendly design, and a difficult question on response rates for occupant addressed census mail surveys. The Public Opinion Quarterly, 57, 289–304.

Dillman, D. A. (1972). Increasing mail questionnaire response in large samples of the general public. The Public Opinion Quarterly, 36, 254-257.

Drevo, S. (2016). The war on journalists: Pathways to posttraumatic stress and occupational dysfunction among journalists. Unpublished doctoral dissertation, University of Tulsa, Oklahoma.

Dworznik, G. (2011). Factors contributing to PTSD and compassion fatigue in television news workers. International Journal of Business, Humanities, and Technology, 1(1), 22-32.  

Edwards, P., Roberts, I., Clarke, M., DiGuiseppi, C., Pratap, S., Wentz, R., & Kwan, I. (2002). Increasing response rates to postal questionnaires: Systematic review. British Medical Journal, 324, 1183-1185.

Feinstein, A., Audet, B., & Waknine, E. (2014). Witnessing images of extreme violence: A psychological study of journalists in the newsroom. Journal of the Royal Society of Medicine, 5(8), 1-7.

Feinstein, A., & Nicholson, D. (2005). Embedded journalists in the Iraq war: Are they at greater psychological risk? Journal of Traumatic Stress, 18, 129-132.

Feinstein, A., Owen, J., & Blair, N. (2002). A hazardous profession: War, journalism, and psychopathology. American Journal of Psychiatry, 159, 1570-1576.

Fogliani, M. (1999). Low response rates and their effects on survey results. Methodology Advisory Committee.

Fowler, F. J. (2014). Nonresponse: Implementing a sample design. In Survey research methods (5th ed., pp. 42-60). London, UK: Sage.

Frohlich, M. T. (2002). Techniques for improving response rates in OM survey research. Journal of Operations Management, 20, 53-62.

Groves, R. M., & Peytcheva, E. (2008). The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly, 72, 167-189.

Guo, Y., Kopec, J. A., Cibere, J., Li, L. C., & Goldsmith, C. H. (2016). Population survey features and response rates: A randomized experiment. American Journal of Public Health, 106, 1422-1426.

Hatanaka, M., Matsui, Y., Ando, K., Inoue, K., Fukuoka, Y., Koshiro, E., & Itamura, H. (2010). Traumatic stress in Japanese broadcast journalists. Journal of Traumatic Stress, 23, 173-177.

Institute for Digital Research and Education (n.d.). Statistical computing seminars introduction to power analysis.

Lee, M., Ha, E. H., & Pae, J. K. (2018). The exposure to traumatic events and symptoms of posttraumatic stress disorder among Korean journalists. Journalism, 19, 1308-1325.

Levaot, Y., Sinyor, M., & Feinstein, A. (2013). Trauma and psychological distress observed in journalists: A comparison of Israeli journalists and their Western counterparts. Israel Journal of Psychiatry and Related Sciences, 50, 118-121.

Malterud, K., Siersma, V. D., & Guassora, A. D. (2016). Sample size in qualitative interview studies: Guided by information power. Qualitative Health Research, 26, 1753 – 1760.

Mason, M. (2010). Sample size and saturation in PhD studies using qualitative interviews. Forum: Qualitative Social Research, 11.

Miller, L. L., & Aharoni, E. (2015). Understanding low survey response rates among young U.S. military personnel (RR-881-AF). Santa Monica, CA: RAND Corporation.
Morales, R. F., Perez, V. R., & Martinez, L. (2014). The psychological impact of the war against drug-trafficking on Mexican journalists. Revista Colombiana de Psicología, 23, 177-193.

Morales, R. F., Perez, V. R., & Martinez, L. (2012). Posttraumatic stress symptoms in Mexican journalists covering the drug war. Suma Psicológica, 19, 7-17.

Newman, E., Simpson, R. & Handschuh, D. (2003) Trauma exposure and post-traumatic stress disorder among photojournalists. Visual Communication Quarterly, 10, 4-13.

Nulty, D. D. (2008). The adequacy of response rates to online and paper surveys: what can be done? Assessment & Evaluation in Higher Education, 33, 301 – 314.

Novak, R. J., & Davidson, S. (2013). Journalists reporting on hazardous events: Constructing protective factors within the professional role. Traumatology, 1, 1-10. 

Pyevich, C., Newman, E., & Daleiden, E. (2003). The relationship among cognitive schemas, job-related traumatic exposure, and posttraumatic stress disorder in journalists. Journal of Traumatic Stress, 16, 325-328.

Sills, S., & Song, C. (2002). Innovations in survey research: An application of web surveys. Social Science Computer Review, 20, 22 – 30.

Singer, E., & Ye, C. (2012). The use and effects of incentives in surveys. The Annals of American Academy of Political and Social Science, 645, 112 – 141.

Smith, R., Drevo, S., & Newman, E. (2017). Covering traumatic news stories: Factors associated with post-traumatic stress disorder among journalists. Stress and Health. doi:10.1002/smi.2775

University of Texas (n.d.). Response rates.

Weidmann, A., Fehm, L., & Fydrich, T. (2008). Covering the tsunami disaster: Subsequent post-traumatic and depressive symptoms and associated social factors. Stress and Health, 24, 129-135.