Health and science hold a hallowed place in modern society. As a result, if you’re a communications or PR pro in the sector, factual accuracy is paramount when it comes time to amplify your message.
Many stories we read about health and science are based on research studies presented in peer-reviewed journals or at conferences. Journalists often learn about these studies from press releases issued by universities or medical centers. The studies tend to carry a lot of weight with journalists — and the public — because they’re vetted by experts and often come from prestigious institutions.
But sometimes the media coverage misstates or exaggerates the results, or fails to mention limitations in the research. People often make health decisions based on these stories, so it’s not hyperbole to say that any misinformation can have life-or-death consequences.
This has led to efforts by journalism organizations to improve the quality of reporting about medical research. And because the reporting often begins with a press release, PR pros in the health sector play a very big role in how the information is eventually presented.
Each year, the Association of Health Care Journalists hosts a well-attended workshop on the topic as part of its annual conference. The most recent meeting, held in April in Orlando, offered valuable insights not just for reporters and editors, but also for PR folks whose jobs involve publicizing research.
Ivan Oransky, a physician and journalist who serves as the group’s vice-president, gave a dramatic example of how a published study can be completely wrong.
He asked if attendees were aware of the “crippling” urinary disorder known as “uromycitisis,” which was the subject of a paper published in Urology & Nephrology Open Access Journal. The punchline was that it’s a fake disease originating from an episode of Seinfeld. A medical editor named John McCool submitted the paper to demonstrate how easy it is to get published in what’s known as a “predatory journal,” which derives revenue by charging authors for submissions.
The larger point, he said, is that “at every stage of the pathway” leading to a news story, there are incentives that work against quality scientific and medical journalism. Researchers need to make “everything splashy and exciting” to get grants. They have to be published in journals to earn tenure. “Then we get these press releases that obviously cherry-pick the hugely exciting results.” And finally, reporters are under deadline pressure to write the stories.
One piece of advice he offered: “Put a biostatistician in your back pocket.” At the core of most studies are statistics that can be flawed or misinterpreted, and biostatisticians “can save you from writing about something that shouldn’t be written about,” or suggest that the study be covered more critically.
Tara Haelle, who serves as the AHCJ’s topic leader on medical studies, offered a “crash course” about the different categories of research.
Clinical studies, she said, involve testing a certain intervention with one group of people and then comparing the results with a control group. They tend to be more reliable than epidemiological or observational studies. “Those are your ‘chocolate, wine and coffee’ studies,” she explained, referring to research touting the health benefits or drawbacks of certain dietary habits. “They end up dominating a lot more of our news.” But even clinical studies, she said, can be flawed by various factors, such as the number of test subjects or how they were selected.
Regardless of the category, “any part of the study can be spun,” she said, either by the researcher or by someone writing the press release. For example, to make the research seem more interesting, they might highlight one part of the study while downplaying others.
If a study is making a health claim, a common practice is to discuss “relative risk” instead of “absolute risk” because it makes the results seem more dramatic. Someone might claim that a drug reduces risk of heart attack by 33 percent, because two percent of people who took the drug had a heart attack compared with three percent who took a placebo. “It’s two ways of saying the same thing, but one of them isn’t as useful for the people reading your story,” she advised, and sometimes she has to dig to get the absolute numbers.
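To make the distinction concrete, here is a minimal sketch of the arithmetic behind that example, using the figures from the talk (the two percent and three percent event rates are the only numbers taken from the source; the calculation itself is standard):

```python
# Event rates from the example: 2% of the drug group and 3% of the
# placebo group had a heart attack.
drug_rate = 0.02
placebo_rate = 0.03

# Absolute risk reduction: the raw difference in event rates.
absolute_reduction = placebo_rate - drug_rate

# Relative risk reduction: that same difference expressed as a share
# of the baseline (placebo) risk. This is the "33 percent" figure.
relative_reduction = absolute_reduction / placebo_rate

print(f"Absolute risk reduction: {absolute_reduction:.1%}")   # 1.0%
print(f"Relative risk reduction: {relative_reduction:.1%}")   # 33.3%
```

The same one-percentage-point difference can honestly be described either way, which is why Haelle urges reporters to dig for the absolute numbers: "33 percent" sounds far more dramatic than "one in a hundred."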
Those cautionary words
Kevin Lomangino of HealthNewsReview.org had the most to say about PR. His organization, founded in 2005 by former CNN medical chief Gary Schwitzer, reviews and rates news stories about health interventions, which can include drugs, devices, diagnostic tests, and behavioral changes.
“We try to offer constructive criticism,” Lomangino said, with a goal of improving the quality of health information that reaches the public. “A lot of what we see is sloppy, incomplete, not well researched, maybe a bit sensationalized…Where does this sloppiness start? I think a case can be made that a lot of it originates with press releases from institutions that conduct studies.”
“When a study is done, they want to put it out there. They want to frame the results in the best possible light. They want to garner the most possible news coverage to promote what they’ve done. Unfortunately, a lot of times, that leads to what we call ‘spin’ — exaggeration, sensationalism, things that are intended to get your attention as journalists. [This can] filter into the news product that you put out and misinform people.”
Help is here
Two years ago, this concern led the organization to begin reviewing press releases, mostly from academic medical centers and non-profits. News stories and press releases are both rated on 10 criteria, such as whether they adequately point out the limitations in the research. Two of the criteria were changed to make them applicable to press releases.
Reviewers watch out for news stories and press releases that make exaggerated claims based on studies with small sample sizes, or studies using animals. They also tend to look askance at headlines that include words like “cure” and “revolutionary.”
In March, the group launched a pilot project in which it will review drafts of press releases before they’re made public. PR people will receive private feedback from reviewers, but the press releases won’t be publicly reviewed or mentioned in the group’s blog. To be considered, the press release must make a claim about a health intervention. Schwitzer wrote that the group will determine whether there’s demand for the service and gauge the resources required to provide it on a regular basis.
Are you a PR pro in the health sector? Do you work in communications in medicine? Let us know if you think a press release-vetting service like HealthNewsReview.org’s could be useful.