The recent debate about the Human Technopole venture, conveyed by the media largely through an intricate and often redundant meshwork of scientific, pseudo-scientific and scandal-evoking contributions, makes it worthwhile to think seriously about how science is communicated. We've all seen how scientific conclusions that were carefully vetted by other scientists can be reduced or distorted beyond recognition for the sake of TV ratings or story clicks. Check out this semi-serious take by John Oliver on HBO!
The systemic failure of science communication by mass media is the topic of John Oliver's latest diatribe, and he really nails it. There is a variety of problems all mashed together:
- Journalists often don't take the time, or have the skills, to actually read through, comprehend, and translate scientific findings that can be very technical. After all, scientific papers are written for other scientists, not for the general public, so it takes a certain amount of training and effort to unpack what they mean. But that's, like, hard and boring, and it's not as if your audience will know any better if you screw it up.
- Journalists like big, bold conclusions: "X Thing Cures Cancer!" Scientists don't work like that. Most peer-reviewed papers focus on very narrow problems and wade far into the weeds of complicated scientific debates. That doesn't mean studies are all too esoteric to be useful (although some undoubtedly are). It means that scientists draw their overarching conclusions about the universe from a broad reading of entire bodies of literature, not from individual studies. Single studies rarely yield revolutions; instead, our understanding evolves slowly through tedious, piecemeal work. Scientists want to understand the forest; journalists often just want to show you, dear reader, this one REALLY AWESOME IMPORTANT tree they just found. Those conflicting interests can lead to misleading reporting.
- Not all studies are created equal; some contain a variety of inadequacies that should give you pause about their conclusions. But journalists often do a poor job of reporting on these inadequacies, either because they don't do enough reporting to know they exist or because reporting them would undermine the big, bold conclusion the reporter wants to tell you about. Some studies have extremely small sample sizes. Some relied on rats or monkeys or whatever, but the journalist doesn't explain that the conclusion might not hold for humans. Actual studies published in peer-reviewed journals are often given equal air time to "studies" that some activist, lobbying group, or bozo in his garage threw together. Some studies lack important context or conflict with preexisting science—something journalists often fail to point out.