A Probing Look at Local News Comes to Some Disturbing Conclusions


To accurately gauge the quality of digital local news and assess its impact on communities, you would have to do what no one has wanted, or dared, to do. You would have to look at enough websites, enough stories and videos, in enough communities until you were glassy-eyed.

We can thank the four-person research team from the DeWitt Wallace Center for Media & Democracy at Duke University, led by Professor Philip Napoli, for doing the daunting work—compiling “1.6 million documents (html files, pdfs, images, audio files, etc.), 2.2 terabytes of total data, and an archive of over 16,000 news stories.”

I asked Napoli what he did to keep the team motivated through their slog: “Paid them well. Plus, they’re doctoral students. They actually enjoy this stuff. And it took a long time.” Eight months, to be exact.

The researchers did set some boundaries. They decided to focus on news generated in communities ranging in population from 20,000 to 300,000. From their base of 493 communities, they randomly selected a hundred cities and other municipalities as their focus.

Then they went to work examining a week’s worth of output from all the news providers in the hundred communities—primarily daily newspapers and sites operated by radio and TV stations, with most of the balance being “pure-plays” with no print legacies. Every one of the 16,000-plus URLs is listed here.

In what they call their “most striking finding,” the researchers found that “only about 17% of the news stories provided to a community are truly local—that is, actually about or having taken place within the municipality.”

Among the other unsettling findings:

“… Eight communities contained no stories addressing critical information needs. Twelve communities contained no original news stories. Twenty communities contained no local news stories. Thus, a full 20% of the sample of communities was completely lacking in journalism about their communities.”

To answer some concerns I had about the study, I put these questions to Napoli:

The study looked at a variety of digital news sites, including ones operated by radio and TV stations. Many of those sites offer comparatively low-quality news, based on your three criteria. As a result, wouldn’t your quantification-focused methodology skew a community’s measured quality of news coverage downward even if that community had at least one site with high-quality news?

That’s a good question, one that actually points to the next stage in our analysis. In this first report, we focused on the community as the unit of analysis, and on the overall quantities of original, local, and critical-information-need-addressing journalism in those communities.

One thing we did in our pilot study (and plan to do with these data) is to calculate concentration ratios for individual communities. These give us an indicator of the extent to which the production of original, local, and critical-information-need-addressing journalism is concentrated within a few outlets or more widely dispersed.

We’re also going to be analyzing whether, and to what extent, these outputs are concentrated within certain types of outlets (e.g., whether local newspapers are still the primary source of original reporting). So we’ll know more about the distribution of the production of journalism within communities after the next phase of our analysis.

The 2017 study “Local News in a Digital World: Small-Market Newspapers in the Digital Age” by journalism scholars Christopher Ali and Damian Radcliffe found local news in smaller communities to be better than commonly perceived. It said many news sites in smaller communities stressed solutions to issues rather than just presenting what your study calls “critical information.” Do you see your study’s findings in conflict with those of the earlier study?

No, not necessarily, especially since I believe their study focused exclusively on newspapers, while ours cast a somewhat wider net. And your point about their work highlights some of the ways individual news stories can be analyzed more deeply than we did in our analysis.

Given the scale we were working at, our content analysis was fairly basic (original, local, addressing critical information need). Our hope is that other researchers will take advantage of the massive content archive that we’ve made accessible and conduct further analyses of the stories (whether they stress solutions, what types of sources they rely on, whether they provide calls to action, etc.). The potential is there to dig deeper. But at the same time we think our analysis taps into some key criteria that matter.

The Ali-Radcliffe study also stressed what it said was the importance of partnerships and collaboration among local news providers. That was not one of your criteria. Any comment?

Correct, that’s not one of the variables we gathered for our analysis. Should we later seek to explain the performance of individual outlets, that would certainly be relevant.

I’m curious—which are the 12 communities that had no original news stories, as you defined them, and the 20 communities that had no local news stories, also as you defined them?

We’re not providing data on individual communities at this point, as we don’t want the report to be used to generate criticisms of individual media outlets (that’s not the point of the study). The goal was to provide a generalizable study (just like a survey), not an evaluation of specific communities.

Can you quantify how many “news deserts” there are among the hundred communities you spotlighted?

Figure 1, where we plot out the number of “zero story communities” across our various measures, is pretty useful in terms of assessing the prominence of news deserts within our sample. But I think exactly what constitutes a news desert remains open to discussion. Hopefully this report can help generate that discussion.

Your study implies that inadequate local news has a harmful effect on communities. But to establish a causal link, wouldn’t you have had to demonstrate how local democracy, from public opinion to governance, was adversely affected, which the report didn’t attempt to do?

We didn’t explore that causal link, but a number of other studies have. You can find a good summary of this research here. If we’re able to obtain the resources to expand our research in this direction, we may do that as well.

***

Napoli answered most of my key concerns about the report. But I still think it’s too narrow in defining what is and isn’t local news. An example: The Daily Journal covers a group of cities and other municipalities in San Mateo County on the San Francisco Peninsula. One of them, the City of Belmont (pop. 25,835), was one of the study’s random selections. The Daily Journal’s coverage for Belmont also includes stories from nearby communities in San Mateo County. That coverage, I’m sure, earned Belmont a low rating in the study. But residents of Belmont surely want, and may need, to know what’s going on in the communities clustered with theirs.

I also think the report focuses too much on the negative. The privately owned TAPinto network of more than 70 news sites—most in North Jersey and the rest in New York State north of New York City—does an admirable job of meeting all three of the report’s benchmarks.

But “Assessing Local Journalism” is an important milestone in the study of local journalism, especially in setting more defined parameters for what constitutes quality news. We are now much better prepared to understand the continually evolving relationship between local news and local democracy, and why each needs the other now more than ever.

Tom Grubisich (@TomGrubisich) has written “The New News” column for Street Fight since 2011. He is also working on a book about the history, present, and future of Charleston, S.C.
