Why quality journalism is winning

It was an epiphany: On the list of the most-read stories on nytimes.com during the last week of January, numbers one and three were thoroughly researched, expertly edited, and long, at more than 5,000 words. These articles, about how Apple’s remarkable success is tightly interwoven with the evolution of the global economy, are examples of top-class journalism. But stories like these aren’t supposed to make it to the top of the web’s most-read lists, even on a highbrow website. So what’s going on here?

I noticed these stories first on the New York Times’ free Android app. Then I saw them tweeted and retweeted; it was clear they were being shared and recommended, driving readers to the articles in cascades (maybe the NY Times people could share the traffic data?). Recommending and sharing has a social logic* that may well have played a part in the stories’ success. When you recommend something to your friends, followers, circles or whatever, it reflects back on you. If you want to belong to the group of people who consider themselves enlightened citizens, politically aware, curious, discerning and possessed of good taste, you recommend the investigation into the working conditions under which the iPad is produced, not the latest paparazzi scoop (in this case, it didn’t hurt that the imagined recipient of the recommendation would indeed be reading the article on an… iPad).

On a structural level, we might be witnessing the well-known circulation-spiral effect, now on an international scale. In its original version, the theory claims that in a competition between two newspapers, the bigger one gains more and more of the advertising market over time, enabling it to invest in quality journalism. The smaller newspaper gradually loses its share of the ad market, its quality declines, and eventually it often folds. In the digital market(s) there are any number of competitors, but it appears that size and quality matter here as well. The NY Times and others are able to reinvent themselves digitally while keeping the printed newspaper going as the main source of revenue for as long as possible, financing increasingly attractive digital content across platforms. Buoyed by what The Economist Group’s CEO terms “the mega-trend of mass intelligence”, combined with the recommendation effect, this strategy might well succeed.

You guessed it: my second example of quality journalism winning is The Economist. Its audience is growing strongly. In the first half of 2011 the weekly had close to 1.5 million subscribers, plus 100,000 digital subscribers, the large majority of whom paid for the iPad app. The Economist now has a unique opportunity to invest in the quality of its content, reinforcing the positive spiral effect. Especially interesting is how the website is used to maintain and increase interest in the weekly edition. Keen users will have noted how several new blogs have strengthened both quality and interactivity. The essential ingredients here are that the blogs are authored by the magazine’s own writers and that they contain lots of original research and observations. This is more expensive than the more common model of having external bloggers contribute, often for free. However, in-house bloggers make it easier to ensure a consistent level of quality and consistency with the magazine’s profile and brand values.

The Economist’s concept of quality journalism sets it apart from traditional thinking about the nature and value of news. By reading The Economist you expect to gain a better understanding of a topic; you do not expect to find the latest exposé of some secret document. In fact, many of the articles are well-written, well-edited summaries with an analytical edge, strictly speaking not news at all. Many old-school news journalists would find that utterly meaningless, but it might be just what many readers need. Classic news stories have a very short half-life and are seldom very interesting to read as texts. You probably will not recommend a typical five-paragraph news story about working conditions in Guangzhou. But a broader analysis of how China’s labour market affects your country’s economy, or an investigation into how Apple manages its supply chain, is something else.

*Thanks to Håvard for pointing out this dynamic to me.

Finding the source of an epidemic: Faster with open data?

German public health officials are working around the clock to find the source of the E. coli (EHEC) epidemic. Today alone, as many as 365 new cases were confirmed, bringing the total to more than 1,000. So far at least 14 deaths have been registered.

Germany is probably among the countries best equipped to deal with such a serious epidemic. However, looking at the media coverage and the way public health agencies are informing citizens, I think a different approach could speed up the crucial process of finding the source.

A similar, though more slowly developing, epidemic occurred in Norway in 2006 (link in Norwegian). Eighteen people, 16 of them children, were hospitalized with E. coli; one of the children died. Research by the public health agencies, including of course interviews with patients and their families, initially pointed towards ground meat as the culprit. But this turned out to be a false lead (as many of us following the news had suspected). Several weeks went by before the bacterium was found in “morrpølse”, a cured sausage, and traced back to a specific production facility.

The story and the “ground meat hypothesis” dominated the media, and this was a hot topic for discussion around breakfast and lunch tables all over the country. At the time, I wondered why the public health experts didn’t disclose more of their findings. If they had published all the data and information from interviews and research (anonymized for privacy, of course), then other experts, and smart people in general, could have contributed their own analysis. Perhaps they could have pointed to leads the officials had overlooked, or patterns they hadn’t observed. Can we rule out, for example, that some IT experts have better tools at their disposal than the officials in charge, or at least different ones? The data should of course be published in English, so that foreigners could also weigh in.
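To make this concrete, here is a minimal sketch of what preparing such a release could look like. The file names, columns and fields are all hypothetical, my own illustration rather than how any health agency actually stores its data:

```python
import csv
import hashlib

SALT = "rotate-this-secret-per-release"  # prevents trivial re-identification

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:10]

def age_band(age: int) -> str:
    """Coarsen exact ages into 10-year bands to reduce re-identification risk."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

# Hypothetical input: one row per patient interview.
with open("interviews.csv", newline="", encoding="utf-8") as src, \
     open("cases_public.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(
        dst, fieldnames=["case_id", "age_band", "district", "onset_date", "foods_reported"]
    )
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "case_id": pseudonymize(row["national_id"]),  # never publish the raw ID
            "age_band": age_band(int(row["age"])),
            "district": row["district"],                  # coarse geography only
            "onset_date": row["symptom_onset"],
            "foods_reported": row["foods_eaten"],         # free-text exposure list
        })
```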

One obvious counter-argument is that asking the public for ideas and analysis would open the floodgates and confuse rather than help researchers. But this is again a question of having the right tools available for filtering and analysing contributions. After all, crowdsourcing research processes has been tried before.
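As a toy illustration of such filtering, assuming contributions arrive as short free-text tips (the watchlist, function and example data below are all invented for this sketch), one could start by counting which suspected sources recur across independent submissions:

```python
from collections import Counter

# Hypothetical watchlist of suspected exposure sources.
SUSPECTS = {"ground meat", "cucumber", "sprouts", "salad", "raw milk"}

def recurring_suspects(tips: list[str], min_mentions: int = 3) -> list[tuple[str, int]]:
    """Count how often each suspected source is mentioned across tips,
    keeping only those that recur often enough to be worth a closer look."""
    counts = Counter()
    for tip in tips:
        text = tip.lower()
        for suspect in SUSPECTS:
            if suspect in text:
                counts[suspect] += 1
    return [(s, n) for s, n in counts.most_common() if n >= min_mentions]

tips = [
    "Everyone who got sick at our school had eaten salad with sprouts.",
    "I suspect the sprouts from the market stand.",
    "We only ate ground meat that week.",
    "Three colleagues fell ill after a buffet that served sprouts.",
]
print(recurring_suspects(tips, min_mentions=2))  # [('sprouts', 3)]
```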

In the ongoing German epidemic, online media could play a constructive part by starting such a process, asking the audience for advice and ideas. At the very least they should start by offering more in-depth, interactive presentations of how the epidemic is spreading. Detailed maps would be interesting and helpful in themselves, and at least some government agencies are already providing quite specific information about where cases originated (Schleswig-Holstein, pdf). The media could in general use this opportunity to file requests for data and demonstrate the potential of data journalism.
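Even without a full mapping stack, the first step is simple: aggregate confirmed cases per district and day into a tidy table that any mapping or charting tool can consume. A sketch, again with hypothetical file and column names:

```python
import csv
from collections import defaultdict

# cases_public.csv is the hypothetical anonymized release sketched above.
cases_by_district_day = defaultdict(int)
with open("cases_public.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        cases_by_district_day[(row["district"], row["onset_date"])] += 1

# A tidy table like this is exactly what mapping libraries and
# newsroom charting tools expect as input.
for (district, day), n in sorted(cases_by_district_day.items()):
    print(f"{day}\t{district}\t{n}")
```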

***
UPDATE June 5: The suggestion above more or less takes for granted that the responsible government agencies, hospitals and so on at least have an efficient way of collecting and disseminating information among themselves. But this is doubtful, judging by the past week’s criticism in the German media of several aspects of how the epidemic is being handled. Hospitals complain about the late arrival of questionnaires to be used in interviews with patients. The Robert Koch Institut does not disclose much information on how it is working to find the source of the epidemic, one hospital director says. By tomorrow, June 6, a new government Internet platform for sharing information between agencies will be launched, another indication that the information-infrastructure part of dealing with the epidemic has had flaws so far.

UPDATE June 15: On Zeit Online’s Data Blog, some of the same questions are raised and debated, with comments from the Robert Koch Institut.

Steal this story vs. please pay here: The coming debate about public service media

The re-emergence of “paid content” over the past couple of years, most aggressively marketed by Rupert Murdoch, has dominated media coverage. But in the shadow of The Times’ new paywall and the Apple apps craze, another development has taken hold: an approach to news publishing that has the potential to reinvent the idea of public service media. This is the idea of promoting (almost) unrestricted re-use and re-publication of your material in order to achieve the greatest possible impact for your journalism. ProPublica is one of the news organizations to embrace this principle, in its invitation to steal its stories. Logically, it uses the established Creative Commons licensing system, but implements it in an innovative way. Instead of just a discreet Creative Commons logo attached to stories, there is a “Republish” button that produces the text with HTML tags, ready for pasting into a publishing tool: exactly the kind of extra service that has always been needed to unleash the potential of Creative Commons.
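The mechanics behind such a button are simple. A minimal sketch of the idea (my own illustration, not ProPublica’s actual code; the function, URL and story details are invented):

```python
from html import escape

def republish_snippet(title: str, body_html: str, url: str,
                      source: str = "ProPublica") -> str:
    """Return a self-contained HTML fragment a partner site can paste
    into its CMS, with attribution and license attached."""
    return (
        f"<h1>{escape(title)}</h1>\n"
        f"{body_html}\n"
        f'<p>This story was originally published by '
        f'<a href="{escape(url)}">{escape(source)}</a> '
        f"under a Creative Commons license.</p>"
    )

# Hypothetical story, purely for illustration.
print(republish_snippet(
    "How the iPad Is Made",
    "<p>Story text goes here...</p>",
    "https://www.propublica.org/example",
))
```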

The ambitious US journalism startups that have sprung up recently, wholly or partly funded by foundations, in essence share the “steal this story” approach:

Instead of planning how to get the story published before word of it leaked, the excited editors started throwing out ideas for how they could share Johnson’s reporting with a large array of competitive news outlets across the state and around the country. No one would get a scoop; rather, every outlet would run the story at around the same time, customized to resonate with its audience, be they newspaper subscribers, Web readers, television viewers, or radio listeners.

The quote describes California Watch, which has also published a case study of itself.


The welcome comeback of the image

The internet saved our culture of writing, it has often been claimed. In this narrative, the image saturation caused by television had reached dangerous levels by the mid-1990s. Enter the commercial internet, with email and the web. With web 2.0 at the latest, everyone is writing all the time. Hurrah!

People like David McCandless bring a fresh approach that questions this now-received wisdom. By visualising data instead of just referring to them in text, modern infographics can be more enlightening than acres of text, not less:

I’ve spent the last year exploring the potential of information visualisation for my website and a book. I’ve taken loads of information and made it into simple, colourful and, hopefully, beautiful “visualisations” – bubble charts, concept maps, blueprints and diagrams – all with the minimum of text. I don’t just mean data and statistics. I love doing this with all kinds of information – ideas, issues, stories – and for all subjects from pop to philosophy to politics. Personally, I find visualisations great for helping me understand the world and for sifting the huge amounts of information that deluge me every day.

More of his visualisations can be enjoyed at his Flickr page.

Information and data visualisation has come to seem increasingly important to me, as I have spent a lot of time over the past few months on the topic of opening up government data (project blog in Norwegian). Clearly, it’s possible to do harm with data, as with all kinds of information. But the solution in an open society cannot be to lock down government data. That’s why it’s so important to have an ongoing discussion about how data can be used to promote a better understanding of society, as McCandless does with his infographics. That he helps to improve journalism at the same time isn’t actually a drawback these days.
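The barrier to trying this at home is low, too. A minimal bubble-chart sketch in Python with matplotlib, using made-up numbers purely to illustrate the form:

```python
import matplotlib.pyplot as plt

# Made-up example data: budget (x), public attention (y), bubble size = coverage.
topics = ["health", "defence", "culture", "transport"]
budget = [120, 90, 15, 60]        # billions, invented
attention = [70, 40, 55, 25]      # survey score, invented
coverage = [800, 1500, 300, 400]  # bubble area ~ news mentions, invented

plt.scatter(budget, attention, s=coverage, alpha=0.5)
for topic, x, y in zip(topics, budget, attention):
    plt.annotate(topic, (x, y), ha="center")
plt.xlabel("Budget (invented units)")
plt.ylabel("Public attention (invented units)")
plt.title("A toy bubble chart in the McCandless spirit")
plt.show()
```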

Real-time challenge

Bjarke Myrthu sees a business opportunity for “old media” in what Google’s Eric Schmidt has identified as “the challenge of the age”: learning how to rank user-generated, real-time information. Bjarke:

Part of what he calls “real-time social content” is what old media is calling “breaking news”. In other words Google is working hard at becoming the best at collecting and organizing breaking news produced by all of us. While most of us had no idea what Google was about to do first time around (I remember thinking it was a great service but too bad they would never make money), this time around the Newspapers and the rest of the media industry actually have a chance to compete. Why should the best brands in old media not be able to create a great search technology and future business model for breaking news?
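What might such ranking look like in practice? A common family of approaches, and this is my sketch, not Google’s or any newsroom’s actual algorithm, scores each item by engagement and lets the score decay with age, so that fresh items with traction outrank stale ones:

```python
import time

def realtime_score(votes: int, created_ts: float,
                   gravity: float = 1.8, now: float | None = None) -> float:
    """Engagement divided by a power of age: new items with traction
    outrank old items with more total votes."""
    now = time.time() if now is None else now
    age_hours = max((now - created_ts) / 3600.0, 0.0)
    return votes / (age_hours + 2) ** gravity

# Invented example items: (name, votes as an engagement proxy, created timestamp).
now = time.time()
items = [
    ("two-day-old analysis", 500, now - 48 * 3600),
    ("breaking report, 1h old", 60, now - 3600),
]
for name, votes, ts in sorted(items, key=lambda i: -realtime_score(i[1], i[2], now=now)):
    print(f"{realtime_score(votes, ts, now=now):7.2f}  {name}")
# The hour-old report wins despite far fewer votes.
```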

Time, money and skill

ProPublica editor Paul Steiger is fairly optimistic on behalf of investigative journalism in the web era:

Last year, a 20-something, self-taught Internet genius named Amanda Michel mobilized hundreds of politically active citizens to supply info for her “Off the Bus” report on the Huffington Post Web site. When Candidate Obama voiced the notion that some folks who were losing out in the global economy were clinging to such things as religion and guns to compensate, Michel’s network captured it and we soon all heard about it. Without that network, we might never have known, because reporters weren’t invited into the area where Mr. Obama spoke. Michel now works for ProPublica and has put together a team of more than 2,200 volunteers who will do similar reporting for us. This army permits us, for instance, to track progress on 500 representative federal stimulus projects in real time, even though our own news staff numbers just 32.

We will still need journalists’ special skills:

The process of finding and communicating what we used to call news may no longer require newspapers, at least not as we have known them, as seven-day-a-week, ink-on-paper compendiums of new information on a broad range of subjects. But the process will still require journalism and journalists, to smoke out the most difficult-to-report situations, to test glib assertions against the facts, to probe for the carefully contrived hoax. These are reporting activities that take a great deal of time, money, and skill.

Guardian hiring “beatbloggers” for local project

From the Guardian’s digital content blog:

Starting with Leeds, Cardiff and Edinburgh, guardian.co.uk is planning to launch a local news project in a small number of locations. At the moment guardian.co.uk is looking for bloggers – with journalistic qualifications “desirable” – to help cover community news and report on local developments. The project will emphasise local political decision-making and is scheduled to go live next year.

The job description for bloggers:

Working from your home, or anywhere with WiFi, as a ‘beatblogger’ you will lead the Guardian’s innovative approach to community news coverage in Leeds. This will include reporting on local meetings and events with an emphasis on local political decision making, identifying issues of importance to local residents and signposting information and news provided via other sources. You will be willing to collaborate with others to create a vital resource for the city.

The post-paper newsroom

[Photo: Dave Askins of The Ann Arbor Chronicle]

The photo says almost everything, but make sure to read Nieman Journalism Lab’s piece about The Ann Arbor Chronicle as well:

There’s no fixed publication schedule for full-length stories, said Morgan, a former business and opinion editor for the defunct News. Rushing to get the story first is outdated and doesn’t really matter to readers, she said. “The assumption is, well, we’re going to get it done as soon as we can given everything else we’ve got going,” she said. (my emphasis)

(Nieman Lab is published with a CC BY-NC license, like this blog).

New owner and expansion for EveryBlock

MSNBC has bought EveryBlock, which means that the previously foundation-funded, innovative hyperlocal/microlocal site built by Adrian Holovaty & co. can continue and even expand:

…it means that we’ll have resources to expand EveryBlock profoundly. MSNBC.com is the most-visited news Web site in the U.S. and is in solid financial shape in a time when news organizations around the world are struggling. We’re excited about the possibilities of pointing a massive audience at EveryBlock and having the resources to beef up our technological infrastructure and staff. Our site is very young — it’s only been live for about a year and a half — and we have a lot of ideas and expansion plans. I often tell friends and industry colleagues that EveryBlock in its current incarnation is only about 5 percent of what we want to do with it. We’re now in a position to make this happen.