Database journalism or structured journalism is a principle in information management whereby news content is organized around structured pieces of data, as opposed to news stories.
Communication scholar Wiebke Loosen defines database journalism as "supplying databases with raw material - articles, photos and other content - by using medium-agnostic publishing systems and then making it available for different devices."
History and development of database journalism
Computer programmer Adrian Holovaty wrote what is now considered the manifesto of database journalism in September 2006. In this article, Holovaty explained that most material collected by journalists is "structured information: the type of information that can be sliced-and-diced, in an automated fashion, by computers". For him, a key difference between database journalism and traditional journalism is that the latter produces articles as the final product while the former produces databases of facts that are continually maintained and improved.
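Holovaty's argument can be illustrated with a minimal sketch (the record fields below are hypothetical, loosely echoing his chicagocrime.org example): once each report is stored as typed fields rather than prose, software can slice and aggregate it automatically along any dimension.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class CrimeReport:
    """A hypothetical structured record, maintained as data rather than
    published once as an article."""
    block: str
    category: str
    date: str  # ISO 8601

reports = [
    CrimeReport("100 N Main St", "burglary", "2006-09-01"),
    CrimeReport("100 N Main St", "theft",    "2006-09-03"),
    CrimeReport("200 W Elm St",  "burglary", "2006-09-02"),
]

# "Sliced and diced, in an automated fashion": aggregate by any field.
by_block = Counter(r.block for r in reports)
by_category = Counter(r.category for r in reports)

print(by_block.most_common(1))   # the block with the most reports
print(by_category["burglary"])   # count for one category
```

The same records could just as easily be grouped by date or plotted on a map; the database, not any one article, is the product that is "continually maintained and improved".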
Database journalism developed rapidly in 2007. A December 2007 investigation by The Washington Post ("Fixing DC's Schools") aggregated dozens of items about more than 135 schools in a database that distributed content on a map, on individual webpages, or within articles.
The importance of database journalism was highlighted when the Knight Foundation awarded $1,100,000 to Adrian Holovaty's EveryBlock project, which offers local news at the level of the city block, drawing on existing data. The Pulitzer Prize received by the St. Petersburg Times' PolitiFact in April 2009 was described as a "Color of Money moment" by Aron Pilhofer, head of the New York Times technology team, alluding to the 1988 Pulitzer-winning series "The Color of Money", which helped legitimize computer-assisted reporting (CAR). The comparison suggests that database journalism has been accepted by the trade and will develop much as CAR did in the early 1990s.
Seeing journalistic content as data has pushed several news organizations to release APIs, including the BBC, the Guardian, the New York Times, and National Public Radio in the United States. By doing so, they let others aggregate the data they have collected and organized. In other words, they acknowledge that the core of their activity is not story-writing, but data gathering and data distribution.
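As a rough sketch of what such an API exposes, the snippet below builds a query URL in the style of the Guardian's Open Platform content API and extracts records from a sample JSON payload (the endpoint, parameters, and response shape shown here are illustrative; the provider's documentation defines the real contract). The point is that the API returns structured records that third parties can filter and aggregate, not finished pages.

```python
import json
from urllib.parse import urlencode

# Illustrative endpoint in the style of the Guardian's content API.
BASE = "https://content.guardianapis.com/search"

def build_query(topic, api_key="test"):
    """Build a search URL for structured article data."""
    return BASE + "?" + urlencode({"q": topic, "api-key": api_key})

# A trimmed sample of the kind of structured payload such an API returns.
sample = json.loads("""
{"response": {"status": "ok",
  "results": [
    {"webTitle": "School budgets under scrutiny",
     "sectionName": "Education",
     "webPublicationDate": "2009-04-01T10:00:00Z"}]}}
""")

def titles_by_section(payload, section):
    """Re-aggregate the structured feed along one facet."""
    return [r["webTitle"] for r in payload["response"]["results"]
            if r["sectionName"] == section]

print(build_query("schools"))
print(titles_by_section(sample, "Education"))
```

Because the records carry typed fields (title, section, date), a third party can regroup them by any facet without parsing prose, which is exactly the aggregation these APIs enable.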
In the early years of the 21st century, some researchers expanded the conceptual role of databases in journalism, and in digital journalism or cyberjournalism. This conceptual approach treats databases as a defining feature of digital journalism, with a specific code of their own, rather than merely as sources or tools for producing journalistic stories, as most of the systematized studies of the 1990s had.
Difference from data-driven journalism
Data-driven journalism is a process whereby journalists build stories using numerical data or databases as a primary material. In contrast, database journalism is an organizational structure for content. It focuses on the constitution and maintenance of the database upon which web or mobile applications can be built, and from which journalists can extract data to carry out data-driven stories.
References
- Wiebke Loosen, The Second-Level Digital Divide of the Web and Its Impact on Journalism, First Monday, volume 7, number 8 (August 2002).
- Adina Levin, Database journalism - a different definition of “news” and “reader”
- Adrian Holovaty, A fundamental way newspaper sites need to change
- Rich Gordon, Data as journalism, journalism as data
- EveryBlock's page on newschallenge.org
- Aron Pilhofer, A PolitiFact Moment for Journalism
- Jeff Jarvis, APIs: The new distribution
- Suzana Barbosa and Beatriz Ribas, Databases in Cyberjournalism: Methodological Paths (2008)
- Adrian Holovaty, Announcing chicagocrime.org