Memory Practices in the Sciences

Bowker, Geoffrey C. Memory Practices in the Sciences. Cambridge, Mass.: MIT Press, 2005. Print.

Introduction

This book explores how information technology has become connected with the production of knowledge (science) over the past 200 years, starting with the Industrial Revolution in England. Bowker focuses on the work done to create a "perfect memory" and the design of databases "that contain traces of the past that are currently cast into oblivion" (pg. 4). He discusses how all humans leave traces, in both physical (e.g., books, notes, signatures in visitor logs) and digital (e.g., citations, web pages that decay, emails) spaces. These individual and minute experiences are aggregated through the eyes of others. The research questions guiding this book are:
- "How do scientists figure their own pasts - both as creatures on earth and in terms of disciplinary lineage?"
- "How do scientists figure the past of their entitities - the earth, the climate, the extinction event?" (pg. 6).

He discusses how the traces we leave are often not authentic but a "negotiation between ourselves and our imagined auditors" (aporia) (pg. 7). In other words, people act in certain ways when they imagine they are being watched. Recording memory therefore occurs embedded within a range of practices that Bowker calls "memory practices." Memory, he writes, is not solely an act of consciousness about what can be called to mind, but may also be non-conscious. In archiving (storing things in memory), there is no guarantee that the memory will ever be recalled; archival practice often involves an instance of recording without any intention of ever revisiting that recording. Archives are "disaggregated classifications that can at will be reassembled to take the form of facts about the world" (pg. 18). A great deal of data has also been lost in transitions between information technologies.

He also discusses the role of standardization in memory, which he describes as sequential: when a "radically new" standard is introduced, it creates a new starting point in history. As a new standard is introduced, new practices, trainings, applications, and so on can be developed around it.

Chapter 3: Databasing the World

Bowker argues that the most powerful technology in terms of the control of our world over the past 200 years is the database. Starting in the 1830s with the rise of statistics and archives commissioners, ordering information into lists using classifications became the "key to both state and scientific power" (pg. 108). He argues that the computer revolution was likely driven by this drive to database; the computer has simply allowed us to database yet more data than before.

He moves on to examine scientific practice and the archiving of scientific information, with the goal of being able to understand processes of change. He calls this "memory as remembering what happened and memory as making present again" (pg. 110). In discussing archiving data, he points out that scientific papers do not offer enough information to repeat or replicate experiments, necessitating access to the original data.

He writes that standardization and classification are both necessary for developing working infrastructures, with each layer of infrastructure requiring its own set of standards. Further, the standards that win are not always the best ones (e.g., VHS over Betamax, a victory of politics and capital), and what counts as best can shift (e.g., the QWERTY keyboard, which worked well to prevent key jamming on manual typewriters but puts the burden on the left hand on modern keyboards). Often, standards simply become so entrenched that they are difficult to change. As standards are implemented, the "stronger get stronger," in that development tends to focus on those with the most users or consumers.

He describes a continuum for standard setting: at one end, one standard fits all, as imposed by governments or monopolies; at the other end, thousands of standards, as in the world of APIs. The ideal standards would allow people to reliably find a piece of data across multiple databases. However, Bowker notes that studies have shown people do not dedicate work to standardizing their data beyond what is immediately useful to them. Though standards are a technical necessity, the social work of archiving and maintaining databases is laborious and difficult.
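To make the cross-database problem concrete, here is a minimal sketch (not from Bowker; the schemas and field names are invented) of what reliably finding the same piece of data across two databases demands: an explicit crosswalk mapping each local schema onto a shared one.

```python
# A minimal sketch of cross-database lookup. The schemas and field
# names below are hypothetical; the point is that each local database
# must be explicitly mapped onto a shared standard before search works.

# Two "databases" recording the same kind of observation differently.
db_a = [{"species": "Phascolarctos cinereus", "obs_date": "1998-07-02"}]
db_b = [{"taxon_name": "Phascolarctos cinereus", "collected": "02/07/1998"}]

# Crosswalks: local field name -> shared field name.
crosswalk_a = {"species": "taxon", "obs_date": "date"}
crosswalk_b = {"taxon_name": "taxon", "collected": "date"}

def normalize(record, crosswalk):
    """Rewrite a local record into the shared schema."""
    return {shared: record[local] for local, shared in crosswalk.items()}

def find_taxon(name, databases):
    """Search every database for records about one taxon."""
    hits = []
    for db, crosswalk in databases:
        hits += [normalize(r, crosswalk) for r in db
                 if normalize(r, crosswalk)["taxon"] == name]
    return hits

print(find_taxon("Phascolarctos cinereus",
                 [(db_a, crosswalk_a), (db_b, crosswalk_b)]))
```

Note that even after the field names are reconciled, the two date formats still disagree: mapping names is only one layer, and each layer needs its own standard, which is exactly the work people rarely do beyond their immediate needs.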

He also discusses the knowledge economy of scientific work in three ways: (1) control of knowledge; (2) privacy; (3) patterns of ownership. Control of knowledge refers to "who has the right to speak in the name of science"; where traditionally only experts had authority over their domains, the Web has flattened knowledge hierarchies, but also brought the need for more critical thinking (pg. 117). Privacy has become a major concern in the information economy. And new patterns of ownership of information have led to the privatization of knowledge in a knowledge-based economy, with research more closely tied to industry. Of course, who owns what knowledge has always been political, considering the dominance of Western science over indigenous knowledges. "Information infrastructures such as databases should be read both discursively and materially; they are a site of political and ethical as well as technical work" (pg. 123). He discusses the globalization of information as a "second wave of colonialism" focused on information and information policies.

Scientific knowledge needs to be shared across disciplines, yet scientists are not trained to do this, and the computer scientists building technical infrastructures are not translators.

Chapter 4: The Mnemonic Deep

This chapter focuses on moving from "the precise conceptions of metadata" (the data about the data) to the "messy daily practice" of scientific work (pg. 138). The Federal Geographic Data Committee (FGDC) outlines classes of metadata, from Class I to Class V (pg. 139). However, Bowker notes, not all data can be easily captured in these five classes, and some contextual information may be lost when things are added or subtracted. What happens to data between collection and publication has been studied by Goodwin and Latour; however, what happens from raw data to databases to analysis has been little studied (pg. 140). He writes that "the development of metadata standards should go hand in hand with an exploration of modes of scientific communication" and that decisions around data standards "quickly become issues of historiographical import within the sciences they concern" (pg. 140).
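As a toy illustration of "data about the data" (this sketch does not reproduce the FGDC classes; all fields are invented), consider a small dataset and the separate record that describes it:

```python
# A toy illustration of "data about the data." The fields below are
# invented for this sketch and do not reproduce the FGDC classes.

# The data itself: observations.
data = [
    {"site": "plot-7", "soil_ph": 6.4},
    {"site": "plot-9", "soil_ph": 5.9},
]

# The metadata: a description of the dataset as a whole.
metadata = {
    "title": "Soil pH survey",                # what the dataset is
    "collected_by": "field team",             # provenance
    "method": "handheld pH meter",            # how values were produced
    "units": {"soil_ph": "pH units"},         # how to read each column
    "note": "plots resampled after rain",     # context a fixed form may omit
}
```

The last field is the kind of contextual information Bowker worries about: if the metadata standard has no slot for it, it is lost somewhere between collection and analysis.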

When attempting to database data, it becomes necessary to classify it. What Bowker and Star termed a bootstrapping problem becomes a barrier: the researcher may have been the first to collect this data, so how does one classify it? Some data can only be classified once someone else has already built a classification for it; how, then, do you back-apply that classification system? Poor classification often becomes an issue when a dataset is adopted outside its localized field or database. Those who abandon the scientific literature might instead create "arbitrary coding systems." Further, the same artifacts may carry different meanings for different disciplines, leading to different classification systems (e.g., for agriculturists, soil is classified by which plants it is good for; for ecologists, soil also includes rock).

He moves from looking at things that do get classified to things that do not. Some entities are overlooked "because they do not lead to spectacular science or good funding opportunities" (pg. 146). For example, in biodiversity, there are "charismatic" species that appeal to the public and funding agencies - like the koala versus a species of seaweed. Scientists may be more likely to get funding to study charismatic species, and new people may enter science to study them, creating a feedback loop that "skews our knowledge of the world" (pg. 146). The things that do not get classified are rendered invisible. Databases often represent our social and political economy accurately, in terms of what matters to us, at the expense of whatever is othered.

Reclassification is a long, expensive process; even for digital databases, there is no single button or approach for an update. Who has the authority to rename something, and how does that renaming affect everything recorded before it? Some communities, like the botanical community, have developed strict procedures for naming, yet these remain difficult to carry out in practice. Even in a world with perfect naming procedures, dealing with old data remains a barrier.
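A minimal sketch of why there is no single "rename" button (the species names are real botanical synonyms used only for illustration; the data structures are invented): records filed under the old name cannot simply be rewritten, since publications and other databases still cite them, so the old name must be carried forward as a synonym.

```python
# A sketch of why renaming is not a one-button update: old records
# keep the old name, so queries must consult a synonym table.
# Data structures here are invented for illustration.

records = [
    {"name": "Aster novae-angliae", "year": 1987},
    {"name": "Symphyotrichum novae-angliae", "year": 2004},
]

# accepted name -> the names it superseded
synonyms = {"Symphyotrichum novae-angliae": ["Aster novae-angliae"]}

def find(accepted, records, synonyms):
    """Return records filed under the accepted name or any old synonym."""
    valid = {accepted, *synonyms.get(accepted, [])}
    return [r for r in records if r["name"] in valid]

# Both the 1987 and 2004 records are returned without rewriting the
# 1987 entry, which may be cited elsewhere under its old name.
print(find("Symphyotrichum novae-angliae", records, synonyms))
```

The synonym table itself then becomes one more artifact to maintain, which is part of why reclassification stays long and expensive.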

Chapter 5: The Local Knowledge of a Globalizing Ethos

This chapter focuses on the "universal nature" of ordering memory practices while also examining the "local" approaches to universalizing.

In discussing globalizing practices, Bowker asks what is lost. He discusses how collapsing multiple categories (in this case, registers) into a single classification (in this case, a currency) is problematic in that the result is filled with contradictions (e.g., what is natural vs. what is human). He discusses how the tree of life is meant to render the web of life into countable units of biodiversity, but there is "fierce debate" over the nature of these units. Further, temporality is collapsed into "flat time," and mapping the complexity of the world becomes linear and featureless. He calls this a "modality of implosion," where representations "are imploded into a singular form rather than exploded into full detail" (pg. 216).

He then discusses the desire to preserve local knowledges in local forms, and the difficulty of doing so. For example, databasing knowledge in local languages can render that data "unusable," as has been previously discussed about other categories. (There are also some fascinating excerpts about the language used around knowledges in certain cultures, such as the terms "indigenous" or "traditional" being offensive or rejected in some cultures (pg. 219).)