Race After Technology: Abolitionist Tools for the New Jim Code

Ruha Benjamin. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. John Wiley & Sons. ISBN: 978-1-509-52643-7

** indicates a chapter particularly salient / insightful to my research agenda

Introduction: The New Jim Code

The introduction to Race After Technology orients the reader to the underlying social inequities embedded within our society that are also present in our technologies - and always have been. My notes are structured around important concepts that jumped out at me during this chapter.

Social and Technological Codes

Benjamin starts out by introducing the notion of names as data - cultural, social, and historical data. Naming is tied to many aspects of our identities and histories, and names are also perceived and labelled by outsiders - including outside technologies, as in marketing, security, resume parsing, and policing. The normalization of white names and the seeming invisibility of white culture offer immunity from discrimination, othering those who stand out against this normalization (e.g., applicant discrimination based on names) (pg. 4). Technologies reify and amplify these kinds of culturally encoded biases.

Benjamin compares social codes to the codes of data and technology. Social codes, like racially coded names being treated as synonymous with risk, are reinforced through computer code, such as California's gang database (which even includes the names of babies). Both social codes and computer code are difficult and arduous to change, and both are laden with political power. Race is not simply inherent in technology, but operates as a technology.

The New Jim Code

Benjamin defines the New Jim Code as: "The employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective and progressive than the discriminatory systems of a previous era." (pg. 5-6)

Benjamin introduces the book not by focusing on some "sinister story of racist programmers scheming in the dark corners of the web," but rather, "that the desire for objectivity, efficiency, profitability, and progress" (pg. 7) contributes to bias and inequity in technology. Throughout the book, she reviews technologies that "explicitly amplify hierarchies, ... ignore and thus replicate social divisions, and ... aim to fix racial bias but end up doing the opposite" (pg. 8).
Four Dimensions of the New Jim Code:
  • Engineered inequity: the explicit amplification of social hierarchies.
  • Default discrimination: how discrimination grows out of socially and historically ignorant design processes.
  • Coded exposure: how "being watched (but not seen)" (pg. 47) enables differing forms of visibility.
  • Technological benevolence: tech products that offer fixes for social bias yet end up reproducing or deepening it.

"Objective" Code and the Monetization of Identity

Benjamin argues that the "datafication of injustice" (pg. 8) is not simply attributable to some bygone era; it is also weaponized through the notion of "progress." She discusses how the ethos of large tech corporations has come to shape public policy decisions, and how data-sharing has created a more efficient method of marginalization. Diversity is monetized, while the needs of Black users go largely unconsidered (as in the example of not bothering to include African American English in Siri's dialect training) (pg. 28). Through targeted and "personalized" algorithmic advertising, "diversity" is represented only shallowly, at the surface.

"Economic recognition is a ready but inadequate proxy for political representation and social power," (pg. 19) Benjamin writes. "Celebrating diversity, in this way, usually avoids sober truth-telling so as not to ruin the party" (pg. 20). "Automated systems are alluring because they seem to remove the burden from gatekeepers ... Profit maximization, in short, is rebranded as bias minimization" (pg. 30).

Human biases are often positioned as fixable, or able to be overcome, by technological "progress." Benjamin states: "Vows of colorblindness are not necessary to shield coded inequity if we believe that scientifically calculated differences are somehow superior to crude human bias" (pg. 21). Thus, if we believe technology is inherently objective, we can replicate social inequality while paying it no mind. This way of viewing technology also ignores the roles and intentions of its creators: "The notion that tech bias is 'unintentional' or 'unconscious' obscures the reality - that there is no way to create something without intention and intended user in mind" (pg. 28).

Black Boxes Operationalize and Obscure Inequality

Returning to the idea that race operates as a technology, Benjamin argues that when identity and identity proxies are operationalized inside algorithmic black boxes, inequality becomes both harder to contest and more efficient to produce (pg. 33).

A Thin Description Approach

Benjamin closes by proposing "thin description" as a method: being racialized is "to be encountered as a surface" (pg. 45). Rather than claiming an all-knowing analysis, thin description acknowledges the changing surface of "coded inequity" across technological and social systems.

Chapter 1 | Engineered Inequity: Are Robots Racist?

Engineered inequity: the explicit amplification of social hierarchies.

The farce of objectivity and the incontestability of black-box technology attribute unfair outcomes to the character of the person affected rather than to the infrastructure of the technology. If machine learning is meant to replicate theories of the human brain, Benjamin asks: "is there only one theory of the mind, and whose mind is it modeled on?" (pg. 52). To rethink technology, we must first rethink race (and other tools of social stratification). By comparing seemingly different technologies with seemingly different stakes, she argues, we can see how "Blackness can be both marginal and focal to tech development" (pg. 68-69).

Benjamin argues against the notion that machines are themselves neutral and that racism exists only in the meanings and tasks given to them by human beings - because racism is not always explicit and intentional (e.g., the stereotype that racism is embodied only in neo-Nazis and the KKK). Racism also occurs without self-conscious intent. "Legal codes, financial practices, and medical care" are likewise non-organic, yet rooted in racism (pg. 61). Thus, robots and technologies themselves can be racist, and can learn to communicate in the same coded, implicitly racist language as humans. "Discrimination is displaced and accountability is outsourced in this postdemocratic approach to governing social life," she writes (pg. 54).

Though the "robot" has historically been tied to dehumanization, explicitly tied to racialization. The word "robot" is etymologically linked to servitude. Slavery is romanticized in the de-racialized form of the robot (pg. 56). In glorifying the robotic slave aesthetic since the birth of computing, we see "the implicit Whiteness of early tech culture" (pg. 56).

She also criticizes sci-fi for imagining the harms of technology only once the same harms that have long impacted Black people finally threaten white ones: "the anxiety that, if 'we' keep going down this ruinous road, then we [white people] might be next" (pg. 76, 118).

Chapter 2 | Default Discrimination: Is the Glitch Systemic?

Default discrimination: how discrimination grows out of socially and historically ignorant design processes.

In this chapter, Benjamin starts with the definition of a "glitch": "a minor problem, a false or spurious electronic signal, a brief or sudden interruption or irregularity, [a slippery place]" (pg. 77). Glitches are "considered a fleeting interruption of an otherwise benign system" (pg. 80). Programmers may focus on what they view as a technical solution (e.g., parsing Roman numerals) while inadvertently reinforcing social problems (e.g., misreading Malcolm X's name as "Malcolm Ten"). Benjamin argues that glitches offer insight into how systems are designed - "evidence illuminating underlying flaws in a corrupted system" (pg. 80). Patching a glitch only temporarily delays the degradation of the entire system, like continuously patching a worn and tattered cloth.
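
The Malcolm X misreading is worth making concrete. Below is a minimal sketch - my own illustration, not code from the book - of a "helpful" name normalizer that expands trailing Roman numerals. It behaves as intended for "Henry VIII" yet confidently mangles "Malcolm X," because its designers' notion of a normal name never included his:

```python
import re

# Hypothetical normalizer that expands a trailing Roman numeral in a name.
ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    total = 0
    for i, ch in enumerate(numeral):
        value = ROMAN[ch]
        # Subtractive notation: a smaller digit before a larger one (e.g., IX = 9)
        if i + 1 < len(numeral) and ROMAN[numeral[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

def normalize_name(name: str) -> str:
    parts = name.split()
    # Any trailing run of Roman-numeral letters is assumed to be a number
    if len(parts) > 1 and re.fullmatch(r"[IVXLCDM]+", parts[-1]):
        parts[-1] = str(roman_to_int(parts[-1]))
    return " ".join(parts)

print(normalize_name("Henry VIII"))  # Henry 8    - the case the designer imagined
print(normalize_name("Malcolm X"))   # Malcolm 10 - the "glitch" Benjamin describes
```

The bug is not a stray typo but a design assumption about whose names count as normal - exactly Benjamin's point that glitches are evidence of underlying flaws.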

In discussing predictive policing, she points out that even fairness researchers seeking to fix the "glitch" of racially biased recidivism predictions often do so in ways that reinforce racist structures: they use crime rates as ground truth, even though crime statistics are themselves products of racially skewed policing (pg. 82). She argues that crime prediction algorithms are actually "crime production algorithms" - a feedback loop that encourages specific policing strategies and then evaluates the success of the algorithm against the results of those same strategies (pg. 83).
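
To see how a "crime production algorithm" grades itself, here is a deterministic toy model - my own sketch, with invented districts and parameters, not anything from the book. Both districts have the same true offense rate; District A merely starts with a few more recorded incidents:

```python
# Toy feedback loop: patrols follow recorded crime, and recorded crime follows
# patrols. The underlying offense rate is identical in both districts; all
# numbers below are hypothetical.
TRUE_RATE = 0.10      # identical underlying offense rate in A and B
BASELINE_REPORTS = 2  # incidents recorded each year regardless of patrols
PER_PATROL = 5        # stops/observations made per patrol deployed

history = {"A": 30, "B": 25}  # cumulative recorded incidents (seed disparity)

for year in range(1, 11):
    # "Prediction": send all 10 patrols to the district with the most history
    target = max(history, key=history.get)
    for district in history:
        patrols = 10 if district == target else 0
        observed = int(patrols * PER_PATROL * TRUE_RATE)  # found by looking
        history[district] += BASELINE_REPORTS + observed
    print(year, dict(history))

# District A's recorded lead grows every year, and that widening gap is then
# cited as proof that the predictor "works" - the loop evaluates itself on
# data its own deployment produced.
```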

Like the architecture of benches with dividing armrests or window sills lined with spikes, digital technologies - including the Internet - can be both obvious and insidious, and their effects span beyond the interpersonal interactions of the individuals who use them (e.g., activists, white supremacists, family members). Tools intentionally designed to discriminate also harm people beyond their targets - "collateral damage" among otherwise intended users (pg. 93).

Chapter 3 | Coded Exposure: Is Visibility a Trap?**

Coded exposure: how "being watched (but not seen)" (pg. 47) enables differing forms of visibility.

Who is seen is tied to when and how they are seen. As Benjamin writes: "Some technologies fail to see Blackness, while others render Black people hypervisible and expose them to systems of racial surveillance" (pg. 99). She discusses how early photography aided in codifying racial difference, supplying visual "evidence" seemingly unencumbered by the subjectivity of drawings or paintings: photographs of racial difference were "objective." As photographic technologies evolved, the inability of cameras to "see" dark skin was explained away by citing "physics," rather than engaging with practices of color balancing (pg. 100). She delves into this further through a discussion of the Shirley Card, an image of a white woman used to standardize exposure (pg. 103-104).

Benjamin quotes Sofia Samatar, who wrote: "The invisibility of a person is also the visibility of a race" (pg. 101). When a person is Black (or non-white), that person ceases to be seen as an individual and instead becomes an expectation, a stereotype, a fantasy, or a nightmare. Polaroid's investment in properly exposing dark skin in photographs was tied to its role in capturing passbook images during South African apartheid (pg. 106). The inability of webcams to see dark skin shows that technology is not an indicator of progress (pg. 108). Surveillance technologies, she argues, treat faces as surfaces (pg. 128).

She reviews research on biases in facial analysis technology, such as failures to detect Black faces and misidentifications. Algorithms trained primarily on faces of one race recognize faces of that race more accurately - a skew often shaped by the socio-geographic landscape in which the technologies are developed (pg. 112). When such systems are used in policing, where photo databases are disproportionately filled with Black men, the inability to correctly identify Black people is dangerous. Pre-determined visibility is further exposed in the case of Parabon NanoLabs' generated sketch of a murder suspect from DNA samples: the software predicted the suspect to be a Latino man. Whereas DNA tests are typically used to confirm or rule out known suspects, "phenotyping is a predictive technology" (pg. 120) - and there is no clear connection between DNA and appearance.

Benjamin questions whether improved accuracy in systems like facial recognition is truly desirable in an unjust society: more accurate identification also means more precisely targeted surveillance.

Chapter 4 | Technological Benevolence: Do Fixes Fix Us?

Technological benevolence: tech products that offer fixes for social bias yet end up reproducing or deepening it.

Technologies are often developed in an attempt to alleviate social difficulties and mitigate human biases. However, technologies that aren't exposed to human difference - like race and gender - can themselves become biased. Bias from underexposure, or "colorblind designs," may result in a system enacting insidious and indirect bias (pg. 143). The indirect use of many cultural markers of bias can "[streamline] discrimination - making it easier to ... justify why tomorrow's workforce continues to be racially stratified" (pg. 143). The black box makes it more difficult to assess which qualities are gendered or racialized, and easier to explain outcomes away through meritocracy.

Diversity Inc., which uses names and zip codes to predict ethnicity, actually constructs identity categories as a tool to improve its predictions (a technique sketched below). It creates new racial categories for its customers, and in doing so bypasses civil rights legislation that prohibits collecting ethnicity data directly for employment purposes. The purpose has shifted from intentional segregation (e.g., redlining) to profiting off targeted inclusion (e.g., marketing) (pg. 145-146); the results of redlining are now profit-makers for tech companies. Benjamin writes: "Racialized zip codes are the output of Jim Crow policies and the input of New Jim Code practices" (pg. 147).
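
Benjamin does not reproduce Diversity Inc.'s actual algorithm, but the general proxy technique resembles Bayesian Improved Surname Geocoding (BISG). Here is a minimal sketch of that idea; the surnames, zip codes, and probability tables are invented placeholders, included only to show how two facially "neutral" fields combine into a racial classification:

```python
# Sketch of proxy-based ethnicity inference in the spirit of BISG.
# NOT Diversity Inc.'s actual method: all values below are invented.
P_GROUP_GIVEN_SURNAME = {
    "washington": {"black": 0.87, "white": 0.05, "other": 0.08},
    "miller":     {"black": 0.10, "white": 0.84, "other": 0.06},
}
P_GROUP_GIVEN_ZIP = {
    "60624": {"black": 0.90, "white": 0.04, "other": 0.06},
    "60614": {"black": 0.05, "white": 0.85, "other": 0.10},
}

def infer_group(surname: str, zip_code: str) -> dict:
    """Combine the two priors under a naive independence assumption."""
    s = P_GROUP_GIVEN_SURNAME[surname.lower()]
    z = P_GROUP_GIVEN_ZIP[zip_code]
    joint = {group: s[group] * z[group] for group in s}
    total = sum(joint.values())
    return {group: round(p / total, 3) for group, p in joint.items()}

# Neither input field names race, yet together they act as a racial label.
print(infer_group("Washington", "60624"))
# -> {'black': 0.991, 'white': 0.003, 'other': 0.006}
```

No protected attribute is ever collected directly, which is precisely how such systems sidestep civil rights law while still sorting people by race.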

Without public oversight of these identification practices, which are cloaked in the language of "diversity," there is no way for the public to know whether these databases will be used for intentional discrimination. She further writes that the feel-good mantras of diversity, equality, openness, and personalization are actually a form of rigidity, given that individuals have no choice but to engage with data-mining technologies or else be labelled suspicious (pg. 152-153).

"Racial fixing" attempts to benevolently solve racial issues and customize for different races while reinforcing and justifying otherwise racial inequity. "They are lucrative devices for capitalizing on the misery of others while claiming humanistic concern" (pg. 158).

Chapter 5 | Retooling Solidarity, Reimagining Justice

In this final chapter, Benjamin challenges practices of design, including "human-centered design" processes. She argues that "if design as a branded methodology is elevated, then other forms of generic human activity are diminished" (pg. 178). Rather than code-switching - fitting into the current economic, social, and political paradigms - she proposes rewriting cultural codes (pg. 182). This "would require prioritizing equity over efficiency, social good over market imperatives ... slower and more socially conscious innovation" (pg. 183).

Through engagement with movements for prison and police abolition, Benjamin argues for abolitionist approaches to data, aimed at emancipating the public from data-driven capitalist and surveillance motives. To do this, she proposes taking solidarity - not allyship, which upholds privilege and power differentials - more seriously (pg. 194).