Artificial Unintelligence: How Computers Misunderstand the World

Broussard, Meredith. Artificial Unintelligence: How Computers Misunderstand the World. Cambridge, MA: MIT Press, 2018.

Broussard takes issue with a tech culture that seeks to solve every human problem with technology, often producing badly designed systems. She believes there are things we both cannot and should not build. She stands against technochauvinism, the belief that technology should be used to solve everything (commonly called technosolutionism). Technochauvinism, she writes, also tends to come bundled with "Ayn Randian meritocracy; technolibertarian political values; celebrating free speech to the extent of denying that online harassment is a problem; the notion that computers are more 'objective' or 'unbiased' because they distill questions and answers down to mathematical evaluation" (p. 8). She breaks down in simple terms how most computer programs work (in the spirit of the sketch below) and discusses how artificial intelligence is designed and where its limits lie. Our machines, she points out, are not intelligent; they are unintelligent machines that we call intelligent when they happen to work correctly.
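A minimal sketch of the kind of beginner program she walks readers through (the averaging example and the Python code here are illustrative assumptions, not her exact code): the machine does nothing but follow explicit instructions, which is why "unintelligent" is the more honest label.

```python
# Illustrative only: a small program of the sort Broussard uses to demystify computing.
ages = [42, 39, 51, 8, 5]  # made-up data for the illustration

def average(values):
    """Add the values and divide by the count -- nothing more 'intelligent' happens."""
    return sum(values) / len(values)

print(average(ages))  # prints 29.0
```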

She argues that technosolutionism often fails to think through multiple perspectives, or to account for the overwhelming infrastructure of large bureaucratic systems, when introducing large-scale technical changes (e.g., the example of school textbook shortages and tracking in chapter 5). She uses the idealistic, chaotic, and creative thinking of AI forefather Marvin Minsky's approach to explain how computer science has grown into a field that fundamentally lacks concern for ethics or safety (chapter 6). She writes that computer science has been shaped by a small, elite group of white men who have systematically excluded others, favored math and machinery over people, and embraced an approach hostile to social norms and rules. She takes issue with the term machine learning (and computer science's generally imprecise language choices) as misleading, implying an intelligence that AI does not have (chapter 7). She walks through how machine learning works by detailing an example, pointing out that all data is incomplete and inaccurate, but that in machine learning you can simply make values up so the model runs smoothly (something you cannot do in other disciplines); a sketch of that kind of workflow follows below. She points out that the process of creating a machine-learning model can be dehumanizing, reducing people to numbers, even though deployed models have real consequences for those people. By the end, she insists that humans and machines work better together than either does alone, and that we should center humans, since humans are who machines are meant to serve.
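Her chapter 7 walk-through trains a model on Titanic passenger records; the sketch below is a hedged illustration of that kind of workflow, assuming scikit-learn, the seaborn copy of the Titanic data, a decision-tree classifier, and median imputation for missing ages (the specific library calls and modeling choices are this illustration's, not necessarily hers). The point it makes is the one she makes: missing values get filled in with invented numbers so the pipeline runs, and a person becomes a row of features and a score.

```python
# Hedged sketch of a Titanic-style machine-learning walk-through (assumed setup:
# seaborn's bundled Titanic data, scikit-learn's decision tree).
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Keep a handful of columns; each passenger is now just a row of numbers.
df = sns.load_dataset("titanic")[["survived", "pclass", "sex", "age", "fare"]].copy()

# The data is incomplete: many passengers have no recorded age. To make the model
# run, we invent a value (the median) -- the "making things up" the chapter flags.
df["age"] = df["age"].fillna(df["age"].median())
df["sex"] = (df["sex"] == "female").astype(int)  # a person reduced to a 0/1 feature

X, y = df.drop(columns="survived"), df["survived"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))  # a single number stands in for people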