AI Systems As State Actors

Kate Crawford and Jason Schultz, 2019, 'AI Systems As State Actors', Columbia Law Review, 119(7), 1941-1972. https://columbialawreview.org/content/ai-systems-as-state-actors/

Few accountability frameworks have succeeded in holding AI systems to standards of fair, unbiased due process. Crawford and Schultz state this is because "they have failed to address the larger social and structural aspects of the problems or because there is a lack of political will to implement them" (pg. 1943). Such frameworks have also rarely been applied to vendors. The result is an accountability gap around algorithmic systems used in government decisionmaking: laws have not been applied to the private companies supplying AI architectures, while government officials claim no knowledge or understanding of how the systems work. The authors argue that courts should address this gap by applying the state action doctrine, treating third-party vendors that supply AI for government decisionmaking as state actors for the purposes of constitutional liability.

Seeing Like a State AI System

There is currently no comprehensive method for tracking the use of AI in government in the U.S., which also makes assessing these systems' impact on the public difficult. Some systems are developed entirely in-house by governments, while others are developed by contractors or donated. The authors highlight two challenges for public accountability: "(1) lack of clear public accountability and oversight processes; and (2) objections from vendors that any real insights into their technology would reveal trade secrets or other confidential information" (pg. 1944). They point to known government uses of AI, such as Palantir supplying AI systems to ICE and the Trump Administration's focus on algorithmic detection of disability fraud using social media data.

They discuss case studies where government algorithmic systems were taken to court, such as the Medicaid home-care assessment systems in Arkansas and Idaho, whose use courts subsequently ruled unlawful. They discuss how such algorithms are designed not with constitutional liability in mind but to cut costs for populations considered "expensive," often marginalized populations. The authors write: "an algorithmic system itself, optimized to cut costs without consideration of legal or policy concerns, created the core constitutional problems that ultimately decided the lawsuits" (pg. 1950). The lack of accountability and the proliferation of bias are compounded as systems are adopted from state to state through "software contractor migration," where a system trained on one state's population is applied to another.

Litigation is primarily aimed at government entities, not vendors. While accountability can be enforced against government actors for specific problems, such remedies do not prevent future harms from AI systems: judgments run against government agencies but have little effect on the design or implementation of the systems themselves. In one example of litigation aimed at an AI system directly, the company that built a teacher evaluation system fought to keep its algorithm secret from the plaintiffs' experts. In another case, a defendant protested a criminal risk score and convinced the judge to find the score inadmissible, but the system was not barred from future use.

A Framework for Private Actor Constitutional Accountability

"Constitutional liability doctrines, including liability under 42 U.S.C. § 1983, have traditionally focused on the activities of public actors, such as government agencies or officials. These doctrines operate under the assumption that government actors have both the greatest power and responsibility for upholding those rights and protections, and should therefore be held to the highest levels of accountability Meanwhile, private actors, such as corporations or citizens, need only be held accountable under traditional tort or regulatory approaches" (pg. 19570). They state that courts have now been forced to adapt their state action doctrine for private party liability, generally relying on three tests: "(1) the public function test, which asks whether the private entity performed a function traditionally and exclusively performed by government; (2) the compulsion test, which asks whether the state significantly encouraged or exercised coercive power over the private entity’s actions;101 and (3) the joint participation test, which asks whether the role of private actors was “pervasively entwined” with public institutions and officials" (pg. 1959). Yet there has been no model of consistency in cases for courts to follow. Crawford and Schultz examine the three tests to determine their applicability to AI vendors.

(1) The public function test: whether the private company has performed a core governmental function that has been traditionally and exclusively performed by the state.
Although few functions are performed solely by the state anymore, courts may still rule that a given function belongs within the scope of the state rather than the private entity. "When private AI vendors provide their software to governments to fulfill duties that are specifically tied to a state’s overall public and constitutional obligations, the possibility of the vendor being held a state actor becomes a reality" (pg. 1962). The question becomes whether the AI is merely a tool with which a government employee performs a state function or whether the system performs that function itself. If viewed as a mere tool, the vendor falls outside the purview of the public function test.

(2) The compulsion theory: "the extent to which the private entity has discretion to make substantive choices that impact constitutional concerns" (pg. 1964).
Who controls a system's design, implementation, and data is relevant to liability. If the state provides significant encouragement and direction in a system's implementation and maintenance, the vendor's actions are more likely to be treated as state action, exposing the vendor to constitutional liability.

(3) Joint participation theory: "whether the government was significantly involved in the challenged action that is alleged to have caused the constitutional harm, so much so that the two entities can be considered joint participants ... If the government were merely involved through standard setting but not active decisionmaking, no joint participation exists" (pg. 1966).

What Courts Should Do

Above, Crawford and Schultz discuss how courts could assess algorithmic vendor accountability. Next, they discuss when they think courts should intervene.

(1) When the state lacks sufficient accountability or capacity to provide appropriate remedies
State accountability is weakest when the state relies on a private vendor for virtually all design and implementation of an AI system and lacks the capacity to address the constitutional harm caused. "The state had very little knowledge of how the AI software code had been written, where the mistakes were made, what data had been used to train and test it, or what means were required to mitigate the concerns raised in the case" (pg. 1969). They argue that holding a private vendor accountable, rather than simply expecting the state to correct the vendor, provides incentives for vendors to mitigate harm, particularly in cases of large institutional harm caused by a system. "Unless vendors are subject to the court’s jurisdiction, the court cannot assert any real oversight or impose any specific injunctive relief on that party, even if it is in the best position to fix errors in how the AI performed" (pg. 1970).

(2) When AI providers are unregulated
There is very little regulation in place for AI vendor accountability. They argue that state action remedies could set norms for treating AI vendors as state actors, whereas expecting harmed plaintiffs to sue vendors individually risks treating vendors as though they were not acting on behalf of the state.

(3) When trade secrecy or third-party technical information is at the heart of the constitutional liability question
Where a vendor strives for opacity, the state action doctrine should apply; otherwise, government employees may be unable to provide any answers about how the system functions. "In such cases, considering the vendor a state actor would allow courts access to the necessary information to decide cases while also directly addressing vendor trade secrecy concerns" (pg. 1971).