
Acquiring Ethical Algorithmic Governance

by David S. Rubenstein | November 16, 2020

On November 5, 2020, Washburn University School of Law hosted a symposium that explored the “rights” and “wrongs” of artificial intelligence (AI). My presentation at the symposium focused on the federal government’s uses of AI. Currently, the Federal Bureau of Investigation uses AI in law enforcement; the Social Security Administration uses AI to adjudicate benefits claims; the Department of Homeland Security uses AI to regulate immigration; and countless other agencies are experimenting with AI for the delivery of government services, customer support, research, and regulatory analysis. This small sampling presages a new era of “algorithmic governance,” in which government tasks assigned to humans will increasingly migrate to machines.

Algorithmic governance holds great promise. Under the right conditions, AI systems can solve complex problems, reduce administrative burdens, and optimize resource allocation. Moreover, AI can bring greater efficiency and consistency to government operations. Under the wrong conditions, however, AI systems are fraught with peril. A single cyberattack, design flaw, or lapse in human oversight can cause individual or widespread public harm. The polity expects the government to act in unbiased, explainable, fair, and accountable ways. But these norms of good governance pose major challenges for AI systems, which are fueled by data, are agnostic to democratic values, and ‘think’ beyond human-scale capacities.

Of course, the risks of harm are contextually contingent. It is one thing when an AI system misclassifies emails as spam or recommends purchasing more office supplies than needed. It is quite another when an AI system mistakenly deprives individuals of unemployment benefits, encroaches on personal privacy, leads to a wrongful arrest, perpetuates gender and racial biases, denies access to government food programs, impedes the right to travel, and so on.

A burgeoning literature has emerged to meet the challenges of algorithmic governance. Most of the public law scholarship to date has shone a critical light on the tensions between algorithmic governance and constitutional rights. Scholars have also begun the important work of squaring algorithmic governance with separation of powers and administrative law. Yet federal procurement law has been neglected in the reformist agenda, and dangerously so. The government’s pent-up demand for AI systems far exceeds its in-house capacity to design, develop, field, and monitor this powerful technology. Accordingly, many (if not most) of the tools of algorithmic governance will be procured by contract from the technology industry.

The government’s procurement of goods and services from private vendors is not inherently problematic. But the government’s procurement of AI requires close attention; this is not business as usual. To begin with, this technology is virtually unregulated in the private market. Unless and until that changes, the government will be acquiring unregulated products and deploying them at scale. Moreover, when procured from private vendors, AI systems may be shrouded in trade secrecy, which can impede public transparency and accountability. Beyond these concerns lies another: AI systems are embedded with value-laden decisions about what is technically feasible, socially acceptable, economically viable, and legally permissible. Thus, without intervention, the government will be acquiring value-laden products from private actors whose financial motivations and legal sensitivities may not align with those of the government or the people it serves.

It is no novelty to observe that the government’s acquisition of AI from the commercial market exacerbates the risks of algorithmic governance. What has yet to permeate the academic imagination, however, are the ways that procurement law might be harnessed to mitigate those risks. My forthcoming law review article, "Acquiring Ethical AI," hopes to steer the conversation toward procurement law’s positive potential. Indeed, I argue, the acquisition gateway is uniquely positioned and well suited to promote responsible algorithmic governance.

Currently, the government is investing huge amounts of taxpayer dollars in AI systems that may be inoperable, either because they are untrustworthy or because they are unlawful. For example, if the government cannot explain how an AI system works, then it may run afoul of constitutional due process or administrative law principles. Even if an AI system clears those thresholds, it may still violate federal anti-discrimination laws, privacy laws, and domain-specific strictures across the regulatory spectrum. Litigation will no doubt surface these tensions; indeed, it already has. Yet much of that screening can occur ex ante, through procurement mechanisms that are more efficient, more effective, and operative before harm occurs. To be sure, procurement law will not solve all the challenges ahead. Just as surely, the challenges of algorithmic governance cannot be solved without procurement law.

More than a marketplace, the acquisition gateway can be a policymaking space. The possibilities are wide-ranging, including (but not limited to) the following, which I explain more fully in works in progress.

First, to establish a baseline, federal lawmakers should mandate the creation of a government-wide inventory report that includes clear information on each AI system used by federal agencies. Increasingly, policymakers and stakeholders are wrangling over algorithmic governance, including whether AI tools such as facial recognition should even be permitted. But an informed policy debate is impossible without knowing which AI tools have already been adopted, by which agencies, for what purposes, from which vendors, and so on.

Second, federal lawmakers should require that agencies prepare “AI risk assessment” reports prior to the government’s acquisition of AI tools and services. These risk assessments would foreground several challenges and vulnerabilities that inhere in AI systems—most notably, those relating to transparency, accountability, fairness, privacy, and safety. An AI risk requirement along these lines could fit neatly within the current structure of acquisition planning. Federal regulations already leave room for specialized policy considerations in the planning process. For example, agencies in the market for building construction and renovation are required to comply with pre-established “Guiding Principles” for green energy. Along similar lines, agencies in the market for AI technologies should be required to undertake an AI risk assessment tailored to the unique challenges of algorithmic governance.

Importantly, the risk assessment should be conducted by a multidisciplinary team that includes not only agency acquisition and IT personnel, but also domain experts, legal experts, sociotechnical ethicists, and data specialists. Moreover, as much as possible, the team should be composed of individuals with diverse backgrounds and perspectives, which can help mitigate the risk of blind spots in the creation of the AI risk assessment itself. In addition, to promote accountability, the agency official overseeing the acquisition should be required to sign the risk assessment.

Third, federal lawmakers should require that agencies explicitly account for these risk factors in market solicitations and contractual awards. Doing so will force agency officials and vendors to think more critically—and competitively—about the AI tools passing through the acquisition gateway. Incorporating AI risk-related questions in market solicitations can yield several direct and indirect benefits. Most directly, market participants’ answers will enable the agency to make side-by-side comparisons of the risks associated with a particular vendor relative to the field. Anticipating this, strategic and innovative vendors will compete for an ethical edge. In some instances, the agency might even find opportunities for collaboration—for example, between two or more startup enterprises—to mitigate the overall risk based on their respective strengths and weaknesses.

Less directly, yet just as importantly, the government’s purchasing power and virtue signaling can spur market innovation and galvanize public trust in AI technologies. While industry is generally wary of more procurement regulations, the prescriptions sketched above seize upon areas of shared interest. For the government and industry alike, AI innovation is a complex ambition that extends well beyond unlocking AI’s technological potential. Innovation also entails the responsible development and deployment of AI tools. Every major technology company has teams of highly skilled workers and mounds of investment capital dedicated to “ethical AI.” And the government, for its part, is pouring billions of dollars into related research and development. Despite motivational differences, public and private interests around trustworthy AI merge in the acquisition gateway. That shared reality is a foundation for principled and pragmatic regulatory compromise.

It is encouraging that the United States has committed to the responsible and trustworthy use of AI. But proselytizing is not actualizing. If the nation is truly committed to these principles, then federal procurement law must be part of the AI agenda.


Professor Rubenstein is the James R. Ahrens Chair in Constitutional Law and Director of the Robert Dole Center for Law and Government, Washburn University School of Law.

