AI as a Subject of Democratic Governance

From P2P Foundation

Discussion

By Shannon Vallor, Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence and Director of the Centre for Technomoral Futures, University of Edinburgh:

"The link between AI and our capacity for self-governance has two dimensions that interweave in complex ways, not unlike a Mobius strip. One side of the strip is our need, in any democratic society, to jointly exercise responsible self-governance of the new social, political and economic powers that AI technologies inject into our institutions. The second side of the strip is the way that AI technologies can undermine our confidence in, and will to exercise, those same human capabilities of self-governance. My testimony will expand on these two interrelated challenges for AI and democracy.

As a new source of immense socioeconomic and political power, AI is something we must govern. Why? Because ungoverned, or ungovernable, social and political power is deeply incompatible with democratic principles. A core principle of modern democracies is that free peoples may not justifiably be subjected to social and political powers which determine their basic liberties and opportunities, but over which they have no say, which they cannot see and freely endorse, and which powers are in no way constrained by or answerable to them.

A point of clarification is needed. AI is not a single technology, but many; AI is not one monolithic power, but a smorgasbord of them. Many types and uses of AI do not affect our fundamental liberties and opportunities at all. So it does not make sense to talk about our need to govern AI as a whole. However, for the sake of clarity in this testimony, take ‘AI’ here to refer to the subset of AI technologies that increasingly do affect our basic liberties and opportunities to flourish, either by greatly amplifying the power of certain individuals and existing institutions, or by generating new powers that can be exercised upon us.

Now, in a democratic society, might does not make right. Power is not self-justifying. Any exercise of power that extends beyond the private conduct of the individual, in ways that significantly impact the opportunities, welfare and liberties of others, always requires social and political legitimization. We do this through the joint exercise of our political capacity. Legislation and regulation are one way that democratic peoples, through their elected representatives, jointly govern the powers that impact us. Expressing and enforcing shared moral and political norms is another way we collectively self-govern. Adopting professional and technical standards is yet another. Market incentives do some of the work (though far less than some economists imagine), and various forms of public and organizational policy not enacted in law do much of the rest. For most significant forms of power, a combination of these tools is needed. What matters is that the right incentives are created to ensure the responsible, trustworthy and politically legitimate use of that power, under appropriate, mutually agreed upon, and reliably enforced constraints.

In the United States, as with most if not all democratic countries, AI as a new source of power has yet to be socially or politically legitimized by the necessary acts of effective governance. These technologies are operating in most domains as ungoverned and growing powers; either because the necessary laws, regulatory bodies, policies, norms and standards to govern them have yet to be created, or more often, because the existing modes of governance that already apply are simply not being used or reliably enforced.

Nor is the growing power that AI technologies generate being distributed equally across our society, so that all of us might choose to use this new power for our benefit. Rather, the power and benefits these technologies afford are increasingly concentrated in very few hands, especially those (corporate) hands that are already operating with undue influence on our democratic systems of self-governance. This concentration only amplifies the economic inequality that has been rising in this country for over 40 years, all while intergenerational socioeconomic mobility fell well behind most other developed countries – a linked phenomenon known by economists as the Great Gatsby Curve.

If AI technologies were an equalizing force, as well as transparently and widely beneficial, generating little risk to society or vulnerable groups, then closing the AI governance gap would not be so urgent. Unfortunately, we have a growing pile of evidence for the many harmful effects of AI systems and related technologies on humans living today. While the potential for AI technologies to benefit society remains immense, far fewer of those shared benefits have materialized than were once confidently predicted as imminent.

There are many promising applications of AI that we can use now, in health research, energy research, and environmental management; but the most socially beneficial use cases suffer from low investment, and very few are being deployed commercially at scale.

The disparity between the timelines of AI benefit and AI risk is striking. The Committee has heard much about AI’s risks in previous testimony, and I will not detail them here. Unlike the benefits, the risks have suffered no delay in arriving. And yet we have done vanishingly little about them. Most Americans, and billions of others around the world, remain highly vulnerable and exposed to unjust and discriminatory automated decisions, dangerous AI malfunctions, unwarranted algorithmic surveillance, false arrests and profiling, misidentification, disinformation, fraud, and unsafe or illegal content. As a result, public attitudes toward AI are souring, a serious warning sign for those of us who want the technology to mature and succeed. Other technologies, from GMOs to nuclear energy generation, have historically suffered public backlash in ways that greatly limited their beneficial use and advancement, and AI is very much at risk of a similar backlash.

This is not a scientific problem. We have studied AI’s risks extensively over the past decade, and there is a mountain of evidence about their causes. They are not mysterious, but fairly well understood. And while there is a lot still to learn about how to manage these risks, we are already well on our way. That is the good news! For nearly a decade, researchers in the closely related fields of AI and data ethics, Responsible AI, machine learning fairness, and AI trust and safety have been generating a truly impressive body of powerful tools for documenting, evaluating and auditing AI systems, for anticipating and measuring harmful AI outcomes, and for mitigating their risks.

Organizations like NIST, the IEEE Standards Association, the British Standards Institute, the Alan Turing Institute, the Ada Lovelace Institute, AI Now, the World Economic Forum, and many others have released ample bodies of guidance on how to use these tools. Research programs like BRAID, which I co-direct in the UK, are creating new pathways to embed, test, and adopt Responsible AI tools in industry, government and third sector organizations that want to use AI safely and successfully. If we mandated the judicious and responsible use of these tools tomorrow, across the AI ecosystem, it would take a bit of time for companies to comply and for the effects to show up, but they would come. Sooner, I would wager, than will safe driverless vehicles.

However, the use of these tools in high-impact and high-risk AI systems remains sporadic, opaque, inconsistent and under-incentivized. For example, in 2022 and 2023, just as today’s new generative AI tools were being released, several AI companies made heavy cutbacks to their in-house AI ethics and trust and safety teams, or even removed them altogether. Recently, many of the same corporations rebranded such efforts under the label of ‘AI safety’, with a new focus. Instead of bringing back the experienced, expert teams they laid off, or using the tools those teams created to responsibly address the individual and social harms caused by their existing products and business models, several of these companies now seek government and philanthropic funding to invest in technical study of the unknown, longer-term risks of future ‘frontier’ models that could be more dangerous than those we have today.

Yet many of the same commercial AI leaders warn that government interference in their efforts, whether through regulatory constraint, demands for transparency, or greater exposure to liabilities for AI harms, will only delay or diminish their capacities to get ahead of these so-called ‘existential risks’ for our benefit. The implication is clear: as long as we stand aside and let them work (ideally subsidized by public investment), they will keep us all safe from the AI bogeyman that they assure us threatens human eradication.

This problem is not scientific or technical. It is political. The history of political thought that shaped this nation tells us how we should assess promises of safety and security from those who seek release from accountability as the payment. As the philosopher John Locke said in 1689 of similar promises from unaccountable monarchs: “This is to think that men are so foolish that they take care to avoid what mischiefs may be done them by polecats or foxes, but are content, nay, think it safety, to be devoured by lions.” Nor is safety enough, if it comes at the expense of our liberty and capacity for self-determination. As noted by Jean-Jacques Rousseau in 1762’s The Social Contract, humans can also find safety and tranquility in dungeons."

(https://hsgac.senate.gov/wp-content/uploads/Testimony-Vallor-2023-11-08.pdf)