Public AI

From P2P Foundation

= "refers to publicly accessible AI models funded, provisioned, and governed by governments or other public bodies on behalf of their citizens".

URL = https://publicai.network/


Contextual Quote

"Public AI is distinct from highly-regulated AI insofar as a public entity directly owns or governs it—so that decisions about the AI are ultimately made by the public entity. It’s also distinct from but related to public-interest AI managed by a typical nongovernmental organization. Even though such forms of AI are, by definition, maintained in the public interest, they may not be publicly accountable depending on the governance structure of the organization. Public AI also resists intermediation of use by private parties. Finally, public AI as we define it here does not include forms of AI deployed and operated by governments but not accessible to the public, such as internal applications using non-public data."

- Public AI Network [1]


Description

Public AI Network:

"Public AI in this context refers to publicly accessible AI models funded, provisioned, and governed by governments or other public bodies on behalf of their citizens. Public AI is a concept with strong political, ethical, and economic underpinnings.

The essential features of public AI are public access, public funding, and public accountability.

Public AI might take the form of a “public option” for large language models akin to public options for health insurance or banking; a national AI service akin to a national health service or a public highway administration; or a publicly-funded effort to deploy many AI models for public and community use, akin to public libraries.

These implementations might be carried out by government bodies, managed by a government-supported nonprofit (cf. ICANN or IANA), built through public-private-academic partnerships, or bootstrapped through open-source software."

(https://docs.google.com/document/d/1ykjsXpTRZu4Obu9miJlkR9vIqWSLey5m0G4Utlm6HBg/edit#heading=h.xwxvanopr70o)


"What is public AI?

  1. Public AI is built by the public sector: ensuring AI capacity is not limited to Big Tech.
  2. Public AI is accessible: ensuring AI delivers benefits to all.
  3. Public AI is accountable: ensuring AI reflects society’s values.
  4. Public AI is being designed and built right now - all around the world.


Why do we need public AI?

  1. Limited public capacity in AI is stifling our collective capacity to shape the direction of society.
  2. AI is becoming essential infrastructure, like roads and water pipelines - fitting for the public sector.
  3. AI systems are growing in power and should reflect society’s values, not the values of shareholders.
  4. Private companies have a head start - without investment in public capacity now, the gap will grow."

(https://publicai.network/)


Status

(links to news stories and original sources via [2])

Review of existing priors, by the Public AI Network:

"Existing efforts, related ideas, relevant research, and institutional analogies organized from earliest to latest:

Ben Gansky, Michael Martin and Ganesh Sitaraman in New York Times (Nov. 2019): “Americans don’t have to be beholden to the tech Goliaths to get the benefits of artificial intelligence. An alternative possibility is for government to provide the infrastructure needed for a technological future — through a public option for artificial intelligence… a public program that provides universal access to goods and services, with a private opt-out.”

Sam Altman, in Exponential View podcast (Oct. 2020): “Let’s say we really do create AGI. Right now, if I’m doing a bad job with that, our board of directors, which is also not publicly accountable, can fire me and say, “Let’s try somebody else”. But it feels to me like at that point, at some level of power, the world as a whole should be able to say, hey, maybe we need a new leader of this company, just like we are able to vote on the person who runs the country."

Bruce Schneier, Nathan Sanders, and Henry Farrell in Slate (April 2023) and Foreign Policy (June 2023): “A public option LLM would provide a vital independent source of information and a testing ground for technological choices with big democratic consequences. This could work much like public option health care plans, which increase access to health services while also providing more transparency into operations in the sector and putting productive pressure on the pricing and features of private products.”

NAIRR Task Force, including Final Report on Implementation: “Yet progress at the current frontiers of AI is often tied to access to large amounts of computational power and data. Such access today is too often limited to those in well-resourced organizations. This large and growing resource divide has the potential to limit and adversely skew our AI research ecosystem. The imbalance threatens our Nation’s ability to cultivate an AI research community and workforce that reflect America's rich diversity and the ability to harness AI to advance the public good. A widely accessible AI research cyberinfrastructure that brings together computational resources, data, testbeds, algorithms, software, services, networks, and expertise, as described in this report, would help to democratize the AI research and development (R&D) landscape in the United States for the benefit of all.”

NIST AI Resource Center (AIRC), supporting the NIST AI Risk Management Framework: "The AIRC supports all AI actors in the development and deployment of trustworthy and responsible AI technologies. AIRC supports and operationalizes the NIST AI Risk Management Framework (AI RMF 1.0) and accompanying Playbook and will grow with enhancements to enable an interactive, role-based experience providing access to a wide range of relevant AI resources."

Public Knowledge NTIA AI Accountability Comment (June 2023): “Policymakers should consider that public computational resources, datasets, and expert oversight would enable not just public-private partnerships, but present the opportunity to develop publicly owned and operated AI systems.”

Ezra Klein Interviews Alondra Nelson (formerly of OSTP) in New York Times (April 2023): “And so what the “Blueprint for an A.I. Bill of Rights” was trying to do, in part, was to grow the democratic constituency around the issue. And you mentioned the OpenAI case and that they had done some consultation and called for more. I think that we can see already, with the rollout of some chat bots and some of the generative A.I. tools, ways that a little bit more engagement might have changed, I think, where we are. So I think this is a moment of profound political opportunity and opportunity for democracy, in part because these tools are becoming consumer-facing.”

British House of Commons, Inquiry into Governance of Artificial Intelligence, Dame Wendy Hall (2023): “We are not going to compete with companies like Google and Microsoft now in terms of what they are doing, but we absolutely need to develop a sovereign large language model capability. There is a meeting at the Turing today with 300 people attending and we have pitched to ARIA, but it really needs the UK Government to get behind this; not in terms of money—the money is out there—but we need the Government to get behind it, because otherwise we will be in the same place with this as we are with the cloud. [...] Do we really want to be reliant on technology from outside the UK for dealing with something that can be so powerful in enabling the UK to access all that health data, both for the UK and for the rest of the world? It is the same with our intelligence services; they are getting very behind this as well.”

Labour’s Longterm (May 2023): “The UK Government should lay out a plan for founding, and over the next parliament ramping up investment in, two new publicly owned companies: Great British Cloud to £1-10 billion, and BritGPT to up to £1 billion.”

AI Sweden, as described in Computer Weekly (June 2023): “‘The ultimate goal of our research project – and now of the consortium – is to determine whether home grown language models can provide value in Sweden,’ said Sahlgren. ‘We are completely open to a negative answer. It might prove to be the case that our resources are too limited to build foundation models.’”

Alexandre Zapolsky and others in Le Point (June 2023): “Open Source, a unique chance to create a European trusted AI”. The authors outline an approach to a French, sovereign, open source AI stack. Key elements include: 1) open datasets 2) explainable algorithms 3) open source models 4) model-sharing platform.

United Nations Security Council, comments by Jack Clark of Anthropic (July 2023): “We cannot leave the development of artificial intelligence solely to private sector actors. The governments of the world must come together, develop state capacity, and make the development of powerful AI systems a shared endeavor across all parts of society, rather than one dictated solely by a small number of firms competing with one another in the marketplace.”

Charles Jennings in Politico (August 2023): “The only entity on earth with both the resources and values necessary to harness AI effectively and humanely is the government of the United States. Managing AI on a global scale could well be America’s greatest scientific and diplomatic challenge, ever. The Manhattan Project, cubed.”

Mohar Chatterjee in Politico (August 2023): “In June, French President Emmanuel Macron announced new funding for an open “digital commons” for French-made generative AI projects, a €40 million investment intended to attract significantly more capital from private investors. “On croit dans l’open-source,” Macron stressed in his speech at VivaTech, France’s top tech conference: “We believe in open-source.””

Peter Martin and Katrina Manson in Bloomberg (September 2023): “The Central Intelligence Agency is preparing to roll out a feature akin to OpenAI Inc.’s now-famous program that will use artificial intelligence to give analysts better access to open-source intelligence, according to agency officials. The CIA’s Open-Source Enterprise division plans to provide intelligence agencies with its AI tool soon.”

Bruce Schneier and Nathan Sanders in New York Times (October 2023). “By analogy to the health care sector, we need an A.I. public option to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure an even playing field for access to the 21st century’s key technology while offering a platform for the ethical development and use of A.I.”

(https://docs.google.com/document/d/1ykjsXpTRZu4Obu9miJlkR9vIqWSLey5m0G4Utlm6HBg/edit)


Discussion

Public AI has advantages over private AI

Public AI Network:

"AI demands intensive capital investment, including in physical computational infrastructure and expensive fundamental research. While this barrier to entry might exclude new entrants, governments have access to the resources to compete with private firms. Governments, or public-private partnerships, have historically undertaken responsibility for large-scale infrastructure projects and for funding and coordinating the critical research that drives innovation. Public AI projects could tap directly into these resources, rather than allowing the fruits of these public expenditures to be exclusively captured by private firms. Government-provided access to the capital necessary for AI development promotes efficiency and coordination instead of speculation and potentially dangerous competitive race dynamics between private actors.

Government is also in a better position than private companies to draw upon exclusive resources that will result in better models. For example, public AI could leverage government data sets, benefit from legal exceptions and liability waivers that restrict private actors, and convincingly mobilize the public to contribute data or other resources for the public good. Public models also have more incentive to address bias and inequity early and often in the process."

(https://docs.google.com/document/d/1ykjsXpTRZu4Obu9miJlkR9vIqWSLey5m0G4Utlm6HBg/edit#heading=h.y04gqx3i3fuw)

More information