Effective Accelerationism

From P2P Foundation

Discussion

Vitalik Buterin:

"Over the last few months, the "e/acc" ("effective accelerationist") movement has gained a lot of steam. Summarized by "Beff Jezos" here, e/acc is fundamentally about an appreciation of the truly massive benefits of technological progress, and a desire to accelerate this trend to bring those benefits sooner.

I find myself sympathetic to the e/acc perspective in a lot of contexts. There's a lot of evidence that the FDA is far too conservative in its willingness to delay or block the approval of drugs, and bioethics in general far too often seems to operate by the principle that "20 people dead in a medical experiment gone wrong is a tragedy, but 200000 people dead from life-saving treatments being delayed is a statistic". The delays to approving covid tests and vaccines, and malaria vaccines, seem to further confirm this. However, it is possible to take this perspective too far.

In addition to my AI-related concerns, I feel particularly ambivalent about the e/acc enthusiasm for military technology. In the current context in 2023, where this technology is being made by the United States and immediately applied to defend Ukraine, it is easy to see how it can be a force for good. Taking a broader view, however, enthusiasm about modern military technology as a force for good seems to require believing that the dominant technological power will reliably be one of the good guys in most conflicts, now and in the future: military technology is good because military technology is being built and controlled by America and America is good. Does being an e/acc require being an America maximalist, betting everything on both the government's present and future morals and the country's future success?

On the other hand, I see the need for new approaches in thinking of how to reduce these risks. The OpenAI governance structure is a good example: it seems like a well-intentioned effort to balance the need to make a profit to satisfy investors who provide the initial capital with the desire to have a check-and-balance to push against moves that risk OpenAI blowing up the world. In practice, however, their recent attempt to fire Sam Altman makes the structure seem like an abject failure: it centralized power in an undemocratic and unaccountable board of five people, who made key decisions based on secret information and refused to give any details on their reasoning until employees threatened to quit en masse. Somehow, the non-profit board played their hands so poorly that the company's employees created an impromptu de facto union... to side with the billionaire CEO against them.

Across the board, I see far too many plans to save the world that involve giving a small group of people extreme and opaque power and hoping that they use it wisely. And so I find myself drawn to a different philosophy, one that has detailed ideas for how to deal with risks, but which seeks to create and maintain a more democratic world and tries to avoid centralization as the go-to solution to our problems. This philosophy also goes quite a bit broader than AI, and I would argue that it applies well even in worlds where AI risk concerns turn out to be largely unfounded. I will refer to this philosophy by the name of d/acc."

(https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html)

Critique by Molly White

Molly White:

"While effective altruists view artificial intelligence as an existential risk that could threaten humanity, and often push for a slower timeline in developing it (though they push for developing it nonetheless), there is a group with a different outlook: the effective accelerationists.

This ideology has been embraced by some powerful figures in the tech industry, including Andreessen Horowitz’s Marc Andreessen, who published a manifesto in October in which he worshipped the “techno-capital machine” as a force destined to bring about an “upward spiral” if not constrained by those who concern themselves with such concepts as ethics, safety, or sustainability.

Those who seek to place guardrails around technological development are no better than murderers, he argues, for putting themselves in the way of development that might produce lifesaving AI.

This is the core belief of effective accelerationism: that the only ethical choice is to put the pedal to the metal on technological progress, pushing forward at all costs, because the hypothetical upside far outweighs the risks identified by those they brush aside as “doomers” or “decels” (decelerationists).

Despite their differences on AI, effective altruism and effective accelerationism share much in common (in addition to the similar names). Just like effective altruism, effective accelerationism can be used to justify nearly any course of action an adherent wants to take.

Both ideologies embrace as a given the idea of a super-powerful artificial general intelligence being just around the corner, an assumption that leaves little room for discussion of the many ways that AI is harming real people today. This is no coincidence: when you can convince everyone that AI might turn everyone into paperclips tomorrow, or on the flip side might cure every disease on earth, it’s easy to distract people from today’s issues of ghost labor, algorithmic bias, and erosion of the rights of artists and others. This is incredibly convenient for the powerful individuals and companies who stand to profit from AI.

And like effective altruists, effective accelerationists are fond of waxing philosophical, often with great verbosity and with great surety that their ideas are the peak of rational thought.

Effective accelerationists in particular also like to suggest that their ideas are grounded in scientific concepts like thermodynamics and biological adaptation, a strategy that seems designed to woo the technologist types who are primed to put more stock in something that sounds scientific, even if it’s nonsense. For example, the inaugural Substack post defining effective accelerationism’s “principles and tenets” name-drops the “Jarzynski-Crooks fluctuation dissipation theorem” and suggests that “thermodynamic bias” will ensure that only positive outcomes reward those who insatiably pursue technological development. Effective accelerationists also claim to have “no particular allegiance to the biological substrate”, with some believing that humans must inevitably forgo these limiting, fleshy forms of ours “to spread to the stars”, embracing a future that they see mostly — if not entirely — revolving around machines."

(https://newsletter.mollywhite.net/p/effective-obfuscation)

