Algorithmic Regulation


Description

1. Tim O'Reilly:

"Laws should specify goals, rights, outcomes, authorities, and limits. If specified broadly, those laws can stand the test of time.

Regulations, which specify how to execute those laws in much more detail, should be regarded in much the same way that programmers regard their code and algorithms, that is, as a constantly updated toolset to achieve the outcomes specified in the laws.

Increasingly, in today’s world, this kind of algorithmic regulation is more than a metaphor. Consider financial markets. New financial instruments are invented every day and implemented by algorithms that trade at electronic speed. How can these instruments be regulated except by programs and algorithms that track and manage them in their native element in much the same way that Google’s search quality algorithms, Google’s “regulations”, manage the constant attempts of spammers and black hat SEO experts to game the system?

Revelation after revelation of bad behavior by big banks demonstrates that periodic bouts of enforcement aren’t sufficient. Systemic malfeasance needs systemic regulation. It’s time for government to enter the age of big data. Algorithmic regulation is an idea whose time has come."

(https://beyondtransparency.org/chapters/part-5/open-data-and-algorithmic-regulation/)


2. Anna-Verena Nosthoff et al.:

"Tim O’Reilly, one of the first defendants of neo-cybernetic concepts. O’Reilly coined the term ‘algorithmic regulation’ around 2011 and ‘government as platform’ in 2010. Here, ‘regulation’ is in no way comparable to any traditional notion of government regulation; in contrast, O’Reilly’s aim is to replace regulation with reputation, that is, government with algorithmic regulation, such as with mutual ratings. For instance, he argues that services such as Airbnb and Uber can provide valuable models for providing maximum efficiency and oversight. Thus, according to O’Reilly, they do a great job ensuring quality and availability while ‘drivers who provide a poor service are eliminated’. Albeit O’Reilly is not as specific as Khanna on the countries that should serve as role models for proper neo-cybernetic politics, and seemingly envisions an agenda that is less state-centric than Khanna’s, he is equally focused on depicting the government‘s prior function as a ‘service provider’ while openly propagating to outsource most government activities to the private sector: ‘The whole point of government as platform,’ O’Reilly argues, ‘is to encourage the private sector to build applications that the government didn’t consider or doesn’t have the resources to create.’ "

(https://networkcultures.org/longform/2018/10/18/res-publica-ex-machina-on-neocybernetic-governance-and-the-end-of-politics/)
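
To make the mechanism concrete: the "regulation by reputation" model described above reduces to a small feedback rule in which user ratings are aggregated and providers who fall below a cutoff are removed. The following is a minimal editorial sketch, not code from either quoted source; the names, the 4.0 rating floor, and the minimum-ratings guard are all hypothetical.

```python
# Minimal sketch of "regulation by reputation": providers are scored by
# user ratings and eliminated once their average falls below a cutoff.
# The threshold and minimum-sample guard are hypothetical illustrations.
from statistics import mean

RATING_FLOOR = 4.0   # hypothetical platform cutoff (out of 5)
MIN_RATINGS = 10     # don't judge a provider on too few ratings

def active_providers(ratings: dict[str, list[int]]) -> set[str]:
    """Return the providers who survive the reputation filter."""
    keep = set()
    for provider, scores in ratings.items():
        if len(scores) < MIN_RATINGS or mean(scores) >= RATING_FLOOR:
            keep.add(provider)
    return keep
```

Note that the "regulator" here is nothing more than a threshold over crowd-sourced scores, which is precisely what Nosthoff et al. mean by replacing government regulation with reputation.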

Characteristics

Tim O'Reilly:

"As outlined in the introduction, a successful algorithmic regulation system has the following characteristics:

1. A deep understanding of the desired outcome

2. Real-time measurement to determine if that outcome is being achieved

3. Algorithms (i.e. a set of rules) that make adjustments based on new data

4. Periodic, deeper analysis of whether the algorithms themselves are correct and performing as expected.

Open data plays a key role in both steps 2 and 4. Open data, either provided by the government itself, or required by government of the private sector, is a key enabler of the measurement revolution. Open data also helps us to understand whether we are achieving our desired objectives, and potentially allows for competition in better ways to achieve those objectives."

(https://beyondtransparency.org/chapters/part-5/open-data-and-algorithmic-regulation/)
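
Taken together, the four characteristics describe a classic closed feedback loop: a target, a live measurement, a corrective rule, and a periodic audit of the rule itself. Below is a minimal editorial sketch of such a loop, not taken from O'Reilly's essay; the callables and the audit interval are hypothetical placeholders.

```python
# Sketch of O'Reilly's four-step loop as a generic feedback controller.
# All names and the audit interval are hypothetical illustrations.

def regulate(target: float, measure, adjust, audit,
             cycles: int, audit_every: int = 100) -> None:
    """target:  the desired outcome (characteristic 1).
    measure: returns the current real-time reading (characteristic 2).
    adjust:  applies a correction given the error (characteristic 3).
    audit:   runs a deeper check of the rule itself (characteristic 4)."""
    for t in range(cycles):
        reading = measure()            # 2: real-time measurement
        adjust(target - reading)       # 3: rule-based adjustment on new data
        if t % audit_every == 0:
            audit()                    # 4: periodic, deeper analysis
```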

Discussion

Evgeny Morozov:

1.

"In another contribution to “Beyond Transparency,” the technology publisher and investor Tim O’Reilly, one of Silicon Valley’s in-house intellectuals, celebrates a new mode of governance that he calls “algorithmic regulation.” The aim is to replace rigid rules issued by out-of-touch politicians with fluid and personalized feedback loops generated by gadget-wielding customers. Reputation becomes the new regulation: why pass laws banning taxi-drivers from dumping sandwich wrappers on the back seat if the market can quickly punish such behavior with a one-star rating? It’s a far cry from Beer’s socialist utopia, but it relies on the same cybernetic principle: collect as much relevant data from as many sources as possible, analyze them in real time, and make an optimal decision based on the current circumstances rather than on some idealized projection. All that’s needed is a set of fibreglass swivel chairs." (http://www.newyorker.com/magazine/2014/10/13/planning-machine)


2.

" If policy interventions are to be – to use the buzzwords of the day – "evidence-based" and "results-oriented," technology is here to help.

This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O'Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term "web 2.0") has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O'Reilly makes an intriguing case for the virtues of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and the simplistic assumptions it makes about politics, democracy and power.

To see algorithmic regulation at work, look no further than the spam filter in your email. Instead of confining itself to a narrow definition of spam, the email filter has its users teach it. Even Google can't write rules to cover all the ingenious innovations of professional spammers. What it can do, though, is teach the system what makes a good rule and spot when it's time to find another rule for finding a good rule – and so on. An algorithm can do this, but it's the constant real-time feedback from its users that allows the system to counter threats never envisioned by its designers. And it's not just spam: your bank uses similar methods to spot credit-card fraud.

In his essay, O'Reilly draws broader philosophical lessons from such technologies, arguing that they work because they rely on "a deep understanding of the desired outcome" (spam is bad!) and periodically check if the algorithms are actually working as expected (are too many legitimate emails ending up marked as spam?).
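
The mechanism Morozov sketches here, a filter taught by constant user feedback rather than by fixed rules, can be illustrated with a toy word-score model. This is an editorial sketch, not Google's actual method; a production filter is far more elaborate.

```python
# Toy sketch of a feedback-trained spam filter: word weights are updated
# every time a user labels a message, instead of relying on fixed rules.
from collections import Counter

spam_counts, ham_counts = Counter(), Counter()

def user_feedback(message: str, is_spam: bool) -> None:
    """The 'users teach it' step: learn from one labelled message."""
    (spam_counts if is_spam else ham_counts).update(message.lower().split())

def spam_score(message: str) -> float:
    """Crude score: fraction of words seen more often in spam than ham."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(spam_counts[w] > ham_counts[w] for w in words) / len(words)
```

The real-time feedback step (user_feedback) is what lets such a system counter threats its designers never wrote explicit rules for.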

...

Algorithmic regulation could certainly make the administration of existing laws more efficient. If it can fight credit-card fraud, why not tax fraud? Italian bureaucrats have experimented with the redditometro, or income meter, a tool for comparing people's spending patterns – recorded thanks to an arcane Italian law – with their declared income, so that authorities know when you spend more than you earn. Spain has expressed interest in a similar tool.
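
Mechanically, the redditometro is a discrepancy check between observed spending and declared income. A minimal editorial sketch follows; the 20% tolerance is a hypothetical figure, not the Italian authority's actual parameter.

```python
# Sketch of an income-meter check in the spirit of the redditometro:
# flag taxpayers whose recorded spending exceeds declared income by more
# than a tolerance. The 20% tolerance is hypothetical.
TOLERANCE = 0.20

def flag_discrepancy(declared_income: float, recorded_spending: float) -> bool:
    """True if spending exceeds declared income beyond the tolerance."""
    return recorded_spending > declared_income * (1 + TOLERANCE)
```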

Such systems, however, are toothless against the real culprits of tax evasion – the super-rich families who profit from various offshoring schemes or simply write outrageous tax exemptions into the law. Algorithmic regulation is perfect for enforcing the austerity agenda while leaving those responsible for the fiscal crisis off the hook. To understand whether such systems are working as expected, we need to modify O'Reilly's question: for whom are they working? If it's just the tax-evading plutocrats, the global financial institutions interested in balanced national budgets and the companies developing income-tracking software, then it's hardly a democratic success.

With his belief that algorithmic regulation is based on "a deep understanding of the desired outcome", O'Reilly cunningly disconnects the means of doing politics from its ends. But the how of politics is as important as the what of politics – in fact, the former often shapes the latter. Everybody agrees that education, health, and security are all "desired outcomes", but how do we achieve them? In the past, when we faced the stark political choice of delivering them through the market or the state, the lines of the ideological debate were clear. Today, when the presumed choice is between the digital and the analog or between the dynamic feedback and the static law, that ideological clarity is gone – as if the very choice of how to achieve those "desired outcomes" was apolitical and didn't force us to choose between different and often incompatible visions of communal living.

By assuming that the utopian world of infinite feedback loops is so efficient that it transcends politics, the proponents of algorithmic regulation fall into the same trap as the technocrats of the past. Yes, these systems are terrifyingly efficient – in the same way that Singapore is terrifyingly efficient (O'Reilly, unsurprisingly, praises Singapore for its embrace of algorithmic regulation). And while Singapore's leaders might believe that they, too, have transcended politics, it doesn't mean that their regime cannot be assessed outside the linguistic swamp of efficiency and innovation – by using political, not economic benchmarks.

As Silicon Valley keeps corrupting our language with its endless glorification of disruption and efficiency – concepts at odds with the vocabulary of democracy – our ability to question the "how" of politics is weakened. Silicon Valley's default answer to the how of politics is what I call solutionism: problems are to be dealt with via apps, sensors, and feedback loops – all provided by startups. Earlier this year Google's Eric Schmidt even promised that startups would provide the solution to the problem of economic inequality: the latter, it seems, can also be "disrupted". And where the innovators and the disruptors lead, the bureaucrats follow.

...

The true politics of algorithmic regulation become visible once its logic is applied to the social nets of the welfare state. There are no calls to dismantle them, but citizens are nonetheless encouraged to take responsibility for their own health. Consider how Fred Wilson, an influential US venture capitalist, frames the subject. "Health… is the opposite side of healthcare," he said at a conference in Paris last December. "It's what keeps you out of the healthcare system in the first place." Thus, we are invited to start using self-tracking apps and data-sharing platforms and monitor our vital indicators, symptoms and discrepancies on our own.

This goes nicely with recent policy proposals to save troubled public services by encouraging healthier lifestyles. Consider a 2013 report by Westminster council and the Local Government Information Unit, a thinktank, calling for the linking of housing and council benefits to claimants' visits to the gym – with the help of smartcards. They might not be needed: many smartphones are already tracking how many steps we take every day (Google Now, the company's virtual assistant, keeps score of such data automatically and periodically presents it to users, nudging them to walk more).

The numerous possibilities that tracking devices offer to health and insurance industries are not lost on O'Reilly. "You know the way that advertising turned out to be the native business model for the internet?" he wondered at a recent conference. "I think that insurance is going to be the native business model for the internet of things." Things do seem to be heading that way: in June, Microsoft struck a deal with American Family Insurance, the eighth-largest home insurer in the US, in which both companies will fund startups that want to put sensors into smart homes and smart cars for the purposes of "proactive protection".

An insurance company would gladly subsidise the costs of installing yet another sensor in your house – as long as it can automatically alert the fire department or make front porch lights flash in case your smoke detector goes off. For now, accepting such tracking systems is framed as an extra benefit that can save us some money. But when do we reach a point where not using them is seen as a deviation – or, worse, an act of concealment – that ought to be punished with higher premiums?

Or consider a May 2014 report from 2020health, another thinktank, proposing to extend tax rebates to Britons who give up smoking, stay slim or drink less. "We propose 'payment by results', a financial reward for people who become active partners in their health, whereby if you, for example, keep your blood sugar levels down, quit smoking, keep weight off, [or] take on more self-care, there will be a tax rebate or an end-of-year bonus," they state. Smart gadgets are the natural allies of such schemes: they document the results and can even help achieve them – by constantly nagging us to do what's expected.
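
Mechanically, "payment by results" is a rule mapping tracked health metrics onto a rebate. The sketch below is an editorial illustration; the metrics, targets, and amounts are all hypothetical, not figures from the 2020health report.

```python
# Sketch of a "payment by results" rebate rule: each met target adds a
# fixed amount to an end-of-year rebate. All figures are hypothetical.
REBATES = {                       # metric -> rebate if the target is met
    "quit_smoking": 150.0,
    "blood_sugar_in_range": 100.0,
    "weight_maintained": 100.0,
}

def annual_rebate(results: dict[str, bool]) -> float:
    """Sum the rebates for every tracked target the citizen met."""
    return sum(amount for metric, amount in REBATES.items()
               if results.get(metric, False))
```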

The unstated assumption of most such reports is that the unhealthy are not only a burden to society but that they deserve to be punished (fiscally for now) for failing to be responsible. For what else could possibly explain their health problems but their personal failings? It's certainly not the power of food companies or class-based differences or various political and economic injustices. One can wear a dozen powerful sensors, own a smart mattress and even do a close daily reading of one's poop – as some self-tracking aficionados are wont to do – but those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor. The devil doesn't wear data. Social injustices are much harder to track than the everyday lives of the individuals whose lives they affect.

In shifting the focus of regulation from reining in institutional and corporate malfeasance to perpetual electronic guidance of individuals, algorithmic regulation offers us a good-old technocratic utopia of politics without politics. Disagreement and conflict, under this model, are seen as unfortunate byproducts of the analog era – to be solved through data collection – and not as inevitable results of economic or ideological conflicts.

However, a politics without politics does not mean a politics without control or administration. As O'Reilly writes in his essay: "New technologies make it possible to reduce the amount of regulation while actually increasing the amount of oversight and production of desirable outcomes." Thus, it's a mistake to think that Silicon Valley wants to rid us of government institutions. Its dream state is not the small government of libertarians – a small state, after all, needs neither fancy gadgets nor massive servers to process the data – but the data-obsessed and data-obese state of behavioural economists.

The nudging state is enamoured of feedback technology, for its key founding principle is that while we behave irrationally, our irrationality can be corrected – if only the environment acts upon us, nudging us towards the right option. Unsurprisingly, one of the three lonely references at the end of O'Reilly's essay is to a 2012 speech entitled "Regulation: Looking Backward, Looking Forward" by Cass Sunstein, the prominent American legal scholar who is the chief theorist of the nudging state.

And while the nudgers have already captured the state by making behavioural psychology the favourite idiom of government bureaucracy – Daniel Kahneman is in, Machiavelli is out – the algorithmic regulation lobby advances in more clandestine ways. They create innocuous non-profit organisations like Code for America which then co-opt the state – under the guise of encouraging talented hackers to tackle civic problems.

...

For Silicon Valley, though, the reputation-obsessed algorithmic state of the sharing economy is the new welfare state. If you are honest and hardworking, your online reputation would reflect this, producing a highly personalised social net. It is "ultrastable" in Ashby's sense: while the welfare state assumes the existence of specific social evils it tries to fight, the algorithmic state makes no such assumptions. The future threats can remain fully unknowable and fully addressable – on the individual level.

Silicon Valley, of course, is not alone in touting such ultrastable individual solutions. Nassim Taleb, in his best-selling 2012 book Antifragile, makes a similar, if more philosophical, plea for maximising our individual resourcefulness and resilience: don't get one job but many, don't take on debt, count on your own expertise. It's all about resilience, risk-taking and, as Taleb puts it, "having skin in the game". As Julian Reid and Brad Evans write in their new book, Resilient Life: The Art of Living Dangerously, this growing cult of resilience masks a tacit acknowledgement that no collective project could even aspire to tame the proliferating threats to human existence – we can only hope to equip ourselves to tackle them individually. "When policy-makers engage in the discourse of resilience," write Reid and Evans, "they do so in terms which aim explicitly at preventing humans from conceiving of danger as a phenomenon from which they might seek freedom and even, in contrast, as that to which they must now expose themselves."

What, then, is the progressive alternative? "The enemy of my enemy is my friend" doesn't work here: just because Silicon Valley is attacking the welfare state doesn't mean that progressives should defend it to the very last bullet (or tweet). First, even leftist governments have limited space for fiscal manoeuvres, as the kind of discretionary spending required to modernise the welfare state would never be approved by the global financial markets. And it's the ratings agencies and bond markets – not the voters – who are in charge today.

Second, the leftist critique of the welfare state has become only more relevant today when the exact borderlines between welfare and security are so blurry. When Google's Android powers so much of our everyday life, the government's temptation to govern us through remotely controlled cars and alarm-operated soap dispensers will be all too great. This will expand government's hold over areas of life previously free from regulation.


...

What, then, is to be done? Technophobia is no solution. Progressives need technologies that would stick with the spirit, if not the institutional form, of the welfare state, preserving its commitment to creating ideal conditions for human flourishing. Even some ultrastability is welcome. Stability was a laudable goal of the welfare state before it had encountered a trap: in specifying the exact protections that the state was to offer against the excesses of capitalism, it could not easily deflect new, previously unspecified forms of exploitation.

How do we build welfarism that is both decentralised and ultrastable? A form of guaranteed basic income – whereby some welfare services are replaced by direct cash transfers to citizens – fits the two criteria.

Creating the right conditions for the emergence of political communities around causes and issues they deem relevant would be another good step. Full compliance with the principle of ultrastability dictates that such issues cannot be anticipated or dictated from above – by political parties or trade unions – and must be left unspecified.

What can be specified is the kind of communications infrastructure needed to abet this cause: it should be free to use, hard to track, and open to new, subversive uses. Silicon Valley's existing infrastructure is great for fulfilling the needs of the state, not of self-organising citizens. It can, of course, be redeployed for activist causes – and it often is – but there's no reason to accept the status quo as either ideal or inevitable.

Why, after all, appropriate what should belong to the people in the first place? While many of the creators of the internet bemoan how low their creature has fallen, their anger is misdirected. The fault is not with that amorphous entity but, first of all, with the absence of robust technology policy on the left – a policy that can counter the pro-innovation, pro-disruption, pro-privatisation agenda of Silicon Valley. In its absence, all these emerging political communities will operate with their wings clipped. Whether the next Occupy Wall Street would be able to occupy anything in a truly smart city remains to be seen: most likely, they would be out-censored and out-droned.

...

Algorithmic regulation, whatever its immediate benefits, will give us a political regime where technology corporations and government bureaucrats call all the shots. The Polish science fiction writer Stanislaw Lem, in a pointed critique of cybernetics published, as it happens, roughly at the same time as The Automated State, put it best: "Society cannot give up the burden of having to decide about its own fate by sacrificing this freedom for the sake of the cybernetic regulator."

(http://www.theguardian.com/technology/2014/jul/20/rise-of-data-death-of-politics-evgeny-morozov-algorithmic-regulation)


Risks of Algorithmic Regulation

Tim O'Reilly:

"The use of algorithmic regulation increases the power of regulators, and in some cases, could lead to abuses, or to conditions that seem anathema to us in a free society. “Mission creep” is a real risk. Once data is collected for one purpose, it’s easy to imagine new uses for it. We’ve already seen this in requests to the NSA for data on American citizens originally collected for purposes of fighting overseas terrorism being requested by other agencies to fight domestic crime, including copyright infringement! (See Lichtblau & Schmidt, 2013.)

The answer to this risk is not to avoid collecting the data, but to put stringent safeguards in place to limit its use beyond the original purpose. As we have seen, oversight and transparency are particularly difficult to enforce when national security is at stake and secrecy can be claimed to hide misuse. But the NSA is not the only one that needs to keep its methods hidden. Many details of Google’s search algorithms are kept as a trade secret lest knowledge of how they work be used to game the system; the same is true for credit card fraud detection.

One key difference is that a search engine such as Google is based on open data (the content of the web), allowing for competition. If Google fails to provide good search results, for example because they are favoring results that lead to more advertising dollars, they risk losing market share to Bing. Users are also able to evaluate Google’s search results for themselves.

Not only that, Google’s search quality team relies on users themselves—tens of thousands of individuals who are given searches to perform, and asked whether they found what they were looking for. Enough “no” answers, and Google adjusts the algorithms.

Whenever possible, governments putting in place algorithmic regulations must put in place similar quality measurements, emphasizing not just compliance with the rules that have been codified so far but with the original, clearly-specified goal of the regulatory system. The data used to make determinations should be auditable, and whenever possible, open for public inspection.
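
The rater mechanism O'Reilly describes, where enough "no" answers trigger an adjustment, is a simple threshold test over sampled human judgments. A minimal editorial sketch, with a hypothetical failure threshold:

```python
# Sketch of the human-rater quality check: sampled searches are rated
# "found it" / "didn't find it", and the algorithm is flagged for
# adjustment when the failure rate crosses a (hypothetical) threshold.
FAILURE_THRESHOLD = 0.10

def needs_adjustment(found_it: list[bool]) -> bool:
    """True if too many raters failed to find what they searched for."""
    if not found_it:
        return False
    return found_it.count(False) / len(found_it) > FAILURE_THRESHOLD
```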

There are also huge privacy risks involved in the collection of the data needed to build true algorithmic regulatory systems. Tracking our speed while driving also means tracking our location. But that location data need not be stored as long as we are driving within the speed limit, or it can be anonymized for use in traffic control systems.
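
O'Reilly's point about location data amounts to a data-minimization rule: retain a location fix only while the condition of interest actually holds. A minimal editorial sketch, with hypothetical names and record format:

```python
# Sketch of the data-minimization rule suggested above: keep a location
# reading only when the driver is over the limit; otherwise discard it.
Violation = tuple[float, tuple[float, float]]  # (speed, (lat, lon))

def process_reading(speed: float, limit: float,
                    location: tuple[float, float],
                    violations: list[Violation]) -> None:
    """Store (speed, location) only for readings over the limit."""
    if speed > limit:
        violations.append((speed, location))
    # Under the limit, the location is simply never retained.
```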

Given the amount of data being collected by the private sector, it is clear that our current notions of privacy are changing. What we need is a strenuous discussion of the tradeoffs between data collection and the benefits we receive from its use."

(https://beyondtransparency.org/chapters/part-5/open-data-and-algorithmic-regulation/)


More Information