Scored Society

From P2P Foundation

* The Scored Society: Due Process for Automated Predictions. Frank Pasquale and Danielle Keats Citron.

URL = http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2376209

Description

John Danaher:

"The article looks at the recent trend for using big data to “score” various aspects of human behaviour. For example, there are now automated “scoring” systems used to rank job applicants based on their social media output, or college professors for their student-friendliness, or political activists for their likelihood of committing crimes. Is this a good thing? Citron and Pasquale argue that it is not, and suggest possible reforms to existing legal processes. In short, they argue for a more robust system of procedural due process when it comes to the use of algorithms to score human behaviour." 9http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2376209)


Discussion

John Danaher:

"Citron and Pasquale recommend that we think of the scoring process in terms of four distinct stages: (i) the data gathering stage, in which personal data is gathered; (ii) the score-generating stage, in which an actual score is produced from the data; (iii) the dissemination stage, in which the score is shared with decision-makers; and (iv) the usage stage, in which the score is actually used in order to make some decision.

Once we are clear about the process, we can begin to think about the remedies. Citron and Pasquale suggest that new procedural safeguards and regulations be installed at each stage. I’ll provide a brief overview here.

First, at the data-gathering stage, people should be entitled to know which data are being gathered; and they should be entitled to challenge or correct those data if they believe them to be wrong. The data-gatherers should not be allowed to hide behind confidentiality agreements or other legal facades to block access to this information. There is nothing spectacular in this recommendation. Freedom of information and freedom of access to personal information are now common in many areas of law (particularly when data are gathered by governments).
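A minimal sketch of what this access-and-correction right might look like in code follows; DataStore and its methods are hypothetical names, and the point is only that access requests and challenges are served and logged rather than blocked.

```python
# A hedged sketch of the access-and-correction right at the gathering stage.
# DataStore and its methods are hypothetical names, not a real API.

class DataStore:
    def __init__(self):
        self.records = {}      # subject_id -> {field: value}
        self.challenges = []   # pending correction requests

    def view(self, subject_id):
        # Right of access: the subject can see everything held about them.
        return dict(self.records.get(subject_id, {}))

    def challenge(self, subject_id, key, corrected_value):
        # Right of correction: the challenge is recorded, not silently dropped.
        self.challenges.append((subject_id, key, corrected_value))

    def apply_challenges(self):
        # Accepted corrections overwrite the disputed values.
        for subject_id, key, value in self.challenges:
            self.records.setdefault(subject_id, {})[key] = value
        self.challenges.clear()


store = DataStore()
store.records["applicant-42"] = {"late_payments": 3}
print(store.view("applicant-42"))      # {'late_payments': 3}
store.challenge("applicant-42", "late_payments", 1)
store.apply_challenges()
print(store.view("applicant-42"))      # {'late_payments': 1}
```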

Second, at the score-generating stage, the source code for the algorithms being used should be made public. The processes used by the scorers should be inspectable and reviewable by both regulators and the people affected by the process. Sometimes, there may be merit to the protection of trade secrets, but we need to switch the default away from secrecy to openness.

Third, at the dissemination stage, we run into some tricky issues. Some recent U.S. decisions have suggested that the dissemination of such information cannot be blocked on the grounds that doing so would compromise free speech. Be that as it may, Citron and Pasquale argue that everyone should have a right to know how and when their information is being disseminated to others. This right to know wouldn’t compromise the right to free speech. Indeed, transparency of this sort actually facilitates freedom of speech.

Fourth, at the usage stage, Citron and Pasquale argue for a system of licensing and auditing whenever the data are used in important areas (e.g. in making employment decisions). This means that the scoring system would have to be licensed for use in the particular area and would be subjected to regular auditing in order to ensure quality control. Think of the model of health and safety licensing and inspection for restaurants and you’ve got the basic idea.
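The licensing model implies a simple kind of quality-control check: an auditor replays logged scores against later observed outcomes and renews the licence only if the error rate stays within an agreed bound. The sketch below assumes a hypothetical function name, a 0.5 decision threshold, and an illustrative 20% bound; none of these figures come from the paper.

```python
# A sketch of the quality-control audit the licensing model implies. The
# function name, the 0.5 decision threshold and the 20% bound are all
# illustrative assumptions.

def audit_scoring_system(logged_scores, observed_outcomes,
                         decision_threshold=0.5, max_error_rate=0.20):
    """Return (licence_renewed, error_rate) for a batch of score/outcome pairs."""
    errors = sum(
        (score >= decision_threshold) != outcome
        for score, outcome in zip(logged_scores, observed_outcomes)
    )
    error_rate = errors / len(logged_scores)
    return error_rate <= max_error_rate, error_rate


renewed, rate = audit_scoring_system(
    [0.9, 0.2, 0.7, 0.4],        # risk scores the system assigned
    [True, False, False, True],  # whether the predicted event occurred
)
print(f"licence renewed: {renewed} (error rate {rate:.0%})")  # False (50%)
```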


...


Criticisms and Reflections:

Citron and Pasquale go on to provide a detailed example of how a licensing and auditing system might work in the area of credit-scoring algorithms. I won’t go into those details here, as the example is very US-centric (focusing on particular regulatory authorities in the US and their current powers and resources). Instead, I want to offer some general reflections and mild criticisms of their proposals.

In general, I favour their policy recommendations. I agree that there are significant problems associated with the black box society (as well as potential benefits) and we should work hard to minimise these problems. The procedural safeguards and regulatory frameworks proposed by the authors could very well assist in doing this. Though, as the authors themselves note, the window of opportunity for reforming this area may not open any time soon. Still, it is important to have policy proposals ready to go when it does.


Furthermore, I agree with the authors when they reject certain criticisms of transparency and openness. A standard objection is that transparency will allow people to “game the system”, i.e. generate good ratings when they actually present a risk. This may happen, but its importance is limited by two factors. First, if it does happen, it may just indicate that the scoring system is flawed and needs to be improved: reliable and accurate systems are generally more difficult to game. Transparency may facilitate the necessary improvements in the scoring system by allowing competitors to innovate and learn from past mistakes. Second, the costs associated with people “gaming the system” need to be considered in light of the costs of the present system. The current system did little to prevent the financial crisis in 2008, and its secrecy has an impact on procedural fairness and individual lives. Is the “gaming” worry sufficient to outweigh those costs?

Nevertheless, I have two concerns about the authors’ proposal. One is simply that it may be too idealistic. We are already drowning in information and besieged by intrusions into our personal data. Adding a series of procedural safeguards and rights to review data-gathering systems might do little to prevent the slide toward the algocratic society. People may not exercise their rights or may not care about the (possibly deleterious) ways in which their personal data are being used. In addition to this, and perhaps more subtly, I worry that proposals of this sort do little to empower the individuals affected by algocratic systems. Instead, they seem to empower epistemic elites and technocrats who have the time and ability to understand how these systems work. They will then be tasked with helping the rest of us to understand what is going on, advising us as to how these systems may be negatively impacting on us, and policing their implementation. In other words, proposals of this sort seem to just replace one set of problems — associated with a novel technological process — with an older and more familiar set of problems — associated with powerful human elites. But maybe the devil you know is better than the devil you don’t." (http://philosophicaldisquisitions.blogspot.be/2014/10/can-procedural-due-process-combat.html)


More Information

  • suggested citation: Citron, Danielle Keats and Pasquale, Frank A., The Scored Society: Due Process for Automated Predictions (2014). Washington Law Review, Vol. 89, 2014, p. 1-; U of Maryland Legal Studies Research Paper No. 2014-8. Available at SSRN: http://ssrn.com/abstract=2376209