Does the public use of ADM systems respect democratic values?

Md. Abdul Malek
Published in Tech Doctrine
May 7, 2021

Government interest in artificial intelligence (AI) and automated decision-making (ADM) systems is growing remarkably, with the aim of transforming decision-making processes and public services. Public-sector bodies across the world are turning to these systems to determine public entitlements (licensing, social benefits, and regulatory investigations), prioritize public services, support predictive policing, and inform courts’ decisions on bail and sentencing. These systems are adopted to reduce costs and to promote speed, efficiency, and consistency in decision-making.

But this growing interest in automated technology has become controversial, because many of its implications confront conventional norms and values long practiced in society. A number of studies demonstrate the potential harms of these technologies, so governments’ use of AI and ADM systems provokes varied questions regarding bias, discrimination, democracy, legitimacy, and transparency. This piece examines whether the public use of ADM systems confirms or confronts democratic values, and how and why this happens in our society.

Modern democracy demands democratic decision-making. When a technology choice is made, presumably in favor of the public good, it is necessary to examine how much transparency and public participation are ensured in the decision-making process. Both are needed for a transparent and participatory process to act as a catalyst for mitigating risks, meeting urgencies, and involving all actors [1]. In practice, however, this is difficult to achieve.

Even if significant numbers of people from public bodies are engaged and represented in the process of choosing a technology, involving the larger public of a given jurisdiction is simply not practical. Such difficulties generally raise doubts about the democratic legitimacy of public institutions’ decisions. But are democratic values therefore dispensable in public institutions? Certainly not! These values are still to be promoted, because “democratic governance is not merely concerned with the aggregation of individual preferences; it is also concerned with reasoned and informed deliberation” [2]. Accordingly, since the decisions of public institutions affect individuals, those institutions should at the least strive to promote such deliberation.

Pertinently, public participation may also be ensured through the involvement of civil society organizations, as they usually articulate the interests of particular subsets of society. They too should be represented when automated decision-making tools are chosen, because such organizations can contribute to democratic legitimacy by fostering more accountable and deliberative governance, demanding public justifications for decisions, and providing critical perspectives, provided that they carry the viewpoints of the marginalized and the concerns of those most vulnerable to the effects of such tools. That being the case, requiring public decision-making authorities to justify their actions publicly “not only facilitates the external evaluation of specific actions but also fosters institutional legitimacy” [3]. It thus has the dual benefit of promoting legitimacy and leading to better-informed decisions. Consequently, the automated decision-making system invites skeptical reconsideration, lest its adoption lead to “diminishing its perceived effectiveness, democratic legitimacy, prestige, and inherent value” [4].

Again, with regard to how the adoption of these systems is justified, it is argued that such algorithms are chosen as part of ‘solutionism’, whereby tech companies offer technical solutions to all social problems, including crime [5]; that is why AI technologies have too quickly been given too much power to tackle and solve essentially social (and not technological) problems. Operating as more than a mere social force, new technologies like ADM systems thus have discursive impacts on existing democratic norms, rules, and values [6]. As a result, the unintended consequences of ‘datafication’ and over-reliance on data could destabilize the democratic and social order [7]. Hence, the choice of ADM systems by government requires deliberate and considerate policy and legal guidance a priori. Outsourcing AI and ADM design does not absolve a public body of its legal obligations respecting democratic practice and process.

References

[1] See Albert C. Lin, Prometheus Reimagined: Technology, Environment, and Law in the Twenty-First Century 180 (Univ. of Michigan Press, reprint ed. 2013).
[2] Albert C. Lin, Prometheus Reimagined: Technology, Environment, and Law in the Twenty-First Century 143 (Univ. of Michigan Press, reprint ed. 2013), DOI: 10.3998/mpub.3252454.
[3] Allen Buchanan & Robert O. Keohane, The Legitimacy of Global Governance Institutions, 20 Ethics & International Affairs 405–437 (2006); see also Albert C. Lin, Prometheus Reimagined: Technology, Environment, and Law in the Twenty-First Century (Univ. of Michigan Press, reprint ed. 2013).
[4] Richard M. Re & Alicia Solow-Niederman, Developing Artificially Intelligent Justice, 22 Stan. Tech. L. Rev. 242 (2019).
[5] Evgeny Morozov, To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems That Don’t Exist (Allen Lane 2013).
[6] See Bert-Jaap Koops, Criteria for Normative Technology: The Acceptability of ‘Code as Law’ in Light of Democratic and Constitutional Values, in Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes 172.
[7] See also Danielle Keats Citron, Technological Due Process, 85 Wash. U. L. Rev. 1249, 1252 (2008) (discussing cost savings as an argument made by proponents of automated agency decision-making).

Originally published at https://www.techlawdialogue.net on May 7, 2021.
