Global Governance of Ethics and Human Rights of Artificial Intelligence and Big Data
By BC Stahl
Ethical issues and human rights violations arising from novel technologies are widely discussed. The pace of change and technical innovation seems to increase constantly, providing us with new services and products. At the same time, these technologies have side effects that many people are worried about. Much of the debate on these questions currently focuses on artificial intelligence (AI), often in relation to AI's new capabilities to analyse and make sense of large amounts of data.
Issues of concern range from specific problems related to particular technologies to broad societal changes. An example of a specific issue is the explainability, traceability and accountability of particular types of machine learning algorithms. Many of these are based on technologies such as artificial neural networks that make it difficult to understand how exactly input is transformed into output, which in turn makes it difficult to allocate responsibility for outcomes of the technology to particular individuals. This also raises questions of biases in data, algorithms and models, which can become the basis of unfair discrimination.
Other issues are not as directly linked to particular technologies but raise broad concerns. The impact of AI on employment is one such concern. If AI allows for the automation of activities currently undertaken by humans, for example driving vehicles or doing legal research, then this can have a significant impact on the amount and distribution of employment. Similarly, the concentration of AI capabilities in the big internet companies leads to worries about economic exploitation, but also about economic and political power differences. Access to healthcare, protection of individual privacy and the ability to obtain redress where AI malfunctions are all further examples of these concerns.
The SHERPA project has contributed to this discussion by undertaking rigorous empirical, legal and philosophical research and working with a broad range of stakeholders to understand, develop and test possible interventions. Such interventions include policy and legislation initiatives, various organisational responses as well as professional standards or individual commitments.
One way of thinking about the issue is to see AI as constituted by an ecosystem of actors who sometimes collaborate, sometimes get into conflict, but who collectively create the social and technical capabilities that make up AI. Such an ecosystem, which is made up of constituent sub-ecosystems, is not subject to simple linear interventions. There is a global AI ecosystem, but it consists of regional, national, technical and other subsystems.
In order to address the ethical and human rights issues of AI and big data analytics, we need to find means of steering the overall ecosystem in ways that support human flourishing. This requires a knowledge base and capacity development, but also incentives and governance structures that get members of the ecosystem to collaborate.
We look forward to discussing with stakeholders at the Paris Peace Forum how the SHERPA project thinks that these ideas can be put into practice to contribute to a global governance of AI for the public good.
Views expressed in this publication are the author’s and do not necessarily reflect the views of the Paris Peace Forum.
Bernd Carsten Stahl, Professor of Critical Research in Technology and Director of the Centre for Computing and Social Responsibility at De Montfort University, Leicester, UK
Prof. Stahl’s interests cover philosophical issues arising from the intersections of business, technology, and information. This includes ethical questions of current and emerging ICTs, critical approaches to information systems and issues related to responsible research and innovation. He serves as Ethics Director of the EU Flagship Human Brain Project, Coordinator of the EU project Shaping the ethical dimensions of information technologies — a European perspective (SHERPA) and is Co-PI (with Marina Jirotka, Oxford) of the Observatory for Responsible Research and Innovation in ICT.