How to keep faith in AI and Global Governance: Singapore’s Approach
Today, there is a global renaissance in Artificial Intelligence (AI). While the term was coined 70 years ago, recent advances in processing power and sensor technologies have furthered AI’s ability to process data, extract insights, and deploy these to impact lives.
This article serves two purposes: first, to crystallise why AI requires a global governance response; second, to show how the Paris Peace Forum (PPF) is an ideal platform for a project like Singapore’s Approach to Human-Centric AI to contribute to global governance.
AI’s benefits can be realised at the individual, national and global levels.
At the individual level, AI can significantly enhance our quality of life. For example, AI can provide insights on our personal attributes and preferences, enabling better control over living environments and better responses in health emergencies. At the national level, AI can transform businesses by enabling innovation and enhancing competitiveness. Governments can use AI to improve public service delivery, and enable better decision-making.
AI’s greatest relevance to global governance lies in its potential to help solve global challenges. For example, through real-time monitoring of crop risks, AI enables us to maximise output while lowering environmental impact. AI can also be used to study climate change, predict associated risks and identify potential solutions for international collaboration.
Given these many benefits, AI is expected to be developed and used pervasively.
AI’s global governance challenge
As with any technology, AI is not without risks and challenges, and these are exacerbated by AI’s growing prevalence. Some common concerns include:
(a) Machine learning requires large amounts of data for improved accuracy. However, data can reflect human biases. This is why, for example, some AI programs used for recruitment can still exhibit gender bias in hiring decisions. Widespread use of such AI programs could further entrench these biases and perpetuate discrimination and inequality.
(b) Some AI programs process data beyond our cognitive capabilities. This opacity in decision-making results in reduced transparency and accountability. Where AI’s decisions directly impact human lives, it also reduces our ability to seek recourse.
(c) AI’s automation of lower-level cognitive tasks in the workplace could result in task displacement. While this does not necessarily translate to job loss, it raises the significant challenge of re-skilling the world’s workers for an AI-driven economy.
The fundamental issue here is trust. If trust in AI is lost, we lose the opportunity to maximise AI’s potential benefits for humanity.
It is this link between trust, benefit-maximisation and governance that undergirds Singapore’s Approach to Human-Centric AI (Singapore’s Approach). Singapore’s Approach consists of three initiatives that focus on building public trust while spurring innovation:
(a) Model AI Governance Framework: Issued in January 2019, this living framework provides businesses deploying AI at scale with practical guidance, converting the ethical AI principles of explainability, transparency, fairness and human-centricity into implementable practices.
(b) Advisory Council on the Ethical Use of AI and Data: Set up in 2018, the Council brings together international and multi-disciplinary leaders in AI to advise on issues requiring policy or regulatory attention. The Council’s work will be bolstered by feedback from industry and consumers.
(c) Research Programme on Governance of AI and Data Use: Recognising that AI is still evolving, we need to identify long-term issues concerning AI and data and explore viable solutions. Set up in 2018, the Research Programme will contribute to thought leadership and knowledge exchange.
The current literature points to a gap in global governance for AI. The PPF presents a valuable opportunity to share Singapore’s Approach. We are grateful for this chance to contribute constructively, and to learn from international best practice in addressing common challenges.
Views expressed in this publication are the author’s and do not necessarily reflect the views of the Paris Peace Forum.
Yeong Zee Kin, Assistant Chief Executive (Data Innovation and Protection Group), Infocomm Media Development Authority of Singapore;
Deputy Commissioner, Personal Data Protection Commission
Yeong Zee Kin oversees IMDA’s Artificial Intelligence and Data Industry development strategy. This is one of four frontier technology areas IMDA has identified for its transformational potential for a Digital Economy; the other three are cybersecurity, the Internet of Things, and immersive media. In his role as an AI and data analytics champion, Zee Kin’s work includes developing forward-thinking governance on AI and data, driving a pipeline of AI talent, promoting industry adoption of AI and data analytics, and building specific AI and data science capabilities in Singapore. As the Deputy Commissioner of the PDPC, Zee Kin oversees the administration and enforcement of Singapore’s Personal Data Protection Act 2012. His key responsibilities include managing the formulation and implementation of policies relating to the protection of personal data, as well as the issuing of enforcement directions for organisational actions. He also spearheads public and sector-specific educational and outreach activities to raise awareness of, and compliance with, personal data protection among organisations and individuals.