|ICT Analytical Update | January 31, 2019
Authors: Shay Wester, Ella Duangkaew, Goh Jing Yi
On January 23, 2019, Singapore released the Proposed Model Artificial Intelligence Governance Framework at the World Economic Forum (WEF) in Davos, Switzerland. The Framework builds upon the discussion paper on Artificial Intelligence (AI) and Personal Data released by Singapore's Personal Data Protection Commission (PDPC) and Infocomm Media Development Authority (IMDA) on June 5, 2018. IMDA seeks to drive industry adoption of the Model Framework in recognition that AI is an enabler in the implementation of the Digital Economy Framework for Action.
The Model Framework is underpinned by two fundamental principles: (1) decisions made by or with the assistance of AI should be explainable, transparent and fair to consumers, and (2) AI solutions should be human-centric. The framework constitutes a forward-looking policy response to the rapid advancement of AI and the consequential need to frame ethics-related issues vis-à-vis the corporate use of AI. It aims to mitigate the risk of unintended discrimination leading to unfair outcomes, and to enhance consumers' knowledge of how AI is involved in making significant or sensitive decisions about them. The Model Framework is voluntary and serves as a broad, ready-to-use tool to enable organizations deploying AI solutions at scale to do so in a responsible manner.
The Model Framework highlights several caveats and, in essence, focuses on the following areas:
Organizations should adapt or implement internal governance structures and measures to ensure oversight of the use of AI. These include expert AI personnel, risk management systems, autonomous monitoring and reporting systems, and ethics review boards.
Organizations should weigh their commercial objectives in using AI against the potential risks, taking into account both corporate-specific and societal values. This should be concretized in periodic risk impact assessment reviews. The Model Framework offers three broad decision-making models and a matrix for organizations to employ in their assessments.
The Model Framework sets out that organizations should adopt good data accountability practices to ensure the quality of data, model training and model selection. This includes mitigating inherent selection bias and measurement bias in datasets.
Organizations should provide general information on whether AI is used in their products or services, so as to foster greater consumer confidence in and acceptance of AI. Organizations could also consider providing an opt-out option for consumers, based on in-depth assessments of the degree of risk, the reversibility of harm, the availability and costs of alternative decision-making mechanisms, the complexity of maintaining parallel systems, and technical feasibility.
Moving forward, the IMDA and the WEF will be engaging organizations to discuss the Model Framework in greater detail and facilitate its adoption. Specifically, Singapore will be collaborating with the WEF’s Centre for the Fourth Industrial Revolution in this engagement. This will include developing a measurement matrix for the framework, which regulators and certification bodies can universally adopt and adapt for their use in assessing whether organizations are responsibly deploying AI. IMDA will further engage relevant stakeholders via the Advisory Council on the Ethical Use of AI and Data, as well as a Research Programme on the Governance of AI and Data Use.
The Model Framework comes at a crucial time, amidst the exponential growth in data and computing power that is fueling the advancement of AI. The crux of the Model Framework is for organizations to take responsibility for the impact of these technologies on consumers, which sets the tone for future AI regulations in Singapore and potentially other ASEAN countries.
It is prudent to note that the Model Framework is not a set of prescriptive rules; rather, its general guidelines are meant for companies to review and implement at their discretion. Because it is not backed by legal consequences and lacks any enforcement mechanism, the Model Framework is unlikely to affect the industry as a whole. Moreover, its adoption is potentially limited in scale due to the varying nature and complexity of AI used by organizations. The Model Framework is more applicable to organizations employing big data AI models than to organizations using small data AI methods, or to those deploying updated commercial off-the-shelf software packages that incorporate AI in their feature sets. Nevertheless, the Model Framework serves as a useful starting point for future conversations between the private sector and policymakers on the governance of AI and other frontier technologies.
The Model Framework is the first of its kind in Asia to provide detailed and readily implementable guidelines to private sector organizations using AI. Development of similar frameworks is still nascent around the globe; only Europe has developed a comparable document, its "Draft Ethics Guidelines for Trustworthy AI." Many countries and multilateral institutions have developed strategies to foster AI growth and innovation, but not model frameworks like Singapore's to guide the use of AI. In 2017, the Malaysian government announced plans to develop a National Artificial Intelligence Framework, but there have not been any updates so far.
Thus, the guidelines in the Model Framework may form the basis of, or serve as a point of reference for, the development of innovative technology regulatory frameworks in other ASEAN countries. A recent survey conducted by the IT market research and advisory firm IDC showed that AI adoption rates across ASEAN increased from 8% in 2017 to 14% in 2018, with Indonesia, Thailand and Singapore at the forefront of AI adoption. While adoption rates in ASEAN remain relatively slow compared to North Asian countries, they are expected to increase with ASEAN's growing pace of digitization.
The PDPC is encouraging organizations to review and provide feedback on the Framework by June 30, 2019, particularly by providing practical examples to illustrate sections of the Framework; sharing experiences in implementing the Framework and suggestions on how it could be improved to ease implementation; and offering any other feedback on the Framework. Interested members should email feedback directly to the PDPC at firstname.lastname@example.org by that date.
A particular point to note is that the Model Framework proposes institutionalizing accountability measures, such as keeping a data provenance record, ensuring the traceability of algorithms and AI models by building an audit trail, and implementing a black box recorder. However, the Model Framework offers limited guidance on how organizations can ensure the safety of data records. Members could therefore provide feedback on this area of data management and share best practices for mitigating fraud and security risks. It is important to note that while some industry input was sought in the development of the initial document, a wide public consultation was not conducted, and it will be important for industry to engage now that the PDPC is seeking broader feedback.