29 October 2024

In September 2024, the Ministry of Science, Technology and Innovation (“MOSTI”) published the National Guidelines on AI Governance and Ethics (“Guidelines”) to support the implementation of the National AI Roadmap 2021 - 2025 (“Roadmap”). The Roadmap outlines strategic initiatives aimed at creating a thriving national artificial intelligence (“AI”) ecosystem that harnesses the benefits of AI securely, in pursuit of economic prosperity and social well-being. In this regard, the Guidelines seek to promote responsible AI practices by encouraging stakeholders to develop and deploy AI in a safe, trustworthy, and ethical manner.

This alert provides an overview of the Guidelines.

In brief

The Guidelines are a further step in the development of Malaysia’s approach to AI governance and ethics, which began with the National AI Roadmap 2021 - 2025. The Guidelines target AI end users, policymakers, designers, developers, technology providers, and suppliers. As Malaysia continues to explore best practices and ethical principles in the AI realm, the approach introduced by the Guidelines represents a solid starting point.

While the Guidelines are not legally binding at this juncture, it is prudent for stakeholders to observe the recommendations provided therein to ensure ethical and responsible AI development and deployment.

Applicability of the Guidelines

There is presently no specific law regulating the use of AI in Malaysia. The Guidelines are not legally binding but encourage AI developers and deployers to voluntarily adopt the seven AI principles set out in the Roadmap, alongside existing laws. This approach is intended to foster innovation given that Malaysia is still an emerging player in the global AI landscape.

The Guidelines are developed for three main user categories:

  • AI end users;
  • Policymakers of Government, agencies, organisations and institutions; and
  • Designers, developers, technology providers and suppliers

(collectively, “Stakeholders”).

Seven AI principles

The Guidelines outline the following seven key principles (“AI Principles”), which form the basis of the Guidelines and guide the development and deployment of AI:

  • Fairness: To design AI systems that avoid bias or discrimination against users;
  • Reliability, safety and control: To ensure that AI systems are safe and secure, and perform as intended;
  • Privacy and security: To respect users’ privacy and safeguard the security of the data used by AI systems;
  • Inclusiveness: To foster inclusivity and equal access to AI advancements;
  • Transparency: To promote transparency in AI algorithms so that stakeholders can evaluate the risks of AI, especially where AI is used in decision-making processes;
  • Accountability: To hold AI developers, owners and actors accountable for the proper functioning of AI systems; and
  • Pursuit of human benefit and happiness: To ensure AI technologies are human-centric and respect fundamental rights.

Guidelines

The Guidelines establish a clear shared responsibility framework, detailing the roles and responsibilities of each category of Stakeholders.

End users

End users, ranging from individuals to organisations, use and interact with AI in various ways. In this regard, the Guidelines provide the following:

  • AI Principles should be cascaded to end users;
  • Consumer rights concerning AI products and services must be safeguarded and respected at all times, including the right of consumers to:
    • be informed when their personal information is used in an algorithm or for other purposes;
    • object to the use of their data and to be given an explanation for its use;
    • be forgotten (i.e. to request the deletion of their personal data);
    • interact with a human instead of an AI system;
    • seek redress and compensation for any damage;
    • collective redress (i.e. to go to court as a group if a company has not respected their rights); and
    • complain to a supervisory authority or take legal action;
  • Consumer protection should be safeguarded in the context of AI use, including through the following measures:
    • Any amendment to current laws should clearly define "generative AI" and its various applications to ensure clarity in the law. This definition should encompass AI systems that can create new content, such as text, images, or videos, based on patterns and data inputs;
    • Companies should be obligated to disclose the use of generative AI in their content, allowing customers to make informed decisions by understanding the source of the information they are consuming;
    • Legal liability should be imposed on companies for harmful or misleading content generated by their generative AI systems. This could include, for example, provisions holding companies accountable for damage caused by content generated by their AI systems, particularly in cases of defamation, infringement of intellectual property rights, or dissemination of false information;
    • Data protection and privacy provisions should be strengthened to safeguard consumer data used to train generative AI systems, including by requiring companies to obtain explicit consent from users before using their data for training AI models; and
  • Some “dos” and “don’ts” are provided for end users when adopting AI technology. Examples of “dos” include clearly identifying the problems to be solved with AI and how AI can address those needs, protecting personal data, and adopting ethical guidelines and checks to ensure that the AI system is fair and unbiased. “Don’ts” include underestimating the complexity of AI, ignoring the human element of AI, neglecting data privacy, and overlooking the safety and security of the AI system.

Policymakers

The Guidelines’ primary target audience comprises policymakers, planners, and managers responsible for AI workforce policy and planning at national and local levels, including policymakers in AI sub-areas and features, Government agencies, and sector players that regulate their own value chains.

Recognising that every policymaker’s space and scope are unique and that there is no one-size-fits-all approach, the Guidelines provide guidance on creating bespoke, comprehensive AI frameworks, including checklists tailored to policymakers’ own specific needs.

The Guidelines encourage policymakers to:

  • translate and implement the AI Principles, including developing a regulatory and ethical framework that guides the development, deployment, and use of AI technologies;
  • embed and emphasise the human-centric approach throughout the life cycle of AI, which involves integrating ethical considerations, transparency, and inclusivity into the development, deployment, and use of AI technologies;
  • establish an AI governance system, which includes developing a structured framework that guides the ethical, legal, and responsible use of artificial intelligence technologies within their value chains;
  • measure and evaluate AI performance to ensure that AI systems align with ethical standards, legal requirements, and the intended goals of their deployment; and
  • manage AI risk and impact, which involves developing and implementing policies and regulations that mitigate potential risks, ensure ethical use, and maximise positive outcomes.

Designers, developers, technology providers, and suppliers

This category of Stakeholders encompasses those involved in developing and designing AI products, such as contractors, vendors, and consultants, as well as developers and solution providers.

Developers, designers, technology providers, and suppliers are encouraged to, among other things:

  • embrace the AI Principles to demonstrate their commitment to ethical AI practices and build public trust in their products and services;
  • implement responsible AI algorithm development that covers certain key practices, such as ensuring that the data used to train AI models is diverse, representative, and free from bias;
  • conduct data sharing responsibly and in compliance with privacy and security regulations; and
  • incorporate clauses related to responsible AI practices (e.g. terms that require adherence to ethical guidelines, transparency requirements, and accountability mechanisms throughout the project lifecycle) in contracts and agreements with paymasters.

The Guidelines further recommend that developers, designers, technology providers, and suppliers incorporate the following key steps in their governance process to ensure ethical and responsible AI development and deployment:

  • Establish a data governance system that gathers relevant data from various sources to ensure it is representative and unbiased, annotates data accurately, and defines what each data point represents;
  • Choose an appropriate machine learning model based on the nature of the problem and available data;
  • Ensure that users understand the AI system’s capabilities, limitations and potential biases;
  • Continuously monitor the AI system’s performance and behaviour in real time to detect and address data drift or biases in model predictions (an illustrative sketch of such drift monitoring follows this list);
  • Design an emergency shut-off mechanism to disable the AI system in the event of unexpected behaviour or ethical concerns;
  • Develop a performance measurement index that includes metrics related to ethics and responsible AI;
  • Perform a comprehensive risk analysis to identify potential risks associated with the AI system’s use;
  • Establish a feedback loop that integrates user feedback, monitoring results, and risk analysis findings into the AI governance process;
  • Maintain comprehensive documentation of the AI governance process; and
  • Report on AI system behaviour, performance, and ethical standards adherence to relevant stakeholders, including users, regulators and the public.
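For technically minded readers, the following is a minimal sketch of what the continuous monitoring step above might look like in practice for one form of data drift. It assumes Python with the numpy and scipy libraries available; the feature values, threshold, and method shown are illustrative only, as the Guidelines do not prescribe any particular tooling or technique.

    # Illustrative sketch only: flag data drift in a single input feature by
    # comparing live values against the training-time distribution.
    # The threshold (alpha) and the synthetic data below are hypothetical.
    import numpy as np
    from scipy.stats import ks_2samp

    def check_feature_drift(training_values, live_values, alpha=0.01):
        # A two-sample Kolmogorov-Smirnov test compares the two distributions;
        # a small p-value suggests the live feature has drifted from training data.
        statistic, p_value = ks_2samp(training_values, live_values)
        return {"drifted": p_value < alpha, "statistic": statistic, "p_value": p_value}

    # Example: training-time feature values versus a shifted live sample.
    rng = np.random.default_rng(0)
    train = rng.normal(loc=0.0, scale=1.0, size=10_000)
    live = rng.normal(loc=0.4, scale=1.0, size=1_000)
    print(check_feature_drift(train, live))  # expected to report drift

A flagged result of this kind would feed into the risk analysis, feedback loop, and reporting steps described above, prompting review or retraining of the model.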

The Guidelines also provide practical examples demonstrating how the AI Principles have been applied in specific industry sectors. These sector-specific case studies illustrate how businesses and government entities can integrate AI technologies responsibly into their existing systems while aligning with ethical standards.

Conclusion

By adhering to the AI Principles, stakeholders across various sectors may leverage AI’s potential while mitigating its associated risks. This balanced approach helps ensure that AI development and deployment remain aligned with societal values.

As Malaysia continues to explore best practices and ethical principles in the AI realm, the soft approach introduced by the Guidelines represents a solid starting point. Although the Guidelines are not yet legally binding, stakeholders would be well advised to observe their recommendations so that AI is developed and deployed ethically and responsibly.

Further information

This alert has been prepared with the assistance of Senior Associate Ng Hong Syuen and Associate Yung Jia Heng.
