Singapore’s Approach to Responsible Artificial Intelligence

SINGAPORE – In handling the emergence of Artificial Intelligence (AI), the Singapore government is treating AI governance as an urgent priority, said Josephine Teo, Minister for Communications and Information.

In 2019, the government introduced the Model AI Governance Framework. More recently, a foundation was set up to guide the development of AI Verify, an open-sourced testing framework and software toolkit that helps industries be more transparent about their AI.

“While we take steps to strengthen AI Governance, we should also pay attention to how AI models are being developed,” she said during the opening remarks of Personal Data Protection Week in Singapore.

Over the past year, AI has come to dominate headlines and conversations. Much of the excitement centres on how human-like and capable AI has become, with the ability to answer complex questions, compose essays, write code, and even produce impressive music, images, and videos.

Equally, there are many concerns about AI-generated content, including how it can be misused for disinformation or criminal activities such as scams, she added.

Clarity on the use of personal data in AI helps companies innovate

In March, the Personal Data Protection Commission (PDPC) announced that it would launch the Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems.

This is part of a wider effort to lay the groundwork for a trusted ecosystem for AI development and deployment, she said, and the Advisory Guidelines will give businesses more clarity on the use of personal data to train or develop AI models.

“The Guidelines also encourage AI solution providers to support their clients’ compliance with the PDPA, including by designing systems so that it is easy to extract the information clients need to provide their explanations.”

She added that the PDPC recognises the new concerns arising from the use of personal data in generative AI.

“For instance, there is the use of publicly available personal data to train large models, or to produce synthetic media or ‘deepfakes’. The PDPC is looking into these issues, as well as considering whether further guidance should be provided under the PDPA.”
