
Singapore — Booking a badminton court at one of Singapore's 100-odd community centres can be a workout in itself, with residents forced to type in times and venues repeatedly on a website until they find a free slot. Thanks to artificial intelligence (AI), it could soon be easier.

The People's Association, which runs the community centres, worked with a government tech agency to build a chatbot powered by generative AI to help residents find free courts in the city-state’s four official languages.

The booking chatbot, which could be rolled out shortly, is among more than 100 generative AI-based solutions spurred by the AI Trailblazers project, launched in 2023 to find AI-based solutions to everyday problems.

The project, backed by Singapore government agencies and Google, has also led to the development of tools to scan job applicants’ CVs, develop customised teaching curricula and generate transcripts of customer service calls.

It is part of the Southeast Asian nation’s AI strategy that is light on regulation and keen on “AI for all”, said Josephine Teo, minister for communications and information.

“Regulations are certainly part of good governance, but in AI, we have to make sure there is good infrastructure to support the activities,” she said at a briefing in January at Google’s Singapore office, where some of the new tools were demonstrated.

“Another very important aspect is building capabilities … [and] making sure that people not only have access to the tools, but are provided with opportunities to grow the skills that will enable them to use these tools well,” Teo said.

With an explosion in the use of generative AI globally, governments are racing to curb its harms — from election disinformation to deepfakes — without throttling innovation or the potential economic benefits.

In Singapore, the focus is on AI adoption in the public sector and industry, and building an enabling environment of research, skills and collaboration, said Denise Wong, an assistant CEO at Infocomm Media Development Authority (IMDA), which oversees the country's digital strategy.

“We are not looking at regulation — we see a trusted ecosystem as critical for the public to use AI confidently,” she told the Thomson Reuters Foundation.

“So we need an ecosystem where companies are comfortable, that allows for innovation and to deploy in a way that is safe and responsible, which in turn brings trust,” she said.

Responsible AI

With its stable business environment, Singapore consistently ranks near the top of the global innovation index, climbing to fifth place last year on the strength of its institutions, human capital and infrastructure.

On AI, Singapore was an early adopter, releasing its first national AI strategy in 2019 with the aim of individuals, businesses, and communities using AI “with confidence, discernment, and trust”.

It began testing generative AI tools in its courts in 2023, and uses them in schools and in government agencies, and released its second national strategy in December, with the mission “AI for the public good, for Singapore and the world”.

Also in 2023, Singapore set up the AI Verify Foundation to develop testing tools for responsible AI use, along with a generative AI sandbox for trialling products. IMDA and technology companies including IBM, Microsoft, Google and Salesforce are among the foundation's primary members.

The toolkit, on code-sharing platform GitHub, has drawn the interest of dozens of local and global companies, Wong said.

“It provides users the means to test on parameters they care about, like gender representation or cultural representation, and nudges them towards the desired outcome,” she added.

In tests by tech firm Huawei, the toolkit highlighted racial bias in the data, while tests by UBS bank prompted reminders that certain attributes in the data could affect the model's fairness, according to IMDA.

“We want to enable everyone to use AI responsibly. But governments cannot do this on their own,” Wong said.

Goldilocks model

Worldwide, there are more than 1,600 AI policies and strategies from 169 countries, according to the Organisation for Economic Co-operation and Development (OECD).

The US has opted for a market-based model with minimal regulation, while Europe has embraced a rights-based approach, and China has prioritised sovereignty and security, said Simon Chesterman, a senior director at AI Singapore, the lead government programme.

Singapore has taken a different path.

“For small jurisdictions like Singapore, the challenge is how to avoid under-regulating — meaning you expose your citizens to risk — or over-regulating, meaning you might drive innovation elsewhere and miss out on the opportunities,” he said.

“In addition to this Goldilocks idea of regulation, there is a real willingness to partner with industry ... because industry standards and choices will always be the first line of defence against problems associated with AI,” he said.

“It also increases the chances that Singapore can reap the benefits of the new knowledge economy.”

The 10-member Association of Southeast Asian Nations' guide to AI governance and ethics, released earlier in February, recommends principles of transparency, fairness and equity, accountability and integrity, and “human-centricity”.

Yet member countries including Singapore, Cambodia and Myanmar have been criticised for using AI to enhance surveillance, including with facial recognition and crowd analytics systems, and patrol robots.

A second edition of the AI Trailblazers project will be launched in Singapore in 2024 and will help up to 150 more organisations build generative AI solutions for everyday challenges, Teo said.

While these collaborations between the government, industry and academia can accelerate technological progress, there are risks, warned Ausma Bernot, a researcher at Griffith University in Australia.

“There is the possibility of becoming overly reliant on these corporations in the medium- to long-term,” she said.

“The challenge is striking a balance between co-operation and maintaining sovereign control over critical AI infrastructure.”

At the Trailblazers event, a short film on the People's Association's booking chatbot created a buzz of excitement.

There were more than 140,000 badminton court bookings in 2022, so a tool that makes booking easier is welcome, said Weng Wanyi, director of the National AI Office.

“It will save time and effort,” she said. “At the end of the day, it's about solving real problems with technology.”

Thomson Reuters Foundation
