
Op ed: This is what Australia needs to do to regulate AI

We need strong laws and well-resourced regulators to make sure consumers are protected from the possible harms of AI.

Image: facial recognition technology being used at an event
Last updated: 08 September 2023

Need to know

  • As AI becomes increasingly mainstream, we need to be aware of the risks this technology poses
  • We are already seeing examples of harm resulting from the use of AI, such as discriminatory outcomes from automated pricing
  • CHOICE has made a submission to government outlining our suggestions on how consumers can be protected from the risks of AI, including making laws risk-based and appointing strong regulators

Once confined to academic papers and science fiction, artificial intelligence (AI) officially moved out of the research labs and into the consumer market in 2023. AI-based tools ChatGPT and DALL-E have become household names, and AI promises to deliver both productivity and fun.

But for all its benefits, we shouldn't ignore the risks AI poses. Businesses are looking to AI to increase profitability, often at the expense of consumers.

The risks of artificial intelligence 

Our investigations over the past year have found that facial recognition technology has made its way into retail stores, pubs, clubs and stadiums. This technology lets businesses automatically refuse access to people based on identity databases, but experts have found startling rates of inaccuracy, especially for people with disabilities and people of colour, particularly women of colour.

AI is also being used to process more data than ever before. Businesses even use algorithms to decide how much we should pay for things, from our groceries to insurance, subscription plans and even our home loans.

But when pricing decisions are entirely automated, they can lead to discriminatory outcomes, such as higher premiums for people from marginalised backgrounds or increased prices for older people.

Generative AI like ChatGPT comes with its own set of hazards. Chatbots built on ChatGPT can repeat false information in their answers or give dangerous advice. The Federal Trade Commission, the US consumer protection and competition watchdog, is currently investigating OpenAI, the maker of ChatGPT, over whether the chatbot has harmed people by generating false information, and is also looking into the company's privacy practices.

What needs to be done to protect consumers?

Image: person showing sales figures on a tablet using a stylus

Businesses use algorithms to make decisions about how much people should pay for things, which can result in unfair outcomes.

AI laws should be risk-based

Experts have been sounding the alarm about these risks for some time, but governments around the world are only just catching up. Australia is now running its own consultation on AI, and CHOICE has made a submission outlining how the government can protect consumers.

At the heart of our submission is the need for a risk-based approach to AI, similar to the framework the European Union is proposing. A risk-based framework categorises AI activities along a spectrum: those considered minimal risk face few limitations, while high-risk activities are restricted or even prohibited.

We also suggested that our AI laws should codify consumer rights to safety, fairness, accountability, reliability, and transparency. 

The federal government should also strengthen existing laws like the Australian Consumer Law and the Privacy Act to ensure people are comprehensively protected from AI misuse or exploitation.

Strong regulators are essential

But making new laws isn't enough – we need strong regulators to enforce them. CHOICE is calling for a well-funded AI Commissioner with a range of regulatory powers, including the power to seek civil and criminal penalties.

An AI Commissioner should bring specialist expertise and collaborate with the existing regulators that oversee areas affected by AI, such as consumer protection, competition, privacy and human rights.

Big tech wants to regulate itself, but history proves these businesses can't be trusted to write their own rules. Australia should follow the lead of the European Union and Canada and lay down the foundations for a fair market where businesses must guarantee safe, fair, transparent, reliable, and accountable AI systems before releasing them. 

Not only would this protect our community from harm, it would also encourage innovation and promote responsible AI use.

You can read our full submission to the government here.
