
AI is increasingly invading our medical privacy as regulation struggles to keep up

Patient scans shared without consent, AI scribes recording doctor visits – medical professionals' AI use is raising big concerns.

Last updated: 21 October 2025
Fact-checked

Checked for accuracy by our qualified verifiers and subject experts. Find out more about fact-checking at CHOICE.

Need to know

  • Around 22% of GPs are now using AI scribes to record patient visits, and some scribes are also proposing diagnoses and treatments
  • Profit-driven medical businesses have used large volumes of patient data to train AI tools without patients' knowledge or consent 
  • Though consent is a cornerstone of privacy law, the Privacy Commissioner says consent isn't always required when it comes to AI and patient data

It seems the artificial intelligence (AI) industry is grabbing our personal information as fast as it can – before regulations end up enforcing what the community actually wants. 

Among other indicators, the final report of the Australian Competition and Consumer Commission's (ACCC) Digital Platform Services Inquiry, released in March this year, confirmed that the vast majority of us don't want our personal data used to train AI tools without our consent. 

In a consumer survey conducted as part of the report, 83% of Australians said their approval should be mandatory before their data is used to train AI. This aligns with research from the agency tasked with protecting our personal information, the Office of the Australian Information Commissioner (OAIC), which found that 84% of Australians want control over their personal information, including the right to demand that it be deleted.

In mid-October, Roy Morgan published research showing that 65% of Australians think AI "creates more problems than it solves", though 12% hailed its ability to advance medical science. 

[The Privacy Act's requirements] are constraining innovation without providing meaningful protection to individuals

Productivity Commission

There is considerable tension between those seeking to protect our data and those who see it as the key to improving products and services – and boosting profits – especially the big tech companies.

Some federal government agencies are also on board with this view. According to a recent report by the Productivity Commission (PC), our personal data shouldn't be locked away behind a wall of inflexible regulations.

The Privacy Act's requirements "are constraining innovation without providing meaningful protection to individuals", the report says. It concludes that making it easier to harness data could add up to $10 billion a year to the country's economy.

To that end, the PC calls for exempting some businesses from the requirement to obtain informed consent before accessing a person's data – one of the key pillars of the Privacy Act – provided they commit to acting in that person's best interests when handling their privacy.

The report goes on to argue that giving consent has become a meaningless exercise, since no one has the time to read whatever they're consenting to. 

While businesses are required to have a privacy policy under the Privacy Act, these sprawling documents are all but impossible to understand. The ACCC has estimated that the average privacy policy runs to about 6876 words, and that it would take the average Australian 46 hours to read all the policies they encounter in a month.

These consent protocols are clearly not working as intended. But Privacy Commissioner Carly Kind has taken issue with the idea of watering down privacy regulations, arguing that protecting privacy and boosting productivity are not mutually exclusive, and that informed consent is a critical consumer right.

Patient scans used to train AI without consent 

But there have been cases in which the OAIC – where Kind is one of three commissioners – has allowed businesses to surreptitiously harvest our data for their own ends.

In September last year, for instance, Crikey reported on the case of Australia's biggest radiology chain, I-Med Radiology Network, entering a joint venture with the start-up AI platform Harrison.ai in 2019.

The plan was to use I-Med's patient scans to train an AI tool called Annalise.ai, which has now reportedly been approved for use in over 40 countries and heralded as a gamechanger.

The quantity and quality of the I-Med data – around 30 million patient scans and the resulting diagnostic reports from Australia and other countries – were key to the tool's success.

It is clearly a money-maker for the business. In April last year, the Australian Financial Review reported that the business was on track to record $1.35 billion in revenue for the financial year. 

(Harrison.ai's flagship product was an AI model trained on 800,000 chest x-rays sourced from one of I-Med's 270 clinics in Australia, Crikey reported. Harrison.ai owns Annalise.ai, which is now called Harrison.ai Radiology.)

The Crikey article says there is no evidence that informed consent was obtained from the patients – a practice that, based on the surveys mentioned above, would likely have run counter to many of their wishes. Neither I-Med nor Harrison.ai has disputed this.

Consent is only required in limited circumstances by the Australian Privacy Principles, and will not always be required when entities use personal information for AI training

Office of the Australian Information Commissioner spokesperson

But it appears there is some leeway on the informed consent requirement. In July this year the OAIC ruled that I-Med had not contravened the Privacy Act since the data had been de-identified and could no longer be defined as personal information.

The ruling suggests that, where privacy issues are not at play, it's not the OAIC's role to prevent businesses from grabbing our data to train AI, with or without our approval. 

An OAIC spokesperson tells CHOICE that "the issue of consent is always a highly relevant factor", but the protections of the Australian Privacy Principles (APPs) no longer apply when patient data is de-identified. 

"However, strong de-identification is challenging, and whether something is de-identified is context dependent. Data that is de-identified when subject to strict controls – as it was in the I-MED case – may not be de-identified in other contexts, such as if it is released publicly," the spokesperson says. 

The right balance between privacy regulation and the development of new AI tools remains a work in progress.

While industry guidance from the OAIC stipulates that people should be informed when their data is being used to train AI, "consent is only required in limited circumstances by the APPs, and will not always be required when entities use personal information for AI training", the spokesperson added. 

In other words, we have no copyright on the content of our bodies. Medical clinics are free to use our data for commercial purposes without telling us, and businesses can profit handsomely. 


In a joint business venture, Australia's largest radiology chain I-Med Radiology made millions of patient scans available to the AI firm Harrison.ai without the patients' knowledge or consent.

The rise of AI scribes in GP consultations

All of this is pertinent to a related development – the growing use of AI tools by GPs and specialists to record patient visits, known as AI scribes. Currently, around 22% of GPs are using AI scribes, according to polling by the Royal Australian College of General Practitioners (RACGP). Popular models include Lyrebird, Heidi Health, Amplify+, i-scribe and Medilit, but there are many others.

The RACGP has tentatively embraced some uses of AI scribes, especially where they can reduce the administrative burdens on doctors and free them up to concentrate more fully on preventative care. 

But the college has also expressed concern about AI tools being developed by profit-seeking tech firms without the oversight of medical clinicians. "Value to technology company shareholders might be prioritised over patient outcomes," the RACGP wrote in a position statement on the issue. 

Of particular concern is the reliability – and the legality – of AI in making medical recommendations. The Therapeutic Goods Administration (TGA) has recently gone on record regarding the issue, saying medical professionals report that AI scribes "frequently propose diagnosis or treatment for patients beyond the stated diagnosis or treatment a clinician had identified during consultations".

Such functionality would mean AI scribes are medical devices that require pre-market approval, the TGA says. By sidestepping that approval process, the tools are potentially being supplied to the medical industry in breach of the Therapeutic Goods Act.

AI scribes can produce errors and inconsistencies and cannot replace the work GPs typically undertake to prepare clinical documentation

RACGP spokesperson

The RACGP hasn't taken a position on whether AI scribes should be regulated by the TGA, but it does instruct GPs to obtain consent from patients before using them, to double check that the information they record is accurate, and to have a backup plan in place in case there's a glitch with the AI scribe. 

"AI scribes can produce errors and inconsistencies and cannot replace the work GPs typically undertake to prepare clinical documentation," a RACGP spokesperson tells CHOICE. 

"GPs and other doctors must carefully check the output of an AI scribe to ensure its accuracy. Where an AI scribe performs its expected function – summarising information for independent GP review and decision making – it does not have a therapeutic use. Diagnosis, however, is outside the scope of an AI scribe." 


AI scribes are currently used by around 22% of GPs in Australia.

Market use shouldn't pre-date regulation

One very concerned citizen is AI expert Dr Kobi Leins, who was recently told to take her business elsewhere after asking that AI not be used during a specialist visit for her child.

Leins cancelled the appointment, not least because she was familiar with the AI model in use and wasn't impressed by its privacy and security features. She didn't want it capturing her child's data.

"There is no need for many of these tools, and fundamentally, we need to ask why they are being pushed so hard and where money would be better spent in a healthcare system where doctors have time to listen to patients," Leins says. 

"Individuals do not have the skills nor the capacity to review these tools. Regulatory bodies need to review them in a way that ensures privacy, manages risk and trains staff about their limitations and where they are safe to use. It's about individual privacy, but also about group privacy. There are potentially grave harms based on racial and gender and other biases in the data these tools rely on." 

Leins points to a recent study funded by the UK's National Institute for Health and Care Research, which found that when social workers used Google's popular AI model 'Gemma' to produce case note summaries, it downplayed women's physical and mental health issues compared with men's.

In these cases, it's the AI companies in the driver's seat rather than healthcare workers, she argues. 

Regulatory bodies need to review [AI scribes] in a way that ensures privacy, manages risk and trains staff about their limitations and where they are safe to use

AI expert Dr Kobi Leins

Leins is not alone. One concerned parent told us they didn't trust the data handling practices of AI companies.

The privacy policy of one of the most popular AI scribes, Heidi Health, is not reassuring. It says the business may share the medical data it captures with employees, third party suppliers, related companies, anyone it transfers the business to, 'professional advisers, dealers and agents', government agencies, law enforcement and more. Your data may also be transferred overseas. 

How the data is managed "is dependent on your GP and the policy of whatever scribe they're using, which you're unlikely to know when they ask you to sign over consent," the parent says.

"And like with most companies, you have no control over what happens when they get hacked and all your personal health information ends up on the dark web. I generally consent for my data to be scribed, but not my kid's." 

Another parent recently encountered an AI scribe while taking her daughter to a specialist. She was told the data would be deleted after it had been reviewed, "but I only had her word to go on. What are the privacy implications here? And what mistakes can and does it make? Does the doctor look at the summary after each appointment at the end of the day while it is fresh in their mind, to make sure that the scribe accurately captured the info?" 


The privacy policy of Heidi Health gives the company wide latitude for sharing personal medical data.

Just like asking Dr Google 

Professor Enrico Coiera, who directs both the Centre for Health Informatics at Macquarie University's Australian Institute for Health Innovation and the Australian Alliance for AI in Healthcare, tells CHOICE that one of his biggest concerns is that product development is far outpacing regulatory oversight.

"Generative AI products in particular are being updated constantly. This makes it very hard for the traditional safety guardrails we rely on, like regulation, to make sure these new technologies are safe.

"Much of this kind of AI is never marketed as 'medical grade', but as a general-purpose tool. So it is never assessed for its safe and effective use in healthcare."

Much of this kind of AI is never marketed as 'medical grade', but as a general-purpose tool

Australian Alliance for AI in Healthcare director, Professor Enrico Coiera

Coiera says this is much like asking your search engine a health question. If an AI scribe is suggesting diagnosis or treatment, it's a medical device that needs to be regulated by the TGA, he says. 

Referring to the I-Med case, Coiera says, "patients should be asked to consent to the use of their data for building AIs, and especially so if there is a risk their information is identifiable". 

He recommends that patients read any consent forms carefully before signing. 

"If they are uncomfortable with their data being used for AI development, they should discuss that with their care provider." 

Coiera is a believer in the capacity of AI to advance medical science, as long as people's privacy is protected. 

"As long as I am comfortable that my data is stored securely and is de-identified before use, I would agree to its use for non-commercial research purposes." 


Stock images: Getty, unless otherwise stated.