The Australian government is looking to regulate artificial intelligence applications, but which uses are concerning and what are the fears if it goes unregulated?
On Thursday, the industry and science minister, Ed Husic, released a consultation paper on measures that can be put in place to ensure AI is used responsibly and safely in Australia.
Husic noted that since the release of generative AI applications such as ChatGPT, there has been a “growing sense” that the technology is developing at an accelerated pace and represents a big leap forward.
“People want to think about whether or not that technology and the risks that might be presented have been thought through and responded to in a way that gives people assurance and comfort about what is going on around them,” he said.
“Ultimately, what we want is modern laws for modern technology, and that is what we have been working on.”
What is AI?
The term is almost as old as electronic computers themselves, coined in 1955 by a team including legendary Harvard computer scientist Marvin Minsky. With no strict definition of the phrase, and the lure of billions of dollars of funding for anyone who sprinkles AI into pitch documents, almost anything more complex than a calculator has been called artificial intelligence by someone.
AI is already in our lives in ways you may not realise. Special effects in some films and voice assistants such as Amazon’s Alexa both use simple forms of artificial intelligence. But in the current debate, AI has come to mean something else.
It boils down to this: most old-school computers do what they are told. They follow instructions given to them in the form of code. But if we want computers to solve more complex tasks, they need to do more than that. To make them smarter, we are trying to teach them to learn in a way that imitates human behaviour.
Computers cannot be taught to think for themselves, but they can be taught how to analyse information and draw inferences from patterns within datasets. And the more you give them – computer systems can now cope with truly vast amounts of information – the better they should get at it.
The most successful versions of machine learning in recent years have used a system known as a neural network, which is modelled at a very simple level on how we think a brain works.
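The difference between following instructions and learning from examples can be seen in a toy sketch. The snippet below is illustrative only: it trains a single artificial neuron (a perceptron, the simplest building block of a neural network) to reproduce the logical AND rule purely from example data, rather than being programmed with the rule itself. Real neural networks chain millions of such units together.

```python
# A toy single-neuron "network" that learns the logical AND function
# from examples, instead of being explicitly programmed with the rule.

# Training data: pairs of inputs and the expected output.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]  # strength of each input connection
bias = 0.0            # baseline activation of the neuron
rate = 0.1            # how strongly the neuron adjusts after a mistake

def predict(inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Repeatedly show the neuron the examples and nudge its weights
# whenever its guess is wrong -- this nudging is the "learning".
for _ in range(20):
    for inputs, target in examples:
        error = target - predict(inputs)
        bias += rate * error
        for i, x in enumerate(inputs):
            weights[i] += rate * error * x

print([predict(inputs) for inputs, _ in examples])  # [0, 0, 0, 1]
```

After training, the neuron's predictions match the targets even though no one wrote an "AND" rule into the code; the behaviour was inferred from the patterns in the data, which is the principle the paragraph above describes.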
What types of AI are they concerned about?
Generative AI underpins much of the public debate around the future of AI: that is, AI built on large datasets of information that generates text, images, audio and code in response to prompts.
The applications using generative AI include large language models (LLMs) such as ChatGPT, which generate text, and multimodal foundation models (MfMs), which can output text, audio or images.
Applications in which AI makes decisions, known as automated decision-making, are also within the scope of the review.
What are the fears?
Fake images, misinformation and disinformation are at the top of the pile of concerns.
The paper says there are fears generative AI could be used to create deepfakes – fake images, video or audio that people confuse for real – that could influence democratic processes or “cause other deceit”.
So far such fakes have been mostly innocent – an AI-generated image of the Pope in a Balenciaga jacket is the most cited example – but last month an AI-generated image of an explosion next to the Pentagon in the United States circulated widely on social media before being debunked.
There is also concern about so-called “hallucinations”, where generative AI outputs text citing sources, information or quotes that do not exist. Some generative AI firms are trying to prevent this by providing links to sources in generated text.
There is also a major fear that where AI makes decisions, algorithmic bias could lead to bad outcomes. If the datasets used to train an AI are not comprehensive, its decisions can discriminate against minority groups or, for example, prioritise male candidates over female candidates in recruitment.
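How a skewed dataset produces a skewed system can be shown with a deliberately simple, entirely hypothetical sketch. The "model" below does nothing more than learn the historical hire rate for each group from its training records; because the invented history favoured men, the resulting system does too.

```python
# A minimal, hypothetical illustration of algorithmic bias in hiring.
# The model learns historical hire rates per group from training data;
# if that history was skewed, the model reproduces the skew.

from collections import defaultdict

# Hypothetical historical records: (candidate group, was hired).
# Qualifications are identical; only past hiring decisions differ.
history = [("male", 1)] * 8 + [("male", 0)] * 2 \
        + [("female", 1)] * 2 + [("female", 0)] * 8

def learn_hire_rates(records):
    hires, totals = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

rates = learn_hire_rates(history)
print(rates)  # {'male': 0.8, 'female': 0.2}

# A system that shortlists anyone whose group's historical rate
# exceeds 0.5 now prioritises male candidates automatically.
def shortlist(group):
    return rates[group] > 0.5

print(shortlist("male"), shortlist("female"))  # True False
```

No one programmed the system to discriminate; the bias arrived entirely through the training data, which is why the consultation paper's focus on dataset quality matters.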
How can we know if the AI is going wrong?
The paper suggests the best way to see how an AI might respond to something is to be as transparent as possible in how it works, including providing complete details on the dataset the AI is trained on.
Will new laws be needed?
The Australian government concedes in the paper that many of the risks associated with AI are already covered by existing regulation, including privacy law, Australian consumer law, online safety law, competition law, copyright law and discrimination law. The paper suggests any changes should be targeted at closing gaps, once regulators have determined that such gaps exist in their existing powers.
For example, the Office of the Australian Information Commissioner had already used its powers under the Privacy Act to take action against Clearview AI for using people’s photos scraped from social media without permission.
The Australian Competition and Consumer Commission (ACCC) also won a lawsuit under existing Australian consumer law against the travel booking site Trivago over misleading hotel booking results generated by an algorithm.
Is it all bad news?
While much of the current discussion around AI seems geared towards the dangers, the paper does recognise that the technology will bring benefits for society. The Productivity Commission has said AI will be one of the technologies that helps drive productivity growth in Australia. The paper states that hospitals will use AI to consolidate large amounts of patient data and analyse medical images, and that AI can be used to optimise engineering designs and cut the cost of legal services.