Josh Taylor Technology reporter 

Australian federal court warns lawyers over ‘unacceptable’ use of AI

New guidance to legal profession ‘embraces’ use of technology but flags penalties for lawyers who ‘mislead the court’ with AI-generated errors

Lawyers who use generative AI in ways that go against the new rules should expect consequences such as adverse costs orders, Australia’s federal court has stated. Photograph: Dean Lewins/AAP

The federal court of Australia has warned the legal profession about the dangers of using generative artificial intelligence in legal proceedings, issuing new rules for its use, with potential financial or legal consequences if AI errors frustrate court cases.

Amid an explosion in court filings in Australia and across the globe found to have included false citations generated by AI, the federal court on Thursday issued a new practice note on how the technology can be used in court cases.

The chief justice of the federal court, Debra Mortimer, said presentation of false or inaccurate information to the court is “unacceptable”.

“It is inconsistent with the responsibility on all persons to not mislead the court or other parties,” she said in the note.


“It is also likely to frustrate the just resolution of proceedings according to law and as quickly, inexpensively and efficiently as possible.”

Mortimer said users should be cautious in the application of AI in pleadings, written submissions and other documents lodged with the court, noting that it may generate fictitious cases, citations, quotes and factual errors.

Among other required checks, lawyers and solicitors should confirm whether AI has been used in preparing documents, and that the legal authorities cited exist and support the propositions made, Mortimer said.

For affidavits and expert reports, if generative AI is used, the content must still reflect the deponent's or expert's own recollection, knowledge or experience.

Use of AI must be disclosed where tools are used to summarise or analyse information, to create images, videos or sound recordings presented to the court, or in any other manner that might affect the admissibility of that evidence.

The disclosure of generative AI use should appear at the start of the document and outline where and how the technology has been used, Mortimer said.

Mortimer also warned that caution should be taken when putting confidential, suppressed or private information into AI tools.

“There may be serious consequences for entering information into generative AI tools, even if sharing that information was not intended,” she said.

Mortimer said the federal court “embraces” the use of technology in proceedings, and generative AI has the potential to increase efficiency in the conduct of litigation, but generative AI “must be used appropriately and with due care”.

“Otherwise, generative AI poses risks to the proper administration of justice and public confidence in the legal system.”

Those who use generative AI in ways that go against the new rules should expect consequences such as adverse costs orders and issues with compliance with legal and professional obligations, the chief justice said.

There have been at least 73 identified cases in Australia where courts have discovered the use of generative AI had resulted in false citations, made up quotes or other errors.

A Victorian lawyer last year became the first in the country to face sanctions over false citations, being stripped of his ability to practise as a principal lawyer. Regulatory bodies in Western Australia and New South Wales have launched investigations into similar cases.

The courts are also aware that false citations, if not caught, can lead to further false citations. In one full court judgment last year, a case relied on by the appellant was found not to exist, with the judgment noting: “We apprehend that the reference may be a product of hallucination by a large language model.”

The chief justice of the high court, Stephen Gageler, said in November last year that judges in Australia were acting as “human filters” for legal arguments created using AI, and the use of AI-generated content had reached an “unsustainable phase”.
