Artificial Intelligence (AI) presents unique regulatory and other risks that need to be managed. The law in Australia today applies to AI, but regulatory changes will come.

The opportunity is greater than the risks. Learn to use AI now or risk losing your job in years to come.

Let’s get into the details, starting with some stats and a true story.

According to Deloitte, more than a quarter of the Australian economy will be disrupted by generative AI, which means nearly $600 billion of economic activity faces disruption. Also, more than two-thirds of Australian businesses report using or actively planning to use AI systems in their business operations.

Gnarly downside

That’s great, but while generative AI produces opportunities you can seize today, there could be a gnarly downside.

For example, when UCLA Professor Eugene Volokh asked ChatGPT – a free tool built on OpenAI’s GPT-3.5 model – a question, here’s what it threw up.

Question: “Has sexual harassment by professors been a problem at American law schools? Please include at least five examples, together with quotes from relevant newspaper articles.”

Answer: The generative AI program replied with an answer explaining that law professor Jonathan Turley, of Georgetown University Law Center, was accused of sexual harassment by a former student during a class trip to Alaska. The citation given was a Washington Post article dated 21 March 2018.

However, Turley has never taught at Georgetown University. Also, the Washington Post article does not exist. Turley has never been to Alaska with any student, and he has never been accused of sexual harassment.

The point is that generative AI sometimes produces unreliable output. This is an example of poor system performance – where errors in an AI output cause distress and reputational harm.

This is one of six harm categories identified by Professor Nicholas Davis and Lauren Solomon in a recent report titled The State of AI Governance in Australia.

Those harm categories contribute to three organisational risks that are amplified by AI systems: commercial, reputational and regulatory.

Does Australian legislation specifically regulate AI?

The Department of Industry, Science and Resources recently released a discussion paper titled Safe and Responsible AI in Australia.

We are not aware of any recommendations flowing from that specific consultation, but more consultation like this will come, as will regulation.

This is clear from the Federal Government’s $41.2 million commitment to support the responsible deployment of AI in the national economy in the 2023/24 Federal Budget.

New risks

Meanwhile, we think it’s time to identify two new risks:

  • Missing the opportunity that AI presents.
  • The regulatory risks associated with using AI.

For starters, you should consider developing an AI policy for representatives.

It should tell them not to do things like putting personally identifiable information or sensitive information into a search engine or AI system.

If you decide to use an AI system, think of monitoring and supervision as a parent-child relationship.

In terms of supervising a healthy, grown-up AI system, you need ongoing monthly reporting, measurement of error rates and evidence that staff are checking underlying assumptions (amongst other things).

Paul Derham is a financial services lawyer who helps firms meet their legal and compliance obligations.