The federal government has ramped up its efforts on the safe and responsible use of AI, introducing a number of regulatory proposals.
In brief:
- The government has been increasingly active in the regulation and governance of artificial intelligence (AI) over the last six to 12 months. In line with its January 2024 commitments, in September 2024 the government released proposed mandatory AI guardrails for high-risk AI use for consultation and issued a finalised Voluntary AI Safety Standard. Treasury is also consulting on whether AI necessitates amendments to the Australian Consumer Law.
- Other recent developments include the release of guidance on privacy and the use of commercially available AI products by the Office of the Australian Information Commissioner (OAIC) and the release of the Senate Select Committee on Adopting AI’s Interim Report.
- The AICD is playing a role in supporting and promoting best-practice governance and uplifting AI capability (i.e. Pillars 2 and 3) at the director and board level. Our program includes a Director’s Guide to AI Governance resource suite, a webinar program and an AI fluency for directors short course.
AI regulation in Australia – where are we up to?
While artificial intelligence has been developed and used by Australian organisations for many years, the rise of generative AI has prompted calls for greater regulatory intervention.
AI technologies have the potential to offer material productivity and economic gains through increased automation and improved decision-making and data analysis. However, alongside these benefits lie potential risks, including breaches of privacy laws, algorithmic discrimination and job displacement.
Australia currently has no AI-specific legislation. Instead, AI is regulated by existing privacy, data use, consumer protection, cyber security, anti-discrimination, duty of care and work health and safety legislation, as well as voluntary AI codes and high-level ethical principles.
In June 2023, the government launched its Safe and Responsible AI in Australia consultation to canvass views on the steps Australia can take to mitigate AI risks. Broadly, responses expressed concern that not enough is being done to mitigate the risks arising from the use of AI, together with a view that the current regulatory system is not fit for purpose to respond to the distinct risks AI poses.
In January 2024, the government issued its interim response, which committed to:
- the introduction of mandatory guardrails for AI deployment in high-risk settings;
- development of a voluntary risk-based AI Safety Standard;
- consideration of labelling and watermarking of AI in high-risk settings;
- clarifying and strengthening existing laws to address AI harms and risks; and
- supporting international engagement on AI governance and ensuring Australian responses are interoperable with international approaches.
In September 2024, the government progressed two of these commitments, releasing proposed mandatory AI guardrails for high-risk AI use for consultation and issuing a finalised Voluntary AI Safety Standard.
In October 2024, Treasury opened a consultation on AI and the Australian Consumer Law, while the Senate Select Committee on Adopting AI, convened in March 2024, released its interim report focusing on the impacts of AI on democracy, with a final report due by the end of November 2024. The OAIC also issued its guidance on privacy and the use of commercially available AI products.
These developments are part of a broader five-pillar ‘Safe and Responsible AI work plan’, which includes: (1) Delivering regulatory clarity and certainty; (2) Supporting and promoting best practice; (3) Supporting AI capability; (4) Government as an exemplar; and (5) International engagement.
Proposed mandatory AI guardrails
On 5 September 2024, the government released its proposal on mandatory AI guardrails for consultation. The consultation has since closed, with the AICD making a submission.
The guardrails focus on testing, transparency and accountability for both developers and deployers of "high-risk" AI and general-purpose AI systems (such as ChatGPT). Responsibilities under the guardrails are proposed to be allocated to the party (as between developer and deployer) best equipped to address risks, having regard to access to information (such as AI design and training data) and the ability to make changes to the AI system.
Rather than taking the EU’s approach of listing specific AI uses or whole sectors as "high risk", it is proposed that organisations assess whether their AI use is "high risk" by reference to the risk, severity and extent of adverse impacts to:
- human rights;
- physical or mental health or safety;
- legal effects;
- cultural groups and their collective rights; and
- the broader Australian economy, society, environment and rule of law.
The consultation paper also sought feedback on how the guardrails should apply to general-purpose AI systems.
The AICD’s submission focused on recommendations for fine-tuning aspects of the proposal, such as the definitions of high-risk AI use, deployers and developers, and the need for AI regulation to be consistent with concurrent privacy and cyber security reforms.
Voluntary AI Safety Standard
To promote effective, safe and responsible AI governance practices in the interim (until the mandatory guardrails are finalised), the government has also introduced the Voluntary AI Safety Standard.
The first nine of the Voluntary AI Safety Standard’s 10 principles are identical to the proposed mandatory guardrails. Principle 10 differs: the voluntary standard focuses on stakeholder engagement, while the guardrails require conformity assessments.
The board’s role and AICD’s work on AI governance
Directors are responsible for the oversight of organisational strategy and risk management processes, in line with their directors' duties. This includes managing AI risks and opportunities. To facilitate this, directors have an interest in the development of AI regulation that is clear, proportionate and consistent with other intersecting privacy, data and cyber security obligations.
The AICD sees itself as playing a key role in supporting and promoting best-practice governance and uplifting AI capability (i.e. Pillars 2 and 3) at the director and board level.
In June 2024, the AICD, in collaboration with the Human Technology Institute (HTI), released the Director’s Guide to AI Governance, which has been downloaded over 12,000 times.
On AI education, the AICD’s AI Governance for directors webinar series features eight recorded webinars on topics ranging from AI regulation to generative AI and AI’s impact on the workforce.
The AICD has also partnered with the University of Sydney and Deloitte to present an AI Fluency for Directors Sprint, an intensive course designed to strengthen directors’ AI literacy, with the first (booked out) cohort commencing on 31 October 2024.