On 4 October 2024, the AICD lodged a submission to the Department of Industry, Science and Resources' consultation on the introduction of mandatory guardrails for AI in high-risk settings (Guardrails).
The AICD recognises the significant opportunities of AI and the need to incentivise its development and use to remain competitive in the global market and boost national productivity. However, we agree that AI's far-reaching impact and the presence of unique risks require careful management.
In summary, the AICD makes the following key points:
- The scale and breadth of AI adoption, together with explainability, bias and hallucination issues within AI systems, can lead to harmful outcomes. We agree that, where there is a gap in existing laws, AI use that risks causing such harms to end-users should be subject to new regulation.
- Directors are responsible for the oversight of the organisation’s strategy and risk management processes in line with their directors' duties. This includes managing AI risks and opportunities. To facilitate this, directors have an interest in the development of AI regulation that is clear, proportionate and consistent with other intersecting privacy, data and cyber security obligations.
- The Guardrails need to further consider the different roles and responsibilities of developers and deployers, noting that the majority of Australian organisations are likely to be deployers and/or developers of AI applications (as distinct from developers of AI models).
- Broadly, we agree with a principles-based approach to the definition of ‘high-risk’ AI, rather than an EU AI Act-style ‘list-based’ approach. However, we are concerned that the proposed principles are too broad and may inadvertently capture low-risk AI uses. Guidance setting out examples of both high-risk and low-risk AI uses may provide greater certainty.
- We broadly agree with the content of the Guardrails, noting that they appear consistent with other relevant Australian and international principles/frameworks such as the Australian AI Ethics Principles.
- Of the three regulatory implementation options, we prefer Option 2 (a framework approach). Given the early stage of AI governance and the issues raised with the EU AI Act's strict regulatory approach, we do not support a standalone, economy-wide AI Act (Option 3), which could have unintended negative consequences. Coordination and alignment with concurrent reforms to privacy, data governance and cyber security should be prioritised.
- Regulation alone is not sufficient to achieve the policy objective of maximising AI benefits while minimising harms. Regulation must be accompanied by actions aimed at uplifting AI capability and governance skills and encouraging innovation. The AICD is committed to lifting awareness, education and competency on AI governance at the director and board level.