Australia is lagging behind on effective use of genAI. To maximise opportunities, we need to pick up the pace.
Artificial intelligence (AI) is certainly not new tech, having evolved far beyond the realms of scary sci-fi over the past 50 years. But an unmissable technological boom occurred two years ago, when San Francisco-based research company OpenAI democratised generative AI (genAI) by making it available to everyone.
However, Australia is acting as if it didn’t get the memo. After initial excitement, many organisations seem so worried about potential risk that they could well miss the boat. A wait-and-see attitude won’t cut it.
GenAI’s revolutionary potential offers organisations a once-in-a-century transformation, a true game changer. Most significantly, managers and leaders can work with genAI directly, without needing technical skills, says Anthea Roberts, a professor at the Australian National University School of Regulation and Global Governance.
Since December 2022, awareness has soared and experimentation has been rife, with an ever-growing number of use cases cited and proofs of concept realised. In parallel, fear of the risks posed by uncontrolled AI, or by AI in the hands of bad, naïve or untrained actors, has been palpable. Meanwhile, regulators and legislators worldwide have been working to keep pace with exponential developments.
Innovations abound. “We’re still at the starting line, but we’re seeing amazing examples globally of what genAI can deliver,” says Katherine Boiciuc GAICD, Oceania chief technology and innovation officer for EY.
AI-assisted medical imaging is producing higher rates of breast cancer detection. Healthcare and emergency services are reimagining experiences, while European car manufacturers have combined virtual and augmented reality with large language models (LLMs), enabling cars to observe the road rules in every country.
In Australia, the transitioning energy sector is leading with the autonomous management of energy distribution. Adoption is high for the consumer retail experience, in particular loyalty schemes and rewards, while financial services are “incredibly mature in terms of hyper-personalisation and fraud prevention”, says Boiciuc.
Where to begin?
Beneath the top line, statistics show most organisations in Australia are still on the starting blocks, figuring out how or where to dive in with genAI strategies. Recent EY reports show Australia lagging global peers on adoption and sentiment, with 65 per cent of CEOs acknowledging the efficiency benefits of AI (CEO Outlook survey, October 2023) and 37 per cent investing in it, compared with 47 per cent globally (CEO Outlook survey, April 2024).
Businesses broadly remain unfettered by Australia’s voluntary regulatory environment, although mandatory rules are looming. In tandem, there’s a massive skills gap: a 500 per cent shortfall against the additional 200,000 AI-proficient workers the nation needs by 2030, according to the Tech Council of Australia.
Where do Australian organisations rate on AI maturity in Gartner’s much-vaunted hype cycle, which charts the journey of emerging technologies from breakthrough to productivity? That depends on who you ask. They’re either at the “peak of inflated expectations” or in the “trough of disillusionment”, it seems.
From hype to reality
Australia is in the early adoption phase, reports Marc Washbourne, CEO and co-founder of ASX-listed SaaS (software as a service) provider ReadyTech. The company is using AI as a differentiator for itself and its clients, rolling out services for payroll, procurement and recruitment across the higher education, aged care, government and justice sectors.
At ReadyTech, genAI supports customer-facing teams with AI agents, both in its products and across the business, supplementing rather than replacing human judgement with faster, data-driven tools. “Our recruitment product provides insights that are beyond human capability for matching candidates with roles,” says Washbourne. “Enormous curiosity is coming from organisations; there’s a lot of appetite to get involved.”
But there’s also a challenge in knowing where to start. “Executives are saying, ‘we have to do artificial intelligence’, without being clear on what the overarching strategy is,” he says. “Companies may recognise the potential, but lack the ability to implement it.”
Plus, there is reluctance due to AI safety concerns and the need for maturity in managing risks responsibly. “Several government clients have told us they do not want a Robodebt scenario,” notes Washbourne.
Ready for action
The most crucial signal an organisation is ready to dive in is a clear understanding that AI isn’t a short-term trend; it’s transformational. “When the leadership from executive to the board is aligned on a strategic vision for AI, that’s key to knowing it’s serious,” says Washbourne. Also critical is an understanding of who owns AI in the organisation, he says. Is it IT or operations, or a mixture of both?
Another essential marker is the willingness to invest adequately and to bring on capability by prioritising education. Supporting its 500-strong workforce to be comfortable with AI has been central to ReadyTech’s own adoption strategy.
Likewise, Boiciuc sees the scope of corporate vision for AI as the predictor of future success. In Australia, 91 per cent of use cases are productivity-based (CEO Outlook October 2023), she notes. “They are focused on cost-out, but missing the art of the possible, the transformational potential and new growth opportunities.”
EY has an all-in approach. The firm used itself as “client zero” when it started to deploy AI almost a decade ago. Today, partners and employees at the firm work with one of the world’s largest private AI platforms (internally built) and with enterprise-grade Microsoft 365 Copilot off the shelf.
The firm runs a genAI maturity model that scans seven dimensions of an organisation’s operations to assess the status quo and spot scalable opportunities. Optimally, it’s done annually. To date, some 1000 client organisations have been assessed, according to the EY Client Zero story presented at the Gartner IT Symposium in September 2024.
Organisations need to understand their baseline, including strategies for infrastructure, data and learning.
“Overarchingly, our biggest lesson is that AI has to be people-led,” says Boiciuc, adding that the greatest eye-opener for clients is the amount of training required to realise that. “For non-technical people using AI, the level of upskill takes ongoing investment, not an hour of training once a year.”
She notes this doesn’t fit with expectations in an era when people regularly hit a button to update their phone.
EY has established a skills university that runs all the way to a master’s degree in AI. About 100,000 of its 400,000 people globally now have AI qualifications.
Prompt mastery is a prevailing hot topic. The firm runs weekly competitive “promptathons” where people showcase their favourite prompts — the best are pressure tested and filed to a library for wider use. There’s a skills store with a range of downloadable AI skills, including data analysis and language translation.
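To picture how such a library might hang together, consider the minimal sketch below. It is illustrative only: the entry structure, field names and “pressure test” check are assumptions made for the example, not EY’s actual tooling.

```python
# Illustrative sketch of a shared prompt library entry. A vetted prompt
# carries its own test cases, which must render cleanly before the entry
# is filed for wider use. All names and fields here are assumptions.
from dataclasses import dataclass, field

@dataclass
class PromptEntry:
    name: str
    template: str          # prompt text with {placeholders}
    owner: str
    tags: list = field(default_factory=list)
    test_inputs: list = field(default_factory=list)  # dicts of placeholder values

def pressure_test(entry: PromptEntry) -> bool:
    """Return True if every test case renders without missing placeholders."""
    try:
        for case in entry.test_inputs:
            entry.template.format(**case)
        return True
    except (KeyError, IndexError):
        return False

entry = PromptEntry(
    name="summarise-contract",
    template="Summarise the key obligations in: {contract_text}",
    owner="legal-team",
    tags=["summarisation", "legal"],
    test_inputs=[{"contract_text": "Sample clause text..."}],
)
print(pressure_test(entry))  # True: safe to file to the library
```

The point of the pattern is that a vetted prompt travels with its own test cases, so reuse doesn’t depend on folklore.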
Boiciuc spends several hours weekly sharing the latest tactics and “cool new prompts” with EY Oceania CEO David Larocca — points he often demonstrates for all in a town hall meeting.
Wrangling risks
The intense training and confident leadership also work on AI’s greatest challenge: the daunting risks. Keeping responsible, constantly upskilled humans in control of a powerful disrupter that’s growing a technological “mind” of its own promotes confidence. “Building trust is the biggest challenge,” admits Washbourne.
And there’s no hiding from the risks. GenAI presents a confluence of them: privacy and data protection, well-reported biases and inaccurate “hallucinations”, copyright and ethical missteps, plus new cybersecurity threat vectors, including attacks that experts say could poison a model.
Maturity on AI safety is pivotal for Australia’s future adoption. In the absence of mandatory rules, EY looks to regulatory settings in the UK, along with the recently released 10 voluntary guardrails from Minister for Industry and Science Ed Husic that align with international standards and serve as a precursor to mandatory guardrails.
Establishing minimum standards for each voluntary guardrail is a governance 101 essential for directors, says Boiciuc.
Countering risk with AI
While risk abounds in AI projects, the technology itself can also mitigate risk. Risk is the raison d’être for Pioneera, the “conversational intelligence” company founded by CEO Danielle Owen Whitford in 2018. The business sees both the upside and the downside of risk.
Set up to monitor employees’ email communications for signs of burnout, Pioneera deploys its chatbot, Indie, to provide timely interventions. The bot is a real-time antidote not only for harried employees, but also for companies and their boards looking to actively manage wellbeing and compliance obligations under the recently introduced SafeWork code on psychosocial hazards, the Work Health and Safety (Managing Psychosocial Hazards at Work) Code of Practice 2024.
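As a crude illustration of the underlying pattern (monitor, threshold, intervene), and emphatically not Pioneera’s actual method, a monitor might scan a window of messages for stress-indicator phrases and suggest a check-in once signals accumulate. The marker list and threshold below are invented for the example.

```python
# Crude illustration only: not Pioneera's actual method. Scans messages
# for stress-indicator phrases and suggests a check-in when they accumulate.
STRESS_MARKERS = ["overwhelmed", "can't keep up", "burnt out", "no time",
                  "working late again", "exhausted"]

def stress_signals(messages, threshold=3):
    """Count marker phrases across a window of messages; flag if at or above threshold."""
    hits = sum(marker in msg.lower() for msg in messages for marker in STRESS_MARKERS)
    return hits >= threshold, hits

window = [
    "I'm completely overwhelmed by this release.",
    "Working late again tonight, no time to review.",
    "Feeling exhausted, can we push the deadline?",
]
flagged, count = stress_signals(window)
if flagged:
    print(f"{count} stress signals this week: suggest a wellbeing check-in.")
```

The real product is, of course, far more sophisticated; the sketch only shows the shape of the loop.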
Pioneera’s new LLM is also moving into voice monitoring via Microsoft Teams meetings, as it tackles the wider goal of tracking and boosting productivity.
Do employees have concerns about their communications being monitored? “One hundred per cent they do,” says Owen Whitford, noting people opt in to access Indie and can opt out, but rarely do. Many look forward to the interactions.
This fits with everyday Australians’ surprising enthusiasm for social AI. The latest research from EY shows 57 per cent are happy to have a virtual assistant that acts as a friend or companion, with similar numbers comfortable about having an AI life coach, financial adviser, therapist or counsellor.
Pioneera’s feedback goes to the managers of groups of five or more, allowing leaders to take action at a team level when signs of stress show. Group-level productivity scores also go to executives and directors.
A former leader of large-scale transformations at IAG, Owen Whitford says Pioneera was fortunate on the risk and governance front to have signed big four bank ANZ as its first customer, a relationship that endures. “They were very specific in terms of cyber, data and how information was housed and used.”
Seeing multiple perspectives
Understanding the impact of risks was part of the motivation for ANU School of Regulation and Global Governance professors Anthea Roberts and Miranda Forsyth to create Dragonfly Thinking. Named for the insect’s almost 360-degree vision, their generative AI collaboration tool is being used to tackle complex problems through its “risk, reward and resilience” (RRR) framework, whose interplay of perspectives reveals different scenarios and outcomes.
Dragonfly Thinking has been workshopped with some of the world’s biggest companies at Harvard Law School’s Center on the Legal Profession, zooming in on confounding challenges like ESG, geopolitical risk, the impact of Donald Trump’s second US presidential term and, coincidentally, misuse of AI. In this model, there’s always a human in the loop who acts like a coach or editor, says Roberts.
“It’s designed to assist with thinking on complex problems without telling people what to do. It never tells you the decision, because we believe that’s a human answer.”
First users have been strategy consultants, while contractual arrangements to run a pilot program with federal government policymakers are being finalised. It’s also in line for the Defence Trailblazer program, which aims to strengthen Australian defence capabilities with cutting-edge technologies and solutions.
Despite persistent arguments that AI will replace humans, Roberts counters that “the real challenge is for humans to evolve with AI, developing skills that balance technical acumen with human intuition”.
Most immediately, she’s contemplating the risks or missed opportunities for organisations of not adopting AI.
Go-to governance measures
AI governance involves much more than policies and procedures. AI advisory councils are the norm for corporate frontrunners to brainstorm, shape and challenge developments. ReadyTech and EY have them, while Pioneera, a business with just seven employees, has two.
Beyond technology experts, advisers must include people with the wisdom and perspective to press the point that just because you can doesn’t mean you should. Set levels of ongoing training are essential to the governance mix.
EY runs a “factory model” to keep checks and balances for responsible AI in place.
“You want it broadly deployed, but in the hands of newly trained people, you add a second person who is deeply skilled to the governance process,” explains Boiciuc.
Safeguards also need to be in place to catch the “drift” (changes in the statistical properties of LLM input data) because, in an LLM, effective prompts change over time.
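What a drift safeguard might look like in practice: the sketch below compares one simple statistical property of recent LLM inputs (prompt length) against a stored baseline and flags divergence. It is a minimal illustration under assumed names and thresholds, using a standard two-sample Kolmogorov-Smirnov test rather than any particular vendor’s tooling.

```python
# Minimal drift-detection sketch; illustrative only. Monitors one simple
# statistical property of LLM inputs (prompt length in tokens) against a
# stored baseline window using a two-sample Kolmogorov-Smirnov test.
from scipy.stats import ks_2samp

def input_drifted(baseline_lengths, recent_lengths, alpha=0.05):
    """Return (drifted, statistic): has the input distribution shifted?"""
    statistic, p_value = ks_2samp(baseline_lengths, recent_lengths)
    return p_value < alpha, statistic

# Example: token counts from a reference month vs the current week.
baseline = [42, 55, 61, 48, 50, 39, 58, 47, 52, 44]
recent = [120, 95, 140, 110, 130, 125, 98, 115, 105, 135]
drifted, stat = input_drifted(baseline, recent)
if drifted:
    print(f"Drift detected (KS statistic {stat:.2f}); review prompts.")
```

Real deployments would track richer properties, such as topic mix or embedding distributions, but the baseline-versus-current comparison is the core of any drift check.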
Beyond compliance with safe and ethical AI guardrails or rules, more cascading considerations await boards, such as transparency of reporting against new AI standards and the adequacy of disclosures for customer consent. One thing is certain, our experts conclude: directors need to lean in and find opportunities to get hands-on.
“When you use AI, you start to see both its flaws and its brilliance,” says Boiciuc. “Done well, it actually feels like magic.”
Stimulated, not stifled
The AI regulation imperative is driven by a trust deficit which, if remedied, could boost Australia’s AI innovation potential.
The Australian government’s proposals paper, Introducing mandatory guardrails for AI in high-risk settings, sets out measures that aim to address regulatory gaps and enhance public trust in AI.
The AICD supported a principles-based regulatory approach, cautioning against overly prescriptive measures such as a standalone AI Act. It called for alignment with reforms in privacy, cybersecurity and data governance, while emphasising the need to build AI governance capabilities at board level.
In November 2024, the final report of the Select Committee on Adopting Artificial Intelligence endorsed the government’s approach, highlighting AI’s potential to drive economic growth and societal benefits in areas like healthcare. It also identified risks, such as bias and discrimination.
In its submission, the Productivity Commission stressed regulation must balance the safety imperative with the need to foster innovation. While low public trust in AI is a barrier to adoption, efforts should focus on enhancing trustworthiness rather than trust itself, as rational scepticism can drive more responsible AI adoption when businesses and consumers are equipped with the knowledge to evaluate AI technologies effectively.
The Commission sees better data sharing and integration, especially within the public sector, as critical to fostering high-quality AI applications. Fragmented public-sector data and limited cross-jurisdictional coordination remain challenges.
AI Governance Guide for Directors
The suite of AICD resources, developed with the Human Technology Institute (HTI) at the University of Technology Sydney, helps boards harness the power of AI responsibly.
As stewards of organisational strategy and risk management, directors should seek to seize the opportunities and mitigate the risks of AI, with its ethical use in the interests of customers being paramount.
Using AI for the sake of it, or because competitors are using it, should be discouraged. An internal or external audit can review how data governance, analytics and AI are being used, and whether the right governance structures are in place.
Trying to fit AI within existing IT governance frameworks is problematic, with HTI’s research finding that existing IT risk management frameworks and systems are largely unsuited for AI governance.
Addressing the unique characteristics of AI systems requires a robust governance framework, which incorporates eight elements of effective, safe and responsible AI use, as detailed in the Director’s Guide to AI Governance.
The guide provides an amber/red-light approach to risk considerations. Amber implies there may be some risk: directors should probe further, and the guide lists manageable responses to issues that present as amber. A red light indicates potentially high risks: directors should be on guard, probe further and consider how to address them.
The guide details eight elements a board should be considering around AI governance:
- Roles and responsibilities
- People, skills and culture
- Governance structures
- Principles, policies and strategy
- Practices, processes and controls
- Stakeholder engagement and impact assessment
- Supporting infrastructure
- Monitoring, reporting and evaluation.
Download the Director’s Guide to AI Governance from the AICD website.
This article first appeared under the headline ‘Imagine a world where businesses use AI to its full potential’ in the February 2025 issue of Company Director magazine.
Practice resources — supporting good governance
Examples of the AICD’s contemporary governance practice resources for members:
Climate Governance
- Our new Climate Governance Initiative guide to mandatory climate reporting details this substantive change to corporate reporting, offering a practical framework and advice for directors.
Best interests duty
- The AICD’s landmark legal opinion (Bret Walker AO SC and Gerald Ng MAICD) and practice statement guides directors on the duty to act in good faith in the best interests of their organisation.
Cybersecurity
- Developed by AICD and the Cyber Security Cooperative Research Centre, the Cyber Security Governance Principles guide good practice, outlining key questions and red flags.