Expert guidance on governance of the evolving AI technology

Saturday, 01 June 2024

Steph D'Souza & Narelle Hooper MAICD

    AI is not artificial, intelligent or new, say leading experts. Many organisations have been embedding the productivity benefits of AI for decades and employees are increasingly working on generative AI platforms, with research pointing to a surge in unauthorised or “shadow” use cases. 


    With AI, there is an urgent governance imperative for boards to either catch up with their competition or pave the way for their sector. Here, AI specialists and business leaders spell out how and why boards must keep their inevitable AI transformation human-centred.

    The governance context

    Professor Nicholas Davis, UTS Human Technology Institute

    In 2021, Microsoft principal researcher Kate Crawford pointed out that in reality, AI systems are neither artificial nor actually intelligent. They’re not artificial, because they are made up of people, machines, hardware, electricity, relationships and value judgements.

    And they’re not intelligent, because they use smart statistics to produce outputs from the data they were trained on. That’s not what we think of as true “human intelligence”.

    The next misconception is that AI is something super-new. In fact, the field dates back to the 1956 Dartmouth Conference in the US, and there is a long history of work in the field of AI more generally.

    AI requires more and different forms of data, more frequent maintenance, more rigorous oversight — and is usually far less deterministic, less transparent and less explicable than other systems. Most Australian organisations already rely on AI systems. Surveys of senior executives and directors tend to find that about two-thirds of Australian businesses are using or planning to use AI systems.

    I had the pleasure of talking to about 300 business leaders and non-executive directors over the past year about how their organisations use AI. Scratching beneath the surface, I struggled to find a single organisation that wasn’t relying on an AI system in some part of their value chain.

    Part of this is in the form of shadow IT — these are AI systems that are being used by your employees or by your teams without the clear knowledge or authorisation of senior management. A lot of AI is also sitting along your value chain. It’s being used by suppliers or it’s being acquired and simply embedded within software products and packages that you are subscribed to.

    Directors should consider where these instances would be in their organisation. Any type of chatbot, predictive text or language translation, any recommendation system that offers your clients or customers something similar to what they’ve already seen — or, obviously, anything doing text or code generation — is relying heavily on AI systems.

    The primary area where we see this emerging across Australian business today is customer service, reportedly the highest area of uptake for organisations. Close behind — potentially higher, if you go deep into the data — is marketing and sales: AI-driven customer targeting, advertising and marketing online. The third area, which is growing rapidly, is the human resources space.

    I emphasise these three areas because they are where human beings are deeply involved in the use of AI systems inside organisations. It’s in these three areas where the greatest level of attention might be required from the board in order to make sure you’re both grasping opportunities and meeting your obligations.

    It’s interesting to note that the most rapid growth of AI systems is actually happening closest to the core of organisations’ value production — including in finance and governance — at the heart of how businesses are being run and also at the heart of their business model.

    When you compare what business leaders and non-executive directors expect in terms of benefits from AI systems, business leaders are overwhelmingly focused on internal process efficiencies such as cost savings and freeing up capital. Directors, by contrast, rank productivity second — research shows their number-one expectation is better customer experience.

    If you’re sitting on the board talking and thinking about AI, the above could be an interesting insight into how your CEO or senior management team are thinking about their work.

    Workforce transformation

    Stela Solar MAICD, Director of the CSIRO Data61 National Artificial Intelligence Centre

    The scale of AI’s workplace transformation is underappreciated. The technology is impressive, but the scale of change it is catalysing is even more spectacular. What’s really driving this is the ease of access and ease of use of generative AI tools, which is why we’ve seen this exponential acceleration over the past six to nine months. We’re getting signals that generative AI adoption is anywhere between 30 and 40 per cent in the workplace. More importantly, 68 per cent [of employees] are not telling anyone about it.

    There are people in your organisations who think they can do things more effectively, or who have found tools that make them more effective. But when an organisation doesn’t know what those tools are, that creates exposure: you may not have the same level of governance across the supply chains, providers and implementations of the technologies being used.

    A rhyme I really appreciate in this landscape is, “Roses are red, violets are blue, if the product is free, the product is you”. It really hits home that there are business models behind every technology out there. There’s quite a proliferation of free generative AI tools that may be using your data in questionable ways. It’s so important for your organisations to implement a generative AI policy to fully embrace this workforce transformation.

    It’s not only generative AI that has accelerated this. In fact, our ecosystem report found that about 80 per cent of the AI skills an organisation needs are outsourced. That outsourcing is an indicator of potential skills you may want to have in-house, but you’re forced to outsource. However, it also means additional exposure unless those providers are appropriately vetted.

    A recent Salesforce survey of 11,000 employees worldwide found that 97 per cent of them wanted to increase their AI skills. So they’re not ignorant of the changes that are coming. In fact, they’re incredibly informed and curious, and are looking for their organisation to empower them. It’s so important to proactively develop a skill strategy and implement a generative AI usage policy so you can retain the talent who actually want to learn, to help your workforce transform in this rapidly changing environment and manage the exposure to unknown tools.

    Board oversight

    Wendy Stops GAICD, Non-executive director Coles Group

    AI for a lot of people is a big black box. For a lot of directors, it’s a big black box. When they hear we’re using an AI model they are, in some respects, dependent on the executives helping them understand how they’re using it and what they’re doing. Directors are looking for guidance on what we should specifically be asking. Should we be talking about where the data is coming from? How are you storing that data? How are you ensuring it’s kept private if it has customer information attached to it? How, when you’re building these models, are we making sure we’re not building some sort of bias into it, particularly if there is machine learning attached to it?
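    As an aside purely for illustration — this is not a method the panellists describe — the bias question above can be made concrete with a disparate-impact check that compares a model’s positive-outcome rates across demographic groups. The Python sketch below uses invented data and group labels, and the 0.8 “four-fifths” threshold is a conventional rule of thumb, not something the article prescribes.

    import numpy as np

    def disparate_impact_ratio(decisions, group):
        # Ratio of positive-outcome rates between the two groups.
        # The common "four-fifths rule" flags ratios below 0.8.
        rate_a = decisions[group == 0].mean()
        rate_b = decisions[group == 1].mean()
        low, high = sorted([rate_a, rate_b])
        return low / high

    # Hypothetical data: binary model decisions for 1,000 applicants
    rng = np.random.default_rng(42)
    group = rng.integers(0, 2, 1_000)
    decisions = rng.binomial(1, np.where(group == 0, 0.60, 0.45))
    print(f"Disparate impact ratio: {disparate_impact_ratio(decisions, group):.2f}")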

    The more sophisticated a company is in its AI usage, the more the board would expect to see some sort of formal structure and governance in place — a data council, AI council, or privacy and data council. Some form of governance that not only looks at each opportunity as it comes up and discusses whether it’s something they really want to do, but also makes sure the responsible AI element is addressed.

    As a board, if a particular opportunity comes up and we see this as AI, then we’re going to want to know things like, how did you decide this made sense to do? What guardrails do you have around this? Are the decisions you’re making around AI or this particular opportunity aligned with the organisation’s values and principles?

    You often have to ask, can you explain to me how this model actually works? That’s not to say, give us the technical rundown on the whole thing. However, if you can’t explain it in simple terms, then that is probably a red flag. It tells us that something is probably churning away in the background in this model and they don’t necessarily know how it’s reaching its conclusions — therefore, it is not a transparent model.

    This is particularly important if it’s a machine learning-type model where it’s continually learning things. How do we make sure it’s not straying from its intent? How have human factors like biases been weighted? Ultimately, the decisions the council makes need to revolve around business-led AI. It should not be technology-led.
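    Stops’ point about a model “straying from its intent” is what practitioners call model drift. As a loose illustration only — again, not a method described here — one common monitoring statistic is the population stability index (PSI), which compares the distribution of model scores at sign-off with the distribution seen in production. The Python sketch below uses invented data, and the 0.2 alert threshold is a conventional rule of thumb rather than a standard the article endorses.

    import numpy as np

    def population_stability_index(baseline, current, bins=10):
        # Bin edges come from the baseline distribution's quantiles.
        edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))[1:-1]
        base_frac = np.bincount(np.digitize(baseline, edges), minlength=bins) / len(baseline)
        curr_frac = np.bincount(np.digitize(current, edges), minlength=bins) / len(current)
        # Clip empty bins so the log term stays finite.
        base_frac = np.clip(base_frac, 1e-6, None)
        curr_frac = np.clip(curr_frac, 1e-6, None)
        return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

    # Hypothetical usage: scores captured when the model was approved
    # versus scores from the most recent quarter.
    rng = np.random.default_rng(0)
    approved = rng.normal(0.50, 0.10, 10_000)
    latest = rng.normal(0.56, 0.12, 10_000)
    psi = population_stability_index(approved, latest)
    print(f"PSI = {psi:.3f}", "- investigate drift" if psi > 0.2 else "- stable")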

    Edited extracts from the AI Governance for Directors webinar series.

    Case Study: nib Group

    In February, ASX 100-listed health and travel insurer nib Group released an AI-driven symptom checker, the latest step in a nascent, company-wide integration of artificial intelligence. With recent NSW Health Department data showing one in 10 patients waits up to 11 hours in hospital emergency departments, the tool, accessed through the nib app, triages members on appropriate treatment options based on their symptoms, helping them better navigate the health system.

    Mark Fitzgibbon FAICD, nib Group CEO and MD, says there are three core themes for boards — director awareness of and curiosity about AI use; development of a framework for AI application and oversight; and a third pillar of ethics and appropriate consideration of its use. He says that because AI is so pervasive, nib is developing a framework to provide insights and support oversight. This will become a regular and familiar report for the board, delivered three or four times a year — “not a thesis, but a dashboard”.

    The framework covers eight areas — market and investor intelligence, product personalisation and customisation, marketing, business and financial forecasting, health risk and treatment decisions, company operations and processes, risk management, and reporting insights to improve visibility and performance.

    “The starting point was to build a team,” says Fitzgibbon. “It’s absolutely about having the skills in the business and recruiting for that. A director has to have a reasonable understanding of what AI is and isn’t — and of how it’s being used in other industries. David Gordon MAICD (nib Group chair) is a great advocate.”

    Around the matter of ethics, Fitzgibbon says, “It’s the can we/should we debate. A whole range of issues with AI need consideration. There’s a real sense of adventure about the possibilities. In five or 10 years from now, we might look back and think we missed it, but it won’t be for want of trying.”

    This article first appeared under the headline ‘Coming to Grips With AI’ in the June 2024 issue of Company Director magazine.
