In the boardroom, humans and machines must work together. CSIRO Data61 principal scientist Stefan Hajkowicz discusses the value of keeping humans in the loop as AI joins the decision-making process.
At 1.50pm on 23 March 2019, en route from Tromsø to Stavanger in Norway with 1373 people aboard, the newly built Viking Sky cruise ship’s onboard sensors detected that levels of engine lubricating oil had fallen below threshold. The ship’s autonomous systems shut down the engines immediately, without giving the captain or crew a chance to intervene.
The cruise ship was passing through shallow water between rocky reefs in a storm and, without power, was at risk of grounding. A mayday was issued, followed by an emergency helicopter evacuation of around 460 people. By the next morning, three of the ship’s four engines had been restarted. Escorted by tugboats, the Viking Sky made its way to safe harbour in Molde, where the remaining passengers and crew disembarked.
For artificial intelligence (AI) and autonomous systems, the Viking Sky incident illustrates human-in-the-loop (HITL) design issues. In most cases, the vessel’s autonomous systems would be doing the right thing by shutting down the engines. Insufficient lubricating oil is a serious problem; frictional heating could cause a dangerous fire and costly mechanical damage.
However, a shutdown isn’t appropriate if the ship is very close to rocky shallows in heavy seas and strong winds. Under those conditions, the risk of engine failure may be lower than the risk of striking submerged rocks. Given the opportunity, the ship’s captain could have weighed the risks and chosen the best course of action: probably cruising a while longer to a safer location, checking the engines there and shutting them down only if it was necessary and safe to do so. But the autonomous system acted on its own.
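To make the design issue concrete, here is a minimal sketch of an automated response that escalates to a human when the fail-safe action is itself risky. Every name and threshold is invented for illustration; this is in no way how the Viking Sky’s actual systems work.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Operating context the automated system cannot fully weigh on its own."""
    depth_m: float        # water depth beneath the vessel, in metres
    wind_kn: float        # wind speed, in knots
    oil_level_pct: float  # lubricating-oil level, as a percentage of the alarm level

# Hypothetical thresholds, for illustration only.
OIL_ALARM_PCT = 100.0  # below this, the machinery is considered at risk
SAFE_DEPTH_M = 50.0    # above this, drifting without power is survivable
SAFE_WIND_KN = 25.0    # below this, conditions are considered benign

def recommend_action(ctx: Context) -> str:
    """Recommend rather than act: escalate when the fail-safe is itself risky."""
    if ctx.oil_level_pct >= OIL_ALARM_PCT:
        return "continue"  # no fault detected
    if ctx.depth_m > SAFE_DEPTH_M and ctx.wind_kn < SAFE_WIND_KN:
        return "shutdown"  # benign conditions: failing safe automatically is fine
    # A shutdown here is itself hazardous, so hand the call to the captain.
    return "request_human_decision"

if __name__ == "__main__":
    # Roughly the Viking Sky situation: low oil, shallow water, storm-force winds.
    print(recommend_action(Context(depth_m=20.0, wind_kn=40.0, oil_level_pct=60.0)))
    # -> request_human_decision
```

The design point is the third branch: instead of forcing a single fail-safe behaviour, the system recognises contexts in which its own safe action is dangerous and defers to human judgement.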
Similar HITL issues have arisen in aviation accidents where pilots have wrestled with autonomous systems to regain control of the aircraft. Bear in mind that the engineers designing autonomous systems face a complex challenge. For many tasks, taking the human out of the loop substantially improves safety, because the human is often the least reliable component of the system.
For example, in the early part of the 20th century, 80 per cent of aviation accidents were caused by machinery failures and 20 per cent by pilot error. According to data from Boeing, today it’s the other way around: 80 per cent of accidents are due to human error. The same pattern holds for motor vehicle accidents. A recent statistical analysis by insurance company Budget Direct found that in Australia in 2016, the top four causes of fatal car accidents were speeding, alcohol, driver fatigue and inattention/distraction, together accounting for 78 per cent of all fatal accidents. AI doesn’t usually speed, sleep or become distracted, and it definitely doesn’t drink alcohol. Automated systems already make travel safer in the air, on land and on the water, and they will continue to do so.
While the transport and safety sector is keenly focused on HITL issues, they’re relevant to practically all forms of AI application and development. AI is playing a bigger role in a wide range of organisational decisions. There are even moves to bring AI into the boardroom to advise and influence decision-makers.
A 2017 article in the MIT Sloan Management Review by Barry Libert, CEO of US machine-learning company OpenMatters, and co-authors argues that “emerging intelligent systems will help boards and CEOs know more precisely what strategy and investments will provide exponential growth and value in an increasingly competitive marketplace”. AI won’t just be applied to operational decisions; it is likely to have widespread application in informing, and helping to make, complex strategic decisions such as choices about capital investment or divestment.
The authors of the MIT Sloan article argue that comprehensively understanding the competitive landscape is a task too complex for many human directors on company boards. They cite a 2015 McKinsey study that found only 16 per cent of board directors “fully understood how the dynamics of their industries were changing and how technological advancement would alter the trajectories of their company and industry”. Business intelligence and strategy applications of AI will be a key source of competitive advantage.
This means HITL issues will matter well beyond transport. Organisations will need to identify the right level of involvement for AI within decision-making processes. That will require developing processes, information flows, accountabilities, delegations and cultures that harmonise human decision-makers and AI systems.
In the past few years, management scientists working in the field of digital strategy have published studies on how to achieve effective human-AI harmonisation. In their book Machine, Platform, Crowd: Harnessing Our Digital Future, Andrew McAfee and Erik Brynjolfsson from MIT advise giving AI the 4D jobs: those that are dull, dirty, dangerous and dear (expensive).
Meanwhile, researchers from the Swiss university ETH Zürich, publishing in the California Management Review in July 2019, examined the role of AI in organisational decision-making structures. They identify five key factors for AI-human coordination in decisions: “specificity of the decision search space, interpretability of the decision-making process and outcome, size of the alternative set, decision-making speed, and replicability”. More research on these questions is likely to follow.
As organisations go further down the path of AI-driven decision-making, we are likely to develop a much better understanding of how to achieve effective harmonisation. This emerging field of research, sometimes called human-computer interaction, is likely to be the next wave of AI technological development.
Harmonising Humans and AI
Principles to guide directors and senior management in decision-making.
- Assign the 4D tasks (dull, dirty, dangerous and dear, meaning expensive) to AI as much as possible, and reserve for humans the tasks they do best: judgement, creativity, problem-structuring, communication, reasoning, logic and emotional intelligence.
- Ensure there’s human validation of the AI decision-maker and use AI to review human decision-making.
- Make sure there’s a well-understood, accessible and reliable “off” switch (procedure) for the AI, which reverts to human decision-making when something appears to be going wrong (a minimal sketch of this pattern follows this list). Also ensure that humans have the skills to act and decide when the AI system is turned off.
- Develop skills and knowledge about the AI system so staff know how it’s working and how to interpret its recommendations. This will also help them recognise when a problem necessitates repairs or improvements.
- Create data-driven cultures in which staff and directors have the willingness and ability to use data to drive their decisions, along with their own intuitive approaches. Analytics and intuition play essential roles in good decision-making; the key is harmonisation.
- Build redundancy into mission-critical AI systems that are making or guiding important decisions, so one system can be checked against the other (the sketch below also illustrates this cross-check).
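As a concrete illustration of the “off” switch and redundancy principles above, here is a minimal sketch in Python. The class, its names and its trivial decision rules are all invented for illustration; a production system would add logging, authentication and audit trails around each of these steps.

```python
from typing import Callable, Dict

Decision = str  # e.g. "approve", "reject"

class SupervisedDecider:
    """Hypothetical wrapper: routes each decision to an AI model unless the
    off switch has been thrown or two redundant models disagree, in which
    case the decision reverts to a human."""

    def __init__(self,
                 primary: Callable[[Dict], Decision],
                 shadow: Callable[[Dict], Decision],
                 human: Callable[[Dict], Decision]) -> None:
        self.primary = primary
        self.shadow = shadow   # redundant system, per the final principle above
        self.human = human
        self.ai_enabled = True

    def switch_off(self) -> None:
        """The 'off' switch: revert all decisions to the human until re-enabled."""
        self.ai_enabled = False

    def decide(self, case: Dict) -> Decision:
        if not self.ai_enabled:
            return self.human(case)
        first, second = self.primary(case), self.shadow(case)
        if first != second:    # redundant systems disagree: escalate
            return self.human(case)
        return first

if __name__ == "__main__":
    decider = SupervisedDecider(
        primary=lambda case: "approve" if case["score"] > 0.7 else "reject",
        shadow=lambda case: "approve" if case["score"] > 0.75 else "reject",
        human=lambda case: "escalated_to_human",
    )
    print(decider.decide({"score": 0.9}))   # models agree -> approve
    print(decider.decide({"score": 0.72}))  # models disagree -> human
    decider.switch_off()
    print(decider.decide({"score": 0.9}))   # off switch thrown -> human
```

The design choice worth noting is that the off switch and the cross-check both resolve to the same place: a human decision-maker who retains the skills and authority to act.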