AI needs accuracy, humanity and integrity to build trust

AI is reshaping the competitive landscape across all sectors of the economy, helping organisations make better predictions and more informed decisions, while lowering operating costs, facilitating productivity gains and driving new business models.

AI is helping us address some of humanity’s most complex problems. Yet our recent research with UQ shows that trust in AI is currently low in Australia: almost half of us are unwilling to share our information with an AI system, and 40 percent don’t trust its decisions or recommendations.

Trust underpins the acceptance and use of AI. To build public confidence, AI should be developed and deployed in an ethical and trustworthy manner, with its impacts on people considered across its whole life cycle. Without public trust, its full potential will not be realised.

Trust, however, is a two-way process, and there are inherent risks in the development and use of AI. AI can undermine human rights such as privacy and autonomy, for example by facilitating mass surveillance programs, including facial recognition. It could also precipitate technological unemployment.

But it can also have very positive outcomes. In the fight against COVID-19, AI is assisting by simulating and predicting spread patterns to inform government responses, enhancing diagnosis and helping detect mutations in the virus. So how does an organisation go about achieving trustworthy AI? How can we navigate the risks and impacts to people from AI systems?

We believe trustworthy AI is underpinned by three key components.

Ability

AI systems are fit-for-purpose and perform reliably to produce accurate output as intended.

Humanity

AI systems are designed to achieve positive outcomes for end-users and other stakeholders, and at a minimum, do not cause harm or detract from human well-being.

Integrity

AI systems adhere to commonly accepted ethical principles and values, such as fairness (a simple check is sketched below) and transparency about what data is collected and how it is used; uphold human rights, such as privacy; and comply with applicable laws and regulations.
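
To make fairness concrete, one widely used check is demographic parity: comparing the rate of favourable outcomes a system produces across groups. A minimal sketch in Python, using pandas and a hypothetical set of loan-approval decisions (the 'group' and 'approved' columns are illustrative assumptions, not a real system's output):

    import pandas as pd

    # Hypothetical decisions produced by an AI system, for illustration only.
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B"],
        "approved": [1, 1, 0, 1, 0, 0],
    })

    # Demographic parity: the favourable-outcome rate for each group.
    rates = decisions.groupby("group")["approved"].mean()
    print(rates)

    # A common rule of thumb flags disparity when the lowest rate falls
    # below 80 percent of the highest (the "four-fifths rule").
    if rates.min() / rates.max() < 0.8:
        print("Potential disparity: review the system's decisions")

A check like this is a starting point, not proof of fairness; which metric is appropriate depends on the context and the harm being guarded against.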

These three components work together in a virtuous circle of lived experience to gain and reinforce a person’s trust. When people believe an AI system adheres to them, they are more likely to trust the system. The priority is to ensure that any AI system being designed, procured or implemented is aligned with the organisation’s strategy, core purpose and values.

Humanity and integrity are not concepts we normally associate with something as ‘technical’ as AI, but AI systems are only a reflection of the way they are developed and controlled. And it is humans who build AI, not robots.

Data underpins all AI systems. If an AI system is built on incomplete, biased or otherwise flawed data, the mistakes will likely be replicated at scale in its outputs.

Such trust failures can be prevented by following best practice in assessing the quality and traceability of the data used to build AI.
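
What such an assessment can look like in practice: a minimal sketch in Python with pandas that surfaces missing values, duplicate records and class imbalance before any model is built (the file path and 'outcome' column are illustrative assumptions):

    import pandas as pd

    # Hypothetical training data; the path is an assumption for illustration.
    df = pd.read_csv("training_data.csv")

    # Completeness: share of missing values in each column.
    print(df.isna().mean().sort_values(ascending=False))

    # Duplicates, whose errors would be replicated at scale in the model.
    print(f"Duplicate rows: {df.duplicated().sum()}")

    # Representation: does the assumed 'outcome' label cover all classes
    # in reasonable proportion, or is the data skewed?
    print(df["outcome"].value_counts(normalize=True))

Traceability is harder to automate, but recording where each dataset came from, who collected it and under what consent belongs alongside checks like these.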

Data is vital to developing AI systems, but it can’t work in isolation if we are to build trustworthy systems. Digital empowerment and literacy will be critical to future-proof our society and fully embrace the potential of AI.

AI needs to be understood by all the stakeholders making decisions, so they can be confident that end consumers will receive the right outcomes. We need to collaborate with technical experts to develop guidelines and policies on how to open the ‘black box’ and make these systems and their logic understandable to all stakeholders. Transparency and understanding will help grow trust in what is often seen as unfathomable.
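
One practical way to open the ‘black box’ is permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy drops, which reveals the factors its decisions actually depend on. A minimal sketch using scikit-learn (the dataset and model are illustrative stand-ins; any fitted estimator works):

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Illustrative data and model, standing in for a production system.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature and measure the drop in held-out accuracy.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)

    # The features whose shuffling hurts most drive the model's decisions.
    top = result.importances_mean.argsort()[::-1][:5]
    for i in top:
        print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")

A ranked list like this is not a full explanation, but it gives non-technical stakeholders a concrete answer to ‘what is this system actually looking at?’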

Having the right intent is not enough. We need to have the right governance and the right conduct to ensure AI systems don’t let us down.

In the end, it is the organisations that adopt an integrated, cross-disciplinary approach to trustworthy AI that will be best positioned to manage reputational risk, lead the responsible stewardship of this technology and realise its benefits faster.

Our latest report offers practical guidance on developing trustworthy AI. Read the full report.
