Big data – future of business or an ethical minefield?

Big data and artificial intelligence (AI) have driven much of the investment of the last decade, and look set to continue doing so. But there is a problem: community trust in technology and big business is at a low point. It is clear that self-regulation alone is not enough to achieve desirable outcomes. Business needs to embrace new principles and standards, and appropriate legislation must be developed. Otherwise we will never build trust or realise the benefits of data and AI.

Don’t get me wrong – I love big data. I love that my insurance company knows I have five policies with them and treats my family as a single customer; I benefit from Netflix recommending new shows based on my past viewing; and while targeted adverts in my social feeds can be a little annoying, overall I prefer them to random ones. These are all examples of appropriate and valuable uses of data and artificial intelligence.

However, there is growing noise around the ethics of data, and a risk that the benefits gained through our use of data and AI will be diluted by privacy and transparency concerns. Whether you are the CIO, CEO or CRO, the question at your next executive meeting should not be, “What can we do with our customer/employee/supplier data?” but rather, “What do our customers/employees/suppliers allow us to do with their data?”

The core concerns of this ethical debate are the security of data and whether the data collected is used for the purposes for which it was provided. In response, a number of governments are banning, or considering banning, the use of AI technologies such as facial recognition.

In my view these concerns are valid and need to be addressed. But the way to address them is not through blanket bans and hastily introduced legislation, which feel like knee-jerk reactions. Instead, they should be addressed through a considered combination of principles, agreed standards and legislation.

Ethical principles for the use of data and the development of algorithms and AI: In my view, every organisation should be discussing and documenting its principles for the use of data and AI. There are good examples of large global companies setting out clear principles covering topics such as transparency, accountability and fairness. Microsoft uses such principles: staff are guided on how to develop AI solutions, customers know what their data is being used for, and governments can identify areas where standards or legislation will ensure compliance and build community trust. Australia has made a start with the recently published Data61 AI Ethics Framework, which provides a good guide for any Australian business considering this topic.

Standards for algorithm and AI development: Standards are required to ensure consistency across industries and solutions and to address key issues such as data bias. Global standard setters are turning their minds to this topic, but so far they have addressed it largely through specific use cases such as ethics for autonomous vehicles. Standards need to be broad, building on the principles noted above, and then dive into specific use cases as subsets of that broader framework where required.

Legislation to create community trust: Legislation concerning new technology will always lag behind development. The best way to accelerate the drafting of up-to-date legislation is to focus on those areas where principles or standards will not be enough. Facial recognition in law enforcement is a good example. Likewise, the use of AI in aircraft, in light of the Boeing 737 Max debacle, must be open to legislative scrutiny. Other uses of the technology, such as Amazon’s AI-driven product recommendations, are of lesser importance. Legislation needs to be prioritised and focused.

Principles on their own are not enough. There are examples of companies claiming, “We have an ethical framework for AI” to deflect the discussion away from standards and legislation. All three areas need to be addressed: principles, standards and legislation.

So where to next? We believe a good starting point for every organisation is an ethical framework for the use of data and the development of AI; every business should start drafting that document now.
