Artificial intelligence is here, and consumers want the technology to be used in a conscientious, ethical manner
by Bethan Rees
Take our Professional Refresher on Artificial Intelligence to earn one hour of CPD

Ollie Buckley, executive director of the Centre for Data Ethics and Innovation, has warned that public trust in artificial intelligence (AI) is vital for the UK to continue to develop and implement AI systems, reports Ian Hall in an article for Global Government Forum.
Speaking at the launch of AI in financial services: Impact on the customer, a report published by law firm Pinsent Masons and Innovate Finance, Buckley explained that if people think AI is working against them instead of for them, “we will have a problem”. He also discussed some of the ethical issues facing emerging technologies with an audience of finance professionals.
He said: “What new trade-offs do we make in a world where AI makes things possible that simply weren’t possible before? If banks are able to use AI to identify vulnerable customers from their transaction data, to identify gambling addicts from their pattern of spend online, should they do that, and take steps to protect those people? Do they have a responsibility to do that, or is that a gross infringement of personal privacy? If better predictive power can lower insurance premiums for some but raise the cost for others to prohibitive levels, is that okay and in what circumstances?”
He added that the UK has an opportunity to extend its foundation of good governance to the use of AI, to “set the rules of the road that can set a standard for the rest of the world”.
Global Government Forum article
Consumers believe companies should hold themselves to a higher standard of ethics when using AI, and should build ethical strategies into their business frameworks, Software Testing News reports. The article refers to a report on the global adoption of AI published by the data company International Data Corporation (IDC), which surveyed almost 2,500 people.
According to the article, 60% of companies surveyed report that AI has had a positive effect on their business models and 50% say they are inspired “to add ethical and trust risks into their framework of using AI”.
The vice president of AI strategies at IDC, Ritu Jyoti, is quoted: “As AI accelerates toward the mainstream, organisations will need to have an effective AI strategy aligned with business goals and innovative business models to thrive in the digital era.”
Software Testing News article
The case for ethics
The case for ethics being good for business may seem obvious, and the evidence supports it: companies that cultivate an ethical approach to AI enjoy higher levels of customer trust, according to Sead Fadilpašić for ITProPortal.
Referring to a report by the Capgemini Research Institute, Fadilpašić says that customers would recommend companies that use AI ethically, would be more loyal to them, and would potentially buy more of their products. The report also links ethical use of AI to how a business collects personal data and the extent to which it depends on machines for crucial decisions.
Anne-Laure Thieullent, AI and analytics group offer leader at Capgemini, is quoted: “Consumers, employees and citizens are increasingly open to interacting with the technology but are mindful of potential ethical implications. This research shows that organisations must create ethical systems and practices for the use of AI if they are to gain people’s trust. This is not just a compliance issue, but one that can create a significant benefit in terms of loyalty, endorsement and engagement.”
Thieullent explains that firms could achieve this by putting the right governance structures in place and by informing customers about how they use AI.
What concerns you most about AI? Leave your comments in the box below.
Seen a blog, news story or discussion online that you think might interest CISI members? Email email@example.com.