Grey Matters ethical dilemma: Jane Doe

When a firm replaces an employee with a chatbot, clients don't notice the difference at first. But when things start to go wrong, senior managers must decide what to tell clients and the regulator.


Jane has worked as a customer services administrator for Identity Finance, a medium-sized wealth management firm, for five years. The job is a step down for Jane, who had previously been head of customer relations at a prestigious firm, but, at 64 years old, she wanted to take on a less taxing role before she retired. Nevertheless, because of her experience and reputation for excellent customer service, Jane receives a higher-than-average salary.

Jane’s job involves liaising with clients via email and phone – everything from booking appointments to managing complaints. Over the years, clients have come to trust and rely on Jane, and she takes her responsibility as a brand ambassador seriously.

Knowing that Jane wishes to retire before turning 70, senior management at Identity task Sammy, the head of operations, with finding a solution for when she leaves. They cannot afford to hire another person of Jane’s calibre and sterling reputation, so Sammy looks for alternatives, one of which is new chat software from AI Made Simple. AI Made Simple’s director of development assures Sammy that its ‘chatbot’ service is as good as the real thing and that it can ‘learn’ Jane’s tone and preferred language, thereby providing a seamless transition for when Jane retires.

Sammy presents the idea of a chatbot to the Board, saying that it will not only replace Jane, but will also be cheaper, and will streamline all client communications, including messaging via a chatbot on the company’s website. The software can chat to more than one client at a time and will record all client satisfaction data, plus information about the types of queries received, response time, and how the problems are resolved. The chair of the Board is impressed, but some members express reservations about moving to AI. They cite the trust the clients have in Jane and wonder whether they will be happy speaking to a robot instead. The Board advises Sammy to ensure a smooth transition to the new programme and comes up with the idea of calling the bot Jane so clients don’t notice any difference in the service.

The software is added to the firm’s system and starts to learn from human Jane. AI Made Simple calls this a ‘digital handover’, whereby the system learns to replicate her turns of phrase and tone. Jane’s picture, with a friendly greeting message, is added to the chat function. Jane agrees to this, as she is eager to ensure a smooth transition for the clients after she retires.

Jane’s last day arrives, and she is bid a heartfelt farewell. The next day, Jane the bot takes over. The launch seems to go well, with Jane proving to be efficient and helpful for clients. Nobody seems to notice they are talking to a robot.

A couple of months later, however, Alfie, the head of client services, receives several complaints about Jane. Clients say that Jane is unreliable, does not respond properly to their queries, and seems rude and dismissive. Unaware that they have been speaking to a chatbot, clients ruefully ask Alfie to take disciplinary action against Jane. Alfie apologises to the clients, informs them that the matter will be dealt with, and contacts AI Made Simple, which adjusts the software to fix the problems.

At its next meeting, the Board reviews the usual papers, which include information about the number of complaints received. Company policy states that complaints about staff members must be escalated to the FCA. However, despite the numerous complaints about Jane, the paperwork says that none have been received or reported to the regulator. Alfie explains that because Jane is a computer programme, it does not fall under the category of staff member.

What should the chair recommend?

  1. If the firm does not wish to escalate complaints about an AI programme to the regulator, the complaints should be assigned to Sammy, the person responsible for finding and implementing the programme, or to Alfie, the head of client services, and escalated to the FCA.
  2. The chatbot should be amended to inform clients that they are talking to a robot. However, Alfie has dealt with the complaints to the satisfaction of the clients, so the FCA need not be informed on this occasion.
  3. All clients should be written to, informing them that for the past two months, they have not been speaking with Jane, but with a robot. A message will be put on the website noting that the Jane programme has been running for two months, and it will be made clear on all future communications that clients are speaking with a robot. The complaints will be logged, in case of a visit by an FCA supervisor.
  4. Jane is clearly a liability. This is a failed experiment, and the company should either stop using the programme or hire a junior client services administrator to work alongside the bot and monitor its responses.

This dilemma appears in the October 2019 print edition of The Review, out soon. The CISI's opinion will be published in the Q4 2019 print edition of The Review.
Published: 17 Sep 2019
Categories:
  • Fintech
  • Soft Skills
  • Career Development
  • Integrity & Ethics
Tags:
  • Jane Doe
  • grey matters ethical dilemma
  • chatbot
  • AI
