
AI could scupper the best-laid TCF plans

20 March 2024 | People and Companies | Events | Gareth Stokes

Financial advisers and other financial services providers (FSPs) cannot ignore the impact of emerging technology on the financial sector, especially when it comes to their adherence to treating customers fairly (TCF) principles and ensuring financial consumer protection. This rather wordy introduction sets the scene for a day-one panel discussion at the inaugural Financial Sector Conduct Authority (FSCA) Industry-wide Conference, held recently in Johannesburg and online.

More computers doing your job, at scale

Nolwazi Hlophe, Senior Fintech Specialist at the FSCA, set the scene for an hour-long debate on how the fast-paced adoption of artificial intelligence (AI) and similar technologies might influence consumer protection outcomes. “As a market conduct regulator, we cannot ignore the [impact that tech might have on our] consumer protection mandate,” she said. Her first task was to define AI in the present day. “Traditional AI hinges on getting a computer to do the job of a human,” Hlophe said. In other words, a traditional AI is capable of human-like pattern recognition, classification and prediction at scale.

The generative AI deployed in large language models such as ChatGPT takes AI to the next level, allowing for the creation of new content. Suddenly, all and sundry have begun training AI models, using reams of data to create seemingly original audio, code, text or video. Nowadays, you will find examples of this creative-type AI in virtually every smartphone or web application. Case in point: LinkedIn, which describes itself as a business- and employment-focused social media platform, already offers a ‘generate AI text’ button alongside many of its content input boxes.

Turning to how AI is disrupting the financial services sector, Hlophe noted that the technology was useful in navigating the reams of data, both structured and unstructured, that banks and other FSPs were generating through their digital interactions with consumers. She commented that AI made it possible for product providers to develop better products and services, and to better understand how consumers interact with them. AI has made it easier for financial sector firms to meet regulatory requirements; to automate data collection; and to improve customer service, to name a few benefits.

Using AI to boost compliance efficiencies

AI and generative AI are leaving a mark across financial institutions’ operating footprints, including in areas like combating fraud; customer support; product development; and value creation. “South African consumers are looking forward to this technology growing, and are keen to see what will happen,” Hlophe said. Banks are leading the way, with insurers and start-ups in both Fintech and Insurtech following suit. The presentation mentioned FNB Risk for its use of AI systems to increase efficiencies in regulatory reporting, and Nedbank’s recent adoption of Microsoft 365 Copilot, an AI-backed efficiency tool.

The benefits of rapid tech adoption are offset by significant risks to financial consumers. “We need to ensure that financial consumers are protected, so that they can have confidence and trust in the financial services sector,” Hlophe said. She identified bias, discrimination or unfair outcomes as a key risk in AI, due to the technology’s reliance on anonymised data. The concern here is that insurance premiums or bank lending rates can be influenced by biases baked into the data on which the AI model is trained. A second major risk is the inability of a financial institution to explain AI outcomes, especially in unsupervised learning models. 
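
To make this risk concrete, consider a minimal, hypothetical sketch in Python (all names, postcodes and figures are invented) of how bias baked into historical lending data resurfaces in a model’s pricing, even after the protected attribute itself has been anonymised away:

from statistics import mean

# Historical lending data: the postal code acts as a proxy for a protected
# attribute that anonymisation removed. All figures are invented.
history = [
    {"postcode": "2001", "income": 30_000, "rate": 18.5},
    {"postcode": "2001", "income": 32_000, "rate": 18.0},
    {"postcode": "8001", "income": 31_000, "rate": 12.5},
    {"postcode": "8001", "income": 29_000, "rate": 12.0},
]

def predict_rate(postcode: str) -> float:
    """Naive 'model': price a new applicant at the historical average for
    their postal code, faithfully reproducing past disparities."""
    return mean(row["rate"] for row in history if row["postcode"] == postcode)

# Two applicants with near-identical incomes get very different rates,
# purely because of where they live.
print(predict_rate("2001"))  # 18.25
print(predict_rate("8001"))  # 12.25

The postal code here stands in for any proxy variable; dropping the sensitive field does not remove the pattern it left behind in the data.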

Dr Mark Nasila, Chief Data and Analytics Officer at FNB Risk, said that AI and generative AI were ‘top of mind’ at board meetings presently. After offering an overview of global macroeconomic and risk complexities, he noted that “business leaders were seeing AI and generative AI as a mature technology that will help them not only navigate this current reality, but also to keep up with providing services and products that are relevant”. He also observed that the technology was disrupting all industries at the same time, and that the financial services sector was not alone in navigating the resulting benefits and risks. 

Avoiding knee-jerk reactions to AI

Ayanda Ngcebetsha, Director for Data and AI at Microsoft South Africa, warned organisations against a knee-jerk reaction to AI. He noted that the fear of missing out on gains in customer engagement; customer retention; and a greater share of each customer’s wallet has seen many firms adopt the technology without the necessary due diligence. His focus, however, was on data. “Data is the oxygen for AI,” he said. “There is no AI without data; even outside of generative AI, machine learning gained traction and became a de facto standard for predictive analytics simply because we increased the amount of data that we produced”.

The proverbial elephant in the room is that AI amplifies imperfections in existing data sets. According to Ngcebetsha, organisations with a strong data culture, underpinned by governance and trust-led data decision making, are less likely to trip up over this amplification risk. “Generative AI creates net new content from what you give it; if your customer data or your transactional data has gaps, then you risk amplifying those gaps throughout your ecosystem,” he warned. Companies that paid their school fees by ‘fixing’ their data platforms early on have been the most successful at adopting AI.
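
Ngcebetsha’s amplification point can be illustrated with a toy, purely hypothetical sketch: a naive ‘generative’ model that resamples its own training distribution will under-produce whatever that data under-represents, and once synthetic output re-enters the ecosystem, the gap can only persist or widen. The segment labels and counts below are invented:

import random
from collections import Counter

random.seed(1)

# Customer records in which one segment is badly under-represented.
training = ["urban"] * 19 + ["rural"] * 1

def generate(data, n):
    """Naive 'generative model': resample from the training distribution."""
    return random.choices(data, k=n)

# Synthetic output re-enters the ecosystem as training data, round after round.
data = training
for round_no in range(1, 6):
    data = generate(data, 20)
    print(round_no, Counter(data))

# The under-represented segment's share drifts at random, and once a round
# yields zero 'rural' records, no later round can recover them: the original
# gap is amplified, never corrected.

Real generative models are far more sophisticated, but the feedback dynamic, in which gaps in the input become gaps in the output, is the same.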

“How do we protect consumers from bad actors using AI, and from the unintended outcomes of rushed AI implementations?” asked Phokeng Mogase, CIO at the FSCA, who served as moderator for the discussion. “Responsible AI ensures that organisations leverage AI to design products and services that are not only safe and inclusive for consumers, but also safe for the internal operation of the organisations,” Nasila said, before adding that regulators would have to urge financial institutions to adopt technology responsibly. He advocated for collaborative oversight of how firms adopt AI, including ways to stamp out bias in algorithms.

Some tech tips for regulators

The panel suggested that regulators use AI to automate many of the regulatory compliance and oversight functions, and to monitor and catch AI-related risks, while freeing up staff for other human-centric tasks. “You need to leverage AI to produce leading indicators; to be proactive; and to have controls around data,” Nasila said. “You can then allow people to focus on thinking aspects, especially where bias- and ethics-related risks become complex”.
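
As a rough sketch of what such automation might look like (a simple rolling-average monitor over invented complaint figures, not any regulator’s actual tooling), anomalous weeks can be flagged automatically as a leading indicator, leaving staff to investigate only the flagged cases:

from statistics import mean, stdev

# Weekly consumer-complaint counts for one product line (invented figures).
weekly_complaints = [12, 14, 11, 13, 15, 12, 14, 13, 41]

def flag_anomalies(series, window=6, z=3.0):
    """Flag weeks sitting more than z standard deviations above the rolling
    mean of the preceding `window` weeks: a crude leading indicator."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma and (series[i] - mu) / sigma > z:
            flagged.append(i)
    return flagged

print(flag_anomalies(weekly_complaints))  # [8], the 41-complaint week

In practice a regulator would feed far richer conduct data into far more capable models; the point is simply that the routine scanning is automated while the judgement stays human.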

Returning to the industry-wide focus, the panellists said it was important to acknowledge human biases. “If we do not catch our unconscious bias, it will be transferred into the models we build, and if we do not ensure inclusiveness in how we build models, we risk perpetuating bias,” Ngcebetsha said. He reminded the audience that all generative AI models, including large language models, are trained on data that reflects a point in time. For example, ChatGPT was initially trained on a subset of Internet data older than 31 December 2021. The question becomes: who produced this data? The panel took the bait, setting off on a “this content was generated outside of Africa” tangent. But that, dear reader, is a debate for another day.

Bias, inclusivity are top considerations in AI deployments

Suffice to say, there are clear risks in the deployment of AI on ‘tainted’ data sets. Ngcebetsha concluded: “If you fail to solve for bias, you cannot ensure fairness”. He warned that by failing to get their data and due diligence right at the start of an AI implementation, firms risked creating serious problems further down the value chain.

Writer’s thoughts:

We are aware of poor consumer outcomes from algorithm- or AI-backed banking applications, most notably race-based interest rate or premium variances. Have you or your clients ever encountered financial services biases that you believe were ‘baked in’ by AI or automation? Please comment below, interact with us on Twitter at @fanews_online or email your thoughts to editor@fanews.co.za.


Quick poll

How concerned are you that your clients might fall for deepfake or other AI-backed cybercrime scams, especially in financial or investment settings?