Four ways AI can stop human error at financial institutions


Written by Betsy Foresman | May 28, 2022

It’s only human to make mistakes. From spilled coffee to accidental “reply all” emails, employees have the occasional bad day at work. Unfortunately, some mistakes can have devastating effects, like a fat-finger error that turns $1 million into $10 million, resulting in large, often unrecoverable losses.

While banks aren’t required to report how much human error costs them each year, it no doubt makes up a significant portion of operational losses within financial institutions. From 2011 to 2016, major banks suffered nearly $210 billion in operational losses, according to the Operational Riskdata Exchange. The cost of mistakes isn’t just financial. Banks can also tarnish their reputation when careless blunders hit the headlines, and trust is paramount in money matters.

Reducing the cost of human error provides a huge opportunity for banks to apply innovative technology like artificial intelligence and machine learning. Providing employees with intelligent tools like AI not only minimizes the chances of disastrous errors but also gives people the opportunity to focus their attention on more important tasks that still require the creative and intuitive eye of a human. In the areas of regulatory compliance, loan bias, cybersecurity risk, and data entry, there are rich opportunities for AI to diminish costly human error.

Avoiding mistakes by using AI for regulatory compliance

Financial regulators require banks to track, manage and analyze detailed data. What’s more, regulatory change has increased nearly 500% since the 2008 financial crisis. Employees end up spending hundreds of hours reading rules like the updated Anti-Money Laundering Act, the FRTB framework, and the EU’s PSD2.

These large and ever-changing areas of compliance can easily give occasion to costly human errors. In 2019 alone, there were 1,059 enforcement actions against financial institutions from major regulatory agencies: the SEC, FINRA, the CFTC, and the NFA. In total, banks paid nearly $6 billion in fines and penalties that year.

Reducing human error in regulatory compliance is a prime application for artificial intelligence. AI and ML’s powerful ability to process data and recognize patterns can shed light on blind spots and errors in regulatory compliance. Intelligent software can create gated workflows that ensure each step of a regulatory process is complete, or monitor the regulatory horizon for new rules and amendments to existing ones, helping financial institutions reduce error-prone manual tasks and stay current in a changing environment.
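As a rough illustration of the gated-workflow idea, here is a minimal Python sketch in which each step must be completed in order, with evidence attached, before the process can close. The `GatedWorkflow` class, the step names, and the evidence field are hypothetical stand-ins for whatever a real compliance platform would track.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    done: bool = False
    evidence: str = ""          # e.g. a reference to the filed document

@dataclass
class GatedWorkflow:
    steps: list

    def complete(self, name: str, evidence: str) -> None:
        """Mark the next outstanding step as done, refusing to skip gates."""
        for step in self.steps:
            if not step.done:
                if step.name != name:
                    raise RuntimeError(f"cannot complete '{name}' before '{step.name}'")
                step.done, step.evidence = True, evidence
                return
        raise RuntimeError("all steps are already complete")

    def outstanding(self) -> list:
        return [s.name for s in self.steps if not s.done]

# A simplified KYC refresh: each gate must be cleared in order, with evidence.
workflow = GatedWorkflow(steps=[
    Step("collect customer documents"),
    Step("screen against sanctions lists"),
    Step("record reviewer sign-off"),
])
workflow.complete("collect customer documents", evidence="doc-batch-4471")
print(workflow.outstanding())   # the gates still blocking sign-off
```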

Marketing departments are an excellent place for financial institutions to deploy AI-driven regulatory technology, or RegTech. Banks that operate globally must navigate a maze of jurisdiction-specific rules designed to protect consumers. Companies can pay steep fines for non-compliant advertising, and mistakes can also cost them hard-earned brand equity.

Manual checks can’t keep pace with the speed and scale of advertising, but by implementing RegTech with location-specific compliance rules, marketing teams can launch new campaigns much faster and with less risk of non-compliance. AI can detect and suspend a misfired tweet, catch a violation in a print advertisement, or find misleading claims in marketing materials.
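As a toy sketch of what location-specific rules could look like in code, consider screening ad copy against a rule table keyed by market. The patterns, markets, and reasons below are purely illustrative assumptions, not actual regulatory requirements.

```python
import re

# Illustrative rule sets keyed by market (assumptions for this sketch only).
RULES = {
    "US": [
        (r"\bguaranteed returns?\b", "promises guaranteed returns"),
        (r"\brisk[- ]free\b", "describes an investment as risk-free"),
    ],
    "EU": [
        (r"\bno fees\b", "unqualified 'no fees' claim"),
        (r"\bguaranteed returns?\b", "promises guaranteed returns"),
    ],
}

def screen_ad_copy(text: str, market: str) -> list:
    """Return the compliance flags raised for a given market."""
    hits = []
    for pattern, reason in RULES.get(market, []):
        if re.search(pattern, text, re.IGNORECASE):
            hits.append(reason)
    return hits

copy = "Open an account today and enjoy guaranteed returns with no fees!"
for market in ("US", "EU"):
    print(market, screen_ad_copy(copy, market))
```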

Ultimately, it’s not a matter of AI replacing people but of partnering AI with people to help financial institutions cut down on tedious, error-prone manual compliance processes. This marriage of human and machine can also help financial institutions demonstrate accuracy and reliability to regulators.

Reducing human bias in credit decisions with AI

Cognitive bias is a systematic error in human thinking. It results in unfair judgments and prejudicial decision-making. Within financial institutions, these human biases can affect decisions on who gets credit and on what terms.

The 1974 Equal Credit Opportunity Act prohibited credit-scoring systems from using information such as gender, race, religion, marital status, or national origin in credit decisions. However, human bias still affects minority individuals. In 2017, the mortgage application denial rate was 18.4% for Black applicants, 13.5% for Hispanic applicants, 10.6% for Asian applicants, and 8.8% for non-Hispanic white applicants. In 2020, when the Paycheck Protection Program rolled out, entrepreneurs from minority ethnic groups also found it more difficult to get small-business Covid relief loans. The data continues to show that lenders still make decisions influenced by cognitive bias.

Eliminating bias from credit decisions is not only the right thing to do; it can also improve financial institutions’ bottom line. Removing bias from banking can improve customer experience, increase market opportunities and yield valuable business insights.

A 2019 study from UC Berkeley found that financial technology (FinTech) algorithms discriminated 40% less on average than face-to-face lenders in loan pricing, and they did not discriminate at all in accepting or rejecting loans. In loan approvals, FinTech platforms have increased the number of first-time and millennial homebuyers. There have also been increases among traditionally underrepresented home buyers, including people of color, single women, LGBTQ+ couples, and customers with student loan debt.

AI systems can increase the accuracy and objectivity of decisions. Instead of allowing subconscious bias to affect an outcome, AI can process data in adherence to a uniform set of rules. AI and ML can open up new opportunities for financial institutions to improve the fairness and quality of their services.

It is important to remember that while AI can help banks move beyond the traditional credit reporting and scoring systems that often perpetuate human bias, AI-based decisions can still contain bias. AI systems learn to make decisions from training data, and that data can carry human bias with it. Financial institutions can achieve more equitable lending practices by training and testing algorithms on new data models rather than on the biased credit data of the past.
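One simple, hedged illustration of testing a model for bias is to compare its approval rates across groups on held-out data. The column names and the 0.8 threshold (loosely inspired by the common “four-fifths” rule of thumb) are assumptions for this sketch, not a complete fairness audit.

```python
import pandas as pd

def approval_rate_parity(df: pd.DataFrame, group_col: str, decision_col: str) -> pd.Series:
    """Each group's approval rate divided by the highest group's rate."""
    rates = df.groupby(group_col)[decision_col].mean()
    return rates / rates.max()

# Tiny made-up test set: 1 = approved, 0 = denied.
test = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   0,   0,   1],
})

parity = approval_rate_parity(test, "group", "approved")
print(parity)
# Flag any group whose relative approval rate falls below 0.8 for human review.
print(parity[parity < 0.8])
```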

Preventing the next cybersecurity slip-up with AI

Humans are often the most vulnerable link in the cybersecurity chain, and hackers know it. According to the World Economic Forum, human error accounts for 95% of security breaches. Social engineering attacks, like phishing emails, prey on this vulnerability and successfully trick employees into giving third parties access to internal networks by impersonating legitimate communication.

With their data-rich environments and large financial assets, financial institutions are some of the hottest targets for cyberattacks. In the first quarter of 2021, the financial sector was the most targeted industry for phishing attacks, according to the Anti-Phishing Working Group. Successful cyberattacks can result in tremendous financial losses in the form of costly ransoms, lost data, and reputational damage. The average cost of a data breach in the financial sector in 2021 was $5.72 million per incident.

When it comes to cybersecurity, small mistakes can have big consequences. Customers invest a lot of trust in financial institutions and expect banks to keep their sensitive data private and protected. Careless errors made by employees can break this trust in a flash.

In 2021, a large venture capital firm was hacked after an employee fell victim to a phishing email. The incident exposed the personal and financial data of the firm’s investors, and while the firm reported that none of the information was traded or otherwise exploited on the dark web, events like this are a PR nightmare for financial institutions that bank on trust and security.

As cyberattacks grow more sophisticated and numerous, AI can help prevent the careless mistakes employees make from time to time, like clicking on a seemingly legitimate link or sending confidential information to a hacker masquerading as the VP of Accounting. AI and ML algorithms can detect and flag phishing emails to alert employees of a potential threat. The software combs through an email’s metadata and message content looking for potential red flags, then analyzes the context to make a final judgment about the email’s legitimacy.
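Real phishing filters rely on trained classifiers over far richer signals, but a toy score over a few of the red flags described above gives the flavor. Every rule, phrase list, and threshold here is an assumption for illustration.

```python
import re

# Illustrative urgency and credential-harvesting phrases (assumed, not exhaustive).
SUSPICIOUS_PHRASES = ["verify your account", "urgent", "wire transfer", "password"]

def phishing_score(sender: str, reply_to: str, subject: str, body: str) -> int:
    score = 0
    # Metadata check: the reply-to domain differs from the sender domain.
    if sender.split("@")[-1].lower() != reply_to.split("@")[-1].lower():
        score += 2
    # Content checks: urgency and credential-harvesting language.
    text = f"{subject} {body}".lower()
    score += sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    # Links that point at a bare IP address instead of a named domain.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 2
    return score

score = phishing_score(
    sender="vp.accounting@bank.example",
    reply_to="vp.accounting@bank-secure.example",
    subject="URGENT: confirm wire transfer",
    body="Please verify your account at http://192.0.2.15/login",
)
print("flag for review" if score >= 3 else "deliver")
```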

The cybersecurity threat level for financial institutions increases every day, but the strategic application of AI helps banks keep pace and stay ahead of hackers.

Assisting humans with AI to stop costly data entry errors

In 2001, a London-based trader for Lehman Brothers, which later filed for bankruptcy, sold £300 million in stock shares instead of £3 million.

In 2002, an employee for Bear Stearns, another investment bank no longer in business, accidentally entered $4 billion instead of $4 million on a single trade deal. This mistake led to a 100-point drop in the Dow Jones Industrial Average.

More recently, in 2021, a well-known financial institution mistakenly paid off a client’s $900 million loan due to a clerical error.

Whether these catastrophic mistakes happened because of a fat finger, a tired mind, or a rushed response, they are all uniquely human and entirely avoidable. Manual data entry is not only inefficient; it also carries a higher chance of human error. Analysts spend an estimated 40% of their time vetting, verifying, or fixing data. With that much time spent on data handling, there are endless opportunities for mistakes.

Not every data entry error causes a multi-million-dollar loss, but even small ones create significant, often compounding problems that are costly to fix. Bad data can cause unreliable loan reports, work delays, decreased productivity, inaccurate reporting, and damage to a bank’s reputation. Each year, bad, missing, or inaccurate data costs U.S. businesses around $600 billion.

Optical character recognition (OCR) and machine learning can extract and process data reliably and quickly. Speed and accuracy in data processing are fundamental to financial institutions, and as the volume of data they handle grows, swapping manual data entry for automated processes reduces the chance of mistakes.
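Here is a minimal sketch of the step downstream of OCR, assuming the document image has already been turned into text: extract the amount field and sanity-check it against the originating system before it touches a ledger. The field format, helper names, and tolerance are illustrative assumptions.

```python
import re
from decimal import Decimal

def extract_amount(ocr_text: str) -> Decimal:
    """Pull the amount out of recognized text, or escalate to a human."""
    match = re.search(r"Amount:\s*\$?([\d,]+(?:\.\d{2})?)", ocr_text)
    if match is None:
        raise ValueError("no amount field recognized; route to manual review")
    return Decimal(match.group(1).replace(",", ""))

def validate(extracted: Decimal, expected: Decimal,
             tolerance: Decimal = Decimal("0.01")) -> bool:
    """Compare against the amount recorded in the originating system."""
    return abs(extracted - expected) <= tolerance

ocr_text = "Wire instruction\nAmount: $4,000,000.00\nBeneficiary: ..."
amount = extract_amount(ocr_text)
print(amount, validate(amount, Decimal("4000000.00")))
```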

Financial institutions can also use AI to improve trading. An AI-powered trading assistant can analyze trading behavior and current market data to determine whether someone is about to make a bad trade. It can even detect trade anomalies and spot when that $4 billion order should actually be $4 million.
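A hedged sketch of a fat-finger check along these lines: compare a new order’s size with the trader’s recent history and hold anything far out of line for confirmation. The statistic and threshold below are illustrative assumptions, not a production risk control.

```python
from statistics import mean, stdev

def is_anomalous(order_size: float, recent_sizes: list, z_threshold: float = 4.0) -> bool:
    """Flag orders whose size is wildly out of line with recent history."""
    mu, sigma = mean(recent_sizes), stdev(recent_sizes)
    if sigma == 0:
        return order_size != mu
    return abs(order_size - mu) / sigma > z_threshold

recent = [3.2e6, 4.1e6, 3.8e6, 5.0e6, 4.4e6]   # recent trade sizes, in dollars
print(is_anomalous(4.0e6, recent))   # False: in line with history
print(is_anomalous(4.0e9, recent))   # True: the $4 billion that should be $4 million
```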

AI can serve financial institutions in more ways than one

People are still at the core of every financial institution. Their creativity and problem-solving abilities allow businesses to be flexible and dynamic, and there is no doubt that the human touch is irreplaceable. But people come with the potential to make mistakes; to err is human. AI allows banks to reduce these errors, not by replacing humans but by assisting them in their work. By partnering people with machines, financial institutions gain the best of both worlds: human ingenuity and the reliability of AI, each enhancing the other. Strategic investments in AI will give banks capabilities that improve their bottom line and better serve their customers. The world is getting bigger, financial institutions are getting bigger, and partnering people with intelligent tools to improve their performance will pave the way for better and continued growth.
