Compliance officers like to use the latest and best technology, and for good reason. Technology is great and can do all sorts of labor for us, provided we understand how it actually works and how to govern it properly. So today let’s consider the intersection of compliance and AI (artificial intelligence).
In the fullness of time, AI probably will transform a vast range of human experiences. With that said, the fullness of time is also a long way off. Compliance officers have issues to confront today, and basic AI already exists that might be able to help. The key is to embrace AI wisely, rather than recklessly.
Defining Artificial Intelligence
First, start with what AI is not. AI is not the automation of business processes; that’s a separate technology known as robotic process automation (RPA).
A compliance program can use RPA without artificial intelligence, and at many companies that technology is the precursor to AI. For example, when we talk about automating due diligence background checks or automating the sending of training materials to third parties based on their due diligence risk profile — that’s RPA.
Put simply, RPA mimics human tasks. Artificial intelligence mimics human thinking.
Ultimately, then, compliance officers want to use RPA and AI together. The artificial intelligence identifies certain patterns or circumstances and then triggers some action that the RPA undertakes. That’s how you, the compliance officer, can spend more time thinking about sophisticated challenges that still require human analysis, while the technology does most of the scut work for you.
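That division of labor can be sketched in a few lines of code. Everything below is a hypothetical illustration, not a real compliance product: the scoring rule stands in for the AI side, and the case-logging function stands in for the RPA side.

```python
# Hypothetical sketch: AI recognizes the pattern, RPA does the legwork.

def score_transaction(txn):
    """Stand-in for the AI side: a crude anomaly score based on how
    far the amount sits from a typical value (here, a made-up 1,000)."""
    typical = 1_000
    return abs(txn["amount"] - typical) / typical

def open_case(txn, case_queue):
    """Stand-in for the RPA side: a routine, repeatable task (opening
    a case file) triggered automatically by the AI's flag."""
    case_queue.append({"vendor": txn["vendor"], "amount": txn["amount"]})

def review_transactions(transactions, threshold=5.0):
    case_queue = []
    for txn in transactions:
        if score_transaction(txn) > threshold:   # AI finds the pattern...
            open_case(txn, case_queue)           # ...RPA takes the action
    return case_queue

cases = review_transactions([
    {"vendor": "Acme Ltd", "amount": 950},
    {"vendor": "Globex SA", "amount": 48_000},   # far outside the norm
])
# Only the outlier lands in the queue, where a human makes the real call.
```

Note that the automated action here is deliberately low-stakes: the bot opens a case for a person to review, rather than deciding anything on the company's behalf.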
So what do compliance officers need to think through before diving head first into the AI world?
What CCOs Should Know About AI
First, determine what challenges you want AI to address. Notice we didn’t say “problems you want AI to solve.” That’s deliberate. Organizations have problems that need to be solved; AI just does work to help you solve them. It’s important to keep that distinction clear, so the AI projects you develop don’t give too much autonomy to AI — autonomy that might lead to results you didn’t expect or create new problems you didn’t have before.
For example, right now AI is best suited for pattern recognition among large sets of data. It’s a gigantic “if this then that” exercise that can chew through far more variables, in far less time, than people ever could.
So you could use AI to find strange or anomalous situations that might need your attention. It could review business contracts for possible connections to suspicious third parties. It could find potential fraud sprinkled across a vast number of business transactions, or help to focus your training efforts based on data about employee performance.
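To make one of those use cases concrete, here is a toy version of screening contract counterparties against a watchlist of suspicious third parties. The names, watchlist, and similarity threshold are all invented for illustration; real screening tools use far richer matching than this.

```python
# Toy third-party screening sketch; all names and thresholds are invented.
from difflib import SequenceMatcher

def similarity(a, b):
    """Rough string similarity between two names, 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen_parties(contract_parties, watchlist, threshold=0.85):
    """Flag counterparties that closely resemble a watchlist entry,
    even when the spelling differs slightly."""
    hits = []
    for party in contract_parties:
        for listed in watchlist:
            if similarity(party, listed) >= threshold:
                hits.append((party, listed))
    return hits

hits = screen_parties(
    ["Northwind Traders", "Blue Harbor Logistcs"],   # note the typo
    ["Blue Harbor Logistics", "Redstone Imports"],
)
# The misspelled vendor still matches its watchlist entry; the clean
# vendor sails through.
```

The fuzzy match is the point: a person eyeballing thousands of contracts will miss a one-letter variation, while the machine flags it instantly for review.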
The potential mistake is giving too much control to AI, where it makes decisions that bind the company to some course of action: actions that might bring legal liability, regulatory scrutiny, or social media attention. For example, you could certainly use AI to help you find potential FCPA violations. You would not want that AI to decide which potential violations should be reported to the Justice Department.
Therefore a good exercise while scoping an AI project is to ask: What is the AI committing your organization to do? Are we comfortable with those commitments? (Which is, really, just a fancy way of asking what your organization’s risk tolerance around AI is.)
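One way to encode that scoping exercise is a simple routing rule: low-commitment tasks may run automatically, while anything that binds the organization always lands in a human's queue. The action names below are purely illustrative.

```python
# Hypothetical routing gate; action names are invented examples.

AUTO_SAFE = {"log_alert", "request_documents"}            # low-commitment tasks
HUMAN_ONLY = {"report_to_regulator", "terminate_vendor"}  # binding decisions

def route_action(action, human_queue, audit_log):
    """Send binding decisions to a person; let automation handle the rest.
    Unknown actions default to human review, not to automation."""
    if action in HUMAN_ONLY:
        human_queue.append(action)
    elif action in AUTO_SAFE:
        audit_log.append(action)
    else:
        human_queue.append(action)

human_queue, audit_log = [], []
for action in ["log_alert", "report_to_regulator", "freeze_payment"]:
    route_action(action, human_queue, audit_log)
# "log_alert" proceeds automatically; the other two wait for a human.
```

The default-to-human branch is the risk-tolerance decision in miniature: when in doubt, the machine asks rather than acts.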
Compliance Data and AI
Second, AI needs as much data as you can give it. In the same way that experience brings wisdom to the human mind, data brings wisdom to AI. Data helps AI to learn. Once you do define a project where AI can help, the next question to ask is: Where can you find the data that AI will need to do the job?
As a practical matter, AI is useful to compliance functions because it can handle multiple data formats, including unstructured data. So it can work with bigger collections of information — emails, instant messages, PDFs, spreadsheet fields, Salesforce reports, and more — that would be too daunting for human minds.
And if AI can work with multiple data formats, that means it can work with the software solutions that generate all that data. That’s welcome news for large, complex organizations, where different parts of the enterprise might use different technology to do their jobs. Rather than revamp their IT and workflows, AI can scoop up that data and normalize it into something compliance officers can use.
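Here is what that normalization step looks like in miniature. The source formats, field names, and vendors below are invented for the example; the point is that records from different systems end up in one shape the compliance function can query.

```python
# Hypothetical normalization sketch; source formats and fields are invented.

def from_expense_csv(row):
    """Normalize a line from a hypothetical expense-system CSV export."""
    vendor, amount = row.split(",")
    return {"vendor": vendor.strip(), "amount": float(amount),
            "source": "expenses"}

def from_crm_record(record):
    """Normalize a record from a hypothetical CRM system's export."""
    return {"vendor": record["account_name"], "amount": record["deal_value"],
            "source": "crm"}

normalized = [
    from_expense_csv("Acme Ltd, 1200.00"),
    from_crm_record({"account_name": "Globex SA", "deal_value": 48_000.0}),
]
# Both records now share one schema, whatever system produced them,
# so downstream analysis never needs to care where the data came from.
```

Each business unit keeps its own tools and workflows; only the thin adapter layer changes when a new data source appears.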
So you’re allowing your organization to go about its business, while the compliance function gains the insight and assurance it needs. Couple that AI computing power with RPA and advanced data visualization or dashboards, and suddenly the CCO is getting somewhere.