
Here comes AI enforcement

By Matt Kelly

Since last year, the U.S. Securities and Exchange Commission has been warning companies not to engage in “AI washing” — that is, making misleading statements to the public about how a business uses artificial intelligence.

Now the agency has made good on that warning with its first enforcement actions against AI washing. Corporate compliance officers should take note, because these actions certainly won’t be the last.

The sanctions were imposed against Global Predictions Inc., an online investment advisory firm based in San Francisco; and Delphia USA, the U.S. subsidiary of an investment advisory firm in Toronto. Both firms settled civil charges that they made misleading statements about how they used artificial intelligence to make investment recommendations to customers.

“As today’s enforcement actions make clear to the investment industry, if you claim to use AI in your investment processes, you need to ensure that your representations are not false or misleading,” said Gurbir Grewal, director of the SEC’s Division of Enforcement. “And public issuers making claims about their AI adoption must also remain vigilant about similar misstatements that may be material to individuals’ investing decisions.”

These two specific cases are rather straightforward. Both firms promised in marketing materials that they used artificial intelligence to identify the best possible investment recommendations for customers… and then didn’t do that. Delphia even admitted to SEC examiners that it didn’t use AI, yet continued claiming in its marketing materials that it did; Global Predictions simply “could not produce documents to substantiate this claim,” as the SEC put it with typical understatement.

Don’t let the narrow circumstances and simplicity of these two cases fool you. The risks of AI washing are more widespread than they might seem, and compliance professionals need to think carefully about how to handle them.

Expand Your Risk Assessment Reach

For companies subject to SEC jurisdiction (not only public issuers trading on U.S. stock exchanges, but also SEC-registered firms such as the two investment advisers here), the risk of AI washing is that you’ll make some claim about how you use or govern artificial intelligence that subsequently turns out to be wrong.

For example, an insurance company might use AI to set premium prices and declare, “We take all prudent steps to ensure that our AI doesn’t discriminate against certain population groups.” If it’s later discovered that the AI did discriminate against certain groups, the SEC could bring an enforcement action against the company.

The challenge for companies will be to ensure that they:

  • Understand the risks posed by their use of artificial intelligence;
  • Impose appropriate safeguards and controls to govern those risks; and
  • Disclose all that information clearly and accurately to the public.

At the abstract level, those three bullet points are nothing new. Companies should already be doing the same for, say, cybersecurity, ESG, and other risks; artificial intelligence is just the latest guest at that compliance and disclosure party.

In practice, however, compliance officers will want to ensure that their companies have several fundamentals in place.

First, does the company have a policy about who uses artificial intelligence, and how? A company can’t properly disclose its AI risks if management doesn’t know who within the company is actually using AI. So management might want to adopt a policy requiring that any employees who want to use AI first submit their plans for management review.

Second, can the company properly assess the risks of AI? For example, say an employee wants to use AI to draft customized marketing copy. Can the IT department adequately assess the cybersecurity risks? Can the compliance or privacy team assess potential regulatory risks? Can internal audit help you test the quality of data feeding into AI, and the quality of the final product coming out of it?

Third, is the company disclosing all material information to investors? This is more a question for the company’s legal team or corporate secretary, who are in charge of filing reports with the SEC — but compliance officers will play an important supporting role here. You are the ones who know regulatory risks better than anyone else, and who help to ensure that policies, procedures, and internal controls all align to keep regulatory compliance risk low.

Policies, Monitoring, and Governance

What we’re really talking about here is the compliance program’s ability to expand its reach to encompass the new risks that artificial intelligence brings to your company.

That is, since the company will need new policies to address AI risks, you’ll need strong policy management capabilities. Since the company will need to monitor the risks of AI gone wrong, you’ll need strong internal reporting and investigation capabilities. And since businesses use third parties for all manner of mission-critical tasks, you’ll need strong third-party risk management capabilities to be sure none of them is using AI in some way that might bring compliance, ethical, or reputational risk back to your own company.

Ultimately the answer here is that the company must have strong governance over its use of AI. Senior management will need to make AI governance a priority, and then the compliance officer will be one voice on a team of executives — IT, internal audit, finance, business operations leaders, and others — who must act in concert to keep your adoption of AI on the right path forward.

At this early stage of AI adoption, exactly how companies will get all that right remains unclear. But the Securities and Exchange Commission is already telling us that there will be consequences if you get it wrong.
