Acting Comptroller of the Currency Michael Hsu is calling on banks and their artificial intelligence partners to adopt a shared responsibility model for risk.
Speaking June 6 at the annual Financial Stability Oversight Council conference, Hsu said federal agencies such as the OCC, FSOC and U.S. AI Safety Institute “can play a positive role in facilitating the discussions and engagement needed to build trust in order to maintain U.S. leadership on AI.”
“Companies could negotiate terms with their AI partners to ensure that the liability for bad outcomes was shared,” Hsu added. “Or companies could sue their AI partners under tort theories of strict liability, product liability, or negligence. But such efforts are unlikely to be successful in changing the landscape more broadly.”
Much of Hsu’s speech centered on the risks he sees in AI. Fraud enabled by the technology, though generally financially manageable today, has the potential to become much more expensive as criminals grow more proficient. Hsu noted AI agents could take short positions in a series of banks, then spread disinformation to spark bank runs and destabilize the stocks.
“More importantly, an increase in AI-powered fraud could sow the seeds of distrust more broadly in payments and banking,” Hsu added. That heightened risk of fraud, in turn, gives high-trust bank platforms a greater chance of securing and retaining customers over the long term.
The threats posed by AI are worsened by its lack of “a clear accountability model,” Hsu added. AI makes it easier to evade responsibility for bad outcomes, potentially causing an erosion of trust. Hsu said a shared responsibility framework for fraud, scams and ransomware attacks could be a good starting point.
Hsu placed AI use into three conceptual groups: inputs, in which AI provides information for bank personnel to act on; co-pilots, in which AI supports bank personnel in doing more tasks faster; and pilots, in which AI executes activities on its own, per instructions coded by programmers.
Nearly half of bank executives are implementing generative AI solutions, citing fraud detection and prevention as the technology’s No. 1 application, according to a June 2023 survey from accounting firm KPMG. Sixty-nine percent identified generative AI privacy concerns as a high priority in June, up from 56 percent earlier in the year. Two-thirds said their organization will likely use the technology to enable more advanced virtual assistants and chatbots, and 62 percent expect it will be used for customer service and personalization.
“The U.S. is the leader in AI innovation,” Hsu said. “To maintain this role, the U.S. needs to balance technological prowess with trusted use. Developing a robust accountability model can help.”