US Treasury warned against broad AI framework in financial services

The World Federation of Exchanges (WFE) has urged the US Treasury to ensure its AI regulatory framework isn't too broad or complex by seeking an "appropriate balance between innovation and protection".

Responding on Tuesday to the US Treasury's consultation on the uses of AI in financial services, the global industry association for exchanges and central clearing said that while there are valid concerns about the uncertainties of AI technology, which require a "close look at regulation" to protect investors and other stakeholders, there must also be cohesion and alignment amongst regulators and international standard setters.

The organisation, which represents the providers of more than 250 market infrastructures in the exchange and clearing industry, warned that if the US framework is too broad, the benefits that AI brings to economic growth, productivity, automation, and innovation will "be at risk".

“AI regulation must enhance protection whilst avoiding the curtailment of progress and modernisation,” said Nandini Sukumar, chief executive, WFE. “The definition of AI in the President’s Executive Order is overly broad and could create unnecessary complexity by imposing extensive compliance obligations if implemented for financial services.”

The WFE has urged the Treasury to tailor the definition of AI so that it does not capture more than is necessary, arguing that an overly broad definition would create "onerous restrictions" that are not proportionate to the risks posed by different tools.

It also said that the definition of AI should focus on computer systems with the ability to make decisions or predictions based on automated statistical learning.

“While these technological innovations and the associated concerns about managing generative AI are significant, it is important to remember that, as trusted third parties providing secure and regulated platforms for trading securities, our members are already carefully scrutinising tools and establishing controls to govern AI use," added Richard Metcalfe, head of regulatory affairs, WFE. "The US Treasury should therefore take care to design an AI regulatory framework which is principles based, to maintain flexibility and encourage innovation."

The WFE assured the Treasury that while AI deployment by malicious actors is an emerging risk, financial services firms are already well aware of this and are tackling the issue.

It added that even though traditional risk management techniques can be used to manage the risks of AI systems, more work needs to be done to develop AI-specific risk management tools.

The organisation also said that while third parties will be valuable in helping to develop AI tools and risk management tools, the Treasury is "right to be cognisant of the risks around BigTech firms utilising their market dominance".


