
Systems Need to Monitor AI Deployment in Financial Sector and Identify Emerging Risks

by CIOAXIS Bureau


The use of AI by financial sector firms has been growing in recent years. AI has the potential to increase innovation and efficiency, but it may also pose risks to financial stability, said the Financial Stability Oversight Council of the US.

Established in 2010 under the Dodd-Frank Wall Street Reform and Consumer Protection Act, the Financial Stability Oversight Council provides comprehensive monitoring of the stability of the US financial system.

The Council recommends monitoring the rapid developments in AI, including generative AI, to ensure that oversight structures keep up with or stay ahead of emerging risks to the financial system while facilitating efficiency and innovation.

To support this effort, the Council recommends financial institutions, market participants, and regulatory and supervisory authorities further build expertise and capacity to monitor AI innovation and usage and identify emerging risks.

The Council notes existing requirements and guidance may apply to AI. The Council also supports the international effort by the G7 Cyber Expert Group to coordinate cybersecurity policy and strategy across the eight G7 jurisdictions and address how new technologies, such as AI and quantum computing, affect the global financial system.

Artificial intelligence (AI) is a set of technologies that has been around for decades. Its use in financial services, however, has increased in recent years, thanks to more advanced algorithms, increased volumes of data, improvements in data storage and processing power, and cost reductions across many of these dimensions. AI has the potential to increase efficiency and innovation, but it also introduces certain risks, the report said.

Customer-service programmes, such as chatbots, fall into a lower risk class, with only minimal transparency rules applied to them.

Users must simply be made aware they are interacting with an AI application and not with humans. They can then decide for themselves whether or not to continue using the AI program, DW reported.

The US, the UK, and 20 other countries have issued data protection rules and recommendations for AI developers, but none of these are legally binding, the expectation being that big tech companies working on AI should voluntarily monitor themselves.

A ‘Safety Institute’ in the US is meant to assess the risks of AI applications, while President Joe Biden has instructed developers to disclose their tests if national security, public health or safety are at risk.

In China, the use of AI by private customers and companies is severely restricted because the communist regime fears it will no longer be able to censor learning systems as easily as it censors the internet.

ChatGPT, for example, is not available in China. Facial recognition, however, is already being used on a large scale on behalf of the state, DW reported.

– IANS


