Financial Stability Implications of Artificial Intelligence (AI)
AI, especially Generative AI (GenAI) and Large Language Models (LLMs), has seen significant adoption in financial services, primarily for improving operations and regulatory compliance.
(The full report is attached if you want to read it.)
Developments in AI
Advances in deep learning, big data, and computational power have driven AI adoption. Technologies like LLMs have transformed text and information processing capabilities.
AI tools are used across multiple areas such as fraud detection, credit underwriting, and regulatory compliance. However, many financial institutions (FIs) adopt AI cautiously, citing risks related to explainability, governance, and data quality.
Selected Use Cases
Industry Use Cases:
Customer-Focused: AI is used in credit scoring, insurance pricing, chatbots, and personalized marketing.
Operations: AI enhances capital optimization, model risk management, and trading strategies.
Regulatory Compliance: AI tools aid in anti-money laundering (AML) and fraud detection (a rough illustrative sketch follows this section).
Regulatory and Supervisory Use Cases:
Supervisory authorities increasingly use AI for real-time economic analysis, inspections, and data management.
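The report does not describe any particular technique, but as a purely illustrative sketch of the kind of AML/fraud-detection tool mentioned above, the snippet below applies an unsupervised anomaly detector (scikit-learn's IsolationForest) to synthetic transaction features. All features, values, and thresholds are hypothetical and chosen only to make the example self-contained.

```python
# Illustrative sketch only: a minimal anomaly-based transaction-monitoring model
# of the kind referenced under "Regulatory Compliance" (AML / fraud detection).
# The features, data, and contamination rate below are hypothetical assumptions,
# not taken from the FSB report.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic transaction features: log-amount, hour of day, and number of
# transactions by the same account over the previous 24 hours.
normal = np.column_stack([
    rng.normal(loc=4.0, scale=1.0, size=5000),   # routine payment sizes (log scale)
    rng.integers(8, 20, size=5000),               # business-hours activity
    rng.poisson(lam=2.0, size=5000),              # typical daily frequency
])
suspicious = np.column_stack([
    rng.normal(loc=8.0, scale=0.5, size=50),      # unusually large transfers
    rng.integers(0, 5, size=50),                  # overnight activity
    rng.poisson(lam=15.0, size=50),               # bursts of transactions
])
transactions = np.vstack([normal, suspicious])

# Unsupervised detector: no fraud labels are required, which reflects how
# monitoring tools often operate when confirmed cases are scarce.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

# predict() returns -1 for observations the model treats as outliers.
flags = model.predict(transactions)
flagged = np.where(flags == -1)[0]
print(f"Flagged {len(flagged)} of {len(transactions)} transactions for review")
```

In practice, flags like these would typically feed a human review queue rather than trigger automatic action, which is one reason the model governance and explainability concerns discussed below matter.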
Financial Stability Implications
The report highlights the following vulnerabilities posed by AI:
Third-Party Dependencies:
Heavy reliance on a small number of specialized service providers, such as cloud computing platforms and hardware vendors, increases systemic risk.
Market concentration in AI supply chains (e.g., GPUs, cloud services) exacerbates these vulnerabilities.
Market Correlations:
Widespread reliance on the same AI models and data sources can produce correlated positions and herding behavior, amplifying market stress and liquidity strains during downturns.
AI-driven automated trading systems may amplify volatility and increase the risk of flash crashes.
Cybersecurity:
GenAI tools can enhance malicious actors' capabilities, increasing risks of cyberattacks, fraud, and data breaches.
Cyber incidents targeting financial infrastructures could undermine market confidence.
Model and Data Governance:
The complexity and lack of explainability in AI models make them difficult to validate and govern effectively.
Training data quality and transparency issues further increase model risks.
Other Risks:
GenAI may be misused for financial fraud, disinformation, and other malicious activities that could disrupt markets.
Policy Recommendations
To address these challenges, the FSB recommends:
Enhancing Monitoring: Collect more comprehensive data on AI adoption and usage in finance.
Evaluating Frameworks: Assess whether existing regulations adequately address AI-related vulnerabilities.
Improving Regulatory Coordination: Foster international cooperation and knowledge sharing on AI policies.
Strengthening Governance: Promote robust AI governance standards within financial institutions.
Long-Term Considerations
AI adoption could reshape the competitive landscape in finance, favoring larger firms with more resources.
Potential macroeconomic impacts include labor market shifts, changes in productivity, and increased energy consumption tied to AI technologies.