Good morning. It is a pleasure to join you today to discuss artificial intelligence (AI) and the critical role it plays in cybersecurity and risk management.1 This event is one of a series of roundtables convened by the Financial Stability Oversight Council (FSOC) on AI, bringing together public- and private-sector participants to share perspectives.
Previous discussions about the use of AI tools have debated their risks and benefits.2 Today, we are facing the rapid evolution of AI tools much earlier than many expected, and the risks and benefits are now more tangible and clear. Anthropic's Mythos—an AI model that identifies cyber vulnerabilities—highlights the dynamic nature of this technology and the rapid pace at which its capabilities can evolve. The improved ability to identify cyber vulnerabilities comes with the potential to address these weaknesses and thereby enhance cybersecurity. And of course, we have already seen that AI has the potential to improve efficiency and effectiveness, particularly within the financial system.
AI has become an integrated part of our daily experience. Financial institutions are developing their own applications and implementing vendor-assisted tools. Banks of all sizes benefit from its greater efficiency, speed, and content generation. Whether used in targeted modeling or enterprise-wide tools, AI will become a force multiplier for the financial system and the broader U.S. economy.
Today, I will discuss the use of AI in the banking system, the Federal Reserve’s supervisory approach for financial institutions, and how the benefits and risks of AI are contributing to the international financial stability conversation.
AI in the Banking System
For nearly a decade, our supervisors have been engaging with banks to monitor their use of AI. Over that time, our approach has evolved to increase and enhance our understanding of its application and potential. An important part of our job as supervisors is to ensure that banks are aware of and attentive to the risks and challenges inherent in its use, so it can be deployed responsibly and effectively. And we need to ensure that there is a path for innovation, which includes the use of AI.
To mitigate and manage risk, we must understand the specifics regarding the use case for its deployment. Will it be used for material tasks? Is it broadly accessible to employees or limited? And does its use directly affect consumers and customers, as with credit determinations?
We regularly discuss AI with bankers at all levels of the Federal Reserve System. This includes direct conversations with individual banks, and broader conversations on principles for successful adoption. We recognize that smaller banks may not have access to the same resources as their larger peers but still need to innovate and provide the latest technology to their customers. Therefore, it is necessary to ensure that our supervisory guidance does not hinder access to and implementation of innovation. This includes emphasizing banks' flexibility to develop, implement, and manage AI in ways consistent with their unique structure, business, and culture.
Supervisory Approach
Over the past year, the Federal Reserve has been working to shift our supervisory focus to identifying and remediating material financial risk. To ensure safety and soundness, we are prioritizing those matters that could lead to a bank's failure.
I take a similar approach when considering the use of AI in the banking system. The rapid adoption and evolution of its capabilities reinforces the need for adaptable supervisory guidance and expectations. How should we consider third-party risk-management expectations for vendor-provided AI tools or partnerships? What aspects of model risk management should apply to AI? AI presents clear risks but also has the potential to offer tremendous benefits for cybersecurity. How should regulators think about this balance of risks?
Our approach should support banks in implementing AI tools safely, effectively, and efficiently. Today, banks are relying on existing risk-management frameworks to guide their use of AI. While these supervisory tools are intended to support banks in applying sound governance and risk management, we should assess whether our supervisory guidance is fit for the future.
Together with the OCC and FDIC, the Fed recently amended our model risk management guidance to clarify that it does not apply to generative or agentic AI.3 Over time, supervisors expanded the scope of the previous guidance beyond its original purpose to apply it in unintended ways. We recognize that rapidly evolving and novel technologies like AI may require a different approach. The revised guidance now applies narrowly to traditional models and basic AI applications. Going forward, we expect other risk-management and governance practices to support adoption of generative and agentic AI in ways that will encourage ongoing innovation.
We are also working to update and simplify our third-party risk-management guidance to reflect actual and future risk. For too long, this guidance has been vague in its scope and application. Innovation is a necessary component of financial services, and supervisory guidance should not be a barrier for banks to engage with new and evolving tools and technologies. Supervisors must take a balanced approach to new and emerging risks and the expected benefits while preserving the safety of the financial system.
This brings me to the impact of Anthropic's Mythos AI model. We know that this model accelerates the process of detecting cyber vulnerabilities. On one hand, this capability enables firms to address self-identified vulnerabilities, thereby enhancing cybersecurity. On the other hand, if used maliciously, it could be deployed to identify and exploit weaknesses. As we learn more about this tool and others to be released in the coming weeks and months, we will continue to consider effective supervisory approaches for these and other emerging capabilities.
As we position ourselves to supervise emerging technology: First, we must continue to stay abreast of new developments and to coordinate efforts across government. Earlier this month, Secretary Bessent and Chair Powell convened the largest banks to discuss the cybersecurity implications of the Mythos model. This type of discussion is extremely beneficial to ensuring the protection of the banking system.
Second, regular communication regarding the unique risks of novel and potentially broadly impactful innovation is necessary. Banks of all sizes have expressed concern about access to the Mythos model. Regulators will continue to focus on monitoring critical developments and communicating these risks to supervised institutions, as well as on refining our cybersecurity approach.
Finally, we need to recognize that any regulatory or supervisory response must accommodate this evolution by regularly reviewing our approach and expectations and communicating with industry. Feedback from industry is an important part of this approach, including from banks, financial firms, service providers, and other experts. These views will be extremely valuable as we refine our supervisory approach and response.
As we work to support innovation, it is necessary to determine whether our framework is appropriate. Have we established reasonable and effective supervisory expectations? Are bankers comfortable discussing emerging risks and new technologies with supervisory teams? Have we successfully implemented a pro-innovation mindset that allows responsible innovation and AI adoption to occur within the banking system?
International Engagement
The global financial system is connected and integrated, with some of the largest U.S. banks expanding operations abroad, and with foreign banks expanding operations in the United States. While these connections support U.S. economic growth and U.S. interests abroad, they also pose risks to the global financial system.
One aspect of our regulatory work is ensuring consistency and a level playing field for our internationally active institutions. In this regard, in my role as chair of the Financial Stability Board's Standing Committee on Supervisory and Regulatory Cooperation, we are working together to address financial stability issues related to supervisory and regulatory policies. When I assumed this role and established priorities, a primary focus was to identify sound practices for AI adoption, use, and innovation, and to publish our findings and conclusions in a report for stakeholder comment. The report will present a balanced analysis of both the potential benefits and challenges of AI use, including key principles and examples of successful AI deployment. To complement this work, we should also consider the value of international consistency in expectations for AI use, including in cybersecurity and critical infrastructure, especially where international supervisory expectations are incompatible with home country expectations.
Our U.S. Treasury and SEC colleagues are working closely with us on this workstream. I expect the consultation draft of this report will be released in the third quarter, and I encourage you to review and provide feedback on the report once it is released.
Closing Thoughts
I'd like to again thank Paras and Christina for the invitation to participate in today's roundtable discussion.
The implications of AI extend far beyond the banking and financial systems.
I appreciate FSOC’s hosting this series of discussions and I look forward to learning the perspectives of today’s participants.
1. The views expressed here are my own and not necessarily those of my colleagues on the Federal Reserve Board or the Federal Open Market Committee. The remarks were delivered at an event April 27, 2026, and published May 1, 2026, following conclusion of the April 28-April 29 meeting of the FOMC.
2. See Michelle W. Bowman, "Artificial Intelligence in the Financial System (PDF)," remarks delivered at the 27th Annual Symposium on Building the Financial System of the 21st Century: An Agenda for Japan and the United States, Washington, D.C., November 22, 2024.
3. See SR letter 26-2, Attachment, "Supervisory Guidance on Model Risk Management (PDF)," April 17, 2026.