Risk Analysis of the Future: Assessing Bank Credit with Large Language Models

Author: 
Cameron Abidi Freeman
Adviser(s): 
Stephen Slade
Abstract: 

Recent advances in large language models (LLMs) have enabled artificial intelligence researchers to build systems that process more input data than ever before. Anthropic’s Claude 3 and OpenAI’s GPT-4 can quickly read and summarize thousands of lines of text, perform increasingly reliable arithmetic, and pick out numerical patterns not readily noticeable to humans. This project demonstrates the feasibility of employing these models to analyze risk from financial data, namely to assess a bank’s risk of failure and estimate its credit rating. The LLMs are given qualitative and quantitative inputs, including anonymized descriptions of dozens of banks, their balance sheets and income statements, banking-industry metrics, and macroeconomic data, and they return risk ratings along with the reasoning behind them. These assessments are checked against S&P credit ratings scraped from U.S. Securities and Exchange Commission disclosures. The report finds that the latest LLMs, particularly Claude 3 Opus, can draw meaningful conclusions about risk from large amounts of unfiltered financial data, producing ratings that align closely with expert assessments.

Term: 
Spring 2024