Introduction
After a financial institution implements a CECL framework, the next step is to have the model reviewed by a validation team for reasonableness and fitness for use. This can mean submitting it to a separate internal group or engaging a third party to perform the review. At this stage in the process, the CECL team can step into a support role, but the expectations surrounding the validation can create additional work and uncertainty.
In this post, we would like to provide a brief guide on initial steps that a CECL team can take to make sure that a model validation can be conducted efficiently. At a glance, these steps are:
- Ask your vendor the right questions.
- Prepare model documentation.
- Explain the model’s performance.
- Show the model’s context.
- Describe model data.
1. Ask your vendor the right questions
Many banks use a vendor model instead of developing the process in-house. While this frees up analyst time and avoids dedicating additional resources to model development, it doesn’t shift the model owner’s responsibility for the model’s implementation and performance. Consequently, it’s important to collect as many workpapers from the vendor as possible, including:
- A document explaining the technical details (i.e., mathematical framework and its justification) of the model.
- Ongoing monitoring and testing used by the vendor to ensure that the model works appropriately.
- Any additional testing the vendor is using.
- Descriptions of the controls and procedures on the development data used by the vendor.
- Recommended steps in the case of unsatisfactory model results, as well as vendor contacts.
Different vendors have different levels of responsiveness to requests for additional information. Because of this, it’s helpful to confirm during the vendor selection/onboarding process that the vendor is engaged and committed to providing this support. If the model is already implemented, the model owner should consider steps to compensate for missing model information, such as augmented testing.
We’ve previously written a detailed post on managing vendor models here.
2. Prepare model documentation
After gathering as much information on the model’s development as possible, it’s necessary to create a comprehensive description of the model. For all intents and purposes, the model document is the model to its users and to anyone reviewing it: it serves as a reference for the model owner and external reviewers, a way to train new users, and a resource for other teams at the company. Special attention should be paid to describing the business area, the justification for the model, how it was developed, and how it is being monitored.
High quality model documentation should allow a knowledgeable user to recreate or approximate the model using just the development data and the model documentation. A model document will provide information such as:
- A description of the business application and purpose of the model.
- Information on the data used to develop the model.
- A description of the steps used to ensure the model input data is correct.
- The modeling strategy used by the model developer, including a description of any quantitative components, modeling choices, and assumptions made by the model owner.
- Performance monitoring and governance details.
Often, many of the findings and weaknesses observed in a model validation result from missing model documentation, which leads to implementation gaps for the model owner. To compensate for poor documentation, the model owner must fill in the missing information using strategies such as augmented testing. While this can help relieve some concerns over the quality of the model, it’s important to note that it doesn’t provide the theoretical grounding and interpretability necessary to completely remove the risk. We’ve discussed augmented outcomes analysis in the context of vendor models in our previous post here.
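To make the idea of augmented testing concrete, here is a minimal outcomes-analysis sketch in Python. The quarterly loss rates and the choice of error metric are hypothetical illustrations, not figures or standards from this post.

```python
# Hedged sketch of a simple outcomes-analysis ("backtesting") check of the
# kind that can supplement thin vendor documentation. All figures are
# hypothetical.

projected = [1.20, 1.35, 1.10, 1.50]   # projected loss rates (%) by quarter
actual    = [1.25, 1.60, 1.05, 1.40]   # realized loss rates (%) by quarter

# Mean absolute error between projections and realized outcomes.
mae = sum(abs(p - a) for p, a in zip(projected, actual)) / len(projected)
print(f"Mean absolute error: {mae:.3f} percentage points")
```

A check like this quantifies how closely the model tracks outcomes, but as noted above, it cannot substitute for the theoretical grounding that proper documentation provides.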
3. Explain the model’s performance
Performance monitoring demonstrates that the model is fit for use based on standards created by the model owner. Being able to explain the model’s performance begins with the fundamental details behind the model owner’s comfort with current results: key performance indicators (and their relevant thresholds), escalation paths and actions for threshold breaches, and analysis of any unusual behavior observed during the performance monitoring period.
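As an illustration, the sketch below checks hypothetical KPIs against thresholds and flags breaches for escalation. The KPI names and threshold values are assumptions for illustration only; each model owner defines their own standards.

```python
# Minimal sketch of a KPI threshold check. The metric names, threshold
# values, and messages are illustrative placeholders, not a prescribed
# monitoring standard.

KPI_THRESHOLDS = {
    "forecast_error_pct": 10.0,          # abs. % gap between projected and actual losses
    "population_stability_index": 0.25,  # common rule-of-thumb PSI cutoff
}

def check_kpis(observed: dict) -> list:
    """Return a list of breaches to escalate per the model owner's
    documented escalation path."""
    breaches = []
    for kpi, threshold in KPI_THRESHOLDS.items():
        value = observed.get(kpi)
        if value is not None and value > threshold:
            breaches.append(f"{kpi} = {value:.2f} exceeds threshold {threshold:.2f}")
    return breaches

# Example: a quarter where forecast error breaches its threshold.
print(check_kpis({"forecast_error_pct": 14.2, "population_stability_index": 0.08}))
```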
Being able to explain the model’s performance also extends to providing background on how the model has performed in the past and discussing the impact of recent or anticipated events. A common example from the past eighteen months is the model inaccuracy introduced by the spike in unemployment rates at the onset of the pandemic in 2020: models that used unemployment as an input often projected results very different from what financial institutions expected.
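To show the mechanism behind this, the toy model below uses a single unemployment driver with invented coefficients; it is not a real CECL model, but it illustrates how an input spike can push projections far from expectations.

```python
# Illustrative only: a toy loss-rate model with a single unemployment
# driver. The coefficients are made up to show the mechanism, not to
# reflect any real CECL model.

def projected_loss_rate(unemployment_pct: float,
                        intercept: float = 0.2,
                        beta: float = 0.15) -> float:
    """Toy linear relationship: loss rate (%) as a function of unemployment (%)."""
    return intercept + beta * unemployment_pct

# Pre-pandemic unemployment near 3.5% vs. the April 2020 spike near 14.8%.
print(projected_loss_rate(3.5))    # ~0.73% projected loss rate
print(projected_loss_rate(14.8))   # ~2.42% -- a large jump driven solely by the input
```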
4. Show the model’s context
A model’s results are more than the raw output of the model; they extend to the downstream models and processes that consume the model’s outputs. These downstream uses are key in informing internal and external parties of the model’s significance and the potential impact of unusual or inaccurate results. A simple example of downstream effects is the impact of changes in default projections on ACL calculations.
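As a simple hedged illustration of this downstream link, the sketch below applies a basic expected-loss decomposition (PD × LGD × EAD) to a hypothetical portfolio segment to show how a shift in projected defaults flows into the ACL.

```python
# Simplified sketch of the downstream link between default projections
# and the ACL, using a basic expected-loss decomposition (PD x LGD x EAD).
# The portfolio figures are hypothetical.

def allowance(pd_rate: float, lgd: float, ead: float) -> float:
    """Expected credit loss for a segment: PD x LGD x exposure at default."""
    return pd_rate * lgd * ead

ead = 50_000_000   # $50MM segment exposure
lgd = 0.40         # 40% loss given default

base  = allowance(0.02, lgd, ead)   # 2% projected default rate
shock = allowance(0.03, lgd, ead)   # 3% after an upstream data or model change

print(f"ACL impact of a 1pt PD shift: ${shock - base:,.0f}")  # $200,000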
When reviewing the greater context of the model, it’s also necessary to keep in mind the upstream processes that provide inputs to the model, as these can have a material impact on model results.
Once this work on the larger impact of the model is performed, it becomes much simpler to document and explain choices like model settings, performance monitoring thresholds, and the frequency of performance monitoring.
When preparing for a validation, it’s important to be able to demonstrate comfort with the way the model is working. This includes explaining the model’s performance as described above, but it also extends to important factors in the model’s operation, such as the model’s settings and the reasoning behind them, and to being able to speak to the model’s true impact on the financial institution’s operations.
5. Describe model data
An institution can be using an extremely high-quality model with flawless ongoing monitoring and still have significant issues in its operation (and validation) if the model owner can’t speak to the data being used. To prepare for the data review portion of a validation, the model owner should be able to describe:
- The model development data set, or the data used to train the model.
- The relevance of current data compared to the model development data.
- Data quality control procedures to ensure that the data being used by the institution doesn’t contain any erroneous or unusual values (FICO scores of 200, for instance); see the sketch after this list.
- All data definitions and transformations.
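Below is a minimal sketch of the kind of automated data quality check described above. The field names and valid ranges are assumptions for illustration; institutions should substitute their own documented data definitions.

```python
# Minimal sketch of automated data quality checks. Field names and valid
# ranges are illustrative assumptions, not prescribed standards.

VALID_RANGES = {
    "fico_score": (300, 850),   # a score of 200 falls outside the valid FICO range
    "ltv_pct": (0, 200),
    "balance": (0, float("inf")),
}

def validate_record(record: dict) -> list:
    """Flag fields with missing or out-of-range values."""
    issues = []
    for field, (low, high) in VALID_RANGES.items():
        value = record.get(field)
        if value is None:
            issues.append(f"{field}: missing")
        elif not (low <= value <= high):
            issues.append(f"{field}: {value} outside [{low}, {high}]")
    return issues

print(validate_record({"fico_score": 200, "ltv_pct": 85, "balance": 125_000.0}))
# ['fico_score: 200 outside [300, 850]']
```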
Conclusion
CECL readiness doesn’t end with implementing a model framework; it continues through validating the model to ensure that it meets risk management best practices. The CECL team can prepare for a validation using the simple steps outlined in this article. Properly prepared, model owners can present the documentation, testing, and procedures that allow validation teams to verify that the implementation is fit for use. Taking these steps can also ensure that the business line is organized to minimize the time spent initiating and supporting validation exercises.
We’ve provided other guides to model risk management and our validation process on our blog here. If you have a model that needs documentation, validation, or a performance monitoring plan, you can contact our team today or send us an e-mail at connect@mountainviewra.com to discuss your institution’s needs.
Written by Peter Caya, CAMS
About the Author
Peter advises financial institutions on the statistical and machine learning models they use to estimate loan losses, as well as the systems used to identify fraud and money laundering. In this role, Peter draws on his mathematical knowledge and model risk management experience to inform business line users of the risks and strengths of the processes they have in place.