Calculating (Small) Company Credit Risk (2024)

Understanding the creditworthiness of counterparties is a crucial element in business decision-making. Investors need to know the likelihood that money invested in bonds or in the form of loans will be repaid. Corporations must quantify the creditworthiness of suppliers, clients, acquisition candidates, and competitors.

The traditional measure of credit quality is a corporate rating, such as that produced by S&P, Moody's, or Fitch. Yet such ratings are available only for the largest firms, not for the millions of smaller corporations. To quantify the creditworthiness of these smaller companies, analysts often turn to alternative methods, chiefly probability of default (PD) models.

Calculating PDs

Calculating PDs requires modeling sophistication and a large data set of past defaults, along with a complete set of fundamental financial variables for a large universe of firms. For the most part, corporations that elect to use PD models license them from a handful of providers. However, some large financial institutions build their own PD models.

Building a model requires the collection and analysis of data, including gathering fundamentals for as long a history as is available. This information typically comes from financial statements. Once the data is compiled, it's time to form financial ratios or "drivers"—variables that fuel the result. These drivers tend to fall into six categories: leverage ratios, liquidity ratios, profitability ratios, size measures, expense ratios, and asset quality ratios. These measures are broadly accepted by credit analysis professionals as relevant to estimating creditworthiness.
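As a minimal sketch of the driver-forming step, the snippet below computes one example ratio from each of the six categories. All field names and figures are hypothetical, not drawn from any particular data vendor or filing format.

```python
import math

def compute_drivers(f):
    """Form one example ratio from each of the six driver categories."""
    return {
        # Leverage: share of the asset base financed by debt
        "debt_to_assets": f["total_debt"] / f["total_assets"],
        # Liquidity: ability to cover short-term obligations
        "current_ratio": f["current_assets"] / f["current_liabilities"],
        # Profitability: return generated on total assets
        "return_on_assets": f["net_income"] / f["total_assets"],
        # Size: log scale keeps large firms from dominating the regression
        "log_revenue": math.log(f["revenue"]),
        # Expenses: share of revenue consumed by operating costs
        "expense_ratio": f["operating_expenses"] / f["revenue"],
        # Asset quality: doubtful receivables relative to total assets
        "doubtful_share": f["doubtful_receivables"] / f["total_assets"],
    }

# Hypothetical small firm (figures in thousands)
firm = {
    "total_debt": 800, "total_assets": 1000, "current_assets": 300,
    "current_liabilities": 250, "net_income": 10, "revenue": 600,
    "operating_expenses": 550, "doubtful_receivables": 40,
}
drivers = compute_drivers(firm)
print(drivers["debt_to_assets"])  # 0.8
```

In practice, dozens of candidate ratios per category would be computed for every firm-year in the sample before any statistical screening takes place.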

The next step is to identify which of the firms in your sample are "defaulters"—those that have actually defaulted on their financial obligations. With this information in hand, a "logistic" regression model can be estimated. Statistical methods are used to test dozens of candidate drivers and then to choose those that are most significant in explaining future defaults.

The regression model relates default events to the various drivers. Logistic regression is used because its outputs are bounded between 0 and 1, which map directly to a 0%-100% probability of default. The coefficients from the final regression constitute a model for estimating the default probability of a firm from its drivers.
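Once fitted, the model amounts to a logistic function of the drivers. The sketch below uses made-up coefficients purely for illustration; in a real model they would come from the regression fit. Their signs follow credit intuition: more leverage raises PD, while greater size and profitability lower it.

```python
import math

def default_probability(drivers, coefs, intercept):
    """PD = 1 / (1 + exp(-(intercept + sum_i coef_i * driver_i))).
    The logistic form guarantees an output strictly between 0 and 1."""
    score = intercept + sum(coefs[k] * drivers[k] for k in coefs)
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical coefficients (not from any fitted model)
coefs = {"debt_to_assets": 4.0, "log_revenue": -0.5, "return_on_assets": -10.0}

# A small, highly leveraged, barely profitable firm
firm = {"debt_to_assets": 0.8, "log_revenue": 6.0, "return_on_assets": 0.01}
pd_est = default_probability(firm, coefs, intercept=0.0)
print(round(pd_est, 3))  # 0.525 — a very high probability of default
```

Whatever values the drivers take, the logistic link keeps the output inside the 0-1 range, which is what makes it interpretable as a probability.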

Finally, you can examine performance measures for the resulting model. These will likely be statistical tests measuring how well the model has predicted defaults. For example, the model may be estimated using financial data for a five-year period (2001-2005). The resulting model is then used on data from a different period (2006-2009) to predict defaults. Since we know which firms defaulted over the 2006-2009 period, we can tell how well the model performed.
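One common performance measure for such an out-of-sample test is the area under the ROC curve (AUC): the probability that a randomly chosen defaulter received a higher predicted PD than a randomly chosen survivor. The toy holdout data below is invented for illustration.

```python
def auc(scores, labels):
    """Area under the ROC curve, computed directly as the probability that
    a randomly chosen defaulter scored above a randomly chosen survivor."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical holdout: PDs predicted by a model fit on 2001-2005 data,
# scored against actual 2006-2009 outcomes (1 = firm defaulted).
predicted_pd = [0.02, 0.35, 0.10, 0.60, 0.05, 0.45]
defaulted    = [0,    1,    0,    1,    0,    0]
print(auc(predicted_pd, defaulted))  # 0.875
```

An AUC of 0.5 means the model ranks firms no better than chance, while 1.0 means every defaulter was ranked riskier than every survivor; real PD models fall somewhere in between.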

To understand how the model works, consider a small firm with high leverage and low profitability. We've just defined three of the model drivers for this firm. Most likely, the model will predict a relatively high probability of default for this firm because it is small and, therefore, its revenue stream may be erratic. The firm has high leverage and, therefore, may have a high interest payment burden to creditors. And the firm has low profitability, which means it generates little cash to cover its expenses (including its heavy debt burden). Taken as a whole, the firm is likely to find that it is unable to make good on debt payments in the near future. This means it has a high probability of defaulting.

Art vs. Science

To this point, the model-building process has been entirely mechanical, relying on statistics. Now the "art" of the process comes into play. Examine the drivers selected for the final model (likely anywhere from six to ten). Ideally, there should be at least one driver from each of the six categories described earlier.

The mechanical process described above, however, can lead to a situation in which a model calls for six drivers, all drawn from the leverage ratio category, but none representing liquidity, profitability, etc. Bank lending officers who are asked to use such a model to assist in lending decisions would likely complain. The strong intuition developed by such experts would lead them to believe that other driver categories must also be important. The absence of such drivers could lead many to conclude that the model is inadequate.

The obvious solution is to replace some of the leverage drivers with drivers from missing categories. This raises an issue, however. The original model was designed to provide the highest statistical performance measures. By changing the driver composition, it is likely that the model's performance will decline from a purely mathematical perspective.

Thus, a tradeoff must be made between inclusion of a broad selection of drivers to maximize intuitive appeal of the model (art) and the potential decrease in model power based on statistical measures (science).

Criticisms of PD Models

The quality of the model depends primarily on the number of defaults available for calibration and the cleanliness of the financial data. In many cases, this is not a trivial requirement, as many data sets contain errors or suffer from missing data.

These models utilize only historical information, and sometimes the inputs are out of date by up to a year or more. This dilutes the model's predictive power, especially if there has been some significant change that has rendered a driver less relevant, such as a change in accounting conventions or regulations.

Models should ideally be created for a specific industry within a specific country. This ensures that the unique economic, legal, and accounting factors of the country and industry can be properly captured. The challenge is that there is usually a scarcity of data to begin with, especially in the number of identified defaults. If that scarce data must be further segmented into country-industry buckets, there are even fewer data points for each country-industry model.

Since missing data is a fact of life when building such models, a number of techniques have been developed to fill in those numbers. Some of these alternatives, however, may introduce inaccuracies. Data scarcity also means that default probabilities calculated from a small sample may differ from the underlying actual default probabilities for the country or industry in question. In some cases, it is possible to scale the model outputs to match the underlying default experience more closely.
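One simple form of that scaling is proportional calibration: rescale the model's PDs so their average matches an observed long-run default rate. This is only one of several approaches practitioners use, and the figures below are hypothetical.

```python
def scale_pds(model_pds, target_rate):
    """Proportionally rescale model PDs so their average matches a target
    long-run default rate; PDs are capped at 1.0 to remain valid
    probabilities. A simple calibration, not the only one in use."""
    sample_rate = sum(model_pds) / len(model_pds)
    factor = target_rate / sample_rate
    return [min(p * factor, 1.0) for p in model_pds]

# A model trained on a default-heavy sample averages a 5% PD, but the
# country's observed long-run default rate is only 2% (made-up figures).
pds = [0.02, 0.04, 0.06, 0.08]
scaled = scale_pds(pds, target_rate=0.02)
print(sum(scaled) / len(scaled))  # now averages ~0.02
```

Note that this adjusts the overall level of the PDs while preserving the rank-ordering of firms, so discrimination measures such as the AUC are unaffected.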

The modeling technique described here can also be used to calculate PDs for large corporations. There is much more data available on large firms, however, as they are typically publicly listed with traded equity and significant public disclosure requirements. This data availability makes it possible to create other PD models (known as market-based models) that are more powerful than the ones described above.

Conclusion

Industry practitioners and regulators are well aware of the importance of PD models and their primary limitation—data scarcity. Accordingly, around the world there have been various efforts (under the auspices of Basel II, for example) to improve the ability of financial institutions to capture useful financial data, including the precise identification of defaulting firms. As the size and precision of these data sets increase, the quality of the resulting models will also improve.

