This is the accessible text file for GAO report number GAO-10-53 entitled 'Small Business Administration: Actions Needed to Improve the Usefulness of the Agency's Lender Risk Rating System' which was released on December 7, 2009.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. Because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Report to Congressional Requesters:

United States Government Accountability Office:
GAO:

November 2009:

Small Business Administration: Actions Needed to Improve the Usefulness of the Agency's Lender Risk Rating System:

GAO-10-53:

GAO Highlights:

Highlights of GAO-10-53, a report to congressional requesters.

Why GAO Did This Study:

The Small Business Administration (SBA) guarantees individual loans that lenders originate. The agency uses its Loan and Lender Monitoring System (L/LMS) to assess the individual risk of each loan, and SBA's contractor developed a lender risk rating system based on L/LMS data. However, questions have been raised about the extent to which SBA has used its lender risk rating system to improve its oversight of lenders.

GAO was asked to examine (1) how SBA's risk rating system compares with those used by federal financial regulators and lenders and the system's usefulness for predicting lender performance and (2) how SBA uses the lender risk rating system in its lender oversight activities. To meet these objectives, GAO reviewed SBA documents; interviewed officials from three federal financial regulators and 10 large SBA lenders; analyzed SBA loan data; and interviewed SBA officials.

What GAO Found:

SBA's lender risk rating system uses some of the same types of information that federal financial regulators and selected large lenders use to conduct off-site monitoring, but its usefulness has been limited because SBA has not followed common industry standards when validating the system--that is, assessing the system's ability to accurately predict outcomes. Like the federal financial regulators and 10 large lenders GAO interviewed, SBA's contractor developed lender risk ratings based on loan performance data and prospective, or forward-looking, measures (such as credit scores). Using SBA data, GAO undertook a number of evaluative steps to test the lender risk rating system's predictive ability. GAO found that the system was generally successful in distinguishing between higher- and lower-risk lenders, but it better predicted the performance of larger lenders.
However, the system's usefulness was limited because the contractor did not follow validation practices, such as independent and ongoing assessments of the system's processes and results, consistent with those recommended by federal financial regulators and GAO's internal control standards. For example, the agency did not require a party other than the one who developed the system to perform the validation, and SBA's contractor did not routinely reassess the factors used in the system as part of its validations. Further, SBA does not use its own data to develop alternate measures of lender performance that could be used to independently assess or supplement the risk ratings, citing resource constraints. Because SBA does not follow sound validation practices or use its own data to independently assess the risk ratings, the effectiveness of its lender risk rating system--the primary system SBA relies on to monitor and predict lender performance--may deteriorate as economic conditions and industry trends change over time.

Although SBA's lender risk rating system has enabled the agency to conduct some off-site monitoring of lenders, the agency does not use the system to target lenders for on-site reviews or to inform the scope of the reviews. Unlike the Federal Deposit Insurance Corporation and the Federal Reserve, which use their off-site monitoring tools to target lenders for on-site reviews, SBA targets for review those lenders with the largest SBA-guaranteed loan portfolios. As a result of this approach, 97 percent of the lenders that SBA's risk rating system identified as high risk in 2008 were not reviewed. Further, GAO found that the scope of the on-site reviews that SBA performs is not informed by the lenders' risk ratings, and the reviews do not include an assessment of lenders' credit decisions. The federal financial regulators use the results of off-site monitoring to identify which areas of a bank's operations they should review more closely. Moreover, their reviews include an assessment of the quality of the lenders' credit decisions. Federal financial regulators are able to use review results to update their off-site monitoring systems with data on emerging lending trends. Regardless of the lender's risk rating, SBA relies on a standard on-site review form that includes an assessment of lenders' compliance with SBA policies and procedures but not an assessment of lenders' credit decisions. According to SBA officials, it is not the agency's role to assess lenders' credit decisions. Without targeting the riskiest lenders for on-site reviews or gathering information related to lenders' credit decisions, SBA cannot effectively assess the risk posed by lenders or ensure that its lender risk rating system incorporates updated information on emerging lending trends.

What GAO Recommends:

GAO recommends that SBA ensure that its contractor, consistent with industry standards, follows sound model validation practices; use its own data to assess the lender risk rating system; develop a strategy for targeting lenders for on-site reviews that relies more on its lender risk ratings; and consider revising its on-site review policies and procedures. In responding to a draft of this report, SBA generally agreed with these recommendations and outlined some steps that it plans to take to address them.

View [hyperlink, http://www.gao.gov/products/GAO-10-53] or key components. For more information, contact William B. Shear at (202) 512-8678 or shearw@gao.gov.
[End of section]

Contents:

Letter:

Results in Brief:
Background:
SBA's Lender Risk Rating System Is Similar to Those Used by Federal Financial Regulators but Is Limited by Insufficient Validation:
SBA Does Not Use Lender Risk Ratings to Target Lenders for On-Site Review or Tailor the Scope of the Reviews:
Conclusions:
Recommendations for Executive Action:
Agency Comments and Our Evaluation:

Appendix I: Objectives, Scope, and Methodology:
Appendix II: Comments from the Small Business Administration:
Appendix III: Predictive Performance of the March 2007 and March 2008 Lender Risk Ratings:
Appendix IV: Small Business Predictive Score:
Appendix V: GAO Contact and Staff Acknowledgments:

Tables:

Table 1: Sources of Data Used to Calculate Lender Risk Ratings for 7(a) Lenders:
Table 2: Comparison of Alternative Rankings and Rankings Based on 2007 Lender Risk Rating Raw Scores, 2007 Currency Rates, and 2008 Lender Risk Rating Raw Scores for 7(a) Lenders:
Table 3: Comparison of Alternative Rankings and Rankings Based on 2007 Lender Risk Rating Raw Scores, 2007 Currency Rates, and 2008 Lender Risk Rating Raw Scores for 504 Lenders:
Table 4: Results of Correlation Analysis:
Table 5: Predictive Ability of SBPS for Loans below and above $150,000:

Figures:

Figure 1: SBA's Lender Risk Rating Process for 7(a) Lenders:
Figure 2: Data Used for Off-Site Monitoring:
Figure 3: Commonly Accepted Validation Practices and SBA's Practices:
Figure 4: SBA On-Site Reviews, 2005 to 2008:

Abbreviations:

Basel Committee: Basel Committee on Banking Supervision:
FDIC: Federal Deposit Insurance Corporation:
Federal Reserve: Board of Governors of the Federal Reserve System:
L/LMS: Loan and Lender Monitoring System:
NAICS: North American Industry Classification System:
OCC: Office of the Comptroller of the Currency:
SBA: Small Business Administration:
SBPS: Small Business Predictive Score:

[End of section]

United States Government Accountability Office:
Washington, DC 20548:

November 6, 2009:

The Honorable Mary L. Landrieu:
Chair:
The Honorable Olympia J. Snowe:
Ranking Member:
Committee on Small Business and Entrepreneurship:
United States Senate:

The Honorable Richard J. Durbin:
Chairman:
The Honorable Susan M. Collins:
Ranking Member:
Subcommittee on Financial Services and General Government:
Committee on Appropriations:
United States Senate:

In April 2003, the Small Business Administration (SBA) obtained a loan monitoring service from Dun & Bradstreet to help manage and oversee the lending and risk management activities of lenders that extend 7(a) and 504 loans to small businesses. The 7(a) and 504 loan programs, named after the sections of the acts that authorized them, are SBA's two major business loan guarantee programs.[Footnote 1] As of June 30, 2009, SBA had an outstanding portfolio of $67.6 billion in 7(a) and 504 loans. Because SBA guarantees the individual loans that lenders originate, it uses the Dun & Bradstreet service, now called the Loan and Lender Monitoring System (L/LMS), to monitor the individual risk that each loan poses to the agency in order to identify those lenders whose SBA loan operations and portfolios may require additional monitoring or other actions.
In 2004, we reviewed the service and found that it was a positive and necessary step in improving SBA's oversight of lenders but determined that the agency needed to develop policies and procedures to ensure that it used the service in a way that resulted in improved oversight of lenders.[Footnote 2] Since we issued our report in June 2004, SBA has made progress in developing policies for using L/LMS and expanding its use. For example, SBA hired a contractor to develop a lender risk rating system (that is, an off-site monitoring tool that produces a risk score for each lender) based on L/LMS data. This system enabled SBA for the first time to monitor the approximately 4,000 smaller lenders that it had not previously reviewed. However, questions have been raised about the extent to which SBA has used its lender risk rating system to improve its oversight of lenders--for example, to target lenders for on-site review. The SBA Inspector General reported in May 2008 that SBA had been unable to sufficiently mitigate the risk posed by lenders that it had identified as high risk and that SBA's 7(a) program had incurred a cumulative net loss of $329 million for four lenders as of September 2007.[Footnote 3]

You asked us to review SBA's lender risk rating system and its effect on the agency's lender oversight program. Specifically, this report examines (1) how SBA's risk rating system compares with the off-site monitoring tools used by federal financial regulators and lenders and the system's usefulness for predicting lender performance and (2) how SBA uses the lender risk rating system in its lender oversight activities.

To determine how SBA's lender risk rating system compares with off-site monitoring tools used by federal financial regulators and lenders, we compared SBA's system with common industry standards that we identified through interviews and document reviews. We interviewed officials from three federal financial regulators--the Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (Federal Reserve), and the Federal Deposit Insurance Corporation (FDIC)--as well as five of the largest 7(a) lenders and the five largest 504 lenders.[Footnote 4] We also reviewed relevant literature and analyzed procedural manuals and other related federal guidance to banks on loan portfolio monitoring. Although we interviewed federal financial regulators and reviewed agency documents explaining their off-site monitoring practices, we did not evaluate their practices, such as by testing their models. In addition, we compared the techniques that SBA and its contractor used to develop and validate the lender risk rating system to our internal control standards.[Footnote 5]

To determine the usefulness of the lender risk ratings in predicting lender performance, we reviewed documents from SBA and its contractor that described the factors used in the risk rating system and the process for calculating the risk rating scores. We also obtained and analyzed the following SBA data: data on loans approved in 2003 through the end of 2007, the March 2007 and March 2008 lender risk ratings, and the currency rate for each lender.[Footnote 6] We assessed the reliability of these data and found them to be sufficiently reliable for our purposes. Using these data, we undertook a number of evaluative steps to test SBA's model.
After we discussed SBA's modeling approach in detail with SBA officials and the agency's contractor to document the process used to develop the model, we developed statistical estimation techniques to assess how well SBA's risk rating system predicts lender performance. In particular, we compared the scores from the lender risk rating system to lenders' actual performance and to alternate measures of lender performance that we developed using SBA data.

To determine how SBA uses the lender risk rating system in its lender oversight activities, we compared SBA's practices for assessing and monitoring the risk of lenders and loan portfolios against (1) the industry standards we identified through our interviews and document reviews and (2) our internal control standards. We also obtained and analyzed SBA data on risk ratings and on-site examinations from 2005 through 2008 to determine the characteristics of lenders that received on-site exams.

We conducted this performance audit from August 2008 to November 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix I contains a full description of our objectives, scope, and methodology.

Results in Brief:

SBA's lender risk rating system uses some of the same types of information that federal financial regulators and selected large lenders use to conduct off-site monitoring. But the system's usefulness has been limited because SBA has not followed common industry standards when validating the system--that is, assessing the system's ability to accurately predict outcomes. Like the 3 federal financial regulators and 10 large lenders we interviewed, SBA's contractor developed the lender risk rating system using loan performance data and prospective, or forward-looking, measures (such as credit scores). We independently assessed the lender risk rating system and found that it was generally successful in distinguishing between high- and low-risk lenders, but it better predicted the performance of larger lenders. However, the system's usefulness was limited because the contractor did not follow validation practices, such as independent and ongoing assessments of the system's processes and results, consistent with those recommended by federal financial regulators and our internal control standards. For example, the agency did not require a party other than the one who developed the system to perform the validation, and SBA's contractor did not routinely reassess the factors used in the system as part of its validations. Further, SBA officials stated that resource constraints prevented them from using internally generated data to develop alternate measures of lender performance that could be used to independently assess or supplement the risk ratings. Federal financial regulator guidance and our internal control standards suggest that organizations should use their own data to assess the performance of risk rating systems developed by vendors.
Because SBA does not follow sound validation practices or use its own data to independently assess the risk ratings, the effectiveness of its lender risk rating system--the primary system SBA relies on to monitor and predict lender performance--may deteriorate as economic conditions and industry trends change over time. According to SBA officials, the agency's contractor is currently redeveloping the system because its performance has deteriorated in recent years.

Although SBA's lender risk rating system has enabled the agency to perform some off-site monitoring of lenders, the agency does not use the system to target lenders for on-site review or to inform the scope of those reviews. FDIC and the Federal Reserve use their off-site monitoring tools to target lenders for on-site reviews. SBA uses its risk rating system to monitor lenders and portfolio trends but does not rely on it to target the riskiest 7(a) and 504 lenders for on-site review. Instead, SBA focuses on what it thinks is the most important risk indicator--portfolio size--and targets for review those lenders with the largest SBA-guaranteed loan portfolios--that is, 7(a) lenders with at least $10 million in their guaranteed loan portfolio and 504 lenders with balances of at least $30 million. Of the 477 reviews SBA conducted from 2005 through 2008, 380 (80 percent) were of large lenders that, based on its lender risk rating system, posed limited risk to SBA. The remaining 97 reviews (20 percent) were of lenders that posed significant risk to the agency. As a result, the vast majority of high-risk lenders were not reviewed. For example, in 2008, 97 percent of the 1,587 lenders identified as high risk were not reviewed. Of these lenders, 215 had an outstanding portfolio of at least $4 million. Because SBA relies on a lender's size to target lenders for on-site reviews, smaller lenders with high risk ratings--lenders that may still have significant portfolios of SBA loans--have been allowed to participate in SBA's loan programs with little or no oversight.

In addition, SBA does not use the lender risk rating system to determine the scope of on-site reviews and does not assess lenders' credit decisions during these reviews. Federal financial regulators we contacted use the results of off-site monitoring to identify which areas of a bank's operations they should review more closely. Moreover, their reviews include an assessment of the quality of lenders' credit decisions. These practices provide information on emerging trends in lending that regulators can use to update their off-site monitoring tools. Finally, internal control standards require that all federal agencies identify and analyze risks and determine the best way to manage or mitigate them. However, regardless of lenders' risk ratings, SBA relies on a standard on-site review form that includes an assessment of lenders' compliance with SBA policies and procedures but not an assessment of lenders' credit decisions. For example, SBA examiners determine whether lenders have ensured that borrowers met eligibility requirements. SBA officials told us that it was not the agency's role to assess lenders' credit decisions. However, we believe that because SBA relies on lenders with delegated underwriting authority to make the majority of its loans, the agency should take a more active role in ensuring that these lenders are making sound credit decisions.
Without targeting the riskiest lenders for on-site reviews or gathering information related to lenders' credit decisions, SBA cannot effectively assess lenders' risk or update its risk rating system based on emerging lending trends.

This report contains four recommendations designed to improve SBA's use of its lender risk rating system and oversight of its lenders. We are recommending that SBA ensure that its contractor follows sound model validation practices, including testing of the lender risk rating system data, processes, and results; utilizing an independent party to perform its validations; and maintaining complete documentation of the validation process and results. We also are recommending that SBA use its own data to assess the lender risk rating system, develop a strategy for targeting lenders for on-site reviews that relies more on its lender risk ratings, and consider revising its on-site review policies and procedures.

We provided SBA with a draft of this report for its review and comment. In written comments, SBA stated that it generally agreed with our recommendations and outlined some steps that it plans to take to address them. For example, the agency noted that it is currently undertaking a redevelopment of its lender risk rating system and plans to ensure that best practices are incorporated into the redevelopment validation process. SBA's comments are reprinted in appendix II.

Background:

In pursuing its mission of aiding small businesses, SBA provides them with access to credit, primarily by guaranteeing loans through its 7(a) and 504 loan programs. The 7(a) and 504 loan guarantee programs are intended to serve small business borrowers who could not otherwise obtain credit under reasonable terms and conditions from the private sector without an SBA guarantee. Under the 7(a) program, SBA generally provides guarantees of up to 85 percent on loans made by participating lenders that are subject to program oversight by SBA.[Footnote 7] Many of these participating lenders are preferred lenders that have delegated underwriting authority. Loan proceeds can be used for most business purposes, including working capital, equipment, furniture and fixtures, land and buildings, leasehold improvements, and certain debt refinancing.

The 504 program provides long-term, fixed-rate financing to small businesses for expansion or modernization, primarily of real estate. Financing for 504 loans is delivered through about 270 certified development companies--nonprofit corporations that were established to contribute to the economic development of their communities. For a typical 504 loan project, a third-party lender provides 50 percent or more of the financing pursuant to a first-lien mortgage, a certified development company provides up to 40 percent of the financing through a debenture that is fully guaranteed by SBA, and a borrower contributes at least 10 percent of the financing.[Footnote 8] Although SBA's 7(a) and 504 loan guarantee programs serve different needs, both programs rely on third parties to originate loans (participating lenders for 7(a) loans and certified development companies for 504 loans). Because SBA generally guarantees up to 85 percent of the 7(a) loans and up to 40 percent of the financing for 504 loan projects, SBA faces the same kind of risk as the lenders if the loans are not repaid.
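To make the typical 504 financing split concrete, the short sketch below applies the 50/40/10 shares just described to a hypothetical $1 million project. The project amount and the function name are illustrative assumptions; in an actual project the third-party share can exceed 50 percent, with the debenture share falling accordingly.

```python
# Illustrative only: the typical 504 financing split described above,
# applied to a hypothetical project. Only the debenture carries SBA's
# guarantee, so it represents SBA's exposure on the project.

def typical_504_split(project_cost: float) -> dict:
    return {
        "third_party_first_lien": 0.50 * project_cost,    # 50 percent or more
        "sba_guaranteed_debenture": 0.40 * project_cost,  # up to 40 percent
        "borrower_contribution": 0.10 * project_cost,     # at least 10 percent
    }

for source, amount in typical_504_split(1_000_000).items():
    print(f"{source}: ${amount:,.0f}")
# third_party_first_lien: $500,000
# sba_guaranteed_debenture: $400,000
# borrower_contribution: $100,000
```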
The Small Business Programs Improvement Act of 1996 required SBA to establish a risk management database that would provide timely and accurate information to identify loan underwriting, collections, recovery, and liquidation problems.[Footnote 9] In 2003, SBA obtained a service from Dun & Bradstreet that would allow it to, among other things, predict the likelihood of a loan defaulting using a combination of SBA performance data and loan-level credit data. In 2004, we assessed the new service and found that the system was on par with industry best practices by providing a tool that could help SBA better assess the risk exposure of loans in its lenders' portfolios.[Footnote 10] For example, we reported that the Small Business Predictive Score (SBPS), which is provided through the Dun & Bradstreet service, appeared to be consistent with private sector best practices because it was based on sound models.[Footnote 11] The models used to score the loans rely on data managed by Dun & Bradstreet and are commercial, off-the-shelf risk scoring models developed by Fair Isaac and validated to SBA's 7(a) and 504 portfolios. We concluded that without the Dun & Bradstreet service, it was unlikely that SBA would be able to continue the same level of risk management of its overall portfolio, its individual lenders, and their portfolios. However, we also reported that SBA needed to make better use of the service in overseeing its lenders and recommended, among other things, that resources within SBA be devoted to developing policies for the use of the loan monitoring service.

As a result, SBA contracted with Dun & Bradstreet to develop a system that would rate lenders based on risk. Dun & Bradstreet subcontracted with another company, TrueNorth, to develop the lender risk ratings--that is, custom scores calculated using L/LMS data. Work on the lender risk rating system started in 2004. The purpose of the lender risk rating system is to improve the way SBA monitors lenders. The lender risk rating system uses the following factors for 7(a) lenders:

* past 12 months' actual purchase rate--a historical measure of SBA purchases from the lender in the preceding 12 months;[Footnote 12]

* problem loan rate--the current delinquencies and liquidations in a lender's SBA-guaranteed portfolio;[Footnote 13]

* 3-month change in SBPS--a score that was developed to predict the likelihood of severe delinquency (61 or more days past terms) over the next 18 to 24 months, including bankruptcies and charge-offs;[Footnote 14] and:

* projected purchase rate--a measure of the amount of SBA-guaranteed dollars in a lender's portfolio that is likely to be purchased by SBA.[Footnote 15]

Most of the data used to calculate these factors are loan and lender performance information that comes from SBA. The remaining data are SBPSs or related scores provided by the Dun & Bradstreet service (see table 1).

Table 1: Sources of Data Used to Calculate Lender Risk Ratings for 7(a) Lenders:

Factor: Past 12 months' actual purchase rate; Lender data: Total gross dollars of the lender's loans that were purchased during the past 12 months; Data sources: SBA: [Check]; Data sources: Dun & Bradstreet: [Empty]; Data sources: Fair Isaac: [Empty].

Factor: Past 12 months' actual purchase rate; Lender data: Total gross outstanding dollars of SBA loans at the end of the 12-month period; Data sources: SBA: [Check]; Data sources: Dun & Bradstreet: [Empty]; Data sources: Fair Isaac: [Empty].
Factor: Problem loan rate; Lender data: Gross outstanding dollars of the lender's loans that are 90 days or more delinquent; Data sources: SBA: [Check]; Data sources: Dun & Bradstreet: [Empty]; Data sources: Fair Isaac: [Empty].

Factor: Problem loan rate; Lender data: Gross dollars in liquidation; Data sources: SBA: [Check]; Data sources: Dun & Bradstreet: [Empty]; Data sources: Fair Isaac: [Empty].

Factor: Problem loan rate; Lender data: Gross dollars outstanding; Data sources: SBA: [Check]; Data sources: Dun & Bradstreet: [Empty]; Data sources: Fair Isaac: [Empty].

Factor: 3-month change in SBPS; Lender data: SBPS; Data sources: SBA: [Empty]; Data sources: Dun & Bradstreet: [Check]; Data sources: Fair Isaac: [Check].

Factor: Projected purchase rate; Lender data: Probability of loan purchase; Data sources: SBA: [Empty]; Data sources: Dun & Bradstreet: [Check]; Data sources: Fair Isaac: [Check].

Factor: Projected purchase rate; Lender data: Individual loans outstanding; Data sources: SBA: [Check]; Data sources: Dun & Bradstreet: [Empty]; Data sources: Fair Isaac: [Empty].

Factor: Projected purchase rate; Lender data: SBA-guaranteed dollars outstanding; Data sources: SBA: [Check]; Data sources: Dun & Bradstreet: [Empty]; Data sources: Fair Isaac: [Empty].

Source: GAO analysis of SBA data.

[End of table]

For 504 lenders, the risk rating is based on three factors: (1) the past 12 months' actual purchase rate, (2) the problem loan rate, and (3) the average SBPS on loans in the 504 lender's portfolio. The third factor replaced the third and fourth factors used for 7(a) lenders because it was found during the testing process to be more predictive of SBA purchases for 504 lenders.

Some federal financial regulators and lenders rely on similar tools to conduct off-site monitoring. For example, FDIC relies on various off-site monitoring tools, including a system called the Statistical CAMELS Off-site Rating that helps the regulator identify institutions that have experienced noticeable financial deterioration since the last on-site exam. The Federal Reserve also relies on multiple tools to conduct off-site monitoring, including a system that enables the regulator to predict how the risk level of a bank likely will change in comparison to other banks that received similar ratings on on-site exams. OCC relies on a process called a core assessment that helps examiners assess the risk exposure for nine categories of risk, including quantity, quality, and direction of risk. Moreover, lenders frequently use models to summarize available relevant information about borrowers and reduce the information into a set of ordered categories, or scores, that estimate the borrower's risk of delinquency or default at a given point in time. Such tools are playing a progressively more important role in the banking industry. In general, the goal of these models--whether they are generic or custom, developed internally or by third parties--is to obtain early indications of increasing risk.

SBA's Lender Risk Rating System Is Similar to Those Used by Federal Financial Regulators but Is Limited by Insufficient Validation:

SBA's Contractor Uses a Multistep Process to Assign Lender Risk Ratings:

SBA's contractor takes four steps to assign lender risk ratings each quarter. First, the contractor separates lenders into peer groups based on the size of their SBA loan portfolios in order to compare similarly sized lenders. Second, for each lender, the contractor computes values for each of the factors.
As discussed in more detail in the background, the four factors for 7(a) lenders are the (1) past 12 months' actual purchase rate, (2) problem loan rate, (3) 3-month change in the SBPS, and (4) projected purchase rate. Third, the contractor inputs the value for each of the factors into an equation to compute a score for each lender. Fourth, the contractor uses the scores to place lenders into one of five risk rating categories (1 through 5, with 1 indicating the least risk).[Footnote 16] Figure 1 illustrates this process for 7(a) lenders, and the shaded area represents a specific example. The process is generally the same for 504 lenders.[Footnote 17]

Figure 1: SBA's Lender Risk Rating Process for 7(a) Lenders:

[Refer to PDF for image: illustration]

1) Separates lenders into peer groups based on SBA loan portfolio size:
$0 - $999,999, <1 loan disbursed;
$0 - $999,999, >1 loan disbursed;
$1M - $3.9M;
$4M - $9.9M;
$10M - $99.9M;
$100M or more.

2) Computes value of each factor for each 7(a) lender:
Projected purchase rate;
Problem loan rate;
3-month change in SBPS;
Past 12 months' actual purchase rate.

3) Computes lenders' scores (1 - 999) by inputting each lender's value of each factor into an equation.

4) Places lenders, based on their scores, into five risk rating categories (1 through 5, with 1 indicating the least risk):
$0 - $999,999, <1 loan disbursed (Lender risk score, 1-5, low to high);
$0 - $999,999, >1 loan disbursed (Lender risk score, 1-5, low to high);
$1M - $3.9M (Lender risk score, 1-5, low to high);
$4M - $9.9M (Lender risk score, 1-5, low to high);
$10M - $99.9M (Lender risk score, 1-5, low to high);
$100M or more (Lender risk score, 1-5, low to high).

Sample 7(a) lender: SBA loan portfolio: $7.8M:
Loan portfolio size: $4M - $9.9M;
Lender's score: 250;
Risk category: $4M - $9.9M (Lender risk score, 2).

Source: GAO.

Note: In step 2, the size of the symbols that represent each factor is illustrative and not necessarily to scale.

[End of figure]

According to SBA officials, this process for calculating lender risk ratings will likely change in the near future because its contractor is redeveloping the lender risk rating system. Several major changes are being contemplated. First, the contractor plans to use an updated version of the SBPS. Second, the contractor may use additional variables to calculate lender risk ratings. Finally, rather than varying the equation by peer group, SBA officials stated that they are considering a new variable that captures the size of the lender's portfolio and the age of its loans. The contractor is still in the process of designing, testing, and documenting the new risk rating system.

SBA rarely overrides risk ratings, but it may do so for several reasons. These include early loan default trends; abnormally high default or liquidation rates; lending concentrations; rapid growth in SBA lending; inadequate, incomplete, or untimely reporting to SBA; and nonpayment of required fees to SBA.[Footnote 18] In addition, SBA may override a lender risk rating due to issues identified during an on-site review. For the quarter ending September 30, 2008, SBA overrode the risk rating assigned by the contractor in 20 cases; in each case, the risk rating increased.
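To make the four steps concrete, the sketch below walks the sample lender from figure 1 through the process. The peer-group boundaries, the 1-999 score range, and the five rating categories come from the figure, and the factor definitions follow table 1; the weights and category cutoffs are hypothetical placeholders, since the contractor's actual equation is proprietary and varies by peer group.

```python
# A minimal sketch of the four-step rating process, assuming hypothetical
# weights and cutoffs; it is not the contractor's proprietary equation.

from dataclasses import dataclass

@dataclass
class Lender7a:
    portfolio_outstanding: float      # gross SBA-guaranteed dollars outstanding
    purchased_past_12_months: float   # gross dollars SBA purchased in past 12 months
    dollars_90plus_delinquent: float  # gross dollars 90 days or more delinquent
    dollars_in_liquidation: float     # gross dollars in liquidation
    sbps_change_3_month: float        # 3-month change in SBPS
    projected_purchase_rate: float    # projected purchase rate (vendor-supplied)

def peer_group(portfolio: float) -> str:
    # Step 1: peer groups from figure 1 (the two sub-$1 million groups,
    # split by number of loans disbursed, are merged here for brevity).
    if portfolio < 1_000_000: return "$0-$999,999"
    if portfolio < 4_000_000: return "$1M-$3.9M"
    if portfolio < 10_000_000: return "$4M-$9.9M"
    if portfolio < 100_000_000: return "$10M-$99.9M"
    return "$100M or more"

def factors(l: Lender7a) -> dict:
    # Step 2: the four 7(a) factors (definitions follow table 1).
    return {
        "actual_purchase_rate": l.purchased_past_12_months / l.portfolio_outstanding,
        "problem_loan_rate": (l.dollars_90plus_delinquent + l.dollars_in_liquidation)
                             / l.portfolio_outstanding,
        "sbps_change_3_month": l.sbps_change_3_month,
        "projected_purchase_rate": l.projected_purchase_rate,
    }

# Hypothetical weights; the real coefficients are not public and differ by
# peer group.
WEIGHTS = {
    "actual_purchase_rate": 900.0,
    "problem_loan_rate": 700.0,
    "sbps_change_3_month": -2.0,
    "projected_purchase_rate": 900.0,
}

def raw_score(l: Lender7a) -> int:
    # Step 3: combine the factor values into a score from 1 to 999.
    s = sum(WEIGHTS[name] * value for name, value in factors(l).items())
    return max(1, min(999, round(s)))

def risk_rating(score: int) -> int:
    # Step 4: map the score to a category from 1 (least risk) to 5.
    # Cutoffs are invented; SBA's vary by peer group.
    return 1 + sum(score > cutoff for cutoff in (200, 400, 600, 800))

sample = Lender7a(7_800_000, 780_000, 390_000, 156_000, -4.0, 0.12)
s = raw_score(sample)
print(peer_group(sample.portfolio_outstanding), s, risk_rating(s))
# Prints: $4M-$9.9M 255 2 -- consistent with the figure 1 example
# (a $7.8M portfolio, a score near 250, and a rating of 2).
```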
SBA's Lender Risk Rating System Uses Some of the Same Types of Data That Federal Financial Regulators and Selected Lenders Rely on to Conduct Off-Site Monitoring:

SBA's lender risk rating system uses some of the same types of data that federal financial regulators and selected lenders rely on for off-site monitoring. The federal financial regulators we interviewed rely on lender information, performance data, and prospective measures to conduct off-site monitoring. Although the specific factors included in each regulator's off-site monitoring tools can vary, each regulator uses these three types of data. Much of the lender and performance information they use comes from the call reports that banks submit quarterly, which include data on equity, loans past due, and charge-offs.[Footnote 19] Prospective measures include--when available--borrowers' credit scores from lender files. One federal regulator is also working with a third party to obtain predictive scores, similar to the SBPS, to use as part of its off-site monitoring. The large lenders with whom we spoke also use performance data to rate loans, focusing on factors such as portfolio performance, delinquencies, and trends by state and industry type in order to forecast future losses. Lenders also incorporate prospective measures, such as FICO scores and SBPSs.[Footnote 20]

Like federal financial regulators and large lenders, SBA uses performance data and prospective measures to calculate lender risk ratings. As we have seen, to calculate risk ratings for 7(a) lenders, SBA relies on performance data (the past 12 months' actual purchase rate and the problem loan rate) and prospective measures (the 3-month change in the SBPS and the projected purchase rate). The 3-month change in the SBPS is also a portfolio trend that has been incorporated into the rating system. However, unlike the federal financial regulators, SBA does not use lender information such as equity and loan concentrations as inputs into its lender risk rating system.

Although the federal financial regulators and SBA both oversee lenders, their missions differ, and as a result they may choose to focus on different variables in conducting off-site monitoring. In general, the mission of the federal financial regulators is to maintain stability and public confidence in the nation's financial system. In contrast, SBA's mission is to aid, counsel, assist, and protect the interests of small business concerns, including guaranteeing loans to businesses in industries that lenders may avoid. Therefore, it is understandable that SBA might not include the same variables as federal financial regulators. In addition, while it is not an input into the lender risk rating system, SBA evaluates information such as equity and loan concentrations as part of other monitoring efforts. Figure 2 summarizes how the data that SBA uses in its lender risk rating system compare with the data included in the risk rating systems used by the federal financial regulators and lenders we interviewed.

Figure 2: Data Used for Off-Site Monitoring:

[Refer to PDF for image: illustrated table]

Lender information: Loan concentrations; OCC: [Check]; FDIC: [Check]; Federal Reserve: [Check]; Selected lenders: [A]; SBA: [B].

Lender information: Income; OCC: [Check]; FDIC: [Check]; Federal Reserve: [Check]; Selected lenders: [A]; SBA: [B].

Lender information: Equity; OCC: [Check]; FDIC: [Check]; Federal Reserve: [Check]; Selected lenders: [A]; SBA: [B].
Performance measures: Portfolio trends; OCC: [Check]; FDIC: [Check]; Federal Reserve: [Check]; Selected lenders: [Check]; SBA: [Check].

Performance measures: Delinquency; OCC: [Check]; FDIC: [Check]; Federal Reserve: [Check]; Selected lenders: [Check]; SBA: [Check].

Performance measures: Default; OCC: [Check]; FDIC: [Check]; Federal Reserve: [Check]; Selected lenders: [Check]; SBA: [Check].

Prospective measures: OCC: [Check]; FDIC: [Check]; Federal Reserve: [Check]; Selected lenders: [Check]; SBA: [Check].

Source: GAO.

[A] The lenders we interviewed do not collect other lenders' information to rate their loans.

[B] SBA evaluates loan concentrations during on-site reviews of lenders and income and equity during performance-based reviews of lenders. These reviews are discussed in detail later in this report.

[End of figure]

SBA's Lender Risk Rating System Better Predicted the Performance of Larger Lenders than Smaller Lenders:

When we performed our own independent assessment of the lender risk ratings, we found that they were more reliable at predicting the performance of the largest lenders. To perform this assessment, we evaluated how well the lender risk ratings predicted lenders' actual performance (that is, their default rates).[Footnote 21] Because of data limitations, our analyses focused on lenders with larger SBA-guaranteed portfolios.[Footnote 22] Overall, we found that SBA's ratings were able to distinguish between high- and lower-risk lenders for a majority of the 7(a) and 504 lenders in our sample for 2007 and 2008.[Footnote 23] However, when we focused on the ratings' ability to predict the performance of different-sized lenders, we found that the ratings were more effective at predicting the performance of lenders with the largest SBA-guaranteed portfolios (that is, lenders with SBA-guaranteed portfolios of at least $100 million). (See appendix III for further discussion of how well the lender risk ratings predicted the performance of 7(a) and 504 lenders.)

How the system was developed may have contributed to the lender risk ratings being more effective at predicting the performance of the largest lenders. In order to determine how SBA developed the risk rating system, we reviewed the available documentation of the development process and discussed the process with SBA officials and the contractor. According to the contractor, it considered 32 variables to determine those that were the most predictive for each peer group. SBA then made a policy decision to use the same factors across all of the peer groups. Although the documentation did not provide the justification for this policy decision, SBA officials stated that the decision was made so that every lender's risk rating was based on consistent information. Officials were concerned that lenders might be confused if the factors upon which the ratings were based varied by peer group, particularly since lenders do move between peer groups. The contractor ultimately selected four factors, each of which was a statistically significant predictor of lender performance for at least one of the peer groups. However, only for the largest peer group (those with guaranteed portfolios of at least $100 million) were all four factors statistically significant.
According to SBA officials, in peer groups where a factor was statistically insignificant, it did not affect the lenders' risk ratings--that is, for some peer groups, the ratings are determined by fewer than four factors.

Usefulness of SBA's Lender Risk Rating System Has Been Limited because SBA Does Not Ensure That Its Contractor Follows Sound Validation Techniques:

The effectiveness of SBA's lender risk rating system has been limited because the agency's contractor does not follow sound validation practices. According to one federal financial regulator, the ability of models to accurately predict outcomes can deteriorate over time. For example, changes in economic conditions and industry trends can affect model outcomes. Validation--the process of assessing whether ratings adequately identify risks by, for example, comparing predictions to actual results--helps to ensure that models remain reliable. Federal financial regulators (OCC, FDIC, and the Federal Reserve) and the Basel Committee on Banking Supervision (Basel Committee) have developed a number of common principles that financial institutions should follow in validating the models they use to manage risk, whether the models are purchased from a vendor or developed in-house.[Footnote 24] Validating some aspects of models developed by vendors may be difficult because of the proprietary nature of the information. But the guidance from federal financial regulators and the Basel Committee states that organizations have a responsibility to ensure that vendors follow good model validation practices.

We identified four key elements of a sound validation policy that federal financial regulators and our internal control standards recommend and that some lenders we interviewed implemented. First, all three parts of a model--the data, processes, and results--should be validated using multiple techniques. Second, validation should be done by an independent party. Third, validation should include an ongoing assessment of the factors used in the model. Finally, the validation procedures should be documented.

We found, however, that SBA had not adhered to the guidance in validating its lender risk rating system. First, SBA's validation procedure does not include techniques to validate all parts of its model. Second, the model is not validated by an independent party. Third, SBA does not reassess which variables are the most predictive of lender performance on a routine basis. Finally, SBA's documentation of the validation procedures and the results of the validation is not complete. Figure 3 shows how SBA's practices align with commonly accepted practices.

Figure 3: Commonly Accepted Validation Practices and SBA's Practices:

[Refer to PDF for image: illustrated table]

Model Validation: Validation of the model's data inputs; OCC: Included in guidance or practices; FDIC: Included in guidance or practices; Federal Reserve: Included in guidance or practices; Basel Committee: Included in guidance or practices; GAO Internal Controls: Included in guidance or practices; SBA: Included in guidance or practices.

Model Validation: Validation of the model's processes; OCC: Included in guidance or practices; FDIC: Included in guidance or practices; Federal Reserve: Included in guidance or practices; Basel Committee: Included in guidance or practices; GAO Internal Controls: Included in guidance or practices; SBA: Partially included in guidance or practices.
Model Validation: Validation of the model's results; OCC: Included in guidance or practices; FDIC: Included in guidance or practices; Federal Reserve: Included in guidance or practices; Basel Committee: Included in guidance or practices; GAO Internal Controls: Included in guidance or practices; SBA: Partially included in guidance or practices.

Independent validation; OCC: Included in guidance or practices; FDIC: Included in guidance or practices; Federal Reserve: Included in guidance or practices; Basel Committee: Included in guidance or practices; GAO Internal Controls: Included in guidance or practices; SBA: Not included in guidance or practices.

Ongoing validation of factors used in the model; OCC: Included in guidance or practices; FDIC: Included in guidance or practices; Federal Reserve: Included in guidance or practices; Basel Committee: Included in guidance or practices; GAO Internal Controls: Included in guidance or practices; SBA: Partially included in guidance or practices.

Documentation of validation procedures; OCC: Included in guidance or practices; FDIC: Included in guidance or practices; Federal Reserve: Included in guidance or practices; Basel Committee: Included in guidance or practices; GAO Internal Controls: Included in guidance or practices; SBA: Partially included in guidance or practices.

Source: GAO.

[End of figure]

SBA's Validation Procedure Does Not Include Techniques to Validate All Parts of Its Model:

Guidance from the federal financial regulators we interviewed and the Basel Committee states that each of the three parts of a model--the data, processes, and results--should be validated using a variety of techniques. According to FDIC guidance, validation should include ensuring that the data used in the model are accurate and complete, evaluating the model's conceptual soundness, and analyzing the estimates the model produces against actual outcomes. The Basel Committee also states the importance of assessing all the components of a model. In addition, OCC guidance prescribes three generic procedures that could be used for validating each part of a model--a review of logical and conceptual soundness, comparison against other models, and comparison against subsequent actual events. Further, guidance from the Federal Reserve states that financial institutions should use a variety of techniques when validating their models. For example, some lenders we interviewed compared their internal rating systems with other commercially available models or compared model predictions against historical information to test the reliability of their models. In addition, GAO's internal control standards specify that agencies should ensure the accuracy of data inputs and information system processing and results.[Footnote 25] For example, validation should be performed to verify that data are complete and to identify erroneous data. Furthermore, these standards state that management should establish controls over information processing and that output reports should be reviewed.

Consistent with commonly accepted practices, SBA's contractor has a documented process for validating the data used in the lender risk rating system. On the basis of previous reviews and recent interviews with contractor staff, we found that the contractor's data quality control process, referred to as DUNSRight, appeared reasonable.
In June 2004, we reported that the commercial data that Dun & Bradstreet collects go through a five-step quality assurance process that includes continuously updating databases and matching SBA records with Dun & Bradstreet records, with a 95 percent match of the data on critical pieces of information.[Footnote 26] In the same report, we also concluded that SBA's controls over the 7(a) and 504 data used in the models helped to ensure that the data inputs were sufficiently reliable. Appendix IV provides information on Dun & Bradstreet's procedures for ensuring the reliability of the SBPS and how well it predicts the likelihood that a loan will default.

The contractor that developed the lender risk rating system also conducts periodic validations of the system that include using statistical tests to measure the model's predictive ability and comparing the results of the model against lenders' actual performance. For the years 2005 through 2007, SBA's contractor assessed whether the broad risk ratings were generally consistent with the actual performance of the lenders within each rating group. The contractor also determined whether each group of lenders (for example, those lenders rated as 1) performed better than groups of lenders with riskier ratings (that is, 2 through 5).[Footnote 27]

However, we did not see evidence that the contractor validated the processes used to calculate the ratings. Specifically, neither SBA nor its contractor could provide documentation showing that the contractor had validated the theory behind the system or the logical and conceptual soundness of the model. For example, there was no documentation describing the processes followed or the link between the computer program and output that was used to produce the lender risk ratings. Therefore, we could not rerun the analysis to determine if we would have arrived at the same conclusion regarding the four factors used in the model. In addition, the contractor could not provide documentation showing that it had ensured that the mathematics and computer code were free of errors. According to officials from the contractor, they took steps to verify that the processes they followed were sound, including verifying the computer code they used; however, they did not document these steps.

Further, the contractor's validation of the model's results was limited. Consistent with industry standards, SBA's contractor has used a variety of statistical measures to validate the risk rating system's results.[Footnote 28] But the documentation did not show that the contractor checked the model's results against available benchmarks (such as the default rate or the currency rate) to validate whether the risk ratings reliably predicted individual lender performance. Rather, the documentation indicated that the contractor focused its validation on whether the broad risk ratings were generally consistent with the actual performance of the lenders within each rating group--groups that can comprise over 2,000 lenders with a wide range of portfolio sizes and performance levels. Although this technique compares the model's results to actual performance benchmarks, as suggested by industry standards, it is limited because it does not provide information on individual lender performance.
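A short sketch illustrates the kind of lender-level check at issue here. The ratings, default rates, and category ceilings below are all hypothetical; the point is only that comparing each individual lender's rating with a subsequent performance benchmark, rather than comparing group averages alone, surfaces specific lenders that the broad categories can hide.

```python
# A minimal sketch of lender-level backtesting, using hypothetical data.
# Ratings run from 1 (least risk) to 5; the benchmark is each lender's
# subsequently observed default rate.

from scipy.stats import spearmanr

# (lender, risk rating assigned at the start of the period,
#  default rate observed over the following period)
lenders = [
    ("Lender A", 1, 0.01),
    ("Lender B", 2, 0.03),
    ("Lender C", 2, 0.15),  # far worse than its rating suggests
    ("Lender D", 4, 0.09),
    ("Lender E", 5, 0.12),
]

ratings = [rating for _, rating, _ in lenders]
default_rates = [rate for _, _, rate in lenders]

# Group-level check: do riskier ratings line up with worse performance
# across the portfolio as a whole?
rho, _ = spearmanr(ratings, default_rates)
print(f"rank correlation between ratings and default rates: {rho:.2f}")

# Lender-level check: flag individual lenders whose observed default rate
# falls outside the range typical of their rating category. The ceilings
# here are invented for illustration.
typical_ceiling = {1: 0.02, 2: 0.05, 3: 0.08, 4: 0.11, 5: 1.00}
for name, rating, rate in lenders:
    if rate > typical_ceiling[rating]:
        print(f"{name}: default rate {rate:.0%} exceeds the "
              f"{typical_ceiling[rating]:.0%} ceiling typical of rating {rating}")
```

In this example the group-level statistic looks healthy even though one lender is badly misrated, which mirrors the limitation we observed in validations that report only category-level consistency.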
According to SBA officials, the contractor tested how well individual scores produced by the lender risk rating system predicted individual lender performance; however, the results of this analysis were not included in the documentation we received and were not provided to SBA. Because lender performance can vary widely within the broad risk categories, the results of a more refined analysis would allow SBA to identify specific lenders placed in incorrect risk categories.

Because SBA has never requested documentation from the contractor on its validation of the model's processes, the agency cannot ensure that the processes used are sound. In addition, because the contractor does not document how well the lender risk ratings predict individual lenders' performance, SBA may not be able to identify which lenders within the broad risk rating categories are not being rated accurately. As a result, SBA may be relying on inaccurate ratings or missing out on opportunities to identify risky lenders and target them for closer monitoring.

Validation Is Not Conducted by an Independent Party:

Each of the regulators we interviewed (OCC, FDIC, and the Federal Reserve) recommends in its guidance that validation include an independent review of the model. For example, OCC guidance states that model validation should be done by a party that is as independent as possible from the personnel who constructed the model. In addition, FDIC guidance states that validation should include competent review by a reviewer who is as independent as practicable. Further, Federal Reserve and Basel Committee guidance notes that the validation process should be independent from the model development and implementation processes. Our internal control standards also emphasize the importance of independent review. They state that to reduce the risk of error, no one individual should control all key aspects of an activity.[Footnote 29] For example, an individual who is responsible for developing a model should not be responsible for validating it. An independent party can be either inside or outside the organization--for example, the internal audit staff, a risk management unit of the institution, an external auditor, or another contracted third party. Some lenders we interviewed that had internal risk rating systems have had them validated by a separate group within the institution, and others have invited independent auditors to review their systems.

Contrary to common industry practices and internal control standards, the same contractor staff who developed and maintain the lender risk rating system also validate it. We have previously reported on SBA's failure to ensure that independent parties routinely assess the reliability or integrity of its contractors' models.[Footnote 30] Specifically, we reported in June 2004 that third parties did not validate the SBPS model that another contractor maintained because SBA believed that the model was stable and that clients would inform the company if the models were not reasonably predicting borrower behavior. Similarly, SBA and its contractor thought it was sufficient for someone to review the validation conducted by the staff who developed the model and for Dun & Bradstreet and SBA officials to review the contractor's work. However, industry standards require that personnel other than those who developed the model validate it.
Because SBA has not ensured that an independent party validates its lender risk ratings, certain systemic and structural issues with the design of the system may go undetected, and the predictive value of the risk ratings is more uncertain.

SBA Does Not Perform Ongoing Validation to Ensure That the Factors Used in the System Are the Most Predictive:

Guidance from federal financial regulators and the Basel Committee states that validation of the factors used in the model should be ongoing and should take into consideration changes in the environment (such as changes in economic conditions or industry trends) or improvements in modelers' understanding of the subject. For example, OCC guidance states that models are frequently altered in response to changes such as these. In addition, Federal Reserve guidance states that a model's methodology should be validated periodically and modified to incorporate new events or findings as needed. Further, the Basel Committee notes that validation is an ongoing, iterative process. Failing to validate on an ongoing basis could cause the model to become less predictive and lose its ability to rank order risk over time. According to FDIC guidance, characteristics of a model need to be validated and refined when necessary because if management does not select and properly weight the best predictive variables, the model's output will likely be less effective. Our internal control standards also specify that agencies that procure commercial software are responsible for ensuring that it meets the user's needs and is operated properly.[Footnote 31] These standards state that controls should be in place to ensure that computer systems are modified safely by reviewing and testing them before placing them into operation. The standards also specify that management should ensure that ongoing monitoring is effective and will trigger separate evaluations where problems are identified.

SBA's contractor takes some steps to validate the lender risk rating system's ability to reliably predict lender performance but does not ensure that the variables used to calculate the risk ratings are the most predictive of lender performance. We reviewed the validations of the risk rating system that the contractor conducted in 2005, 2006, and 2007. These validation efforts included testing of the statistical importance of each of the four factors used in the lender risk rating system. However, these validations did not routinely include testing of other factors to account for changes in economic conditions or industry trends. The 2005 validation effort was the only one that tested additional factors. SBA's contractor tested three new variables to determine if they improved the model's ability to predict lender performance and found that they did not.[Footnote 32] Neither of the subsequent validations included assessments of additional variables, and SBA did not request them. According to SBA officials, SBA and the contractor identified possible additional variables over the past several years that they did not test for use in the model because they wanted more experience with it and the data.[Footnote 33] They also noted that they always had plans to redevelop the model within 5 years but could not do so until the agency had signed a second contract with Dun & Bradstreet that provided funds for a redevelopment.
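As an illustration of the kind of routine candidate-variable test that could accompany each validation cycle, the sketch below refits a simple purchase-prediction model with and without a new factor and compares out-of-sample discrimination. The data, the model form, and the candidate variable are synthetic placeholders and do not reproduce the contractor's methodology.

```python
# A minimal sketch of testing whether a candidate variable improves a
# model's predictive ability, using synthetic data. The comparison metric
# is out-of-sample AUC; other measures could be substituted.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000
existing = rng.normal(size=(n, 4))   # stand-ins for the four current factors
candidate = rng.normal(size=(n, 1))  # hypothetical new variable

# Synthetic outcome (1 = loan purchased by SBA) that depends in part on
# the candidate variable, so the test has something to find.
signal = existing @ np.array([1.0, 0.8, -0.5, 0.6]) + 0.7 * candidate[:, 0]
y = (signal + rng.normal(size=n) > 0).astype(int)

train, test = train_test_split(np.arange(n), test_size=0.3, random_state=0)
for label, X in [("current factors only", existing),
                 ("current factors + candidate", np.hstack([existing, candidate]))]:
    model = LogisticRegression().fit(X[train], y[train])
    auc = roc_auc_score(y[test], model.predict_proba(X[test])[:, 1])
    print(f"{label}: out-of-sample AUC = {auc:.3f}")
# If the candidate materially raises out-of-sample AUC across several
# validation cycles, it is a candidate for inclusion; if not, it is dropped.
```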
However, if SBA had asked the contractor to test additional factors on a regular basis, the agency might have found that an earlier redevelopment effort or incremental adjustments could have improved the predictive ability of the model. Because new variables that might take into account economic changes or industry developments have not been routinely assessed, the ratings may not be as effective as they could be. In addition, according to the contractor's validation reports, the lender risk rating system's predictive ability for 7(a) lenders decreased from 2005 to 2007.[Footnote 34] This decrease led the contractor to suggest in 2007 that SBA redevelop the model to improve its predictive ability and prevent further deterioration. SBA officials agreed, and the contractor is currently redeveloping the model, including testing new variables, to keep up with changing economic conditions and to reflect SBA's and the contractor's experiences working with the data and the model over the last several years. It will be important for SBA to ensure that the contractor conducts sound testing as part of its redevelopment. SBA's Documentation of Validation Procedures and Results Is Incomplete: The federal financial regulators' guidance states that a sound validation policy should include documentation of the validation. For example, FDIC and OCC guidance states that model validation documentation should describe the model, how it is used, and its limitations. Federal Reserve guidance also notes that the validation process should be documented. In addition, FDIC and OCC have said that the procedures used to validate the model on an ongoing basis and the results of these validations should be documented, even if the institution uses a model developed by a vendor. For example, OCC guidance states that an institution should seek assurances that the vendor's model is defensible and works as promised. Further, the Basel Committee guidance notes that even vendors that are not willing to reveal proprietary information should provide information on the validation techniques they use. Complete documentation of the results of ongoing validations assists users in understanding the model and facilitates independent reviewers' assessments of the model's validity. Our internal control standards also specify the importance of documenting information systems.[Footnote 35] For example, these standards state that all significant events in developing and maintaining computer systems should be clearly and completely documented. This documentation should describe the system, how the data used in the system are handled, and other controls in place to maintain the system. SBA did not ensure that the contractor provided complete documentation of the results of its validations or documented its validation procedures. SBA provided us with some documentation of the contractor's process for validating the data used in the lender risk rating system, but documentation of the results of the validations was inconsistent and did not include information on the procedures for validating the model's processes. For example: * The validation reports we reviewed (2005 to 2007) did not always include information on the statistical measure the contractor used to describe the model's predictive abilities. The 2006 validation report did not contain this statistic for the 7(a) ratings, and only the 2007 report included it for 504 lender risk ratings. * The validation reports did not describe the contractor's validation procedures.
* As noted previously, SBA did not provide documentation showing that the contractor validated the mathematics and computer code used in the model. * The validation reports did not explain why in 2005 the contractor considered whether additional variables would improve the model's ability to predict lender performance but did not consider additional variables in other years. * The validation reports did not describe any limitations of the model that would have helped SBA to use the results accurately. Officials from the contractor explained that the documentation provided was typical of that seen in the private sector for such models, but stated that they would provide more detailed documentation in the future. Because SBA does not ensure that its contractor completely documents its validation procedures and results, it is difficult to assess the sufficiency of the validations performed. Further, as we noted previously, it is important for an independent party to validate a model's reliability. Without clear documentation explaining the model's limitations, the validation procedures, and the results of the validations, an independent reviewer would have difficulty conducting a thorough assessment of SBA's model. SBA Does Not Use Its Own Data to Assess or Supplement the Contractor's Validation of the Lender Risk Rating System: In addition to not ensuring that its contractor follows sound validation techniques, SBA does not conduct its own analysis of data to supplement the contractor's validation of the lender risk rating system. According to the Basel Committee guidance we reviewed, organizations must have clearly articulated strategies for regularly reviewing the results of vendor models and the integrity of the external data used in these systems. Further, OCC guidance states that vendor models should generally be held to the same minimum validation standards as internally developed models. When full and complete details concerning aspects of a vendor product are lacking, OCC and Basel Committee guidance states that organizations should rely more heavily on alternative validation techniques to compensate for the lack of access to full information. This guidance notes that in such cases, it is critical for organizations to test the results of the vendor's model at least once a year using their own data on actual performance to assess the model's predictive ability. This procedure helps to ensure that the models continue to function as intended and verifies the reliability and consistency of any external data used. Our internal control standards state that monitoring should be performed continually and that it should involve comparisons and reconciliations.[Footnote 36] For example, these standards specify that agencies should compare information generated from computer systems to actual records. Agencies should also analyze and reconcile any differences that might be found. SBA does not use its own data to independently assess the lender risk rating system's results.
According to a 2007 SBA Inspector General report, SBA has previously rejected using its own data to develop lender performance benchmarks that could be used in lieu of or in conjunction with the risk ratings because doing so would be time-consuming and the benchmarks would have to be monitored and replaced as program and economic conditions changed.[Footnote 37] However, we found that SBA data could be useful for developing alternate measures of lender performance in order to independently validate the lender risk rating system's results. For example, SBA could perform analyses similar to those we performed by using its own data to compare risk ratings with actual lender default rates. Further, SBA could use its own data to develop alternate measures, such as currency rates, as performance benchmarks. As we did in our analyses, SBA could compare how well lender risk ratings predicted actual performance to how well an alternate measure predicted lenders' actual performance. Because of data limitations, our analyses focused on lenders with larger SBA-guaranteed portfolios. As a result, we were unable to determine how well these alternate measures predict the performance of lenders with smaller portfolios, but SBA has more years of data available to facilitate such analyses. Without performing its own assessment, the agency may not be able to identify issues with the model's ability to reasonably predict lender performance and notify the contractor. As a result, SBA may miss opportunities to identify risky lenders and mitigate the risks they pose to SBA's portfolio. SBA Does Not Use Lender Risk Ratings to Target Lenders for On-Site Review or Tailor the Scope of the Reviews: SBA Has Used the Lender Risk Rating System to Conduct Some Off-Site Monitoring of Lenders and Their Portfolios: SBA uses its lender risk rating system to conduct off-site monitoring of lenders and their portfolios. In addition to routine on-site reviews, federal financial regulators and lenders use off-site tools to monitor lenders' performance and portfolio trends. As part of a comprehensive risk management strategy, federal financial regulators use risk ratings to conduct portfolio analysis and identify problem trends. FDIC relies on a number of off-site monitoring tools to perform horizontal analyses (that is, compare similar lenders) and analyze emerging lending trends. For example, when subprime lending first emerged, the agency tracked the amount of subprime lending that each of its lenders did. The Federal Reserve uses various off-site monitoring tools that focus on asset quality and credit risk to identify banks whose ratings appear to have deteriorated since their most recent on-site reviews. For example, it analyzes information related to nonperforming and performing loans and the changing composition of loan concentrations. OCC uses its core assessment process to assess how much risk lenders have taken on and the quality of their risk management to determine aggregate risk. Lenders also use off-site monitoring tools to oversee loan portfolios. For example, one 7(a) lender we interviewed uses various scoring models to determine, among other things, how each loan's risk rating has changed since the loan was originated. Other 7(a) lenders with whom we spoke use off-site monitoring tools that analyze factors such as geography, industry, management quality, company performance, and collateral to predict the risk of loans.
Another 7(a) lender relies on several off-site monitoring systems to track portfolio performance--including delinquencies and trends by state, industry, and North American Industry Classification System (NAICS) code--and forecast losses.[Footnote 38] In addition, bank officials we interviewed stated that they reviewed all troubled loans on a monthly basis. Similarly, SBA uses its lender risk rating system to obtain quarterly performance information on all lenders and determine portfolio trends. SBA officials stated that before they had the risk rating system, they were not able to analyze the performance of all lenders, especially lenders with the smallest volume of SBA-guaranteed loans. SBA has formed a Portfolio Analysis Committee that meets monthly to discuss portfolio trends identified by analyzing loan and lender performance data. Composed of top SBA officials, the committee typically discusses delinquencies, liquidations, charge-offs, and purchase rate trends by delivery method (that is, various SBA loan programs) for the 7(a) and 504 portfolios. The committee also discusses changes in loans' SBPSs (from the end of the quarter in which the loan was disbursed to the most recent quarter) and the scores' performance in ranking loans. To date, SBA has taken some actions as a result of these meetings. For example, SBA officials told us that as a result of discussions about portfolio performance during these meetings, they discontinued an SBA program that allowed borrowers to provide limited documentation. SBA officials told us that the agency also recently began using the results of the lender risk rating system to conduct "performance-based reviews." According to SBA officials, the purpose of these reviews is to perform more in-depth, off-site monitoring that incorporates lenders' information, such as lender financial ratios from call reports, that is currently not part of the lender risk rating system. Specifically, SBA financial analysts are assigned lenders that they will monitor over time. Each year, the analysts will focus on lenders with outstanding balances on their SBA portfolios of at least $10 million that are not scheduled for on-site reviews and on all other preferred lenders regardless of size. With the remaining resources, they will review small problem lenders--for instance, those with guaranteed portfolios that are less than $10 million but that received a lender risk rating of 4 or 5. SBA had conducted 517 of these reviews as of August 2009. SBA Has Not Effectively Integrated Its Lender Risk Rating System into the On-Site Examination Process: Although SBA has begun some off-site monitoring using its risk rating system, it does not use the ratings to target lenders for on-site reviews. FDIC and the Federal Reserve use risk ratings as the primary tool for identifying lenders that need to be reviewed.[Footnote 39] For example, FDIC officials stated that they rely on off-site monitoring to determine the scope and frequency of on-site exams. Our internal control standards require that agencies assess and mitigate risks using quantitative and qualitative methods and then conduct a thorough and complete analysis of those risks. Although SBA identifies the risks that lenders pose, it does not mitigate these risks because it chooses not to target high-risk 7(a) and 504 lenders for on-site reviews.
Instead, the agency targets lenders for reviews based on the size of their portfolios, focusing primarily on the largest lenders--that is, 7(a) lenders with at least $10 million in their guaranteed loan portfolio and 504 lenders with balances of at least $30 million. Only when prioritizing large lenders for review does SBA consider their risk ratings.[Footnote 40] We found that in calendar years 2005 to 2008, most of SBA's 477 on-site reviews were of large 7(a) and 504 lenders that posed limited risk to SBA. Ninety-nine percent (472 of 477) of the lenders reviewed were large lenders, and 80 percent (380 of 477) posed limited risk to SBA (that is, were rated as a 1, 2, or 3 by the lender risk rating system). The agency has increased the number of on-site reviews performed (from 69 in 2005 to 188 in 2008) because it can now charge lenders for them.[Footnote 41] However, SBA continues to conduct a limited number of reviews of high-risk lenders or those with a lender risk rating of 4 or 5 (see figure 4). In 2005, 20 percent (14 of 69) of SBA's on-site reviews were of lenders that posed significant risk to the agency. In 2008, that proportion was 22 percent (42 of 188 reviews). As a result, a substantial number of high-risk lenders were not reviewed each year. For example, in 2008, only 3 percent of the 1,587 lenders that posed significant risk to SBA were reviewed. Because SBA relies on lenders' size to target lenders for on-site reviews, smaller lenders that, based on their high-risk ratings, pose significant risk to SBA have not received oversight consistent with their risk levels. Figure 4: SBA On-Site Reviews, 2005 to 2008: [Refer to PDF for image: stacked vertical bar graph] Year: 2005; Frequency, Risk rating 1: 7; Frequency, Risk rating 2: 18; Frequency, Risk rating 3: 30; Frequency, Risk rating 4: 6; Frequency, Risk rating 5: 8; Total: 69. Year: 2006; Frequency, Risk rating 1: 3; Frequency, Risk rating 2: 7; Frequency, Risk rating 3: 37; Frequency, Risk rating 4: 3; Frequency, Risk rating 5: 13; Total: 63. Year: 2007; Frequency, Risk rating 1: 32; Frequency, Risk rating 2: 58; Frequency, Risk rating 3: 42; Frequency, Risk rating 4: 17; Frequency, Risk rating 5: 8; Total: 157. Year: 2008; Frequency, Risk rating 1: 61; Frequency, Risk rating 2: 44; Frequency, Risk rating 3: 41; Frequency, Risk rating 4: 26; Frequency, Risk rating 5: 16; Total: 188. Source: GAO analysis of SBA data. [End of figure] Our findings are similar to those of SBA's Inspector General. In a 2007 report, the Inspector General concluded that SBA had made limited use of lender risk ratings to guide its oversight activities.[Footnote 42] It observed that the agency reviewed large lenders regardless of their risk ratings and did not conduct on-site reviews of smaller lenders with high-risk ratings. The report recognized that some of the smaller lenders might not have a sufficient number of loans in their portfolio to warrant an on-site review but noted that others could have a significant number of loans. The Inspector General recommended that SBA develop an on-site review plan or agreed-upon procedures for all high-risk 7(a) lenders with guaranteed loan portfolios in excess of $4 million. We agree that although not all of the small lenders with high-risk ratings warrant more targeted monitoring, some do. Of the 1,545 high-risk lenders that we found were not reviewed in 2008, 215 lenders had an outstanding portfolio of at least $4 million.
According to SBA officials, the agency is developing agreed-upon procedures for conducting additional reviews of smaller lenders in response to the Inspector General's recommendation. Lender Risk Ratings Do Not Inform the Scope of SBA's On-Site Reviews, and Reviews Do Not Include an Assessment of Lenders' Credit Decisions: Unlike federal financial regulators, SBA does not rely on its lender risk ratings to help focus the scope of on-site reviews, and the reviews do not include an assessment of the lenders' credit decisions. The federal financial regulators we interviewed rely on results from their off-site monitoring systems to identify which areas of a bank's operations they should review more closely. Using the results of the off-site monitoring, they are able to tailor the scope of their on-site reviews to the specific areas of lenders' operations that pose the most risk to the bank. In addition, during on-site reviews, the federal financial regulators often include an assessment of the quality of lenders' credit decisions. They told us that the results of their on-site reviews helped not only to assess the risk that lenders posed, but also to identify emerging lending trends and areas of banking operations that may pose significant, new risk to banks in the future. They are then able to use the results to inform their off-site monitoring systems. For example, regulators stated that when their on-site reviews showed an increase in subprime lending, they incorporated subprime lending data into their off-site monitoring tools. Although SBA's mission differs from the mission of the federal financial regulators, internal control standards require all federal agencies to identify and analyze risk, as well as to determine the best way to manage or mitigate it. According to SBA's Standard Operating Procedure for on-site reviews, the agency assesses a lender's (1) portfolio performance, (2) SBA management and operations, (3) credit administration practices, and (4) compliance with statutes and SBA regulations and policies. For the portfolio performance component, SBA uses L/LMS data to review the size, composition, performance, and credit quality of a lender's SBA portfolio. When assessing a lender's SBA operations, SBA evaluates, among other things, the lender's internal policy and procedural guidance on SBA lending; the competence, leadership, and administrative ability of management and staff who have responsibility for the SBA loan portfolio; and the adequacy of the lender's internal controls. For the credit administration component, SBA assesses the lender's policies and procedures for originating, servicing, and liquidating SBA loans. An SBA contractor then uses this information during file reviews to determine the degree to which lending policies and procedures are followed. For the compliance component, SBA's contractor performs file reviews that focus on the lender's compliance with SBA-specific requirements. When performing file reviews, contractor staff do not rely on results from the lender risk rating system to tailor the scope of the reviews. Instead, contractor staff rely on a standard form--the lender review checklist--to conduct all file reviews, regardless of the lender risk rating or other information available to SBA about the lender's portfolio. Moreover, these file reviews do not include an assessment of the quality of the credit decisions made by lenders.
Rather, the lender review checklist focuses primarily on the lenders' adherence to SBA policies, including those based on statutes or regulations, when making SBA-guaranteed loans. The checklist includes questions related to, among other things, the determination of borrower eligibility (including whether the borrower had any other outstanding SBA loans that are not current), the calculation of collateral value, and evidence that all required forms were obtained and reviewed. According to SBA officials, the file reviews focus on compliance with SBA policy because it is not SBA's role to evaluate lenders' credit decisions. The officials did not believe that the agency should be setting policy or underwriting standards for lenders. However, because SBA relies on lenders with delegated underwriting authority to make the majority of its loans, we believe that SBA should take a more active role in ensuring that these lenders are making sound credit decisions. We originally reported on SBA's compliance-based reviews in 2002, when we found that SBA's automated checklist lacked the substance to provide a meaningful assessment of lender performance.[Footnote 43] We reported that SBA's on-site reviews were based on reviewers' findings from a lender questionnaire and a review checklist in order to ensure objective scoring. The lender questionnaire addressed organizational structure, oversight policy, and controls. SBA officials said that prior to the implementation of the automated worksheet scoring process, on-site reviews were done in a narrative format, and reviewers' assessments of lender performance were subjective. They noted that the worksheet format made the reviewers' assessments of lenders more consistent and objective. As previously mentioned, SBA has since expanded the scope of its on-site reviews to include more than just a compliance component and revised the checklist used to conduct file reviews. But, as noted previously, the revised checklist still focuses on compliance with SBA policies and procedures. An example from our February 2009 report on compliance with the credit elsewhere requirement illustrates SBA's emphasis on ensuring policy compliance rather than verifying lenders' credit decisions during on-site reviews.[Footnote 44] Because the 7(a) and 504 programs are intended to serve borrowers who cannot obtain conventional credit at reasonable terms, lenders making 7(a) and 504 loans must ensure that borrowers meet the credit elsewhere requirement. This statutory requirement stipulates that to receive loans, borrowers must not be able to obtain financing under reasonable terms and conditions from conventional lenders. During an on-site review, the contractor is to determine whether lender policies and practices adhere to SBA's credit elsewhere requirement. SBA's contractor explained that during the review it checks to see that the lender documented its credit elsewhere determination and cited one of the six factors that SBA has determined are acceptable reasons for concluding that a borrower could not obtain credit elsewhere. However, it does not routinely assess the information lenders provide to support credit elsewhere determinations. Contractor staff answer "yes" or "no" on the checklist as to whether "written evidence that credit is not otherwise available on terms not considered unreasonable without guarantee provided by SBA" was in the file.
Contractor officials stated that when the documentation standard is not met, the examiner will sometimes look at the factual support in the file to independently determine whether the credit elsewhere requirement was actually met. Because SBA officials choose not to rely on lender risk ratings to inform file reviews conducted during on-site reviews or assess lenders' credit decisions during the reviews, the agency does not have the type of information related to the quality of the underwriting standards and practices of lenders that is necessary to understand the risks that lenders pose to SBA's portfolio. Without this information, the agency cannot make informed improvements to the lender risk rating system that would enable it to take into account emerging lending trends. Conclusions: Because SBA relies heavily on its lenders to determine if loans are eligible for an SBA guarantee and to underwrite the loans, lender oversight is of particular importance. By working with a contractor to develop a lender risk rating system, SBA has taken a positive step toward improving its oversight of lenders. The lender risk rating system enables SBA for the first time to systematically and routinely monitor the performance of all lenders, including lenders with the smallest loan portfolios, which SBA had not routinely monitored. However, SBA does not ensure that its contractor follows sound practices when validating the system. Guidance from the federal financial regulators we interviewed states, among other things, that validation should be performed by an independent party and should routinely reassess the factors used to determine risk, taking into consideration changes in the environment (such as changes in industry trends). SBA did not require its contractor to ensure that personnel other than the staff who developed the model validated it or to routinely reassess the factors used in the system as part of its validations. Unless SBA ensures that its contractor follows sound model validation practices, the agency's ability to identify inaccurate ratings, detect systemic or structural issues with the design of the model, and determine whether the ratings are deteriorating over time as economic conditions change will be limited. SBA's contractor is currently redeveloping the lender risk rating system to improve its predictive ability. However, the benefits that may be achieved through the redeveloped lender risk rating system will be limited if SBA continues the practice of not ensuring that its contractor adopts sound validation practices. In particular, testing to ensure that the system effectively evaluates risk is an important element of improving a risk rating system, regardless of whether such testing occurs during routine validation efforts or during model redevelopment. In addition, contrary to federal financial regulator guidance and our internal control standards, SBA has not used its own data to conduct independent assessments of the risk rating system to help ensure the usefulness of the risk ratings. We found that SBA data could be useful for developing alternate measures of lender performance in order to independently validate the lender risk rating system's results. Without performing its own assessment, the agency may not be able to identify issues with the model's ability to reasonably predict lender behavior or to notify the contractor of any suspected deterioration.
As a result, SBA may miss opportunities to identify risky lenders and mitigate the risks they pose to SBA's portfolio. If SBA improves its validation of the lender risk ratings, the agency could rely more on them to determine which lenders need an on-site review. Currently, unlike FDIC and the Federal Reserve, SBA does not take full advantage of its risk ratings to set the schedules for on-site reviews. The agency targets lenders for on-site reviews based on size rather than risk level. As a result, we found that SBA conducted on-site reviews of only 3 percent of the lenders that the lender risk rating system identified as high risk in 2008. Of the 1,545 high-risk lenders not reviewed that year, 215 had an outstanding SBA portfolio of at least $4 million. Relying more on the risk ratings to target lenders for review would enable the agency to focus on the lenders that pose the most risk to the agency. Although SBA has made improvements to its off-site monitoring of lenders, the agency will not be able to substantially improve its lender oversight efforts unless it improves its on-site review process. Federal financial regulators rely on results from their off-site monitoring to tailor the scope of their on-site reviews. SBA does not rely on its lender risk ratings to inform file reviews conducted during on-site reviews but rather consistently uses a checklist to examine lenders. In addition, federal financial regulators routinely assess the quality of lenders' credit decisions as part of their on-site examination process. SBA fails to include this component but instead focuses more on compliance with SBA policies and procedures. For example, rather than assessing the quality of lender underwriting, contractor staff focus on whether lenders ensured that the borrowers met eligibility requirements, including whether borrowers had any other outstanding SBA loans that are not current. By including an assessment of lenders' credit decisions as a routine part of its on-site review process, SBA would be able to determine the quality of the lenders' underwriting standards and practices and make any necessary changes to its lender risk rating system to ensure that the tool is relevant and includes emerging lending trends. Recommendations for Executive Action: We recommend that the Administrator of the Small Business Administration take the following four actions: To ensure that the lender risk rating system effectively evaluates risk, when validating the system and undertaking any redevelopment efforts, the Administrator should: * ensure that SBA's contractor follows sound model validation practices. These practices should include (1) testing of the lender risk rating system data, processes, and results, including a routine reassessment of which factors are the most predictive of lender performance; (2) utilizing an independent party to conduct validations; and (3) maintaining complete documentation of the validation process and results. * use SBA's own data to assess how well the lender risk ratings predict individual lender performance. To make better use of the lender risk rating system in SBA's oversight of lenders, the Administrator should: * develop a strategy for targeting lenders for on-site reviews that relies more on SBA's lender risk ratings. * consider revising SBA policies and procedures for conducting on-site reviews.
These revised policies and procedures could require staff to (1) use lender risk ratings to tailor the scope of file reviews performed during on-site reviews to areas that pose the greatest risk, (2) incorporate an assessment of lenders' credit decisions in file reviews, and (3) use the results of expanded file reviews to identify information, such as emerging lending trends, that could be incorporated into its lender risk rating system. Agency Comments and Our Evaluation: We requested SBA's comments on a draft of this report, and the Associate Administrator of the Office of Capital Access provided written comments that are presented in appendix II. SBA generally agreed with our recommendations and outlined some steps that it plans to take to address them. The agency also provided one technical comment, which we incorporated. SBA provided detailed comments on each of our four recommendations. In response to our recommendation to ensure that SBA's contractor follows sound model validation techniques, SBA noted that the agency is currently undertaking a redevelopment of its lender risk rating system and plans to ensure that best practices are incorporated into the redevelopment validation process. According to the agency, the redevelopment contract will give SBA greater flexibility to reassess the predictiveness of the factors used in the model and to refine the model if necessary. SBA stated that it is also developing an independent review process as well as increasing the level of documentation of the validation process. Regarding our recommendation to use its own data to assess how well the lender risk ratings predict individual lender performance, SBA stated that although it remains confident that the lender risk ratings provide accurate predictions, the agency will determine whether alternative measures would be useful to supplement the lender risk ratings. In response to our recommendation to develop a strategy for targeting lenders for on-site review that relies more on the lender risk ratings, SBA stated that it agreed with our finding that between 2005 and 2008 on-site reviews had been limited and primarily focused on the largest lenders, but pointed out that the agency had significantly increased the number of lenders reviewed since it began charging for on-site reviews late in fiscal year 2007. The agency also noted that the largest lenders account for approximately 85 percent of SBA's entire guaranteed portfolio, while the high-risk lenders that were not reviewed in 2008 represent 2 percent of SBA's total 7(a) and 504 portfolios. In our report, we recognize that while not all of the small lenders with high risk ratings warrant more targeted monitoring, some do. Of the 1,545 high-risk lenders that we found were not reviewed in 2008, 215 lenders had significant portfolios--that is, portfolios of at least $4 million. While SBA indicated that it plans to continue to focus on-site reviews on the largest lenders that account for the majority of the guaranteed portfolio, it stated that it will consider revising its internal policies to make better use of the lender risk ratings to prioritize on-site reviews. Regarding our recommendation to consider revising policies and procedures for conducting on-site reviews, SBA stated that the agency is in the process of reprocuring its on-site review contract. 
According to the agency, SBA included the ability to conduct on-site reviews that can be better tailored to specific concerns about individual lender performance as part of the reprocurement process. SBA also stated that the agency is in the process of evaluating our recommendation to include an assessment of lender credit decisions in the on-site review process and will investigate ways to use the results of the on-site reviews to inform the lender risk rating system. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Administrator of the Small Business Administration, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at [hyperlink, http://www.gao.gov]. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Signed by: William B. Shear: Director, Financial Markets and Community Investment: [End of section] Appendix I: Objectives, Scope, and Methodology: In this report, we examined (1) how the Small Business Administration's (SBA) risk rating system compares with the off-site monitoring tools used by federal financial regulators and lenders and the system's usefulness for predicting lender performance and (2) how SBA uses the lender risk rating system in its lender oversight activities. To determine how SBA's lender risk rating system compares with off-site monitoring tools used by federal financial regulators and lenders, we conducted interviews and reviewed documents to identify common industry standards. We interviewed officials from three federal financial regulators--the Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (the Federal Reserve), and the Federal Deposit Insurance Corporation (FDIC)--five of the largest 7(a) lenders, and the five largest 504 lenders.[Footnote 45] We identified the largest lenders based on the size of their SBA-guaranteed portfolio in 2007, the most recent data available when we began our review.[Footnote 46] The documents we reviewed included relevant literature, procedural manuals and other related federal guidance to banks on loan portfolio monitoring, and lender procedural manuals. We then obtained and analyzed documents from SBA on its lender risk rating system and conducted interviews with agency and contractor officials responsible for maintaining the system to determine how the system was developed and validated. We assessed SBA's lender risk rating system against common industry standards and our internal control standards.[Footnote 47] In addition, we reviewed our previous work on SBA and guidance on model validation from the Basel Committee on Banking Supervision, which provides a forum for banking regulators from around the world to regularly cooperate on banking supervisory matters and develop common guidelines.[Footnote 48] To assess the lender risk rating system's usefulness for predicting lender performance, we performed independent statistical tests to determine how well it predicted individual lender performance.
To perform these tests, we first obtained the following data from SBA: administrative data on loans approved in 2003 through the end of 2007 (including the date the loan was approved, the size of the loan, and whether and when the loan was purchased); the March 2007 and March 2008 lender performance reports containing risk ratings; and the currency rate for each lender.[Footnote 49] We assessed the reliability of these data by reviewing information about the data and performing electronic data testing to detect errors in completeness and reasonableness. We found that the data were sufficiently reliable for the purposes of this report. Using SBA's data, we undertook a number of evaluative steps to test the agency's model. First, we assessed how well the lender risk ratings predicted lender default rates (our measure of actual lender performance). In order to test how well the lender risk ratings predicted lender performance, we estimated how well a lender performed during either the year or 6 months after the score was developed (depending on the amount of data available) using a logit regression. A logit regression is a statistical technique that estimates how the odds of an outcome change with an attribute of the unit of analysis. In our case, we estimated how the odds of a loan being purchased by SBA varied by the lender that made the loan. Additionally, we controlled for the age of loans and how default rates for all loans changed over the year or 6 months. To control for the age and changing default rates over time, we employed a methodology called a discrete time hazard model. We restructured the data so that there was a separate observation for every quarter that a loan was at risk of being purchased. Then we estimated a logit regression and predicted whether the loan was purchased that quarter. In that regression, we included a dummy variable for each lender, a dummy variable for each quarter, and a dummy variable for each quarter since that loan was approved, to capture the age of the loan.[Footnote 50] The following describes the regression equation we used: logit(P(loan i was purchased at time t)) = a_l + a_t + a_d, where the parameters of interest, a_l, can be transformed to express the relative odds of a loan being purchased or defaulting for each lender, with one lender excluded as a reference. We used the coefficients a_l as the measures of lender risk. In addition, the coefficients a_t control for the differential rate of default by time period, and the coefficients a_d control for the age of the loans. Once we estimated the performance for each lender, we matched it with each lender's record in the lender performance report, which contained the risk rating. For 7(a) loans, we matched our performance measures with the lender risk rating using a "crosswalk" file obtained from SBA.[Footnote 51] Because the data we obtained from SBA only included loans that were approved from January 2003 to December 2007 and a lender had to have made at least 100 loans during that time period to make our analysis meaningful, we were only able to obtain measures for 308 of the 4,673 7(a) lenders in the March 2008 lender performance report. We were more likely to obtain measures for larger lenders.[Footnote 52] For example, we were able to obtain measures for 56 of the 60 lenders with more than $100 million in outstanding SBA-guaranteed loan balances.
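As an illustration of the restructuring and regression just described, the following minimal Python sketch builds a synthetic person-period data set and estimates lender effects with a logit. It is not the code behind our analysis; the lender names, hazard values, and column names are all hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic setup: four hypothetical lenders with different true purchase
# (default) hazards, loans approved in quarters 0 through 3, and an
# observation window ending at quarter 8.
hazard = {"A": 0.05, "B": 0.10, "C": 0.15, "D": 0.20}
last_qtr = 8
rows = []
for _ in range(400):
    lender = rng.choice(list(hazard))
    approved = int(rng.integers(0, 4))
    # One observation for every quarter the loan is at risk of being
    # purchased (the person-period restructuring described above).
    for qtr in range(approved + 1, last_qtr + 1):
        purchased = int(rng.random() < hazard[lender])
        rows.append({
            "lender_id": lender,
            "qtr": qtr,                     # calendar-quarter dummy
            "age": min(qtr - approved, 4),  # loan age in quarters, capped at 4 here
            "purchased": purchased,
        })
        if purchased:  # a purchased loan leaves the risk set
            break
panel = pd.DataFrame(rows)

# Logit with dummy variables for lender, calendar quarter, and loan age;
# the lender coefficients (one lender absorbed as the reference) are the
# relative log-odds measures of lender risk.
fit = smf.logit("purchased ~ C(lender_id) + C(qtr) + C(age)", data=panel).fit(disp=False)
print(fit.params.filter(like="lender_id"))

Exponentiating a lender's coefficient gives that lender's odds of purchase in a quarter relative to the reference lender, which is how the a_l parameters above can be read.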
In all, the 308 lenders, plus the lender excluded as the reference case, represented approximately 79 percent of the outstanding balance and 85 percent of the outstanding loans reported in the March 2008 lender performance report. For 504 lenders, we were able to obtain measures for 86 of the 270 lenders. We were able to obtain measures for 47 of the 48 lenders in the largest peer group--that is, those lenders with more than $100 million in outstanding SBA-guaranteed loan balances. To determine how SBA uses the lender risk rating system in its lender oversight activities, we reviewed agency documents and conducted interviews to document SBA's practices for assessing and monitoring the risk of lenders and loan portfolios. We then compared these practices against (1) the industry standards we identified through our interviews with federal financial regulators and lenders and reviews of their documents and (2) our internal control standards. We also obtained and analyzed SBA data on risk ratings and on-site examinations from 2005 through 2008 to determine the role that the lender risk ratings played in identifying lenders for an on-site review. To analyze the data on risk ratings and on-site examinations, we had to make a number of assumptions because the risk ratings were reported by quarter and we planned to report them by year. First, we assigned lender risk ratings in two different ways. For those lenders that were reviewed, we assigned them the risk rating that they received during the quarter that immediately preceded the on-site review. For those lenders that were not reviewed, we assigned them the lowest risk rating that they received during that given year. Second, we assigned lenders to peer groups in two different ways.[Footnote 53] For those lenders that were reviewed, we assigned them the peer group that they were in during the quarter that immediately preceded their on-site review. For those lenders that were not reviewed, we assigned them the peer group they were in when they received their lowest risk rating. Because lenders are assigned a risk rating four times in a given year, there were some instances when they received the same low-risk rating multiple times in a given year but were in different peer groups when these ratings were assigned. In these instances, we relied on the most recent, lowest-risk rating score. For example, a lender could have received a lender risk rating of 4 in the second, third, and fourth quarter of a given year. However, the lender was in the highest peer group during the second and third quarters and in the second highest peer group in the fourth quarter. We would rely on the most recent quarter's information and assign this lender a risk rating of 4 and the second highest peer group. Third, we determined the on-site review date in two ways. For on-site reviews completed in 2005 and 2006, we relied on the date that the final report for the on-site review was issued to determine when an on-site review was completed. For on-site reviews completed in 2007 and 2008, we relied on an additional variable included in the data that identified the date the on-site review was completed.
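The following sketch illustrates the assignment rules described above for lenders that were not reviewed. It assumes that "lowest risk rating" means the numerically lowest rating a lender received during the year (an interpretation, not a statement of our exact procedure), and the lender identifiers, quarters, and peer groups are hypothetical.

import pandas as pd

# Quarterly risk ratings (1 = least risky, 5 = most risky) for two
# hypothetical lenders that were not reviewed; all values illustrative.
quarterly = pd.DataFrame({
    "lender_id":  ["X", "X", "X", "Y", "Y"],
    "quarter":    [2, 3, 4, 1, 2],
    "rating":     [4, 4, 4, 2, 3],
    "peer_group": ["highest", "highest", "second highest", "highest", "highest"],
})

def assign_annual(group: pd.DataFrame) -> pd.Series:
    # Keep the lowest rating the lender received during the year
    # ("lowest" is read here as the numeric minimum, an assumption).
    lowest = group["rating"].min()
    # Among quarters with that rating, use the most recent one, which
    # determines the peer group assigned (the tie-breaking rule above).
    latest = group[group["rating"] == lowest].sort_values("quarter").iloc[-1]
    return pd.Series({"rating": lowest, "peer_group": latest["peer_group"]})

annual = quarterly.groupby("lender_id")[["quarter", "rating", "peer_group"]].apply(assign_annual)
print(annual)
# Lender X receives a rating of 4 and the second highest peer group
# (from the fourth quarter), matching the example described above.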
We conducted this performance audit from August 2008 to November 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. [End of section] Appendix II: Comments from the Small Business Administration: U.S. Small Business Administration: Washington D.C. 20416: October 19, 2009: Mr. William Shear: Director, Financial Markets and Community Investment Issues: U.S. Government Accountability Office: 441 G Street, N.W. Washington, DC 20548: Re: Report on U.S. Small Business Administration's (SBA) Loan and Lender Monitoring System (L/LMS): Dear Mr. Shear: Thank you for the opportunity to respond to the draft report prepared by the Government Accountability Office (GAO) titled "Actions Needed to Improve the Usefulness of the Agency's Lender Risk Rating System," report number GAO-10-53. We would like to compliment you and your staff on the work that went into the report. We are pleased by the draft report's finding that the system (L/LMS) "was generally successful in distinguishing between higher- and lower-risk lenders," and that "by developing the risk rating system, SBA has taken a positive step toward improving its oversight of lenders...the lender risk rating system enables SBA for the first time to systematically and routinely monitor the performance of all lenders, including lenders with the smallest loan portfolios, which SBA had not routinely monitored." We also appreciate GAO's recommendation that SBA should rely more on L/LMS in its targeting of lenders for on-site reviews, as it further demonstrates GAO's belief that L/LMS is a useful tool in evaluating the relative risk of individual lenders to SBA. We understand that GAO was asked to compare SBA's risk rating system against those used by federal financial regulators. However, we agree with GAO's statement that "although the federal financial regulators and SBA both oversee lenders, their missions differ, and as a result they may choose to focus on different variables in conducting off-site monitoring." We would like to further note that the federal financial regulators oversee the majority of SBA's lending partners; therefore, SBA's lender oversight program is designed to provide effective monitoring of lenders' SBA operations while also avoiding duplication of the federal financial regulators' oversight efforts. We believe L/LMS is a critical component of this endeavor. SBA generally agrees with GAO's recommendations, which focused on two main issues: the L/LMS validation process and the use of L/LMS results in the on-site review process. SBA's response to each of the four recommendations follows. In addition, we have included a technical correction to GAO's draft report in an attachment to this letter. 1. Ensure that SBA's contractor follows sound model validation practices. These practices should include (1) testing of the lender risk rating system data, processes, and results, including a routine reassessment of which factors are the most predictive of lender performance; (2) utilizing an independent party to conduct validations; and (3) maintaining complete documentation of the validation process and results. SBA generally agrees with this recommendation and is already taking steps to address it.
As noted in the report, SBA is currently undertaking a redevelopment of L/LMS; thus the timing of GAO's recommendations is helpful for SBA to ensure that best practices are incorporated into the redevelopment validation process. SBA and its contractors are currently working to increase the level of documentation of the validation process to be consistent with the more rigorous standards established by federal financial regulators. Furthermore, under the new L/LMS contract, SBA has greater flexibility to reassess the predictiveness of the factors used in the model and refine the model if necessary. Finally, in regard to the recommendation that SBA utilize an independent party to conduct validations, we appreciate GAO's statement that an independent party may include internal staff not involved in the development of the model, such as internal audit staff or a risk management unit. This provides SBA with a workable solution for achieving independent validation without violating the proprietary rights of our contractors. We are in the process of establishing an independent review process, which will be utilized in our current redevelopment. 2. Use SBA's own data to assess how well the lender risk ratings predict individual lender performance. SBA remains confident that the lender risk ratings provide an accurate prediction of lender performance. However, SBA will look into whether alternate measures would be useful to supplement the lender risk ratings. 3. Develop a strategy for targeting lenders for on-site reviews that relies more on SBA's lender risk ratings. SBA believes that its on-site review is an effective tool in the monitoring of 7(a) and 504 participants. While we agree with GAO's comments that between 2005 and 2008 on-site reviews were limited and primarily focused on the largest lenders, we wish to point out that we have significantly increased the number of lenders reviewed since SBA began charging for the cost of on-site reviews late in FY2007. As noted in the draft report, SBA generally conducts on-site reviews of 7(a) lenders with SBA loan portfolios of $10 million or more and 504 lenders with SBA-guaranteed debentures totaling $30 million or more. These lenders account for approximately 85 percent of SBA's entire guaranteed portfolio. Moreover, SBA notes that the lenders with high risk ratings that were not reviewed in the 2008 review cycle only represent approximately 2 percent of SBA's total 7(a) and 504 portfolio. SBA chooses to focus its resources on reviewing lenders that represent the greatest risk to taxpayer dollars; therefore it must consider both a lender's risk rating and the impact of that lender on the entire SBA portfolio. As noted in 13 C.F.R. 120.1051, SBA considers several factors in determining when to perform an on-site review, including the lender's risk rating, the size of the lender's portfolio, results of prior on- site reviews, responsiveness in correcting deficiencies noted in prior reviews, and other risk-related information. SBA will consider revising its internal policies to better reflect the use of these additional factors in prioritizing on-site reviews; however, we expect to continue to focus our on-site review resources on 7(a) lenders with SBA loan portfolios of $10 million or more and 504 lenders with SBA-guaranteed debentures totaling $30 million or more, as these lenders pose the greatest potential risk to the entire SBA portfolio. 4. Consider revising SBA policies and procedures for conducting on-site reviews. 
These revised policies and procedures could require staff to (1) use lender risk ratings to tailor the scope of file reviews performed during on-site reviews to areas that pose the greatest risk, (2) incorporate an assessment of lenders' credit decisions in file reviews, and (3) use the results of expanded file reviews to identify information, such as emerging lending trends, that could be incorporated into its lender risk rating system. SBA is in the process of reprocuring its on-site review contract. As part of the reprocurement process, SBA included the ability to conduct reviews that can be better tailored to specific concerns about an individual lender, including portfolio performance problems as evidenced by its risk rating. We are in the process of evaluating GAO's recommendations regarding the addition of assessment of lender credit decisions in the reviews to determine how to approach this recommendation. We will also investigate ways in which the results of on-site reviews can inform the risk rating system. Once again, thank you for the opportunity to comment on your report. Please contact Tiffani Cooper, GAO Liaison, at (202) 205-6702 should you have any questions. Sincerely, Signed by: Eric R. Zarnikow: Associate Administrator: Office of Capital Access: [End of section] Appendix III: Predictive Performance of the March 2007 and March 2008 Lender Risk Ratings: We performed two types of statistical tests to determine how well SBA's lender risk ratings predicted individual lender performance.[Footnote 54] For both tests, we focused on how well the March 2007 lender risk ratings predicted the performance of lenders for the following year and how well the March 2008 lender risk ratings predicted the performance of lenders for the following 6 months. First, we compared raw scores from SBA's lender risk rating system to actual default rates for 7(a) and 504 lenders to determine how well the lender risk ratings identified the best and worst performing lenders. We divided lenders into two groups--those with lender default rates in the top 50 percent of all lender default rates and those with default rates that were in the bottom 50 percent of all lender default rates. We found that SBA's risk ratings were generally successful at distinguishing the performance of about two-thirds of the 7(a) and 504 lenders in our sample (see tables 2 and 3). For example, table 2 shows that 96 of the approximately 300 lenders in our sample were in the top 50 percent based on the March 2007 lender risk ratings and actual lender default rates, while another 99 lenders were in the bottom 50 percent based on both rankings. We also compared how well an alternate measure of lender performance--the currency rate--divided lenders into these same two performance groups and found that overall, it also correctly separated about two-thirds of the lenders in our sample. Table 2: Comparison of Alternative Rankings and Rankings Based on 2007 Lender Risk Rating Raw Scores, 2007 Currency Rates, and 2008 Lender Risk Rating Raw Scores for 7(a) Lenders: Comparison of March 2007 lender risk rating and defaults between March 2007 and March 2008: Ranking based on March 2007 lender risk rating raw score: Alternative ranking based on defaults: Top 50%: Top 50%: 96; Bottom 50%: 55; Total: 152. Alternative ranking based on defaults: Bottom 50%; Top 50%: 55; Bottom 50%: 99; Total: 154. Alternative ranking based on defaults: Total; Top 50%: 151; Bottom 50%: 155; Total: 306. 
Comparison of March 2007 currency rate and defaults between March 2007 and March 2008: Ranking based on March 2007 currency rate: Alternative ranking based on defaults: Top 50%; Top 50%: 88; Bottom 50%: 64; Total: 152. Alternative ranking based on defaults: Bottom 50%; Top 50%: 62; Bottom 50%: 92; Total: 154. Alternative ranking based on defaults: Total; Top 50%: 150; Bottom 50%: 156; Total: 306. Comparison of March 2008 lender risk rating and defaults between March 2008 and September 2008: Ranking based on March 2008 lender risk rating raw score: Alternative ranking based on defaults: Top 50%; Top 50%: 87; Bottom 50%: 66; Total: 153. Alternative ranking based on defaults: Bottom 50%; Top 50%: 64; Bottom 50%: 91; Total: 155. Alternative ranking based on defaults: Total; Top 50%: 151; Bottom 50%: 157; Total: 308. Source: GAO analysis of SBA data. Note: The number of lenders in the March 2007 lender performance report that we were able to match with the default rates we produced was two fewer than the number in the March 2008 lender performance report. [End of table] Table 3: Comparison of Alternative Rankings and Rankings Based on 2007 Lender Risk Rating Raw Scores, 2007 Currency Rates, and 2008 Lender Risk Rating Raw Scores for 504 Lenders: Comparison of March 2007 lender risk rating and defaults between March 2007 and March 2008: Ranking based on March 2007 lender risk rating raw score: Alternative ranking based on defaults: Top 50%; Top 50%: 23; Bottom 50%: 19; Total: 42. Alternative ranking based on defaults: Bottom 50%; Top 50%: 14; Bottom 50%: 30; Total: 44. Alternative ranking based on defaults: Total; Top 50%: 37; Bottom 50%: 49; Total: 86. Comparison of March 2007 currency rate and defaults between March 2007 and March 2008: Ranking based on March 2007 currency rate: Alternative ranking based on defaults: Top 50%; Top 50%: 28; Bottom 50%: 14; Total: 42. Alternative ranking based on defaults: Bottom 50%; Top 50%: 13; Bottom 50%: 31; Total: 44. Alternative ranking based on defaults: Total; Top 50%: 41; Bottom 50%: 45; Total: 86. Comparison of March 2008 lender risk rating and defaults between March 2008 and September 2008: Ranking based on March 2008 lender risk rating raw score: Alternative ranking based on defaults: Top 50%; Top 50%: 24; Bottom 50%: 18; Total: 42. Alternative ranking based on defaults: Bottom 50%; Top 50%: 17; Bottom 50%: 27; Total: 44. Alternative ranking based on defaults: Total; Top 50%: 41; Bottom 50%: 45; Total: 86. Source: GAO analysis of SBA data. [End of table] We used the same data to perform the second statistical test: determining the correlation between the rankings based on lender default rates and (1) the lender risk ratings and (2) the alternate measure--currency rate. We found that for both 7(a) and 504 lenders, there was a positive correlation between actual performance (lender default rates) and the lender risk ratings and currency rate. For the largest 7(a) lenders (that is, those lenders with SBA-guaranteed portfolios of at least $100 million), the lender risk ratings were more strongly correlated with the lender default rates than was the currency rate. For 504 lenders, we found that both measures--the lender risk rating and the currency rate--performed about the same (see table 4).
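As an illustration of how a correlation of this kind can be computed, the minimal sketch below applies a rank correlation to hypothetical per-lender values; it is not necessarily the exact statistic or code behind table 4, which reports the correlations from our actual analysis.

import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-lender measures: the raw score from the risk rating
# system and the relative odds of default estimated from the hazard
# regression (synthetic values, for illustration only).
lenders = pd.DataFrame({
    "raw_rating_score": [12.1, 45.3, 33.7, 8.9, 27.5, 51.0],
    "relative_odds":    [0.62, 2.10, 1.45, 0.40, 1.10, 2.75],
})

# Rank correlation between the two measures; a positive value indicates
# that the two measures rank lenders similarly (the direction depends on
# how the raw score is oriented).
rho, pval = spearmanr(lenders["raw_rating_score"], lenders["relative_odds"])
print(f"rank correlation = {rho:.2f} (p = {pval:.2f})")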
Table 4: Results of Correlation Analysis: 

Measure: Raw rating score from March 2007; Comparison: Lender's relative odds of default from March 2007 through March 2008: 
7(a): $100 million or more: .48 (50); between $10 million and $100 million: .34 (183); total: .31 (308). 
504: $100 million or more: .42 (39); between $30 million and $100 million: .42 (47); total: .40 (86). 

Measure: Gross currency rate from March 2007; Comparison: Lender's relative odds of default from March 2007 through March 2008: 
7(a): $100 million or more: .17 (50); between $10 million and $100 million: .37 (183); total: .35 (308). 
504: $100 million or more: .48 (39); between $30 million and $100 million: .42 (47); total: .42 (86). 

Measure: Raw rating score from March 2008; Comparison: Lender's relative odds of default from March 2008 through September 2008: 
7(a): $100 million or more: .54 (56); between $10 million and $100 million: .21 (187); total: .23 (308). 
504: $100 million or more: .32 (47); between $30 million and $100 million: .44 (39); total: .38 (86). 

Measure: Gross currency rate from March 2008; Comparison: Lender's relative odds of default from March 2008 through September 2008: 
7(a): $100 million or more: .30 (56); between $10 million and $100 million: .16 (187); total: .16 (308). 
504: $100 million or more: .34 (47); between $30 million and $100 million: .37 (39); total: .34 (86). 

Source: GAO analysis of SBA data. 
Note: The numbers in parentheses represent the number of lenders in each category. 
[End of table] 

[End of section] 

Appendix IV: Small Business Predictive Score: 

The Small Business Predictive Score (SBPS) predicts loan performance. Specifically, it predicts the likelihood of severe delinquency (61 or more days past terms) over the next 18 to 24 months, including bankruptcies and charge-offs.[Footnote 55] It is an off-the-shelf product that was developed by Fair Isaac using consumer and business credit bureau data. The model can produce scores--ranging from 1 to 300, with 1 being highest risk and 300 being lowest risk--using a mix of consumer and business data, consumer credit bureau data only, or Dun & Bradstreet business data only. According to SBA officials, approximately 74 percent of its 7(a) loans and 83 percent of its 504 loans are scored using both consumer and business data. Approximately 17 percent of its 7(a) loans and 8 percent of its 504 loans are scored using consumer data only, while 9 percent of both its 7(a) and 504 loans are scored with Dun & Bradstreet data only. As we reported in 2004, Dun & Bradstreet collects these data from various sources and processes them through a five-step quality assurance process.[Footnote 56] First, Dun & Bradstreet collects data from more than 150 million businesses globally and updates its databases more than 1 million times daily based on real-time business transactions. Second, it matches SBA records with its own records, achieving at least a 95 percent match on the 11 critical pieces of information used to identify the borrower. Third, Dun & Bradstreet assigns a unique identifier to each company. Fourth, Dun & Bradstreet identifies the corporate linkage of a business's branches or subsidiaries with their parent entity to help SBA understand its complete exposure to borrowers and their parent entities.
Finally, Dun & Bradstreet generates predictive indicators of a business's potential inability to repay a loan. Dun & Bradstreet officials refer to this process as the DUNSRight process. We performed independent tests to determine how well the SBPS predicted the performance of 7(a) loans. Specifically, we used a logit regression to determine how well the SBPS at loan origination predicted the default of loans with disbursement amounts above and below $150,000.[Footnote 57] We examined loans that were approved between 2003 and 2007 and default rates over the period from January 2007 to September 2008. We found that the origination SBPS was predictive for loans both below and above $150,000. However, the SBPS was estimated to have a larger effect on the performance of loans below $150,000. Table 5 shows the coefficients from the logistic regressions we ran. The coefficient estimated for the sample of loans below $150,000 is more negative than that for loans above $150,000, indicating that an increase in the SBPS (which represents a decrease in the predicted risk of the loan) lowers the rate of default by a greater increment. Additionally, as the last row of the table shows, the difference in the coefficients between the two groups is statistically significant. 

Table 5: Predictive Ability of SBPS for Loans below and above $150,000: 
SBPS score, loans below $150,000: -0.0243 (0.000297). 
SBPS score, loans above $150,000: -0.0189 (0.000760). 
Difference between the effects: 0.00555 (0.000815). 
Source: GAO analysis of SBA data. 
Note: Standard errors of the logit regressions are in parentheses. The logistic regressions corrected for the age of the loans and economic conditions. Expressed in terms of a change in odds, a one-point increase in the origination SBPS will lower the odds of default in a specific quarter by 2.4 percent for loans below $150,000 and by 1.9 percent for loans above $150,000. 
[End of table]
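The following sketch shows the general form of such a logit regression (Python with statsmodels, on synthetic data; GAO's actual regressions also corrected for loan age and economic conditions and were run separately for loans below and above $150,000, which this sketch omits):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 5000

    # Synthetic data: SBPS scores range from 1 (highest risk) to 300
    # (lowest risk); a higher score lowers the log-odds of default.
    sbps = rng.uniform(1, 300, size=n)
    p_default = 1 / (1 + np.exp(-(2.0 - 0.02 * sbps)))
    default = rng.binomial(1, p_default)

    model = sm.Logit(default, sm.add_constant(sbps)).fit(disp=False)
    coef = model.params[1]
    print(f"Estimated SBPS coefficient: {coef:.4f}")
    # exp(coef) is the multiplicative change in the odds of default per
    # SBPS point; e.g., exp(-0.0243) is about 0.976, a 2.4 percent drop,
    # matching the interpretation in the note to table 5.
    print(f"Change in odds per SBPS point: {np.exp(coef) - 1:.1%}")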
[End of section] 

Appendix V: GAO Contact and Staff Acknowledgments: 

GAO Contact: William B. Shear, (202) 512-8678 or shearw@gao.gov: 

Staff Acknowledgments: In addition to the contact named above, Paige Smith (Assistant Director), Triana Bash, Ben Bolitzer, Tania Calhoun, Emily Chalmers, Marc Molino, Jill Naamane, Anh Nguyen, Carl Ramirez, and Stacy Spence made key contributions to this report. 

[End of section] 

Footnotes: 

[1] The proceeds of 7(a) loans may be used for working capital and other general business purposes, while the proceeds of 504 loans may be used for fixed capital. Section 7(a) of the Small Business Act, as amended, codified at 15 U.S.C. § 636(a); Section 504 of the Small Business Investment Act of 1958, as amended, codified at 15 U.S.C. § 696. 

[2] GAO, Small Business Administration: New Service for Lender Oversight Reflects Some Best Practices, but Strategy for Use Lags Behind, [hyperlink, http://www.gao.gov/products/GAO-04-610] (Washington, D.C.: June 8, 2004). 

[3] SBA, Office of Inspector General, Oversight of SBA Supervised Lenders, Report no. 8-12 (Washington, D.C.: May 9, 2008). 

[4] The federal financial regulators we selected have policies and procedures for monitoring credit risk that are relevant to SBA. We focused on the largest lenders because they would be the most likely to have off-site monitoring tools similar to SBA's lender risk rating system. According to SBA, there are approximately 5,000 SBA lenders. Although our sample of 10 large lenders is nongeneralizable, it offers perspectives on how some lenders conduct off-site monitoring. 

[5] GAO, Standards for Internal Control in the Federal Government, [hyperlink, http://www.gao.gov/products/GAO/AIMD-00.21.3.1] (Washington, D.C.: November 1999) and Internal Control Management and Evaluation Tool, [hyperlink, http://www.gao.gov/products/GAO-01-1008G] (Washington, D.C.: August 2001). 

[6] The currency rate is the sum of the dollar balances of guaranteed loans that are less than 30 days past due divided by the dollar balance of the total portfolio of guaranteed loans outstanding. 

[7] The American Recovery and Reinvestment Act of 2009 authorized SBA to temporarily increase the maximum 7(a) guarantee from 85 percent to 90 percent. SBA lenders consist of private banks, credit unions, and small business lending companies. Small business lending companies are nondepository institutions licensed by SBA that are not subject to state or federal supervision or examination other than the oversight conducted by SBA. 

[8] A debenture is an unsecured debt backed only by the creditworthiness of the borrower. Debentures have no collateral, and SBA takes a junior lien position on the project property. The yields may vary from high to low, depending on who backs the debenture. 

[9] Public Law No. 104-208, Div. D, § 102, 110 Stat. 3009-724, 3009-725, codified at 15 U.S.C. § 633, as amended. 

[10] [hyperlink, http://www.gao.gov/products/GAO-04-610]. 

[11] The SBPS predicts the likelihood of a loan becoming severely delinquent. 

[12] When a loan defaults, the lender asks SBA to honor the guarantee (that is, purchase the loan). The 12 months' actual purchase rate is calculated by dividing the total gross dollars of the lender's loans purchased during the past 12 months by the sum of the total gross outstanding dollars of SBA loans at the end of the 12-month period and the total gross dollars purchased during the past 12 months. 

[13] The problem loan rate is calculated by dividing the sum of the total gross outstanding dollars of a lender's loans that are 90 days or more delinquent and the gross dollars in liquidation by the total gross dollars outstanding. 

[14] According to SBA officials, the SBPS was validated to be predictive of loan purchases, as well as delinquencies. 

[15] The projected purchase rate is calculated by multiplying the amount of a lender's guaranteed loan dollars outstanding by the probability of their purchase. This total is then divided by the lender's total SBA-guaranteed dollars outstanding.
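Taken together, footnotes 6, 12, 13, and 15 define SBA's portfolio-level rates by formula. The short sketch below works through them on invented figures (Python; the toy portfolio and the decision to count a loan that is both 90-plus days delinquent and in liquidation only once are illustrative assumptions):

    import pandas as pd

    # Toy portfolio for one hypothetical lender; dollar figures are gross
    # outstanding balances.
    loans = pd.DataFrame({
        "outstanding": [100.0, 250.0, 50.0, 400.0],
        "days_past_due": [0, 45, 120, 10],
        "in_liquidation": [False, False, True, False],
        "purchase_probability": [0.02, 0.10, 0.60, 0.03],
    })
    purchased_last_12_months = 75.0  # gross dollars SBA purchased

    total = loans["outstanding"].sum()
    # Currency rate (footnote 6): share of guaranteed dollars on loans
    # less than 30 days past due.
    currency_rate = loans.loc[loans["days_past_due"] < 30, "outstanding"].sum() / total
    # 12 months' actual purchase rate (footnote 12).
    actual_purchase_rate = purchased_last_12_months / (total + purchased_last_12_months)
    # Problem loan rate (footnote 13): dollars 90 or more days delinquent
    # plus dollars in liquidation (counted once here).
    problem = loans.loc[(loans["days_past_due"] >= 90) | loans["in_liquidation"],
                        "outstanding"].sum()
    problem_loan_rate = problem / total
    # Projected purchase rate (footnote 15): probability-weighted purchases
    # divided by total guaranteed dollars outstanding.
    projected_purchase_rate = (loans["outstanding"]
                               * loans["purchase_probability"]).sum() / total

    print(f"currency {currency_rate:.1%}, actual purchase {actual_purchase_rate:.1%}, "
          f"problem loan {problem_loan_rate:.1%}, projected purchase {projected_purchase_rate:.1%}")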
[16] According to SBA, lenders with a risk rating of 1 are considered strong in every respect and typically score well above their peer group averages for all or nearly all of the rating factors. The SBA operations of an SBA lender rated as a 2 are considered good and typically are above average for all or nearly all of the rating factors. Lenders rated as a 3 are considered about average for all or nearly all of the rating factors but have room for improvement; they should monitor their portfolios closely and consider methods to improve loan performance. Lenders rated as a 4 or 5 are considered below or well below average, respectively, for all or nearly all of the rating factors used to calculate the lender risk ratings. 

[17] The process for assigning lender risk ratings to 504 lenders differs from the process for 7(a) lenders in two ways. First, the 504 lender risk ratings are based on three factors: (1) the past 12 months' actual purchase rate, (2) the problem loan rate, and (3) the average SBPS of each lender's portfolio. Second, the peer groups are sized differently. The 504 peer groups consist of lenders with portfolios of (1) $100,000,000 or more; (2) $30,000,000 to $99,999,999; (3) $10,000,000 to $29,999,999; (4) $5,000,000 to $9,999,999; and (5) less than $5,000,000. 

[18] SBA lenders are required to report monthly to SBA on the status of their SBA-guaranteed portfolios. To offset some of the costs of the 7(a) program, SBA assesses lenders two fees on each 7(a) loan: an up-front guarantee fee that may be passed on to the borrower and an annual servicing fee. 15 U.S.C. §§ 636(a)(23), (18). 

[19] Call reports are quarterly reports that collect basic financial data on commercial banks in the form of a balance sheet and income statement (formally known as the Report of Condition and Income). 

[20] A FICO score is a credit score derived from the credit model developed by the Fair Isaac Corporation. The FICO score is calculated by all three of the major credit bureaus from reported payment information. A higher FICO score indicates better credit, and a FICO score below 600 is considered poor. 

[21] Our measure of defaults is the purchase rate. 

[22] In order to estimate default rates, we needed a meaningful number of loans for each lender. Therefore, we excluded from our sample 7(a) and 504 lenders that had fewer than 100 loans approved between January 2003 and December 2007. As a result, our sample of lenders does not generally include lenders with smaller guaranteed portfolios (such as portfolios of less than $10,000,000). 

[23] We identified 308 7(a) lenders in our sample that had at least 100 loans approved between January 2003 and December 2007. These 308 lenders' loans represented about 79 percent of the total outstanding portfolio balance and about 85 percent of the total outstanding SBA-guaranteed loans, based on the March 2008 lender performance report. For each of these lenders, we determined performance by estimating the relative odds of a loan in that portfolio being purchased (or defaulting), correcting for loan age and current economic conditions. For more information on the method used, see appendix I. 

[24] The Basel Committee on Banking Supervision provides a forum for banking regulators to regularly cooperate on banking supervisory matters. Its objective is to enhance understanding of key supervisory issues and improve the quality of banking supervision worldwide. It seeks to do so by facilitating the exchange of information on national supervisory issues, approaches, and techniques with a view to promoting common understanding. At times, the committee develops guidelines and supervisory standards in various areas--for example, the Basel Committee's Accord Implementation Group has developed guiding principles on the validation of rating systems. 

[25] [hyperlink, http://www.gao.gov/products/GAO/AIMD-00.21.3.1] and [hyperlink, http://www.gao.gov/products/GAO-01-1008G]. 

[26] [hyperlink, http://www.gao.gov/products/GAO-04-610]. 

[27] For example, the contractor determined whether those lenders that were rated as a 1 had lower rates of purchases than those groups of lenders that were rated as a 2, 3, 4, or 5. The SBA contractor focused on two variables--purchase rates and cumulative net cash yields--to assess how well the risk ratings rank ordered lenders by group. 

[28] In particular, the contractor used the K-S statistic, which tests whether the distribution of a variable from a sample matches some other probability distribution. For example, the K-S statistic can test whether purchases follow a pattern based on a lender's risk rating or whether they follow a random distribution. Guidance from federal financial regulators states that this statistic is commonly used in the banking industry.
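As a rough illustration of how a K-S statistic can compare two rating groups (Python, synthetic data; the contractor's exact computation is not described here, and industry practice often reports the statistic scaled to a 0-100 range, which is consistent with the values cited in footnote 34):

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(3)
    # Synthetic purchase rates for two hypothetical rating groups; a useful
    # rating should give 5-rated lenders a visibly worse distribution.
    rated_1 = rng.beta(1, 30, size=200)
    rated_5 = rng.beta(3, 20, size=200)

    stat, p = ks_2samp(rated_1, rated_5)
    print(f"K-S statistic: {stat:.2f} (p = {p:.3g})")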
[29] [hyperlink, http://www.gao.gov/products/GAO/AIMD-00.21.3.1] and [hyperlink, http://www.gao.gov/products/GAO-01-1008G]. 

[30] [hyperlink, http://www.gao.gov/products/GAO-04-610]. 

[31] [hyperlink, http://www.gao.gov/products/GAO/AIMD-00.21.3.1] and [hyperlink, http://www.gao.gov/products/GAO-01-1008G]. 

[32] According to the 2005 validation report, the contractor performed a stepwise regression to determine whether using last 24-month purchases, last 12-month charge-offs, or a modified problem loan rate would increase the model's ability to predict future purchases among 7(a) lenders. The contractor found that there would be no benefit to using these variables. 

[33] These additional variables included the age of the portfolio and the type of loan product. 

[34] The K-S statistic for 7(a) lender ratings decreased from 36 in 2005 to a range of 27 to 29 in 2007. 

[35] [hyperlink, http://www.gao.gov/products/GAO/AIMD-00.21.3.1] and [hyperlink, http://www.gao.gov/products/GAO-01-1008G]. 

[36] [hyperlink, http://www.gao.gov/products/GAO/AIMD-00.21.3.1] and [hyperlink, http://www.gao.gov/products/GAO-01-1008G]. 

[37] SBA, Office of Inspector General, SBA's Use of the Loan and Lender Monitoring System, Report no. 7-21 (Washington, D.C.: May 2, 2007). 

[38] NAICS was developed as the standard for federal statistical agencies to use in classifying business establishments for the collection, analysis, and publication of statistical data related to the business economy of the United States. It was developed under the auspices of the Office of Management and Budget and adopted in 1997 to replace the older Standard Industrial Classification system. 

[39] According to OCC officials, they review all lenders on a regular schedule. 

[40] Lender risk ratings are used to prioritize reviews for lenders within the same peer group. 

[41] According to SBA, it implemented fee-based reviews in late fiscal year 2007. 

[42] SBA, Report no. 7-21. 

[43] GAO, Small Business Administration: Progress Made but Improvements Needed in Lender Oversight, [hyperlink, http://www.gao.gov/products/GAO-03-90] (Washington, D.C.: Dec. 9, 2002). 

[44] GAO, Small Business Administration: Additional Guidance on Documenting Credit Elsewhere Decisions Could Improve 7(a) Program Oversight, [hyperlink, http://www.gao.gov/products/GAO-09-228] (Washington, D.C.: Feb. 12, 2009). 

[45] The federal financial regulators we selected have policies and procedures for monitoring credit risk that are relevant to SBA. We focused on the largest lenders because they would be most likely to use off-site monitoring tools similar to SBA's lender risk rating system. 

[46] According to SBA, there are approximately 5,000 SBA lenders. Although our sample of 10 large lenders is nongeneralizable, it offers perspectives on how some lenders conduct off-site monitoring. 

[47] GAO, Standards for Internal Control in the Federal Government, [hyperlink, http://www.gao.gov/products/GAO/AIMD-00.21.3.1] (Washington, D.C.: November 1999) and Internal Control Management and Evaluation Tool, GAO-01-1008G (Washington, D.C.: August 2001).
[48] GAO, Small Business Administration: New Service for Lender Oversight Reflects Some Best Practices, but Strategy for Use Lags Behind, [hyperlink, http://www.gao.gov/products/GAO-04-610] (Washington, D.C.: June 8, 2004). 

[49] The currency rate is the sum of the dollar balances of guaranteed loans that are less than 30 days past due divided by the dollar balance of the total portfolio of guaranteed loans outstanding. For comparison purposes, we subtracted the currency rate from 100 so that lower currency rates would be consistent with higher default rates. 

[50] This regression was weighted by the guaranteed amount of the loan at the time of approval. 

[51] We tested the crosswalk file obtained from SBA by comparing the outstanding balance in the March 2008 lender performance report to the amount disbursed by lenders in the administrative data for the lenders that we matched. The correlation was .95 for 7(a) lenders and .99 for 504 lenders. We also compared the number of loans in the lender performance report and the number of loans in the administrative data. The correlation was .99 for both 7(a) and 504 lenders. 

[52] Note that for one 7(a) and one 504 lender, we did not obtain a ranking because that lender was the reference category to which the other lenders' odds were relative. 

[53] SBA assigns lenders to different peer groups based on their portfolio size. 

[54] In order to estimate default rates, we needed a meaningful number of loans for each lender. Therefore, we excluded from our sample 7(a) and 504 lenders that had fewer than 100 loans approved between January 2003 and December 2007. As a result, our sample of lenders does not generally include lenders with smaller guaranteed portfolios (such as portfolios of less than $10,000,000). 

[55] According to SBA officials, the SBPS was validated to be predictive of loan purchases, as well as delinquencies. 

[56] GAO, Small Business Administration: New Service for Lender Oversight Reflects Some Best Practices, but Strategy for Use Lags Behind, [hyperlink, http://www.gao.gov/products/GAO-04-610] (Washington, D.C.: June 8, 2004). 

[57] Our measure of default is the purchase rate. 

[End of section] 

GAO's Mission: The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability. 

Obtaining Copies of GAO Reports and Testimony: The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO's Web site [hyperlink, http://www.gao.gov]. Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site. To have GAO e-mail you a list of newly posted products every afternoon, go to [hyperlink, http://www.gao.gov] and select "E-mail Updates." 

Order by Phone: The price of each GAO publication reflects GAO's actual cost of production and distribution and depends on the number of pages in the publication and whether the publication is printed in color or black and white.
Pricing and ordering information is posted on GAO’s Web site, [hyperlink, http://www.gao.gov/ordering.htm]. Place orders by calling (202) 512-6000, toll free (866) 801-7077, or TDD (202) 512-2537. Orders may be paid for using American Express, Discover Card, MasterCard, Visa, check, or money order. Call for additional information. To Report Fraud, Waste, and Abuse in Federal Programs: Contact: Web site: [hyperlink, http://www.gao.gov/fraudnet/fraudnet.htm]: E-mail: fraudnet@gao.gov: Automated answering system: (800) 424-5454 or (202) 512-7470: Congressional Relations: Ralph Dawn, Managing Director, dawnr@gao.gov: (202) 512-4400: U.S. Government Accountability Office: 441 G Street NW, Room 7125: Washington, D.C. 20548: Public Affairs: Chuck Young, Managing Director, youngc1@gao.gov: (202) 512-4800: U.S. Government Accountability Office: 441 G Street NW, Room 7149: Washington, D.C. 20548: