This is the accessible text file for GAO report number GAO-10-782 entitled 'Securities And Exchange Commission: Action Needed to Improve Rating Agency Registration Program and Performance-Related Disclosures' which was released on September 22, 2010. This text file was formatted by the U.S. Government Accountability Office (GAO) to be accessible to users with visual impairments, as part of a longer term project to improve GAO products' accessibility. Every attempt has been made to maintain the structural and data integrity of the original printed product. Accessibility features, such as text descriptions of tables, consecutively numbered footnotes placed at the end of the file, and the text of agency comment letters, are provided but may not exactly duplicate the presentation or format of the printed version. The portable document format (PDF) file is an exact electronic replica of the printed version. We welcome your feedback. Please E-mail your comments regarding the contents or accessibility features of this document to Webmaster@gao.gov. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. Because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. Report to Congressional Committees: United States Government Accountability Office: GAO: September 2010: Securities And Exchange Commission: Action Needed to Improve Rating Agency Registration Program and Performance-Related Disclosures: GAO-10-782: GAO Highlights: Highlights of GAO-10-782, a report to congressional committees. Why GAO Did This Study: In 2006, Congress passed the Credit Rating Agency Reform Act (Act), which intended to improve credit ratings by fostering accountability, transparency, and competition. The Act established Securities and Exchange Commission (SEC) oversight over Nationally Recognized Statistical Rating Organizations (NRSRO), which are credit rating agencies that are registered with SEC. The Act requires GAO to review the implementation of the Act. This report (1) discusses the Act’s implementation; (2) evaluates NRSROs’ performance-related disclosures; (3) evaluates removing NRSRO references from certain SEC rules; (4) evaluates the impact of the Act on competition; and (5) provides a framework for evaluating alternative models for compensating NRSROs. To address the mandate, GAO reviewed SEC rules, examination guidance, completed examinations, and staff memoranda; analyzed required NRSRO disclosures and market share data; and interviewed SEC and NRSRO officials and market participants. What GAO Found: SEC’s implementation of the Act involved developing an NRSRO registration program and an examination program. As currently implemented and staffed, both programs require further attention. * The process for reviewing NRSRO applications limits SEC staff’s ability to fully ensure that applicants meet the Act’s requirements. While SEC had registered 10 of 11 credit rating agency applicants as of July 2010, some staff memoranda to the Commission summarizing their review of applications described concerns that were not addressed prior to registration. According to staff, the 90-day time frame for SEC action on an application and the lack of an express authority to examine the applicants prior to registration prevented the concerns from being addressed prior to approval. 
Unlike other registration application programs that have built in greater authority and flexibility for their staff to clarify outstanding questions regarding applications before approval, the NRSRO registration program requires SEC to act within 90 days of receiving the application. As a result, staff recommended granting registration with ongoing concerns about NRSROs meeting the Act’s requirements. * With its current level of staffing for NRSRO examinations, SEC’s Office of Compliance Inspections and Examinations (OCIE) would likely not have been able to meet its routine examination schedule of examining the three largest NRSROs every 2 years and others every 3 years. OCIE has requested additional resources to fully staff the NRSRO examination program. While the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) requires SEC to establish an Office of Credit Ratings to conduct annual examinations of each NRSRO and staff the office sufficiently to carry out these examinations, SEC may face challenges in meeting the required examination timetable and providing quality supervision over NRSROs unless it develops a plan that clearly identifies staffing needs, such as requisite skills and training. While SEC has increased the amount of performance-related data NRSROs are required to disclose, the usefulness of the data is limited. First, SEC requires NRSROs to disclose certain performance statistics, increasing the amount of performance information available for some NRSROs. However, because SEC does not specify how NRSROs should calculate these statistics, NRSROs use varied methodologies, limiting their comparability. Second, SEC issued two rules requiring NRSROs to make certain ratings history data publicly available. However, the data sets do not contain enough information to construct comparable performance statistics and are not representative of the population of the credit ratings at each NRSRO. Without better disclosures, the information being provided will not serve its intended purpose of increasing transparency. In July 2008, SEC proposed amendments that would have removed references to NRSRO ratings from several rules. While SEC removed references from six rules and two forms, it retained the use of the ratings or delayed further action on two rules. These rules govern money market fund investments and the amount of capital that broker- dealers must hold, and use NRSRO references as risk-limiting measures. GAO reviewed SEC’s proposals to remove NRSRO references from these two rules and identified concerns with how SEC examiners would have evaluated compliance with the proposed alternative credit standards and whether it had staff with the requisite skills. Going forward, the Dodd-Frank Act requires SEC to remove NRSRO ratings from its rules. SEC’s previous experience with proposals to remove credit rating references highlights the importance of developing a plan to help ensure that (1) any adopted alternative standards of creditworthiness for a particular rule facilitate its purpose and (2) that examiners have the requisite skills to determine that the adopted standards have been applied. Without such a plan, SEC may develop alternative standards of creditworthiness that are not effective in supporting the purpose of a particular rule. 
Since the implementation of the Act, the number of NRSROs has increased from 7 to 10; however, industry concentration as measured by NRSRO revenues, the number of entities rated, and the dollar volume of new asset-backed debt rated remains high. Several factors likely have contributed to the continued high concentration among NRSROs. First, relatively little time has passed since SEC implemented the NRSRO registration program and NRSRO rulemaking. Second, the three new NRSROs have not had much time to build market share. Finally, the recent financial crisis occurred soon after the Act’s implementation, substantially slowing certain sectors of the securitization markets. Moreover, there are barriers to entering the rating industry and to becoming an NRSRO. For example, establishing a reputation as a credible provider of credit ratings can take years. The reference to specific NRSROs in private contracts and investment guidelines also acts as a barrier. As part of an April 2009 roundtable held to examine oversight of credit rating agencies, SEC requested perspectives from users of ratings and others on whether it should consider additional rules to better align the raters’ interest with those who rely on those ratings, and specifically, whether one business model represented a better way of managing conflicts of interest than another. GAO identified five unique alternative models for compensating NRSROs that have been proposed by roundtable participants and others, although they vary in the amount of detail available. To assist Congress and others in assessing these proposals, GAO created an evaluative framework of seven factors that any compensation model should address to be effective. By applying these factors, users of the framework can identify the potential benefits of the model consistent with policymakers’ goals as well as any tradeoff. Table: Framework for Evaluating Alternative Models for Compensating NRSROs: Factor: Independence; Description: The ability for the compensation model to mitigate conflicts of interest inherent between the entity paying for the rating and the NRSRO. Key questions include: What potential conflicts of interest exist in the alternative compensation model and what controls, if any, would need to be implemented to mitigate these conflicts? Factor: Accountability; Description: The ability of the compensation model to promote NRSROs’ responsibility for the accuracy and timeliness of their ratings. Key questions include: How does the compensation model create economic incentives for NRSROs to produce quality ratings over the bond’s life? How is NRSRO performance evaluated and by whom? Factor: Competition; Description: The extent to which the compensation model creates an environment in which NRSROs compete for customers by producing higher- quality ratings at competitive prices. Key questions include: To what extent does the compensation model encourage competition around the quality of ratings, ratings fees, and product innovation? To what extent does it allow for flexibility in the differing sizes, resources, and specialties of NRSROs? Factor: Transparency; Description: The accessibility, usability, and clarity of the compensation model and the dissemination of information on the model to market participants. Key questions include: How transparent are the model’s processes and procedures for determining ratings fees and compensating NRSROs? How would NRSROs obtain ratings business? 
Factor: Feasibility; Description: The simplicity and ease with which the compensation model can be implemented in the securities market. Key questions include: What are the costs to implement the compensation model and who would fund them? Who would administer the compensation model? What, if any, infrastructure would be needed to implement it? Factor: Market acceptance and choice; Description: The willingness of the securities market to accept the compensation model, the ratings produced under that model, and any new market players established by the compensation model. Key questions include: What role do market participants have in selecting NRSROs to produce ratings, assessing the quality of ratings, and determining NRSRO compensation? Factor: Oversight; Description: The evaluation of the model to ensure it works as intended. Key questions include: Does the model provide for an independent internal control function? What external oversight does the compensation model provide to ensure it is working as intended? Source: GAO. [End of table] What GAO Recommends: GAO recommends that SEC assess the time frames and authorities it needs to review NRSRO applications, develop a plan to help ensure the NRSRO examination program is sufficiently staffed, improve NRSROs' performance-related disclosure requirements, and develop a plan to approach the removal of NRSRO references from its rules. SEC generally agreed with these recommendations. View [hyperlink, http://www.gao.gov/products/GAO-10-782] or key components. For more information, contact Orice Williams Brown at (202) 512-8678 or williamso@gao.gov. [End of section] Contents: Letter: Background: NRSRO Application Review Process Limits SEC Staff's Ability to Ensure That Applicants Meet the Act's Requirements and NRSRO Examination Program Faces Staffing Challenges: SEC Has Increased the Amount of Performance-related Data NRSROs Are Required to Disclose, but These Data Have Limited Usefulness: As SEC Works to Remove NRSRO References from SEC Rules, It Will Need To Ensure It Has the Staff with the Requisite Skills to Evaluate Compliance with Any Alternative Creditworthiness Standard: The Number of NRSROs Has Increased since the Act Was Implemented but Industry Concentration Remains High: Models Proposing Alternative Means of Compensating NRSROs Intend to Address Conflicts of Interests in the Issuer-Pays Model: Conclusions: Recommendations for Executive Action: Agency Comments and Our Evaluation: Appendix I: Objectives, Scope, and Methodology: Appendix II: Other Registration Processes Provide Greater Flexibility to the Regulators: Appendix III: Comments from the Securities and Exchange Commission: Appendix IV: GAO Contact and Staff Acknowledgments: Tables: Table 1: SEC Rules Pertaining to NRSROs, 2007 and 2009: Table 2: Hypothetical 1-Year Transition Matrix and 1-Year Default Rates Relative to Beginning-of-Year Ratings for 2009 Cohorts: Table 3: Methods Used by NRSROs to Calculate SEC-required 1-, 3-, and 10-year Transition Rates: Table 4: Methods Used by NRSROs to Calculate SEC-required 1-, 3-, and 10-year Default Rates: Table 5: NRSROs Registered by Asset Class, 2010: Table 6: HHI for NRSROs Based on Total Revenues, 2006-2009: Table 7: HHI for NRSROs Based on Number of Issuers Rated, 2006-2009: Table 8: HHI Based on Dollar Value of Newly Issued U.S.-ABS, January 2004-June 2010: Figures: Figure 1: Years the Current NRSROs Have Produced Credit Ratings and Have Been Recognized as NRSROs: Figure 2: NRSRO U.S.
Annual Market Coverage by ABS, Dollar Volume, 2004-June 2010: Figure 3: NRSRO U.S. Annual Market Coverage by Traditional ABS, Dollar Volume, 2004-June 2010: Figure 4: NRSRO U.S. Annual Market Coverage by Prime RMBS, Dollar Volume, 2004-June 2010: Figure 5: NRSRO U.S. Annual Market Coverage by Nonprime RMBS, Dollar Volume, 2004-June 2010: Figure 6: NRSRO U.S. Annual Market Coverage by CMBS, Dollar Volume, 2004-June 2010: Figure 7: NRSRO U.S. Annual Market Coverage by CDO, Dollar Volume, 2004-June 2010: Abbreviations: ABS: asset-backed securities: Act: Credit Rating Agency Reform Act: CMBS: commercial mortgage-backed securities: CDO: collateralized debt obligations: Dodd-Frank Act: Dodd-Frank Wall Street Reform and Consumer Protection Act: Enforcement: SEC's Division of Enforcement: Exchange Act: Securities Exchange Act of 1934: FINRA: Financial Industry Regulatory Authority: Fitch: Fitch Ratings: HHI: Herfindahl-Hirschman Index: Investment Company Act: Investment Company Act of 1940: Investment Management: SEC's Division of Investment Management: DOJ: Department of Justice: Moody's: Moody's Investors Service: NRSRO: Nationally Recognized Statistical Rating Organization: OCIE: Office of Compliance Inspections and Examinations: ORA: SEC's Office of Risk Analysis: RMBS: residential mortgage-backed securities: SEC: Securities and Exchange Commission: Trading and Markets: SEC's Division of Trading and Markets: [End of section] United States Government Accountability Office: Washington, DC 20548: September 22, 2010: The Honorable Christopher J. Dodd: Chairman: The Honorable Richard C. Shelby: Ranking Member: Committee on Banking, Housing, and Urban Affairs: United States Senate: The Honorable Barney Frank: Chairman: The Honorable Spencer Bachus: Ranking Member: Committee on Financial Services: House of Representatives: A credit rating is an assessment of the creditworthiness of an obligor as an entity or with respect to specific securities or money market instruments.[Footnote 1] In the past few decades, credit ratings have assumed increased importance in the financial markets, in large part due to their use in law and regulation. In 1975, the Securities and Exchange Commission (SEC) first used the term Nationally Recognized Statistical Rating Organization (NRSRO) to describe those rating agencies whose ratings could be relied upon to determine capital charges for different types of debt securities (securities) broker-dealers held.[Footnote 2] Since then, SEC has used the NRSRO designation in a number of regulations, and the term has been widely embedded in numerous federal and state laws and regulations as well as in investment guidelines and private contracts. The highly publicized, alleged failures by the three largest NRSROs to warn investors in a timely manner about the impending bankruptcies of Enron and other issuers raised concerns in Congress and among others about the role and performance of NRSROs in the securities market and SEC's supervision of the industry.[Footnote 3] In response to a congressional mandate, SEC prepared a report in 2003 discussing its findings from examinations of these rating agencies, which revealed concerns related to potential conflicts of interest caused by the business model they employ--in which the issuers of securities pay the rating agencies for their ratings (issuer-pays model).[Footnote 4] The report also discussed the lack of a formal regulatory program to oversee NRSROs, and SEC efforts over the years to establish one.
Participants in congressional hearings held at the time noted the high concentration of market share among a small number of large NRSROs and criticized SEC's no-action letter process (used to recognize NRSROs) as a barrier to entry to the market for new credit rating agencies, and thus a hindrance to competition.[Footnote 5] To address these concerns, Congress passed the Credit Rating Agency Reform Act (Act) in 2006[Footnote 6]. The Act amended the Securities Exchange Act of 1934 (Exchange Act) to improve ratings quality for the protection of investors by fostering accountability, transparency, and competition in the credit rating industry.[Footnote 7] The Act added section 15E to the Exchange Act, which establishes SEC oversight over those credit rating agencies that register as NRSROs. Section 15E also provides SEC with examination authority and establishes a registration program for credit rating agencies seeking NRSRO designation, defines eligibility requirements, prescribes the minimum information applicants must provide in their application, and establishes a time frame and parameters for SEC review and approval of applications. NRSRO applicants and registered NRSROs are also required to disclose information, including ratings performance, conflicts of interest, and the procedures used to determine ratings. More recently, the performance of the three largest NRSROs in rating subprime residential mortgage-backed securities (RMBS) and related securities renewed questions about the accuracy of their credit ratings generally, the integrity of the ratings process, and investor reliance on NRSRO ratings for investment decisions.[Footnote 8] In July 2008, SEC made public its report on its examinations of these three NRSROs, which identified significant issues with their documentation and disclosure of critical ratings processes and the management of conflicts of interest related to these products. [Footnote 9] Partially in response to these findings, SEC issued additional rules in 2008 and 2009 intended to enhance NRSRO disclosure to investors, strengthen the integrity of the ratings process, and more effectively address the potential for conflicts of interest. SEC also held a roundtable relating to its oversight of credit rating agencies in April 2009, where roundtable participants offered proposals to, among other things, reduce conflicts of interest and increase NRSRO incentives to produce accurate ratings by establishing alternative means for compensating NRSROs. Section 7 of the Act requires us to review, by September 2010, the implementation and impact of the Act and the rules issued under it on the quality of credit ratings, financial markets, competition in the credit rating industry, the process for NRSRO registration, and other matters. This report responds to that mandate. It also responds to your interest in identifying and assessing alternative models for compensating NRSROs and assessing the potential impact of removing NRSRO references from SEC rules. 
Specifically, this report (1) discusses the implementation of the Act, focusing on SEC rulemaking and SEC's implementation of the NRSRO registration and examination programs; (2) evaluates the performance- related NRSRO disclosures that the Act and SEC rules require; (3) evaluates the potential regulatory impact of removing NRSRO references from certain SEC rules; (4) evaluates the impact of the Act on competition among NRSROs; and (5) provides an overview of proposed alternative models for compensating NRSROs and an evaluative framework for assessing the models. To address the first objective, we reviewed the rules SEC adopted to implement the Act, SEC's July 2008 public report discussing examination findings on selected NRSROs, SEC's Division of Trading and Markets (Trading and Markets) internal memoranda to the commissioners documenting its review of NRSRO applications, and an internal memorandum on NRSRO monitoring. We also reviewed the Office of Compliance Examinations and Inspections' (OCIE) guidance for conducting an NRSRO examination and reviewed completed examinations. We discussed the application review process with Trading and Markets staff, and conducted interviews with staff from SEC's Division of Investment Management (Investment Management) and the Financial Industry Regulatory Authority (FINRA) about their respective registration programs and with OCIE staff regarding NRSRO examinations. For the second objective, we analyzed SEC rules requiring NRSRO disclosure of performance statistics and ratings history samples. We analyzed and compared the NRSROs' 2009 disclosures of performance statistics, focusing on the corporate and structured finance asset classes, and we reviewed voluntary disclosures of additional performance statistics by several NRSROs. We also assessed NRSRO ratings history data disclosed by the seven issuer-pays NRSROs pursuant to SEC's rule to determine their reliability and usability for generating comparative performance statistics. We identified a number of issues that led us to determine that the data were not in a format that allowed us to generate comparative performance statistics. For the third objective, we obtained and reviewed SEC's proposed rules to remove references to NRSRO ratings, focusing on proposed amendments to rule 2a-7 under the Investment Company Act of 1940 (Investment Company Act) and Exchange Act rule 15c3-1, as well as the comment letters submitted to SEC on the proposals.[Footnote 10] We also reviewed OCIE examinations of money market funds from FY 2003-2009 that reviewed rule 2a-7 compliance. We conducted interviews with OCIE staff, Trading and Markets staff, Investment Management staff, and market participants and observers. For the fourth objective, we reviewed proposed and final SEC rules intended to promote competition among NRSROs, as well as the comment letters SEC received in response to those rules. We reviewed SEC's 2008 and 2009 mandated annual reports on NRSROs, including SEC's studies on competition in the credit rating industry. We used the Herfindahl-Hirschman Index (HHI)--a measure of industry concentration that reflects both the number of firms in the industry and each firm's market share--to track industry concentration over time.[Footnote 11] We calculated the HHI using three alternative variables: (1) NRSRO revenues, (2) the number of organizations or entities rated by the NRSROs that issue debt securities, and (3) for asset-backed securities (ABS), the dollar amount of new U.S.-issued debt rated. 
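The HHI itself involves simple arithmetic: it is the sum of the squared percentage market shares of the firms in an industry, so it ranges from near zero (an industry of many small firms) to 10,000 (a single firm holding the entire market). The following minimal sketch, written in Python with purely hypothetical firm names and market shares rather than the data used in this report, illustrates the calculation.

# Minimal sketch of the HHI calculation; the firm names and market shares
# below are hypothetical and are not drawn from the NRSRO data in this report.
hypothetical_shares = {
    "NRSRO A": 40.0,  # market share, in percent
    "NRSRO B": 35.0,
    "NRSRO C": 25.0,
}

# The HHI is the sum of the squared percentage shares.
hhi = sum(share ** 2 for share in hypothetical_shares.values())
print(f"HHI = {hhi:,.0f}")  # 40^2 + 35^2 + 25^2 = 3,450

Tables 6 through 8 report HHIs that apply this same formula to the three variables listed above.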
We obtained the revenue data from the NRSROs' Form NRSRO filings, the registration form SEC requires credit rating agencies to submit when applying for NRSRO registration and then annually after registration. We interviewed staff from Trading and Markets to determine the steps they took to ensure the data represented the firms' total revenues. We obtained data on the number of rated organizations or entities that issue debt securities from the NRSROs. To ensure consistency among the data, we specified how the NRSROs were to count rated organizations and entities and classify them. We also examined trends within the data. We obtained data on the dollar value of rated new U.S.-asset- backed securitization from an industry newsletter that tracks these issuances. We obtained information from this firm on the processes and procedures it used to collect and manage the data. We also used these data to generate a series of descriptive graphs showing the NRSROs' market coverage, that is, the percentage of new U.S. asset-backed issuance that each rated. For all of our data sources, we determined that the data were reliable enough for our purposes, which were to show the relative concentration of NRSROs in the industry. We also reviewed academic studies on competition in the industry and obtained the views of SEC's Office of Risk Analysis (ORA), the NRSROs, credit rating agencies that are not registered NRSROs, institutional investors, issuers, and industry experts on the impact of the Act on competition. For the fifth objective, we identified proposals on alternative models for compensating NRSROs by reviewing white papers submitted to the SEC roundtable on credit rating agency oversight, academic and white papers, and interviewed the authors of the proposed models. We obtained the views of Trading and Markets, ORA, NRSROs, and credit rating agencies that are not registered as NRSROs, institutional investors, issuers, and academic and industry experts. To develop the framework for evaluating the models, we reviewed prior GAO reports and academic and market research to identify appropriate factors for inclusion. We then convened a panel of GAO experts (financial markets specialists, economists, an attorney and a social scientist) to review the framework and incorporated their comments. Finally, we solicited comments from NRSROs, proposal authors, industry experts, and trade associations and incorporated them as appropriate. For all relevant objectives, we also reviewed the recently passed Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) to understand additional changes to SEC's oversight of NRSROs.[Footnote 12] We conducted this performance audit from May 2009 to September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix I provides a more detailed description of our scope and methodology. Background: The ratings produced by the NRSROs generally are letter-based symbols intended to reflect assessments of credit risk for entities issuing securities in public markets. 
Typically, credit rating agencies designate issuers or securities considered investment-grade, or lower risk, with higher letter ratings, and issuers or securities considered speculative-grade, or higher risk, with lower letter ratings. For example, Standard & Poor's and Fitch Ratings (Fitch) designate investment-grade, long-term debt with ratings of AAA, AA, A, and BBB, and speculative-grade, long-term debt with ratings of BB, B, CCC, CC, and C. The rating scale employed by Moody's Investors Service (Moody's) uses Aaa, Aa, A and Baa for investment-grade, long-term debt, and Ba, B, Caa, Ca, and C for speculative-grade, long-term debt. Rating agencies may employ different rating scales for different regions, sectors, jurisdictions, or types of securities. For example, the rating scale that a ratings agency uses to assign short-term obligations may differ from the scale it uses for long-term obligations. NRSRO credit ratings are designed to measure the likelihood of default for an issue or issuer, although some also measure other variables, such as the expected value of dollar losses given a default. These assessments reflect a variety of quantitative and qualitative factors that vary based on sector. The NRSROs describe ratings as intended only to reflect credit risk, not other valuation factors such as liquidity or price risk. To determine an appropriate rating, credit analysts use publicly available information, and market and economic data, and may obtain nonpublic information from the issuer and engage in discussions with its senior management. However, not all NRSROs rely on nonpublic information in producing credit ratings.[Footnote 13] Issuers seek credit ratings for a number of reasons, such as to improve the marketability or pricing of their financial obligations, or to satisfy investors, lenders, or counterparties. In certain markets, such as the U.S. long-term corporate debt market, a single- rated debt issue may be priced below an issue with similar ratings from two agencies, because the absence of a second rating is interpreted as an issuer's inability to obtain another equivalent rating. However, in other markets such as the ABS market, a single rating may be adequate confirmation of asset quality. Institutional investors, such as mutual funds, pension funds, and insurance companies, are among the largest owners of debt securities in the United States and are substantial users of credit ratings. Retail participation in the debt markets generally takes place indirectly through these fiduciaries. Institutional investors may use credit ratings as one of several important inputs to their own internal credit assessments and investment analysis, or to identify pricing discrepancies for their trading operations. Broker-dealers that make recommendations and sell securities to their clients also use ratings. These firms often act as dealers in markets that place significant importance on credit ratings. For example, in the over-the- counter derivatives markets, broker-dealers tend to use credit ratings (when available) to determine acceptable counterparties, as well as collateral levels for outstanding credit exposures. Large broker- dealers also frequently obtain credit ratings as issuers of long-and short-term debts. 
Academic literature suggests that credit ratings affect financial markets both by providing information to investors and other market participants and by their use in regulations.[Footnote 14] Several studies suggest that bond prices, stock returns, and credit-default swap spreads vary with credit ratings downgrades. Other studies find that obtaining a credit rating generally increases a firm's access to capital markets and that firms with credit ratings have different capital structures than firms without them as a result. Furthermore, some studies suggest that firms adjust their capital structure to achieve a particular credit rating. One explanation for these relationships is that rating agencies have access to private information about the issuers and issues they rate, and the ratings they assign incorporate this information. Thus, ratings are a mechanism for communicating this otherwise unavailable information to market participants. Alternatively, at least in market segments with rating-based regulations, investors' willingness and ability to purchase bonds with credit ratings above a regulatory threshold could be greater than their willingness and ability to purchase bonds with ratings below the threshold. Thus, firms with credit ratings above the regulatory threshold have lower costs of capital than those with credit ratings below the threshold. NRSROs today operate primarily under one of two compensation models: issuer-pays or subscriber-pays. Under the issuer-pays model, issuers pay the NRSRO for a rating. These ratings are generally free to the public, although users may have to purchase access to in-depth reports explaining the basis for the rating. Under the subscriber-pays model, users pay a subscription to the NRSRO for access to its ratings. Trading and Markets administers and executes the agency's programs relating to NRSRO oversight, which includes administration of the NRSRO registration program and rulemaking. OCIE administers SEC's nationwide examination and inspection program. Within OCIE, the NRSRO examination team within the Office of Market Oversight conducts NRSRO examinations. The purpose of an NRSRO examination is to promote compliance with applicable laws and rules, identify any violations of such laws and rules, and ensure remedial action. Examinations also serve to inform SEC and SEC staff of NRSROs' compliance with their regulatory obligations and noteworthy industry developments. If OCIE discovers potential violations of federal securities laws or rules during an NRSRO examination, it may refer the case to Enforcement, which is responsible for further investigating these potential violations; recommending SEC action when appropriate, either in a federal court or before an administrative law judge. NRSRO Application Review Process Limits SEC Staff's Ability to Ensure That Applicants Meet the Act's Requirements and NRSRO Examination Program Faces Staffing Challenges: SEC's implementation of the Act involved developing an NRSRO application review process and an examination program. As currently implemented and staffed, both programs require further attention. First, in June 2007, SEC adopted final rules that established a voluntary registration program for recognizing credit rating agencies as NRSROs.[Footnote 15] Over the past 3 years, SEC has registered 10 credit rating agencies as NRSROs, instituted proceedings to determine whether to deny registration to one applicant, and has begun examinations. 
However, as implemented, the registration process potentially limits the staff's ability to ensure that applicants meet the Act's requirements and may create a situation in which ratings from an NRSRO that may not meet the Act's requirements are used by investors and for regulatory purposes. Second, although SEC has established an OCIE branch dedicated to the examination of NRSROs and hired individuals with experience in credit rating analysis and structured finance to fill these positions, OCIE has not completed timely examinations of the NRSROs and has expressed concerns about its ability to meet its planned NRSRO routine examination schedule of examining the three largest NRSROs every 2 years and the other NRSROs every 3 years. While SEC requested additional resources that it anticipated using to fully staff this oversight function, it will likely need to revisit those requests due to the passage of the Dodd- Frank Act, which among other things, requires that each NRSRO be examined every year and that SEC establish an Office of Credit Ratings to carry out these examinations. Formalizing a plan to assess not only the number of staff it needs for this office but also the skills required of this staff would help SEC be strategically positioned to implement the Dodd-Frank Act requirements. SEC may face challenges in meeting the required examination timetable and providing quality oversight over NRSROs unless it develops a plan that ensures SEC has sufficient staff that have the appropriate qualifications and are appropriately trained. SEC Established a Formal Registration and Oversight Program for NRSROs and Continues Rulemaking under the Act: SEC adopted final rules for a formal regulatory and oversight program for NRSROs in June 2007.[Footnote 16] The rules establish a voluntary registration program for those credit rating agencies that seek to be recognized as NRSROs and require that NRSROs make and retain certain records, furnish financial reports to SEC, and establish procedures to address uses of material nonpublic information. The rules also require the disclosure of certain performance measures and ratings methodologies, prohibit certain conflicts of interest and require the management of other conflicts of interest, and prohibit specified coercive and unfair practices by NRSROs. SEC amended several of these rules in February and December 2009 with the goal of further increasing transparency of NRSRO rating methodologies, strengthening the disclosures of ratings performance, prohibiting NRSROs from engaging in certain practices, and enhancing NRSRO record keeping. [Footnote 17] These amendments were designed in part to address concerns that SEC staff identified in its July 2008 report about the integrity of the process by which the three largest NRSROs rated structured finance products.[Footnote 18] Table 1 summarizes the rules and amendments to those rules adopted by SEC under the Act. Table 1: SEC Rules Pertaining to NRSROs, 2007 and 2009: Rule 17g-1: Prescribes how an NRSRO must apply to be registered with SEC, keep its registration up-to-date, and comply with the statutory requirement to furnish SEC with an annual certification. Specifically, all of these actions must be accomplished by furnishing SEC with information on a Form NRSRO. As part of registration, NRSROs must disclose certain performance statistics and general descriptions of their ratings processes and methodologies. 
Rule 17g-2: Requires an NRSRO to make and retain certain types of business records and disclose certain ratings history data. Rule 17g-3: Requires an NRSRO to furnish SEC with four, or in some cases five, financial reports annually. The first report requires the submission of audited financial statements. The remaining reports are unaudited. Also requires the NRSRO to provide SEC with an unaudited report of the number of credit rating actions during the fiscal year in each class of credit rating for which it is registered. Rule 17g-4: Prescribes minimal requirements for the policies and procedures that registered NRSROs are required to establish, maintain, and enforce in order to address specific areas in which material, nonpublic information could be inappropriately disclosed or used. Rule 17g-5: Prohibits certain actions that constitute an impermissible conflict of interest and prescribes minimal requirements to manage other inherent conflicts of interest. For structured finance products, requires an NRSRO hired by an arranger to rate a structured finance product to obtain representation from the arranger that it will provide the information given to the hired NRSRO to the nonhired NRSROs. Rule 17g-6: Prohibits any act or practice by an NRSRO that SEC determines to be unfair, abusive, or coercive. Source: 17 C.F.R. §§ 240.17g-1 - 17g-6 (2010). [End of table] The Act replaced the SEC staff no-action letter process for recognizing credit rating agencies as NRSROs with a speedier and more transparent registration system.[Footnote 19] It created a registration process with required information for applicants to submit, a specific time frame for SEC's review of the application, and specific reasons for which SEC could deny an application. According to the Senate report accompanying the Act's passage, the new registration program does not favor any particular business model, thus encouraging purely statistical models to compete with the qualitative models of the dominant rating agencies and subscriber-pays models to compete with issuer-pays models.[Footnote 20] The Senate Report stated that the new registration program would enhance competition and provide investors with more choices, higher-quality ratings, and lower costs. 
The Act added new section 15E to the Exchange Act, which provides, in pertinent part, that a credit rating agency electing to register as an NRSRO must submit the following information to SEC: * statistics that measure the performance of credit ratings over short-, mid-, and long-term periods; * the procedures and methodologies the applicant uses in determining credit ratings; * policies or procedures the applicant adopted and implemented to prevent the misuse of material, nonpublic information; * the organizational structure of the applicant; * its code of ethics and, if one does not exist, why not; * any conflicts of interest relating to the issuance of credit ratings; * the categories for which the applicant intends to apply for registration; * on a confidential basis, a list of the 20 largest issuers and subscribers that use its credit rating services; * on a confidential basis, written certification from 10 or more qualified institutional buyers (QIB) that have used the credit ratings of the applicant for at least 3 years immediately preceding the date of the certification;[Footnote 21] and: * any other information and documents concerning the applicant and any person associated with such applicant as SEC, by rule, may prescribe as necessary or appropriate in the public interest or for the protection of investors.[Footnote 22] In implementing Section 15E of the Exchange Act, SEC created and adopted Form NRSRO to collect the required information as well as audited financial statements and revenue and compensation information. Form NRSRO requires certification by the applicant that the information contained is accurate in all significant respects. Section 15E(a)(2) [15 U.S.C. § 78o-7(a)(2)] sets forth the application review requirements. SEC must either grant registration or institute proceedings to determine if registration should be denied within 90 calendar days of a credit rating agency furnishing its application to SEC. The deadline can be extended if the applicant consents.
If SEC institutes proceedings, it has to provide the applicant a notice of the grounds for denial under consideration and an opportunity for a hearing and conclude the proceedings not later than 120 days after the date on which the application for registration was furnished.[Footnote 23] SEC can extend the conclusion of the proceedings for up to 90 days, if it finds good cause for such extension and publishes its reasons for so finding, or for longer periods if the applicant consents.[Footnote 24] The Act requires that SEC shall grant registration if it finds the requirements of section 15E are satisfied, unless it makes either of two findings (in which case registration must be denied): first, that the applicant does not have adequate financial and managerial resources to consistently produce credit ratings with integrity and to materially comply with the procedures and methodologies disclosed in Form NRSRO as well as with the requirements of the Act regarding the prevention of misuse of nonpublic information, management of conflicts of interest, prohibited conduct, and the designation of a compliance officer.[Footnote 25] Second, that the applicant or a person associated with it committed or omitted any act such that if the applicant were registered, its registration would be subject to suspension or revocation under subsection (d) of Section 15E of the Exchange Act.[Footnote 26] SEC Staff View Their Ability to Ensure That an NRSRO Applicant Meets the Act's Requirements as Limited: The NRSRO registration program has provided greater transparency and shortened the time between application and SEC recognition. However, as currently implemented, the registration process potentially limits staff's ability to ensure information provided on applications is accurate and lacks criteria needed to recommend that SEC deny an application. Since the implementation of Section 15E, 11 credit rating agencies have applied for NRSRO registration.[Footnote 27] To apply, a credit rating agency must fill out Form NRSRO and submit it and the required accompanying information to SEC. Trading and Markets staff review these documents and draft a memorandum to the Commission with the results of their review and recommendation for registration or denial of the application. We reviewed 10 memoranda that Trading and Markets provided the Commission and found that staff recommended that all be registered. The Commission has instituted proceedings to deny the application of the eleventh applicant.[Footnote 28] A few of the 10 memoranda we reviewed described concerns that Trading and Markets had with the applications. Staff told us these concerns generally were not resolved before registration for several reasons: (1) staff lacked criteria against which to measure certain concerns, (2) staff lacked the ability to examine the credit rating agencies before registration, and (3) staff came up against the 90-day time frame for the review of applications. According to Trading and Markets staff, some of these concerns were not addressed because they were qualitative in nature and would have required subjective judgments by the staff for which section 15E, implementing rules, or Form NRSRO do not provide criteria. For example, staff noted a concern about one applicant's managerial resources because the designated compliance officer lacked experience as a credit analyst. 
However, because section 15E, the rules implementing section 15E, and Form NRSRO do not prescribe minimum qualifications for the compliance officer position, staff were unable to support a finding that the applicant lacked the necessary managerial resources. As previously noted, a finding that the applicant did not have adequate financial and managerial resources would have been grounds for denying registration. Staff noted that because of the newness of the registration and oversight programs and the rules, no history of regulatory compliance could be used as a benchmark or criteria by which SEC could evaluate whether the applicant had adequate financial or managerial resources. Staff reviewed financial statements, other required financial information, and required managerial information, and provided summaries of the information in their memoranda. Trading and Markets staff also stated that for some of the concerns raised, they likely would need to conduct examinations to obtain the information necessary to assess if the concerns were legitimate. However, staff stated they cannot examine the applicants' books and records to investigate qualitative concerns because an applicant does not become subject to SEC's oversight authority until it becomes a registered NRSRO. For example, staff noted that one NRSRO appeared to produce ratings that are significantly more volatile than those of other applicants. Staff noted that appropriate explanations for the ratings volatility could exist but without the ability to examine the applicant, determining the causes of this volatility would be difficult, as would determining whether or not it demonstrated inadequate managerial resources. Similarly, staff said that any assessment of financial sufficiency would have to be determined through an examination because of the uniqueness of the applicants' business models. For example, they said a firm that uses only publicly available information and a quantitative model to produce its ratings likely needs far fewer financial resources than a firm that uses qualitative information and must employ analysts to assess this information. In addition to qualitative concerns, staff also noted factual concerns about the veracity of some information provided on Form NRSRO. However, staff said that the 90-day deadline for action and the lack of express authority to conduct an examination of the applicant did not allow for the resolution of these types of concerns during the application process.[Footnote 29] The deadline can be extended with the applicant's consent but to date only two applicants have done so. [Footnote 30] Staff said that even if SEC had authority to examine an NRSRO applicant prior to acting on its application, the Act's 90-day deadline for acting on an application would not provide enough time for a more thorough review. An application is deemed "furnished" to SEC, and thus begins the 90-day time frame, when SEC receives a complete and properly executed Form NRSRO. Staff said the Act does not provide SEC with any way to extend the review period without the applicant's consent or an SEC decision to institute proceedings (which would extend the deadline by 30 days, or an additional 90 days based upon a finding of good cause or upon the applicant's consent). Staff said that generally speaking, they would not recommend that SEC institute proceedings absent sufficient evidence to support the findings necessary to deny an application. 
While possible, they said that providing sufficient material on Form NRSRO to support the institution of proceedings would be unlikely.[Footnote 31] Trading and Markets staff said their principal purpose in raising qualitative and factual concerns in memoranda was to inform the Commission and notify OCIE so these issues could be monitored through future examinations. Trading and Markets staff said that conducting the lengthy examinations that likely would be needed to resolve many of these qualitative issues in effect could be viewed as a return to the prior staff no-action letter process in which examinations were used to make qualitative and subjective assessments of a credit rating agency seeking NRSRO designation. Staff pointed to the legislative history of the Act as illustrating that Congress clearly found that the prior staff no-action letter process created artificial barriers to entry to NRSRO registration, and that the Act specifically was designed to replace that process with a more efficient and transparent registration program.[Footnote 32] For this reason, staff told us they have interpreted the Act to mean it is not appropriate for SEC to institute proceedings to deny an application merely to resolve staff questions about an application, and absent sufficient evidence at the application stage to support one of the statutory grounds for denial of registration. As Trading and Markets staff interpret the NRSRO registration program, they believe that the Commission must find an applicant has satisfied the requirements of Section 15E if the applicant meets the definition of a credit rating agency, submits the required material, and is capable of complying with the applicable U.S. securities laws. Because the Act allows SEC to deny an application if the applicants are found to have inadequate managerial or financial resources, the investing public may have some expectation that SEC determined that applicants had the financial and managerial resources to produce ratings with integrity before registering them. Furthermore, because each applicant must certify the accuracy of its information and statements, an expectation could exist that the information provided to SEC and on which the Commission bases its approval decision was accurate and complete. This not only includes the qualified institutional buyer certifications, which are used to determine whether or not the market recognizes and uses the ratings provided by the NRSRO, but also the public disclosures, such as the descriptions of the ratings methodologies. We identified other registration processes that have built in greater authority and flexibility for the staff to clarify issues before registering applicants. For example, both FINRA's registrant application process for broker-dealers and SEC's registration process for investment advisors require applicants to provide specific information and utilize deadlines to ensure an efficient process. 
According to staff from these programs, they are able to clarify any outstanding questions they have regarding information required on the application and to delay registering the applicant until they are satisfied the applicant has met all of the necessary requirements.[Footnote 33] However, because the NRSRO registration program requires SEC to act within 90 days of receiving the application and SEC has limited ability to extend that deadline, staff have recommended granting registration to credit rating agencies as NRSROs with some concerns outstanding about their meeting the Act's requirements. Furthermore, the uncertainty as to the extent of SEC's authority to compel the production of certain additional information that could be used to verify the information provided on Form NRSRO and the lack of specific criteria against which to assess the application may lead to SEC granting registration to an applicant that does not fully meet Section 15E's requirements as an NRSRO. New Dedicated Teams in SEC Provide Input on Regulatory Initiatives and Examine NRSROs: In December 2008, Trading and Markets created an NRSRO monitoring unit to provide input on regulatory initiatives related to NRSROs. The members of this unit are responsible for meeting periodically with NRSROs to establish an ongoing dialogue and discuss topics such as updates to rating methodologies and practices, financial results, and compliance and internal audit activity. The unit is also responsible for preparing internal periodic profile reports of each NRSRO and the annual report to Congress on NRSROs, and analyzing and preparing reports on topics of interest or potential concern. According to documents we reviewed, the unit also is supposed to meet periodically with NRSROs to discuss issues specifically related to model development, validation, and governance to gain a better understanding of the models and the controls around each. As of August 2, 2010, this unit has three members with qualifications and experience, ranging from a former rating agency managing director with more than 20 years experience to an analyst with 6 years of experience as a corporate credit analyst. Following its registration as an NRSRO, a credit rating agency immediately becomes subject to SEC oversight, including compliance examinations and enforcement. OCIE, which conducts NRSRO examinations, in May 2009 reorganized the Office of Market Oversight to create a new NRSRO examination team (a branch chief, a senior specialized examiner, and three staff examiners) to perform NRSRO examinations.[Footnote 34] The senior specialized examiner and three staff examiners were new SEC employees and include two former NRSRO analysts (both former managing directors of a major rating agency with over 20 and over 10 years experience respectively) and attorneys with experience in structured finance products and corporate law. According to OCIE staff, the new examiners have received standard OCIE training and on-the-job training from the branch chief and other examiners who completed previous NRSRO examinations. In addition, OCIE is considering incorporating NRSRO- specific training into its standard examiner training and has developed written guidance to assist examiners and foster consistency in examinations. Based on our review of OCIE's NRSRO examination guidelines and interviews with OCIE staff, OCIE plans to conduct routine and special NRSRO examinations. 
Routine examinations assess an NRSRO for compliance with the Act and applicable rules and regulations at regular intervals. Special, or cause, examinations typically originate from a tip or need to gather specific information (limited scope) or follow up on past examination findings and recommendations. OCIE expects to conduct special or cause examinations as necessary. According to the guidelines, each routine examination will generally begin with an initial risk assessment and scope analysis of the NRSRO to be reviewed. Specific review areas for each examination (such as conflicts of interest or document retention) are determined during the risk assessment process and throughout the examination. OCIE explained that all examinations are based on risk assessment to maximize the examination team's time and resources. NRSRO examination staff typically review, among other things, whether the NRSRO adequately has (1) disclosed a description of its ratings procedures and methodologies; (2) documented internally its ratings procedures and methodologies; (3) documented its ratings process in ratings files, including making and retaining required documentation; and (4) adhered to its ratings policies and procedures in the creation of ratings. OCIE examinations do not assess whether the NRSRO produces accurate ratings, as SEC is prohibited from evaluating the substance of credit ratings or the procedures and methodologies by which an NRSRO determines credit ratings. Areas of review may include credit rating and surveillance process, record retention, prevention of the misuse of material nonpublic information, conflicts of interest, prohibited acts and practices, financial operations, marketing, compliance, internal audit, and unsolicited ratings. For example, under the conflicts of interest area, the guidance provides a description of the applicable rule (17g-5) and a description of the disclosure requirements. It suggests OCIE request all current written policies and procedures related to conflicts of interest, and then assess if these policies and procedures were designed reasonably, captured all the relevant conflicts, and were followed. OCIE has been working with the NRSROs to standardize the level of detail that the examiners would expect to see. This process is ongoing and examiners likely will have to complete a few examinations before determining exactly what they would need to see to make compliance determinations. If deficiencies were found, OCIE would send a deficiency letter to the NRSRO requesting that it make the appropriate corrections and notify OCIE of how it plans to make those corrections. In cases of potential violations, OCIE may refer the NRSRO to the Enforcement Division for further review. OCIE completed its first series of examinations of the largest NRSROs in July 2008. The examinations focused on the rating of subprime RMBS by Standard & Poor's, Moody's, and Fitch and were initiated in response to the recent mortgage crisis. OCIE conducted these examinations jointly with Trading and Markets and made its examination results public.[Footnote 35] Although these examinations occurred after the enactment of the Act, the period subject to examination predated the Act. 
Among other things, the examinations found that: * RMBS and collateralized debt obligations (CDO) deals substantially increased in number and complexity since 2002, and some of the rating agencies appear to have struggled with that growth; * significant aspects of the rating process were not always disclosed; * the management of conflicts of interest raised some concerns; and: * the surveillance processes the ratings agencies used appeared to have been less robust than the processes used for initial ratings. The examinations also resulted in remedial actions that the examined NRSROs stated they would take to address the findings of these examinations and additional rulemaking by SEC. Examples of remedial actions included that the examined NRSROs evaluate, both at that time and on a periodic basis, whether they have sufficient staff and resources to manage their volume of business and meet their obligations under Section 15E of the Exchange Act and the rules applicable to NRSROs, and that each NRSRO conduct a review of its current disclosures relating to process and methodologies for rating RMBS and CDOs to assess whether it is fully disclosing its ratings methodologies in compliance with Section 15E of the Exchange Act and the rules applicable to NRSROs. The additional rulemaking included amendments to SEC's rules identifying a series of conflicts of interest that NRSROs must disclose and manage and others that were outright prohibited. In October 2008, OCIE began routine examinations of four of the remaining seven NRSROs. As of August 30, 2010, OCIE had completed three examinations and closed the remaining examination.[Footnote 36] OCIE staff explained that these routine examinations took longer to complete than anticipated because of the transition from the old to the new staff, the resignation or departure from the area of some examination staff, and the need to conduct other examinations and initiatives. For example, OCIE also has completed three special (limited scope) examinations and begun another special (cause) examination since the NRSRO examination program started. Given the small number of NRSRO examiners, the need to conduct these additional examinations slowed progress on the routine examination schedule. NRSRO examiners also have been conducting outreach initiatives for designated compliance officers of the NRSROs, which are intended to educate them on SEC's expectations with regard to compliance officers' skills and backgrounds. As a result, two examinations have taken over 18 months to complete, and with the current level of staffing it is unlikely that OCIE would have been able to meet its planned routine examination schedule of examining the three largest NRSROs every 2 years and the remaining NRSROs every 3 years. SEC requested additional resources, which it anticipated using to fully staff this oversight function. Under the recently passed Dodd-Frank Act, SEC must establish an Office of Credit Ratings and conduct annual examinations of each NRSRO. The Dodd-Frank Act also outlines eight specific areas these examinations must review and requires that the Commission produce an annual public report summarizing the findings of these examinations. According to OCIE staff, in August 2010, the NRSRO examination team will begin new examinations of all NRSROs as mandated by the Dodd-Frank Act. OCIE staff said they are relying on currently designated NRSRO examiners and examiners from other OCIE examination programs to staff the teams. 
The Dodd-Frank Act requires SEC to staff the office sufficiently to carry out these requirements. Although SEC may be able to staff the teams needed to conduct the first round of annual examinations by using staff from other examination programs, such an approach is not sustainable in the long term. In creating this new office, SEC would be better positioned strategically to meet the Act's requirements if it developed a plan to assess not only the number of staff it needs but also the skills required of that staff. Without a plan that helps ensure SEC has sufficient staff who have the appropriate qualifications and are appropriately trained, SEC may face challenges in meeting the required examination timetable and providing quality oversight of NRSROs. SEC Has Increased the Amount of Performance-related Data NRSROs Are Required to Disclose, but These Data Have Limited Usefulness: Since the implementation of the Act, SEC has made several revisions to the Form NRSRO that are intended to make more information publicly available for evaluating and comparing NRSRO performance. The revised Form NRSRO requires NRSROs to disclose credit rating performance statistics over 1-, 3-, and 10-year periods. SEC has also required NRSROs to make ratings history data publicly available. SEC intended these disclosures to allow users to better evaluate and compare NRSRO performance. However, because SEC did not specify how NRSROs should calculate these statistics, the NRSROs used varied methodologies, limiting their comparability. Further, we found that the ratings history data sets do not contain enough information to construct comparable performance statistics and are not representative of the population of credit ratings at each NRSRO. Without better disclosures, the information being provided will not serve its intended purpose of increasing transparency. SEC-required Performance Statistics Have Increased the Data Available from NRSROs, but Their Comparability Is Limited: Pursuant to the requirements of Section 15E of the Exchange Act, NRSROs are required to disclose credit rating performance statistics over short-, mid-, and long-term periods, as applicable. Form NRSRO specifies that these statistics must at a minimum show the performance of credit ratings in each class of credit ratings for which an NRSRO is registered over 1-, 3-, and 10-year periods, including, as applicable, historical transition and default rates for each rating category and notch. The statistics must include defaults relative to initial ratings.[Footnote 37] Transition rates compare ratings at the beginning of a time period with ratings at the end of the time period, while default rates show the percentage of entities with each rating that defaulted over a given time period. SEC requires NRSRO applicants to furnish the required transition and default rates as part of their NRSRO application on Form NRSRO, and once registered, to update the statistics annually. As part of these disclosures, NRSROs must define the credit rating categories they use and explain these performance measures, including the inputs, time horizons, and metrics used to determine them. SEC Intended Statistics to Facilitate Comparison of Credit Rating Performance among NRSROs: In adopting these requirements, SEC stated that these types of statistics are important indicators of the performance of a rating agency in terms of its ability to assess the creditworthiness of issuers and, consequently, would be useful to users of ratings in evaluating a rating agency's performance. 
SEC specified the 1-, 3-, and 10-year periods so that the performance statistics the NRSROs generated would be more easily comparable. SEC also stated that requiring NRSROs to define the ratings categories used and explain their performance statistics would assist users of ratings in understanding how the measurements were derived and in comparing the measurement statistics of different NRSROs. In deciding which statistics to require, SEC identified default and transition rates as common benchmarks.[Footnote 38] To compute 1-year transition rates by rating category, an NRSRO will form cohorts by grouping all of the entities (issues or issuers) with ratings outstanding at the beginning of the year by their rating at the beginning of the year. The NRSRO then calculates the number or percentage of entities in each cohort that have each possible rating at the end of the year. Table 2 provides a hypothetical example of a 1- year transition matrix for cohorts of rated entities for 2009. Table 2: Hypothetical 1-Year Transition Matrix and 1-Year Default Rates Relative to Beginning-of-Year Ratings for 2009 Cohorts: Rating as of Jan 1, 2009: AAA; Rating as of Dec. 31, 2009: AAA: 93.3%; Rating as of Dec. 31, 2009: AA: 5.4%; Rating as of Dec. 31, 2009: A: 1.3%; Rating as of Dec. 31, 2009: BBB: 0.0%; Rating as of Dec. 31, 2009: BB: 0.0%; Rating as of Dec. 31, 2009: B: 0.0%; Rating as of Dec. 31, 2009: CCC: 0.0%; Rating as of Dec. 31, 2009: CC: 0.0%; Rating as of Dec. 31, 2009: D: 0.0%; Rating as of Dec. 31, 2009: W: 0.0%; Rating as of Dec. 31, 2009: Cohort size[A]: 149. Rating as of Jan 1, 2009: AA; Rating as of Dec. 31, 2009: AAA: 0.4%; Rating as of Dec. 31, 2009: AA: 92.1%; Rating as of Dec. 31, 2009: A: 7.0%; Rating as of Dec. 31, 2009: BBB: 0.4%; Rating as of Dec. 31, 2009: BB: 0.0%; Rating as of Dec. 31, 2009: B: 0.0%; Rating as of Dec. 31, 2009: CCC: 0.0%; Rating as of Dec. 31, 2009: CC: 0.0%; Rating as of Dec. 31, 2009: D: 0.0%; Rating as of Dec. 31, 2009: W: 0.2%; Rating as of Dec. 31, 2009: Cohort size[A]: 543. Rating as of Jan 1, 2009: A; Rating as of Dec. 31, 2009: AAA: 0.0%; Rating as of Dec. 31, 2009: AA: 1.3%; Rating as of Dec. 31, 2009: A: 91.3%; Rating as of Dec. 31, 2009: BBB: 6.9%; Rating as of Dec. 31, 2009: BB: 0.2%; Rating as of Dec. 31, 2009: B: 0.0%; Rating as of Dec. 31, 2009: CCC: 0.0%; Rating as of Dec. 31, 2009: CC: 0.0%; Rating as of Dec. 31, 2009: D: 0.0%; Rating as of Dec. 31, 2009: W: 0.3%; Rating as of Dec. 31, 2009: Cohort size[A]: 1,181. Rating as of Jan 1, 2009: BBB; Rating as of Dec. 31, 2009: AAA: 0.0%; Rating as of Dec. 31, 2009: AA: 0.2%; Rating as of Dec. 31, 2009: A: 3.6%; Rating as of Dec. 31, 2009: BBB: 90.4%; Rating as of Dec. 31, 2009: BB: 5.0%; Rating as of Dec. 31, 2009: B: 0.5%; Rating as of Dec. 31, 2009: CCC: 0.0%; Rating as of Dec. 31, 2009: CC: 0.0%; Rating as of Dec. 31, 2009: D: 0.1%; Rating as of Dec. 31, 2009: W: 0.3%; Rating as of Dec. 31, 2009: Cohort size[A]: 1,266. Rating as of Jan 1, 2009: BB; Rating as of Dec. 31, 2009: AAA: 0.0%; Rating as of Dec. 31, 2009: AA: 0.0%; Rating as of Dec. 31, 2009: A: 0.3%; Rating as of Dec. 31, 2009: BBB: 8.1%; Rating as of Dec. 31, 2009: BB: 83.6%; Rating as of Dec. 31, 2009: B: 6.9%; Rating as of Dec. 31, 2009: CCC: 0.4%; Rating as of Dec. 31, 2009: CC: 0.0%; Rating as of Dec. 31, 2009: D: 0.1%; Rating as of Dec. 31, 2009: W: 0.6%; Rating as of Dec. 31, 2009: Cohort size[A]: 700. Rating as of Jan 1, 2009: B; Rating as of Dec. 31, 2009: AAA: 0.0%; Rating as of Dec. 31, 2009: AA: 0.0%; Rating as of Dec. 
31, 2009: A: 0.0%; Rating as of Dec. 31, 2009: BBB: 0.2%; Rating as of Dec. 31, 2009: BB: 8.7%; Rating as of Dec. 31, 2009: B: 83.2%; Rating as of Dec. 31, 2009: CCC: 4.1%; Rating as of Dec. 31, 2009: CC: 1.5%; Rating as of Dec. 31, 2009: D: 0.7%; Rating as of Dec. 31, 2009: W: 1.7%; Rating as of Dec. 31, 2009: Cohort size[A]: 541. Rating as of Jan 1, 2009: CCC; Rating as of Dec. 31, 2009: AAA: 0.0%; Rating as of Dec. 31, 2009: AA: 0.0%; Rating as of Dec. 31, 2009: A: 0.0%; Rating as of Dec. 31, 2009: BBB: 2.1%; Rating as of Dec. 31, 2009: BB: 0.0%; Rating as of Dec. 31, 2009: B: 19.1%; Rating as of Dec. 31, 2009: CCC: 68.1%; Rating as of Dec. 31, 2009: CC: 2.1%; Rating as of Dec. 31, 2009: D: 4.3%; Rating as of Dec. 31, 2009: W: 4.3%; Rating as of Dec. 31, 2009: Cohort size[A]: 47. Rating as of Jan 1, 2009: CC; Rating as of Dec. 31, 2009: AAA: 0.0%; Rating as of Dec. 31, 2009: AA: 0.0%; Rating as of Dec. 31, 2009: A: 0.0%; Rating as of Dec. 31, 2009: BBB: 0.0%; Rating as of Dec. 31, 2009: BB: 0.0%; Rating as of Dec. 31, 2009: B: 20.0%; Rating as of Dec. 31, 2009: CCC: 20.0%; Rating as of Dec. 31, 2009: CC: 0.0%; Rating as of Dec. 31, 2009: D: 20.0%; Rating as of Dec. 31, 2009: W: 40.0%; Rating as of Dec. 31, 2009: Cohort size[A]: 40. Source: GAO. [A] Cohort size refers to the number of entities that had a particular rating at the beginning of the time period, in this case January 1, 2009. [End of table] The rows of the matrix show all possible credit ratings an entity could have at the beginning of the year, by rating category. The columns of the matrix show all possible credit ratings an entity could have at the end of the year, also by rating category. The matrix also includes columns for defaults (D) and withdrawals (W), since an entity could default or have its rating withdrawn during the course of the year. The table cells show the rates at which ratings migrate upward, downward, or stay the same. For example, 1.3 percent of the entities rated A at the beginning of the year were rated AA at the end of the year, 6.9 percent of the entities rated A at the beginning of the year were rated BBB at the end of the year, and 91.3 percent of the entities rated A at the beginning of the year were rated A at the end of the year. By showing the number or fraction of entities in each cohort with stable ratings--that is, with the same beginning-of-year and end-of-year ratings--transition matrixes allow users to compare the stability of different rating categories (for the same NRSRO). Table 2 shows that 93.3 percent of entities rated AAA at the beginning of the year were still rated AAA at the end of the year, but 90.4 percent of entities rated BBB at the beginning of the year were still rated BBB at the end of the year. Thus, AAA ratings demonstrated more stability than BBB ratings. In general, an NRSRO's credit ratings are intended to order rated entities by their creditworthiness, with high ratings indicating relatively more creditworthiness than low ratings. One aspect of creditworthiness is the extent to which it changes over time, with more creditworthy entities demonstrating greater stability. Thus, an NRSRO's higher ratings should exhibit more stability than its lower ratings. Transition rates illustrate how well a particular rating scale rank orders credit risk on this margin. As previously stated, default rates show the percentage of entities with each rating that defaulted over a given time period. 
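The cohort-and-count procedure that produces such a transition matrix can be sketched in a few lines of code. The example below is a minimal illustration only: the rating scale, field names, and sample records are hypothetical and are not drawn from any NRSRO's actual data or methodology.

    from collections import Counter

    # Hypothetical rating scale; "D" marks a default and "W" a withdrawn rating.
    SCALE = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC", "CC", "D", "W"]

    # Hypothetical cohort: each record gives an entity's rating at the start of
    # the year and its status at the end of the year ("D" if it defaulted,
    # "W" if its rating was withdrawn).
    cohort_2009 = [
        {"entity": "Issuer 1", "start": "AAA", "end": "AAA"},
        {"entity": "Issuer 2", "start": "AAA", "end": "AA"},
        {"entity": "Issuer 3", "start": "BBB", "end": "BBB"},
        {"entity": "Issuer 4", "start": "BBB", "end": "BB"},
        {"entity": "Issuer 5", "start": "B", "end": "D"},
        {"entity": "Issuer 6", "start": "B", "end": "W"},
    ]

    def one_year_transition_matrix(cohort):
        """Group entities by start-of-year rating and report, for each group, the
        percentage ending the year in each rating, in default, or withdrawn."""
        counts = {rating: Counter() for rating in SCALE if rating not in ("D", "W")}
        for record in cohort:
            counts[record["start"]][record["end"]] += 1
        matrix = {}
        for start, row in counts.items():
            cohort_size = sum(row.values())
            if cohort_size == 0:
                continue  # no entities held this rating at the start of the year
            matrix[start] = {end: 100.0 * row[end] / cohort_size for end in SCALE}
            matrix[start]["cohort size"] = cohort_size
        return matrix

    for start, row in one_year_transition_matrix(cohort_2009).items():
        print(start, row)

Averaging such matrixes over many annual (or monthly) cohorts, as some NRSROs do, is a straightforward extension of this same counting step.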
For example, to calculate simple 1-year default rates relative to beginning-of-year ratings, an NRSRO will form cohorts by grouping all of the entities with ratings outstanding at the beginning of the year by their rating at the beginning of the year. The NRSRO then calculates the fraction of entities in each cohort that default during the year. Table 2 shows hypothetical 1-year default rates for the 2009 cohorts of rated entities. For example, no bonds rated AAA defaulted during 2009, while 20 percent of bonds rated CC defaulted during this period. SEC rules require that NRSROs disclose on their Form NRSRO 1-, 3-, and 10-year default rates relative to initial ratings. For example, to calculate simple 1-year default rates relative to initial ratings, an NRSRO could form cohorts by grouping all of the entities assigned initial ratings during a given time period by their initial rating. The NRSRO could then calculate the fraction of entities in each cohort that default within 1 year after they receive their initial rating. As previously discussed, an NRSRO's credit ratings are intended to put rated entities in order of their creditworthiness. It follows that entities with higher credit ratings should default less often than entities with lower credit ratings. Default rates also illustrate how accurately a particular rating scale rank orders credit risk on this margin. NRSROs' Differing Methodologies for Calculating Performance Statistics Limit Their Comparability: We reviewed the 2009 performance statistics published by the 10 NRSROs as part of their annual update to Form NRSRO, focusing on the corporate and structured finance asset classes. The required disclosures increased the amount of information publicly available about the performance of some NRSROs, particularly those that were newly registered. However, SEC did not specify how the NRSROs were to calculate the required performance statistics, and, as a result, the NRSROs used different methodologies for calculating the transition and default rates. For the transition rates, they differed by whether they (1) were for a single cohort or averaged over many cohorts, (2) constructed cohorts on an annual basis or monthly basis, (3) were adjusted for entities that have had their ratings withdrawn or unadjusted, and (4) allowed entities to transition to default or not. Because of these differences, users cannot use the performance statistics to compare transition rates across NRSROs, as the rule intended. First, some NRSROs provided transition rates for individual cohorts for the most recent 1-, 3-, and 10-year period.[Footnote 39] These NRSROs provided statistics summarizing the transition rates for individual cohorts--for example, 1-year transition rates for the 2009 cohort, 3-year transition rates for the 2007 cohort, and 10-year transition rates for the 2000 cohort.[Footnote 40] Other NRSROs provided average transition rates for multiple 1-, 3-, and 10-year time periods over a range of years. For example, one NRSRO calculated 1-year transition rates for every annual cohort from 1981 to 2009 (obtaining 28 separate 1-year transition rates) and then averaged the rates in those matrixes to obtain average 1-year transition rates for each rating category and notch. The NRSRO used the same methodology to calculate average 3-year and average 10-year transition rates over the same period. Single and average cohort approaches provide different information about an NRSRO's performance. 
The single cohort approach uses information from only the most recent 1-, 3-, and 10-year periods and thus describes the NRSRO's performance at specific points in time. As such, it is useful for describing the historical experience of a particular group of ratings under a particular set of circumstances. Single cohort transition matrixes are thus useful as predictors of the performance of ratings in future time periods under similar circumstances, but they are less useful as predictors of the performance of ratings in future time periods under different economic and other conditions. On the other hand, the average cohort approach uses information from multiple time periods and thus describes the NRSRO's performance during an average 1-, 3-, or 10-year time period. As such, average cohort transition rates are useful indicators of expected transition rates in the future, given that future economic and other conditions are unknown. Both approaches are valid, depending on the needs of the user, but they do not yield comparable information. Second, the NRSROs differ in whether they construct cohorts on an annual or monthly cohort basis. The frequency with which the cohorts are formed affects the accuracy of average transition rates. The higher the sampling frequency--the shorter the time period between cohort formation dates--the more observations are available for calculating the averages and the more accurate the transition rates are for predictive purposes. Most of the NRSROs used annual cohorts (that is, they formed a new cohort on December 30 or 31 or January 1 of each year), but one NRSRO used monthly cohorts (that is, it formed a new cohort on the first day of each month). Using monthly cohorts means that 12 times as many observations contribute to average transition rates. Third, we found some NRSROs adjusted their transition rates to reflect those entities with ratings that were withdrawn during the time period, while others did not. NRSROs withdraw ratings for a number of reasons. In many cases, the issue matures and the rating is no longer needed. In other cases, the NRSRO discontinues a rating for lack of information. Transition rates that include entities with withdrawn ratings in the cohorts are called "unadjusted" rates, while those that exclude entities with withdrawn ratings are called "withdrawal- adjusted" rates. The treatment of withdrawn ratings in calculating transition rates can have a significant impact on the magnitude of the rates. For example, suppose a cohort with an initial BBB rating has 100 rated entities at the beginning of the time period, and suppose that 25 are withdrawn during the period and the remainder are still rated BBB at the end of the period. The unadjusted transition rate from BBB to BBB would be 75 percent (75/100) and the withdrawal- adjusted transition rate would be 100 percent (75/75). For one NRSRO, we could not determine from the disclosures how it treated withdrawals in its transition rates. Fourth, one NRSRO did not include transitions to default in its transition rates. As a result, its performance statistics do not include information on the number of ratings that moved from a particular rating category to default during the 1-, 3-, or 10-year periods. This information is important to the investor, because ratings that move from a higher rating category (such as AAA, AA, or A) directly to default within the time period may signal poor ratings performance by the NRSROs. 
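The effect of the withdrawal adjustment described above can be restated as a short calculation. The sketch below simply reproduces the arithmetic of the hypothetical BBB cohort in the preceding example and is not any NRSRO's actual formula.

    # Hypothetical BBB cohort from the example above: 100 rated entities at the
    # start of the period, 25 ratings withdrawn during the period, and the
    # remaining 75 still rated BBB at the end of the period.
    cohort_size = 100
    withdrawn = 25
    still_bbb = 75

    # Unadjusted rate: withdrawn ratings stay in the denominator.
    unadjusted = 100.0 * still_bbb / cohort_size              # 75.0 percent

    # Withdrawal-adjusted rate: withdrawn ratings are dropped from the cohort.
    adjusted = 100.0 * still_bbb / (cohort_size - withdrawn)  # 100.0 percent

    print(unadjusted, adjusted)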
Table 3 summarizes the variation in the calculation of transition rates that we found in reviewing the NRSROs' 2009 performance statistics. Table 3: Methods Used by NRSROs to Calculate SEC-required 1-, 3-, and 10-year Transition Rates: NRSRO methods for calculating transition rates on corporate issuer ratings[A]: Single-cohort transition rates[B]: Withdrawal-adjusted, includes transitions to default[C]; Number of NRSROs using these methods: 2. Single-cohort transition rates[B]: Not withdrawal-adjusted, includes transitions to default; Number of NRSROs using these methods: 2. Average transition rates for multiple cohorts[D]: Withdrawal-adjusted, includes transitions to default; Number of NRSROs using these methods: 2. Average transition rates for multiple cohorts[D]: Not withdrawal-adjusted; Number of NRSROs using these methods: [Empty]. Average transition rates for multiple cohorts[D]: Includes transitions to default; Number of NRSROs using these methods: 1. Average transition rates for multiple cohorts[D]: Does not include transitions to default; Number of NRSROs using these methods: 1. Transition rate methodology unclear: Number of NRSROs using these methods: 1. NRSRO methods for calculating transition rates on corporate issuer ratings[A]: Total[E]; Number of NRSROs using these methods: 9. NRSRO methods for calculating transition rates on structured finance ratings: Single-cohort transition rates: Withdrawal-adjusted, includes transitions to default; Number of NRSROs using these methods: 2. Single-cohort transition rates: Not withdrawal-adjusted, includes transitions to default; Number of NRSROs using these methods: 3. Average transition rates for multiple cohorts: Withdrawal-adjusted, includes transitions to default; Number of NRSROs using these methods: 2. Average transition rates for multiple cohorts: Not withdrawal-adjusted; Number of NRSROs using these methods: [Empty]. Average transition rates for multiple cohorts: Includes transitions to default; Number of NRSROs using these methods: 1. Average transition rates for multiple cohorts: Does not include transitions to default; Number of NRSROs using these methods: 1. Transition rate methodology unclear: Number of NRSROs using these methods: 1. NRSRO methods for calculating transition rates on structured finance ratings: Total; Number of NRSROs using these methods: 10. Source: GAO analysis of transition rates reported on 2009 Form NRSRO filings. [A] Two NRSROs did not provide transition rates, but provided data so users could calculate these rates. Transition matrixes may also vary across NRSROs on margins not reported in the table. [B] Single-cohort transition rates describe the performance of individual cohorts. [C] Withdrawal-adjusted transition rates are those for which withdrawn ratings are excluded from cohorts. [D] Average transition rates describe the average performance of multiple cohorts over a given time period. [E] Total does not add up to 10 because one NRSRO is not registered in the corporate asset class. [End of table] NRSROs also used different methodologies for calculating default rates. In general, default rates differed by whether they were (1) relative to ratings at the beginning of a given time period or relative to initial ratings, (2) adjusted for entities that had their ratings withdrawn or unadjusted, (3) adjusted for how long entities survived without defaulting or unadjusted, (4) calculated using annual or monthly cohorts, and (5) calculated for a single cohort or averaged over many cohorts. 
Because of these differences, users cannot compare default rates across NRSROs, as the rule intended. First, most NRSROs reported default rates relative to ratings at the beginning of a specific time period, while two reported default rates relative to initial rating.[Footnote 41] Calculating default rates relative to ratings at the beginning of a given time period is similar to calculating transition rates relative to ratings at the beginning of a given time period. NRSROs form cohorts by grouping entities that had the same outstanding rating on a specific date, and then calculating the number or fraction of entities in each group that defaulted over a given time period. To calculate default rates relative to initial rating, NRSROs form cohorts by grouping entities that were assigned the same initial rating, regardless of when the initial ratings were assigned. One NRSRO calculated the default rates relative to initial ratings over the entire period for which it had historical ratings data, from 1983-2009. Issuers that were assigned initial ratings of AAA at any point during that 26-year period were grouped, as were issuers that were assigned initial ratings of AA, and so forth. The NRSRO then determined whether there had been any defaults at any time for any of those issuers over the 26 years and calculated the default rate. Another NRSRO used a different methodology and provided default rates relative to the initial ratings only for those ratings that were outstanding at the beginning of the most recent 1-, 3-, and 10-year periods. For example, for the 1-year period, this NRSRO determined the ratings that had been outstanding as of December 31, 2008, grouped them according to their initial ratings, determined whether any defaulted in 2009, and calculated the default rates. Default rates for entities based on their initial rating omit important information about the performance of NRSRO ratings over time. In some asset classes (specifically corporate, financial institution, and insurance company), performance statistics are based on issuer ratings, not the ratings on specific securities of those issuers.[Footnote 42] For some issuers, their ratings history spans decades or longer. The initial rating given to those firms is not relevant information at the time the issuer defaulted, because the performance of the issuer likely changed throughout the years. For example, an issuer initially could have been rated AAA 30 years ago but deteriorated in the last few years. An investor likely would be more interested in the last ratings the NRSRO provided for the issuer to determine whether it accurately predicted the default. In contrast, structured finance products typically do not have maturities that last for decades, so calculating defaults relative to initial ratings may provide more useful information in that sector.[Footnote 43] Second, as with transition rates, some NRSROs calculated their default rates adjusting for withdrawn ratings, while others did not. In addition to affecting the relative magnitude of default rates, the treatment of withdrawn ratings also provides different information about default risk. Unadjusted default rates describe the historical frequency of defaults for a cohort during a given time period and treat entities with withdrawn ratings as if they had remained in the data sample for the entire period. 
They can be used to predict the expected probability of default for entities over a time period that is at most as long as the period used to calculate the default rate. However, unadjusted default rates likely underestimate actual default rates for a cohort because NRSROs are less likely to observe defaults among entities with withdrawn ratings, either because they choose not to track those entities or because they have less access to information about them. Furthermore, unadjusted default rates are only useful for predicting default rates for entities that have withdrawal patterns similar to those of the cohort used to calculate the unadjusted default rates. The greater the differences in the withdrawal experience of two groups of rated entities, the less useful the unadjusted default rates for one group are for predicting defaults in the other group. For one NRSRO, we could not determine from the disclosures how it treated withdrawals in its default rates. On the other hand, withdrawal-adjusted default rates describe the historical frequency of defaults for entities during a given time period conditional on those entities having a rating outstanding for the entire period. These statistics treat entities with withdrawn ratings as if they faced the same likelihood of default as entities with ratings that were not withdrawn. Withdrawal-adjusted default rates can be used to predict the expected probability of default for entities over the same length of time as the period used to calculate the default rate. The usefulness of the prediction does not depend on similarities in withdrawal patterns for the entities. However, withdrawal-adjusted default rates assume that withdrawals are random and not correlated with the likelihood that an entity defaults. Withdrawal-adjusted default rates can be biased downward or upward if entities with withdrawn ratings are more or less likely to default, respectively, than entities with ratings that were not withdrawn. Third, some NRSROs reported simple default rates while others reported default rates conditioned on how long the entities went without defaulting or withdrawing. NRSROs calculate simple default rates by dividing the number of defaults that occurred over a specific time period by the number of rated entities in the cohort at the beginning of the time period. For example, some NRSROs reported the simple 3-year default rate for the 2007 cohort. To calculate this, they formed cohorts for 2007, took the number of defaults that occurred over the 3-year period in each cohort, and divided that number by the number of rated entities included in each cohort. NRSROs calculate conditional default rates by adjusting for the fact that entities can default at different points during the chosen time period. This method is called conditional because default rates are conditioned upon those issuers that "survived" for a particular amount of time.[Footnote 44] Simple default rates are equal to conditional default rates if neither is adjusted for withdrawals. However, simple withdrawal-adjusted default rates are larger than conditional withdrawal-adjusted default rates. The former assume that all withdrawals occurred at the beginning of the period and thus never had an opportunity to default. The latter reflect the fact that withdrawals occur at different times during the period, so the number of ratings that could default is larger at the beginning of the period than at the end. 
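Before turning to the remaining differences, a brief sketch may make the simple-versus-conditional distinction concrete. The cohort size, yearly default and withdrawal counts, and the year-by-year survival calculation below are hypothetical illustrations of the general technique, not any NRSRO's actual methodology.

    # Hypothetical 3-year cohort: 100 entities rated at the start of the period,
    # with invented counts of defaults and withdrawals observed in each year.
    initial_cohort = 100
    defaults_by_year = [2, 3, 1]
    withdrawals_by_year = [10, 10, 5]

    # Simple withdrawal-adjusted rate: every withdrawal is dropped from the
    # denominator, as if it had occurred at the beginning of the period.
    simple_adjusted = sum(defaults_by_year) / (initial_cohort - sum(withdrawals_by_year))

    # Conditional-on-survival rate: compute a marginal default rate for each year
    # among entities still rated at the start of that year, then combine the
    # yearly survival probabilities into a cumulative default rate.
    survivors = initial_cohort
    survival_probability = 1.0
    for defaults, withdrawals in zip(defaults_by_year, withdrawals_by_year):
        marginal_rate = defaults / survivors
        survival_probability *= 1.0 - marginal_rate
        survivors -= defaults + withdrawals
    conditional_adjusted = 1.0 - survival_probability

    # With these numbers the simple rate (8.0 percent) exceeds the conditional
    # rate (about 6.6 percent), consistent with the discussion above.
    print(round(simple_adjusted, 3), round(conditional_adjusted, 3))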
Fourth, as with transition rates, the frequency with which the cohorts are formed affects the accuracy of average default rates. Again, most of the NRSROs used annual cohorts, but two NRSROs used monthly cohorts. Fifth, some NRSROs reported default rates for the most recent 1-, 3-, and 10-year periods for individual cohorts, while others reported average 1-, 3-, and 10-year default rates for multiple cohorts. For example, some NRSROs reported simple 3-year default rates for the 2007 cohort. However, one NRSRO reported average simple 3-year default rates for 1990-2009. It did so by first calculating simple 3-year default rates for every cohort from 1990 through 2009 and then averaging those default rates. Table 4 summarizes the variation in the calculation of the default rates that we found in our review of the 2009 performance statistics. Table 4: Methods Used by NRSROs to Calculate SEC-required 1-, 3-, and 10-year Default Rates: NRSRO methods for calculating default rates on corporate issuer ratings[A]: Single-cohort default rates relative to beginning-of-year ratings[B]: Simple, withdrawal-adjusted[C]; Number of NRSROs using these methods: 2. Single-cohort default rates relative to beginning-of-year ratings[B]: Simple, unadjusted; Number of NRSROs using these methods: 1. Single-cohort default rates relative to beginning-of-year ratings[B]: Conditional-on-survival, withdrawal-adjusted[D]; Number of NRSROs using these methods: 2. Average default rates for multiple cohorts relative to beginning-of-year ratings[E]: Simple, withdrawal-adjusted; Number of NRSROs using these methods: 2. Default rates relative to initial rating: Conditional-on-survival, withdrawal-adjusted; Number of NRSROs using these methods: 1. Default rate methodology unclear: Number of NRSROs using these methods: 1. NRSRO methods for calculating default rates on corporate issuer ratings[A]: Total[F]; Number of NRSROs using these methods: 9. NRSRO methods for calculating default rates on structured finance ratings: Single-cohort default rates relative to beginning-of-year ratings: Simple, withdrawal-adjusted; Number of NRSROs using these methods: 1. Single-cohort default rates relative to beginning-of-year ratings: Simple, not withdrawal-adjusted; Number of NRSROs using these methods: 2. Single-cohort default rates relative to beginning-of-year ratings: Conditional-on-survival, withdrawal-adjusted; Number of NRSROs using these methods: 1. Average default rates for multiple cohorts relative to beginning-of-year ratings: Simple, withdrawal-adjusted; Number of NRSROs using these methods: 2. Default rates relative to initial rating: Simple, not withdrawal-adjusted; Number of NRSROs using these methods: 1. Default rates relative to initial rating: Conditional-on-survival, withdrawal-adjusted; Number of NRSROs using these methods: 1. Default rate methodology unclear: Number of NRSROs using these methods: 1. NRSRO methods for calculating default rates on structured finance ratings: Total[F]; Number of NRSROs using these methods: 9. Source: GAO analysis of default rates reported on 2009 Form NRSRO filings. [A] Default rates may also vary across NRSROs on margins not summarized in the table. [B] Single-cohort default rates describe the performance of individual cohorts. [C] Withdrawal-adjusted default rates are those for which withdrawn ratings are excluded from cohorts. [D] Conditional default rates are those that are adjusted for the fact that entities can default at different points during the chosen time period. 
[E] Average default rates describe the average performance of multiple cohorts over a given time period. [F] Totals do not add up to 10 because one NRSRO is not registered in the corporate asset class and one NRSRO did not report any default rates for issuers of ABS. [End of table] Besides not specifying how the NRSROs should calculate their transition and default rates, SEC did not specify how the NRSROs should present their performance statistics. For example, four NRSROs reported their transition and default rates as percentages, but did not report the absolute number of ratings in each cohort. As a result, these disclosures have limited utility for comparison purposes. All else being equal, transition and default rates will be more precise, and thus more meaningful, the greater the number of observations used to calculate them. Furthermore, defaults are relatively rare events that may not be observed at all in samples that are too small. Knowing the absolute number of ratings in each cohort is thus important for comparative purposes because it gives users an idea of how precise the transition and default rates are and of the total number of entities underlying each rate. That is, if an NRSRO's default rate is 20 percent, it is useful for users to know if one out of five rated entities defaulted, or if 10,000 out of 50,000 rated entities defaulted. As another example, if an NRSRO's default rate increases from 10 to 12 percent in different years, it is useful for users to know whether that 2 percentage point difference resulted from the default of 20 or 2,000 additional rated entities. At least one SEC-designated asset class may be too broadly defined to be meaningful. Two NRSROs may rate the same asset class, but differences in ratings performance may reflect the differences between the sets of rated issues and issuers, and not necessarily provide insights into the relative merits of the ratings methodology used. For example, several NRSROs specialize in rating certain asset classes. Realpoint only rates commercial mortgage-backed securities (CMBS), while Japan Credit Rating Agency and Ratings and Investment, the two Japanese NRSROs, focus on Japanese issues. Thus, comparing the transition and default rates of these NRSROs with those of other NRSROs that may rate more types of structured finance products or focus on other geographic regions may not be meaningful. In structured finance, the NRSROs that rate ABS generally present performance statistics for this asset class by sectors in their voluntary disclosures; that is, CMBS, RMBS, and ABS backed by auto, student, or credit card loans. These ABS sectors have risk characteristics that vary significantly. Thus, presenting performance statistics for the ABS asset class as a whole, instead of by sectors, may not be useful. SEC has not yet re-evaluated the currently designated asset classes to determine whether they are appropriate for presenting performance statistics. Trading and Markets staff said SEC issued rules requiring the NRSROs to publish performance statistics under tight time frames.[Footnote 45] Because this is a new area for SEC, staff said they wanted to focus on drafting a rule that would be appropriate. They said that once SEC, NRSROs, and the market obtain some experience with these disclosures, SEC could respond to any issues or comments with further rulemakings. 
Because the NRSROs do not have specific guidance for calculating and presenting their currently required performance statistics, they used different methodologies. As a result, users of these statistics cannot compare ratings performance across NRSROs. Further, asset classes that are defined too broadly limit the usefulness of the disclosures. Under the Dodd-Frank Act, SEC must adopt rules requiring the NRSROs to publicly disclose information on the initial credit ratings determined by each NRSRO for each type of obligor, security, and money market instruments, and any subsequent changes to such credit ratings. [Footnote 46] The purpose of these rules is to allow users of credit ratings to evaluate the accuracy of ratings and compare the performance of ratings by different NRSROs. At a minimum, these disclosures must be comparable among NRSROs, clear and informative for investors having a wide range of sophistication who use or might use credit ratings, and include information over a range of years and for a variety of credit ratings, including those credit ratings that the NRSROs withdraw. In developing these new disclosure requirements, it will be important for SEC to provide clear and specific guidance to the NRSROs. Otherwise, the resulting disclosures may lack comparability. Although Using Consistent Methodologies to Generate Performance Statistics Is Helpful, Other Differences Can Make Comparisons among NRSROs Difficult: Even if NRSROs use the same methodologies to generate and present performance statistics, there are differences in NRSROs' measures of creditworthiness, ratings scales, ratings methodologies, and other processes. It is important that users of NRSRO performance statistics be aware of this contextual information when comparing NRSRO performance. * NRSROs vary in how they measure creditworthiness. For example, some NRSROs' credit ratings measure the likelihood of default, while others also measure other characteristics, such as the anticipated severity of dollar losses given a default. * NRSROs also vary in how they define the elements of their ratings scales. As previously discussed, NRSRO rating scales rank rated entities according to their relative creditworthiness, with higher ratings indicating higher creditworthiness. However, the creditworthiness associated with each rating category can vary across NRSROs. Even within a rating scale, the assignment of ratings in the same rating category to issuers and issues may not fully reflect small differences in the degrees of risk. Moreover, the degree of risk within a particular rating scale can change over time.[Footnote 47] * NRSROs can differ in how they determine when to withdraw a rating. As previously discussed, withdrawals typically occur when an issue matures, but NRSROs also exercise judgment on whether or not to withdraw ratings in other cases, such as when they believe they do not have sufficient information to provide a rating. They also can vary in the extent to which they track withdrawn ratings. We obtained information from four NRSROs on their treatment of withdrawn ratings. Three NRSROs continue to track the issue or issuer after a rating is withdrawn to determine whether it eventually defaulted. These NRSROs then update their performance statistics to account for these defaults. One NRSRO did not. NRSROs that track post-withdrawal defaults will show a higher default rate than those that do not, all other things being equal. * NRSROs can differ in how they define default. 
Therefore, some agencies may have higher default rates than others as a result of a broader set of criteria for determining that a default has occurred. * Differences in NRSROs' rating methodologies can affect the relative stability of ratings. For example, in their public disclosures, two NRSROs stated that they rate through the business cycle, meaning that their ratings are intended to measure how an issuer will weather a variety of macroeconomic conditions, relative to other issuers. These rating agencies, upon receiving new information about the issuer, may not immediately revise the rating. As a result, ratings that reflect a through-the-cycle approach are less likely to fluctuate over time. However, another NRSRO told us that it updates ratings more frequently to reflect current market information and conditions. NRSROs that use this approach likely will have transition rates that show more volatility than the transition rates of NRSROs that rate through the business cycle. Users of NRSRO performance statistics can obtain some of this contextual information from other disclosures NRSROs are required to make under Form NRSRO.[Footnote 48] As previously discussed, NRSRO applicants and registered NRSROs are required to disclose general descriptions of their policies and procedures for determining ratings. For example, these disclosures discuss each NRSRO's approach to measuring creditworthiness, identifying defaults, and determining when to withdraw a rating. As part of their required disclosures of performance statistics, NRSROs also must describe the rating categories for their ratings scales. However, these descriptions define only the rank ordering of the elements of the rating scale, and do not give any indication of the actual degree of risk associated with a rating category. When it proposed rules to require performance disclosures from the NRSROs, SEC requested comments on whether the performance statistics should use standardized inputs, time horizons, and metrics to allow for greater comparability.[Footnote 49] Some commenters opposed the use of standardized measures, stating that such measures would be impractical because credit rating agencies use different methodologies to determine credit ratings and different definitions of default and that the use of such measures could interfere with the methodologies for determining credit ratings. However, a few commenters supported the use of standardized measures because it would make it easier to compare NRSROs. In light of the different views expressed, SEC stated it would continue considering this issue to determine the feasibility and potential benefits and limitations of devising measurements that would allow reliable comparisons of performance between NRSROs. SEC's ability to achieve comparability on some of these margins may be limited, however, given Section 15E prohibits SEC from regulating credit ratings and the procedures and methodologies used to determine them. 
Required Disclosures of Ratings History Data Have Limited Usefulness for Generating Comparative Performance Statistics: In February 2009, SEC adopted a rule requiring issuer-pays rating agencies to disclose a random 10 percent sample of outstanding ratings in each class of ratings in which they have more than 500 issuer-paid ratings.[Footnote 50] In December 2009, SEC issued a second rule requiring all NRSROs to disclose 100 percent of the histories of their ratings actions for credit ratings initiated on or after June 26, 2007.[Footnote 51] SEC intended the two rules to be complementary and allow users to generate a variety of performance measures and comparative studies. However, we found that the data disclosed under the 10 percent sample disclosure requirement do not contain enough information to construct comparable performance statistics and are not representative of the population of credit ratings at each NRSRO and that the data disclosed under the 100 percent disclosure requirement likely present similar issues. Factors Limiting Utility of 10 percent Samples Include Lack of Information on Ratings Types and Variables, and Sampling Methodologies: According to SEC, the 10 percent sample requirement is intended to foster accountability and comparability--and hence, competition--among NRSROs. SEC stated in the final rule that market observers should be able to develop performance statistics based on the data to compare the rating performance of different NRSROs, which will foster NRSRO competition. The rule specified that the ratings histories NRSROs must provide for each security that is part of their sample should include (1) all ratings actions (initial rating, upgrades, downgrades, placements on watch for upgrade or downgrade, and withdrawals); (2) the date of such actions; (3) the name of the security or issuer rated; and (4) if applicable, the CUSIP number of a rated security or CIK number of a rated issuer.[Footnote 52] New ratings actions must be disclosed no later than 6 months after they are taken. We reviewed the 10 percent samples from the seven issuer-paid NRSROs. We could not use these samples to generate reliable performance statistics for the NRSROs, as the rule intended, for the following reasons: (1) the data fields the NRSROs included in their disclosures were not always sufficient to identify complete ratings histories for the rated entities comprising each sample, (2) the data fields did not always give us enough information to identify specific types of ratings for making comparisons, (3) the data fields did not always give us enough information to identify the beginning of the ratings histories in all of the samples, (4) SEC rules do not require the NRSROs to publish a codebook or any explanation of the variables used in the samples, (5) not all NRSROs are disclosing defaults in the ratings histories provided as part of their 10 percent samples, and (6) SEC guidance to the NRSROs for generating the random samples does not ensure that the methods used will create a sample that is representative of the population of credit ratings produced by each NRSRO. As a result, users cannot generate valid comparative performance statistics that can be compared across NRSROs. First, SEC did not specify the data fields the NRSROs were to disclose in the rule, and the data fields provided by the NRSROs were not always sufficient to identify a complete rating history for ratings in each of the seven samples. 
If users cannot identify the rating history for each rating in the sample, they cannot develop performance measures that track how an issue or issuer's credit rating evolves. Second, the data fields did not always give users enough information to identify specific types of ratings for making comparisons. In one sample, we could not distinguish between issue ratings and issuer ratings. Distinguishing issuer rating histories from issue rating histories is important because, as we previously discussed, performance statistics for some asset classes, such as corporate issuers, financial institutions, and insurance companies, typically are calculated using issuer ratings, while performance statistics for issuers of ABS typically are calculated using issue ratings. Distinguishing between issuer and issue ratings is important for evaluating comparable entities across agencies. Comparing the performance of one agency's issuer ratings with another agency's issue ratings would not be meaningful. For ABS, one NRSRO told us that its sample did not have enough information to identify the individual tranches that constitute a deal.[Footnote 53] Without this ability, users cannot construct meaningful performance measures for ABS. Third, the data fields did not always give us enough information to consistently identify the beginning of the ratings histories in all of the samples. One NRSRO did not include a variable describing rating actions, so we could not identify the initial rating in the rating histories. As a result, we could not calculate transitions or defaults relative to initial rating for this sample. We also could not calculate measurements, such as path-to-default or time-to-default, which depend upon comparing a starting point to the state of a rating at the time of default. The rating histories in three NRSROs' samples did not always begin with an initial rating action. Those histories could not be used to calculate the performance statistics discussed above for the three NRSROs. Fourth, SEC rules do not require NRSROs to publish a codebook or any explanation of the variables used in the samples, and none voluntarily publish one to accompany its sample data. For several NRSROs, we had to contact them to obtain an explanation of variables used in the samples. Without the ability to easily determine what data the variables represent, users could not begin to construct performance statistics. Fifth, not all NRSROs have been disclosing defaults in the ratings histories provided as part of their 10 percent samples. As previously discussed, SEC requires that the ratings histories disclosed as part of the sample include all ratings actions taken. However, not all NRSROs consider the designation of "default" as a rating action. For example, one NRSRO does not consider default as part of its rating scale, so it does not disclose any defaults for any of the ratings that are part of its sample. Without this information, users cannot calculate any default statistics or other statistics that incorporate default rates for this sample. Sixth, SEC guidance to the NRSROs for generating the random samples does not help ensure that the methods used will create a sample that is representative of the population of credit ratings at each NRSRO. The rule requires NRSROs to generate a random 10 percent sample of the outstanding ratings in each asset class for which the NRSRO is registered, but does not specify what kind of ratings to draw the sample from. 
Depending on how NRSROs construct their universe of outstanding credit ratings from which they draw the 10 percent samples, the samples may not represent similar ratings. For example, two NRSROs said they draw the 10 percent sample from the total number of entities that are rated, while two other NRSROs said they draw the sample from the total number of ratings. A corporate issuer may have a long-term debt rating and a short-term debt rating. An NRSRO drawing the sample from the total number of entities rated would count the corporate issuer as one rated entity. However, an NRSRO drawing the sample from the total number of ratings would count the ratings on both the issuer and the issue. Further, where samples include both issuer and issue ratings for asset classes such as corporate, financial institutions, or insurance companies, the user may have to first separate out the issuer ratings in order to calculate performance statistics. However, the fraction of issuers represented in the samples varies across asset classes and NRSROs and users do not know what these fractions are. Where NRSROs construct their universe of ratings based on the total number of ratings issued, and provide multiple kinds of ratings, the fraction of the sample that represents rated issuers is likely to be relatively small. Because NRSROs are not required to draw the sample from the rating types that are typically analyzed in each asset class, users may not have enough observations to generate statistically valid performance measures or develop comparable measures.[Footnote 54] Furthermore, NRSROs are not required to redraw the 10 percent samples periodically. The rule requires that NRSROs re-examine their samples periodically to make sure that they remain 10 percent of outstanding ratings. We obtained information from five NRSROs on their methods for maintaining compliance with the rule. Two NRSROs told us they create a larger-than-required sample so that over time it is unlikely to dip below 10 percent of outstanding ratings. The other three NRSROs said they review the samples periodically to identify those securities that have matured or been withdrawn and replace them with randomly selected outstanding issues. These NRSROs do not redraw the samples. However, in both methodologies, the distribution of some ratings types will become less representative over time. For example, some ratings mature and are withdrawn at a faster rate than others, but these may be replaced with ratings that mature more slowly. If the NRSROs do not periodically redraw their samples, over time, statistics generated from the samples will become less representative of the population of credit ratings. Most importantly, an NRSRO's 10 percent sample is representative only of the NRSRO's currently outstanding ratings, a subset of all the ratings the NRSRO has produced. That is, they do not represent ratings that have been withdrawn in prior time periods. As previously discussed, the methodologies used by some NRSROs for constructing default and transitions rates over time factor in ratings that have been withdrawn so that the statistics represent the population of ratings that were in effect during the period studied. Moreover, because withdrawn ratings may be systematically different from outstanding ratings, the 10 percent samples may not be representative of the underlying populations of all ratings the NRSROs have issued. 
Thus, historical performance statistics calculated using an NRSRO's sample may contain biases that are not present in the universe of ratings that the NRSROs use to compose cohorts in their own studies. Furthermore, the extent of these biases may differ across NRSROs. Because the samples do not contain information on withdrawn ratings, they also do not contain information on the post-withdrawal performance of such ratings. As previously discussed, some NRSROs track issuers for evidence of default after their ratings are withdrawn. They then count these defaults as part of their own studies. One NRSRO said that defaults on withdrawn ratings account for about 20 percent of all the defaults it reports in its performance statistics for corporate issuers, financial institutions, and insurance companies. Unless NRSROs include withdrawn ratings as part of their samples, users cannot calculate performance statistics that are representative of the underlying population. Exclusion of Many Issuers Limits Utility of 100 Percent Requirement for Comparative Purposes: As previously mentioned, SEC adopted a second rule in December 2009 requiring all NRSROs to disclose 100 percent of their ratings actions histories for credit ratings initiated on or after June 26, 2007. In the case of issuer-paid credit ratings, each new ratings action must be disclosed no later than 12 months after it is taken. For ratings actions that are not issuer-paid, each new ratings action must be disclosed no later than 24 months after it is taken. SEC stated in the final rule that the 100 percent requirement will help individual users of credit ratings design their own performance metrics. SEC also noted in the final rule that the 10 percent requirement and 100 percent requirement will provide different types of data sets with which to analyze and compare the performance of NRSROs' credit ratings. For example, SEC stated in the final rule that because the 10 percent sample requirement applies to all outstanding and future credit ratings in the rule's scope, initially it will provide information that is much more retrospective and include histories for ratings that have been outstanding for much longer periods. However, SEC stated that because the 100 percent requirement is broader in scope, the disclosure eventually will provide for a more granular comparison of ratings performance. SEC stated that, unlike the 10 percent requirement, it will permit users of credit ratings and others to take a specific debt instrument and compare the ratings history of each NRSRO that rated it.[Footnote 55] SEC also noted that while the 10 percent sample requirement is limited to issuer-paid credit ratings, the 100 percent requirement covers all credit ratings, thereby allowing comparisons of a broader set of NRSROs. The 100 percent requirement will make a larger amount of data available to users over time than the 10 percent requirement; however, several factors may limit the usefulness of the data for generating meaningful and comparable performance statistics. First, according to Trading and Markets staff, the rule does not require that these data include the ratings of any issuer that was rated before June 26, 2007. Officials from two NRSROs told us their samples thus exclude issuer ratings on many major American corporations. 
We searched the data disclosed by a third NRSRO pursuant to this rule and could not find issuer ratings for several issuers that this NRSRO currently rates, such as the Allstate Corporation, Ford Motor Company, and General Electric Company. As performance statistics for several asset classes, including corporate issuers, are based on issuer ratings, not issue ratings, performance statistics calculated using data that do not include the ratings of issuers rated prior to June 26, 2007, would not reflect the overall rating performance of NRSROs and may not be representative of the universe of issuer ratings. For example, one NRSRO told us that new issuers, especially new nonfinancial companies, generally are rated speculative. It said that its ratings history data would include ratings on only these issuers, and not the older, more established issuers. Where the data do not contain the types of ratings typically analyzed for a particular asset class, they will have limited usefulness for generating performance statistics. Second, as with the 10 percent samples, data on withdrawn ratings and any subsequent defaults on withdrawn ratings are not required to be disclosed. To the extent that withdrawn ratings are not included in the data, users will not be able to generate withdrawal-adjusted statistics and the data will underrepresent defaulted issuers and issues. Finally, SEC required that NRSROs disclose the same ratings history information for the 100 percent disclosure requirement as for the 10 percent sample disclosure requirement (ratings action, date of such actions, the name of the issuer or issue, and the CUSIP or CIK number), but again, did not specify the data fields NRSROs were to include in their disclosures. As we discussed, the data fields provided by the NRSROs in their 10 percent samples were not always sufficient to ensure that the ratings histories had enough information to allow the user to construct complete ratings histories or identify specific types of ratings for making comparisons. Without additional SEC guidance to NRSROs on how to format and describe the data, the 100 percent data sets likely will present challenges similar to those for the 10 percent samples for users seeking to construct ratings histories and develop comparable performance statistics. NRSROs' Different Methodologies for Counting Total Outstanding Ratings Limit the Usefulness of These Disclosures: SEC requires that each NRSRO publicly disclose on its initial application and annual certification of Form NRSRO the approximate number of total outstanding ratings in each of the five major asset classes. In requiring public disclosure of this information, SEC said that users of credit ratings will find this information useful in understanding an NRSRO. For example, SEC said it would provide information to users of credit ratings as to how broad an NRSRO's coverage is within a particular class of credit ratings. However, SEC did not specify how the NRSROs were to count their outstanding ratings. As a result, the NRSROs used diverse methodologies to count their outstanding ratings.
For example, in the corporate issuers, financial institutions, insurance companies, and government securities asset classes, some NRSROs counted the number of issuers rated, others counted the number of ratings on issues (which could be multiple), and others counted the number of rated issuers and issues.[Footnote 56] As another example, in the structured finance asset class, some NRSROs counted the number of issuers whose deals they rated, some counted the number of deals, others counted the number of tranches underlying each deal, and others counted the number of ratings on each tranche (which could also be multiple). The NRSROs did not disclose how they determined their total outstanding ratings, so users have no way of knowing that these differences exist. Because of the inconsistencies in how the NRSROs count their total outstanding ratings, users cannot rely on the disclosures to assess how broad an NRSRO's coverage is within a particular class of credit ratings. As SEC Works to Remove NRSRO References from SEC Rules, It Will Need To Ensure It Has the Staff with the Requisite Skills to Evaluate Compliance with Any Alternative Creditworthiness Standard: In July 2008, SEC proposed amendments to multiple rules and forms that would have removed the references to NRSRO ratings from those rules. While SEC removed references from six rules and two forms, it retained the use of the ratings or delayed further action on two rules. These rules govern money market fund investments and the amount of capital broker-dealers must hold, and they use NRSRO references as risk-limiting measures. We found that OCIE examiners had concerns with these proposals. For example, in the securities rule regulating money market fund investments, SEC proposed to remove NRSRO references, which the rule used to define the minimum credit quality of the securities a money market fund could hold, and to rely instead solely on the existing requirement that fund boards independently assess the credit quality of portfolio securities and determine that each presents minimal credit risks to the fund. OCIE examiners expressed concerns that the proposed rule might allow money market funds to invest in riskier securities than the current rule allows. SEC opted not to remove references at that time. The recently adopted Dodd-Frank Act requires SEC and other federal agencies to remove references to NRSRO ratings from their regulations and substitute an alternative standard of creditworthiness. Given the Dodd-Frank Act requirements, SEC's previous experience with proposals to remove credit rating references highlights the importance of developing a plan to help ensure that (1) any adopted alternative standards of creditworthiness for a particular rule facilitate its purpose (e.g., that money market funds invest only in high-quality securities or that broker-dealers hold sufficient capital against their investments), and (2) examiners have the requisite skills to apply the adopted standards. Without such a plan, SEC may develop alternative standards of creditworthiness that are not effective in supporting the purpose of a particular rule. SEC Has Removed References from Multiple SEC Rules and Forms but Retained Their Use or Delayed Action on Two Rules: In the past 2 years, SEC has proposed or made changes to regulations that rely on the use of NRSRO ratings.[Footnote 57] Federal securities and banking regulations rely on NRSRO ratings for a variety of purposes.
For example, NRSRO ratings are components of the definition of a mortgage-related security and establish criteria for eligibility for certain types of securities registration. According to a recent Joint Forum survey, U.S. federal banking and securities statutes, legislation, regulations, and guidance contain 81 references to NRSRO ratings, 45 of which are in SEC regulations or guidance and 36 in the statutes, regulations, or guidance of various banking regulators.[Footnote 58] In particular, SEC has proposed removing references to NRSRO ratings from Investment Company Act rule 2a-7, which contains provisions that limit the types of securities a money market fund can hold, and from Exchange Act rule 15c3-1 (the "net capital rule"), which includes provisions to designate the capital that broker-dealers must hold against their assets.[Footnote 59] Rule 2a-7 governs the operations of money market funds. Unlike most other investment companies, money market funds seek to maintain a stable share price, typically $1.00 per share.[Footnote 60] The Investment Company Act and applicable rules generally require investment companies to calculate current net asset value per share by valuing portfolio instruments at market value or, if market quotations are not readily available, at fair value as determined in good faith by, or under the direction of, the board of directors. Rule 2a-7 exempts money market funds from these provisions, but contains conditions on the investments of the fund, such as maturity, quality, liquidity, and diversification, which are designed to minimize the deviation between a fund's stabilized share price and the market value of its portfolio. If the deviation becomes significant, the fund may be required to take certain steps to address the deviation, including selling and redeeming its shares at less than $1.00 (breaking the buck).[Footnote 61] Among these risk-limiting conditions, rule 2a-7 limits a money market fund's portfolio investments to eligible securities. Under rule 2a-7, eligible securities are those securities that have received a credit rating from the "requisite NRSROs"[Footnote 62] in one of the two highest short-term rating categories or are comparable unrated securities.[Footnote 63] Rule 2a-7 further restricts money market funds to holding securities that the fund's board of directors (or those on whom they rely) determines present minimal credit risks. This second requirement specifically requires that the determination "be based on factors pertaining to credit quality in addition to any rating assigned to such securities by an NRSRO." The net capital rule requires broker-dealers to maintain, at all times, a minimum amount of net capital and uses NRSRO ratings as a third-party assessment of credit risk in prescribing the level of capital required to be held. The rule was adopted to create uniform capital requirements for and help ensure the liquidity of all registered broker-dealers. In computing net capital, broker-dealers must deduct from their net worth certain percentages of the market value of their proprietary securities positions.[Footnote 64] The deductions are known as haircuts and serve to provide a margin of safety against losses broker-dealers might incur as a result of market fluctuations in the prices of, or lack of liquidity in, their proprietary positions.
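The following is a minimal illustrative sketch, in Python, of the haircut arithmetic described above. The positions, market values, net worth, and haircut percentages are hypothetical and chosen only for illustration; the net capital rule prescribes the actual haircut percentages, which vary with factors such as the type of security, its rating, and its maturity.

# Hypothetical illustration of the net capital haircut deduction.
# All figures are invented; rule 15c3-1 prescribes the actual haircut
# percentages, which vary by security type, rating, and maturity.
positions = [
    # (description, market value in dollars, hypothetical haircut percentage)
    ("Highly rated corporate bond", 1_000_000, 0.02),
    ("Lower-rated corporate bond", 500_000, 0.07),
    ("Equity position", 250_000, 0.15),
]

net_worth = 2_000_000  # hypothetical net worth before deductions

total_haircuts = sum(value * pct for _, value, pct in positions)
net_capital = net_worth - total_haircuts

print(f"Total haircuts: ${total_haircuts:,.0f}")           # $92,500
print(f"Net capital after haircuts: ${net_capital:,.0f}")  # $1,907,500

In this sketch, a lower haircut percentage on a position leaves more of that position's value counted toward net capital, which is the mechanism behind the reduced haircuts for highly rated securities discussed below.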
SEC allows broker-dealers to apply reduced haircuts for certain types of securities they hold that at least two NRSROs rate as investment-grade because these securities typically are more liquid and less volatile in price than securities not so highly rated. That is, the more highly rated the security, the more it counts toward the total amount of capital the broker-dealers are required to hold. In addition to NRSRO ratings, the net capital rule uses measures such as position concentration, maturity, and type of security to determine appropriate haircuts. SEC proposed removing references to ratings in rule 2a-7 and the net capital rule in July 2008.[Footnote 65] Among other reasons, SEC proposed these amendments to address the risk that the references to and use of NRSRO ratings in SEC rules could be interpreted by investors as an endorsement of the quality of the rating and might encourage investors to place undue reliance on them. For rule 2a-7, SEC proposed eliminating the requirement that portfolio securities have a certain NRSRO rating (or be a comparable unrated security), while retaining the requirement that portfolio securities be limited to those that the fund's board of directors determines present minimal credit risks. The proposal also would have specifically required the board's determination to be based on factors pertaining to credit quality and the issuer's ability to meet its short-term financial obligations.[Footnote 66] The proposal would have eliminated the requirement that money market funds restrict themselves to investing in securities highly rated by NRSROs (or comparable unrated securities), and instead relied on the existing requirement that the fund's board of directors determine that the securities present minimal credit risks. Under the proposal, fund boards could have continued to use quality determinations prepared by outside sources, including NRSRO ratings, if they concluded these ratings were credible for making credit risk determinations. In the rule proposal, SEC stated it expected that boards of directors (or their designees) would understand the basis for the rating and make an independent judgment of credit risk. In February 2010, SEC adopted amendments to rule 2a-7, which continues to use NRSRO ratings in defining eligible securities. The amendments require money market fund boards to designate, at least once each calendar year, at least four NRSROs, the credit ratings of which the boards deem to be sufficiently reliable for use by their funds to comply with rule 2a-7's eligible security requirements.[Footnote 67] As proposed in July 2008, the revisions to the net capital rule would substitute two new standards for the current NRSRO ratings-based categories. For determining haircuts on commercial paper, SEC proposed to replace the top tiers of ratings-based categories with a requirement that the instrument be subject to a minimal amount of credit risk and have sufficient liquidity so that it could be sold at or near its carrying value almost immediately.[Footnote 68] For determining haircuts on nonconvertible debt securities, SEC proposed a requirement that the instrument be subject to "no greater than moderate" credit risk and have sufficient liquidity so that it could be sold at or near its carrying value in a reasonably short time. [Footnote 69] According to SEC, the proposed standards are meant to serve the same purpose as the prior standards. 
Thus, securities with "no greater than moderate" credit risk would encompass all so-called investment-grade securities. SEC believes broker-dealers have the financial sophistication and the resources necessary to make the basic determination of whether or not a security meets the requirements in the proposed amendments and to distinguish between securities subject to minimal credit risk and those subject to moderate credit risk. Under the proposal, broker-dealers would have to be able to explain how the securities they used for net capital purposes met the standards in the proposed amendments. However, SEC stated it would be appropriate, as one means of complying with the proposed amendments, for broker-dealers to refer to NRSRO ratings for the purposes of determining haircuts under the rule. SEC decided to delay any action on this proposal and, as of June 2010, continued to solicit comments.[Footnote 70] In October 2009, SEC adopted amendments to six rules and two forms that removed the references to NRSRO ratings made in these rules.[Footnote 71] Four of these rules and the two forms originally were adopted in 1998 as part of SEC's new framework for regulation of exchanges and alternative trading systems and utilized "investment-grade" and "non-investment-grade" to distinguish between classes of securities. The adopted amendments and changes to forms eliminated the distinction between classes of securities and the use of "investment-grade" and "non-investment-grade." The remaining two rules utilize the terms "highest rating category from an NRSRO" and "investment-grade rating from at least one NRSRO" to define securities exempted from specific requirements or define a class of securities eligible for purchase by funds when the security's principal underwriter had certain relationships with the fund or its investment adviser. In both cases, the adopted amendments remove the references to ratings and either remove the exemption or redefine the class of eligible securities.[Footnote 72] Developing a Plan to Address the Implications of the Adopted Alternative Standards May Help SEC Ensure It Has the Skills and Resources Necessary to Evaluate Compliance with the Standards: The proposed changes to rules 2a-7 and 15c3-1 would have eliminated the bright-line creditworthiness standard OCIE examiners used to determine that money market funds invested in high-quality securities or the appropriateness of the haircut a broker-dealer took for net capital purposes on a security. We reviewed OCIE's 2a-7 examination module and 65 OCIE money market fund examinations (for FY 2003-2009) identified as having 2a-7 deficiencies to understand how OCIE examiners assess compliance with the rule's requirements for determining an "eligible security" and minimal credit risks and how the removal of NRSRO references would affect SEC's ability to oversee a fund's exposure to credit risks. As stated above, rule 2a-7 limits money market fund investments to those securities that are rated in one of the two highest short-term categories by an NRSRO or comparable unrated securities and that the fund's board determines present minimal credit risks for the fund. OCIE examiners examine money market funds for compliance with this provision by reviewing the NRSRO ratings at the time of purchase for securities held.
Examiners typically identify whether securities held by a money market fund are eligible securities by requesting a list of all portfolio holdings, including the current NRSRO rating for each holding, and verify the NRSRO ratings by reviewing the published ratings on Bloomberg or on the NRSRO Web site. OCIE examiners then typically review a fund's compliance and procedures manuals to help ensure that the board has established minimal credit risk guidelines and receives periodic credit risk updates and reports from the adviser verifying that all the securities comply. Examiners further request and review a small sample of credit analysis packages that demonstrate that the securities are eligible and a sample of the materials presented to the fund's security evaluation committee as evidence of ongoing reviews that a security continues to present minimal credit risks. According to OCIE examiners, policies, procedures, and practices for conducting minimal credit risk analysis vary widely. Of the 65 examinations of money market funds OCIE completed in FY 2003-2009 that we reviewed, 36 examinations identified 42 deficiencies in the funds' compliance with the requirement for a minimal credit risk determination.[Footnote 73] These deficiencies generally could be categorized as deficiencies in fund board oversight, policies and procedures, or credit file documentation. According to OCIE examiners, citing funds for a deficiency in documenting their analysis of minimal credit risk in an examination is not unusual. For example, in one examination deficiency letter OCIE found no current written documentation in the credit files substantiating that the fund adequately determined that each security purchased presented minimal credit risks and requested that the fund bring its files up-to-date. According to Enforcement staff, SEC has not brought any enforcement actions against a money market fund for violations of this requirement. OCIE examiners expressed concerns with the proposed rule because they believed it might allow money market funds to invest in riskier securities than the current rule allows. Under the proposed rule, a money market fund could invest in any security it finds to present a minimal credit risk. OCIE examiners stated they likely would have continued to evaluate compliance with the minimal credit risk determination requirement as they do under the current rule. As such, OCIE only examines a fund's policies and procedures to assess whether they effectively address credit risk and to assess whether a fund follows its policies and procedures in making credit risk determinations. It does not evaluate the standards used to determine whether a security is deemed to represent a minimal credit risk, dictate the types of analyses that must be included in a minimal credit risk determination, or make any of its own determinations as to whether the security represents a minimal credit risk to that fund.[Footnote 74] According to Investment Management staff, the minimal credit risk requirement was not designed for these purposes because the rule recognizes that funds can have different investment objectives and positions, and as such, the same security could present different risks to different funds. One fund might consider a particular security an appropriate investment, while another would not.
OCIE examiners stated that the proposed rule eliminated the floor, in terms of creditworthiness, that NRSRO references provided, and that it was unclear how, if at all, the standard for eligible securities under the proposed rule would ensure that money market funds continued to invest only in securities of the highest credit quality. Further, OCIE staff told us that if OCIE examiners were given the authority to evaluate funds' credit risk determinations, additional resources and skill sets would be needed to conduct such examinations, and they questioned OCIE's ability to evaluate the credit risk determinations.[Footnote 75] To date, examiners have not needed these skills because, as dictated by the rules and interpretations, they have relied on NRSRO ratings. OCIE examiners told us that, as proposed, they likely would approach compliance examinations by continuing to focus on ensuring that funds had reasonable policies and procedures in place for determining what constituted an eligible security, as well as documentation demonstrating that those policies and procedures were followed and that an analysis of credit risk was completed. The proposal to remove NRSRO references from the net capital rule also would eliminate the credit-risk criteria OCIE examiners currently use, among other factors, to determine whether a broker-dealer was taking appropriate haircuts. Under the current rule, broker-dealers use a variety of factors, including whether the security is rated investment grade, to determine the haircut they must take on debt securities when determining their net worth for regulatory capital purposes.[Footnote 76] OCIE examiners generally confirm the net capital calculation by reviewing and confirming a firm's inventory, selecting a sample of securities with which to verify the existence of a ready market, and verifying that the haircuts were accurate and considered in the net capital computation. Under the proposed rule, broker-dealers would be responsible for determining the level of risk a security presented and the amount of the subsequent haircut, which could be different for each broker-dealer, depending on the methods used. Going forward, the Dodd-Frank Act requires SEC to remove NRSRO ratings from its rules. SEC's previous experience with proposals to remove credit rating references highlights the importance of developing a plan to help ensure that (1) any adopted alternative standards of creditworthiness for a particular rule facilitate its purpose (e.g., that money market funds invest only in high-quality securities or that broker-dealers hold sufficient capital against their investments), and (2) examiners have the requisite skills to determine that the adopted standards have been applied. Without such a plan, SEC may develop alternative standards of creditworthiness that are not effective in supporting the purpose of a particular rule. The Number of NRSROs Has Increased since the Act Was Implemented but Industry Concentration Remains High: Since the implementation of the Act, the number of NRSROs has increased from 7 to 10. However, the market remains highly concentrated. Continued concentration is likely a result of multiple factors. First, relatively little time has passed since SEC implemented the NRSRO registration program and NRSRO rulemaking. Second, credit rating agencies face barriers in entering the credit rating industry and registering as an NRSRO.
Academic research suggests that increasing competition among NRSROs improves information availability but the impact on ratings quality is unclear. The Number of NRSROs Has Increased from 7 to 10, but the Industry Remains Concentrated: Since the implementation of the Act, the number of NRSROs has increased from 7 to 10; however, the market remains highly concentrated. As previously discussed, 7 credit rating agencies had received SEC staff no-action letters recognizing them as NRSROs prior to the Act. When the NRSRO registration program became effective, these firms applied to register as NRSROs and received SEC approval.[Footnote 77] All of these operate primarily under an issuer- pays business model. SEC also granted NRSRO registration to 3 additional credit rating agencies that operate primarily under a subscriber-pays business model.[Footnote 78] Figure 1 indicates when the 10 NRSROs began producing credit ratings and the year that SEC first recognized them as NRSROs, either through the no-action letter process or the NRSRO registration program. Figure 1: Years the Current NRSROs Have Produced Credit Ratings and Have Been Recognized as NRSROs: [Refer to PDF for image: timeline] A.M. Best Company: Years credit rating agency has produced credit ratings: 1898-2010; Years SEC has recognized credit rating agency as an NRSRO: 2005-2010. Moody’s Investors Service[A]: Years credit rating agency has produced credit ratings: 1909-2010; Years SEC has recognized credit rating agency as an NRSRO: 1975-2010. Fitch Ratings[B]: Years credit rating agency has produced credit ratings: 1924-2010; Years SEC has recognized credit rating agency as an NRSRO: 1975-2010. Standard & Poor’s Ratings Services[C]: Years credit rating agency has produced credit ratings: 1922-2010; Years SEC has recognized credit rating agency as an NRSRO: 1975-2010. DBRS: Years credit rating agency has produced credit ratings: 1976-2010; Years SEC has recognized credit rating agency as an NRSRO: 2003-2010. LACE Financial: Years credit rating agency has produced credit ratings: 1984-2010; Years SEC has recognized credit rating agency as an NRSRO: 2008-2010. Japan Credit Rating Agency: Years credit rating agency has produced credit ratings: 1985-2010; Years SEC has recognized credit rating agency as an NRSRO: 2007-2010. Rating and Investment Information: Years credit rating agency has produced credit ratings: 1985-2010; Years SEC has recognized credit rating agency as an NRSRO: 2007-2010. Egan-Jones Rating Company: Years credit rating agency has produced credit ratings: 1993-2010; Years SEC has recognized credit rating agency as an NRSRO: 2007-2010. Realpoint: Years credit rating agency has produced credit ratings: 2001-2010; Years SEC has recognized credit rating agency as an NRSRO: 2008-2010. Source: NRSROs, SEC. [A] Moody's Investors Service was founded as John Moody & Company, which began producing credit ratings in 1909. [B] Fitch Ratings was founded as the Fitch Publishing Company, which began producing credit ratings in 1924. [C] Poor's Publishing and Standard Statistics began producing credit ratings in 1922 and 1923, respectively. Standard Statistics merged with Poor's Publishing forming Standard & Poor's Corporation. [End of figure] None of the 10 NRSROs, including the 3 newly registered subscriber- pays NRSROs, is a new entrant into the credit rating industry. Further, all 10 NRSROs have been producing ratings for a number of years. A.M. 
Best, Fitch, Moody's, and Standard & Poor's have been producing ratings the longest--for more than 80 years. Several of these NRSROs have undergone mergers with or acquisitions of other rating agencies or NRSROs over the years. For example, Poor's Publishing and Standard Statistics merged in 1941 to form Standard & Poor's, and Moody's was acquired by Dun and Bradstreet in 1962. Fitch merged with IBCA Ltd in 1997, and in April 2000, acquired Duff & Phelps Credit Rating Company and Thomson BankWatch.[Footnote 79] More recently, in May 2010 Realpoint was acquired by Morningstar, Inc.[Footnote 80] Credit rating agencies can apply to register as NRSROs in five distinct asset classes: financial institutions, insurance companies, corporate, ABS, and government securities. Table 5 describes the asset classes in which each NRSRO is registered. Table 5: NRSROs Registered by Asset Class, 2010: A.M. Best: Financial institutions: [Check]; Insurance companies: [Check]; Corporate issuers: [Check]; ABS: [Check]; Government municipal and sovereign securities: [Empty]. DBRS: Financial institutions: [Check]; Insurance companies: [Check]; Corporate issuers: [Check]; ABS: [Check]; Government municipal and sovereign securities: [Check]. Egan-Jones Ratings: Financial institutions: [Check]; Insurance companies: [Check]; Corporate issuers: [Check]; ABS: [Check]; Government municipal and sovereign securities: [Check]. Fitch Ratings: Financial institutions: [Check]; Insurance companies: [Check]; Corporate issuers: [Check]; ABS: [Check]; Government municipal and sovereign securities: [Check]. Japan Credit Rating Agency: Financial institutions: [Check]; Insurance companies: [Check]; Corporate issuers: [Check]; ABS: [Check]; Government municipal and sovereign securities: [Check]. LACE: Financial institutions: [Check]; Insurance companies: [Check]; Corporate issuers: [Check]; ABS: [Empty]; Government municipal and sovereign securities: [Check]. Moody's Investors Service: Financial institutions: [Check]; Insurance companies: [Check]; Corporate issuers: [Check]; ABS: [Check]; Government municipal and sovereign securities: [Check]. Ratings and Investment, Inc.: Financial institutions: [Check]; Insurance companies: [Check]; Corporate issuers: [Check]; ABS: [Empty]; Government municipal and sovereign securities: [Check]. Realpoint: Financial institutions: [Empty]; Insurance companies: [Empty]; Corporate issuers: [Empty]; ABS: [Check]; Government municipal and sovereign securities: [Empty]. Standard & Poor's: Financial institutions: [Check]; Insurance companies: [Check]; Corporate issuers: [Check]; ABS: [Check]; Government municipal and sovereign securities: [Check]. Source: 2009 SEC Form NRSRO filings. [End of table] Some NRSROs, such as Moody's, Standard & Poor's, and Fitch, cover a wide range of securities that span all five asset classes. Others have specialized in a particular asset class, sector, or geographic region. For example, although Realpoint is designated in ABS, it specializes in one type of ABS, specifically CMBS. LACE is designated in four asset classes, but specializes in rating financial institutions. Similarly, A.M. Best is designated in four asset classes, but specializes in rating insurance companies and related securities. [Footnote 81] Japan Credit Rating Agency and Ratings and Investment, Inc., are Japanese rating agencies that mainly rate Japanese issuers. Although the number of NRSROs has increased, the credit rating industry remains highly concentrated. 
To assess the impact of the Act on competition among NRSROs, we calculated the HHI, a key statistical measure used to assess market concentration and the potential for firms to exercise their ability to influence market prices.[Footnote 82] The HHI reflects the number of firms in the industry and each firm's market share. It is calculated by summing the squares of the market shares of each firm competing in the market. The HHI also reflects the distribution of market shares of the top firms and the composition of the market outside the top firms. The HHI is measured on a scale of 0 to 10,000, with larger values indicating more concentration. According to DOJ, markets in which the value of the HHI is between 1,500 and 2,500 points are considered to be moderately concentrated, and those in which the value of the HHI is in excess of 2,500 points are considered to be highly concentrated, although other factors also play a role. We calculated the HHI by summing the squares of the market shares of all the firms competing in the industry. Doing so requires defining what constitutes the industry and specifying our measure of market share. We defined the relevant industry as the set of credit rating agencies that have NRSRO status, and we used a variety of market share definitions to ensure that any trends in industry concentration we observed were robust to alternative specifications of NRSROs' market shares. A firm's market share typically is measured in terms of dollars, as either its sales or revenue as a fraction of total sales or revenue for all firms in the industry, or in terms of quantities, such as its output as a fraction of total output produced by all firms in the industry. We first calculated the HHI using market shares based on total revenues earned by the NRSROs.[Footnote 83] NRSROs generally earn revenues from a number of activities related to the production of credit ratings. Issuer-pays NRSROs earn the bulk of their revenues from the fees paid by issuers to have their issues rated. However, issuer-pays NRSROs offer other services as well, including subscription services to users of credit ratings. Subscriber-pays NRSROs earn their revenues from subscription fees and other services. Some of the types of services offered by the subscriber-pays NRSROs are data and valuation and proxy services for financial institutions. [Footnote 84] NRSRO applicants and registered NRSROs must provide data on the total revenues earned in the prior calendar year to SEC on Form NRSRO. [Footnote 85] We used these data to calculate the HHI from 2006 to 2009.[Footnote 86] Table 6 provides the results of these calculations. Table 6: HHI for NRSROs Based on Total Revenues, 2006-2009: All asset classes; 2006: 3,617; 2007: 3,511; 2008: 3,333; 2009: 3,324. Annual Percentage Change; 2006: [Empty]; 2007: -2.93%; 2008: -5.08%; 2009: -0.27%. Source: GAO analysis of NRSRO revenues provided on Form NRSRO. [End of table] The table shows that while the HHI declined between 2006 and 2009, the industry remains highly concentrated according to DOJ standards. This decline is likely influenced by the entrance of the three new NRSROs in late 2007. An NRSRO's total revenue does not necessarily reflect its total output; that is, the number of ratings it produces. For example, both issuer-pays and subscriber-pays NRSROs could provide ratings on the same group of entities, but receive vastly different revenues. 
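The following is a minimal sketch, in Python, of the HHI arithmetic described above, using hypothetical revenue figures rather than the actual amounts reported on Form NRSRO. Market shares are expressed in percentage points, so the index runs from near 0 to 10,000.

# Hypothetical illustration of the HHI calculation described above.
# The revenue figures are invented for illustration only.
revenues = {
    "NRSRO A": 2_000,  # hypothetical annual revenue, in millions of dollars
    "NRSRO B": 1_500,
    "NRSRO C": 600,
    "NRSRO D": 50,
    "NRSRO E": 25,
}

total_revenue = sum(revenues.values())
# Each firm's market share, in percentage points (0 to 100).
shares = {name: 100 * revenue / total_revenue for name, revenue in revenues.items()}
# The HHI is the sum of the squared market shares.
hhi = sum(share ** 2 for share in shares.values())

print(f"HHI: {hhi:,.0f}")  # roughly 3,800; above 2,500 is highly concentrated under DOJ standards

The same formula applies to the other market share measures used in this section; shares based on the number of issuers rated or on the dollar volume of rated issuance would simply replace the revenue figures.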
Because market shares based on numbers of ratings can differ from those based on total revenue, so can the HHI, possibly revealing a different trend in industry concentration. To assess industry concentration using an output-based measure of market share, we attempted to calculate the HHI using market shares based on the number of each NRSRO's outstanding ratings.[Footnote 87] However, as previously discussed, we found inconsistencies in the methods that the NRSROs use to count their outstanding ratings. As such, the data were not valid for purposes of calculating the HHI. As an alternative assessment of industry concentration using an output-based measure of market share, we calculated the HHI using market shares based on the number of issuers each NRSRO rates (see table 7).[Footnote 88] We obtained data from nine of the NRSROs on the number of issuers they rated in each asset class in 2006-2009 and used them to calculate the HHI for these 4 years. We were unable to obtain data on the number of rated issuers from one NRSRO because it said it did not track rated organizations by whether or not they issue debt securities. However, this NRSRO did provide us with the total number of organizations it rated. Table 7: HHI for NRSROs Based on Number of Issuers Rated, 2006-2009: Asset Class: Corporate issuers; 2006: 3,069; 2007: 2,625; 2008: 2,596; 2009: 2,483. Asset Class: Financial institutions; 2006: 2,773; 2007: 2,555; 2008: 2,550; 2009: 2,452. Asset Class: Insurance companies; 2006: 3,353; 2007: 3,066; 2008: 2,826; 2009: 2,749. Asset Class: Issuers of government securities; 2006: 3,822; 2007: 3,820; 2008: 3,846; 2009: 3,889. Asset Class: Issuers of asset-backed securities; 2006: 3,602; 2007: 3,561; 2008: 3,553; 2009: 3,493. Source: GAO analysis of NRSRO data. Note: Calculations are based on data from nine of the ten currently registered NRSROs. [End of table] The table shows that the industry is concentrated in every asset class in every year according to DOJ standards, although the industry has become less concentrated in the corporate issuers, financial institutions, insurance companies, and issuers of ABS asset classes, with the HHIs decreasing by 5 percent, 4 percent, 10 percent, and 2 percent, respectively, between 2007 and 2009. The industry has become more concentrated in the issuers of government securities asset class, with the HHI increasing by about 2 percent between 2007 and 2009. These results, however, assume that none of the organizations rated by the 10th NRSRO issues debt securities. To assess the sensitivity of these results to the missing data from the NRSRO that did not track which of its rated organizations issue debt securities, we recalculated the HHIs assuming that all of the organizations this NRSRO reported rating do issue debt securities. We did so because it is likely that some of the organizations this NRSRO rates do issue debt securities, but we cannot determine how many. Calculating the HHIs based on the alternative assumption that all of the organizations this NRSRO rates issue debt securities gives us a range within which the true value of the HHI is likely to fall. The main difference between our alternative and baseline estimates is in the HHI for the financial institutions asset class. The alternative assumption produces estimates of the HHI for the financial institutions asset class for 2008 and 2009 that are 133 percent and 136 percent, respectively, larger than our baseline estimates.
The two estimates differ because the NRSRO that did not provide us with data rates at least 10 times as many organizations in the financial institutions asset class as any other NRSRO. Assuming that all of these organizations issue debt securities produces high HHI estimates because it implies that the excluded NRSRO has a relatively high market share and thus that the industry is relatively highly concentrated.[Footnote 89] Finally, to assess trends in concentration in the market for rating structured finance securities, we calculated the HHI for January 2004-June 2010 using market shares based on the dollar value of issuance of U.S.-issued ABS rated by an NRSRO.[Footnote 90] The issuance-based HHI declined by about 18 percent over this time period (1 percent during 2004-2007 and 17 percent since 2007), indicating that this market has become less concentrated (see table 8). We note that the market for ABS has declined considerably since 2007. According to data from Asset-Backed Alert, the number of ABS deals declined from over 3,000 in 2006 to about 370 in 2009. For 2010, 223 deals were reported as of the end of June. Table 8: HHI Based on Dollar Value of Newly Issued U.S.-ABS, January 2004-June 2010: Asset Class: All U.S. asset-backed securities; 2004: 3,444; 2005: 3,375; 2006: 3,469; 2007: 3,398; 2008: 3,396; 2009: 2,973; 2010[A]: 2,809. Asset Class: U.S. commercial mortgage-backed securities; 2004: 3,224; 2005: 3,222; 2006: 3,359; 2007: 3,212; 2008: 3,751; 2009: 2,916; 2010[A]: 2,804. Asset Class: U.S. traditional asset-backed securities; 2004: 3,374; 2005: 3,338; 2006: 3,314; 2007: 3,280; 2008: 3,305; 2009: 3,262; 2010[A]: 3,046. Asset Class: U.S. prime residential mortgage-backed securities; 2004: 3,677; 2005: 3,672; 2006: 3,542; 2007: 3,376; 2008: 3,148; 2009: 3,222; 2010[A]: 4,145. Asset Class: U.S. nonprime residential mortgage-backed securities; 2004: 3,390; 2005: 3,177; 2006: 3,344; 2007: 3,515; 2008: 3,531; 2009: 10,000[B]; 2010[A]: 6,009. Asset Class: U.S. Collateralized Debt Obligations; 2004: 3,772; 2005: 3,944; 2006: 4,173; 2007: 4,253; 2008: 4,846; 2009: 3,795; 2010[A]: 5,561. Source: GAO analysis of Asset-Backed Alert data. [A] The HHIs for 2010 are based on data through June 30, 2010. [B] Only one deal was issued in 2009 and was rated by a single NRSRO. [End of table] Since the U.S. ABS asset class includes distinct products, we also examined five sectors in the ABS asset class to determine whether trends in market concentration varied across these sectors (table 8). The five sectors are traditional ABS (that is, securities backed by student loans, auto loans, and credit card loans, but not by mortgages), prime RMBS, nonprime RMBS, CMBS, and CDOs.[Footnote 91] We calculated the HHI for the market for rating securities in each of these sectors using market shares based on the dollar value of issuance rated by an NRSRO. The 17 percent reduction in the HHI for the market for rating securities in the ABS asset class as a whole since 2007 was driven primarily by the reduction in concentration in the market for rating traditional ABS and CMBS. For these markets, the issuance-based HHI declined by about 7 percent and 13 percent, respectively, since 2007. On the other hand, the markets for rating prime RMBS and CDOs have become more concentrated since 2007.
While the market for rating nonprime RMBS also has become more concentrated since 2007, the number of issuances offered since then has declined so rapidly that trends in the HHI are difficult to interpret.[Footnote 92] The HHI indicates that the market for rating ABS remains highly concentrated by DOJ standards, even in those sectors in which concentration has declined since 2007. To assess which NRSROs are dominating this market, we examined the NRSROs' annual market coverage. Annual market coverage is an indication of the quantity of ratings an NRSRO produces relative to the quantity of issues or issuers that are available to be rated. Because more than one NRSRO can rate an issue or issuer, the sum of each NRSRO's annual market coverage can add to more or less than 100 percent. We measure an NRSRO's annual market coverage as the dollar volume an NRSRO rates as a fraction of the total volume issued. We did not assess the causal factors behind any trends we observed. Trends in annual market coverage from January 2004 through June 2010 among the six NRSROs that rated U.S. ABS issuance generally shifted beginning in 2007 (see figure 2).[Footnote 93] These six were the only NRSROs to rate new U.S. ABS issuance during this period. From 2004- 2007, Standard & Poor's, Moody's, and Fitch provided the most annual market coverage in this asset class, rating an average of about 94 percent, 89 percent, and 49 percent, respectively, of issuance. Starting in 2008, Standard & Poor's and Moody's annual market coverage began to decline. For the first half of 2010, Standard & Poor's and Moody's were rating only about 73 percent and 62 percent, respectively, of issuance. Fitch's annual market coverage peaked in 2009 at about 59 percent of issuance, and then declined to about 31 percent for the first half of 2010. On the other hand, DBRS increased its coverage from about 4 percent in 2007 and 2008 to about 33 percent in June 2010.[Footnote 94] Figure 2: NRSRO U.S. Annual Market Coverage by ABS, Dollar Volume, 2004-June 2010: [Refer to PDF for image: multiple line graph] Percentage of total issuance: Year: 2004; Total number of deals: 493; S&P: 94%; Moody's: 88.1%; Fitch: 50.1%; DBRS: 3.4%; Realpoint: 0%; A.M. Best: 0%. Year: 2005; Total number of deals: 597; S&P: 93.7%; Moody's: 89.6%; Fitch: 48.4%; DBRS: 6.8%; Realpoint: 0%; A.M. Best: 0%. Year: 2006; Total number of deals: 819; S&P: 93.5%; Moody's: 91.9%; Fitch: 46.7%; DBRS: 4.3%; Realpoint: 0%; A.M. Best: 0%. Year: 2007; Total number of deals: 648; S&P: 94.5%; Moody's: 86.2%; Fitch: 51.6%; DBRS: 4.4%; Realpoint: 0%; A.M. Best: 0%. Year: 2008; Total number of deals: 198; S&P: 84.6%; Moody's: 79.7%; Fitch: 45.3%; DBRS: 4.5%; Realpoint: 0%; A.M. Best: 0.2%. Year: 2009; Total number of deals: 178; S&P: 67.8%; Moody's: 58.4%; Fitch: 56.2%; DBRS: 12.5%; Realpoint: 0.2%; A.M. Best: 0%. Year: 2010; Total number of deals: 119; S&P: 72.6%; Moody's: 61.9%; Fitch: 31.3%; DBRS: 33.3%; Realpoint: 0.5%; A.M. Best: 0%. Source: GAO analysis of ABA data. [End of figure] We also examined NRSROs' coverage of the five sub-sectors in the ABS asset class and found that trends in annual market coverage varied across sectors. Trends in coverage of traditional ABS are similar to those for the ABS asset class as a whole, the main difference being that Fitch's coverage did not peak in 2009 before declining (see figure 3). Figure 3: NRSRO U.S. 
Annual Market Coverage by Traditional ABS, Dollar Volume, 2004-June 2010: [Refer to PDF for image: multiple line graph] Percentage of total issuance: Year: 2004; Total number of deals: 493; S&P: 97.4%; Moody's: 93.5%; Fitch: 74.2%; DBRS: 0.1%; A.M. Best: 0%. Year: 2005; Total number of deals: 597; S&P: 98.3%; Moody's: 96.7%; Fitch: 71.9%; DBRS: 2.3%; A.M. Best: 0%. Year: 2006; Total number of deals: 819; S&P: 95.7%; Moody's: 93%; Fitch: 75.6%; DBRS: 2.1%; A.M. Best: 0%. Year: 2007; Total number of deals: 648; S&P: 96%; Moody's: 95%; Fitch: 74.4%; DBRS: 3.8%; A.M. Best: 0.1%. Year: 2008; Total number of deals: 198; S&P: 93.4%; Moody's: 97.4%; Fitch: 71.6%; DBRS: 3.1%; A.M. Best: 0.3%. Year: 2009; Total number of deals: 178; S&P: 82.6%; Moody's: 81.9%; Fitch: 60.6%; DBRS: 4.7%; A.M. Best: 0%. Year: 2010; Total number of deals: 119; S&P: 76.6%; Moody's: 81.5%; Fitch: 40%; DBRS: 20.2%; A.M. Best: 0%. Source: GAO analysis of ABA data. [End of figure] In prime RMBS, Moody's market coverage began to decline in 2007 from about 88 percent to about 6 percent in June 2010. Standard & Poor's annual market coverage of prime RMBS began to decline in 2008, falling from about 94 percent in 2007 to about 42 percent in 2009, but it has since increased to about 63 percent (see figure 4). Fitch's market share peaked in 2009 at about 62 percent, but has since declined to about 9 percent. Figure 4: NRSRO U.S. Annual Market Coverage by Prime RMBS, Dollar Volume, 2004-June 2010: [Refer to PDF for image: multiple line graph] Percentage of total issuance: Year: 2004; Total number of deals: 574; S&P: 89.7%; Moody's: 81.9%; Fitch: 36.6%; DBRS: 1%. Year: 2005; Total number of deals: 752; S&P: 90%; Moody's: 84.5%; Fitch: 30.5%; DBRS: 5%. Year: 2006; Total number of deals: 713; S&P: 89.4%; Moody's: 87.5%; Fitch: 42.8%; DBRS: 2.5%. Year: 2007; Total number of deals: 584; S&P: 94%; Moody's: 74.2%; Fitch: 57.9%; DBRS: 2.9%. Year: 2008; Total number of deals: 83; S&P: 84.1%; Moody's: 40.5%; Fitch: 45%; DBRS: 17.8%. Year: 2009; Total number of deals: 126; S&P: 42%; Moody's: 5.5%; Fitch: 61.5%; DBRS: 41.3%. Year: 2010; Total number of deals: 60. S&P: 68.3%; Moody's: 6.4%; Fitch: 8.5%; DBRS: 73.4%. Source: GAO analysis of ABA data. [End of figure] Trends in coverage of the nonprime RMBS market are the most dramatic, with both Standard & Poor's and Moody's coverage plummeting from more than 90 percent to zero for 2009 and 2010, and Fitch's coverage falling from about 57 percent for 2005 to about 1 percent for 2008 (see figure 5). On the other hand, DBRS coverage of nonprime RMBS increased from about 14 percent for 2005, and 12 percent in 2006-2007 to about 73 percent for 2008. Furthermore, DBRS was the lead NRSRO to rate the most nonprime RMBS deals issued in 2009 and the first half of 2010, rating three more than Moody's. Figure 5: NRSRO U.S. Annual Market Coverage by Nonprime RMBS, Dollar Volume, 2004-June 2010: [Refer to PDF for image: multiple line graph] Percentage of total issuance: Year: 2004; Total number of deals: 608; S&P: 99.1%; Moody's: 93.7%; Fitch: 50.4%; DBRS: 6.8%. Year: 2005; Total number of deals: 621; S&P: 97.2%; Moody's: 95%; Fitch: 56.5%; DBRS: 13.6%. Year: 2006; Total number of deals: 654; S&P: 99.2%; Moody's: 99.1%; Fitch: 45.4%; DBRS: 12%. Year: 2007; Total number of deals: 344; S&P: 98.4%; Moody's: 94.3%; Fitch: 31.9%; DBRS: 12.4%. Year: 2008; Total number of deals: 13; S&P: 94.5%; Moody's: 47.7%; Fitch: 1.2%; DBRS: 73.4%. 
Year: 2009; Total number of deals: 1; S&P: 0%; Moody's: 0%; Fitch: 0%; DBRS: 100%. Year: 2010; Total number of deals: 3; S&P: 0%; Moody's: 38%; Fitch: 0%; DBRS: 100%. Source: GAO analysis of ABA data. [End of figure] Declines in Standard & Poor's, Moody's, and Fitch's coverage of the CMBS market through 2009 were almost as dramatic as those in the nonprime RMBS market, but, unlike in the nonprime RMBS market, they have not been matched by correspondingly dramatic increases in DBRS's coverage (see figure 6). Rather, Realpoint's CMBS coverage has increased from virtually zero through 2008 to 19 percent in 2010.[Footnote 95] In addition, Standard & Poor's, Moody's, and Fitch all saw their CMBS coverage rebound somewhat in 2010, with Fitch's annual market coverage in 2010 about equal to its annual market coverage in 2005 (about 56 percent). Figure 6: NRSRO U.S. Annual Market Coverage by CMBS, Dollar Volume, 2004-June 2010: [Refer to PDF for image: multiple line graph] Percentage of total issuance: Year: 2004; Total number of deals: 117; S&P: 77.7%; Moody's: 72%; Fitch: 46.8%; DBRS: 8%; Realpoint: 0%. Year: 2005; Total number of deals: 125; S&P: 82%; Moody's: 75.2%; Fitch: 56%; DBRS: 6.6%; Realpoint: 0%. Year: 2006; Total number of deals: 130; S&P: 79.1%; Moody's: 71.9%; Fitch: 57.1%; DBRS: 1%; Realpoint: 0%. Year: 2007; Total number of deals: 103; S&P: 85.8%; Moody's: 72.9%; Fitch: 67.5%; DBRS: 5.6%; Realpoint: 0%. Year: 2008; Total number of deals: 24; S&P: 69.4%; Moody's: 63.4%; Fitch: 26.8%; DBRS: 0%; Realpoint: 0%. Year: 2009; Total number of deals: 51; S&P: 16.3%; Moody's: 11.8%; Fitch: 12.7%; DBRS: 0.5%; Realpoint: 3.2%. Year: 2010; Total number of deals: 8; S&P: 39.3%; Moody's: 52.3%; Fitch: 56.4%; DBRS: 0%; Realpoint: 19%. Source: GAO analysis of ABA data. [End of figure] Finally, in the CDO market, Moody's coverage fell from about 97 percent in 2007 to about 39 percent in 2009 (see figure 7). Figure 7: NRSRO U.S. Annual Market Coverage by CDO, Dollar Volume, 2004-June 2010: [Refer to PDF for image: multiple line graph] Percentage of total issuance: Year: 2004; Total number of deals: 270; S&P: 94.7%; Moody's: 86.3%; Fitch: 35.3%; DBRS: 0.1%. Year: 2005; Total number of deals: 428; S&P: 97.5%; Moody's: 89.8%; Fitch: 28.2%; DBRS: 0.3%. Year: 2006; Total number of deals: 767; S&P: 97.2%; Moody's: 98.4%; Fitch: 20.9%; DBRS: 0.1%. Year: 2007; Total number of deals: 529; S&P: 97.4%; Moody's: 97.3%; Fitch: 18.3%; DBRS: 0%. Year: 2008; Total number of deals: 98; S&P: 71.2%; Moody's: 63.4%; Fitch: 2%; DBRS: 0.3%. Year: 2009; Total number of deals: 12; S&P: 58.9%; Moody's: 39%; Fitch: 22.4%; DBRS: 0%. Year: 2010; Total number of deals: 33; S&P: 49.8%; Moody's: 100%; Fitch: 0%; DBRS: 0%. Source: GAO analysis of ABA data. [End of figure] Moody's CDO coverage was about the same as Standard & Poor's in 2007, but declined steadily after that to about 39 percent in 2009. It has since increased to 100 percent. Fitch's CDO coverage declined from about 35 percent in 2004 to zero in 2010 (with a brief increase in 2009 covering about 22 percent of the market). DBRS's CDO coverage has remained negligible throughout the period.
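The following is a minimal sketch, in Python, of the annual market coverage measure used in figures 2 through 7, based on hypothetical deal-level data rather than the Asset-Backed Alert data we analyzed. Because more than one NRSRO can rate the same deal, the coverage percentages can sum to more or less than 100 percent.

# Hypothetical illustration of annual market coverage: the dollar volume
# an NRSRO rates as a fraction of the total dollar volume issued.
deals = [
    # (dollar volume in millions, NRSROs that rated the deal) -- invented data
    (500, {"NRSRO A", "NRSRO B"}),
    (300, {"NRSRO A"}),
    (200, {"NRSRO B", "NRSRO C"}),
]

total_volume = sum(volume for volume, _ in deals)
nrsros = sorted(set().union(*(raters for _, raters in deals)))

for nrsro in nrsros:
    rated_volume = sum(volume for volume, raters in deals if nrsro in raters)
    coverage = 100 * rated_volume / total_volume
    print(f"{nrsro}: {coverage:.0f}% of issuance rated")
# Prints 80%, 70%, and 20%; the shares sum to more than 100 percent
# because two of the deals carry ratings from two NRSROs.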
In December 2009, SEC adopted rule amendments that are intended, in part, to increase competition in the rating of ABS.[Footnote 96] Beginning in June 2010, the amended rule requires an NRSRO hired by arrangers to rate a structured finance product to disclose on its password-protected Web site each structured finance product it has been hired to rate, along with the type of structured finance product, the name of the issuer, the date the rating process began, and the Web site at which the issuer will disclose the information it has provided to the NRSRO for the rating. The amended rule requires the arranger to provide representations to the hired NRSRO that it will make available the information it has provided to the hired NRSRO for determining an initial rating or for monitoring a rating on the issuer's password-protected Web site. The issuer must also provide representations to the hired NRSRO that it will allow other NRSROs access to the information so that the other NRSROs can produce unsolicited ratings on the same structured finance product. SEC proposed these rule amendments after its examinations of the three largest NRSROs identified issues in the management of conflicts of interest particular to structured finance ABS. In particular, SEC found that analysts appeared to be aware of NRSROs' business interest in securing the rating of the deal and that rating agencies did not appear to take steps to prevent considerations of market share and other business interests from influencing ratings or rating criteria. In the proposed rule amendments, SEC stated it believed that the issuer-pays conflict is particularly acute in the case of structured finance products because certain arrangers of structured finance products bring repeat business to NRSROs. As such, SEC believes that some arrangers have the potential to influence NRSROs on structured finance products more than on corporate securities. In the amended rule, SEC stated that one of its goals is to facilitate the issuance of credit ratings for structured finance products by nonhired NRSROs at the same time as the hired NRSRO and provide investors with more views on the creditworthiness of the structured finance product. SEC stated this practice may serve to increase unsolicited ratings for structured finance products, mitigate ratings shopping, and affect competition among NRSROs by having more ratings in the market. Furthermore, SEC stated that market participants could use unsolicited ratings to evaluate the ratings issued by the hired rating agency. Specifically, SEC intends that by opening up the ratings process to more NRSROs, hired NRSROs will find it easier to resist any pressure by the arranger to obtain better-than-warranted ratings, because of the likelihood that any steps taken to inappropriately favor the arranger could be exposed to the market through credit ratings issued by other NRSROs. Although not enough time has passed to assess the impact of the amended rule on competition, we found that one NRSRO has withdrawn its NRSRO registration in this asset class and the other NRSROs had varying views on its potential effectiveness. One NRSRO said that the high cost of establishing and maintaining the data systems could negatively impact both NRSROs and issuers. Further, this NRSRO said the amended rule could deter issuers from taking innovative structured finance products to the market because they would have to disclose proprietary information.
Another NRSRO said the cost of implementing the rule could burden nonhired NRSROs, which may need to post only very limited information under the rule, and thus compromise the effectiveness of the rule. In May 2010, one NRSRO announced it was withdrawing its NRSRO registration in this asset class.[Footnote 97] This NRSRO said it made this decision in part because it believed that fund-raising activities through structured finance products in the geographic region it serves might be negatively affected. A fourth NRSRO agreed that publishing unsolicited ratings could enhance the transparency of the ratings process, but said that rating new structured finance issuances was too costly without fees from an issuer. For example, this NRSRO said that its costs were approximately $2,500 to verify the data underlying the security in an ABS that it rates. Additionally, this NRSRO said that legal costs for analyzing the securities in a deal ranged from $25,000 to $50,000. However, another NRSRO was not concerned with the costs associated with implementing the rule and believed the new rule could be effective. Multiple Factors Likely Account for Continued Concentration among NRSROs: The continued concentration among NRSROs since the implementation of the Act likely resulted from several factors. First, little time has passed since the Act took effect. Second, the three new NRSROs registered under the new program have not had much time to build market share. Furthermore, SEC rules implementing the new NRSRO registration program and requiring disclosures of ratings performance, ratings methodologies, and conflicts of interest have been in place since June 2007, and have been amended twice. Finally, the credit crunch and the ensuing financial crisis occurred soon after the implementation of the Act, substantially slowing certain sectors of the credit market.[Footnote 98] Generally speaking, barriers to becoming an NRSRO create challenges for newer and smaller credit rating agencies. Two types of barriers likely contributed to the continued concentration of the industry: barriers to entering the credit rating industry and barriers to registering as an NRSRO. We have identified three barriers to entering the credit rating industry. First, credit rating agencies may have relatively high fixed costs.[Footnote 99] Credit rating agencies that are established can produce ratings in volume, leading to economies of scale. The combination of high fixed costs and economies of scale favors larger, established rating agencies and poses barriers to smaller firms entering the market. In addition, the markets for some asset classes may be more difficult for rating agencies to enter than others. For example, a credit rating agency told us it is difficult to enter the credit rating market for the structured finance asset class because of costs associated with acquiring the expertise to rate this type of product. Further, rating methodologies for structured finance products are complex, requiring expertise to develop and apply the models needed to rate this type of product. Second, establishing a reputation takes time.[Footnote 100] The better a rating agency's reputation for producing ratings, the more business it will be able to attract as compared with a credit rating agency without a reputation.[Footnote 101] However, given the nature of the ratings, reputation can take years to establish.
In most cases, ratings are intended to predict the likelihood of default over the life of the bond, but some securities take from 10 to 30 years to mature. When a new rating agency begins to rate securities, evaluating the quality of those ratings at the time of purchase is difficult. Instead, users need to see how the ratings perform over the life of the bond to determine how accurate and timely they are. Several NRSROs have commented that their reputation is critical to their success. One NRSRO said that a reputation is difficult to earn and easy to lose. Third, network effects pose a challenge to entry in the ratings industry and favor the larger, more established credit rating agencies.[Footnote 102] The more securities a specific NRSRO rates, the more value there is in assigning ratings to that same NRSRO, because comparing securities rated by that NRSRO would be easier for investors and other market participants. Network effects can make gaining market share difficult for new entrants if investors and other market participants already are using an existing NRSRO's ratings. NRSRO references have been widely embedded in numerous federal and state laws and regulations.[Footnote 103] Further, many investment guidelines and private contracts reference specific NRSROs, which makes marketing to investors, other users, or issuers more difficult for newer NRSROs. Several institutional investors and investment advisors with whom we spoke told us they use the big three NRSROs' ratings because of investor guidelines, regulatory guidelines, or depth of ratings coverage. Although there is no limit on the number of NRSROs that can rate a particular issue, asset managers may face budgetary constraints that limit their ability to subscribe to NRSROs beyond those their investment agreements require them to consider.[Footnote 104] For example, one asset manager told us that it has a limited budget for subscriptions to ratings and that purchasing subscriptions for each analyst in its credit research department is costly. This asset manager has subscription services with four NRSROs but a limited working relationship with the others. Three issuers with whom we spoke told us that their choice of NRSRO is driven by investor expectations, which direct them to the big three firms. A credit rating agency also encounters barriers to entry when registering as an NRSRO. Despite the efficiency and transparency of the new NRSRO registration program, compliance with the Act and SEC rules may result in higher costs for smaller NRSROs and may inhibit credit rating agencies from registering as NRSROs. For example, one small NRSRO estimated it spent $500,000 annually to maintain the NRSRO designation. Two rating agencies with which we spoke said they would not register because of the regulatory burden associated with complying with the Act. Furthermore, one NRSRO told us it might deregister as an NRSRO should regulation become too costly. Besides barriers to entry, differences among NRSROs' compensation models and a degree of specialization may also contribute to market concentration. For example, subscriber-pays NRSROs may be limited in their ability to rate newly issued securities. One NRSRO explained that it uses the subscriber-pays model to provide ongoing surveillance of rated securities in the secondary market, which it produces using publicly available information. 
However, it said it could not use the subscriber-pays model to rate the initial offering of securities because, under this model, it would not have access to the data provided by the issuer to complete the analysis and produce an initial rating. To the extent the subscriber-pays NRSROs are not rating new issues, market coverage of these securities will be concentrated among issuer-pays NRSROs. As another example, the difference between the issuer-pays and the subscriber-pays compensation models could also affect market concentration when it is measured in terms of total revenues. As previously discussed, both issuer-pays and subscriber-pays NRSROs could provide ratings on the same group of entities, but receive vastly different revenues. Issuer-pays NRSROs charge issuers fees for every rating produced, while subscriber-pays NRSROs charge users a subscription fee for access to their ratings. Thus, if issuer-pays and subscriber-pays NRSROs rate the same entities, total revenue will likely be concentrated among NRSROs using the compensation model that generates the greatest revenues per rated entity. Finally, to the extent certain NRSROs specialize in a particular asset class, sector, or geographic region, the overall credit rating industry will likely be highly concentrated among those NRSROs that rate across asset classes, sectors, and geographic regions. However, a specialized NRSRO could have a significant presence in its market. Academic Research Suggests Increasing Competition in the Credit Rating Industry Improves Information Availability, but the Impact on Rating Quality Is Unclear: The impact of increasing competition on the quality of credit ratings is not yet well understood. Academic researchers generally measure the quality of credit ratings according to how much information they convey about the risk of default or of loss in the event of default. Their findings suggest that the entry of new credit rating agencies can improve overall information available to investors and other market participants. However, the effect of entry on the quality of ratings produced by any one rating agency is not clear. Moreover, there have been few studies investigating the effect of new entrants and competition in the credit rating industry. These studies are unpublished and, thus, their findings should be viewed as preliminary. We reviewed three studies that examined the impact of competition on ratings quality.[Footnote 105] Based on an analysis of insurance company ratings, one study suggests that entry of a new credit rating agency improves the amount of information available to investors and other market participants. This study suggests that new entrants have stricter criteria than incumbents for assigning the same rating, assuming they both use the same rating scale. It is more difficult for an issuer to get the highest rating from the new rating agency than from the incumbent rating agency. An issuer can choose to be rated by the incumbent rating agency, by the new rating agency, or both. Furthermore, the two rating agencies' criteria are different. As a result, an issuer can communicate more information about its riskiness to the market by its choice of rating agency and the combination of ratings it gets than it could communicate when there was only one rating agency. A different study of CDO ratings also suggests that the number of rating agencies from which an issuer requests ratings is informative. 
Specifically, tranches rated by more than one rating agency were less likely to be downgraded than those rated by a single rating agency. This result is consistent with the hypothesis that issuers of less-risky CDOs were more likely to request two or more ratings. Based on ratings of corporate issuers, insurance companies, and financial institutions, a third study analyzes the impact of competitive pressure from a new credit rating agency on the quality of an incumbent rating agency's ratings. The study uses three alternative indicators of quality. The first indicator is the correlation between a bond's rating and its yield, with lower correlations indicating that ratings are less informative about bond repayment and thus are of lower quality. The second indicator is the magnitude of the effect of a downgrade on an issuer's stock price, with larger magnitudes indicating that the downgrade is worse news.[Footnote 106] The last indicator is the rating the incumbent agency assigns to an issuer or a bond, with higher ratings presumed to be more favorable to the issuer and thus of lower quality. This study suggests that the incumbent credit rating agencies produce lower-quality ratings in market segments in which smaller, newer credit rating agencies have higher market share. Together, the three studies' findings have implications for the amount of information available to credit market participants and for the quality of credit ratings, and they may offer some preliminary observations about the impact of new entrants on rating quality and competition. The first and second studies both suggest that entry of new credit rating agencies will allow issuers and other rated entities to communicate more information to the market, both by the number of ratings they request and by the combination of ratings they receive from different rating agencies. The findings of the second and third studies together seem to suggest that entry of new credit rating agencies will lead incumbent rating agencies to produce lower-quality ratings, either relative to the new entrant's ratings or relative to their own ratings in markets in which they face less competitive pressure. However, it is difficult to predict the effect on incumbent rating agencies' ratings when a new agency enters the market. Models Proposing Alternative Means of Compensating NRSROs Intend to Address Conflicts of Interest in the Issuer-Pays Model: As part of an April 2009 roundtable held to examine oversight of credit rating agencies, SEC requested perspectives from users of ratings and others on whether it should consider additional rules to better align the raters' interests with those who rely on those ratings, and specifically, whether one business model represented a better way of managing conflicts of interest than another. In response, some roundtable participants proposed alternative models for compensating NRSROs, and market observers have proposed others in congressional hearings and academic literature. We identified five unique models that have been proposed, although they are in various stages of development. To assist Congress and others in assessing these proposals, we created an evaluative framework of seven factors that any compensation model should address to be effective. By applying these factors, users of the framework can identify the potential benefits of a model consistent with policymakers' goals as well as any tradeoffs. 
Proposed Alternative Compensation Models: In recent years, academic researchers and industry experts have begun to develop a number of alternative compensation models for credit rating agencies in response to concerns about conflicts of interest, ratings integrity, and competition. In a July 2008 report discussing the examinations of the three most active NRSROs and their performance in rating subprime RMBS and related CDOs, SEC staff identified issues in the management of conflicts of interest resulting from the issuer-pays model the firms used. NRSROs using this model have an interest in generating business from the firms that seek the rating, which could conflict with providing quality ratings. In response to the examination findings, SEC introduced new and amended rules intended to improve the management of conflicts of interest in the issuer-pays model.[Footnote 107] In April 2009, as part of the roundtable held to examine oversight of credit rating agencies, SEC requested perspectives from users of ratings and others on whether SEC should consider additional rules to better align the NRSROs' interests with those who rely on those ratings, and specifically, whether one model represented a better way of managing conflicts of interest than another. In response, some roundtable participants proposed alternative models for compensating NRSROs. These models generally intend to address the conflict of interest in the issuer-pays model, better align the NRSROs' interests with users of ratings, or improve the incentives NRSROs have to produce reliable and high-quality ratings. Other models with similar goals have been presented in congressional hearings and in academic literature. Below, we provide a summary of the key provisions of five distinct alternative models for compensating NRSROs (alternative compensation models). Given their theoretical nature, they vary greatly in the amount of detail currently available. None of these models has been implemented to date. Five alternative compensation models have been proposed: random selection model, investor-owned credit rating agency model, stand-alone model, designation model, and user-pay model.[Footnote 108] Random Selection Model: Under the random selection model, a ratings clearinghouse randomly would select a credit rating agency to rate a new issuance.[Footnote 109] All issuers or sponsors that wanted to obtain ratings for their issuances would be required to request ratings from the clearinghouse, which would use a random number generator, such as a computerized algorithm, to assign a credit rating agency. The clearinghouse would notify the credit rating agency of the opportunity to rate the issuance and provide basic information pertaining to the type of issuance, but not the issuer's name. Not until the credit rating agency agreed to complete the rating would the clearinghouse disclose to the credit rating agency the identity of the issuer and the details of the issuance. If the selected credit rating agency agreed to rate the issuance, the issuer would pay a fee to the ratings clearinghouse. The clearinghouse then would distribute the fees to the credit rating agency upon the completion of the initial and maintenance ratings. The letter rating would be free of charge to the public. In addition to this function, another primary role of the clearinghouse would be to design the criteria by which new entrants could qualify as a credit rating agency. 
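The proposal describes this assignment mechanism only in general terms. As a rough illustration of the mechanics described above, the following minimal Python sketch shows one way a clearinghouse might randomly assign a qualified agency and withhold the issuer's identity until the agency accepts; it is not part of the proposal, and the class, method, and field names are hypothetical.

# Illustrative sketch only: the random selection proposal does not specify an
# implementation; all class, method, and field names here are hypothetical.
import random
from dataclasses import dataclass

@dataclass
class RatingRequest:
    issuer_name: str       # withheld until an agency agrees to rate
    issuance_type: str     # basic information the clearinghouse may share
    fee: float             # paid by the issuer to the clearinghouse

class Agency:
    def __init__(self, name):
        self.name = name
        self.engagements = []

    def offer(self, issuance_type):
        """Decide whether to take the engagement knowing only the issuance type."""
        return True  # a real agency would apply its own acceptance criteria

    def receive_details(self, request):
        self.engagements.append(request)  # full details disclosed after acceptance

class Clearinghouse:
    def __init__(self, qualified_agencies):
        self.agencies = list(qualified_agencies)

    def assign(self, request):
        """Randomly select a qualified agency; disclose the issuer's identity
        only after the agency accepts the engagement."""
        agency = random.choice(self.agencies)
        if agency.offer(request.issuance_type):
            agency.receive_details(request)
            return agency
        return None

agencies = [Agency("Agency A"), Agency("Agency B"), Agency("Agency C")]
clearinghouse = Clearinghouse(agencies)
chosen = clearinghouse.assign(RatingRequest("Example Issuer", "RMBS", 250_000.0))
print(chosen.name if chosen else "No agency accepted the engagement")

In practice, a clearinghouse would also need rules for agencies that decline engagements and for distributing fees, which this sketch omits.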
According to the proposed model, the ratings clearinghouse could be a nonprofit organization, a governmental agency such as SEC, or a private-public partnership. The issuer would fund the ratings clearinghouse through an additional fee, paid on top of that required to rate the security, to cover clearinghouse costs. The ratings clearinghouse also would be responsible for setting the ratings fees for the credit rating agency depending on the type of security issued. The proposal incorporates a peer comparison review to create an incentive for credit rating agencies to produce quality ratings. As part of this review, the ratings clearinghouse would evaluate the performance of all credit rating agencies on the basis of two empirical tests. As one potential test, the proposal suggests analyzing the proportion of investment-grade debt instruments that default or lose substantial value. If the default percentage for a given credit rating agency differed from that of its peers by a set parameter, then the agency would be subject to sanctions, which would range from losing a percentage of business to losing a percentage of rating fees. The second potential test would compare the annual yields, as set by the market, of identically rated debt securities from different asset classes for each credit rating agency. Securities in different asset classes that are rated similarly should have the same yield. If a threshold differential existed between the yields of identically rated securities for a credit rating agency, then that agency would be subject to sanctions. According to the architect of the model, this model would eliminate the conflict of interest that arises when an issuer pays a credit rating agency for a rating by making the compensation neutral and eliminating the linkage between the credit rating agency and the issuer. The elimination of the conflict of interest would remove a barrier to entry and would allow for new competition. Furthermore, he believes that the peer comparison review coupled with economic sanctions for poor performance would motivate the credit rating agencies to continually adjust their rating models and produce quality ratings. Investor-Owned Credit Rating Agency Model: Under the investor-owned credit rating agency (IOCRA) model, sophisticated investors--termed highly sophisticated institutional purchasers (HSIP)--would create and operate an NRSRO that would produce ratings. Issuers would be required to obtain two ratings: one from the IOCRA and the second from an NRSRO of their choice. More specifically, an NRSRO would be prohibited from publicly releasing a rating that was paid for by the issuer or sponsor, unless the NRSRO received written notification that the issuer had made arrangements and paid an IOCRA to publicly release its rating. The IOCRA would publish its rating at the same time the NRSRO published its rating. Institutional investors would have to qualify as an HSIP before forming an IOCRA or joining an existing one. To qualify as an HSIP, an institutional investor would have to demonstrate that it was large and sophisticated, managed billions of dollars in assets, and could be relied upon to represent the buy-side interest in accurately rating debt market instruments. The HSIPs would hold majority voting and operational control over the IOCRA. The proposal contemplates that the IOCRA could be a for-profit or a not-for-profit entity. There would not be a regulatory limit on the number of IOCRAs that could be formed. 
Under the proposal, market forces would set IOCRA fees, which likely would be comparable to fees currently charged by the dominant NRSROs. The letter rating and the underlying research would be free to the public. Proponents of this model believe it would improve the rating process by changing the incentive structure of the NRSROs' business. They said the IOCRA would affect competition and ratings quality by introducing new competition to the industry and by counterbalancing the investors' interest against that of the issuers. Stand-Alone Model: Under the stand-alone model, NRSROs would be permitted only to produce credit ratings. The NRSROs would be able to interact with and advise organizations being rated, but could not charge fees for providing advice.[Footnote 110] Instead of receiving issuer fees, the NRSROs would be compensated through transaction fees imposed on original issuance and on secondary market transactions. Part of the fee would be paid by the issuer or secondary-market seller, and the other portion of the fee by the investor purchasing the security in either the primary or secondary market. The NRSRO would be compensated over the life of the security based on these transaction fees. The letter rating would be free to the public. Proponents of this model believe that by creating a funding source that is beyond the influence of both issuers and investors, the focus of the NRSRO would be on producing the most accurate and timely credit analysis rather than on satisfying the desires of any other vested interest. Designation Model: Under the designation model, all NRSROs would have the option of rating a new issuance, and security holders would direct, or designate, fees to the NRSROs of their choice, based on the proportion of securities that they owned. When an issuer decided to bring a security to market, it would be required to provide all interested NRSROs with the information to rate the issuance. The issuer would pay the rating fees to a third-party administrator, which would manage the designation process.[Footnote 111] When the security was issued, the security holders would designate which of the NRSROs that rated the security should receive fees, based on their perception of the research underlying the ratings. The security holders could designate one or several NRSROs. The third-party administrator would be responsible for disbursing the fees to the NRSROs in accordance with the security holders' designations. After the initial rating, the issuer would continue to pay maintenance rating fees to the third-party administrator, which security holders also would allocate through the designation process every quarter over the life of the security. When the debt was repaid (or repurchased by the issuer), a final rating fee would be paid in conjunction with the retirement of the security. The letter rating would be free to the public, while the research underlying it would be distributed to security holders and (at the discretion of the relevant NRSROs) to potential security holders. The proposed model suggests that the issuer's transfer agent could perform the responsibilities of the third-party administrator. The transfer agent currently is responsible for maintaining ownership records of the security holders. 
The authors of this model believe it would eliminate the conflict of interest between the issuers paying for the rating and the NRSRO and would increase competition by encouraging NRSROs to prepare unsolicited ratings, because each NRSRO would be assured of receiving compensation for its rating, provided some group of investors or other users of ratings found the rating useful enough to allocate to that NRSRO a portion of the fees they designated or paid. User-Pay Model: Under the user-pay model, issuers would not pay for ratings. Rather, to address the free-rider problem, the model specifies that all users of ratings would be required to enter into a contract with, and pay for the rating services of, an NRSRO. The proposal defines "user" as any entity that included a rated security, loan, or contract as an element of its assets or liabilities as recorded in an audited financial statement. Users of ratings would include holders of long or short positions in a fixed-income instrument, as well as parties that refer to a credit rating in contractual commitments (for example, as parties to a lease) or that are parties to derivative products that rely on rated securities or entities. A user would be required to pay for ratings services supplied during each period in which it booked the related asset or liability. The model relies on third-party auditors to ensure that NRSROs receive payment from users of ratings for their services. Any entity that required audited financial statements in which the rated instrument or covenant was included among the assets or liabilities would be required to demonstrate to the auditors that it had paid for the rating services. No audit opinion would be issued until the auditor was satisfied that the rating agencies had been properly compensated. The model would require the close cooperation of the auditing community and the Public Company Accounting Oversight Board. The architects of this model believe that, while more cumbersome, the model attempts to capture "free riders"--those users of ratings that do not compensate NRSROs for the use of their intellectual property--and require them to pay for the ratings. Framework to Evaluate Alternative Compensation Models: In this report, we are not evaluating the proposed alternative compensation models. Instead, we are providing a framework that Congress and others can use to evaluate or craft alternative compensation models for NRSROs. The framework contains seven factors, all of which are essential for a compensation model to be fully effective. Furthermore, we have provided key questions under each factor that can be applied to an alternative compensation model to identify its relative strengths and weaknesses, potential tradeoffs (in terms of policy goals), or areas in which further elaboration or clarification would be warranted. Similarly, the framework could be used to further develop proposals or identify aspects of current regulations to make them more effective and appropriate for addressing the limitations of the current credit rating system. 1. Independence. The ability of the compensation model to mitigate the conflicts of interest inherent in the relationship between the entity paying for the rating and the NRSRO. * What potential conflicts of interest exist in the alternative compensation model? * What controls, if any, would need to be implemented to mitigate these conflicts? How does the compensation model seek to limit conflicts of interest between the entity paying for the ratings and the NRSRO? 
Between users of ratings and the NRSRO? Between issuers and the NRSRO? As previously discussed, conflicts of interest arise between the entity paying for the rating and the NRSRO. The alternative compensation models we have identified continue to rely on issuer fees to fund ratings and to help ensure that ratings remain free to the public. However, several intend to mitigate the potential for the issuer to influence NRSROs in different ways--either by increasing investors' role in the rating process or by assigning NRSROs through rotation or random selection. In assessing these as well as other potential compensation models, it is important to consider whether the models introduce any new conflicts of interest and evaluate the steps the models propose to mitigate them. 2. Accountability. The ability of the compensation model to promote NRSROs' responsibility for the accuracy and timeliness of their ratings. * How does the compensation model create economic incentives for NRSROs to produce quality ratings over the life of a bond? * How is NRSRO performance evaluated and by whom? For example, does the compensation model rely on market forces or third parties to evaluate performance? For models that rely on third parties, how are "quality" credit ratings defined and what criteria would be used to assess ratings performance? * When an NRSRO demonstrates poor performance, what are the economic consequences under the compensation model and who determines these consequences? For example, how is an NRSRO's compensation or opportunity for future ratings business linked to ratings performance? The quality of an NRSRO's ratings largely is determined by the ratings methodologies it employs, but the compensation model also can affect the ratings process. An effective compensation model will provide economic incentives for the rating agency to produce not only a quality rating at issuance, but also appropriate surveillance of the security over its life. It also should link NRSRO compensation to ratings performance. As such, when evaluating various compensation models, it is important to consider how NRSRO performance is evaluated and by whom. NRSRO performance could be evaluated by market participants or an independent arbiter, with the consequences of performance dictated by the compensation model. For example, models can rely on market discipline to evaluate NRSRO ratings and determine which ones merit future business. Models also could rely on third parties to evaluate NRSRO performance, which might require the third party to develop performance measures. However, as we previously discussed, there are differences in the NRSROs' measures of creditworthiness, ratings scales, ratings methodologies, and other processes that can make comparison of NRSRO performance difficult when using these measures. 3. Competition. The extent to which the compensation model creates an environment in which NRSROs compete for customers by producing higher-quality ratings at competitive prices. * On which dimensions does the compensation model encourage NRSROs to compete? To what extent does the compensation model encourage competition around the quality of ratings? Ratings fees? Product innovation? * To what extent would the compensation model encourage new entrants and reduce barriers to entry in the industry? * To what extent does the model allow for flexibility in the differing sizes, resources, and specialties of NRSROs? * To what extent do market forces determine ratings fees? 
When evaluating an alternative compensation model, considering its potential impact on competition in the ratings industry is important. Most importantly, the model should not in itself present a barrier to entry or increase existing barriers. It should not promote NRSROs' convergence on a single class of products or methodologies, but should foster diversity in ratings methodologies and products. For example, the compensation model should be flexible enough to allow relatively smaller NRSROs or NRSROs with specialties to adapt to any new requirements and should not inadvertently hinder them from competing with the larger NRSROs or expanding their product lines to meet market demand. An effective compensation model also will promote competition around the quality of ratings. In that sense, this factor is closely related to the accountability factor, in that the compensation model should not economically reward NRSROs that consistently produce poor-quality ratings. Some compensation models could increase competition by reducing barriers to entry for smaller or newer NRSROs; for example, by offering or guaranteeing them more opportunity to produce ratings and increase their coverage of the market. However, it is unclear whether a model that increases the number of NRSROs would result in more competition among them to produce quality ratings over time. In assessing these models, considering their potential impact on NRSROs' incentives to compete around ratings quality, product innovation, and overall efficiency is important. Similarly, an effective compensation model will promote competition around ratings fees. Some NRSROs are highly specialized, serving particular markets or asset classes. As such, the ratings fees charged by each NRSRO reflect its own cost structure. Compensation models should not incorporate a uniform approach to setting ratings fees. Such an approach would promote inefficiencies in the market and dissuade some NRSROs from continuing to offer services if they believed they were economically disadvantaged. Those NRSROs with comparatively lower cost structures for producing ratings might benefit from such an approach, but overall it would not encourage NRSROs to produce ratings cost-effectively. 4. Transparency. The accessibility, usability, and clarity of the compensation model and the dissemination of information on the model to market participants.[Footnote 112] * How clear are the mechanics of the compensation model to market participants? How transparent are the following procedures and processes: - how the NRSROs obtain ratings business; - how ratings fees are determined; - how NRSROs are compensated; and: - how the compensation model links ratings performance to NRSRO compensation. An effective compensation model should be transparent to market participants to help them understand it and to increase market acceptance. For example, the model should be transparent about how NRSROs obtain ratings business, such as (1) whether issuers will select the NRSROs; (2) whether ratings business will be assigned, randomly awarded, or mandated; or (3) whether NRSROs will have the option to provide ratings on any new business. If the model relies on third parties or systems for this function, it should be explicit about the criteria or procedures employed. 
Similarly, if ratings fees are not determined by market forces, the model should clearly explain the process and criteria for determining fees.[Footnote 113] Issuers, NRSROs, oversight bodies, and users of ratings also should understand the proposed compensation mechanism, and it should clearly link ratings performance to NRSRO compensation. Any criteria for evaluating ratings performance and the process for determining these criteria should be disclosed. Lack of transparency in any of these areas could hinder support for and trust in the model. 5. Feasibility. The simplicity and ease with which the compensation model can be implemented in the securities market. * Is the model easily implemented? If not, how difficult will implementing the model be? * Could the compensation model be instituted through existing regulatory or statutory authority or are additional authorities needed? * What are the costs to implement the compensation model and who would fund them? * Which body would administer the compensation model, and is this an established body? If not, how would it be created? * What, if any, infrastructure would be needed to implement the compensation model? What information technology would be required? Which body would be responsible for developing and maintaining it? * What impact would the alternative compensation model have on bringing new issuances to market and trading on the secondary market? * How many NRSROs would be required for the compensation model to function as intended? How would the exit of an NRSRO from the ratings industry affect the model's feasibility? What impact would the alternative compensation model have on the financial viability of an NRSRO? When assessing a compensation model, considering the model's feasibility for successful implementation is essential. We note that the market itself has not undertaken the implementation of any compensation model other than the current issuer- and subscriber-pays models. As such, SEC or Congress likely would have to direct market participants to implement any alternative model.[Footnote 114] Models that are technically simple will be more feasible to implement than complex ones. Further, some alternative models we identified involve potentially significant costs. For example, some models would require the development of information technology systems that would be used across the market by potentially thousands of participants. A key question is not only what these models would cost to implement, but also who would be responsible for overseeing and paying for their development up front. Costly models could deter market participants from implementing and participating in them. Assessing the impact of any potential model on the efficiency of the securitization market is an important part of the evaluation process. An alternative compensation model should be flexible and adaptable to the real-time demands of the securities markets, and should not hinder the timing of initial issuances or trading on the secondary market. The model also should be viable regardless of the number of NRSROs that enter or exit the market. For compensation models that require a third-party administrator, the process for selecting or creating this administrator could have a significant impact on the success of its implementation. If market participants question the independence or capability of the administrator to run the model effectively and efficiently, they may be less likely to accept the model. 
For a compensation model to work as intended, it should not have to rely on a certain number of NRSROs to attain its goal. Such models could be undermined if only one or two NRSROs participated in the rating of specific types of securities or if an NRSRO exited the industry. While some proposed alternative compensation models intend to encourage competition among the NRSROs, the models themselves should not hinder the financial viability of an NRSRO. For example, the potential impact on the smaller NRSROs of any participation costs should be considered. The model also should not introduce undue uncertainty into the industry that would prevent NRSROs from conducting appropriate business planning (for example, attracting and retaining qualified staff). 6. Market Acceptance and Choice. The willingness of the securities market to accept the compensation model, the ratings produced under that model, and any new market players established by the compensation model. * What role do market participants have in selecting NRSROs to produce ratings, assessing the quality of ratings, and determining NRSRO compensation? More specifically, what are the roles of issuers and investors in these processes? How do these roles differ among models, and what are the tradeoffs? * Are all market participants likely to accept the ratings produced under the compensation model? If not, what are the potential consequences for the securitization market? * What impact, if any, would the model have on each market participant using the ratings? * Would market participation need to be mandated, and if so, for which participants? The likelihood that market participants will accept an alternative compensation model is another important consideration in its evaluation. In achieving its goals, such as increasing independence in the ratings process or competition among NRSROs, a particular model may limit or promote the participation and choices of some market participants over others and could affect the market's acceptance of and participation in the model. For example, as we have pointed out, the proposed models we have identified require that the issuer pay for the rating; however, in most of the models the issuer is no longer able to select which NRSRO rates the security. Limiting the issuer's choice may address conflicts of interest and increase the independence of the ratings process, but also could deter issuers from soliciting NRSROs to rate their debt. Market acceptance by institutional investors is also instrumental to the success of the model. For example, many private investment guidelines require the use of specific NRSROs. If these specified NRSROs are not producing ratings under an adopted model, then these investors may be limited in the securities that they can consider purchasing. Such tradeoffs would need to be carefully evaluated to ensure the model's viability and minimize its impact on the securities market. Market participants should accept the ratings produced under the compensation model. If market participants, particularly issuers and end users of ratings, do not have confidence in the ratings produced under the model, its viability could be significantly undermined. This is a particular concern with models that would mandate the use of a particular NRSRO or otherwise limit the market's influence on the supply of and demand for ratings. Market participants, including regulators outside of the United States, also could affect the acceptance of the model. 
A rating that is not accepted could create inconsistencies between domestic and foreign securities markets for investors that rely on ratings. 7. Oversight. The evaluation of the model to help ensure it works as intended. * Does the model provide for an independent internal control function? * What external oversight (from a regulator or third-party auditor) does the compensation model provide to ensure it is working as intended? * If third-party auditors provide external oversight, how are they selected, what are their reporting responsibilities, and to whom do they report? * Who will compensate the regulator or third-party auditor for auditing the compensation model? How will the compensation for the regulator or auditor be determined? * To what extent will a third-party auditor allow flexibility in oversight to accommodate NRSROs of different sizes? An effective alternative compensation model also will provide for independent internal controls and robust external oversight to ensure its integrity. This is especially important when a model calls for third-party administration or the use of information technology systems in its implementation. For example, any centralized system that collects fees from issuers and compensates the NRSRO(s) should be audited, as should any procedures used to award ratings business to an NRSRO, evaluate NRSRO performance, or apply economic penalties. Such oversight will help ensure the model functions as intended, thus increasing transparency and market acceptance. Funding for this oversight should be specified. The Dodd-Frank Act contains a mandate for SEC to conduct a study of the feasibility of establishing a system in which a public or private utility or a self-regulatory organization assigns NRSROs to determine the credit ratings of structured finance products. The study must include an assessment of the potential mechanisms for determining fees for NRSROs, appropriate methods for paying fees to the NRSROs, and the range of metrics that could be used to determine the accuracy of credit ratings. SEC also must evaluate alternative means for compensating NRSROs that would create incentives for accurate credit ratings.[Footnote 115] Our framework could be used to evaluate current proposals for compensating NRSROs, develop new proposals, and identify tradeoffs among them.[Footnote 116] Conclusions: The Credit Rating Agency Reform Act sought to improve ratings quality for the protection of investors, in part by establishing SEC oversight over credit rating agencies that register as NRSROs. However, SEC faced a number of challenges in implementing the law, and the recent financial crisis resulted in additional changes to SEC's oversight of NRSROs under the Dodd-Frank Act. As SEC starts to implement its new requirements, it must also address a number of challenges arising from its existing responsibilities. SEC's implementation of the Act involved developing an NRSRO registration program and an examination program. As currently implemented and staffed, both programs require further attention. * As intended by the Act, the new registration program for NRSROs reduced the time SEC staff took to act on an application and improved the transparency of SEC's process for awarding NRSRO designations to interested credit rating agencies. 
However, due to the time constraints, a lack of criteria, and a lack of express preregistration examination authority, Trading and Markets staff said the registration process generally does not allow them to conduct reviews that would confirm that the information provided on Form NRSRO is accurate and that the applicant meets all of the Act's requirements. Moreover, SEC has yet to explicitly identify the legislative changes needed to address this limitation and to work with Congress to ensure it has the authority needed to effectively carry out its oversight responsibilities. This raises a concern because SEC may approve applicants that do not meet the Act's requirements. * Although SEC has established an OCIE branch dedicated to the examination of NRSROs and hired individuals with experience in credit rating analysis and structured finance to fill these positions, OCIE has not completed timely examinations of the NRSROs and has expressed concerns about its ability to meet its planned NRSRO routine examination schedule of examining the three largest NRSROs every 2 years and the other NRSROs every 3 years. While SEC requested additional resources that it anticipated using to fully staff this oversight function, it will likely need to revisit those requests due to the passage of the Dodd-Frank Act, which requires SEC to establish an Office of Credit Ratings and examine each NRSRO every year. As SEC begins its planning of this new office, it is essential that SEC assess not only the number of staff it needs but also the skills required of this staff. Approaching this effort strategically may facilitate the recruitment and training of new hires. Without a plan that details the number of staff needed with the requisite qualifications and training, SEC may face challenges in meeting the required examination timetable and providing quality oversight of the NRSROs. SEC rules requiring NRSROs to publish short-, medium-, and long-term performance statistics have increased the amount of information publicly available about the performance of some NRSROs, particularly those newly registered. Overall, however, the disclosure of these statistics has not had the intended effect of increasing transparency for users. Specifically, * SEC has not provided specific guidance for the NRSROs for calculating and presenting the required performance statistics. Therefore, NRSROs have used different methodologies for calculating the required performance statistics, which renders them ineffective for comparative purposes. * SEC has yet to evaluate the appropriateness of the required performance statistics for SEC's currently designated asset classes to determine whether the requirements need to be modified. Asset classes that are defined too broadly limit the usefulness of the disclosures. The Dodd-Frank Act directs SEC to adopt additional rules requiring NRSROs to publicly disclose information on the initial credit ratings determined by each NRSRO for each type of obligor, security, and money market instrument, and any subsequent changes to such credit ratings. In developing these new disclosure requirements, it will also be important for SEC to provide clear and specific guidance to NRSROs. Otherwise, the resulting disclosures may lack comparability. Recent SEC rules to make ratings history data publicly available are intended to generate performance measures and studies to evaluate and compare NRSRO performance. 
However, it is unlikely that the ratings histories NRSROs publish pursuant to the 10 percent and 100 percent requirements can be used for these purposes. Specifically, * SEC did not specify the data fields the NRSROs were to disclose as part of their 10 percent sample disclosures, and the data fields provided by the NRSROs did not always provide sufficient information to allow users to identify a complete rating history for each rating in the sample, including the beginning of the rating history, or to identify specific types of ratings for making comparisons. As a result, users cannot develop performance measures that track how an issue's or issuer's credit rating evolves, evaluate comparable entities across NRSROs, or calculate measures that compare a starting point to the state of a rating at the time of default. * Users cannot easily determine what data the variables represent because NRSROs were not required to provide an explanation of the variables used in the samples as part of their disclosures. Without such explanations, users may find it difficult to begin to construct performance statistics. * Because SEC did not require it, not all NRSROs disclosed defaults as part of their ratings histories in the samples. As such, users cannot calculate default statistics for those NRSROs. * Because SEC guidance to NRSROs for generating the 10 percent random samples does not specify that the NRSROs draw the sample from those ratings that are typically analyzed in each asset class and does not require the NRSROs to periodically redraw the samples or include ratings that have been withdrawn in prior time periods, the samples are not representative of the population of credit ratings at each NRSRO. As a result, users cannot generate performance measures that represent the population of credit ratings over time and that can be compared across NRSROs. * Because the 100 percent disclosure requirement does not require that the NRSROs disclose the ratings of any issuer rated before June 26, 2007, or include data on withdrawn ratings, performance statistics calculated using the 100 percent data set would not reflect the overall rating performance of NRSROs and may not be representative of the universe of issuer ratings. * Because SEC also did not provide guidance to NRSROs on how to format and describe the data disclosed under the 100 percent requirement, users will likely experience challenges when seeking to construct ratings histories and develop comparable performance statistics. Finally, SEC's rule requiring NRSROs to disclose total outstanding ratings is intended to allow users to assess how broad an NRSRO's coverage is within a particular class of credit ratings. However, because SEC did not specify how NRSROs were to count their outstanding ratings, NRSROs used diverse methodologies to count their outstanding ratings. As a result, users cannot use the data for their intended purpose. The Dodd-Frank Act requires SEC and other federal agencies to remove references to NRSRO ratings from their regulations and substitute an alternative standard of creditworthiness. SEC has recently removed or proposed to remove references to NRSRO ratings from several rules. In comparison, the Dodd-Frank Act requirement is a broader undertaking that requires a strategic approach. 
SEC's previous experience highlights the importance of developing a plan to ensure that (1) any adopted alternative standards of creditworthiness for a particular rule facilitate its purpose and (2) examiners have the requisite skills to determine that the adopted standards have been applied. Without such a plan, SEC may develop alternative standards of creditworthiness that are not effective in supporting the purpose of a particular rule. Among its stated goals, the Act intended to improve competition among NRSROs by creating a more efficient and transparent NRSRO designation process for SEC to administer. The Act made receiving an NRSRO designation easier and resulted in three new subscriber-pays NRSROs. However, industry concentration remains high. Given the limited time that has passed since SEC implemented the registration program, the impact of the credit crisis on the securities market, and uncertainty about changes in the regulatory environment, significant changes in industry concentration in the short term are not likely. However, whether any changes in industry concentration will materialize in the long term is still unclear. The credit rating industry continues to exhibit its traditional barriers to entry, such as high fixed costs, that make gaining customers and achieving significant coverage of the securities market a challenge for new entrants. In some asset classes, such as structured finance, these barriers are especially high. Recognizing this, SEC amended Exchange Act rule 17g-5 to increase the number of structured finance ratings by requiring issuers that contract with an NRSRO for a rating to agree to make the underlying data available free of charge to other interested NRSROs in order to encourage nonhired NRSROs to produce unsolicited ratings. Because this rule became effective in June 2010, not enough time has passed to assess its impact. However, we note that the impact of competition on the quality of credit ratings is not well understood. The limited academic research conducted in this area suggests that the entry of new credit rating agencies will improve overall information available to investors and other market participants, but that the effect of entry on the quality of ratings produced by any one rating agency is not well established. To assist Congress and others in evaluating proposed models for compensating NRSROs, we created an evaluative framework of seven factors that any compensation model should address to be effective. The Dodd-Frank Act contains a mandate for SEC to conduct a study of the feasibility of establishing a system in which a public or private utility or a self-regulatory organization assigns NRSROs to determine the credit ratings of structured finance products. SEC also must evaluate alternative means for compensating NRSROs that would create incentives for accurate credit ratings. Our framework could be used to evaluate current proposals for compensating NRSROs, develop new proposals, and identify tradeoffs among them. 
Recommendations for Executive Action: To address the concern that the current process for registering credit rating agencies may result in SEC approving applicants that do not meet the Act's requirements, the Chairman of the Securities and Exchange Commission should: * Identify the additional authorities and time frames necessary to ensure that staff can verify the accuracy of the information provided on Form NRSRO and that the applicant meets all of the Act's requirements; the Chairman should also work with Congress to ensure that SEC has the authority needed to effectively carry out its oversight responsibilities. To ensure that SEC has sufficient staff with the skills necessary to address the requirement in the Dodd-Frank Act that SEC establish an Office of Credit Ratings and examine each NRSRO every year, the Chairman of the Securities and Exchange Commission should: * Develop and implement a plan for the establishment of this office that includes the identification of the number of staff and the skills required of these staff to meet the required examination timetable and provide quality oversight of the NRSROs, including plans for the recruitment of any new hires and appropriate training. To address the inconsistencies in the NRSROs' methodologies for calculating required performance statistics and total outstanding ratings for initial and updated Form NRSRO filings, address limitations in the required 10 percent and 100 percent rating history disclosures, and increase the comparability and usefulness of these disclosures, the Chairman of the Securities and Exchange Commission should take the following eight actions: * for the disclosures of required performance statistics, - provide specific guidance for NRSROs for calculating and presenting these performance statistics, considering the impact of different methodologies on the information content of the performance statistics and the purpose for which SEC intends the statistics to be used; and: - evaluate the appropriateness of SEC's currently designated asset classes for presenting performance statistics, and where SEC determines that the asset classes are not appropriate, modify the requirements accordingly; * for the disclosures of required 10 percent and 100 percent ratings histories, - ensure that the data elements required as part of the datasets allow users to construct complete ratings histories, identify the beginning of ratings histories, and distinguish between different types of ratings; - consider requiring NRSROs to publish a codebook to explain the variables included in the datasets; - clarify that NRSROs should include defaults in the ratings histories disclosed; - review its guidance to NRSROs for generating the 10 percent samples and modify it as needed to ensure that the samples are 10 percent of the type of ratings typically analyzed in each asset class, that withdrawn ratings are not removed from these samples, and that the samples are periodically redrawn; and: - review its guidance to NRSROs for generating the 100 percent rating history disclosures and modify it as needed to ensure that these histories include those ratings that are typically analyzed in each asset class and that withdrawn ratings are not removed from these disclosures; * for the disclosures of total outstanding ratings required on Form NRSRO, - provide specific guidance to NRSROs for calculating their total outstanding ratings. 
To address the Dodd-Frank Act's requirement for SEC to remove any references to or requirements of reliance on credit ratings in its rules and substitute alternative standards of creditworthiness that it deems appropriate, the Chairman of the Securities and Exchange Commission should: * develop and implement a plan for approaching the removal of NRSRO references from SEC rules to help ensure that (1) any adopted alternative standards of creditworthiness for a particular rule facilitate its purpose and (2) examiners have the requisite skills to determine that the adopted standards have been applied. Agency Comments and Our Evaluation: We provided a draft of the report to the SEC Chairman for her review and comment. SEC provided written comments that are summarized below and reprinted in appendix III. We also received technical comments from SEC that were incorporated, where appropriate. In its written comments, SEC agreed with our findings. With respect to our recommendation that addresses concerns about the current registration process, SEC stated that SEC staff intend to develop proposals to provide Congress with technical assistance on how the relevant portions of the Securities Exchange Act of 1934 could be amended to address the issues we identified with the registration process for NRSROs. Regarding our recommendation to develop and implement a plan for the establishment of the Office of Credit Ratings, SEC stated that it anticipates drawing upon our findings as it staffs this new office. SEC stated that it is in the process of hiring staff to meet its new responsibilities under the Dodd-Frank Act relating to NRSROs and has allocated between 25 and 35 positions to this new office. With respect to the recommendations related to improving the comparability of the NRSROs' required disclosures of performance statistics, SEC noted the Dodd-Frank Act mandates that it adopt rules that require NRSRO disclosures of performance-related data to be comparable among NRSROs in order to allow users of credit ratings to compare the performance of credit ratings across NRSROs. SEC stated that our review of the existing disclosure requirements will be helpful to SEC staff in developing recommendations for the Commission in response to the congressional mandate. With respect to the recommendations related to addressing limitations in the required 10 percent and 100 percent rating history disclosures, SEC noted that on August 27, 2010, it published on its Web site the list of XBRL tags that NRSROs must use for these ratings history disclosure requirements. SEC believes this step may address some of our concerns regarding its guidance as to the data fields NRSROs must use for these disclosures. We encourage SEC to evaluate the extent to which these tags address the limitations we identified, and, to the extent that they do not, to take further action. As we discussed, these limitations render current ratings history disclosures largely unusable. With respect to our recommendation regarding SEC's approach to address the Dodd-Frank Act mandate that it remove NRSRO references from its rules and substitute alternative standards of creditworthiness that it deems appropriate, SEC agreed that any proposed alternative standards of creditworthiness for a particular rule should facilitate its purpose and that replacing NRSRO ratings with alternative standards will require that SEC ensure that examiners have the requisite skills to determine that the adopted standards have been applied. 
SEC stated that it has already hired senior examiners with specialized expertise and skills and that it continues to increase its expertise in these areas through its recruiting and training programs. We are sending copies of this report to interested congressional committees and the Chairman of SEC. The report will also be available at no charge on the GAO Web site at [hyperlink, http://www.gao.gov]. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Signed by: Orice Williams Brown: Director, Financial Markets and Community Investment: [End of section] Appendix I: Objectives, Scope, and Methodology: To discuss the implementation of the Credit Rating Agency Reform Act of 2006 (Act), focusing on Securities and Exchange Commission (SEC) rulemaking and SEC's implementation of the registration and examination programs for Nationally Recognized Statistical Rating Organizations (NRSRO), we reviewed the rules SEC has adopted to implement the Act, including the NRSRO registration program and its oversight of NRSROs. We reviewed the SEC Inspector General's August 2009 audit report for findings and recommendations pertaining to SEC's NRSRO registration program as well as SEC's July 2008 public report discussing its findings on examinations of selected NRSROs. We also reviewed copies of the Division of Trading and Markets' (Trading and Markets) internal memoranda to SEC documenting its review of NRSRO applications and internal memoranda outlining additional NRSRO monitoring. We obtained and reviewed the Office of Compliance Inspections and Examinations' (OCIE) guidance for conducting an NRSRO examination and reviewed completed NRSRO examinations. To understand additional changes to SEC's oversight of the NRSROs, we reviewed the recently passed Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act). We conducted interviews with Trading and Markets regarding the NRSRO application review process, and with SEC's Division of Investment Management (Investment Management) and the Financial Industry Regulatory Authority (FINRA) to obtain information on SEC's process for registering investment advisers and FINRA's process for registering new broker-dealer members, respectively. We also conducted interviews with OCIE staff regarding the NRSRO examinations that were initiated after the Act's passage and obtained information from OCIE on its staffing of the NRSRO examination team. To evaluate the performance-related NRSRO disclosures that the Act and SEC rules require, we analyzed SEC rules requiring NRSRO disclosure of performance statistics and ratings history samples. First, we analyzed and compared the 10 NRSROs' 2009 disclosures of performance statistics, focusing on the corporate and structured finance asset classes, and we reviewed voluntary disclosures of additional performance statistics by 5 of the NRSROs. We also reviewed the Dodd-Frank Act to understand its directive to SEC to ensure the comparability of NRSRO performance disclosures. Second, we assessed NRSRO ratings history data disclosed by the 7 issuer-pays NRSROs pursuant to SEC's 10 percent disclosure requirement. From each NRSRO's Web site, we obtained its 10 percent random samples of outstanding ratings.
We downloaded these data in February and March 2010.[Footnote 117] To assess the reliability and usability of the samples for generating comparative performance measures, we reviewed the samples and obtained information from each of the NRSROs on the methods used to draw and maintain the samples. We identified a number of issues that led us to determine that the data were not in a format that allowed us to generate comparative performance statistics. For example, we found that the data fields required by the rule were not always sufficient to identify a complete rating history for ratings in each of the seven samples and did not give us enough information to identify specific types of ratings for making comparisons. We also could not consistently identify the beginning of the ratings histories in all of the samples. Furthermore, we found that the guidance provided by SEC to NRSROs for generating the random samples does not ensure that the method used will result in samples that are representative of the population of credit ratings at each NRSRO. Because it was not possible to compare performance statistics across NRSROs in any reliable manner, we focused on identifying the issues we encountered when reviewing the available data and attempting such comparisons. We also analyzed SEC's 100 percent disclosure requirement by reviewing the rule and obtaining information from Trading and Markets and two NRSROs on its implementation. As this rule did not become effective until June 2010, we did not review the data disclosed by NRSROs pursuant to the rule, with the exception of one NRSRO. We conducted a limited review of the data disclosed by this NRSRO to determine the extent to which the data included ratings of issuers rated prior to June 26, 2007. As part of this objective, we also assessed the NRSROs' disclosures of total outstanding ratings required on Form NRSRO. We reviewed these disclosures and SEC's instructions pertaining to these disclosures and obtained information from the NRSROs on their methods for determining total outstanding ratings. To evaluate the potential regulatory impact of removing NRSRO references from certain SEC rules, we reviewed SEC's proposed rules to remove references to NRSRO ratings, focusing on proposed amendments to rule 2a-7 under the Investment Company Act of 1940 (Investment Company Act) and rule 15c3-1 under the Securities Exchange Act of 1934 (Exchange Act), and the comment letters submitted to SEC on these proposals. We reviewed multiple studies pertaining to the use of ratings in regulations. To understand the extent to which NRSRO ratings are used in U.S. federal securities, banking, and insurance laws, rules, and regulations, we obtained a copy of a Joint Forum report documenting their use. To understand how the ratings were used in the rules and the regulatory impact the removal of the ratings might have, we reviewed OCIE's examination guidance for reviewing for compliance with rules 2a-7 and 15c3-1. We also reviewed the 65 examinations of money market funds OCIE completed in FY 2003-2009, which reviewed funds for compliance with rule 2a-7. We did so to understand how examiners ensured that funds complied with the rule's two-part test for determining if a security was eligible for purchase and how the removal of NRSRO ratings might affect oversight of this determination. We reviewed the Dodd-Frank Act to understand its directive to SEC and other federal agencies to remove NRSRO references from their rules and regulations.
We also interviewed staff from Investment Management to better understand the purpose of rule 2a-7, and how the removal of references might affect oversight. We conducted interviews with OCIE staff, Trading and Markets staff, and market participants and observers about the pros and cons of utilizing NRSRO references in, and the potential impact of removing them from, regulations. To evaluate the impact of the Act on competition among NRSROs, we reviewed proposed and final SEC rules intended to promote competition among rating agencies, as well as the comment letters SEC received in response to those rules. We reviewed SEC's 2008 and 2009 mandated annual reports on NRSROs, including SEC's studies on competition in the credit rating industry. We used the Herfindahl-Hirschman Index (HHI) to track industry concentration over time.[Footnote 118] The HHI is a measure of industry concentration that reflects both the number of firms in the industry and each firm's market share. It is calculated by summing the squares of the market shares of each firm competing in the market. The HHI reflects both the distribution of market shares of the top firms and the composition of the market outside the top firms. The HHI is measured on a scale of 0 to 10,000, with larger values indicating more concentration. According to the Department of Justice and Federal Trade Commission, markets in which the value of the HHI is between 1,500 and 2,500 points are considered to be moderately concentrated, and those in which the value of the HHI is in excess of 2,500 points are considered to be highly concentrated, although other factors also play a role. Calculating the HHI requires defining what constitutes the industry and specifying our measure of market share. We define the relevant industry as the set of credit rating agencies that have NRSRO status, and we use a variety of market share definitions to ensure that any trends in industry concentration we observe are robust to alternative specifications of NRSROs' market shares. A firm's market share typically is measured in terms of dollars, as either its sales or revenue as a fraction of total sales or revenue for all firms in the industry, or in terms of quantities, as its output as a fraction of total output produced by all firms in the industry. We measured an NRSRO's overall market share as its revenue as a share of total revenue for the industry. We measured an NRSRO's market share in each asset class as the number of debt-security-issuing organizations or entities it rates as a percent of the sum of the numbers of organizations that each NRSRO rates, in which the number of organizations rated is our proxy for output. Finally, for the ABS asset class, we measured an NRSRO's market share as the dollar value of new U.S. issuance it rated as a percentage of the sum of the dollar value of issuance each NRSRO rated. We then calculated the HHI using these definitions of market share. We obtained the revenue data for 2006-2009 from the NRSROs' initial and annual Form NRSRO filings. We obtained the number of organizations or entities that issue debt securities and are rated by the NRSROs per asset class for 2006-2009 from 9 of the 10 NRSROs.[Footnote 119] We obtained data on the dollar amount of new U.S.-issued asset-backed debt rated by NRSROs for 2004 through June 2010 from an industry newsletter that tracks asset-backed securitization. Because these data came from private firms, we were able to conduct only limited assessments of the data's reliability.
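The following sketch is a minimal illustration of the HHI calculation described above; it assumes market shares expressed in percentage points, and the four shares shown are hypothetical rather than drawn from the NRSRO data discussed in this appendix:

def hhi(market_shares_percent):
    # Herfindahl-Hirschman Index: the sum of squared market shares.
    # With shares in percentage points, the index ranges from near 0
    # (many small firms) to 10,000 (a single firm with a 100 percent share).
    return sum(share ** 2 for share in market_shares_percent)

def concentration_band(index):
    # Classify an HHI using the Department of Justice and Federal Trade
    # Commission thresholds cited above.
    if index > 2500:
        return "highly concentrated"
    if index >= 1500:
        return "moderately concentrated"
    return "unconcentrated"

# Hypothetical industry of four firms whose shares sum to 100 percent.
shares = [40.0, 30.0, 20.0, 10.0]
index = hhi(shares)  # 1,600 + 900 + 400 + 100 = 3,000
print(index, concentration_band(index))  # prints: 3000.0 highly concentrated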
Because the data came from private firms, we were not, for example, able to conduct our own reliability tests on the data. For the revenue data obtained from the NRSROs' responses to Form NRSRO, we interviewed the staff from Trading and Markets to determine the steps they took to ensure the data represented the firms' total revenues. As such, we determined that the data were sufficiently reliable for our purpose, which was to show the relative concentration of the NRSROs in the industry. For the data we obtained from the NRSROs on the number of rated organizations or entities that issue debt securities, we specified the methods by which the NRSROs were to count and classify the ratings. We did this to ensure consistency in the data across the NRSROs. We also examined the data, both within an NRSRO and among NRSROs, to determine whether there were any illogical trends that would indicate the data had been prepared incorrectly. As such, we determined that the data were sufficiently reliable for our purpose, which was to show the relative concentration of the NRSROs in the industry. For the data we obtained from the private firm tracking the dollar value of rated new U.S. asset-backed securitization, we obtained information from the firm on the processes and procedures used to collect and manage the data. We determined that the data were sufficiently reliable for our purpose, which was to show the relative concentration of the NRSROs in the industry. We also used these data to generate a series of descriptive graphs showing the NRSROs' market coverage, that is, the percentage of new U.S. asset-backed issuance that each rated. We also believe that the data are sufficiently reliable for this purpose. We also reviewed academic studies on competition in the industry. We identified three studies that analyze data to assess the impact of the number of rating agencies on some aspect of the credit rating industry. We identified these studies by searching databases of both unpublished working papers and papers published in refereed academic journals and by searching the bibliographies of studies we found in the databases.[Footnote 120] The three studies we identified are all unpublished working papers. We reviewed these studies and determined they did not raise any serious methodological concerns. However, the inclusion of these studies is purely for research purposes and does not imply that we deem them to be definitive. Finally, we obtained the views of SEC's Office of Risk Assessment, the NRSROs, credit rating agencies that are not registered NRSROs, institutional investors, issuers, and industry experts on the impact of the Act on competition. To provide an overview of proposed alternative models for compensating NRSROs and an evaluative framework for assessing the models, we identified proposals on alternative models for compensating NRSROs by reviewing white papers submitted to the SEC roundtable on credit rating agency oversight and academic and other white papers, and by interviewing the authors of the proposed models. We obtained the views of Trading and Markets, SEC's Office of Risk Assessment, and NRSROs. We also spoke with credit rating agencies that are not registered NRSROs, institutional investors, issuers, and academic and industry experts. To develop the framework for evaluating the models, we reviewed prior GAO reports and obtained the views of market participants and observers to identify appropriate factors for inclusion.
We then convened a panel of GAO experts (financial markets specialists, economists, an attorney, and a social scientist) to review the framework and incorporated their comments. Finally, we solicited comments from NRSROs, proposal authors, industry experts, and trade associations and incorporated them as appropriate. We conducted this performance audit from May 2009 to September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. [End of section] Appendix II: Other Registration Processes Provide Greater Flexibility to the Regulators: Other registration programs for securities market participants allow regulators to exercise greater oversight over applicants than does the registration program for Nationally Recognized Statistical Rating Organizations (NRSROs) while maintaining an efficient and transparent registration process. More specifically, in the Securities and Exchange Commission's (SEC) Division of Investment Management's (Investment Management) registration program for investment advisers and the Financial Industry Regulatory Authority's (FINRA) registrant application process for broker-dealers, staff can request additional information from the applicant in specific circumstances, extend the review process, and reject an application for a broader set of reasons. SEC's registration process for investment advisers is authorized by section 203 of the Investment Advisers Act of 1940 (Investment Advisers Act), which identifies generally the type of information required in an application for registration as an investment adviser and prescribes a time frame, 45 days, by which SEC must act on an application.[Footnote 121] Investment advisers apply by completing and submitting Form ADV, which requires information such as the type of company that is applying, what businesses it will be involved in, its assets under management, its employees and clients, compensation and ownership arrangements, any financial industry affiliates, and whether the adviser or any person associated with the adviser is subject to certain disciplinary events.[Footnote 122] SEC must grant registration to applicants if it finds that the requirements of section 203 are satisfied. If SEC institutes a proceeding to determine whether registration should be denied, it must deny the registration if it does not make such a finding or if it finds that, were the applicant so registered, its registration would be subject to suspension or revocation. Although Investment Management oversees the rules and policies regarding the registration of investment advisers, the Office of Compliance Inspections and Examinations (OCIE) is responsible for reviewing the application materials and evaluating whether the application is complete. If additional information is necessary to consider the application or clarify inconsistencies in the information provided, OCIE contacts the applicant with questions and requests additional information. If additional information is required, applicants submit an amended Form ADV, triggering a new 45-day review period.
Investment Management officials characterized the process as efficient, and stated that OCIE completes application reviews in an average of 30 days, with variations due to factors such as volume of applications and complexity. Staff stated that incomplete applications can be placed in a postponed status. When postponed, OCIE contacts the applicant through an official letter, which typically states that SEC received the application and describes the parts of the filing that need to be corrected or completed. While staff await a response from the applicant, the running of the 45-day review period is automatically suspended pending receipt by SEC of the additional information necessary to complete the application. Staff said that sometimes an applicant may never provide the requested information or correct the deficiency in the application. In these situations, OCIE considers the application incomplete. Once a complete application is received, a review of disciplinary information is completed to determine if there is a reason to recommend that SEC deny, condition, or limit the registration. If there is none, the application is approved. FINRA's registrant process also imposes a time frame on FINRA staff for reviewing broker-dealer applications for registration. Under FINRA Rule 1014, FINRA staff must make a decision no more than 180 days from the filing date unless the applicant requests an extension and FINRA agrees to it. Broker-dealers applying for registration must submit an application providing information demonstrating they meet the 14 standards specified by the FINRA rule. These include that they have adequate financial and managerial resources, supervisory systems, and compliance procedures designed to detect and prevent violations of securities laws and related rules; recordkeeping systems that enable compliance with regulatory recordkeeping requirements; and staff sufficient in qualifications and number to prepare and preserve required records. FINRA's Department of Member Regulation reviews and either approves, approves with restrictions, or denies broker-dealer member applications, and staff have the authority, if there are any questions, to make additional requests for information. The department also conducts a membership interview to further clarify the application material, after which additional information may be requested and, thereafter, a final decision is made. According to FINRA officials, staff frequently have additional questions. For example, staff may have factual questions about capital, employee registrations, leases, or the location of the broker-dealer facility. While FINRA officials said that at no point should a decision on an application take more than 180 days unless agreed to by both the applicant and FINRA, they also stated that if staff did not receive, within prescribed time frames, the information they requested to satisfy any questions, they would, absent good cause, reject the application or allow it to lapse. They also can reject an application at the time it is initially filed if that application has a material deficiency (i.e., it was not substantially complete); for example, if staff were unable to tell what the applicant's business was going to do, how the business would be supervised, or who was intending to fund the broker-dealer.
If applicants disagree with the department's decision to deny or restrict an application, they can appeal internally to FINRA and then to SEC, and finally to a federal circuit court.[Footnote 123] According to FINRA officials, the department reviews more than 200 new member applications in a year, and decisions often take less than 180 days. They estimated that about 80 percent are completed within 180 days. FINRA officials stated that the process allows FINRA to restrict certain activities and deny unqualified applicants. Both FINRA's registrant application process for broker-dealers and SEC's registration process for investment advisers require applicants to provide specific information and use deadlines to ensure an efficient process. However, unlike the NRSRO registration program, staff of these programs are provided the authorities necessary to clarify any outstanding questions they have regarding an applicant and to delay approving that applicant until the staff are satisfied that the applicant has met all of the necessary requirements. [End of section] Appendix III: Comments from the Securities and Exchange Commission: United States: Securities And Exchange Commission: Washington, D.C. 20549: September 10, 2010: Ms. Orice Williams Brown: Director: Financial Markets and Community Investment: Government Accountability Office: 441 G Street, NW: Washington, DC 20548: Dear Ms. Brown: Thank you for the opportunity to comment on the Government Accountability Office (GAO) draft report entitled Securities and Exchange Commission: Action Needed to Improve Rating Agency Registration Program and Performance-Related Disclosures (GAO-10-782). At the outset, we want to acknowledge the effort that went into the preparation of this report and the thoughtful analysis and suggestions it provides. The report details the GAO staff's concerns in connection with the registration process for nationally recognized statistical rating organizations ("NRSROs"), including in particular its concerns regarding the constraints placed on the Commission's authority to review registration applications by the strict statutory deadlines and the inability to conduct pre-registration examinations of applicants. We appreciate the report's analysis of these issues and concur with the conclusion that legislative changes would be necessary to allow the Commission staff to conduct pre-registration examinations. The Commission staff intends to develop proposals to provide Congress with technical assistance for how the relevant portions of the Securities Exchange Act of 1934 (the "Exchange Act") could be amended to address these issues based on the GAO's analysis. We anticipate that the other analyses, findings, and recommendations in the report will be of help to the Commission staff as it develops recommendations for the Commission to implement the requirements in the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 (the "Dodd-Frank Act") with respect to the Commission's oversight of NRSROs. As you know, the Dodd-Frank Act contains a number of provisions impacting the Commission's regulatory program for NRSROs, several of which directly involve issues discussed in the report. For example, the Dodd-Frank Act requires the Commission to establish an Office of Credit Ratings staffed sufficiently to carry out fully the Commission's oversight responsibilities with respect to NRSROs, including the new requirement to perform annual examinations of each NRSRO.
The Commission currently is in the process of hiring personnel to meet its new responsibilities under the Dodd-Frank Act relating to NRSROs, and has allocated between twenty-five and thirty-five positions to this Office. We appreciate the GAO staff highlighting the considerations that should inform this staffing process and anticipate drawing upon the report's findings and recommendations in this area. We also appreciate the GAO's recognition of our efforts to maintain our current examination cycle goals with limited resources. [Footnote 1] In addition, the Dodd-Frank Act mandates that the Commission adopt rules that require NRSRO disclosures of performance-related data to be comparable among NRSROs in order to allow users of credit ratings to compare the performance of credit ratings across NRSROs. The GAO staff's comprehensive review of the Commission's existing performance data disclosure requirements and their recommendations as to how to increase the comparability and usefulness of these disclosures will be an asset to the Commission staff in carrying out its own review of the existing disclosure requirements and developing recommendations for the Commission in response to the congressional mandate. We note that on August 27, 2010, the Commission published on its Internet Web site the List of XBRL Tags for NRSROs to be used for the ratings history disclosure requirements of paragraph (d) of Rule 17g-2 under the Exchange Act. We believe this step may address some of the GAO's concerns regarding the Commission's guidance as to the data fields NRSROs must use for these disclosures. We also note that the Dodd-Frank Act requires the Commission to undertake a study on the feasibility and desirability of, among other things, standardizing credit rating terminology across asset classes, so that named ratings correspond to a standard range of default probabilities and expected losses independent of asset class and issuing entity. The work of the GAO staff on how NRSROs calculate performance statistics will be helpful in performing this study. The Dodd-Frank Act also requires the Commission, as well as every other Federal agency, to review any regulation that requires the use of an assessment of the creditworthiness of a security or money market instrument and to remove any reference to or requirement of reliance on credit ratings, substituting such standards of creditworthiness as it determines is appropriate. As the report notes, the Commission first proposed removing references to credit ratings in its regulations in July 2008 and, in October 2009, voted to remove such references from a number of Commission rules and forms. We agree with the report's conclusions regarding the importance of ensuring that any proposed alternative standards of creditworthiness for a particular rule facilitate the purpose of that rule. The report further notes that replacing NRSRO ratings with another standard of creditworthiness will require the Commission to ensure that examiners have the requisite skills to determine that the adopted standards have been applied. We agree and note that the SEC has already hired senior examiners with specialized expertise and skills, and continues to increase its expertise in these areas through its recruiting and training programs. 
Finally, the Dodd-Frank Act requires the Commission to carry out a study of and prepare a report on (1) the credit rating process for structured finance products and the conflicts of interest associated with the issuer-pay and the subscriber-pay models, (2) the feasibility of establishing a system in which a public or private utility or a self-regulatory organization assigns NRSROs to determine the credit ratings of structured finance products, (3) the range of metrics that could be used to determine the accuracy of credit ratings, and (4) alternative means for compensating NRSROs that would create incentives for accurate credit ratings. We anticipate that the seven-factor framework for evaluating alternative compensation models set forth in the report will be a valuable resource to the Commission staff in carrying out this study and in making recommendations for any rulemaking determined to be necessary or appropriate in the public interest or for the protection of investors. On behalf of the Commission staff, we thank the GAO staff for its work on the report as well as for the opportunity to review and comment on the draft before the report is issued in its final form. Sincerely, Signed by: Robert W. Cook: Director: Division of Trading and Markets: Signed by: Carlo di Florio: Director: Office of Compliance: Inspections & Examinations: Footnote: [1] The report states that the SEC has been unable to meet its planned routine examination cycle of examining the three largest NRSROs every two years and the remaining NRSROs every three years. We note that NRSROs did not become subject to SEC examination until 2007. We conducted examinations of the three largest NRSROs in fiscal year 2008 and initiated subsequent examinations of the same three NRSROs in fiscal year 2010. In addition, we have completed or initiated examinations of all remaining registered NRSROs. [End of section] Appendix IV: GAO Contact and Staff Acknowledgments: GAO Contact: Orice Williams Brown, (202) 512-8678 or williamso@gao.gov: Staff Acknowledgments: In addition to the individuals named above, Karen Tremba, Assistant Director; Lucas Alvarez; William Chatlos; Rachel DeMarcus; Courtney LaFountain; Elizabeth Jimenez; Stefanie Jonkman; Matthew Keeler; Omyra Ramsingh; and Barbara Roesmann made key contributions to the report. [End of section] Footnotes: [1] Section 3(a)(60) of the Exchange Act (codified at 15 U.S.C. § 78c(a)(60)). [2] 17 C.F.R. § 240.15c3-1. Rule 15c3-1, also known as the net capital rule, generally defines net capital as a broker-dealer's net worth (assets minus liabilities), plus certain subordinated liabilities, less certain assets that are not readily convertible into cash, and less a percentage of certain other liquid assets (for example, securities). In computing their net capital, broker-dealers are required to deduct from their net worth certain percentages of the market value of their proprietary securities positions, known as a haircut. NRSRO ratings are used, along with other factors, to determine the haircut for each security. [3] The three largest NRSROs are Standard & Poor's, Moody's Investors Service, and Fitch Ratings. These firms were criticized for rating Enron as investment grade until only 4 days before the company filed for bankruptcy on December 2, 2001. [4] See SEC, Report on the Role and Function of Credit Rating Agencies in the Operation of the Securities Markets, As Required by Section 702(b) of the Sarbanes-Oxley Act of 2002 (Washington, D.C.: Jan. 24, 2003).
The practice of issuers paying for their ratings creates the potential for a conflict of interest. Arguably, the dependence of rating agencies on revenues from the companies they rate could induce them to rate issuers more liberally, and temper their diligence in probing for negative information. This potential conflict could be exacerbated by the rating agencies' practice of charging fees based on the size of the issuance, as large issuers could be given inordinate influence with the rating agencies. [5] Under the no-action letter process, credit rating agencies requested recognition from SEC as an NRSRO by requesting "no action" relief. If SEC staff determined that the rating agency could be considered an NRSRO, it issued a "no-action" letter stating it would not recommend enforcement action to the Commission if ratings were used by registrants for regulatory compliance purposes. If SEC staff concluded the rating agency should not be considered an NRSRO, the Commission would issue a letter denying a request for no-action relief. [6] Pub. L. No. 109-291, 120 Stat. 1327 (Sept. 29, 2006) (amending the Securities Exchange Act of 1934 and codified at various sections of title 15 of the U.S. Code). [7] Act of June 6, 1934, ch. 404, Title I (codified, as amended, at 15 U.S.C. §§78a et seq.). [8] RMBS are debt obligations that represent claims to the cash flows from pools of residential property loans. Beginning in 2007, delinquency and foreclosure rates for subprime mortgage loans in the United States dramatically increased, creating turmoil in the markets for RMBS backed by such loans and other securities products related to those securities. As the performance of these securities began to deteriorate, the three rating agencies most active in rating these instruments downgraded a significant number of their ratings. [9] See SEC (Office of Compliance Inspections and Examinations, Division of Trading and Markets, and Office of Economic Analysis), Summary Report of Issues Identified in the Commission Staff's Examinations of Select Credit Rating Agencies (Washington, D.C.: Jul. 8, 2008). [10] Act of August 22, 1940, ch. 686, title II, 54 Stat. 789 (codified, as amended, at 15 U.S.C. §§ 80a-1 et seq.). Investment Company Act rule 2a-7 governs the operation of money market funds, which rely on the rule to use valuation and pricing methods different from those that other investment companies are permitted to use, to help maintain a stable share price. The rule contains conditions that restrict money market funds' portfolio investments to securities that have either received certain minimum credit ratings from NRSROs or are comparable unrated securities. [11] The HHI is one of the concentration measures that government agencies, including the Department of Justice (DOJ) and the Federal Trade Commission (FTC), use when assessing concentration to enforce U.S. antitrust laws. [12] Pub. L. No. 111-203, 124 Stat. 1376 (July 21, 2010). This act includes a number of provisions intended to improve the regulation of credit rating agencies. For example, it increases internal control requirements at NRSROs and directs SEC to issue rules requiring each NRSRO to submit to it an annual internal controls report. The Dodd-Frank Act also directs SEC to issue rules requiring greater transparency of NRSRO rating procedures and methodologies and requires SEC to establish an Office of Credit Ratings to conduct annual examinations of the NRSROs.
[13] For example, one NRSRO stated in its 2010 Form NRSRO disclosure that since its ratings are typically procured by institutional investors and not issuers, it does not have ready access to management of the issuer being analyzed. As a result, it does not rely heavily on information obtained during discussion with issuer management. [14] We identified and reviewed 24 studies dated 2000 or later that analyzed the impact of credit ratings on some aspect of the financial markets. We reviewed these studies and determined that they did not raise any serious methodological concerns. However, the inclusion of these studies is purely for research purposes and does not imply that we deem them to be definitive. [15] 72 Fed. Reg. 33564, 33619-36. [16] Oversight of Credit Rating Agencies Registered as Nationally Recognized Statistical Rating Organizations, 72 Fed. Reg. 33564, 33619-36 (June 18, 2007)(Final Rule)(codified, as amended, at 17 C.F.R. §§ 240.17g-1 - 240.17g-6 and 17 C.F.R. § 249b.300) (2010). [17] Amendments to Rules for Nationally Recognized Statistical Rating Organizations, 74 Fed. Reg. 6456, 6482-84 (Feb. 2, 2009)(Amending 17 C.F.R. §§ 240.17g-2, 240.17g-3, 240.17g-5 and Form NRSRO); Amendments to Rules for Nationally Recognized Statistical Rating Organizations, 74 Fed. Reg. 63833, 63863-65 (Dec. 4, 2009)(Amending 17 C.F.R. §§ 240.17g-2, 240.17g-5 and 243.100). In December 2009, SEC proposed rules that would, among other things, require NRSRO compliance officers to furnish an annual report to SEC, disclose additional information about sources of revenues on Form NRSRO, and make publicly available a consolidated report containing information about revenues of the NRSRO attributable to persons paying the NRSRO for the issuance or maintenance of a credit rating. Proposed Rules for Nationally Recognized Statistical Rating Organizations, 74 Fed. Reg. 63866, 63901-04 (Dec. 4, 2009)(Proposed amendments to 17 C.F.R. §§ 240.17g-3, 249b.300 and Form NRSRO and proposed new § 240.17g-7). [18] The concerns outlined in the July 8, 2008, report were identified during SEC examinations of the three largest NRSROs. We discuss these examinations in more detail later in this section. [19] Some credit rating agencies stated they tried to obtain NRSRO status for over a decade under the previous no-action letter process. In addition, in the prior no-action letter process, SEC would conduct examinations prior to providing a credit rating agency with a no-action letter. [20] S. Rep. No. 109-326, at 7-8 (2006). [21] Section 15E(a)(1)(D) exempted from this requirement those credit rating agencies that had been designated as NRSROs by staff prior to August 2, 2006. [22] Pub. L. No. 109-291, § 4, 120 Stat. at 1329-38 (codified at 15 U.S.C. §78o-7). [23] 15 U.S.C. § 78o-7(a)(2)(B). [24] To date, two credit rating agencies have agreed to an extension of the application period. In its response to the SEC Inspector General's report on the NRSRO registration program, Trading and Markets staff stated that for at least one application its interactions with the applicant during the application process made clear that the applicant would not have consented to such an extension. See SEC, Office of Audits, The SEC's Role Regarding and Oversight of Nationally Recognized Statistical Rating Organizations (NRSROs), Report No. 458 (August 27, 2009). [25] 15 U.S.C. § 78o-7(a)(2)(C). [26] Section 15E(d) of the Exchange Act [15 U.S.C.
§ 78o-7(d)] sets out the circumstances under which the Commission can censure an NRSRO; place limitations on its activities, functions, or operations; or suspend or revoke its registration. For example, these include circumstances in which a person associated with the NRSRO has been convicted of certain civil or criminal offenses or has been the subject of a Commission order barring or suspending the right of the person to be associated with an NRSRO. The Act allows SEC to deny an NRSRO applicant's registration if it determines that the applicant or a person associated with it has committed or omitted any act, or is subject to an order or finding, enumerated under Section 15E(d). [27] The 10 NRSROs that SEC approved for registration as NRSROs as part of the new registration program are A.M. Best Company, Inc.; DBRS Ltd.; Fitch, Inc.; Japan Credit Rating Agency, Ltd.; Moody's Investors Service; Rating and Investment Information, Inc.; Standard & Poor's Ratings Services; Egan-Jones Rating Company; LACE Financial Corp.; and Realpoint LLC. [28] The Commission has recently instituted proceedings to determine if an applicant should be denied registration as an NRSRO. We did not review the memorandum provided to the Commission for this applicant. However, the Order Instituting Administrative Proceedings identified for consideration two grounds for denial: (1) whether the applicant has sufficient connection with U.S. interstate commerce to register as an NRSRO and thereby invoke the regulatory and oversight authority of the SEC; and (2) whether the application should be denied on grounds that if registered as an NRSRO, the applicant would be subject to having its registration suspended or revoked under section 15E(d)(1) of the Exchange Act because, in light of requirements in its home jurisdiction, the applicant would be unable to comply with provisions of the U.S. securities laws and rules, including, in particular, Section 17 of the Exchange Act and Rules 17g-2 and 17g-3. See 75 Fed. Reg. 20645-46 (Apr. 20, 2010). [29] In one case, Trading and Markets staff asked an NRSRO applicant to consent to a 2-day extension of the 90-day review requirement to allow a Commissioner, who had been unable to vote on the application earlier, to vote. The applicant initially resisted granting the Commission the extension. Ultimately, it consented to the 2-day extension but made clear it would not consent to a longer time period. Trading and Markets staff said they have requested additional documents from applicants in other cases but, without express examination authority, an applicant may deny such requests. SEC does not have the authority to conduct an examination before approving a credit rating agency as an NRSRO. [30] One applicant consented to two extensions, one for 7 days and another for 14 days. [31] As indicated by SEC's public order, in the case of the applicant that SEC has instituted proceedings to determine if registration should be denied, the information provided that led to the institution of proceedings was not related to the financial or managerial resources or other types of "qualitative concerns" raised by staff in the other memoranda. SEC instituted proceedings based on the fact that due to its home jurisdiction, the applicant may not be able to comply with provisions of the U.S. securities laws and rules, in particular the requirement that credit rating agencies make their books and records available to examiners without notice. [32] See, e.g., S. Rep. No. 109-326, at 7-8.
[33] See appendix II for a more detailed discussion of these registration programs. [34] The Market/Self-Regulatory Organization (SRO) Oversight groups within OCIE are responsible for examining SROs to ensure that they and their members comply with applicable federal securities laws and SRO rules. The SROs include national stock exchanges, such as the New York Stock Exchange, and national securities associations, such as FINRA. Other SROs are registered clearing agencies and the Municipal Securities Rulemaking Board. [35] Because these examinations were conducted before the establishment of the new NRSRO examination team, examiners from the Office of Market Oversight/Self-Regulatory Organizations performed them. [36] OCIE wrapped any outstanding issues and examination work from the unfinished examination into a new examination initiated to fulfill the Dodd-Frank Act requirement that every NRSRO be examined annually. [37] Section 15E(a)(1)(B)(i) requires an application for NRSRO registration to contain information regarding credit rating performance measurement statistics over short-, medium-, and long-term performance measurements. 15 U.S.C. §78o-7(a)(1)(B)(i). In addition, section 15E(a)(3) directs the SEC to require, by rule, that a registered NRSRO make the information and documents submitted in its application for registration publicly available on the NRSRO's Web site. As part of its June 2007 rules implementing section 15E, SEC required NRSRO applicants and registered NRSROs to disclose on their Form NRSRO short-, medium-, and long-term historical transition and downgrade rates for each class of credit rating for which they are registered. The Form NRSRO was revised, effective April 2009, to require default and transition rates (the latter include both upgrades and downgrades) and specify short-, medium-, and long-term as 1-, 3-, and 10-year time periods. Amendments to Rules for Nationally Recognized Statistical Rating Organizations, 74 Fed. Reg. at 6483-84. [38] Several NRSROs published their own performance measures prior to SEC's requirements. The measures published vary considerably, but most of these NRSROs published some form of transition and default rates. In some cases, their statistics encompass longer time frames or focus on particular geographic regions or industry sectors. Some NRSROs publish other types of performance statistics. For example, two NRSROs also publish Lorenz curves, also sometimes called "power curves" or "cumulative accuracy profiles." Lorenz curves are visual tools for assessing the accuracy of the rank ordering of creditworthiness that a set of ratings provides. They are considered useful for comparing the relative accuracy of different rating systems or the relative accuracy of a single rating system measured at different points of time for different cohorts. However, the NRSROs' ability to publish performance statistics beyond what is required by SEC depends on data availability. One NRSRO explained that the largest NRSROs are able to generate more performance statistics at a more granular level than the smaller NRSROs because they have many more ratings in their database that span more years. [39] Three NRSROs did not calculate the transition rates for each rating category. Two provided the number of ratings in each rating category at the beginning of the rating period and the number of ratings that transitioned during the rating period. Users could calculate transition rates from this information.
The third provided the number of ratings that transitioned during the rating period, but did not provide the total number of ratings in each rating category. Users could not calculate transition rates from these data. [40] Some NRSROs created cohorts for a year based on ratings as of January 1 of that year. Other NRSROs created cohorts for a year based on ratings as of December 31 of the previous year. [41] One NRSRO provided the number of ratings that transitioned to default during the rating period, but did not provide the total number of ratings in each rating category. Users could not calculate default rates from these data. [42] Several NRSROs said they base performance statistics for the corporate, financial institution, and insurance company asset classes on issuer ratings, because some of the issuers in these asset classes are responsible for multiple issues, and the issuer's credit rating is highly correlated with the ratings on its issues. They said if performance statistics were based on issues for these asset classes, they could be biased toward the ratings performance of large issuers. On the other hand, they said that performance statistics for the structured finance asset class are based on issues. One NRSRO explained that this is because each issue has its own default probability. [43] Some NRSROs also presented lists of the firms that defaulted in the last 10 years or over the time period for which they had ratings histories, along with the initial rating assigned to the firm. [44] For example, one NRSRO reported conditional default rates relative to ratings at the beginning of a 3-year period for the 2007 cohorts. The NRSRO would group the rated entities by rating category. For each cohort, the NRSRO then would calculate first-year, second-year, and third-year survival rates. The first-year survival rate is the number of entities that did not default in 2007 divided by the number in the cohort. The number that survived the first year, 2007, is the number in the cohort minus the number that defaulted in 2007. Some NRSROs adjust for withdrawals by also subtracting the number of entities with ratings withdrawn in 2007. The second-year survival rate is the number of entities that survived the first year and did not default in 2008 divided by the number that survived the first year. The number of entities that survived the second year, 2008, is the number that survived the first year minus the number that defaulted in 2008 (with some NRSROs also subtracting the number of entities with ratings withdrawn in 2008). The third-year survival rate is the number of entities that survived the second year and did not default in 2009 divided by the number that survived the second year. The 3-year conditional default rate for a cohort is one minus the product of the first-year, second-year, and third-year survival rates. In a variation, one NRSRO reported conditional default rates relative to initial ratings, which it calculated with a similar method. [45] The Act mandated that SEC issue these rules within 9 months of the date of enactment. [46] Dodd-Frank Act § 932(q). [47] Cantor and Packer demonstrated that the observed default rates of bonds rated BBB or lower (typically the last rating in the investment-grade category) vary over time for a single NRSRO. Richard Cantor and Frank Packer, "The Credit Rating Industry," Quarterly Review, Federal Reserve Bank of New York, 19, no. 2 (1994).
Blume, Lim, and Mackinlay find evidence that one rating agency applied more stringent rating standards between 1978 and 1995, so that firms with the same observable characteristics were assigned lower ratings in later years than they were assigned in earlier years. See Marshall E. Blume, Felix Lim, and A. Craig Mackinlay, "The Declining Credit Quality of U.S. Corporate Debt: Myth or Reality?" Journal of Finance, 53, no. 4, Papers and Proceedings of the Fifty-Eighth Annual Meeting of the American Finance Association, Chicago, Illinois, January 3-5, 1998 (August 1998), pp. 1389-1413. [48] We did not evaluate the adequacy of these disclosures. [49] Oversight of Credit Rating Agencies Registered as Nationally Recognized Statistical Rating Organizations, 72 Fed. Reg. 6378 (Feb. 9, 2007). [50] See 74 Fed. Reg. at 6482 (codified, as amended, at 17 C.F.R. § 240.17g-2(d)(2) (2010)). The 10 percent disclosure requirement became effective in August 2009. SEC did not apply the 10 percent disclosure requirement to subscriber-pays NRSROs. SEC noted in the final rule that because subscriber-pays NRSROs make their ratings available only for a fee, requiring them to make 10 percent of their outstanding ratings available for free could cause them to lose subscribers. [51] See 74 Fed. Reg. at 63863-65 (codified, as amended, at 17 C.F.R. § 240.17g-2(d)(3) (2010)). The 100 percent disclosure requirement became effective in June 2010. As part of both rules, SEC also required that the NRSROs make the ratings history data available on their Web sites in eXtensible Business Reporting Language (XBRL) format. The XBRL format is intended to provide a uniform standard format for presenting the data and allow users to dynamically search and analyze the data. SEC published the list of XBRL tags that the NRSROs must use to comply with this requirement on August 27, 2010. The NRSROs have 60 days after this date to publish the data using this format. [52] CUSIP stands for the Committee on Uniform Securities Identification Procedures. A CUSIP number consists of nine characters that uniquely identify a company or issuer and the type of security. CIK is the unique number that SEC's computer system assigns to individuals and corporations that file disclosure documents with SEC. CIK is an acronym for Central Index Key. All new electronic and paper filers, foreign and domestic, receive a CIK number. [53] Some securitizations--such as RMBS--are divided into different classes, or tranches. A tranche is a piece of a securitization that has specified risks and returns. [54] Sample size also may limit the kinds of comparative performance statistics that can be developed. Transition and default rates are more useful the larger the sample of data used to construct them. This is particularly true of default rates because defaults are rare events and may not be observed in samples that are too small. Some of the 10 percent samples have relatively small numbers of observations, particularly the samples of smaller NRSROs. For example, three of the samples had no observed defaults or impairments. [55] Some academic studies evaluate the comparative performance of NRSROs by observing instances where NRSROs offer ratings on the same entity or security. It is unlikely that in the 10 percent samples two or more NRSROs randomly will select the same entity or security for inclusion in their samples, making studies of such "split" ratings difficult. [56] For example, some NRSROs provide both financial strength and issuer credit ratings for issuers.
Some of these NRSROs counted the rated entity only once, rather than counting the separate ratings, while other NRSROs counted both ratings as part of their total outstanding ratings. [57] References to Ratings of Nationally Recognized Statistical Rating Organizations, 73 Fed. Reg. 40088 (July 11, 2008) (Exchange Act Proposing Release); References to Ratings of Nationally Recognized Statistical Rating Organizations, 73 Fed. Reg. 40106 (July 11, 2008) (Securities Act Proposing Release); References to Ratings of Nationally Recognized Statistical Rating Organizations, 73 Fed. Reg. 40124 (July 11, 2008) (Investment Company Act Proposing Release); and References to Ratings of Nationally Recognized Statistical Rating Organizations, 74 Fed. Reg. 52374 (Oct. 9, 2009) (Proposed Rule; re-opening of comment period; request for additional comments). [58] The Joint Forum, Stocktaking on the Use of Credit Ratings (Basel, Switzerland: June 2009). [59] Supra note 58. [60] To maintain a stable share price, most money funds use the amortized cost method of valuation or the penny-rounding method of pricing permitted by rule 2a-7. Under the amortized cost method, portfolio securities are valued by reference to their acquisition cost as adjusted for amortization of premium or accretion of discount. 17 C.F.R. § 270.2a-7(a)(1). Share price is determined under the penny-rounding method by valuing securities at market value, fair value, or amortized cost and rounding the per-share net asset value to the nearest cent on a share value of a dollar, as opposed to the nearest one-tenth of 1 cent. 17 C.F.R. § 270.2a-7(a)(15). See also Valuation of Debt Instruments and Computation of Current Price Per Share by Certain Open-End Investment Companies (Money Market Funds), 48 Fed. Reg. 32555 (July 18, 1983) (Final Rule) ("Release 13380") and Investment Company Act Rel. No. 12206, 47 Fed. Reg. 5428, 5430 n. 5 (Feb. 5, 1982) (Proposed Rules) ("Release 12206"). [61] From 1971 to 2007, only one money market fund, Community Bankers U.S. Government Fund, broke the buck. On September 16, 2008, the Reserve Primary Fund broke the buck. The resulting investor anxiety caused a near run on money market funds, and on September 19, 2008, the U.S. Department of the Treasury announced a program to insure the holdings of any publicly offered eligible money market fund that paid a fee to participate in the program to quell investor fears. [62] Requisite NRSROs are defined as any two NRSROs that have issued a rating with respect to a security or class of debt obligations of an issuer, or if only one NRSRO has issued a rating with respect to such security or class of debt obligations of an issuer at the time the fund purchases or rolls over the security, that NRSRO. [63] 17 C.F.R. § 270.2a-7. Short-term ratings refer to short-term debt, which has a maturity of 397 days or less. [64] 17 C.F.R. § 240.15c3-1. The net capital rule generally defines net capital as a broker-dealer's net worth (assets minus liabilities), plus certain subordinated liabilities, less certain assets that are not readily convertible into cash, and less a percentage of certain other liquid assets (for example, securities). In computing their net capital, broker-dealers are required to deduct from their net worth certain percentages of the market value of their proprietary securities positions, known as a haircut. NRSRO ratings are used, along with other factors, to determine the haircut for each security.
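The following sketch is a minimal illustration of the net capital computation described in footnote 64; the position categories and haircut percentages are hypothetical and are not the percentages specified in rule 15c3-1, which depend on factors such as security type, maturity, and NRSRO ratings:

# Illustrative only: hypothetical haircut percentages, not those in rule 15c3-1.
HYPOTHETICAL_HAIRCUTS = {
    "government": 0.02,
    "investment_grade_debt": 0.05,
    "speculative_grade_debt": 0.15,
}

def net_capital(net_worth, subordinated_liabilities, illiquid_assets, positions):
    # positions: list of (category, market_value) pairs for proprietary
    # securities positions; each is reduced by its haircut percentage.
    haircuts = sum(HYPOTHETICAL_HAIRCUTS[category] * value
                   for category, value in positions)
    return net_worth + subordinated_liabilities - illiquid_assets - haircuts

# Example: $10 million net worth, $1 million qualifying subordinated debt,
# $2 million of assets not readily convertible into cash, and two positions.
print(net_capital(10_000_000, 1_000_000, 2_000_000,
                  [("investment_grade_debt", 4_000_000),
                   ("speculative_grade_debt", 1_000_000)]))
# Prints 8650000.0: haircuts total $350,000 (5 percent of $4 million plus
# 15 percent of $1 million).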
[65] See Investment Company Act Proposing Release and Exchange Act Proposing Release, supra note 58. [66] The proposal also would have changed the definition of "first-tier security" to a security whose issuer the board has determined has the "highest capacity to meet its short-term financial obligations." Any eligible security not deemed first-tier would be deemed second tier. Under the current rule 2a-7, as amended in February 2010, a money market fund generally must limit its investments in second-tier securities to no more than 3 percent of fund assets, with investment in second-tier securities of any one issuer being limited to one-half of 1 percent of fund assets. [67] Money Market Fund Reform, 75 Fed. Reg. at 10109-10120 (March 4, 2010)(Final Rule). [68] Commercial paper is an unsecured short-term obligation with maturities ranging from 2 to 270 days issued by banks, corporations, and other borrowers. [69] Non-convertible debt securities are securities that cannot be exchanged for shares of stock from the issuing corporation. [70] See generally References to Ratings of Nationally Recognized Statistical Rating Organizations, 74 Fed. Reg. 52374 (Oct. 9, 2009) (Proposed Rule; re-opening of comment period; request for additional comments). [71] See References to Ratings of Nationally Recognized Statistical Rating Organizations, 74 Fed. Reg. 52358, 52371-73 (Oct. 9, 2009)(Final Rule)(codified, inter alia, at 17 C.F.R. §§ 240.3a1-1, 242.300, 242.301, 270.5b-3, 270.10f-3). These rules include rules under the Exchange Act and under the Investment Company Act. The rules under the Exchange Act include 3a1-1, 300, 301(b)(5) and 301(b)(6) of Regulation ATS, and Forms ATS-R and PILOT. The rules under the Investment Company Act include 5b-3 and 10f-3. SEC has not taken further action on its remaining proposals to remove NRSRO references from its rules and forms. [72] The Dodd-Frank Act rescinds the exemption for NRSROs under Rule 436(g) of the Securities Act of 1933. Issuers of ABS are required to disclose the credit ratings that are a condition of the issuance of the ABS and the identity of the rating agency. Rule 436(g) had provided that ratings assigned by an NRSRO (but not other credit rating agencies) would not be deemed part of a registration statement and the NRSRO would not be subject to liability as an expert for the rating under the Securities Act. The Securities Act requires that an expert who is named as having prepared a report in connection with a registration statement must file a written consent with the registration statement. Going forward, credit ratings assigned by an NRSRO that are incorporated into registration statements or prospectuses will require consent by that NRSRO since they will be considered expert opinions. The Division of Corporation Finance issued a no-action letter on July 22, 2010, stating it will not recommend an enforcement action to the Commission if an issuer of ABS omits the ratings disclosure required by Regulation AB from a prospectus that is part of a registration statement relating to an offering of ABS. SEC noted in the no-action letter that the NRSROs have indicated they are not willing to provide their consent at this time. SEC issued the no-action letter to facilitate ABS transactions. [73] Not all of the 65 examinations had deficiencies related to the minimum credit risk determination requirement, and some had multiple deficiencies in this area.
[74] Since the implementation of rule 2a-7, Investment Management has provided guidance to money market funds about what it will and will not accept as evidence of an adequate minimal credit risk determination. In a May 8, 1990, letter to the industry, Investment Management states that the focus of any minimal credit risk analysis must be on those elements that indicate the capacity of the issuer to meet its short-term debt obligations. The letter provides examples of elements that the analysis could include. While funds or their advisers are not required to have these specific elements in their credit files, the guidance states that the determination that money market fund portfolio investments present minimal credit risks must be based on factors pertaining to credit quality in addition to the rating assigned to such instruments by an NRSRO. [75] OCIE staff told GAO that OCIE has approximately 450 staff dedicated to examinations of investment advisers and funds. It does not have a unit devoted specifically to conducting money market fund examinations. Currently, there are approximately 11,500 registered advisers and 860 investment company complexes (with thousands of individual funds, including money market funds). SEC staff estimates that there are fewer than 150 fund complexes offering investors approximately 850-900 different money market funds. [76] The net capital rule uses additional criteria to establish the appropriate haircut for a security, including time to maturity and type of security (for example, government security, nonconvertible debt, and preferred stock). [77] A.M. Best, DBRS, Fitch, Japan Credit Rating Agency, Moody's, Rating and Investment Information, and Standard & Poor's received NRSRO designation prior to the Act. The Act nullified the no-action letters and required these agencies to register as NRSROs with SEC when the NRSRO registration program became effective. [78] Egan-Jones Ratings, Realpoint, and LACE are primarily subscriber-pays NRSROs. [79] IBCA, Inc.; Duff & Phelps Credit Rating Company; and Thomson BankWatch were all recognized as NRSROs under SEC's prior no-action letter process in 1990, 1982, and 1991, respectively. A fourth credit rating agency, McCarthy, Crisanti, and Maffei, Inc., also received a no-action letter recognizing it as an NRSRO in 1983, and was later acquired by Duff & Phelps in 1991. [80] According to a press release, Morningstar does not plan to register as an NRSRO, but Realpoint will continue as an NRSRO. [81] A.M. Best provides financial strength ratings on insurance organizations and credit ratings on bonds and other financial instruments that insurers and reinsurers issue, and recently has expanded into ratings for financial institutions. [82] The HHI is one of the market concentration measures that government agencies, including the DOJ and FTC, use when assessing concentration to enforce U.S. antitrust laws. DOJ and FTC often calculate the HHI as the first step in providing insight into potentially anticompetitive conditions for an industry. However, the HHI is a function of firms' market shares, and market shares may not fully reflect the competitive significance of firms in the market. Thus, DOJ and FTC use the HHI in conjunction with other evidence of competitive effects when evaluating market concentration. [83] In this case, an NRSRO's market share is equal to its total revenue divided by the sum of all NRSROs' total revenues. [84] Data and valuation service includes in-depth market analysis for particular market segments.
Proxy service includes research, recommendations, and voting services for domestic and foreign proxy proposals. [85] NRSRO applicants and registered NRSROs provide these data to SEC as part of Form NRSRO. These data are not required to be made public for each NRSRO. [86] For the NRSROs that reported total revenue for fiscal years not ending on December 31, we estimated their revenue for the 12-month periods ending December 31 of 2006, 2007, 2008, and 2009. [87] In this case, an NRSRO's market share is equal to the number of its outstanding ratings divided by the sum of each NRSRO's outstanding ratings. Additionally, the HHI could be calculated using the number of new ratings assigned by NRSROs, instead of outstanding ratings. Since ratings on securities or issuers can be outstanding for many years, a rating agency that issued many ratings in the past might appear dominant even if all the new ratings were being issued by other firms. Currently, NRSROs are not required to provide data on new ratings assigned on Form NRSRO. [88] In this case, an NRSRO's market share is equal to the number of issuers it rates divided by the sum of the numbers of issuers all NRSROs rate. More than one NRSRO can produce a credit rating for an issuer. Thus, the sum of the numbers of organizations rated by all NRSROs likely will be greater than the total number of issuers with a credit rating. For example, if three NRSROs rate the same issuer, then all three of those NRSROs will count that company in their numbers of rated issuers. We define market share this way so that NRSROs' market shares sum to 100 percent. As a result, our concept of market share differs from the concept of market coverage. An NRSRO's market coverage would be the number of issuers it rates as a share of the number of issuers with a credit rating. Since more than one NRSRO can rate an issuer, NRSROs' market coverage can sum to more than 100 percent. (A short numerical illustration of these market share and HHI calculations appears after note 106 below.) [89] Other differences between our alternative and baseline estimates are in the HHIs for the corporate issuers and insurance companies asset classes. The alternative assumption produces an estimate of the HHI for the corporate issuers asset class in 2009 that is 12 percent smaller than the baseline estimate. The alternative assumption also produces estimates of the HHI for the insurance companies asset class in 2008 and 2009 that are 13 percent and 9 percent, respectively, smaller than the baseline estimates. The alternative assumption produces HHIs that indicate that concentration in these asset classes declined more rapidly between 2007 and 2009 than the HHIs produced by the baseline assumption indicate. [90] We obtained data on U.S.-issued ABS from Asset-Backed Alert. In this case, an NRSRO's market share is equal to the dollar value of issuance it rates divided by the sum of the dollar value of issuance that each NRSRO rates. [91] In a basic CDO, a group of loans or debt securities is pooled, and securities are then issued in different tranches that vary in risk and return depending on how the underlying cash flows produced by the pooled assets are allocated. [92] The HHIs for U.S. sub-prime RMBS for 2009 and 2010 are both based on a single deal rated by a single NRSRO. [93] For all ABS, it is important to note that the number of new deals decreased from 3,083 in 2006 to 368 in 2009, a decline of 2,715. We did not assess the reasons ABS issuance declined. A.M. Best was recognized as an NRSRO under SEC's former no-action letter process in 2005.
Realpoint was registered as an NRSRO in 2008. [94] In November 2008, the Federal Reserve Bank of New York created the Term Asset-Backed Securities Loan Facility (TALF) to increase credit availability and support economic activity by facilitating renewed issuance of ABS. ABS issued under TALF had to be rated by two TALF-eligible NRSROs. For ABS other than CMBS, TALF-eligible NRSROs included DBRS, Inc., Fitch Ratings, Moody's, and Standard & Poor's. For CMBS, TALF-eligible NRSROs also included Realpoint LLC. Realpoint has been issuing surveillance ratings since 2001; it began issuing initial ratings in December 2009. [95] The number of CMBS deals decreased significantly over the review period, from a high of 130 in 2004 to a low of 24 in 2008. From January 2010 through December 2009, 51 deals were issued, illustrating the freezing of the CMBS market. Ratings coverage data reflect only public ratings provided by issuer-pays NRSROs and do not reflect CMBS deals that are rated privately or ratings paid for by investors. [96] Amendments to Rules for Nationally Recognized Statistical Rating Organizations, 74 Fed. Reg. 63832, 63864-63865 (Dec. 4, 2009) (amending 17 C.F.R. § 240.17g-5). [97] On May 14, 2010, Rating and Investment Information, Inc. issued a press release announcing it was withdrawing its NRSRO registration from the ABS asset class, effective June 28, 2010. [98] As a response to the financial crisis, the Federal Reserve created the Term Asset-Backed Securities Loan Facility to restore the securitization markets. The Federal Reserve program targeted securitizations in the asset-backed classes, specifically ABS. See GAO, Troubled Asset Relief Program: Treasury Needs to Strengthen Its Decision-Making Process on the Term Asset-Backed Securities Loan Facility, [hyperlink, http://www.gao.gov/products/GAO-10-25] (Washington, D.C.: Feb. 5, 2010). [99] Herwig M. Langohr, "The Credit Rating Agencies and Their Credit Ratings," address given to the Bond Market Association in Paris, February 2006. [100] For examples of recent academic papers that discuss the role of reputation in the credit rating industry, see Lawrence J. White, "The Credit Rating Industry: An Industrial Organization Analysis," NYU Center for Law and Business Research Paper (April 2001), and Frank Partnoy, "The Paradox of Credit Ratings," University of San Diego Law & Economics Research Paper No. 20 (2001). [101] Fabian Dittrich, "The Credit Rating Industry: Competition and Regulation," Social Science Research Network, July 2007. [102] Market participants use the ratings of a particular NRSRO because other market participants use them too. When a network effect is present, the value of a product or service increases as more people use it. [103] As previously noted, the recently adopted Dodd-Frank Act required SEC and other federal agencies to remove references to NRSRO ratings from their regulations and substitute an alternative standard of creditworthiness. [104] This asset manager told us that his firm is removing references to the three largest NRSROs from contracts and investment guidelines at renewal. However, we do not know to what extent this may be occurring. [105] See appendix I for our literature review methodology. [106] The researchers hypothesized that downgrades will be considered worse news when the rating standards are low to begin with, and thus larger magnitudes will imply lower-quality ratings.
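To make the market share and HHI computations described in notes 82, 83, 87, and 88 concrete, the following is a minimal sketch in Python using entirely hypothetical revenue figures; the same function applies when shares are based on outstanding ratings or numbers of rated issuers instead of revenue.

# Hypothetical HHI calculation; the revenue figures are invented for illustration.
revenues = {"NRSRO A": 900, "NRSRO B": 700, "NRSRO C": 300, "NRSRO D": 100}

total = sum(revenues.values())
shares = {name: rev / total for name, rev in revenues.items()}  # shares sum to 1

# The HHI is conventionally computed on shares expressed as percentages (0 to 100),
# so four equally sized firms would score 4 * 25^2 = 2,500.
hhi = sum((share * 100) ** 2 for share in shares.values())
print(round(hhi))  # prints 3500 for these hypothetical shares of 45, 35, 15, and 5 percent

A single dominant rating agency pushes the index toward 10,000, while many small agencies of similar size push it toward zero, which is why the HHI is used in this report to track how concentration in each asset class changed over time.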
[107] The subscriber-pays model is also subject to conflicts of interest because there is the potential for investors to influence the NRSRO to upgrade or downgrade securities the investors are holding to their advantage. For example, a subscriber may want to hold only investment grade securities because its investment guidelines make this a requirement. An upgrade of a security to investment grade would allow the subscriber to hold that security. [108] See appendix I for a detailed discussion of how we identified these models. [109] Several variations of this model have been proposed by others. [110] Under this proposed model, if the rating agency was part of a larger company, interaction between the parent company and the rating agency would be prohibited. [111] The model as proposed did not specify how ratings fees were determined, but suggested that issuers could negotiate with the NRSROs to determine the rating fee, or the NRSROs could establish a fee schedule for rating different kinds of securities. [112] Transparency in this context does not refer to the transparency or disclosure regime of the NRSROs but is specific to the transparency of the compensation model only. [113] However, we are not suggesting that the model reveal the actual rating fees that are charged by an NRSRO. [114] Section 15E(h) of the Exchange Act provides SEC with the authority to implement rules for the management of conflicts of interest relating to the issuance of credit ratings by NRSROs. 15 U.S.C. § 78o-7(h). However, we did not assess whether SEC could implement any of the proposed alternative compensation models under this authority. [115] After submission of the report, SEC is authorized to issue regulations establishing a system for the assignment of NRSROs to determine initial credit ratings of structured finance products in a manner that prevents the arranger from selecting the NRSRO that will determine the credit rating. SEC is to give thorough consideration to the provisions of the Senate-passed financial reform bill that would have required an issuer desiring an initial credit rating for structured finance products to submit a request to a credit rating agency self-regulatory organization, which would select an NRSRO from a qualified pool based on a selection method intended to reduce conflicts of interest. SEC is to implement this system unless it determines that an alternative system would better serve the public and investors. [116] The act also mandates that GAO conduct a study on alternative means for compensating NRSROs, with the intent of creating incentives for NRSROs to provide accurate credit ratings. The Dodd-Frank Wall Street Reform and Consumer Protection Act, Pub. L. No. 111-203, § 939D, 124 Stat. 1376, 1888 (2010) (to be codified at 15 U.S.C. 78o-9 note). [117] We downloaded Moody's data on February 12, 2010; Standard & Poor's data on February 26, 2010; Fitch's data on March 1, 2010; A.M. Best's, DBRS's, and Japan Credit Rating Agency's data on March 2, 2010; and Rating and Investment Information's data on March 8, 2010. [118] The HHI is one of the concentration measures that government agencies, including the Department of Justice (DOJ) and the Federal Trade Commission (FTC), use when assessing concentration to enforce U.S. antitrust laws. DOJ and FTC often calculate the HHI as the first step in providing insight into potentially anticompetitive conditions for an industry.
However, the HHI is a function of firms' market shares, and market shares may not fully reflect the competitive significance of firms in the market. Thus, DOJ and FTC use the HHI in conjunction with other evidence of competitive effects when evaluating market concentration. [119] We were unable to obtain data on the number of rated issuers from one NRSRO, which indicated that it did not keep track of which of the organizations it rates issue debt securities. However, this NRSRO did provide us with the total number of organizations it rated. To assess the sensitivity of these results to the missing data from the NRSRO that did not track which of its rated organizations issue debt securities, we recalculated the HHIs assuming that all of the organizations this NRSRO reported rating issue debt securities. We did so because it is likely that some of the organizations this NRSRO rates do issue debt securities, but we cannot determine how many. Calculating the HHIs based on the alternative assumption that all of the organizations this NRSRO rates issue debt securities gives us a range within which the true value of the HHI is likely to fall. [120] We searched EconLit, the National Bureau of Economic Research (NBER) Working Paper Series, and the Social Science Research Network (SSRN) databases. The three studies identified were: Becker, Bo and Milbourn, Todd T., "Reputation and Competition: Evidence from the Credit Rating Industry," Harvard Business School Finance Working Paper No. 09-051, June 21, 2009; Benmelech, Efraim and Dlugosz, Jennifer, "The Credit Rating Crisis," National Bureau of Economic Research Working Paper Series No. 15045, June 2009; and Doherty, Neil A., Kartasheva, Anastasia and Phillips, Richard D., "Competition Among Rating Agencies and Information Disclosure," February 13, 2009. [121] Act of August 22, 1940, ch. 686, title II, § 203, 54 Stat. 847, 850 (codified, as amended, at 15 U.S.C. § 80b-3(c)). Section 203(c)(2) of the Investment Advisers Act provides that within 45 days of the date of the filing of an application for registration (or within such longer period to which the applicant consents), the Commission shall either grant such registration or institute proceedings to determine whether registration should be denied. 15 U.S.C. § 80b-3(c)(2). These proceedings must include notice of the grounds for denial under consideration and opportunity for hearing. Such proceedings must conclude within 120 days of the date of filing but can be extended by the Commission for an additional 90 days if it finds good cause and publishes its reasons or if the applicant consents. [122] Form ADV has two parts. Part 1 contains information about the adviser's education, business, and disciplinary history within the last 10 years, and is filed electronically through FINRA's IARD system. Part 2 includes information on an adviser's services, fees, and investment strategies. Currently, SEC does not require advisers to file Part 2 electronically. [123] Appeals do occur, but FINRA staff stated they are not frequent. [124] FINRA provided an informal estimate of the percentage of applications it completes within 180 days. [End of section] GAO's Mission: The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people.
GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability. Obtaining Copies of GAO Reports and Testimony: The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO's Web site [hyperlink, http://www.gao.gov]. Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site. To have GAO e-mail you a list of newly posted products every afternoon, go to [hyperlink, http://www.gao.gov] and select "E-mail Updates." Order by Phone: The price of each GAO publication reflects GAO’s actual cost of production and distribution and depends on the number of pages in the publication and whether the publication is printed in color or black and white. Pricing and ordering information is posted on GAO’s Web site, [hyperlink, http://www.gao.gov/ordering.htm]. Place orders by calling (202) 512-6000, toll free (866) 801-7077, or TDD (202) 512-2537. Orders may be paid for using American Express, Discover Card, MasterCard, Visa, check, or money order. Call for additional information. To Report Fraud, Waste, and Abuse in Federal Programs: Contact: Web site: [hyperlink, http://www.gao.gov/fraudnet/fraudnet.htm]: E-mail: fraudnet@gao.gov: Automated answering system: (800) 424-5454 or (202) 512-7470: Congressional Relations: Ralph Dawn, Managing Director, dawnr@gao.gov: (202) 512-4400: U.S. Government Accountability Office: 441 G Street NW, Room 7125: Washington, D.C. 20548: Public Affairs: Chuck Young, Managing Director, youngc1@gao.gov: (202) 512-4800: U.S. Government Accountability Office: 441 G Street NW, Room 7149: Washington, D.C. 20548: