This is the accessible text file for GAO report number GAO-14-279 entitled 'Medicare: Certain Physician Feedback Reporting Practices of Private Entities Could Improve CMS's Efforts' which was released on March 26, 2014. This text file was formatted by the U.S. Government Accountability Office (GAO) to be accessible to users with visual impairments, as part of a longer term project to improve GAO products' accessibility. Every attempt has been made to maintain the structural and data integrity of the original printed product. Accessibility features, such as text descriptions of tables, consecutively numbered footnotes placed at the end of the file, and the text of agency comment letters, are provided but may not exactly duplicate the presentation or format of the printed version. The portable document format (PDF) file is an exact electronic replica of the printed version. We welcome your feedback. Please E-mail your comments regarding the contents or accessibility features of this document to Webmaster@gao.gov. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. Because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. United States Government Accountability Office: GAO: Report to Congressional Committees: March 2014: Medicare: Certain Physician Feedback Reporting Practices of Private Entities Could Improve CMS's Efforts: GAO-14-279: GAO Highlights: Highlights of GAO-14-279, a report to congressional committees. Why GAO Did This Study: Health care payers--including Medicare--are increasingly using VBP to reward the quality and efficiency of care delivered instead of just the volume. Both traditional and newer delivery models use this approach to incentivize providers to improve their performance. Feedback reports serve to inform providers of their results on various measures relative to established targets. The American Taxpayer Relief Act of 2012 mandated that GAO compare private entity and Medicare performance feedback reporting activities. GAO examined (1) how and when private entities report performance data to physicians, and what information they report; and (2) how the timing and approach CMS uses to report performance data compare to those of private entities. GAO contacted nine entities--health insurers and statewide collaboratives--recognized for their performance reporting programs. Focusing on physician feedback, GAO obtained information regarding report recipients, data sources used, types of performance measures and benchmarks, frequency of reporting, and efforts to enhance the utility of performance reports. GAO obtained similar information from CMS about its Medicare feedback efforts. What GAO Found: Private entities GAO reviewed for this study selected a range of measures and benchmarks to assess physician group performance, and provided feedback reports to physicians more than once a year. Private entities almost exclusively focused their feedback efforts on primary care physician groups participating in medical homes and accountable care organizations, which hold physicians responsible for the quality and cost of all services provided. They limited their feedback reporting to those with a sufficient number of enrollees to ensure the reliability of reported measures. 
The entities decided on the number and type of measures for their reports, and compared each group's performance to multiple benchmarks, including peer group averages or past performance. All the entities used quality measures, and some also used utilization or cost measures. Because of the variety of quality measures and benchmarks, feedback report content differed across the entities. Some entities noted that in addition to national benchmarks, they compared results to state or regional level rates to reflect local patterns of care, which may be more relevant to their physicians. Most health insurers spent from 4 to 6 months to generate their performance reports, a period that allowed them to amass claims data as well as to make adjustments and perform checks on the measure calculations. Commonly, private entities issued interim feedback reports, covering a 1-year measurement period, on a rolling monthly, quarterly, or semiannual schedule. They told GAO that physicians valued frequent feedback in order to make changes that could result in better performance at the end of the measurement period. Feedback from the Centers for Medicare & Medicaid Services (CMS) included quality measures determined by each medical group, along with comparison to only one benchmark, and CMS did not provide interim reports to physicians. The agency has phased in performance feedback in order to meet its mandate to apply value-based payment (VBP) to all physicians in Medicare by 2017, a challenge not faced by private entities. In September 2013, CMS made feedback reports available to 6,779 physician groups. While private entities in this study chose the measures for their reports, CMS tied the selection of specific quality measures to groups' chosen method of submitting performance data. Although both CMS and private entities focused their feedback on preventive care and management of specific diseases, CMS's reports contained more information on costs and outcomes than those of some entities. While private entities employed multiple benchmarks, the agency only compared each group's results to the national average rates of all physician groups that submitted data on any given measure. CMS's use of a single benchmark precludes physicians from viewing their performance in fuller context, such as relative to their peers in the same geographic areas. CMS's report generation process took 9 months to complete, several months longer than that of health insurers in the study, although it included more steps. In contrast to private entity reporting, CMS sent its feedback report to physicians once a year, a frequency that may limit physicians' opportunity to make improvements in advance of their annual payment adjustments. The Department of Health and Human Services generally concurred with GAO's recommendations and asked for additional information pertaining to the potential value of using multiple benchmarks to assess Medicare physicians' performance. What GAO Recommends: The Administrator of CMS should consider expanding performance benchmarks to include state or regional averages, and disseminating feedback reports more frequently than the current annual distribution. View [hyperlink, http://www.gao.gov/products/GAO-14-279]. For more information, contact James Cosgrove at (202) 512-7114 or cosgrovej@gao.gov. 
[End of section] Contents: Letter: Background: Private Entities Selected a Range of Measures and Benchmarks to Assess Physician Group Performance, and Provided Feedback More than Once a Year: CMS Feedback Included Group-Determined Physician Quality Measures and Only One Benchmark; CMS Issued Reports Less Frequently than Private Entities: Conclusions: Recommendations for Executive Action: Agency Comments and Our Evaluation: Appendix I: Private Entity and Medicare Performance Feedback for Hospitals: Appendix II: Quality Measures Used in Sample Physician Feedback Reports Provided by Selected Private Entities: Appendix III: Comments from the Department of Health and Human Services: Appendix IV: GAO Contact and Staff Acknowledgments: Related GAO Products: Tables: Table 1: Key Features of Hospital Feedback Reporting by Selected Private Entities and CMS: Table 2: Number of Quality Measures Categorized by Type Found in Sample Physician Feedback Reports Provided by Selected Private Entities: Figures: Figure 1: CMS's Phased Approach to Providing Value Modifier (VM) Reports to Eligible Professionals (EP): Figure 2: Private Entity Examples of Hemoglobin A1C Measures as Presented in Four Sample Reports: Figure 3: Private Entity's Display of Information on Specialty Referral Patterns in a Sample Report: Figure 4: Private Entity Examples of Emergency Department (ED) Visit Measures as Presented in Two Sample Reports: Figure 5: Private Entity Examples of Benchmark Comparisons as Presented in Three Sample Reports: Figure 6: Illustration of Timelines to Report Production for Health Insurers and Statewide Health Care Collaboratives: Figure 7: Cost Measures Displayed in CMS's 2013 Quality and Resource Use Reports: Figure 8: Performance Benchmarks Displayed in CMS's 2013 Quality and Resource Use Reports: Figure 9: CMS Sample Composite Scores, Quality Tier, and Value Modifier: Figure 10: Report Generation Timeline for CMS Performance Feedback Reports, September 2013: Abbreviations: ABC: Achievable Benchmark of Care: ACO: accountable care organization: ACSC: Ambulatory Care Sensitive Condition: CAH: critical access hospital: CMS: Centers for Medicare & Medicaid Services: COPD: chronic obstructive pulmonary disease: EHR: electronic health record: EP: eligible professional: FFS: fee-for-service: HHS: Department of Health and Human Services: PQRS: Physician Quality Reporting System: QRUR: Quality and Resource Use Reports: VBP: value-based payment: VM: Value Modifier: [End of section] United States Government Accountability Office: GAO: 441 G St. N.W. Washington, DC 20548: March 26, 2014: The Honorable Ron Wyden: Chairman: The Honorable Orrin G. Hatch: Ranking Member: Committee on Finance: United States Senate: The Honorable Fred Upton: Chairman: The Honorable Henry Waxman: Ranking Member: Committee on Energy and Commerce: House of Representatives: The Honorable Dave Camp: Chairman: The Honorable Sander Levin: Ranking Member: Committee on Ways and Means: House of Representatives: Increasingly, health care payers--including Medicare--are rethinking the way they reimburse providers in an attempt to shift away from paying solely for the volume of care delivered and toward paying them for the value of their care. One such approach--known as value-based payment (VBP)--links a portion of physician compensation to achieving specified levels of performance. 
VBP can be used as a means of improving quality and efficiency in the traditional health care delivery environment by encouraging providers to address gaps in patient care and consider the likely costs and benefits of care. It can also be used with newer care delivery models, such as accountable care organizations (ACO) and patient-centered medical homes.[Footnote 1] Under these arrangements, payers hold teams of providers responsible for all of a patient's care. They reward those who coordinate services across providers and make cost-effective referral decisions, among other practices. While physicians and other providers may intend to furnish consistently high-quality, efficient care, they may not always know how well they do or where practice changes are needed. Therefore, a key element of the VBP approach is for payers to develop performance feedback reports to indicate specific opportunities for improvement.[Footnote 2] Performance feedback entails collecting data on measures of quality and cost of care, assessing performance against benchmarks, and communicating results to providers. For example, periodic feedback reports can make providers aware of the percentage of their patients receiving appropriate screening tests, or of those with potentially avoidable emergency department visits. The expectation is that giving detailed, timely feedback to providers will enhance their ability to take actions that improve performance.[Footnote 3] The American Taxpayer Relief Act of 2012 mandated that GAO compare how private entities and the Centers for Medicare & Medicaid Services (CMS)--the agency within the Department of Health and Human Services (HHS) that administers the Medicare program--conduct performance feedback reporting for health care providers.[Footnote 4] To meet this requirement, we conducted briefings for congressional staff on our preliminary findings in September 2013. This report contains information we provided during those briefings, updated with additional information, addressing: 1. how and when private entities--such as health insurers--report performance data to physicians, and what information they report; and: 2. how the timing and approach CMS uses to report performance data to physicians compare to those of private entities. In addition, appendix I contains information on how private entities and Medicare provide performance feedback to hospitals. To examine how and when private entities report performance data to physicians, and to identify what information they provide, we contacted nine private entities that had experience with VBP programs or that had innovative features in their performance feedback programs. To make our selection, we asked representatives of America's Health Insurance Plans,[Footnote 5] Blue Cross Blue Shield Association,[Footnote 6] and Network for Regional Healthcare Improvement to suggest leading organizations that met those criteria.[Footnote 7] We also considered programs profiled in peer-reviewed literature, as well as those operating in varying geographic areas across the country. We chose six health insurers and three statewide health care collaboratives--organizations comprising providers, payers, and employers that focus on quality improvement activities--as follows:[Footnote 8] * Aetna: * Blue Shield of California: * Blue Cross and Blue Shield of Florida, Inc. 
* Highmark Blue Cross Blue Shield: * Horizon Blue Cross Blue Shield of New Jersey: * Iowa Healthcare Collaborative: * Maine Health Management Coalition: * Oregon Health Care Quality Corporation: * WellPoint: We interviewed representatives of these private entities regarding the feedback report recipients, data sources used, types of performance measures and benchmarks, frequency of reporting, and efforts to enhance the utility of their performance reports. We also requested sample physician feedback reports to learn how the data were presented. In some cases, entities had multiple performance reporting initiatives. We focused on the reports that were most similar to the type of reporting to physicians that CMS provided to medical groups. Our findings regarding private entity performance reporting to physicians are limited to the entities we interviewed and cannot be generalized to other health insurers and health care collaboratives. In this report, we describe private entities' feedback programs in operation in 2013, although performance reporting continues to evolve as organizations adopt newer payment and delivery models. We did not evaluate the effectiveness of the feedback in altering physician practice patterns. Also, we did not gather information on the payment incentives, if any, associated with these entities' reporting initiatives, as that issue was beyond our scope.[Footnote 9] Because nationwide interest in VBP has been largely aimed at physician care, we primarily focused our review on performance reporting to physicians, and as noted, present additional information on our methodology and findings related to hospital feedback reporting in appendix I.[Footnote 10] To learn how the timing and approach CMS uses to report performance data to physicians compare to that of private entities, we obtained CMS documentation similar to that received from the selected entities. We analyzed information regarding CMS report recipients, data sources used, types of performance measures and benchmarks, frequency of reporting, and efforts to enhance the utility of the reports. We also examined a prototype of the report CMS provided to physicians in September 2013. We spoke with CMS officials about their report preparation process and about components of the feedback program that differ from those of private entities. We conducted this performance audit from April 2013 to March 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Background: Laws enacted since 2006 have directed CMS to collect performance information on providers and eventually reward quality and efficiency of care rather than reimburse for volume of services alone. 
* The Tax Relief and Health Care Act of 2006 required the establishment of the Physician Quality Reporting System (PQRS) to encourage physicians to successfully report data needed for certain quality measures.[Footnote 11] PQRS applies payment adjustments to promote reporting by eligible Medicare professionals (EP)--including physicians, nurses, physical therapists, and others.[Footnote 12] In 2013, EPs could report data to PQRS using claims, electronic health records (EHR),[Footnote 13] or a qualified registry,[Footnote 14] or opt for CMS to calculate quality measures using administrative claims data. Under its group practice reporting option, CMS allows EPs to report to PQRS as a group, either through a registry or a web-based interface. * The Medicare Improvements for Patients and Providers Act of 2008 established the Physician Feedback Program, under which CMS was required, beginning in 2009, to distribute confidential feedback reports, known as Quality and Resource Use Reports (QRUR), to show physicians their performance on quality and cost measures.[Footnote 15] * The Patient Protection and Affordable Care Act required HHS to coordinate the Physician Feedback Program with a Value Modifier (VM) that will adjust fee-for-service (FFS) physician payments for the relative quality and cost of care provided to beneficiaries.[Footnote 16] In implementing the VM, CMS's Center for Medicare intends to use PQRS and cost data from groups of EPs defined at the taxpayer identification number level to calculate the VM and then report the payment adjustments in the QRURs. As required in the act, CMS plans to apply the VM first to select physicians in 2015 and to all physicians in 2017. As required by law, CMS implemented a performance feedback program for Medicare physicians, which serves as the basis for eventual payment adjustments. (See figure 1.) Figure 1: CMS's Phased Approach to Providing Value Modifier (VM) Reports to Eligible Professionals (EP): [Refer to PDF for image: illustration] Legislative actions: 2006: The Tax Relief and Health Care Act of 2006 required the establishment of the Physician Quality Reporting System. 2008: The Medicare Improvements for Patients and Providers Act of 2008 established the Physician Feedback Program. 2010: The Patient Protection and Affordable Care Act added the VM requirement. CMS actions: 2008 and 2009: CMS tested certain measures and developed a feedback report prototype to be distributed to some physicians. [Feedback report] 2009 and 2010: CMS distributed feedback reports with both quality and resource use measures to certain EPs. [Feedback report] 2011: CMS made group-level reports available to 35 groups nationwide. [Feedback report] December 2012: CMS made reports available to groups of 25 or more EPs in nine states and others nationwide. [Feedback report] September 2013: CMS made reports available to 6,779 groups of 25 or more EPs nationwide. [Feedback report] September 2013: Reports include a preview of each group's VM. [Value modifier] Fall 2014: CMS expects to provide reports to all groups in all states, including small groups and solo practitioners. [Feedback report] January 1, 2015: VM to be applied to physicians in groups of 100 or more EPs. [Value modifier] January 1, 2016: VM to be applied to physicians in groups of 10 or more EPs. [Value modifier] 2017: VM to be applied to all physicians. [Value modifier] Source: GAO. 
[End of figure] In our December 2012 report on physician payment incentives in the VM program, we found that CMS had yet to develop a method of reliably measuring the performance of physicians in small practices, that CMS planned to reward high performers and penalize poor performers using absolute performance benchmarks, and that CMS intended to annually adjust payments 1 year after the performance measurement period ends. We recommended that CMS develop a strategy to reliably measure the performance of small physician practices, develop benchmarks that reward physicians for improvement as well as for meeting absolute performance benchmarks, and make the VM adjustments more timely, to better reflect recent physician performance.[Footnote 17] CMS agreed with our recommendations, but noted that it was too early to fully implement these changes. Private Entities Selected a Range of Measures and Benchmarks to Assess Physician Group Performance, and Provided Feedback More than Once a Year: Private entities we reviewed provided feedback mostly to groups of primary care physicians practicing within newer delivery models. Each entity decided which measures to report and which performance benchmarks to use, leading to differences in report content across entities. Largely relying on claims data, health insurers spent from 4 to 6 months to produce the annual reports. To meet the information needs of physicians, they all provided feedback throughout the year. The entities also generally offered additional report detail and other resources to help physicians improve their performance. Private Entities Focused Their Feedback Efforts on Groups of Primary Care Physicians Practicing within Newer Delivery Models: The private entities in our review had discretion in determining the number and type of physicians to be included in their performance reporting initiatives, and their feedback programs generally included only physician groups participating in newer delivery models--medical homes and ACOs--with which they contract.[Footnote 18] Within this set of providers, the entities used various approaches to further narrow the physician groups selected to receive performance feedback. For example, one entity told us that only physician groups accredited by a national organization focused on quality were eligible for participation in its medical home program, which included physician feedback reports. Private entities' feedback programs were generally directed toward primary care physician practices. One entity defined primary care as family medicine, internal medicine, geriatrics, and pediatrics; and included data on the services furnished by nurse practitioners and physician assistants in its medical group reports. The entities indicated that they rarely provided reports directly to specialty care physician groups. Among those that did, the programs typically focused on practice areas considered significant cost drivers--obstetrics/ gynecology, cardiology, and orthopedics. Entities further limited their physician feedback programs to groups participating in medical homes with a sufficient number of attributed enrollees to ensure the reliability of the reported measures. In medical home models, enrollees are attributed to a physician (or physicians) responsible for their care, who is held accountable for the quality and cost of care, regardless of by whom or where the services are provided. 
Among those entities we spoke with, the minimum enrollment size for feedback reporting varied widely, with most requiring a minimum of between 200 and 1,000 attributed enrollees to participate in the program. For example, one entity had two levels of reporting in its medical home program, differentiated by the number of attributed enrollees. In one medical home model, the entity required more than 2,000 attributed enrollees for participation and rewarded the practices through shared savings. In a second medical home model, the entity included practices with fewer than 1,000 attributed enrollees, but these practices did not share in any savings. According to the entities in our study, small physician practices (including solo practitioners) typically received performance reports for quality improvement purposes only. Because smaller practices may not meet minimum enrollment requirements needed for valid measurement,[Footnote 19] private entities generally did not link their performance results to payment or use them for other purposes. For example, one entity provided feedback to practices of one to three primary care physicians upon request, but did not publicly report these practices' data on its website. To increase the volume of patient data needed for reliable reporting, some entities pooled data from several small groups and solo practitioners and issued aggregate reports for those small practices. Most of the entities that used this method said they applied their discretion in forming these "virtual" provider groups; however, another entity commented that allowing small practices to voluntarily form such groups for measurement purposes would be advantageous. Private Entities Decided Which Measures to Report and Compared Physician Performance to Various Benchmarks, Leading to Differences in Report Content across Entities: Because each private entity in our study determined the number and types of measures on which it evaluated physician performance, the measures used in each feedback program differed. Each entity decided on quality measures to include, and many also identified utilization or cost measures for inclusion.[Footnote 20] In addition, one entity noted that it allowed ACOs to choose 8 to 10 measures from among a set of about 18 measures. To assess physicians' quality and utilization/cost results, the entities used absolute or relative performance benchmarks. Quality Measures: Private entities generally reported on physician quality using many more process of care measures than outcomes of care measures.[Footnote 21] Entities in our review commonly included indicators of clinical care in areas such as diabetes care, cardiovascular health, and prevention or screening services for both their adult and pediatric patients. The most common measure reported by all entities was breast cancer screening, followed by hemoglobin A1C measures,[Footnote 22] a test used to monitor diabetes. We found wide variation in the number and type of measures in private entities' quality measure sets.[Footnote 23] The total number of quality measures used in the feedback reports ranged from 14 to 51.[Footnote 24] Measures typically fell into one of several measurement areas, each with as few as one or as many as 20 individual measures. 
For example, in the quality measurement areas for pulmonary and respiratory conditions, one private entity reported on a single measure (appropriate use of medications for asthma), while another reported three measures (appropriate use of medications for asthma, appropriate testing for pharyngitis, and avoidance of antibiotic treatment for adults with acute bronchitis). Although primarily focused on clinical quality measures, entities also included nonclinical measures, such as patient safety and patient satisfaction. (See appendix II for more information on the number and types of quality measures included in sample reports provided by the entities we reviewed.) Even when entities appeared to report on similar types of measures in common areas, we found considerable variability in each measure's definition and specification. For this reason, results shown in physician feedback reports may not be comparable across entities. As shown in figure 2, the diabetes hemoglobin A1C measure was defined and used in different ways in our selected entities' reports. In some cases, entities calculated the percentage of enrollees with diabetes within a certain age range that received the test. In other cases, the entities calculated the percentage of enrollees with diabetes within a certain age range that had either good or poor control of the condition, as determined from a specified hemoglobin A1C result. In addition, some entities defined their diabetic patient population as enrollees from 18 to 75 years of age, while another did not indicate the age range, and one entity set the age range from 18 to 64 years of age. Figure 2: Private Entity Examples of Hemoglobin A1C Measures as Presented in Four Sample Reports: [Refer to PDF for image: 4 report examples] Example 1: Measure Grouping: Diabetes; Measure Number: 100024; Measure Short Description: Diabetes: Hemoglobin A1c testing; Measure Long Description: This measure calculates the percentage of members age 18 to 75 with diabetes receiving annual HbA1c testing; Eligible Population in Denominator: 589; Numerator: 525; Rate = Numerator divided by Eligible Population: 89.13%. Measure Grouping: Diabetes; Measure Number: 100244; Measure Short Description: Diabetes: Hemoglobin A1c poor control (>9.0%); Measure Long Description: This measure calculates the percentage of members age 18 to 75 with diabetes that demonstrate poor glycemic control, based on a HbA1c level greater than 9%; Eligible Population in Denominator: 390; Numerator: 61; Rate = Numerator divided by Eligible Population: 15.64%. Measure Grouping: Diabetes; Measure Number: 100336; Measure Short Description: Diabetes: Hemoglobin A1c control (<8.0%); Measure Long Description: This measure calculates the percentage of members age 18 to 75 with diabetes that demonstrate good glycemic control, based on a HbA1c level less than 8%; Eligible Population in Denominator: 390; Numerator: 287; Rate = Numerator divided by Eligible Population: 73.59%. Example 2: Diabetes Management: A1c Testing: The percentage of members 18 to 64 years of age with diabetes (type 1 and type 2) who had the following: Hemoglobin A1c (HbA1c) testing; Total Eligible Members: 591; Members with Care: 548; Rate: 92.72%. Example 3: Diabetes: HbA1C control: 59%. Example 4: Measure: Diabetes Care, HbA1c Test (age 18-75); Number of Patients: 149; Your Quality Scores: 75.8%; 95% Confidence Interval: (68% - 82%). Source: Withheld. Images used with permission. 
[End of figure] Utilization or Cost Measures: Some, but not all, private entities in our review included utilization or cost measures in their performance reports to physicians. Total cost of care per enrollee was the most commonly used measure, but cost measures disaggregated by type of service--facility, pharmacy, primary care physician, and specialty--were also used. Some entities described how they limited their reporting of a total cost of care measure to those medical groups with a large number of enrollees. In one case the minimum enrollment size was 20,000 enrollees and in another it was 2,500 enrollees. Officials from one entity also told us that they allowed smaller physician practices to combine their data in order to meet the required number of enrollees for receiving feedback on cost of care. In addition to feedback on the total cost of care per enrollee, some reports given to groups of primary care physicians contained information on the cost of care provided by specialists in the entity's network. For example, one entity provided trend data that included the number of specialist visits (total and by type) and the number of patients with one or more visits for these specialty areas. (See figure 3.) For the two specialties with the most enrollee visits during the measurement period--orthopedic surgery and dermatology--the entity also provided the medical group with data on which specialists were seen most frequently and their cost per visit. This information was intended to encourage cost-efficient referrals. Another entity said it launched a program in July 2013 to provide feedback to primary care physicians on cardiologists' performance, showing where care was being delivered most efficiently. By providing such information, the entity expected primary care physicians to take cost differences into account when making referrals, rather than basing referrals solely on historical habits. Disseminating information to primary care physicians about the relative cost of specialty care providers is a key aspect of medical home and ACO programs. Figure 3: Private Entity's Display of Information on Specialty Referral Patterns in a Sample Report: [Refer to PDF for image: table] Total Specialist Visits: Prior: 18,585; Current: 19,226; Trend: 3%. Unique Patients: Prior: 7,002; Current: 7,334; Trend: 5%. Multiple Visits[A]: Prior: 3,941; Current: 4,015; Trend: 2%. Visits by Specialty: Orthopedic Surgery; Prior: 2,765; Current: 3,145; Trend: 14%. Dermatology; Prior: 2,326; Current: 2,525; Trend: 9%. Radiology; Prior: 2,988; Current: 2,400; Trend: -20%. Ophthalmology; Prior: 1,669; Current: 1,791; Trend: 7%. Cardiovascular Disease; Prior: 1,523; Current: 1,551; Trend: 2%. [A] More than 1 visit in measurement period. Source: Withheld. Image used with permission. [End of figure] The entities were fairly consistent in the number and types of utilization measures they selected for feedback reporting. The most common utilization measures reported by the private entities were physicians' generic drug prescribing rates, followed by emergency department visits, inpatient visits, hospital readmissions, and specialist visits. One entity provided additional detail under the emergency department visits measure to show the number of patients that repeatedly seek care at emergency departments. 
Officials from the entity told us that this measure was included to alert physicians to potentially avoidable hospital visits so that they can encourage patients to use office-based care before seeking care in more costly settings. (See examples of this measure as presented by private entities in their sample reports in figure 4.) Figure 4: Private Entity Examples of Emergency Department (ED) Visit Measures as Presented in Two Sample Reports: [Refer to PDF for image: 2 report examples] Example 1: Period: 1/2011-12/2011; Member Months: 59,459; Average membership: 4,955; Actual ED Visits/1,000 Measurement Year: 165.90. Example 2: Practice: Emergency Department Utilization: ED Visits/1000: 171.7; % ACSC ED Visits: 8.8%; Count Frequent ED Users (2 or more): 10; % Frequent Users (2 or more): 2%. Figure: Emergency Department Visits by Quarter: Visit log for time period of 2009 Q3 through 2012 Q2. Source: Withheld. Image used with permission. Note: In example 2, Ambulatory Care Sensitive Conditions (ACSC) refers to conditions where appropriate ambulatory care prevents or reduces the need for admission to the hospital. [End of figure] Benchmarks: To evaluate physician performance, the selected private entities compared the measure data to different types of benchmarks. Some entities compared each physician group's performance results to that of a peer group (e.g., others in the entity's network or others in the collaborative's state or region); some entities compared physician groups' results to a pre-established target; and others gauged physician groups' progress relative to their past performance. (See figure 5.) Entities generally used two or three such benchmarks in their feedback reports. For example, one entity separately displayed results for the medical home's commercially insured, Medicare insured, and composite patient population. Within each of these population groups, it compared the practice's performance to the average for nonmedical home practices, as well as to the practice's performance in the prior measurement year. The entity also gave narrative detail to indicate favorable or unfavorable performance. The most common benchmark for the entities in our study was a physician group's performance relative to the previous measurement period. However, some entities used this benchmark only for utilization/cost measures and not for quality measures. Figure 5: Private Entity Examples of Benchmark Comparisons as Presented in Three Sample Reports: [Refer to PDF for image: 3 report examples] Example 1: Region: Western; Specialty: Family Practice; Clinical Quality Performance Detail: Clinical Category/Quality Measure: Acute Pharyngitis Testing: Throat culture or antigen agglutination test for streptococcus 3 days before through 3 days after a sole diagnosis of acute pharyngitis is identified and after which an antibiotic was dispensed within 3 days; Practice Quality %: 91%; Specialty Quality %: 78%; Practice to Specialty %: 100%; Earned Points: 1.00. Clinical Category/Quality Measure: Adolescent Well-Care Visits; 1 or more comprehensive well-care visits with a PCP or an OB/GYN; Practice Quality %: 52%; Specialty Quality %: 59%; Practice to Specialty %: 88%; Earned Points: 0.00. Example 2: Adult Quality Measures - Medical Group Results: Contact Name: Sample Q. Sample; Group Name: The Sample Group. 
Measure: Breast Cancer Screening (age 40-69); Number of Patients: 140; Your Quality Score: 64.3%; 95% Confidence Level: (56%-72%); Average Quality Score for State: 70.4%; State ABC Benchmark: 87.0%; HEDIS 2011 Commercial PPO 90th Percentile: 72.5%. Measure: Cervical Cancer Screening (age 21-64); Number of Patients: 430; Your Quality Score: 69.8%; 95% Confidence Level: (65%-74%); Average Quality Score for State: 71.1%; State ABC Benchmark: 88.5%; HEDIS 2011 Commercial PPO 90th Percentile: 79.0%. Measure: Chlamydia Screening (age 16-24); Number of Patients: 111; Your Quality Score: 50.5%; 95% Confidence Level: (41%-60%); Average Quality Score for State: 41.4%; State ABC Benchmark: 72.2%; HEDIS 2011 Commercial PPO 90th Percentile: 51.0%. Example 3: PCMH - ED Visits: Practice vs. Non PCMH Comparison. ED visits per 1,000 members: (based on claims incurred in 5 months and paid in 8 months vs. same time period in prior period). Commercial: Practice: Prior: 149; Current: 150; 0%. Non PCMH: Prior: 177; Current: 192; +8%. Medicare: Practice: Prior: 336; Current: 361; +8%. Non PCMH: Prior: 297; Current: 337; +14%. Composite: Practice: Prior: 158; Current: 160; +1%. Non PCMH: Prior: 184; Current: 201; +9%. Favorable 8% variance when compared to Non PCMH. Source: Withheld. Image used with permission. Note: In example 2, the state Achievable Benchmark of Care (ABC) refers to the health care collaborative's calculated benchmark for the state's clinics based on its own criteria. [End of figure] Largely Relying on Claims Data, Health Insurers Spent 4 to 6 Months to Produce Annual Reports and Typically Provided Feedback in the Interim: Private entity officials told us they relied on claims as their primary data source for performance reporting. However, several private entities noted shortcomings in relying solely on claims data--the billing codes that describe a patient's diagnoses, procedures, and medications--for performance reporting.[Footnote 25] Some entities supplemented their claims data by obtaining information from EHRs, patient satisfaction surveys, or chart extractions.[Footnote 26] Entities noted that using EHR data was resource-intensive for both providers and payers, because they depended on physician groups to submit the information. The entities we spoke to have had limited success in using EHR data as a primary data source, although many saw it as complementary to claims data. Another entity supplemented its claims data with data from registries that compile information from administrative data sets, patient medical records, and patient surveys, and thus have the capacity to track trends in quality over time. The health insurers in our review typically spent from 4 to 6 months to produce and distribute annual performance reports; in contrast, the health care collaboratives spent 9 to 10 months. (See illustrations of these timelines in figure 6.) As is common in the health insurance industry, payers require a 3-month interval after the performance period ends--referred to as the claims run-out--to allow claims for the services furnished late in the measurement period to be submitted and adjudicated for the report. 
The claims run-out was followed by 1 to 3 months to prepare the data, a period that allowed for provider attribution, risk-adjustment,[Footnote 27] measure calculation,[Footnote 28] and quality assurance.[Footnote 29] One statewide health care collaborative stated that the quality assurance process is helpful in increasing physician trust because each group is able to compare its own data with the collaborative's data before results are final. The statewide health care collaboratives we spoke with required additional time to collect and aggregate data from multiple health insurers, and their final reports were issued at least 9 months after the end of the performance period. The time needed for some or all of these report production steps varied depending on the entity and the types of measures included. Figure 6: Illustration of Timelines to Report Production for Health Insurers and Statewide Health Care Collaboratives: [Refer to PDF for image: 2 timeline illustrations] Illustration of health plan timeline: Performance period: 12 months; 4-6 month lag time: * Final claims submission: 3 months; * Report production: 1-3 months; Report issued. Illustration of health care collaborative timeline: Performance period: 12 months; 9-10 month lag time: * Final claims submission: 3 months; * Data submission: 1 month; * Quality assurance: 4 months; * Measure calculation: 1 month; * Results review: 1 month; Report issued. Source: GAO analysis of private entity information. [End of figure] Collaboratives often used all-payer claims databases--centralized databases to which each payer submits claims data on that state's health care providers--for aggregate reporting to providers. Officials from entities told us that all-payer claims databases are helpful because they provide physicians with a better picture of their entire patient panel, not just results determined by individual payers for limited sets of patients. One entity noted that it aggregates its quality data with other payers in its commercial market through a statewide organization, because no one payer can provide statistically meaningful data to a physician group on its own. Officials from one entity with all-payer claims database experience told us that the addition of Medicare data into these databases would improve the information available for measurement and feedback. In addition, one entity suggested that a multipayer database could help with feedback to physicians in groups of all sizes, including small practices, because the higher number of patients would generate sufficient data for calculating reliable measures. However, one entity acknowledged that using all-payer databases requires more time for merging data from different payers in different formats, and another entity noted the challenges of customizing reports for each medical group's patient population. Private entities told us that physicians valued frequent feedback on their performance so that they have time to make practice changes that may result in better performance by the end of the measurement period. In response, these entities typically provided feedback reports on an interim basis throughout the measurement period. Interim reports typically covered a 1-year performance period, and were commonly issued on a rolling monthly, quarterly, or semiannual schedule. Entities also noted that frequent reporting throughout the period updated physicians on their performance so that year-end results were anticipated and better understood. 
Some entities in our study elected to issue interim reports that build up to the 12-month performance period by continually adding data from month to month. Those that used preliminary data that may not account for all final claims in building reports told us that such data starts to become useful about 3 to 6 months into the performance year. They also stated that, although the interim reports may be limited by the use of rolling or incomplete data, providers generally seek this information for early identification of gaps in care. Private Entities Generally Offered Access to Additional Report Detail and Other Resources to Help Physicians Improve Performance: Private entities generally offered additional report detail intended to enhance physicians' understanding of the information contained in their reports or in response to physician requests for more data. Private entity officials told us that, because physicians prefer dynamic reports with as much detail as possible, they generally sent reports that could be expanded to show individual physician or patient-level data. Some entities formatted their reports to include summary-level information on quality and cost measures in labeled sections, with supplemental information following the summary data. Other entities provided additional reports or supplemental data through a web portal that allowed providers to see individual physician or patient-level detail. Private entities sent reports in multiple file formats, such as in a spreadsheet, some of which allowed report recipients to sort their data.[Footnote 30] Entities in our study also offered resources designed to assist physician groups with actionable steps they could take to improve in the next performance period. Most entities told us they offered resources to physician groups, such as consultations with quality improvement professionals, forums for information-sharing, and documents on best practices. For example, one entity's staff worked directly with practices to improve their results by distributing improvement guidelines for each performance measure included in the feedback report. In addition, the entity's officials told us they also convened workgroups to review trend information and paid particular attention to differences between medical homes and nonmedical homes. CMS Feedback Included Group-Determined Physician Quality Measures and Only One Benchmark; CMS Issued Reports Less Frequently than Private Entities: CMS has provided feedback to increasing numbers of physician practices each year in order to eventually reach all physicians. Each medical group's chosen method of quality data submission determined the quality measures included in its report, to which CMS added health care costs and certain outcomes measures. CMS's report generation process took slightly longer than that of most private entities in our study, and the agency did not provide interim performance data during the measurement period. CMS feedback reports have included information to assist providers in interpreting their performance results. CMS Provided Feedback to Physicians in Groups with 25 or More Eligible Professionals in 2013: Unlike the private entities we contacted, which selected a limited set of physicians to receive feedback reports, CMS is mandated to apply the VM to all physicians by 2017. Therefore, the agency faces certain challenges not faced by private entities as it has expanded its feedback program to reach increasing numbers of physicians. 
In preparation for implementation of the VM, CMS provided performance reports to 6,779 medical groups in September 2013. In 2014, CMS plans to disseminate reports to physicians in practices of all sizes. As of September 2013, CMS had not yet determined how to report to smaller groups and physicians in solo practices. According to CMS, the decision not to present VM information to smaller groups stemmed from concerns regarding untested cost metrics and administrative complexity. CMS agreed with a 2012 GAO recommendation to develop a strategy to reliably measure the performance of solo and small physician practices, but has not yet finalized such a strategy.[Footnote 31] CMS Feedback Contained Varying Quality but Consistent Cost and Outcomes Measures and Assessed Performance Against a Single Benchmark: Under the CMS approach to performance reporting, the content of feedback reports related to quality measures may vary across providers. Unlike our selected private entities, the agency has allowed physician groups to select the method by which they will submit quality-of-care data, which, in turn, determines the measures on which they receive feedback. CMS used claims data for a consistent set of measures in all of its feedback reports for performance on cost and outcomes. Quality Measures: For the CMS 2013 reports, medical groups submitted data on quality measures to CMS via a web interface or through a qualified registry; if a group did not select either of these options, the agency calculated quality measures based on claims data. Both CMS and private entities focused on preventive care and management of specific diseases.[Footnote 32] * Web interface. Quality measures under this method pertain to care coordination, disease management, and preventive services.[Footnote 33] CMS required groups reporting via the web interface to submit data on 17 quality measures--such as hemoglobin A1C levels for control of diabetes--for a patient sample of at least 218 beneficiaries.[Footnote 34] * Registries. Some groups submitted data for quality measures via qualified registries--independent organizations, typically serving a particular medical specialty, that collect and report these data to CMS. CMS required groups reporting to a qualified registry to submit at least three measures--such as whether cardiac rehabilitation patients were referred to a prevention program--for at least 80 percent of patients.[Footnote 35] * Administrative claims. As a default, if a group did not report via web interface or qualified registry, CMS calculated quality measures using claims data. In September 2013, the majority of groups with 25 or more EPs--nearly 90 percent--received quality scores based on claims data. CMS calculated performance on a set of 17 quality indicators, including several composite measures. For example, the diabetes composite measure included several different measures of diabetes control. Regardless of the method a group selected to submit quality-of-care data, CMS used claims to calculate three outcomes measures--two ambulatory care composite measures and hospital readmission. One ambulatory care composite included hospitalization rates for three acute conditions: bacterial pneumonia, urinary tract infections, and dehydration. Another composite included hospitalization rates for three chronic conditions: diabetes, chronic obstructive pulmonary disease (COPD), and heart failure. 
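Whether submitted by a group or calculated by CMS from claims, each quality measure ultimately reduces to a rate--a numerator of patients receiving the measured care divided by an eligible population--and a composite combines several such results. The following minimal sketch, written in Python, illustrates these two calculations. It is illustrative only: the rate formula follows the sample reports shown in figure 2, but the equal-weight averaging used for the composite is an assumption rather than a documented CMS specification.

def measure_rate(numerator, eligible_population):
    # Performance rate as shown in the sample feedback reports:
    # patients who received the measured service divided by the
    # eligible population (the denominator), expressed as a percentage.
    if eligible_population == 0:
        raise ValueError("no eligible cases; measure cannot be reported")
    return 100.0 * numerator / eligible_population

def composite_rate(component_rates):
    # Equal-weight average of component rates, e.g., a diabetes
    # composite built from several diabetes-control measures
    # (equal weighting is an assumption here).
    return sum(component_rates) / len(component_rates)

# Worked example from figure 2: 525 of 589 eligible members
# received annual hemoglobin A1c testing.
print(round(measure_rate(525, 589), 2))  # prints 89.13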
Cost Measures: CMS included cost measures--several of which differed from the measures private entities in our study reported to physicians--in all 2013 feedback reports (see figure 7). Using claims data, CMS calculated an overall measure of the cost of care as the total per capita costs for all beneficiaries attributed to each physician group.[Footnote 36] In addition, CMS separately reported total per capita costs for attributed beneficiaries with any of four chronic conditions: diabetes, heart failure, COPD, or coronary artery disease. This contrasts with the private entities, which typically reported a more limited set of cost-related measures focused on physicians' generic drug prescribing rates and hospital utilization. Figure 7: Cost Measures Displayed in CMS's 2013 Quality and Resource Use Reports: [Refer to PDF for image: table] Per Capita Costs for All Attributed Beneficiaries (Domain Score = +0.41): Cost Categories: All Beneficiaries; Your Medical Group Practice's Performance: Number of Eligible Cases: 1,351; Per Capita Costs Before Risk Adjustment: $19,135; Per Capita Costs After Risk Adjustment: $10,898; Performance of All 1,032 Groups with at Least 100 Eligible Professionals: Benchmark Per Capita Costs (Risk-Adjusted): $10,265; Average Range: Benchmark -1 Standard Deviation: $8,722; Benchmark +1 Standard Deviation: $11,808. Per Capita Costs for Beneficiaries with Specific Conditions (Domain Score = -0.03): Cost Categories: Diabetes; Your Medical Group Practice's Performance: Number of Eligible Cases: 373; Per Capita Costs Before Risk Adjustment: $25,396; Per Capita Costs After Risk Adjustment: $14,732; Performance of All 1,032 Groups with at Least 100 Eligible Professionals: Benchmark Per Capita Costs (Risk-Adjusted): $14,788; Average Range: Benchmark -1 Standard Deviation: $12,379; Benchmark +1 Standard Deviation: $17,198. Cost Categories: COPD; Your Medical Group Practice's Performance: Number of Eligible Cases: 149; Per Capita Costs Before Risk Adjustment: $36,685; Per Capita Costs After Risk Adjustment: $24,396; Performance of All 1,032 Groups with at Least 100 Eligible Professionals: Benchmark Per Capita Costs (Risk-Adjusted): $24,153; Average Range: Benchmark -1 Standard Deviation: $19,840; Benchmark +1 Standard Deviation: $28,466. Cost Categories: Coronary Artery Disease; Your Medical Group Practice's Performance: Number of Eligible Cases: 418; Per Capita Costs Before Risk Adjustment: $26,036; Per Capita Costs After Risk Adjustment: $17,750; Performance of All 1,032 Groups with at Least 100 Eligible Professionals: Benchmark Per Capita Costs (Risk-Adjusted): $17,265; Average Range: Benchmark -1 Standard Deviation: $14,415; Benchmark +1 Standard Deviation: $20,115. Cost Categories: Heart Failure; Your Medical Group Practice's Performance: Number of Eligible Cases: 227; Per Capita Costs Before Risk Adjustment: $34,857; Per Capita Costs After Risk Adjustment: $24,467; Performance of All 1,032 Groups with at Least 100 Eligible Professionals: Benchmark Per Capita Costs (Risk-Adjusted): $26,013; Average Range: Benchmark -1 Standard Deviation: $21,237; Benchmark +1 Standard Deviation: $30,788. Source: CMS. Note: Each domain score is an average of the standardized scores for all measures in the domain with at least 20 cases. Up to six equally weighted quality domain scores make up a medical group practice's quality composite score. 
[End of figure] Benchmarks: While private entities typically displayed performance information relative to multiple benchmarks for certain quality measures--such as peer groups, pre-established targets, or past performance--CMS compared each physician group's performance only to the national average (see figure 8). National average rates were calculated from all EPs and provider groups that reported on that particular quality measure. For quality measures, the agency compared a group's current performance to the prior-year national average of all groups. CMS reported calculating benchmarks by weighting the performance rate of each physician and group of physicians submitting data through any reporting mechanism for that specific quality measure, regardless of specialty, by the number of beneficiaries used to calculate the performance rate. According to CMS, where data for a measure are only available through one reporting option, such as a registry, the benchmark is the performance of all other groups that used the same reporting method to submit data for that quality measure. For cost measures, CMS compared a group's current performance to current-year national averages.[Footnote 37] Figure 8: Performance Benchmarks Displayed in CMS's 2013 Quality and Resource Use Reports: [Refer to PDF for image: table] Chronic Obstructive Pulmonary Disease (COPD): Performance Measures: COPD-1 COPD: Bronchodilator Therapy; Your Medical Group Practice's Performance: Number of Eligible Cases: 32; Performance Rate: 87.5%; Performance of All PQRS Participants Reporting the Measure: Benchmark Rate: Not Available; Average Range: Benchmark -1 Standard Deviation: Not Available; Benchmark +1 Standard Deviation: Not Available. Coronary Artery Disease (CAD): Performance Measures: CAD-1 CAD: Antiplatelet Therapy; Your Medical Group Practice's Performance: Number of Eligible Cases: 401; Performance Rate: 92.8%; Performance of All PQRS Participants Reporting the Measure: Benchmark Rate: 82.8%; Average Range: Benchmark -1 Standard Deviation: 76.9%; Benchmark +1 Standard Deviation: 88.8%. Performance Measures: CAD-2 CAD: Lipid Control; Your Medical Group Practice's Performance: Number of Eligible Cases: 401; Performance Rate: 72.1%; Performance of All PQRS Participants Reporting the Measure: Benchmark Rate: 88.8%; Average Range: Benchmark -1 Standard Deviation: 73.9%; Benchmark +1 Standard Deviation: 100.0%. Performance Measures: CAD-7 CAD: ACE Inhibitor or ARB Therapy for Patients with CAD and Diabetes and/or LVSD; Your Medical Group Practice's Performance: Number of Eligible Cases: 246; Performance Rate: 52.9%; Performance of All PQRS Participants Reporting the Measure: Benchmark Rate: 69.0%; Average Range: Benchmark -1 Standard Deviation: 53.8%; Benchmark +1 Standard Deviation: 84.1%. Source: CMS. [End of figure] According to CMS, this benchmarking approach should spur cost and quality improvement nationwide. However, this method may not always produce a national or nationally representative benchmark for quality of care, as it only captures providers who reported that measure in that year. In addition, a private entity in our review noted that national benchmarks do not reflect more local patterns of care. Several entities calculated benchmarks at the state or regional level; the performance of peers in the same geographic area may differ from national averages and have more relevance to the providers. 
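As context for these comparisons, the following minimal Python sketch illustrates the beneficiary-weighted averaging CMS described above: each reporting group's rate is weighted by the number of beneficiaries used to calculate that rate. The data values are hypothetical; this is a sketch of the stated weighting rule, not CMS's actual computation.

def weighted_benchmark(results):
    # results: list of (performance_rate, beneficiary_count) pairs,
    # one for each physician or group that reported the measure,
    # regardless of specialty or reporting mechanism.
    total = sum(count for _, count in results)
    return sum(rate * count for rate, count in results) / total

# Hypothetical example: three groups reporting the same measure.
print(round(weighted_benchmark([(92.8, 401), (75.0, 150), (88.0, 1200)]), 1))  # prints 88.0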
For example, absent multiple performance benchmarks--both national and local--a physician group might be unaware that while its performance exceeds national benchmarks, it is below the average within the state. Moreover, as we concluded in a 2012 report,[Footnote 38] without consistent measures showing performance over time, a physician group cannot gauge whether its performance is improving. CMS also provided broader benchmarks in the form of composite scores. CMS's 2013 feedback reports also included a "first look" at the VM, which did not yet affect the physician group's payment, but showed each group how its payments could be affected with the full implementation of the VM in 2017.[Footnote 39] The agency determined the VM based on each group's performance on quality and cost measures relative to other Medicare providers. To determine the VM, CMS classified each quality measure into one of six domains, following the national priorities related to clinical care, patient experience, population/community health, patient safety, care coordination, and efficiency established in its National Quality Strategy. CMS then calculated a quality composite score, weighting each domain equally. Similarly, to produce a cost composite score, CMS aggregated the per capita costs for beneficiaries with the four chronic conditions into a single measure and weighted it equally with total overall per capita costs. CMS then combined the quality and cost composite scores and compared them to national averages to determine an overall quality tier,[Footnote 40] which then determined the group's VM.[Footnote 41] (See figure 9.) Figure 9: CMS Sample Composite Scores, Quality Tier, and Value Modifier: [Refer to PDF for image: illustration] Illustration depicts the following: Your Quality Composite Score: Average; Standard Deviations from National Mean: -0.3. Your Cost Composite Score: Average; Standard Deviations from National Mean (negative scores are better): 0.04. Your Beneficiaries' Average Risk Score: 83rd Percentile. Your Quality Tiering Performance: Average Quality, Average Cost. Your Value-Based Payment Adjustment Based on Quality Tiering. Source: CMS. [End of figure] Also, unlike the private entities we interviewed, CMS's feedback reports did not provide information on a group's performance relative to prior years. CMS plans to expand to multiple benchmarks in the future--for example, it finalized a policy for 2014 to adjust the benchmark based on the specialty composition of each group. In 2012, GAO recommended that CMS develop benchmarks that assess the degree of performance improvement as well as the extent to which performance meets absolute targets.[Footnote 42] However, according to CMS, it is premature to establish such benchmarks without more data from medical groups, increased participation in PQRS, and an understanding by providers that the reported data will be used to adjust payment. CMS Spent 9 Months to Produce the 2013 Feedback Reports, and Did Not Provide Interim Reports: CMS's report generation process took longer than that of most private entities in our study because it required more steps. While most health insurers generated performance reports in 4 to 6 months, CMS issued reports about 9 months after the end of the January to December 2012 reporting period.
To produce its 2013 physician feedback reports using administrative claims, CMS began with the standard claims run-out period followed by intervals for provider attribution, measure calculation, risk adjustment, and quality assurance. (See figure 10.) CMS officials said they allowed a 3-month run-out interval to account for providers' late-year claims submissions. After the run-out period, CMS required 5 to 6 months for a series of additional tasks needed to prepare the data for reporting. CMS gave groups that submitted data via the web interface or registry options 3 months after the end of the 12-month performance period to do so, and then calculated the measures for these options over the next several months. Although FFS beneficiaries see multiple physicians, CMS attributed each beneficiary to a single medical group through its yearly attribution process.[Footnote 43] It used the claims for the 12-month reporting period to determine which group provided the beneficiary the most primary care and then assigned responsibility for performance on quality and cost measures to that group. Following attribution, the agency risk-adjusted the cost measures to account for differences in beneficiary characteristics[Footnote 44] and complexity, and standardized the cost measures by removing all geographic payment adjustments. Finally, CMS officials said they performed data checks to ensure accuracy before the reports were disseminated. Figure 10: Report Generation Timeline for CMS Performance Feedback Reports, September 2013: [Refer to PDF for image: timeline illustration] Performance period: 12 months; 9-10 month lag time: * Final claims and quality data submission: 3 months; * Provider attribution: less than 1 month; * Geographic adjustment removal: 1 month; * Measure calculation: 2 months; * Risk adjustment and quality assurance: 3 months; Report issued. Source: GAO analysis of CMS information. [End of figure]
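The attribution step described above can also be sketched briefly. In the following Python example, each beneficiary is assigned to the single group that provided the most primary care during the reporting period; counting primary care visits in claims is a simplifying assumption, as this report does not spell out CMS's precise service definitions or tie-breaking rules.
[Code example in Python]
# Minimal sketch of yearly attribution: assign each beneficiary to the
# one medical group that provided the most primary care, based on a
# 12-month claims file. Visit counting is a simplifying assumption.

from collections import Counter

# (beneficiary, billing group, whether the claim is for primary care)
claims = [
    ("bene1", "groupA", True), ("bene1", "groupA", True),
    ("bene1", "groupB", True), ("bene1", "groupC", False),
    ("bene2", "groupB", True),
]

def attribute(claims):
    visits = {}  # beneficiary -> Counter of primary care visits by group
    for bene, group, primary_care in claims:
        if primary_care:
            visits.setdefault(bene, Counter())[group] += 1
    # The group with the most primary care visits becomes responsible
    # for that beneficiary's quality and cost measures.
    return {bene: counts.most_common(1)[0][0] for bene, counts in visits.items()}

print(attribute(claims))  # {'bene1': 'groupA', 'bene2': 'groupB'}
[End of code example]
The risk adjustment and geographic standardization that follow attribution would then be applied to the attributed beneficiaries' cost measures before benchmarking.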
According to health insurers and collaboratives, physicians find that frequent feedback enables them to improve their performance more quickly;[Footnote 45] CMS, however, did not provide physicians interim performance feedback.[Footnote 46] With only annual feedback from CMS, physicians may be missing an opportunity to improve their performance on a more frequent basis. Asked if more frequent reporting was considered, CMS officials cited concerns about the time it would take to generate each set of reports. With each round, the agency would need to attribute all beneficiaries to a medical group, risk-adjust and standardize the cost measures, and compute the benchmarks for each measure. In addition, providing interim reports on quality data would require certain providers to report more frequently. For example, providers who submit via registry would need to finalize their data more often than annually. However, experts and CMS officials have stated that, with continued adoption of advanced data reporting technology, CMS may be able to generate reports more frequently. CMS Included Explanatory Information in Its Reports and Offered Other Resources Online: CMS provided general information on its website and through the Medicare Learning Network to assist providers in understanding the performance feedback and VM.[Footnote 47] Unlike private entities, CMS has not provided tailored guidance or action steps to help providers improve their scores. However, CMS resources included steps to access reports, a review of the methodology, suggested ways to use the data in reports, and contact information for technical support. A representative acting on behalf of a medical group could access the group's Quality and Resource Use Report (QRUR). In addition, CMS's web-based reports allowed providers to access further detail on the Medicare beneficiaries attributed to the group. For example, physicians could view their patients' percentage of total cost by type of service and hospital admission data. CMS included explanatory information within the reports for providers. In addition to comparative performance data, reports made available in September 2013 included a description of the attribution methods, the number of providers billing in each medical group, information about each attributed patient's hospitalizations during the year, and other details about the group's performance. In addition, CMS included within the QRUR a glossary of terms used in the feedback report. Conclusions: Payers have been refining their performance reports for physicians, a key component of their VBP initiatives. Private entities have selectively rolled out their feedback programs, generally applying them to relatively large groups of primary care physicians participating in medical homes and ACOs. Although they are not uniform in their approaches, the entities in our study used their discretion to select a limited number of quality and utilization/cost measures, calculated them using claims data, and assessed performance against a variety of benchmarks. In response to physicians' needs, their feedback reports tended to be frequent, timely, and dynamic. CMS's approach to performance reporting faces some unique challenges. First, it is driven by the statutory requirement that, by 2017, Medicare pay FFS physicians in groups of all sizes, including specialists, using a VM. Second, the agency has had to develop the feedback program in the context of pre-existing incentive programs, such as PQRS. CMS finalized several key changes to the feedback program for future reporting periods, as it expands the application of the VM to all physicians. Specifically, CMS continues to modify program components such as measures and reporting mechanisms as it works to align the reporting and feedback aspects of multiple programs. Despite these program modifications, we found that certain features of private entities' feedback programs, which are lacking in CMS's program, could enhance the usefulness of the reports in improving the value of physician care. * CMS's use of a single nationwide benchmark to compare performance on quality and cost forgoes richer benchmarking feedback that could benefit physicians. Private entities in our study measured provider performance against several benchmarks. CMS's reliance on a national average as the sole benchmark precludes providers from gauging their performance relative to their peers in the same geographic area. Without such contextual information, providers lack the feedback needed to better manage their performance and target improvement efforts. * Additionally, CMS disseminates feedback reports only once a year (for example, September 2013). This gives physicians little time (October through December) to analyze the information and make changes in their practices to score better in the next measurement period. The private entities we reviewed sent reports more than once a year, and reported that more frequent reporting enabled more frequent improvements.
Without interim performance reports, providers may not be able to make needed changes to their performance in advance of their annual VM payment modifications. Our findings also support past GAO recommendations that CMS reward physicians for improvement as well as for performance against absolute benchmarks, and develop a strategy to reliably measure solo and small practices, such as by aggregating data. Recommendations for Executive Action: As CMS implements and refines its physician feedback and VM programs, the Administrator of CMS should consider taking the following two actions to help ensure physicians can best use the feedback to improve their performance: * Develop additional performance benchmarks, such as state or regional averages, against which to compare physicians' performance; and: * Disseminate performance reports more frequently than the current annual distribution--for example, semiannually. Agency Comments and Our Evaluation: We provided a draft of this report to HHS for comment. In its written response, reproduced in appendix III, the department generally agreed with our recommendations, and reiterated our observation that the agency faces unique challenges with its mandate to report to Medicare FFS providers in groups of all sizes that encompass all specialty care areas. HHS conditionally agreed with our recommendation that reporting physician performance against multiple benchmarks would be beneficial, but asked for further information on private entities' practices and their potential use for Medicare providers. As we stated in the report, private entities generally use two or three different types of benchmarks to provide a variety of performance assessments. We found alternative benchmarks that could enhance Medicare feedback reporting by allowing physicians to track their performance in their own historical and geographic context. For example, some entities' reports included physician group performance on certain measures relative to their past performance, a comparison we previously recommended to HHS in December 2012. Although it agreed to consider developing benchmarks for performance improvement, HHS has yet to do so. A comparison to past performance allows a medical group to see how much, if at all, it has improved, regardless of where it stands relative to its peers. In this way, CMS can motivate physicians to continuously improve their performance. In addition, some entities in our review compared physician performance data to statewide or regional-level benchmarks. Because of the number of Medicare physicians, CMS has extensive performance data, which could enable more robust localized peer benchmarks than any individual health plan could generate. As we noted, such benchmarks reflect more local patterns of care that may be more relevant to physicians than comparisons to national averages alone. HHS further asserted that, because the physician feedback program's key purpose is to support the national VM program, it is appropriate to limit reporting to a single national benchmark. HHS expressed concern that displaying other benchmarks could be misleading and confusing for the purposes of the VM. However, CMS's reports provide a group's VM payment adjustment in a concise, one-page summary, as shown in figure 9. We do not believe that additional benchmark data, displayed separately, would detract from the information provided on the summary page; rather, such data could enhance the value of the reports for physicians.
HHS agreed with our second recommendation to disseminate feedback reports more frequently than on an annual basis. As seen in private entities' practice of using rolling or preliminary data for interim reporting, disseminating reports more frequently can help physicians improve their performance before CMS determines their VM payment adjustment. HHS commented that producing more frequent reports would first require modifying the PQRS data collection schedules. For example, groups of EPs that use the web interface and registry options currently are only required to submit data to CMS once a year. The registry option will eventually require groups to submit data to CMS on a quarterly or semiannual basis, and HHS noted that these requirements would have to be synchronized with the timing of data submission through the web interface and EHR options. The agency also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Administrator of CMS. The report also is available at no charge on GAO's website at [hyperlink, http://www.gao.gov]. If you or your staffs have any questions regarding this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Signed by: James Cosgrove: Director, Health Care: [End of section] Appendix I: Private Entity and Medicare Performance Feedback for Hospitals: This appendix contains information on the similarities and differences between private entities' and Medicare's performance reporting to hospitals. The private entities in our study provided feedback through a variety of value-based payment (VBP) initiatives, and several entities have made accountable care organizations the focus of their feedback programs. Payers' efforts to provide feedback to hospitals on their performance are centered on rewarding higher-quality and lower-cost providers of care. We followed the same methodology for comparing how private entities and the Centers for Medicare & Medicaid Services (CMS) conduct performance feedback reporting for hospitals as we did for examining physician-focused feedback programs. We interviewed representatives of the nine selected private entities about their feedback reporting to hospitals, if any, with regard to report recipients, data sources used, types of performance measures and benchmarks, frequency of reporting, and efforts to enhance the utility of performance reports. One statewide health care collaborative in our review was established through a partnership between the state medical society and hospital association, and provides feedback reports only to hospitals. We similarly requested sample feedback reports for hospitals. We interviewed CMS officials and obtained CMS documentation on its hospital feedback reporting activities, and compared these to private entity efforts. We also reviewed a sample CMS hospital feedback report from July 2013. CMS's hospital VBP efforts over the past decade have evolved to provide performance feedback to a range of hospital types, with a focus on acute care hospitals.
In 2003 the agency began with a quality incentive demonstration program designed to see whether financial incentives to hospitals were effective at improving the quality of inpatient care, and to publicly report that information. Since then, a number of laws have required CMS to conduct both feedback reporting and VBP programs for hospitals. These included the following: * The Medicare Prescription Drug, Improvement, and Modernization Act of 2003, which required the establishment of the Hospital Inpatient Quality Reporting Program, a pay-for-reporting initiative.[Footnote 48] The act also required CMS to make downward payment adjustments to hospitals that did not successfully report certain quality measures. That downward payment adjustment percentage was increased by the Deficit Reduction Act of 2005.[Footnote 49] * The Patient Protection and Affordable Care Act established Medicare's Hospital VBP Program for inpatient care provided in acute care hospitals.[Footnote 50] Under this program, CMS withholds a percentage of all eligible hospitals' payments and distributes those funds to high-performing hospitals. In reviewing current feedback reporting practices, we found that private entities and CMS report to hospitals on similar performance measures and that entities' feedback generally contains publicly available data. Table 1 compares features of the hospital feedback produced by those private entities in our study that report to hospitals through a VBP initiative and CMS's hospital VBP program. Table 1: Key Features of Hospital Feedback Reporting by Selected Private Entities and CMS: Report recipients: Private entities in our study reported to a range of hospital types: * Private entities reported to acute care general hospitals, including critical access hospitals (CAH)[A]; * One entity limited the data reported to small hospitals, such as CAHs, because of their small volume and limited services, and another entity used 5 years of combined data in its annual report to increase data reliability for CAHs; CMS reported to hospitals covered under statute: * CMS provided reports to more than 3,000 acute care general hospitals nationwide; * CMS did not report to hospitals excluded from its statutory mandate, including those excepted from CMS's Hospital Inpatient Quality Reporting (IQR) Program,[B] as well as those that do not meet minimum volume requirements.[C] Data sources: Private entities compiled information from Medicare, states, and other sources: * Private entities generally used publicly available Medicare, state hospital association, and other data to report on performance; * Medicare data sources included CMS's Hospital Compare website,[D] as well as CMS's patient satisfaction surveys; * Some entities used other national data sources such as a quality indicator tool set from the Agency for Healthcare Research and Quality or national registries; * One entity used hospital-reported all-payer data, and another entity used voluntary hospital-reported data on infections; CMS used claims, patient survey, medical record, and other administrative data: * CMS used data collected through its Hospital IQR Program, which included claims and administrative data, its patient satisfaction surveys, and other data. 
Measures: Private entities reported similar quality measures on processes of care, outcomes, patient safety, and patient satisfaction: * Private entities generally used similar measures on quality and utilization for hospital reporting, including patient safety and patient satisfaction; * Entities used as few as 15 and as many as 49 total measures in the four sample reports we obtained; * No entities reported to hospitals on cost measures; * One entity stated it focused on readmissions and emergency department care because reductions in these areas gave the entity a greater ability to control costs; CMS reported quality measures on clinical processes of care and patient satisfaction: * CMS will use three weighted quality domains in fiscal year 2014, covering clinical process (45% weighted), patient satisfaction (30% weighted), and outcomes (25% weighted) measures; * CMS included 13 clinical process measures, 8 patient satisfaction measures, and 3 outcomes measures (acute myocardial infarction, heart failure, and pneumonia 30-day mortality rates); * CMS removed measures through annual rulemaking that outlines specific criteria, such as measures that have "topped out"--those for which there is no significant difference among hospitals. Benchmarks: Private entities generally compared hospital indicators to network, state, and national averages: * Private entities generally compared hospital performance against absolute or relative benchmarks at the state and national levels; * One entity compared hospitals against network, regional, and national levels, as well as against previous measurement period performance; CMS compared hospitals on achievement against all hospitals' rates and on improvement over time: * CMS used both an achievement benchmark (a hospital's performance against all other hospitals) and an improvement benchmark (a hospital's performance from the baseline period to the performance period); * CMS rewarded the higher of the two benchmark scores and aggregated these scores for each quality domain. Timeliness and frequency: Private entities generally reported to hospitals on an annual basis, with some interim reporting: * Private entities generally reported to hospitals annually based on a 9-month or 12-month performance period; * Some private entities provided hospitals with interim feedback reports; CMS generally reported to hospitals annually: * CMS provided annual reports to hospitals; * CMS's reporting period varied by measure to ensure sufficient reliability; * CMS used a performance period of 9 months for value-based payment (VBP) in fiscal year 2013 and plans to use a 12-month performance period beginning in 2014. Enhancing utility: Private entities offered resources to assist hospitals in acting on performance data: * Some private entities offered consultations with quality improvement professionals, forums for information-sharing, and mentoring opportunities with hospital peers; * One entity hosted an annual best practices forum in 2012 where more than 400 hospitals shared their quality improvement experiences; CMS has offered educational resources on its website and through national provider calls: * CMS posted information on its hospital reporting and VBP programs on its website; * CMS held national provider calls to solicit hospitals' feedback. Source: GAO analysis of information from CMS and private entities. Note: We did not independently verify the information we obtained in our interviews with private entities.
[A] The Balanced Budget Act of 1997 established the CAH designation to target small rural hospitals with low patient volumes and short patient stays. A number of criteria are used to apply the CAH designation, including an average annual length of stay of 96 hours or less per patient for acute care services. Pub. L. No. 105-33, § 4201(c), 111 Stat. 251, 373-374 (1997). [B] The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 required the establishment of the Hospital IQR Program, a pay-for-reporting initiative. Pub. L. No. 108-173, § 501(b), 117 Stat. 2066, 2289 (2003). [C] Certain hospitals are excluded from participating in CMS's hospital VBP program. These include hospitals that pose immediate jeopardy to the health or safety of patients; hospitals that do not meet the minimum requirements for cases, measures, or surveys during the measurement period; and hospitals that are exempted by the Secretary of Health and Human Services (for example, Maryland was exempted in fiscal year 2013 after submitting a report to the Secretary describing how its similar state program achieves or surpasses CMS's hospital VBP program). [D] Medicare's Hospital Compare website was started in 2005 through its Hospital Quality Initiative in collaboration with other stakeholders, such as the American Hospital Association and the National Quality Forum. On Hospital Compare, consumers can review quality-of-care data for over 4,000 hospitals nationwide. [End of table] [End of section] Appendix II: Quality Measures Used in Sample Physician Feedback Reports Provided by Selected Private Entities: Table 2 summarizes the number of quality measures included in sample physician feedback reports we received from private entities in our study. These entities used their discretion to determine which measures to include in their reports. We analyzed the measures focused on quality of care and categorized them into common areas. Table 2: Number of Quality Measures Categorized by Type Found in Sample Physician Feedback Reports Provided by Selected Private Entities: Clinical areas reported: Cardiovascular; Entity A: 1; Entity B: 2; Entity C: 7; Entity D: 1; Entity E: 3; Entity F: 1; Entity G: 5; Entity H: 2. Diabetes; Entity A: 4; Entity B: 3; Entity C: 8; Entity D: 4; Entity E: 9; Entity F: 4; Entity G: 3; Entity H: 4. Medication management; Entity A: 0; Entity B: 0; Entity C: 6; Entity D: 0; Entity E: 0; Entity F: 2; Entity G: 5; Entity H: 0. Mental health; Entity A: 2; Entity B: 0; Entity C: 0; Entity D: 0; Entity E: 0; Entity F: 4; Entity G: 0; Entity H: 0. Musculoskeletal conditions; Entity A: 1; Entity B: 1; Entity C: 1; Entity D: 2; Entity E: 1; Entity F: 3; Entity G: 0; Entity H: 1. Pediatric care; Entity A: 4; Entity B: 0; Entity C: 3; Entity D: 5; Entity E: 4; Entity F: 11; Entity G: 10; Entity H: 17. Prevention and screening; Entity A: 3; Entity B: 5; Entity C: 3; Entity D: 3; Entity E: 4; Entity F: 7; Entity G: 3; Entity H: 3. Pulmonary and respiratory conditions; Entity A: 3; Entity B: 0; Entity C: 1; Entity D: 2; Entity E: 2; Entity F: 5; Entity G: 3; Entity H: 2. Nonclinical areas reported: Patient safety; Entity A: 0; Entity B: 0; Entity C: 3; Entity D: 3; Entity E: 0; Entity F: 0; Entity G: 0; Entity H: 0. Patient satisfaction; Entity A: 0; Entity B: 3; Entity C: 0; Entity D: 0; Entity E: 8; Entity F: 8; Entity G: 0; Entity H: 0. Other[A]; Entity A: 0; Entity B: 0; Entity C: 0; Entity D: 4; Entity E: 20; Entity F: 0; Entity G: 7; Entity H: 0.
Total number of quality measures; Entity A: 18; Entity B: 14; Entity C: 32; Entity D: 21; Entity E: 51; Entity F: 37; Entity G: 36; Entity H: 29. Source: GAO analysis of private entity information. Note: For this analysis, we examined the measures included in sample physician performance feedback reports obtained from private entities. We categorized measures into common areas based on predominant patterns. One of the nine entities included in our review only provided feedback reports to hospitals, and is not shown here. [A] This area included measures related to information technology, access to care, participation in quality improvement initiatives or external recognition programs, and assessment and care plans for urinary incontinence. [End of table] [End of section] Appendix III: Comments from the Department of Health and Human Services: Department of Health & Human Services: Office of The Secretary: Assistant Secretary for Legislation: Washington, DC 20201: March 13, 2014: James Cosgrove: Director, Health Care: U.S. Government Accountability Office: 441 G Street NW: Washington, DC 20548: Dear Mr. Cosgrove: Attached are comments on the U.S. Government Accountability Office's (GAO) draft report entitled, "Medicare: Certain Physician Feedback Reporting Practices of Private Entities Could Improve CMS's Efforts" (GAO-14-279). The Department appreciates the opportunity to review this report prior to publication. Sincerely, Signed by: Jim A. Esquea: Assistant Secretary for Legislation: Attachment: General Comments Of The Department Of Health And Human Services (HHS) On The Government Accountability Office's (GAO) Draft Report: "Medicare: Certain Physician Feedback Reporting Practices Of Private Entities Could Improve CMS's Efforts" (GAO-14-279): The Department of Health and Human Services (HHS) appreciates the opportunity to review and respond to this GAO draft report. In the report, GAO examined (1) how and when private health entities report performance data to providers, and what information is provided; and (2) how the timing and approach CMS uses to report performance data to providers compare to that of private health care entities. Section 1848(n) of the Social Security Act (the Act) requires CMS to provide confidential reports to physicians and, as appropriate, to groups of physicians, that measure the resources involved in furnishing care to Medicare beneficiaries. Section 1848(n)(1)(A)(iii) of the Act also authorizes CMS to include information on the quality of care furnished to Medicare beneficiaries by the physician or group. CMS has phased in the production and dissemination of these reports. On September 16, 2013, CMS made available calendar year (CY) 2012 reports to 6,779 physician groups nationwide with 25 or more physicians and other practitioners. These reports covered approximately 400,000 physicians practicing in large medical groups. Not only did these reports provide comparative quality of care and cost information, but they also previewed how the groups of physicians might fare under the value-based payment modifier (VM) (authorized by section 1848(p) of the Act to begin January 1, 2015) and how the VM could affect their payments under the Medicare Physician Fee Schedule (PFS). Additionally, and in response to feedback we received from recipients of prior year reports, the 2012 reports contained detailed, confidential beneficiary-specific data on each group's attributed beneficiaries and their hospitalizations, and the group's associated eligible professionals (EPs).
Complementing the 2012 reports were three tables, downloadable only by an authorized and registered group representative, that provided information on each beneficiary attributed to the group and each eligible EP billing under the group's Taxpayer Identification Number. In a final rule issued in November 2013, CMS described its plans to provide feedback reports to all Medicare-enrolled physicians during the summer of 2014. We note that these reports will go to hundreds of thousands of physicians nationwide--a population that is substantially larger than the population to which any one private entity will provide reports. The Medicare physician feedback program is multi-faceted and broader than these private entity programs in terms of scope, magnitude, and participation. While there are lessons to be learned from these private entities, it is important to note that the subject GAO report indicates these entities "almost exclusively focused their feedback efforts on primary care physician groups participating in medical homes and accountable care organizations." In comparison, Medicare's physician feedback program covers all physicians paid under the Medicare Physician Fee Schedule and provides feedback reports to groups nationwide regardless of specialty or size. The GAO recommendations and HHS responses are discussed below. Recommendation: The GAO recommends that CMS develop performance benchmarks that compare physicians' performance against additional benchmarks such as state or regional averages. HHS Response: HHS conditionally concurs with this recommendation pending additional information from GAO regarding the types of information being provided by private entities and an assessment of whether and to what degree that information would be useful to providers serving Medicare beneficiaries. To the extent that such additional benchmark information could be helpful in this regard, CMS will work with its quality improvement organizations to utilize such benchmarks in the technical assistance they provide to practitioners, particularly those subject to the VM, at the local level. The VM is a national program that will affect payment of all physicians under the Medicare PFS. Accordingly, we believe that it is appropriate to measure performance for all physicians and other EPs that are paid under the Medicare PFS and subject to the VM using the same national benchmarks. The key purpose of the physician feedback reports is to support the VM and, therefore, the reports should focus on the information practitioners need in order to understand how and why the VM payment adjustment is applied to their payments under the PFS. We believe that including additional benchmark information in those reports could be misleading and confusing for the purposes of the VM and could also be misconstrued to imply that poor performance against national benchmarks is acceptable so long as performance is comparable to a more local benchmark. We also believe that all Medicare FFS beneficiaries are entitled to high-quality care, as measured against national benchmarks, regardless of where they reside or receive care. Recommendation: The GAO recommends that CMS disseminate performance reports more frequently than the current annual distribution--for example, semi-annually. HHS Response: HHS concurs with the recommendation and is examining how to do so in future years.
We believe that before we begin to provide feedback data more frequently, the first step is to complete the phase-in of the feedback reporting program to ensure the reports go to all Medicare-enrolled physicians as required by the statute. As described above, we anticipate that the CY 2013 reports (available summer of 2014) will accomplish this goal. We note that producing the reports more frequently than on an annual basis would require CMS to make changes in Physician Quality Reporting System (PQRS) data collection, including but not limited to requiring physicians to submit data more frequently than annually. Currently, physicians reporting under the PQRS through the registry, electronic health record (EHR), and web interface reporting methods are required to submit data to CMS only once a year. It is worth noting, however, that we do require registries that participate in PQRS to provide reports more frequently than annually to the EPs on whose behalf they report. Specifically, two reports per year are required for traditional registry reporting and four per year for the new Qualified Clinical Data Registry option implemented for 2014 PQRS. Therefore, any increase in the frequency of the feedback reports may have less impact for physicians reporting through the registry methods. Increasing the frequency of feedback reports for physicians and other EPs that report using the EHR and web interface methods under the PQRS may require a more significant change to current PQRS reporting practices. A necessary first step would be to increase the frequency of PQRS data collection from EPs and data intermediaries such as registries. Once this change is successfully implemented, CMS can explore more frequent dissemination of quality performance feedback in reports. [End of section] Appendix IV: GAO Contact and Staff Acknowledgments: GAO Contact: James Cosgrove, (202) 512-7114 or cosgrovej@gao.gov: Staff Acknowledgments: In addition to the contact named above, individuals making key contributions to this report include Rosamond Katz, Assistant Director; Sandra George; Katherine Perry; and E. Jane Whipple. [End of section] Related GAO Products: Electronic Health Record Programs: Participation Has Increased, but Action Needed to Achieve Goals, Including Improved Quality of Care. [hyperlink, http://www.gao.gov/products/GAO-14-207]. Washington, D.C.: March 6, 2014. Clinical Data Registries: HHS Could Improve Medicare Quality and Efficiency through Key Requirements and Oversight. [hyperlink, http://www.gao.gov/products/GAO-14-75]. Washington, D.C.: December 16, 2013. Medicare Physician Payment: Private-Sector Initiatives Can Help Inform CMS Quality and Efficiency Incentive Efforts. [hyperlink, http://www.gao.gov/products/GAO-13-160]. Washington, D.C.: December 26, 2012. Medicare Program Integrity: Greater Prepayment Control Efforts Could Increase Savings and Better Ensure Proper Payment. [hyperlink, http://www.gao.gov/products/GAO-13-102]. Washington, D.C.: November 13, 2012. Medicare Physician Feedback Program: CMS Faces Challenges with Methodology and Distribution of Physician Reports. [hyperlink, http://www.gao.gov/products/GAO-11-720]. Washington, D.C.: August 12, 2011. Value in Health Care: Key Information for Policymakers to Assess Efforts to Improve Quality While Reducing Costs. [hyperlink, http://www.gao.gov/products/GAO-11-445]. Washington, D.C.: July 26, 2011. Medicare: Per Capita Method Can Be Used to Profile Physicians and Provide Feedback on Resource Use.
[hyperlink, http://www.gao.gov/products/GAO-09-802]. Washington, D.C.: September 25, 2009. Medicare: Focus on Physician Practice Patterns Can Lead to Greater Program Efficiency. [hyperlink, http://www.gao.gov/products/GAO-07-307]. Washington, D.C.: April 30, 2007. [End of section] Footnotes: [1] In an ACO, groups of physicians and hospitals share responsibility for providing care to a designated group of patients, as well as financial risk. Payers generally require providers to meet predetermined performance targets on quality and cost in order to share in savings or receive other incentives. Medical homes allow physicians to earn incentives for reducing utilization of high-cost settings, such as emergency departments and hospitals, through care coordination activities. Medical homes unite physicians of various specialties, especially targeting patients with chronic illnesses. [2] Private feedback reports are generally designed to identify differences between providers' current practices and desired performance and may be combined with financial incentives to encourage improvement. Other performance reporting makes provider information available to the public through recognition programs or websites, thus using professional reputation to promote high-quality care. In this report, performance feedback reports refer to the private reports sent to providers from a payer or other entity. [3] How well this approach will work is still unclear. Experts have noted that the evidence of VBP's effectiveness is limited. Proponents of VBP contend that, with sufficient financial incentive, VBP can improve overall quality of care and promote efficient practice patterns. However, critics assert that fundamental issues, such as developing appropriate performance measures for all provider types, have not yet been adequately addressed. [4] Pub. L. No. 112-240, § 609(b)(5), 126 Stat. 2313, 2350 (Jan. 2, 2013). [5] America's Health Insurance Plans is an industry trade association representing private entities that provide health insurance coverage through employers, the individual market, and public programs. [6] Blue Cross and Blue Shield Association coordinates legislative, regulatory, and political strategy on behalf of a nationwide group of 37 independent, locally operated insurance companies. [7] Network for Regional Healthcare Improvement represents health care collaboratives at the city, state, and regional level that provide programs for over 110 million Americans. It brings together multiple stakeholders--physicians, hospitals, health insurers, and employers--to address various health care topics, including performance measurement and payment and delivery system reform. [8] Nearly all of our selected entities reported to physicians on their performance. One private entity only reported to hospitals on performance, and is included in appendix I. [9] We reported on physician payment incentives in 2012. See GAO, Medicare Physician Payment: Private-Sector Initiatives Can Help Inform CMS Quality and Efficiency Incentive Efforts, [hyperlink, http://www.gao.gov/products/GAO-13-160] (Washington, D.C.: Dec. 26, 2012). [10] CMS also includes other types of providers, such as long-term care hospitals, under separate VBP programs. These programs are not included in our scope because private entities typically do not have reporting programs for such provider types. [11] Pub. L. No. 109-432, § 101(b), 120 Stat. 2922, 2975 (Dec. 20, 2006).
[12] Through 2014, CMS plans to provide an upward payment adjustment to physicians who satisfactorily report quality data. In 2015, groups that do not satisfactorily report data are to be subject to a downward percentage payment adjustment. [13] Starting in 2011 and continuing through 2015, eligible Medicare physicians who demonstrate meaningful use of certified EHR technology consistent with CMS requirements are eligible to receive Medicare EHR incentive payments. About 36 percent of EPs received incentive payments in 2012. See GAO, Electronic Health Record Programs: Participation Has Increased, but Action Needed to Achieve Goals, Including Improved Quality of Care, [hyperlink, http://www.gao.gov/products/GAO-14-207] (Washington, D.C.: Mar. 6, 2014). [14] Registries qualified for reporting to CMS differ from clinical data registries--entities that collect and analyze detailed information on the therapies that patients receive and changes in their clinical condition over time--in that their sole function is to compile and submit data to PQRS. See GAO, Clinical Data Registries: HHS Could Improve Medicare Quality and Efficiency through Key Requirements and Oversight, [hyperlink, http://www.gao.gov/products/GAO-14-75] (Washington, D.C.: Dec. 16, 2013). [15] Pub. L. No. 110-275, § 131(c), 122 Stat. 2494, 2526 (July 15, 2008). [16] Pub. L. No. 111-148, §§ 3003(a), 3007, 124 Stat. 119, 367, 373 (Mar. 23, 2010). [17] [hyperlink, http://www.gao.gov/products/GAO-13-160]. [18] Some entities in our study had previous experience in giving feedback to providers in these delivery models, while other entities were in the process of transitioning their incentive programs from pay for performance programs to these newer delivery models. Most entities told us they disseminated performance reports to a designated point of contact for subsequent distribution to the group's physicians. Many entities said they tracked report receipt or website downloads, and they stated that report use is high, especially when tied to incentive payments. [19] Practices with small numbers of attributed enrollees can receive skewed results that do not appropriately reflect the practice's average performance over time. One entity told us that even one or two catastrophic cases in small patient populations could disproportionately affect the results and could be misleading. [20] Entities told us they generally used nationally endorsed quality measures, but have considered adding measures to meet the particular needs of their program and physicians. Nationally endorsed measures could include measures from the National Quality Forum or the National Committee for Quality Assurance's Healthcare Effectiveness Data and Information Set. For example, at one entity, a multistakeholder Measurement and Reporting Committee supplemented the National Quality Forum-endorsed measures with nonendorsed generic drug measures. [21] Process measures include clinical activities such as tests, screenings, and immunizations. In contrast, outcomes measures require data about patient health, such as the percentage of a physician's patients at target blood pressure. [22] The hemoglobin A1C test is a blood test used to manage type 1 or type 2 diabetes by looking at the patient's average blood sugar level for a period of time. A high hemoglobin A1C level indicates poor control of diabetes. [23] In a study of 23 health insurers accounting for 66 percent of U.S. 
commercial enrollment, America's Health Insurance Plans found wide variation in measures under common measurement areas. See A. Higgins, G. Veselovskiy, and L. McKown, "Provider Performance Measures in Private and Public Programs: Achieving Meaningful Alignment With Flexibility to Innovate," Health Affairs, vol. 32, no. 8 (2013). [24] According to one entity, physicians complained that receiving feedback on as many as 29 performance measures did not support meaningful patient care. [25] Some services a patient receives are not paid for by the entity (such as care furnished at a Department of Veterans Affairs facility) and therefore the entity's claims data may give an incomplete picture of a patient's health. [26] Since claims are designed for reimbursement purposes, the data are likely to encompass all services and supplies furnished by a provider. However, because payment in an FFS setting is not directly related to the number or type of conditions for which the physician codes, an EHR may have a more complete set of patient diagnoses than claims data. In addition, EHRs contain clinical data--lab results or vital signs--that can be used to identify conditions that may not have been recognized or coded by the physician. [27] Data on costs per enrollee are risk-adjusted to reflect the enrollee's demographics and health status. This adjustment helps ensure that providers are not disadvantaged in the comparisons for serving patients with poor health status. [28] To calculate the percentage of a defined patient population that receives a particular process of care or achieves a particular outcome, entities define a denominator population by demographic information or diagnosis. For example, to calculate the rate of dilated macular examinations, the denominator could be specified as all patients aged 50 years and older with a diagnosis of age-related macular degeneration. [29] The quality assurance process may entail several steps in reviewing data for errors and accuracy, including data submission audits and internal quality checks. Some entities said that they included a physician group review period that allows groups to appeal their results using supporting data. [30] In response to provider needs, entities were flexible in how they distributed reports. Entities told us they primarily distributed reports over secure email or through a web portal. One entity sent copies by mail to reach small, rural providers. [31] See [hyperlink, http://www.gao.gov/products/GAO-13-160]. [32] Although specific measures vary by reporting method, CMS places all quality measures within an agencywide national quality strategy. In 2011, HHS issued a National Quality Strategy establishing six broad priority domains: patient and family engagement, patient safety, care coordination, population/public health, efficient use of health care resources, and clinical process/effectiveness. Experts have recommended that CMS align measures among its various Medicare payment incentive programs, such as PQRS and the EHR Incentive Programs, and CMS has begun to do so. See C. Damberg, Physician Payment Reform: Designing a Performance-Based Incentive Program. Testimony before the House Energy and Commerce Committee, Subcommittee on Health, on June 5, 2013. 
[33] CMS's disease management measures relate to coronary artery disease, diabetes, heart failure, hypertension, and ischemic vascular disease. The preventive care measures include screenings for breast cancer, colorectal cancer, body mass index, tobacco use, high blood pressure, and clinical depression, as well as influenza and pneumococcal vaccinations. [34] If the pool of eligible assigned beneficiaries is fewer than 218, then providers must report on all assigned beneficiaries. [35] Depending on the specialty focus, qualified registries may collect and submit data from over 200 CMS-determined measures. [36] Total per capita costs include services covered under Medicare Part A and services covered under Medicare Part B furnished by all providers seen by the beneficiary. [37] For cost measures, the peer group for medical groups of 100 or more EPs is all groups of 100 or more EPs, while the peer group for groups of 25 to 99 EPs is all groups of 25 or more EPs. CMS compares cost measures after risk-adjustment and price standardization. [38] See [hyperlink, http://www.gao.gov/products/GAO-13-160]. [39] The 2015 VM, which will impact FFS payment in 2015 for groups with 100 or more EPs, is to be based on the groups' performance in 2013. [40] CMS determines whether a group's performance was significantly different from average performance, with a 95 percent confidence interval or one standard deviation above or below the mean. [41] Although the feedback report is based on claims from all EPs in the medical group, only payments for physician services are affected by the VM. [42] See [hyperlink, http://www.gao.gov/products/GAO-13-160]. [43] Private entities providing reports in a collaborative care setting such as a medical home do not have to complete this step, as each beneficiary has already been associated with a provider. [44] Patient characteristics include age, gender, Medicaid eligibility, history of medical conditions, and end-stage renal disease. [45] In addition to evidence from our selected entities, recent research on private feedback reporting found that timeliness of data is critical for management of patient care, as well as for tracking performance and monitoring progress toward improvement goals. For example, based on a review of literature on private reports, an examination of selected examples of community quality collaborative feedback reports, and a case study of Cincinnati's experience with private reporting, one study recommended that private feedback reports should be updated at least quarterly. See Dale Shaller and David Kanouse, Private "Performance Feedback" Reporting for Physicians: Guidance for Community Quality Collaboratives, Publication No. 13-0004 (Rockville, MD: Agency for Healthcare Research and Quality, Nov. 2012). [46] Providers can access quarterly reports from CMS based on claims data. However, these interim reports only show the number of measures (or measures groups) reported, and the number of those accurately reported. They do not show performance rates. [47] CMS runs the Medicare Learning Network as an outreach program to educate health care professionals on CMS programs, using e-mail, calls, web-based courses, and other methods. [48] Pub. L. No. 108-173, § 501(b), 117 Stat. 2066, 2289 (Dec. 8, 2003). [49] Pub. L. No. 109-171, § 5001(a), 120 Stat. 4, 28 (Feb. 8, 2006). [50] Pub. L. No. 111-148, § 3001(a)(1), 124 Stat. 119, 353 (Mar. 23, 2010). Other hospital types, such as long-term care hospitals, are included under separate VBP programs.
[End of section] GAO's Mission: The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability. Obtaining Copies of GAO Reports and Testimony: The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO's website [hyperlink, http://www.gao.gov]. Each weekday afternoon, GAO posts on its website newly released reports, testimony, and correspondence. To have GAO e-mail you a list of newly posted products, go to [hyperlink, http://www.gao.gov] and select "E-mail Updates." Order by Phone: The price of each GAO publication reflects GAO's actual cost of production and distribution and depends on the number of pages in the publication and whether the publication is printed in color or black and white. Pricing and ordering information is posted on GAO's website, [hyperlink, http://www.gao.gov/ordering.htm]. Place orders by calling (202) 512-6000, toll free (866) 801-7077, or TDD (202) 512-2537. Orders may be paid for using American Express, Discover Card, MasterCard, Visa, check, or money order. Call for additional information. Connect with GAO: Connect with GAO on facebook, flickr, twitter, and YouTube. Subscribe to our RSS Feeds or E-mail Updates. Listen to our Podcasts. Visit GAO on the web at [hyperlink, http://www.gao.gov]. To Report Fraud, Waste, and Abuse in Federal Programs: Contact: Website: [hyperlink, http://www.gao.gov/fraudnet/fraudnet.htm]; E-mail: fraudnet@gao.gov; Automated answering system: (800) 424-5454 or (202) 512-7470. Congressional Relations: Katherine Siggerud, Managing Director, siggerudk@gao.gov: (202) 512-4400: U.S. Government Accountability Office: 441 G Street NW, Room 7125: Washington, DC 20548. Public Affairs: Chuck Young, Managing Director, youngc1@gao.gov: (202) 512-4800: U.S. Government Accountability Office: 441 G Street NW, Room 7149: Washington, DC 20548. [End of document]