This is the accessible text file for GAO report number GAO-09-676 
entitled 'Results-Oriented Management: Strengthening Key Practices at 
FEMA and Interior Could Promote Greater Use of Performance Information' 
which was released on September 24, 2009. 

This text file was formatted by the U.S. Government Accountability 
Office (GAO) to be accessible to users with visual impairments, as part 
of a longer term project to improve GAO products' accessibility. Every 
attempt has been made to maintain the structural and data integrity of 
the original printed product. Accessibility features, such as text 
descriptions of tables, consecutively numbered footnotes placed at the 
end of the file, and the text of agency comment letters, are provided 
but may not exactly duplicate the presentation or format of the printed 
version. The portable document format (PDF) file is an exact electronic 
replica of the printed version. We welcome your feedback. Please E-mail 
your comments regarding the contents or accessibility features of this 
document to Webmaster@gao.gov. 

This is a work of the U.S. government and is not subject to copyright 
protection in the United States. It may be reproduced and distributed 
in its entirety without further permission from GAO. Because this work 
may contain copyrighted images or other material, permission from the 
copyright holder may be necessary if you wish to reproduce this 
material separately. 

Report to Congressional Requesters: 

United States Government Accountability Office: 
GAO: 

August 2009: 

Results-Oriented Management: 

Strengthening Key Practices at FEMA and Interior Could Promote Greater 
Use of Performance Information: 

GAO-09-676: 

GAO Highlights: 

Highlights of GAO-09-676, a report to congressional requesters. 

Why GAO Did This Study: 

Periodic GAO surveys since 1997 have indicated that, overall, federal 
managers have more performance information available but have not made 
greater use of this information for decision making. Based on GAO's 
most recent survey in 2007, GAO was asked to (1) identify agencies with 
relatively low use of performance information and the factors that 
contribute to this condition; and (2) examine practices in an agency 
with indications of improvement in use of performance information. GAO 
analyzed results from its surveys of federal managers across 29 
agencies, reviewed key agency documents related to using performance 
information—such as Performance and Accountability Reports—and 
interviewed agency and selected subunit managers about their management 
practices. GAO also compared management practices at selected agencies 
with those GAO has identified as promoting the use of performance 
information for decision making. 

What GAO Found: 

According to GAO’s 2007 survey of federal managers on their use of 
performance information for decision making, the Federal Emergency 
Management Agency (FEMA) and the Department of the Interior (Interior) 
ranked 28th and 27th, respectively, out of 29 agencies. Several factors 
contributed to 
this relatively low use. At both FEMA and Interior, the demonstrated 
commitment of agency leaders to using performance information—a key 
management practice—was inconsistent. While some FEMA programs and 
regions encouraged use of performance information to plan for and 
respond to unpredictable events, others expressed uncertainty as to how 
they could use performance information in the face of uncontrollable 
external factors. FEMA managers were also hampered by weak alignment 
among agency, program, and individual goals, as well as limited 
analytic capacity to make use of performance information. At Interior 
and the National Park Service (NPS), managers reported a proliferation 
of measures, including some that, while meaningful for department-level 
accountability, were not relevant to their day-to-day management. 
Managers at NPS and the Bureau of Reclamation also said that poorly 
integrated performance and management information systems contributed 
to an environment where the costs of performance reporting—in terms of 
time and resources—outweighed what they described as minimal benefits. 
While both FEMA and Interior have taken some promising steps to make 
their performance information both useful and used, these initiatives 
have thus far been limited. 

Figure: Survey question: 

[Refer to PDF for image: horizontal bar graph] 

My agency's top leadership demonstrates a strong commitment to using 
performance information to guide decision making: 

Percentage responding to a “great” or “very great” extent: 
FEMA: 32%; 
Interior: 37%; 
Rest of Government: 50%. 

Source: GAO. 

[End of figure] 

The experience of the Centers for Medicare & Medicaid Services (CMS) 
highlights the role that strengthened management practices can play. 
According to GAO’s 2000 and 2007 survey results, the percentage of 
managers at CMS reporting use of performance information for various 
management decisions increased by nearly 21 percentage points—one of 
the largest improvements among agencies over that period. CMS officials 
attributed this change to a combination of key management practices 
they had employed, including, but not limited to: leadership commitment 
to using performance information; alignment of strategic and 
performance goals; improving the usefulness of performance information; 
and building the analytic capacity to collect and use performance 
information. 

What GAO Recommends: 

GAO is making recommendations to the Departments of Homeland Security 
and the Interior for improvements to key management practices to 
promote greater use of performance information at FEMA, NPS, and 
Reclamation, as well as at Interior. Interior agreed in principle, and 
DHS generally concurred, but disagreed that FEMA should develop an 
interim performance management plan. GAO clarified this recommendation 
to address the concern. 

View [hyperlink, http://www.gao.gov/products/GAO-09-676] or key 
components. For more information, contact Bernice Steinhardt at (202) 
512-6806 or steinhardtb@gao.gov. 

[End of section] 

Contents: 

Letter: 

Background: 

FEMA and Interior Were Hindered in Using Performance Information for 
Decision Making by Weak or Inconsistent Application of Key Management 
Practices: 

Officials at CMS Headquarters and Selected Programs Said Key Management 
Practices Had Promoted Use of Performance Information for Decision 
Making: 

Conclusions: 

Recommendations for Executive Action: 

Agency Comments and Our Evaluation: 

Appendix I: Objectives, Scope, and Methodology: 

Appendix II: Agency Rankings Based on Index of 2007 Survey Results: 

Appendix III: Timeline of Major Government Results-Oriented Management 
Reforms: 

Appendix IV: Comments from the Department of Homeland Security: 

Appendix V: Comments from the Department of the Interior: 

Appendix VI: GAO Contact and Staff Acknowledgments: 

Related GAO Products: 

Figures: 

Figure 1: Percentage of Federal Managers Who Reported Having 
Performance Measures: 

Figure 2: Percentage of Federal Managers Who Reported Using Information 
Obtained from Performance Measurement for Various Management Decision- 
Making Functions: 

Figure 3: Practices That Can Promote the Use of Performance Information 
for Decision Making: 

Figure 4: Percentage of Federal Managers Who Reported That Agency's Top 
Leadership Demonstrated a Strong Commitment to Using Performance 
Information to Guide Decision Making: 

Figure 5: Percentage of Federal Managers Who Identified Lack of 
Leadership Commitment to Using Performance Information as a Hindrance 
to Measuring Performance or Using Performance Information: 

Figure 6: Percentage of Federal Managers Who Reported They Were Held 
Accountable for Their Agency's Accomplishments of Strategic Goals: 

Figure 7: Percentage of Federal Managers Who Reported Agency Investment 
in Performance Data Capacity: 

Figure 8: Percentage of Federal Managers Who Reported Top Leadership 
Commitment to Using Performance Information to Guide Decision Making: 

Figure 9: Interior Managers Reported Being Held Accountable to a 
Similar Extent as Rest of Government: 

Figure 10: Interior Managers Reported Using Performance Information to 
Identify Problems, Take Corrective Actions, or Develop Strategy to a 
Lesser Extent than Rest of Government: 

Figure 11: Percentage of Federal Managers Who Reported That Difficulty 
in Determining Meaningful Measures Hinders Using Performance 
Information: 

Figure 12: Percentage of CMS Managers Who Reported Top Leadership 
Demonstrated Commitment to Achieving Results: 

Figure 13: CMS Region IV Communicated Performance Information with 
Stakeholders to Improve Quality of Care in Nursing Homes: 

Figure 14: Percentage of CMS Managers Who Reported That Agency Managers 
at Their Level Are Held Accountable for the Results of Their Programs: 

Figure 15: CMS Reported Alignment among Department and Agency Goals and 
Individual Performance Objectives: 

Figure 16: Percentage of CMS Managers Who Reported That Difficulty 
Determining Meaningful Measures Hinders Using Performance Information: 

Figure 17: A CMS Region IV Manager Described How Easier Access to 
Performance Data Contributed to Improved Nursing-Home Survey Frequency 
in Alabama: 

Figure 18: Percentage of CMS Managers Who Reported That Training Was 
Provided to Help Accomplish Key Management Tasks: 

Figure 19: Average Change in Percentage of Federal Managers Reporting 
Use of Performance Information to a Great or Very Great Extent, 2000- 
2007: 

Figure 20: Agency Ranking Based on 2007 Survey Results on Use of 
Performance Information: 

Abbreviations: 

CMS: Centers for Medicare & Medicaid Services: 

CQISCO: Consortium for Quality Improvement and Survey & Certification 
Operations: 

DHS: Department of Homeland Security: 

FEMA: Federal Emergency Management Agency: 

FIRM: Flood Insurance Rate Maps: 

GPRA: Government Performance and Results Act: 

Interior: Department of the Interior: 

NPS: National Park Service: 

OCSQ: Office of Clinical Standards and Quality: 

OMB: Office of Management and Budget: 

PAR: Performance and Accountability Report: 

PART: Program Assessment Rating Tool: 

PMA: President's Management Agenda: 

PMAP: Performance Management Appraisal Program: 

PMDS: Performance Management Data System: 

PMIS: Project Management Information System: 

PPS: Performance Plan System: 

Recovery Act: American Recovery and Reinvestment Act: 

SCHIP: State Children's Health Insurance Program: 

SES: Senior Executive Service: 

[End of section] 

United States Government Accountability Office:
Washington, DC 20548: 

August 17, 2009: 

The Honorable Thomas R. Carper: 
Chairman: 
The Honorable John McCain: 
Ranking Member: 
Subcommittee on Federal Financial Management, Government Information, 
Federal Services, and International Security: 
Committee on Homeland Security and Governmental Affairs: 
United States Senate: 

The Honorable Tom Coburn: 
United States Senate: 

How the federal government performs and the results it achieves have a 
significant effect on many of the most pressing issues of concern to 
the American public--whether it be the creation of jobs by providing 
timely and targeted aid for recovery programs, rigorous oversight of 
financial markets, effective responses to natural disasters, reduction 
in pollutants that contribute to climate change, or delivery of water 
to arid regions of the country. Given increasing public demands for a 
more effective, transparent, and accountable federal government, it is 
more important than ever that federal agencies establish meaningful 
goals for improving performance, monitor progress in achieving their 
goals, and use information about performance to make decisions that can 
improve results. 

For the purposes of this report, we define performance information to 
mean data collected to measure progress toward achieving an agency's 
established mission or program-related goals. Performance information 
can focus on various dimensions of performance such as outcomes, 
outputs, quality, timeliness, customer satisfaction, or efficiency. It 
can inform key management decisions such as setting program priorities, 
allocating resources, identifying program problems and taking 
corrective action to solve those problems; or it can help determine 
progress in meeting the goals of programs or operations. Performance 
information may be collected to address internal management needs or 
external reporting requirements such as the Government Performance and 
Results Act of 1993 (GPRA),[Footnote 1] or the Program Assessment 
Rating Tool (PART), used by the Office of Management and Budget (OMB) 
under the previous administration.[Footnote 2] 

Our periodic surveys on performance and management issues since 1997 
[Footnote 3] have indicated that federal managers today have 
significantly more performance information available for the programs 
they manage than they did 10 years ago. However, on the whole, federal 
managers have shown little or no progress in increasing their use of 
performance information to manage for results. While some agencies have 
reported significant improvements, others remain unchanged.[Footnote 4] 
In an effort to increase the use of performance information by agency 
managers, you asked that we conduct reviews at selected agencies to 
better understand what may hinder their use of performance information 
in managerial decision making and to identify opportunities for 
improvement. Our objectives were to: (1) identify agencies with 
relatively low use of performance information and the factors that 
contribute to this condition; and (2) examine practices in an agency 
where there were indications of improvement in its use of performance 
information. 

To address both of our objectives, we reviewed our prior work on 
results-oriented management, including key practices that can promote 
greater use of performance information. We also reviewed prior reports 
and other relevant materials on GPRA and PART. 

To address our first objective, we first drew on our 2007 survey 
results to identify agencies where relatively fewer managers reported 
making extensive use of performance information. Based on this ranking 
and other considerations, we chose the Department of the Interior 
(Interior) and the Federal Emergency Management Agency (FEMA), which 
ranked 27th and 28th, respectively, out of 29 agencies (see appendix II, 
figure 20). We then conducted interviews with senior-level officials 
responsible for operations, budget, human capital, and performance- 
reporting functions at each agency to gain an understanding of the 
performance-based management policies and practices established at the 
top levels of their organizations. We also asked officials and managers 
to identify areas where they faced difficulties in using performance 
information for decision making. To obtain the perspective of bureau, 
program, and field managers on challenges they faced in using 
performance information at their level, we interviewed officials from 
selected component organizations that covered significant and diverse 
aspects of each agency's mission. At Interior, we selected the National 
Park Service (NPS) and Bureau of Reclamation (Reclamation) for review; 
and at FEMA, we selected the Disaster Assistance and Mitigation 
Directorates. 

To address our second objective of examining practices in an agency 
where managers' use of performance information appeared to have 
improved, we selected the Centers for Medicare & 
Medicaid Services (CMS). Comparing survey results from 2000 and 2007, 
CMS managers' reported use of performance information across selected 
areas of key decision making increased by nearly 21 percentage points. 
Moreover, because CMS scored significantly below the rest of government 
in responses to survey items on managerial use of performance 
information in 2000, its 2007 responses to these items reflect a 
significant turnaround (see appendix I, figure 19). At CMS, we 
interviewed top headquarters officials, officials and managers in 
Regions IV and IX, and in two lines of business--the Consortium for 
Quality Improvement and Survey & Certification Operations (CQISCO) and 
the Consortium for Financial Management and Fee for Service Operations. 
It should be noted that we did not systematically assess the quality of 
the performance information used in the examples we cite. In addition, 
although we describe how performance information was used to make 
decisions in our examples, we did not examine whether such use 
ultimately resulted in improved outcomes. 

At all three agencies, we interviewed selected officials and managers 
to gauge the extent to which key management practices--that we 
previously reported can promote use of performance information to 
manage for results--had been implemented. We also reviewed agency 
policies, procedures, and documentation related to results-oriented 
management such as their strategic plans, performance measures, and 
individual performance-management systems. See appendix I for a more 
detailed discussion of our scope and methodology. 

We performed our work in the Washington, D.C., metropolitan area; 
Boston, Massachusetts; San Francisco and Sacramento, California; and 
Atlanta, Georgia, from March 2007 to May 2009, in accordance with 
generally accepted government auditing standards. Those standards 
require that we plan and perform the audit to obtain sufficient, 
appropriate evidence to provide a reasonable basis for our findings and 
conclusions based on our audit objectives. We believe that the evidence 
obtained provides a reasonable basis for our findings and conclusions 
based on our audit objectives. 

Background: 

Over the past 16 years, a succession of legislative reforms and 
executive guidance has been aimed at improving the effectiveness of 
federal programs by transforming the departments and agencies that 
administer those programs to be more results-oriented and performance- 
based. A key element of these reforms is the Government Performance and 
Results Act of 1993 (GPRA), which among other things required executive 
agencies to establish results-oriented goals and performance measures 
and report on the progress achieved. More recently, OMB created the 
Program Assessment Rating Tool (PART), a diagnostic tool intended to 
provide a consistent approach for evaluating federal programs. (See 
appendix III for a timeline of results-oriented-management reforms.) As 
we reported in July 2008,[Footnote 5] we have seen a positive 
transformation in the capacity of the federal government to manage for 
results. This capacity includes an infrastructure of outcome-oriented 
strategic plans, performance measures, and accountability reporting 
that provides a solid foundation for improving the performance of 
federal programs. In particular, significantly more federal managers 
reported to a great or very great extent having the types of 
performance measures called for by GPRA and PART than they did 10 years 
ago (see figure 1).[Footnote 6] 

Figure 1: Percentage of Federal Managers Who Reported Having 
Performance Measures: 

[Refer to PDF for image: multiple horizontal bar graph] 

Survey question: Output measures[A]: 	
1997, percentage responding to a “great” or “very great” extent: 37.8; 
2007, percentage responding to a “great” or “very great” extent: 54.2. 

Survey question: Efficiency measures[A]: 	
1997, percentage responding to a “great” or “very great” extent: 25.9; 
2007, percentage responding to a “great” or “very great” extent: 44.1. 

Survey question: Customer Service measures[A]: 	
1997, percentage responding to a “great” or “very great” extent: 31.5; 
2007, percentage responding to a “great” or “very great” extent: 41.6. 

Survey question: Quality measures[A]: 	
1997, percentage responding to a “great” or “very great” extent: 30.9; 
2007, percentage responding to a “great” or “very great” extent: 40.2. 

Survey question: Outcome measures[A]: 	
1997, percentage responding to a “great” or “very great” extent: 31.8; 
2007, percentage responding to a “great” or “very great” extent: 48.9. 

Source: GAO. 

Notes: Data are from GAO 1997 and 2007 surveys. 

[A] There is a statistically significant difference between 1997 and 
2007 surveys. Hereafter, the differences in percentages reported are 
statistically significant unless otherwise indicated. 

[End of figure] 

However, the ultimate benefit of collecting performance information-- 
improved decision making and results--is only fully realized when this 
information is used to support management planning and decision-making 
functions. The results of our 2007 survey showed that despite having 
more performance measures available, the extent to which managers make 
use of this information to improve performance has remained relatively 
unchanged. As shown in figure 2, six of the eight categories of 
management activities we asked about in both 1997 and 2007 showed no 
statistically significant change over the past 10 years: 

Figure 2: Percentage of Federal Managers Who Reported Using Information 
Obtained from Performance Measurement for Various Management Decision- 
Making Functions: 

[Refer to PDF for image: multiple horizontal bar graph] 

Survey question: Setting program priorities[A]; 
1997, percentage responding to a “great” or “very great” extent: 65.8; 
2007, percentage responding to a “great” or “very great” extent: 58.1. 

Survey question: Allocating resources[A]; 
1997, percentage responding to a “great” or “very great” extent: 62.5; 
2007, percentage responding to a “great” or “very great” extent: 59. 

Survey question: Adopting new program approaches or changing work 
processes; 
1997, percentage responding to a “great” or “very great” extent: 66.1; 
2007, percentage responding to a “great” or “very great” extent: 53. 

Survey question: Coordinating program efforts with other internal or 
external organizations[A]; 
1997, percentage responding to a “great” or “very great” extent: 56.8; 
2007, percentage responding to a “great” or “very great” extent: 50.5. 

Survey question: Refining program performance measures[A]; 
1997, percentage responding to a “great” or “very great” extent: 51.5; 
2007, percentage responding to a “great” or “very great” extent: 46.3. 

Survey question: Setting new or revising existing performance goals[A]; 
1997, percentage responding to a “great” or “very great” extent: 58.5; 
2007, percentage responding to a “great” or “very great” extent: 52.1. 

Survey question: Setting individual job expectations for the government 
employees I manage or supervise[A]; 
1997, percentage responding to a “great” or “very great” extent: 60.8; 
2007, percentage responding to a “great” or “very great” extent: 61.9. 

Survey question: Rewarding government employees I manage or supervise; 	
1997, percentage responding to a “great” or “very great” extent: 52.6; 
2007, percentage responding to a “great” or “very great” extent: 60.9. 

Survey question: Developing and managing contracts[B]; 
1997, percentage responding to a “great” or “very great” extent: 0; 
2007, percentage responding to a “great” or “very great” extent: 40.5. 

Source: GAO. 

Notes: Data are from GAO 1997 and 2007 surveys. 

[A] Differences in percentages between 1997 and 2007 were not 
statistically significant. 

[B] This question was not asked in 1997. 

[End of figure] 

As our survey results showed, despite legislative and administration 
efforts to focus federal management decisions on maximizing the results 
achieved with federal funds, changing the way federal managers make 
decisions is not simply a matter of making more program performance 
information available. Based on our work on management reform efforts 
as well as analysis of federal managers' responses to our surveys, we 
have identified several key management practices that can promote the 
use of performance information (see figure 3).[Footnote 7] 

Figure 3: Practices That Can Promote the Use of Performance Information 
for Decision Making: 

[Refer to PDF for image: illustration] 

Practices: 
Demonstrating management commitment; 
Aligning agencywide goals, objectives, and measures; 
Improving the usefulness of performance information; 
Developing capacity to use performance information; 
Communicating performance information frequently and effectively. 

Uses: 
Identify problems and take corrective action; 
Develop strategy and allocate resources; 
Recognize and reward performance; 
Identify and share effective approaches. 

Practices promote uses, which lead to improved results. 

Source: GAO. 

[End of figure] 

Our prior report grouped these practices into five categories as 
described below. 

Demonstrating Management Commitment: 

The commitment of agency managers to results-oriented management is 
critical to increased use of performance information for policy and 
program decisions. Demonstrating the willingness and ability to make 
decisions and manage programs on the basis of results, and inspiring 
others to embrace such a model, are important indicators of 
management's commitment. Management can show this type of commitment by 
leading frequent, regular performance-review meetings to discuss 
progress made toward the achievement of results, and by involving staff 
from different organizational levels in performance-review meetings. 
These methods can assist agencies in identifying performance problems 
and in developing performance-improvement plans based on collected 
performance information. 

Aligning Agencywide Goals, Objectives, and Measures: 

Agencies can encourage greater use of performance information by 
aligning agencywide goals and objectives, and by aligning program 
performance measures at each operating level with those goals and 
objectives. GPRA requires that agencies use performance measurement to 
reinforce the connection between their long-term strategic goals and 
the day-to-day activities of their managers and staff. To meet the GPRA 
requirements, an agency should cascade its goals and objectives 
throughout the organization and should align performance measures to 
the objectives from the executive level down to the operational levels. 
Furthermore, a greater focus on results can be created by cascading 
organizational goals and objectives down to the individual performance 
level. This alignment increases the usefulness of the performance 
information collected to decision makers at each level, and reinforces 
the connection between strategic goals and the day-to-day activities of 
managers and staff. 

Improving the Usefulness of Performance Information to Better Meet 
Management's Decision-Making Needs: 

To ensure that performance information will be both useful and used in 
decision making throughout the organization, agencies need to consider 
users' differing policy and management information needs. To be useful, 
performance information must meet users' needs for completeness, 
accuracy, consistency, timeliness, validity, and ease of use. Other 
attributes that affect the usefulness of information include, but are 
not limited to, relevance, credibility, and accessibility. Measures 
should be selected specifically on the basis of their ability to inform 
the decisions made at each organizational level, and should be 
appropriate to the responsibilities and control at each level. In that 
regard, involving managers in the development of performance goals and 
measures is critical to increasing the relevance and therefore the 
usefulness of performance information to their day-to-day activities. 

Developing Agency Capacity: 

The practice of building analytical capacity to use performance 
information--both in terms of staff trained to do analysis and 
availability of research and evaluation resources--is critical to using 
performance information in a meaningful fashion. Such capacity can be 
enhanced by training to develop the competencies and skills of managers 
to plan strategically, develop robust measures of performance, and 
analyze what the performance data mean. Performance management 
literature also states that training is a key factor in improving 
employees' capabilities and enabling employee involvement in achieving 
performance improvements. 

Communicating Performance Information Frequently and Effectively: 

Improving the communication of performance information among staff and 
stakeholders can facilitate the use of performance information by 
agency managers. Improvements can be achieved through frequent and 
routine communication, and the use of effective communication tools, 
such as visual aids. Frequent, regular communication is key for 
managers to inform staff and other stakeholders of their commitment to 
achieve the agency's goals and to keep these goals in mind as they 
pursue their day-to-day activities. Frequently reporting performance 
information also allows managers to review the information in time to 
take action to make improvements. Program managers can also communicate 
performance information upward through the management hierarchy, and 
across operating units. Vehicles for such communication include poster 
displays, performance scorecards, intranet sites, e-mail, and 
distribution of monthly performance-review meeting minutes. 

Agencies Reviewed: 

As noted above, to better understand what may hinder use of performance 
information and to identify opportunities for improvement, we selected 
FEMA, Interior, and CMS for more extensive review: 

Federal Emergency Management Agency: 

Originally an independent agency, FEMA has been part of the 
Department of Homeland Security (DHS) since 2003. FEMA's primary 
mission is to reduce the loss of life and property and protect the 
nation from all hazards, including natural disasters, acts of 
terrorism, and other man-made disasters, by leading and supporting the 
nation in a risk-based, comprehensive emergency-management system of 
preparedness, protection, response, recovery, and mitigation. There are 
eight directorates within FEMA, each dedicated to formulating policy 
and administering programs from the headquarters office located in 
Washington, D.C., and from 10 regional offices located across the 
United States. For fiscal year 2009, Congress appropriated 
approximately $15.6 billion for FEMA and FEMA programs, including $610 
million in the American Recovery and Reinvestment Act (Recovery Act). 
[Footnote 8] For this study, we reviewed the Mitigation Directorate, 
which manages a range of programs designed to reduce future losses to 
homes, businesses, schools, public buildings, and critical facilities 
from floods, earthquakes, tornadoes, and other natural disasters; and 
the Disaster Assistance Directorate. Within the Disaster Assistance 
Directorate, we focused on the Public Assistance program, which 
administers FEMA's grants to state and local governments, authorized 
tribal organizations, and specific types of nonprofit organizations for 
emergency work, such as debris removal, and for permanent work, such as 
repairing and replacing damaged buildings, following major disasters. 

Department of the Interior: 

Interior oversees nine separate agencies and bureaus with a wide range 
of responsibilities including resource use and protection, providing 
recreation opportunities on public lands, and honoring the nation's 
obligations to American Indians and Alaskan Natives. For this study, we 
examined the National Park Service (NPS), which is responsible for 
preserving the natural and cultural resources and protecting the 
wildlife of the national parks so that they will remain unimpaired for 
the enjoyment of this and future generations, and the Bureau of 
Reclamation (Reclamation), which is responsible for managing, 
developing, and protecting water and related resources in an 
environmentally and economically sound manner. In the Omnibus 
Appropriations Act, 2009, Congress appropriated more than $10 billion 
for Interior, with NPS and Reclamation receiving approximately $2.56 
billion and $1.11 billion respectively.[Footnote 9] In addition, the 
Recovery Act provided approximately $3 billion for Interior, including 
$750 million for NPS and $1 billion for Reclamation.[Footnote 10] 

Centers for Medicare & Medicaid Services: 

CMS, a component of the Department of Health and Human Services, is the 
largest purchaser of health care in the United States, serving about 95 
million Medicare, Medicaid, and State Children's Health Insurance 
Program (SCHIP) beneficiaries. CMS' fiscal year 2009 budget of $776.3 
billion supports the entitlement programs of Medicare, Medicaid, and 
SCHIP and includes $35.9 billion in the Recovery Act. CMS has 
approximately 4,570 employees located in its headquarters and in 10 
regions throughout the country. In February 2007, CMS reorganized its 
regional management from a geography-based reporting structure to a 
consortia structure based on the agency's key lines of business: 
Medicare health plans, Medicare financial management, Medicare fee for 
service operations, Medicaid and children's health, survey and 
certification of health care providers, and quality improvement. For 
this study, we reviewed the Consortium for Quality Improvement and 
Survey & Certification Operations (CQISCO), which is responsible for, 
among other activities, oversight of state surveys and activities 
intended to monitor the quality of nursing homes and other types of 
health care facilities that participate in Medicare and Medicaid. 
[Footnote 11] Our prior work had identified more than 40 
recommendations to CMS to improve oversight of nursing homes, which 
included recommendations intended to improve CMS' use of performance 
data.[Footnote 12] Also, both Medicare and Medicaid programs are on 
GAO's high-risk list due to their size and complexity, as well as their 
susceptibility to mismanagement and improper payments.[Footnote 13] 

FEMA and Interior Were Hindered in Using Performance Information for 
Decision Making by Weak or Inconsistent Application of Key Management 
Practices: 

According to our 2007 survey of federal managers, FEMA and Interior 
were two of the lowest users of performance information among the 29 
federal agencies surveyed. Several factors contributed to this 
relatively low use. At both FEMA and Interior, the demonstrated 
commitment of agency leaders to using performance information--a key 
management practice--was inconsistent. FEMA managers were also hampered 
by weakly aligned goals and limited analytic capacity to make use of 
performance information. At Interior and NPS, we observed, and 
officials and managers reported, a proliferation of measures, including 
some that, while meaningful for department-level accountability, were 
not relevant to their day-to-day management. Field managers we 
interviewed at NPS also said that poorly integrated performance and 
management information systems contributed to an environment where the 
costs of performance reporting--in terms of time and resources-- 
outweighed what they described as minimal benefits to their decision 
making. While both FEMA and Interior have taken some promising steps to 
make their performance information both useful and used, these 
initiatives have thus far been limited. 

FEMA Leadership Inconsistently Demonstrated Commitment to Using 
Performance Information in Certain Directorates, Regions: 

As we have previously reported, demonstrating the willingness and 
ability to make decisions and manage programs on the basis of results 
and inspiring others to embrace such a model are important indicators 
of leadership's commitment to using performance information.[Footnote 
14] Our 2007 survey results indicated that, compared to the rest of 
government, a smaller percentage of FEMA managers agreed their top 
leadership demonstrated a strong commitment to using performance 
information to guide decision making (see figure 4). 

Figure 4: Percentage of Federal Managers Who Reported That Agency's Top 
Leadership Demonstrated a Strong Commitment to Using Performance 
Information to Guide Decision Making: 

[Refer to PDF for image: horizontal bar graph] 

Survey question: My agency’s top leadership demonstrates a strong 
commitment to using performance information to guide decision making: 

Percentage responding to a “great” or “very great” extent: 
FEMA: 32%; 
Rest of Government: 50%. 

Source: GAO. 

Note: Data are from GAO 2007 survey. 

[End of figure] 

At the same time, a significantly greater percentage of FEMA managers--
27 percentage points more than managers across the rest of government--
agreed that lack of leadership commitment is a hindrance to their use 
of performance information (see figure 5). 

Figure 5: Percentage of Federal Managers Who Identified Lack of 
Leadership Commitment to Using Performance Information as a Hindrance 
to Measuring Performance or Using Performance Information: 

[Refer to PDF for image: horizontal bar graph] 

Survey question: Lack of ongoing top executive commitment or support 
for using performance information to make program/funding decisions. 

Percentage responding to a “great” or “very great” extent: 
FEMA: 52%; 
Rest of Government: 25%. 

Source: GAO. 

Note: Data are from GAO 2007 survey. 

[End of figure] 

Our interviews with officials at FEMA were consistent with these survey 
results, indicating that management commitment was demonstrated 
inconsistently across the program directorates and regions we reviewed. 
Leaders and managers we spoke to throughout the management hierarchy 
were clearly committed to carrying out FEMA's mission. The level of 
commitment to using performance information for decision making, 
however, appeared to vary among those we interviewed. Further, several 
FEMA headquarters officials said that top leadership commitment to 
using performance information, demonstrated by one of the agency's 
former leaders, had not permeated the organizational culture and that 
it was unclear whether recently instituted practices would be sustained 
under the new administration. 

A former Deputy Administrator, who served as FEMA's chief operating 
officer before he left his position as part of the change in 
administration, said that he believed strongly in using performance 
information to identify areas for improvement. He said that when he 
arrived at FEMA in 2006, the agency culture was response driven and 
there was little recognition that performance information could be used 
to make improvements. As a first step, he said he had focused on 
improving FEMA's business practices in areas such as hiring, financial 
management, and information systems. He also introduced performance 
information briefings on disaster assistance-related areas of concern-
-such as housing for Hurricane Ike victims--in an effort to improve 
performance. For example, he said that by reviewing data on various 
aspects of the post-Hurricane Ike replacement housing situation--such 
as individual housing needs, the inventory of mobile homes, and the 
rate of mobile home installation--he was able to identify and fix a 
bottleneck in providing an adequate supply of temporary housing. 
Another FEMA official with responsibilities for performance reporting 
said that such disaster assistance metrics were important first steps 
in using performance information to improve disaster response. However, 
he noted that these metrics were specific to that disaster and that 
more work is needed to develop performance measures that can be applied 
to all disaster response situations. 

More recently, the former Deputy Administrator said he had begun to 
turn his attention to FEMA's performance at the regional level. For 
example, he said he had begun to work with the regions to require 
strategic plans and quarterly performance reporting to his office. He 
acknowledged, however, that some regions are better than others at 
using performance information and that these quarterly reporting 
efforts were still immature. Further, our interviews with other 
officials indicated that only one reporting cycle had been completed 
before the former Deputy Administrator had left. 

Our interviews with top officials and regional program managers in the 
Disaster Assistance and Mitigation Directorates indicated that 
leadership commitment to using performance information varied among 
directorates and regions. In the Disaster Assistance Directorate, one 
headquarters official told us that he does not need performance targets 
to help him determine whether his directorate is accomplishing its 
mission. He says he relies primarily on verbal communications with the 
leadership and from FEMA's regions, joint field offices, and members of 
Congress to identify issues to be addressed and areas that are running 
well. Although he said he does use data to monitor how well the 
directorate is responding to post-disaster inquiries for assistance, 
such as the call center statistics from FEMA's National Processing 
Service Center, his description indicated that these data mostly 
reflect workload and activity levels, rather than performance against 
goals. 

Another headquarters official within the Disaster Assistance 
Directorate's Public Assistance program said he does not receive formal 
performance reports from regional program managers, nor are any 
performance reports required of him by his supervisors. He noted that 
goals for the Public Assistance program are established for the field 
at the regional level and that the directorate's role is primarily one 
of developing policies and guidance. Although regional staff completed 
reports on various activities related to his programs, he said he was 
uncertain how he would use these regional reports since performance 
monitoring was not his focus. Instead, he said that he spoke to the 
regions on an ad hoc basis as performance problems arose. 

Officials responsible for Disaster Assistance Directorate programs in 
two of three regions we reviewed similarly described the ad hoc nature 
of performance reporting to headquarters. One said that, although he 
had begun to issue quarterly reports on various data, such as funding 
obligation rates, to his Regional Administrator, which were shared with 
headquarters, he had not received any comments or feedback on what was 
reported. Another regional Disaster Assistance Directorate official 
told us that although regional program supervisors and staff meet 
quarterly to discuss timeliness related to funding obligations and 
project worksheet completion,[Footnote 15] among other items, they do 
not communicate performance information to the directorate head or 
headquarters on a regular basis. Officials from the third region we 
interviewed said that, although they anticipated that new strategic 
planning initiatives might change how they report on performance in the 
future, they did not review performance information with their regional 
staff, nor did they communicate it to headquarters. Officials we 
interviewed within the Disaster Assistance Directorate expressed 
reluctance to hold their staff accountable for meeting 
performance goals due to external factors, such as the unpredictability 
of disasters beyond their control. Further, some expressed uncertainty 
as to how they could use performance information in the face of 
uncontrollable external factors. 

In contrast, several officials from the Mitigation Directorate said 
they had begun to use performance information to manage more 
effectively under unpredictable circumstances. These officials 
said that the former Mitigation Administrator's commitment to 
performance and accountability helped change the directorate culture to 
one that encouraged use of performance information to plan for and 
respond to factors outside of their control. For example, storms and 
other natural events can disrupt the Mitigation Directorate's 
production work related to flood-plain map modernization. To plan for 
possible disruptions, Mitigation Directorate officials said they review 
performance information on progress toward map modernization goals on a 
monthly basis, in connection with weather forecasts. This review helps 
them to determine in advance if they are at risk of missing performance 
targets and to identify corrective actions or contingency plans in 
order to get back on track toward achieving their goals. They also 
described their own commitment to using performance information as a 
means to demonstrate the value of their programs, make improvements, 
and achieve results (see example 1). 

[Text box: Example 1: Mitigation Directorate Leadership Used 
Performance Information to Engage Partners in Improving Map 
Modernization Outcomes: 

The Mitigation Directorate works with multiple stakeholders, including 
state and local governments, the insurance community, and private 
contractors to ensure that flood-prone communities have the most 
current and reliable flood data available, and that those communities 
are in compliance with regulations referencing current Flood Insurance 
Rate Maps (FIRM), which are used to regulate land development through 
flood-plain management and for flood insurance purposes. Communities 
that fail to adopt the new maps by the FIRM effective date are 
suspended from the National Flood Insurance Program, which can 
negatively affect local real-estate transactions and limit the 
community’s eligibility for disaster assistance. In an effort to 
improve community compliance, the Mitigation Directorate set an annual 
performance target of 93 percent of communities adopting new maps by 
the FIRM effective date. They closely monitored performance by 
incorporating the map adoption rate into state grant agreements, map 
modernization contracts, and FEMA’s regional performance scorecards. 
According to Mitigation Directorate officials, they frequently reviewed 
map modernization performance information with their external 
stakeholders and FEMA’s regional management, which sent a clear signal 
that they were paying attention to outcomes. According to these 
headquarters officials, they were able to meet or exceed their 
performance target of 93 percent, in part as a result of their frequent 
communication and review of performance information. End of text box] 

In addition, Mitigation Directorate officials from both the national 
office and from two of the three regions we reviewed noted frequent 
communication and review of program performance information. For 
example, each of the Mitigation Directorate's three divisions developed 
scorecards that include performance measures reviewed quarterly. 
Mitigation Directorate officials also told us they involved staff 
throughout the directorate and their external stakeholders including 
insurance companies, the lending community, and state and local 
officials in their efforts to establish preliminary performance goals 
and measures. One regional manager we spoke to said that directorate 
and regional leadership fostered collaboration by encouraging regional 
staff to develop and share ideas for metrics and targets through weekly 
conference calls devoted to discussing performance information. 

Mitigation Directorate officials said that developing measures and 
holding staff and contractors accountable for their performance was not 
an easy transformation. They said that one key to this culture change 
was for the leadership to strike an appropriate balance between holding 
managers accountable for agency goals and building trust among managers 
and staff that performance information would be used as an improvement 
tool, rather than as a punitive mechanism. Finally, Mitigation 
Directorate officials said that once managers and staff began to see 
that measuring performance can actually help them to improve results, 
they became more supportive of their leadership's efforts to use 
performance information in their decision making. 

FEMA Has Not Consistently Aligned Agency, Program, and Individual 
Performance Goals: 

As we reported previously,[Footnote 16] agencies can promote use of 
performance information by aligning agencywide goals and objectives, 
and by aligning program performance measures at each operating level 
with those goals and objectives. FEMA's current strategic plan includes 
high-level strategic goals, such as "deliver easily accessible and 
coordinated assistance for all programs." FEMA officials said they had 
recently completed an addendum to the strategic plan that can be 
updated to reflect evolving circumstances. These officials 
acknowledged, however, that these goals and measures are high level and 
that establishing performance goals at the regional or division level 
would help FEMA to cascade organizational goals down to individual 
staff. 

According to our interviews with other top officials, FEMA had started 
to develop regional performance measures and align them with different 
operating levels of the agency. In mid-2008, FEMA's leadership directed 
all regions to begin developing regional strategic plans; however, 
several officials described challenges in ensuring that these plans 
included meaningful performance measures that aligned with those at the 
directorate and agency level. The regions' early efforts produced 
performance measures that, from the program directorates' perspective, 
were often not aligned with their program goals. Subsequently, FEMA 
performance and operations officials worked with the regions and 
national program directorate leaders to refine the measures and to 
provide training, which included information on aligning activities and 
outputs with strategic goals. As of late 2008, FEMA had developed an 
initial set of 38 regional performance measures that are intended to 
link to the agency's strategic goals. It was unclear, however, when the 
regions would begin collecting data on these metrics to establish 
baseline performance levels and targets for improvement. Top officials 
acknowledged that the effort to align regional goals with the agency's 
strategic objectives is a work in progress and will take some time to 
complete. 

We have reported that a greater focus on results can be created by 
cascading organizational goals and objectives down to the individual 
performance level, helping individuals to see the connection between 
their daily activities and organizational goals and providing a basis 
for holding individuals accountable for their results.[Footnote 17] 
However, our 2007 survey results indicated that 44 percent of FEMA 
managers 
reported being held accountable for agency strategic goals, compared to 
60 percent of their counterparts in the rest of government (see figure 
6). 

Figure 6: Percentage of Federal Managers Who Reported They Were Held 
Accountable for Their Agency's Accomplishments of Strategic Goals: 

[Refer to PDF for image: horizontal bar graph] 

Survey question: Agency managers/supervisors at my level are held 
accountable for agency accomplishment of its strategic goals. 

Percentage responding to a “great” or “very great” extent: 
FEMA: 44%; 
Rest of Government: 60%. 

Source: GAO. 

Note: Data are from GAO 2007 survey. 

[End of figure] 

FEMA lacked a performance-management system that could cascade 
organizational goals to the individual performance level--that is, 
create a "line of sight" linking individual goals and organizational 
success. In September 2008, FEMA human capital management officials 
told us that aligning agency goals with non-Senior Executive Service 
(SES) performance objectives was being accomplished through the 
implementation of DHS' ePerformance Management System.[Footnote 18] At 
that time, according to officials, approximately 30 percent of 
permanent full-time employees had been converted to the new system. 
However, after Congress in October 2008 barred DHS from further 
implementing the human-resources-management system then in 
place,[Footnote 19] DHS rescinded its ePerformance Management and related 
human-resources systems. For fiscal year 2009, FEMA chose to revert to 
its previous appraisal system, which it had established in the mid-
1990s. Although human-capital officials told us that this appraisal 
system encourages supervisors and employees to develop work plans, our 
review of policies and appraisal documents indicated no requirement to 
align individual performance objectives with agency or program 
performance goals. A human-capital official at FEMA noted that, in an 
effort to forestall multiple, time-consuming changes to FEMA's 
performance-management system, the agency did not want to invest 
further resources in this area until the agency is able to ascertain 
how DHS is proceeding with department-level performance-management 
policies and systems. This official also explained that changes to 
FEMA's current performance management policies or guidance--which 
covers both bargaining-unit and non-bargaining-unit employees--would be 
subject to negotiation with its employee labor unions. 

Interviews with management officials from three regions further 
indicated that the practice of cascading organizational goals to the 
individual performance level was applied inconsistently across regions. 
The top official from one region said that the region does not include 
GPRA goals or other agency-level goals in individual performance 
agreements and noted that FEMA is not very mature in individual 
performance management. Although the top official from another region 
said that in the absence of direction from headquarters, he had worked 
to link individual performance objectives to FEMA's strategic goals, 
the performance agreement example provided to us included expected 
outcomes that were not easily measurable, such as "… build trust and 
confidence with the state and local partners." An official from a third 
region said that while the now-defunct ePerformance Management System 
was useful in helping the region to establish such linkages, FEMA's 
current performance management system is a "sham" because it provides 
no tool to measure performance against goals. Limited goal alignment in 
the area of individual performance management may hinder managers' 
ability to understand how their roles and responsibilities affect 
broader results. For example, one official told us that without such 
alignment, it was difficult to show staff how their efforts supported 
FEMA's mission. It was also difficult to hold managers accountable for 
results. 

FEMA Officials Said Inadequate Information Systems, Analytic Skills 
Hindered Use of Performance Information: 

The practice of building analytic capacity is critical to using 
performance information in a meaningful fashion. Our review of FEMA's 
analytic capacity to use performance information--in terms of both 
management information systems and trained employees--revealed some 
weaknesses. According to our 2007 survey, the percentage of FEMA 
managers reporting that their agency is investing in resources to 
improve the agency's capacity to use performance information was lower 
than for the rest of government (see figure 7). 

Figure 7: Percentage of Federal Managers Who Reported Agency Investment 
in Performance Data Capacity: 

[Refer to PDF for image: horizontal bar graph] 

Survey question: My agency is investing in resources to improve the 
agency’s capacity to use performance information. 

Percentage responding to a “great” or “very great” extent: 
FEMA: 19%; 
Rest of Government: 33%. 

Source: GAO. 

Note: Data are from GAO 2007 survey. 

[End of figure] 

These survey results were consistent with the perspective we heard from 
key headquarters officials who told us that poorly integrated systems 
made it difficult for FEMA managers to use performance information. 
According to one official, in order to gather performance information, 
it was necessary to write programs to generate specific reports for 
each of the systems and then manually integrate the information, making 
it difficult to produce repeatable and verifiable reports. For example, 
in order to pull performance information together at the program- 
directorate level, this official told us he had to ask the few 
technical staff capable of working with the systems to devote a 
significant amount of time and effort to producing the information. 
FEMA officials told us they were pursuing a new budget system, RM 
Online, which includes a component intended to make high-level program 
and performance information readily available to senior 
managers. However, the agency was still evaluating the system and it 
was unclear when it might be implemented. 

In addition to weaknesses in information-systems capacity, several 
officials we interviewed said that there were few staff with the 
analytic skills necessary to work with performance metrics. 
The former Deputy Administrator said that when he joined FEMA in 2006, 
managers and staff did not use or understand performance data. A high- 
ranking directorate official told us that he was trying to increase use 
of performance information to improve workflow and other operations in 
his area. However, he said he lacked staff with the skills to analyze 
information for decision-making purposes. Another official with 
responsibilities for performance measurement said that the lack of 
analytically skilled staff throughout the agency posed a challenge to 
using performance information. According to this same official, in 
order to improve the agency's capacity to use performance information, 
FEMA has begun to provide training on performance measurement to 
directorate and regional managers. Our review of the training materials 
indicated that they addressed specific areas that we have identified as 
critical to using performance information, including strategic 
planning, developing robust performance measures, and analyzing what 
the performance data mean. FEMA has also developed strategic planning 
guidance that outlines an approach for developing performance measures 
and evaluating performance, among other topics. However, so far, the 
training has been provided only to representatives from each region and 
directorate managers in headquarters, and a key official acknowledged 
that not all directorates have been equally effective at pushing the 
training out to their managers and staff in the regions. 

Interior, NPS Officials and Managers Reported Uneven Leadership 
Commitment to Using Performance Information for Decision Making: 

We have previously reported that to drive continuous improvement 
throughout an agency and inspire employees to accomplish challenging 
goals, it is critical that leadership demonstrate its commitment to 
results-oriented management.[Footnote 20] We have also reported that 
top leadership can demonstrate such commitment by clearly communicating 
how they use performance information for decision making. On survey 
items related to managers' perceptions of their leadership's commitment 
to using performance information, Interior's 2007 results were lower 
than those in the rest of government (see figure 8). 

Figure 8: Percentage of Federal Managers Who Reported Top Leadership 
Commitment to Using Performance Information to Guide Decision Making: 

[Refer to PDF for image: horizontal bar graph] 

Survey question: My agency’s top leadership demonstrates a strong 
commitment to using performance information to guide decision making. 

Percentage responding to a “great” or “very great” extent: 
Interior: 37%; 
Rest of Government: 50%. 

Source: GAO. 

Note: Data are from GAO 2007 survey. 

[End of figure] 

Our interviews with top leadership and managers provided further 
insight into these survey results. At all levels, we observed that 
leaders and managers conveyed a strong commitment to accomplishing the 
agency's mission. However, their commitment to using performance 
information for decision making was less evident. For example, the 
former Deputy Secretary of the Interior said that, although she 
reviewed performance information at the end of the year in connection 
with preparing the department's annual performance report, she was not 
involved in regularly monitoring the performance information reported 
under GPRA and PART.[Footnote 21] Another Interior official 
characterized the department's strategic plan as more of a vehicle for 
communicating high-level goals and accomplishments than as a tool for 
management decision making. This view that top leadership did not use 
performance information to make decisions was supported by several NPS 
managers who referred to the performance reporting process as "feeding 
the beast," because they received little or no communication from 
either Interior or NPS headquarters in response to the information 
they were required to report, leading them to assume that no one with 
authority reviewed or acted on it. 

As we have previously reported, leaders can demonstrate their 
commitment to using performance information for various management 
functions in a number of ways, such as: 

1. holding individuals accountable for results by evaluating their 
performance against goals; 

2. identifying problems in existing programs, diagnosing their causes, 
and developing corrective actions; 

3. developing strategies, planning and budgeting, identifying 
priorities, and making resource allocation decisions to affect programs 
in the future; and: 

4. identifying more effective approaches to program implementation and 
sharing those approaches more widely across the agency.[Footnote 22] 

At Interior, our 2007 survey results on questions related to 
accountability indicated that managers were similar to those in the 
rest of government. For example, 75 percent of Interior 
managers--similar to those at other agencies--reported being held 
accountable for results (see figure 9). 

Figure 9: Interior Managers Reported Being Held Accountable to a 
Similar Extent as Rest of Government: 

[Refer to PDF for image: horizontal bar graph] 

Survey question: Agency managers/supervisors at my level are held 
accountable for agency accomplishment of its strategic goals[A]. 

Percentage responding to a “great” or “very great” extent: 
Interior: 56%; 
Rest of Government: 60%. 

Survey question: Agency managers/supervisors at my level are held 
accountable for the result(s) of the program(s)/operation(s)/project(s) 
they are responsible for[A]. 

Percentage responding to a “great” or “very great” extent: 
Interior: 75%; 
Rest of Government: 72%. 

Source: GAO. 

Notes: Data are from GAO 2007 survey. 

[A] The differences in responses between Interior and the rest of 
government on these two items are not statistically significant. 

[End of figure] 

Several NPS managers we interviewed corroborated these survey results, 
citing how performance information was used to hold them individually 
accountable for achieving certain performance goals. However, this 
focus on individual accountability did not appear to extend to using 
performance information for other management functions that leading 
organizations employ, such as identifying problems, taking corrective 
actions, or developing strategy (see figure 10). 

Figure 10: Interior Managers Reported Using Performance Information to 
Identify Problems, Take Corrective Actions, or Develop Strategy to a 
Lesser Extent than Rest of Government: 

[Refer to PDF for image: horizontal bar graph] 

Survey question: Developing program strategy. 

Percentage responding to a “great” or “very great” extent: 
Interior: 38%; 
Rest of Government: 51%. 

Survey question: Identifying program problems to be addressed. 

Percentage responding to a “great” or “very great” extent: 
Interior: 40%; 
Rest of Government: 61%. 

Survey question: Taking corrective action to solve program problems. 

Percentage responding to a “great” or “very great” extent: 
Interior: 41%; 
Rest of Government: 60%. 

Source: GAO. 

Notes: Data are from GAO 2007 survey. 

[End of figure] 

Our interviews with top leaders and managers at NPS may help to explain 
these survey results. A senior headquarters official at NPS responsible 
for park operations said that he was not involved in regularly 
monitoring the park system's performance in achieving GPRA and 
PART-related performance goals and did not use this information to 
manage park operations. Rather, he regularly communicated with his 
staff about other management issues, such as the rate at which funds 
are obligated.[Footnote 23] However, obligation rates, while helpful in 
assessing the pace at which projects are progressing, do not provide 
information about the results achieved with these funds and therefore 
may not be useful in driving performance improvements. He also said 
that although GPRA and PART-related goals were not useful to him in 
making operational and program management decisions, NPS had included 
these goals in managers' performance agreements in order to comply with 
Interior's individual performance management policies.[Footnote 24] 
However, this official--and other NPS senior officials and managers we 
interviewed--was concerned that GPRA and PART information was sometimes 
used to evaluate individual performance without appropriate context or 
recognition of what is outside managers' control. Some noted, for 
example, that storms and flooding can have an effect on maintenance and 
repair performance targets, and air and water quality goals are 
dependent on environmental factors outside of the parks. 

We have previously reported[Footnote 25] that successful organizations 
typically create ambitious performance goals aimed at achieving 
significant improvements in performance, rather than marginal 
improvements of just a few percentage points. However, several NPS 
officials and managers told us that GPRA and PART targets were not 
always set at ambitious levels. An NPS headquarters official explained 
that targets set at the park and regional levels are aggregated at the 
agency level and subject to evaluation in the context of past 
performance and anticipated funding. If necessary, headquarters will 
adjust performance targets to appropriate levels. He also noted that 
parks and regions are advised at the beginning of every planning cycle 
that targets should be based on what can be realistically accomplished 
during the course of the fiscal year. However, some park managers we 
spoke to said that they were directed by their superiors to set targets 
at levels they could safely meet because they and their superiors could 
otherwise be penalized. Without sufficiently ambitious goals, managers 
may not have incentives to use performance information to identify 
opportunities for significant improvement. 

Although our interviews indicated some concerns that NPS leadership had 
focused more on using performance information for accountability than 
for making improvements, some regional and park managers did note 
examples of specific program areas, such as cultural resources and 
facilities management, where they had begun to see their NPS leadership 
take a broader approach to using performance information. For example, 
one regional director said that the cultural-resources program 
leadership based in headquarters had effectively communicated to the 
regions how they used information from the PART review to inform their 
funding decisions. He went on to say that even though some of the 
decisions they made about prioritizing certain projects over others 
were painful, staff in the field understood how the performance 
information they submitted contributed to the program's funding 
decisions. He noted that his park superintendents appreciated when 
program leaders communicated that the information they provided was 
useful and demonstrated how it informed their decision-making 
processes. 

Measures That Lacked Credibility to Bureau-Level Managers, in 
Combination with a Proliferation of Measures, Detracted from Usefulness 
of Performance Information: 

We reported previously that performance information must be useful and 
meet differing policy and management information needs in order to 
encourage its use for decision making.[Footnote 26] Our 2007 survey 
results show that, compared to the rest of government, a significantly 
higher percentage of Interior managers reported that difficulty 
determining meaningful measures is a barrier to using performance 
information (see figure 11). 

Figure 11: Percentage of Federal Managers Who Reported That Difficulty 
in Determining Meaningful Measures Hinders Using Performance 
Information: 

[Refer to PDF for image: horizontal bar graph] 

Survey question: Difficulty determining meaningful measures hinders 
measuring performance or using the performance information. 

Percentage responding to a “great” or “very great” extent: 
Interior: 58%; 
Rest of Government: 39%. 

Source: GAO. 

Note: Data are from GAO 2007 survey. 

[End of figure] 

Our interviews at NPS and Reclamation may help to explain these survey 
results, with managers describing several types of problems that 
detract from the usefulness of the performance information they are 
required to report. 

Some GPRA, PART Measures Lacked Credibility for Decision Making at the 
Bureau Level: 

We reported previously that, to be useful, performance information 
should be consistent, timely, valid, relevant, and credible, among 
other attributes.[Footnote 27] At both NPS and Reclamation, managers we 
interviewed said they do not consider certain GPRA and PART measures to 
be meaningful indicators of performance. For example, Reclamation 
managers report on a performance measure of the amount of water they 
deliver, but according to a number of managers within the bureau, this 
measure is not meaningful because it reflects unpredictable changes in 
weather conditions that could affect their customers' needs, rather 
than how well they are performing (see example 2). 

[Text box: Example 2: Reclamation Managers Described How a Performance 
Measure That Lacked Relevance Was Not Useful for Decision Making: 

Reclamation delivers approximately 10 trillion gallons of water each 
year to more than 31 million customers, including farmers, 
municipalities, and irrigation districts. According to Reclamation officials, it 
is critical that they frequently monitor performance information 
related to water delivery in order to mitigate the effect of water 
shortages on their customers. Reclamation is responsible for 
identifying potential water shortages as early as possible and 
restricting delivery when supply is running low. To identify potential 
shortages, Reclamation managers need information that can tell them how 
well they are managing their supply of water. Instead, Reclamation 
managers report against a target for the amount of water delivered, 
which is not adjusted for changes in critical factors such as weather 
conditions or customer needs. For example, if there is a lot of 
rainfall, Reclamation customers do not need the bureau to deliver 
water. By not delivering water, Reclamation is meeting customer needs 
and being a responsible steward of a natural resource, but it is also 
failing to meet its performance goal. Under these circumstances, 
Reclamation officials pointed out, if a manager were to make a decision 
based on a performance goal related to the amount of water delivered, 
he would have a perverse incentive to deliver water that is not needed. 
End of text box] 

Some managers suggested it would make more sense to have a measure that 
takes into account how well they manage water supply in relation to 
customer needs, rather than the amount delivered. Headquarters officials at 
Interior and Reclamation said they view the measure as an important 
statistic related to one of Reclamation's primary missions, to deliver 
water, and that they do not expect managers to meet the water-delivery 
targets if circumstances change. One official also noted that 
performance information on this measure is accompanied by narrative 
that explains whatever factors may have affected the amount of water 
delivered. However, Interior included annual goals for this measure for 
fiscal years 2009 and 2010 and reported on the results of this goal 
for the past 4 years in its fiscal year 2008 Performance and 
Accountability Report. Several Reclamation managers we interviewed said 
that this measure lacked credibility and was not used to make 
decisions. Despite the concerns expressed by regional and frontline 
managers, an Interior official noted that Reclamation had not proposed 
an alternative measure, which may be due to the difficulty of defining 
a quantifiable, verifiable measure. 
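
To make the managers' suggestion concrete, the following simplified 
sketch contrasts a fixed-volume target with a hypothetical 
demand-adjusted measure. The sketch is illustrative only: the 
functions and figures are invented and do not represent an actual 
Reclamation or Interior measure. 

[Code sketch (Python): 

# Hypothetical illustration of the managers' suggestion: compare a
# fixed-volume delivery target with a demand-adjusted measure. All
# figures are invented; this is not an actual Reclamation measure.

def raw_volume_score(delivered, target):
    """Percentage of a fixed volume target met, regardless of need."""
    return 100.0 * delivered / target

def demand_adjusted_score(delivered, requested):
    """Percentage of customer-requested water actually delivered."""
    return 100.0 * delivered / requested

target = 100.0  # fixed annual delivery target (arbitrary units)

# Dry year: customers request 100 units and receive all of them.
print(raw_volume_score(100.0, target))      # 100.0 -> target met
print(demand_adjusted_score(100.0, 100.0))  # 100.0 -> needs met

# Wet year: rainfall cuts requests to 60 units; all 60 are delivered.
print(raw_volume_score(60.0, target))       # 60.0 -> target "missed"
print(demand_adjusted_score(60.0, 60.0))    # 100.0 -> needs still met

End of code sketch] 

Under the raw-volume measure, the wet year registers as a performance 
failure even though customers' needs were fully met, which is the 
perverse incentive officials described; the demand-adjusted measure 
does not penalize responsible stewardship. 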

At both NPS and Reclamation, managers we interviewed described 
performance information that lacked credibility because the measures do 
not accurately define comparable elements or do not take into account 
different standards across bureaus or units. For example, several NPS 
managers noted that one of the measures they report, "percent of 
historic structures in good condition," does not differentiate between 
a large, culturally significant structure such as the Washington 
Monument and a smaller, less significant structure such as a group of 
headstones. Consequently, a manager could achieve a higher percentage 
by concentrating on improving the condition of numerous less 
significant properties. Similarly, measures related to trail 
maintenance do not take into account the baseline conditions or the 
purpose of the trails, which can vary greatly and affect the level of 
resources needed for maintenance. Some park managers expressed concern 
that certain performance measures falsely imply a consistency among 
units being compared, which could lead to inaccurate conclusions if 
decision makers do not take the proper context into consideration when 
reviewing information. Some managers also said that the long-term 
nature of some of the GPRA goals--many of which they acknowledged as 
critical--made it difficult to use the related performance information 
for daily decision making. For example, one headquarters official 
explained that the goal of restoring damaged park lands to a desired 
condition is an important aspect of NPS' mission, but progress may be 
slow and the result may not be achieved for many years. 

As we have reported,[Footnote 28] involving managers in the development 
of performance goals and measures is critical to increasing the 
credibility and therefore the usefulness of performance information to 
their day-to-day activities. A couple of NPS managers acknowledged that 
in the early stages of GPRA implementation they were more involved in, 
and felt more committed to, the process of developing goals and 
measures, but that this is no longer the case. Several 
Reclamation managers also said that they believed they had little 
influence in defining or revising GPRA and PART measures and that as a 
result, they do not take ownership of the performance information. 
According to a senior Interior headquarters official, however, the 
department has continued to provide a forum for bureaus to propose 
modifications or alternatives to existing measures. For example, every 
3 years, Interior requests proposals for changes or revisions, which 
are then reviewed by bureau and department officials, OMB, and the 
general public. However, several managers we interviewed at NPS and 
Reclamation said that when they proposed changes to measures that they 
found misleading or irrelevant, they did not always see evidence that 
their suggestions were considered or acted upon. 

An Interior headquarters official explained that suggestions received 
from the bureau level may not always be appropriate for adoption at the 
department level, where it is important to have measures that can 
provide Interior decision makers with a view of its bureaus' aggregate 
performance against mission goals. In these cases, the department 
forwards the suggestions to the bureau policy makers responsible for 
performance measurement and reporting for their consideration. This 
official pointed out that Interior's aggregated performance measures do 
not preclude the bureaus from reviewing disaggregated performance 
information, or other performance information that is more meaningful 
or relevant to their particular management decision-making needs. 

Proliferation of Performance Measures at Interior and NPS Detracted 
from Usefulness: 

We have previously reported that to be useful and meaningful to 
managers and staff across an agency, performance measures should be 
limited at each organizational level to the vital few that provide 
critical insight into the agency's core mission and operations. 
[Footnote 29] Setting such a limit also helps to ensure, among other 
things, that the costs of collecting and analyzing the data do not 
become prohibitive. However, in the 7 years since the inception of the 
former administration's PART assessment initiative, Interior has 
expanded its performance reporting to include 440 PART program 
measures, in addition to the approximately 200 strategic performance 
measures that satisfy GPRA reporting requirements for its nine 
bureaus. A senior headquarters official at Interior said that the 
annual Performance and Accountability Report[Footnote 30] contains so 
much data that it is difficult for senior leaders and managers to focus 
on priorities and easily identify performance gaps among the different 
program areas. 

At NPS, managers are required to report on over 120 performance 
measures related to GPRA and PART, covering a broad range of programs 
and projects including concessions, facilities, volunteer hours, 
visitor satisfaction, recreation opportunities, safety elements, and 
the condition of natural, historical, and cultural resources. A 
senior headquarters official echoed the concern stated by an Interior 
official, noting that the volume and scope of performance information 
that managers are required to collect and report make it difficult for 
them to determine which aspects of performance are important to 
department and service leadership. Moreover, regional and park managers 
stated that the resources needed to collect and report on the large 
volume of GPRA and PART measures are extensive. Managers of smaller 
parks with fewer employees were particularly concerned about 
the workload associated with collecting and reporting on all of these 
measures, with one noting that it was at the direct expense of park 
operations and maintenance activities. Several managers told us that 
new measures and reporting requirements were frequently introduced with 
new programs and initiatives, but that they were unaware of any efforts 
at NPS to review or retire performance measures that may no longer 
represent management priorities. 

Officials at Interior and NPS said they were aware of issues related 
to the usefulness of performance information as currently collected 
and reported. The former Deputy Secretary of the Interior 
said that department-level measures were not necessarily useful for 
making day-to-day management decisions. She attributed this to 
Interior's effort to create measures with reach, which she defined as a 
few simple measures that can provide a view of how all of their bureaus 
are performing against broad outcome goals. She contrasted the 
department-level measures with rich measures that are intended to 
provide detailed information that makes sense on the ground and can be 
used to manage programs. Although this official said that, by 
definition, there should only be a few high-level reach measures, she 
acknowledged that there had been a proliferation of measures, which 
made them difficult to manage. 

Interior and NPS Have Undertaken Efforts to Improve the Usefulness of 
Performance Information: 

To address these concerns, some efforts had been initiated at both 
organizations to improve the usefulness of performance information 
without adding to the existing data-collection and reporting process. A 
senior headquarters official at Interior described how he worked with 
department and bureau managers to select a subset of 26 key performance 
indicators from the department's more than 200 strategic performance 
measures to help internal and external audiences focus on the 
department's critical mission areas. These 26 measures are aggregated 
across bureaus with shared performance goals and link their performance 
with cost information. For example, Interior reported that in 2007, the 
department spent approximately $114.4 million to control about 635,000 
acres--or 1.7 percent--of total acres with invasive plant species. 
These data capture expenditures and progress made by the four bureaus-
-NPS, Reclamation, Bureau of Land Management, and Fish and Wildlife 
Service--that collectively contribute to the goal. According to this 
official, these summary indicators are intended to help senior 
officials navigate the large volume of performance information so they 
can more easily identify areas that are achieving results and focus on 
areas that need improvement. By linking budget information to these key 
indicators, this official said he was able to improve the usefulness of 
the performance information as a decision-making tool for Interior, 
providing top leaders with a snapshot of the cost of meeting various 
goals. 
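
The invasive plant species indicator also illustrates the kind of 
unit-cost arithmetic that linking budget and performance information 
makes possible. The sketch below uses only the figures cited above; 
the derived values (cost per acre and implied total infested acreage) 
are our own back-of-the-envelope calculations, not figures Interior 
reported. 

[Code sketch (Python): 

# Back-of-the-envelope check using only the figures cited above; the
# derived values are our own calculations, not Interior-reported data.
spending = 114.4e6          # dollars spent in 2007
acres_controlled = 635_000  # acres of invasive plants controlled
share_of_total = 0.017      # 1.7 percent of total infested acres

cost_per_acre = spending / acres_controlled        # ~ $180 per acre
implied_total = acres_controlled / share_of_total  # ~ 37 million acres

print(f"Cost per acre controlled: ${cost_per_acre:,.0f}")
print(f"Implied total infested acres: {implied_total:,.0f}")

End of code sketch] 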

At NPS we observed several initiatives that were intended to improve 
the usefulness of performance information. The NPS Scorecard, for 
example, was developed by the NPS Comptroller--with input from 
headquarters, the regions, and the parks--as a diagnostic tool to 
evaluate performance and efficiency across the organization. The 
current version of the scorecard includes 34 financial, organizational, 
and strategic performance indicators derived from existing data sources 
and scores each park relative to its counterparts. Although still under 
development, some managers described the NPS scorecard as a promising 
instrument because it provided them with an analysis of their 
performance and efficiency in comparison to other parks, among other 
reasons. They also appreciated that the scorecard pulled information 
from existing NPS databases and did not require additional investment 
in data collection or reporting. However, some managers expressed 
concerns that the scorecard compared park performance to an average of 
all parks rather than an established benchmark. Each year, specific 
performance targets are established for individual parks to meet 
NPS-wide goals. Therefore, a park that met or exceeded its agreed-upon 
performance targets but was still below the NPS-wide average could 
receive an unfavorable rating on the scorecard. 
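
A small numeric illustration shows how this could happen. In the 
hypothetical sketch below (the park names, targets, and scores are 
invented, and the actual scorecard's 34 indicators and scoring method 
are more involved), one park meets its agreed-upon target yet falls 
below the all-park average. 

[Code sketch (Python): 

# Hypothetical illustration: rating parks against the all-park average
# can flag a park that met its own target. Names, targets, and scores
# are invented; the actual NPS Scorecard methodology is more involved.
parks = {
    "Park A": {"target": 70, "actual": 72},
    "Park B": {"target": 85, "actual": 90},
    "Park C": {"target": 88, "actual": 93},
}

average = sum(p["actual"] for p in parks.values()) / len(parks)  # 85.0

for name, p in parks.items():
    met = p["actual"] >= p["target"]
    standing = "above" if p["actual"] >= average else "below"
    print(f"{name}: met its target: {met}; {standing} the average")

# Park A meets its target (72 >= 70) yet sits below the 85.0 average,
# so an average-based rating would score it unfavorably.

End of code sketch] 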

Another approach that was being adopted in some of the NPS regions we 
reviewed--the Core Operations Analysis--is a park-level funding and 
staffing planning process that is intended to improve park efficiency 
and ensure that a park's resource-allocation decisions are linked to 
its core mission goals. Regional-level managers who engaged in the Core 
Operations Analysis said it was useful in establishing goals based on 
the park's priorities, monitoring progress toward achieving those 
goals, and holding park superintendents accountable for meeting 
established goals, in contrast to the GPRA and PART goals, which 
several managers perceived as relating to department- or program-level 
planning efforts and long-term goals. One region had taken the 
additional step of establishing linkages between its analysis and 
GPRA goals, with assistance from a senior Interior official. 

In addition, two park superintendents we interviewed said they were 
able to make GPRA and PART measures useful for their day-to-day 
management decision making by linking them to shorter-term, operational 
goals. For example, one had created an annual work plan that linked 
GPRA goals to intermediate tasks necessary to achieve those goals so 
that staff could see how their efforts contributed to aggregate or long-
term outcomes. Specifically, the GPRA target that "87 percent of 
visitors understand and appreciate the site's significance" was linked 
to two intermediate goals: "research and preserve information on 
stories and resources" and "conduct 7 guided tours per day, 362 days 
per year." Another said that he incorporated targets related to these 
GPRA goals in all of his managers' performance plans in order to make 
the performance information more meaningful on an annual basis. 

Although these initiatives appeared to be positive steps toward 
ensuring that performance measures were useful for decision making at 
different operating levels--as well as toward helping focus leaders and 
managers on performance priorities--they did not appear to reduce the 
volume of performance information that NPS and Reclamation were 
required to report. As a result, efforts to ensure that performance 
information was useful at the bureau level may have been hindered. For 
example, the two park managers who had linked shorter-term or 
disaggregated targets to GPRA goals told us that they were atypical in 
taking this approach. They explained that many of their colleagues were 
not able to develop bridge measures or interim goals due to lack of 
training, time, or staff resources relative to their reporting 
workload; or because they were responsible for reporting on too many 
measures as a result of the scope of their park's operations. This 
perspective was borne out by our interviews with other park 
superintendents who said they expended significant resources to report 
on many measures that lacked relevance to their management decision 
making. One park manager told us that he did not have the staff 
available to collect and use information over and above what was 
already required. 

Labor-Intensive and Poorly Integrated Data Systems Increased the 
Burden of Performance Reporting at NPS and Reclamation: 

According to NPS managers we interviewed at all levels, lack of 
integration among NPS' multiple data systems--and little flexibility to 
modify its primary performance information system to accommodate 
evolving programs and requirements--contributed to a time-consuming, 
labor-intensive performance-reporting process. At NPS, managers were 
required to use the Performance Management Data System (PMDS) to 
collect GPRA performance information. However, regional and park 
managers we spoke to said that the system was not easy to use for data 
entry or for obtaining information needed to manage programs or 
operations. One headquarters official confirmed that the system had not 
been designed as a management tool but simply to aggregate performance 
data from the parks. Several managers described the 
system as slow and difficult to use and noted this was especially 
problematic for NPS' remote parks that do not have high-speed system 
access.[Footnote 31] NPS managers we interviewed also said that PMDS 
cannot be used to integrate GPRA data with other information that would 
be useful for decision making, such as park project status and funding 
information, facilities condition information, or other park program 
performance information that is not covered by current GPRA measures. 
Several park managers described how another system, the Project 
Management Information System (PMIS), captures the park's project 
proposals and is useful in tracking park-level projects' status and 
funding. Although PMIS has a field where park managers can enter 
expected performance outcomes related to GPRA goals, doing so would 
require additional data entry because there are no automated linkages 
between the two systems. 

Further, an NPS headquarters official said that PMDS does not have the 
flexibility to generate new reports to satisfy evolving reporting 
requirements, which can be problematic when NPS is expected to 
demonstrate results associated with special funding initiatives such as 
the Centennial Commitment--a multiyear initiative that provided NPS 
with $100 million in fiscal year 2008 to augment existing park funding. 
According to this official, PMDS was not designed to differentiate 
Centennial dollars--or any other special appropriations--from their 
annual appropriation; rather, the system was set up to collect 
performance information, regardless of funding source, from individual 
park units. According to this same official, PMDS does not attribute 
specific performance results to discrete fund sources because gains in 
performance are often the result of multiple fund sources over multiple 
years. To satisfy Centennial reporting requirements, managers had to 
collect and report additional performance information, using other 
systems, to show what they did with Centennial dollars and what types 
of results they accomplished. 

One park manager noted that although the lack of integration and 
flexibility was an inconvenience for the largest parks, they generally 
had enough staff to absorb the workload. However, some managers 
responsible for smaller, geographically isolated parks that do not 
employ many staff said that these issues presented a challenge to their 
ability to enter data and generate required reports. A top official at 
Interior also acknowledged that this was especially difficult for 
smaller parks where a few staff members have to serve in many roles and 
may not have enough time to collect all of the required data. 

Some Reclamation managers we spoke to also said that a lack of 
integrated systems made it difficult for them to collect and report on 
performance information. According to a regional manager responsible 
for GPRA reporting, there is no centralized database where a 
Reclamation executive can find out how the bureau is performing 
against all of its required performance goals. The lack of linkage 
among the different Reclamation systems required managers to enter the 
same data multiple times, which some managers said was a burden. 

Officials at CMS Headquarters and Selected Programs Said Key Management 
Practices Had Promoted Use of Performance Information for Decision 
Making: 

In 2000, significantly fewer managers at CMS--then known as the Health 
Care Financing Administration--reported using performance information 
for various management decisions, as compared to their counterparts in 
the rest of government. Between our 2000 and 2007 surveys, however, CMS 
showed one of the largest average increases in the percentage of 
managers who reported using performance information for certain 
decisions. This increase placed CMS in about the middle of our agency 
rankings, which were based on an index of 2007 survey results designed 
to reflect each agency's managers' reported use of performance 
information (see figure 20 in appendix II). Selected officials we 
interviewed attributed this change to the combined effect of key 
management practices they employed, including leadership commitment to 
using performance information, alignment of strategic and performance 
goals, improved usefulness of performance information, and enhanced 
analytic capacity to collect and use performance 
information. They also cited the legislative mandate for expanded 
performance reporting included in the Medicare Prescription Drug, 
Improvement, and Modernization Act of 2003 as another key change 
factor.[Footnote 32] According to these managers and officials, their 
increased use of performance information helped them to identify 
problems and solutions to improve results. 

CMS Managers Said Highly Visible Leadership Commitment, Frequent 
Communication Fostered Use of Performance Information: 

Our 2007 survey results indicated that CMS managers were at about the 
same level as the rest of government in reporting leadership commitment 
to results-oriented management, but compared to 2000, significantly 
more CMS managers agreed that their leadership demonstrated a strong 
commitment to achieving results (see figure 12). 

Figure 12: Percentage of CMS Managers Who Reported Top Leadership 
Demonstrated Commitment to Achieving Results: 

[Refer to PDF for image: horizontal bar graph] 

Survey question: My agency's top leadership demonstrates a strong 
commitment to achieving results. 

Percentage responding to a “great” or “very great” extent: 
2000: 46%; 
2007: 69%. 

Source: GAO. 

Note: Data are from GAO 2000 and 2007 surveys. 

[End of figure] 

Nearly all of the CMS officials we interviewed credited the commitment 
of one or more agency leaders--such as the CMS Administrator, Chief 
Operating Officer, or a Consortium head--with fostering their increased 
use of performance information to achieve results. Some of them further 
described how leadership demonstrated their commitment. For example, a 
budget official told us that at the first staff meeting of the year he 
distributes the Chief Financial Officer's performance plan and 
priorities to lay out for staff their performance goals for the year. 
He also described how each group director is given a poster-sized chart 
outlining his or her performance goals and program-specific GPRA goals 
for the year, which hangs in the director's office. As another example, an official 
we interviewed in Region IX described how top management discusses 
performance goals and the accomplishment of goals in staff meetings. 
She also noted that they feature information on performance milestones 
in their organizational newsletter. 

CMS Survey & Certification Division managers of Region IV of the 
Consortium for Quality Improvement and Survey & Certification 
Operations (CQISCO) attributed improvements in performance against GPRA 
goals--such as reducing the incidence of pressure ulcers among nursing- 
home patients--to leadership commitment to using performance 
information and frequent communication of performance information with 
stakeholders, among other factors. According to CMS Region IV managers 
we interviewed, they are several steps removed from nursing-home health-
care delivery, which in the past had been seen as a limiting factor in 
their ability to affect outcomes among nursing-home patients. One 
manager cited the regional leadership's commitment to getting external 
stakeholders to the table--even those outside of CMS' realm of 
regulatory oversight--as a critical factor to improving outcomes. She 
further described frequent, effective communication of performance 
information among stakeholders as a means to getting them to work 
together. See figure 13 for more information on these efforts. 

Figure 13: CMS Region IV Communicated Performance Information with 
Stakeholders to Improve Quality of Care in Nursing Homes: 

[Refer to PDF for image: illustration and accompanying text] 

Performance goal: Reduce incidence of pressure ulcers by 11%. 

Collaboration on goals and shared performance information by External 
stakeholders and Internal stakeholders. 

Performance goal achieved: 2,441 fewer long-stay residents with 
pressure ulcers[A]. 

When Region IV was charged with reducing the prevalence of pressure 
ulcers among nursing home residents by more than 11 percent[A]—an 
outcome the region could not directly control—a Region IV official 
said “we knew we had to try something different.” According to the 
Associate Regional Administrator, this different approach included 
collaboration with the internal and external stakeholders that could 
contribute to improved performance against their regional pressure 
ulcer reduction goal. They engaged hospital and nursing home personnel, 
patient advocates, emergency medical technicians, quality improvement 
organizations, state survey agencies, and others. They shared 
performance information about the problem and collaborated on possible 
causes and solutions. 

According to CMS regional officials, collaboration on goals and shared 
performance information were among the key factors in bringing about 
improvements in the prevalence of pressure ulcers among nursing home 
residents in each of the eight states in their region. In Region IV, 
between fiscal years 2006 and 2008, this improvement translated into 
nearly 2,500 fewer long-stay nursing home residents with pressure 
ulcers. 

Source: GAO analysis of CMS data. 

[A] In the 2-year period during fiscal years 2006-2008, Region IV 
reduced the percentage of long-stay nursing-home residents with 
pressure ulcers from 9.3 percent to 8.3 percent, which represented an 
11 percent decrease, or 2,441 fewer cases. 

[End of figure] 
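
The arithmetic behind note [A] can be reconstructed as follows. The 
sketch assumes, for illustration only, a roughly constant long-stay 
resident population over the period; that assumption is ours, not 
CMS's. 

[Code sketch (Python): 

# Reconstructs the arithmetic in note [A]. Assumes (for illustration
# only) a roughly constant long-stay resident population.
rate_fy2006 = 9.3  # percent of long-stay residents with pressure ulcers
rate_fy2008 = 8.3
fewer_cases = 2_441

relative_decrease = (rate_fy2006 - rate_fy2008) / rate_fy2006 * 100
print(f"Relative decrease: {relative_decrease:.1f}%")  # ~10.8%, about 11%

# If a 1.0-percentage-point drop corresponds to 2,441 fewer cases, the
# implied long-stay population is roughly 2,441 / 0.010, about 244,000.
implied_population = fewer_cases / ((rate_fy2006 - rate_fy2008) / 100)
print(f"Implied long-stay population: {implied_population:,.0f}")

End of code sketch] 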

According to several headquarters officials we interviewed, as a result 
of leadership support that emphasized internal and external 
communication, along with other key drivers, CMS has undergone a 
culture change that places a greater emphasis on using performance 
data to achieve results. 

CMS Managers Reported Alignment among Agency, Program, and Individual 
Performance Goals: 

Our survey results indicated that between 2000 and 2007, a 
significantly greater percentage of CMS managers reported that they 
were held accountable for program results (see figure 14). 

Figure 14: Percentage of CMS Managers Who Reported That Agency Managers 
at Their Level Are Held Accountable for the Results of Their Programs: 

[Refer to PDF for image: horizontal bar graph] 

Survey question: Agency managers/supervisors at my level are held 
accountable for the results of the program(s) they are responsible for. 

Percentage responding to a “great” or “very great” extent: 
2000: 42%; 
2007: 77%. 

Source: GAO. 

Note: Data are from GAO 2000 and 2007 surveys. 

[End of figure] 

Top CMS headquarters officials said that a new performance-management 
system that required linkages between organizational, program, and 
individual goals had made individual accountability for program results 
more explicit. In 2006, CMS began to implement the Department of Health 
and Human Services' four-tiered system, Performance Management 
Appraisal Program (PMAP), for non-SES employees at the agency.[Footnote 
33] Under PMAP, employees are held accountable for both administrative 
and program performance. Top CMS officials described how agency goals 
and objectives were embedded in the Administrator's performance 
agreement and cascaded down through the management hierarchy, so that 
each level of management understood its accountability for achieving 
the agency's broad goals. For example, broad goals for preventive 
health care cascaded from the Department of Health and Human Services 
to a Health Insurance Specialist in CMS' Office of Clinical Standards 
and Quality (OCSQ), who was responsible for communications to raise 
awareness among beneficiaries (see figure 15). 

Figure 15: CMS Reported Alignment among Department and Agency Goals and 
Individual Performance Objectives: 

[Refer to PDF for image: illustration] 

Strategic goal/performance goal: Department of Health and Human 
Services: Improve the safety, quality, affordability, and accessibility 
of health care including behavioral health care and long-term care; 
Strategic objective/performance objective: 
* Promote and encourage preventive health care including mental health, 
lifelong healthy behaviors, and recovery. 

Strategic goal/performance goal: CMS: Improve early detection of breast 
cancer; 
Strategic objective/performance objective: 
* Improve early detection of breast cancer among Medicare beneficiaries 
age 65 years and older by increasing the percentage of women who 
receive a mammogram. 

Strategic goal/performance goal: Director, Quality Improvement Group, 
OCSQ: Increase in utilization of Medicare preventive benefits (defined 
as influenza vaccination, screening mammography, and initial preventive 
services examination); 
Strategic objective/performance objective: 
* Achieve 2 percent increase in mammography for the targeted 
participating providers by end date of this work period; 
* Increase provider awareness of the importance of preventive services 
by educating 80 percent of targeted physician groups for the core 
prevention theme related to screening mammography by end date of this 
period. 

Strategic goal/performance goal: Supervisor, Health Insurance 
Specialist, OCSQ: Promote and encourage preventive health care, 
including mental health, lifelong health behaviors and recovery 
specifically through collaborative partnerships; 
Strategic objective/performance objective: 
* Fosters a team atmosphere whereby effective communications and 
collaboration are promoted both within and outside of the division; 
* Ensures effective staff communications and coordination in addressing 
media and congressional requests; recommends one proactive 
communication effort per quarter to benefit beneficiary awareness or 
stakeholders, or both, through outreach. 

Source: GAO analysis of CMS information. 

[End of figure] 

CMS Officials Said Selecting a Vital Few Performance Measures with 
Relevance to Managers Encouraged Use of Performance Information: 

Our survey results show that between 2000 and 2007, there was a 
significant decline in the percentage of CMS managers who reported that 
difficulty developing meaningful measures was a hindrance to using 
performance information (see figure 16). 

Figure 16: Percentage of CMS Managers Who Reported That Difficulty 
Determining Meaningful Measures Hinders Using Performance Information: 

Survey question: Difficulty determining meaningful measures hinders 
measuring performance or using the performance information. 

Percentage responding to a “great” or “very great” extent: 
2000: 65%; 
2007: 48%. 

Source: GAO. 

Note: Data are from GAO 2000 and 2007 surveys. 

[End of figure] 

Our interviews with CMS officials provided insight into steps they had 
taken to ensure that performance information was useful to managers. 
They said they selected measures for GPRA reporting purposes that were 
useful for decision making and limited the number of measures to the 31 
that represented the agency's priorities. According to an official 
responsible for strategic planning and performance reporting, because 
quality of care is a top priority for CMS, many of their measures--such 
as incidence of pressure ulcers among nursing-home residents--reflect 
this focus. This official noted that it would be unmanageable to 
measure and report on every aspect of their programs and processes. 
They ultimately settled on a set of performance goals that helped 
managers and staff identify performance gaps and opportunities to 
close them. 

Improved Data Systems and Training Opportunities Enhanced CMS' Capacity 
to Identify Problems and Devise Solutions: 

At CMS, our prior work identified the need for the agency to develop 
better management information systems, among other actions, to improve 
oversight of nursing-home quality and safety.[Footnote 34] More 
recently, we acknowledged that CMS had pursued important upgrades in 
the system used to track the results of state survey activities and has 
increased its analyses of data to improve oversight.[Footnote 35] This 
is consistent with CMS finance and administrative officials' statements 
that the agency had invested in systems infrastructure to support 
managing with performance information. A CQISCO manager responsible for Survey & 
Certification performance reporting in Region IV noted that easier 
access to performance information through improved data systems had 
enabled them to use performance information more effectively to 
identify problems and develop solutions. CQISCO managers and staff can 
now use more than 350 standard performance reports in exercising their 
oversight of state agencies responsible for surveying nursing-home 
quality. For example, a CQISCO Region IV official told us that they use 
many of these performance reports at quarterly meetings with state 
survey officials and that the performance information is helpful in 
identifying aberrant trends and illustrating these trends to the 
states. They also compare performance to prior years to determine 
whether positive outcomes have occurred. CQISCO Region IV officials 
provided an example of how a report on the frequency of state nursing- 
home quality surveys, which includes information that had not been 
easily accessible several years ago, helped them to improve outcomes in 
a particular state (see figure 17). 

Figure 17: A CMS Region IV Manager Described How Easier Access to 
Performance Data Contributed to Improved Nursing-Home Survey Frequency 
in Alabama: 

[Refer to PDF for image: horizontal bar graph and accompanying text] 

Every nursing home receiving Medicare or Medicaid payment must be 
inspected by the state survey agency against federal quality-of-care 
and fire safety standards not less than once every 15 months.[A] 
Performance is measured in terms of the percentage of surveys a state 
conducts within this required interval. In CMS’s Southeast Region, a 
Survey & Certification Division official said that investments in data 
systems made it easier to obtain the performance information that 
helped them improve outcomes related to Alabama’s nursing-home 
oversight activities. In the middle of fiscal year 2007, the regional 
Survey & Certification team was reviewing a performance report on the 
frequency of Alabama’s nursing home surveys and found that over 90 
percent had not met standards for timeliness. When states fail to 
conduct timely surveys, there is a risk that quality of care issues, 
such as preventing avoidable pressure ulcers, weight loss, or 
accidents, will go undetected in between surveys. In response, the team 
put together a “Request for Action Plan” to Alabama’s State Survey 
Agency that included a range of data reports on the state’s 
performance. “When the state agency sees the problems presented in 
black and white, it really encourages them to take action,” the manager 
said. The team continued to provide the information to the state and, 
over time, fewer and fewer surveys missed the timeliness standard. 
Partway into fiscal year 2009, CMS’s reports on Alabama’s nursing-home 
survey frequency showed 100 percent compliance with the maximum survey 
interval standard (see fig. below). Several years ago, the official 
noted, the information on state survey frequency was not as easy to 
access. “These days, it doesn’t take a technical expert to run these 
reports or interpret the information—they’ve been designed so that 
managers and staff can use them.” According to the official, improved 
availability of performance information has contributed to improved 
results. 

Percentage of Alabama nursing-home quality surveys that did not meet 
standard for timeliness, FY2007–FY2009: 

Fiscal year: 2007; 
Percentage: 76.6. 

Fiscal year: 2008; 
Percentage: 39.4. 

Fiscal year: 2009[B]; 
Percentage: 0. 

[A] In addition, the statewide average interval for these surveys must 
not exceed 12 months. CMS generally interprets these requirements to 
permit a statewide average interval of 12.9 months and a maximum 
interval of 15.9 months for each home. 42 U.S.C. § 1395i-3(g). 

[B] As of February 2009. 

Source: GAO analysis of CMS data. 

[End of figure] 
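
The two-part timeliness standard in note [A] can be expressed as a 
simple check. The sketch below is our own illustration; the survey 
intervals are invented, and CMS's actual reporting systems are far 
more elaborate than this. 

[Code sketch (Python): 

# Illustrative check of the two-part standard in note [A]: each home's
# survey interval must not exceed 15.9 months, and the statewide average
# must not exceed 12.9 months. Intervals are invented for illustration.
MAX_INTERVAL_MONTHS = 15.9
MAX_STATE_AVERAGE_MONTHS = 12.9

def timeliness(intervals):
    """Return the percent of surveys over the maximum interval and
    whether the statewide average interval meets the standard."""
    late = [i for i in intervals if i > MAX_INTERVAL_MONTHS]
    pct_late = 100.0 * len(late) / len(intervals)
    average_ok = sum(intervals) / len(intervals) <= MAX_STATE_AVERAGE_MONTHS
    return pct_late, average_ok

# Hypothetical state with eight homes; three surveys ran late.
pct_late, average_ok = timeliness(
    [11.0, 12.5, 16.2, 17.0, 10.8, 16.5, 9.9, 9.0])
print(f"{pct_late:.1f}% of surveys exceeded the maximum interval")  # 37.5
print(f"Statewide average within 12.9 months: {average_ok}")        # True

End of code sketch] 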

According to our prior work, inadequate staff expertise may have 
hindered CMS's use of performance information in monitoring state 
performance of nursing-home oversight.[Footnote 36] In 2003, officials 
in three regions said lack of staff expertise, among other issues, 
prevented them from using reports that were available to aid them in 
overseeing state survey activities. Our survey results, along with 
recent interviews with several CMS officials, indicate that the agency 
has taken steps to develop its staff's capacity to use performance 
information. Between 2000 and 2007, there was a significant increase 
on all six survey questions related to managers' access to training 
over the past 3 years on the use of performance information for 
various activities (see figure 18). 

Figure 18: Percentage of CMS Managers Who Reported That Training Was 
Provided to Help Accomplish Key Management Tasks: 

Survey question: Link the performance of program(s)/operation(s)/ 
project(s) to the achievement of agency strategic goals. 

Percentage responding to a “great” or “very great” extent: 
2000: 22%; 
2007: 54%. 

Survey question: Use program performance information to make decisions; 

Percentage responding to a “great” or “very great” extent: 
2000: 17%; 
2007: 34%. 

Survey question: Assess the quality of performance data; 

Percentage responding to a “great” or “very great” extent: 
2000: 9%; 
2007: 24%. 

Survey question: Develop program performance measures; 

Percentage responding to a “great” or “very great” extent: 
2000: 15%; 
2007: 49%. 

Survey question: Set program performance goals; 

Percentage responding to a “great” or “very great” extent: 
2000: 29%; 
2007: 58%. 

Survey question: Conduct strategic planning; 

Percentage responding to a “great” or “very great” extent: 
2000: 38%; 
2007: 57%. 

Source: GAO. 

Note: Data are from GAO 2000 and 2007 surveys. 

[End of figure] 

Key officials in CMS' headquarters told us that the agency had provided 
training on a range of topics related to performance measurement, such 
as "The Government Performance Logic Model," "Aligning Project 
Management with Organizational Strategy," and "Strategic Planning and 
Performance Measurement." According to one CQISCO Region IV official, 
increasing her staff's skills in conducting analyses of performance 
information and presenting findings was a gradual process that required 
training, coaching, and guidance. She said that there are one or two 
formal training opportunities for staff every year and that the 
Consortium holds quarterly meetings that address topics such as how to 
analyze data to identify problems. Another key approach was to hold 
staff individually accountable for using performance information in 
reports they present to senior management. Additionally, in 2006, 
Region IV's Survey & Certification Division began to include in all of 
its managers' performance agreements a performance element related to 
the use of performance information. 

Conclusions: 

It has been more than 16 years since Congress passed GPRA in an effort 
to ensure that federal agencies have the infrastructure and tools they 
need to improve results. Across the federal government, agencies have 
developed and implemented strategic plans and are routinely generating 
performance information to report progress toward their strategic 
goals. Our survey of government managers, however, showed that GPRA's 
legislative requirements and other performance improvement initiatives 
are not sufficient to ensure that managers will actually use 
performance information to manage for results. At FEMA and Interior, 
inconsistencies or weaknesses in key management practices--such as 
demonstrating leadership commitment, aligning goals, ensuring 
usefulness of performance measures, and building analytic 
capacity--appeared to hinder their use of performance information. 

At FEMA, despite a strong commitment to achieving mission results, 
some of the officials we interviewed did not demonstrate the same level 
of commitment to using performance information in decision making, 
especially in the face of unpredictable circumstances such as natural 
disasters. Strengthening the commitment of FEMA's leaders alone, 
however, would not be enough to ensure that managers throughout the 
agency are well-positioned to use performance information to manage for 
results. The agency also had gaps in performance information, such as 
immature performance measurement at the regional level, which made it 
difficult to establish a line of sight over strategic, program, 
regional, and individual performance goals. These gaps, coupled with a 
lack of a performance-management system that required goal alignment, 
made it challenging for managers to hold individuals accountable for 
achieving results. Furthermore, FEMA faces other hurdles, such as a 
lack of trained staff and inadequate information systems, in ensuring 
that performance information can be easily collected, communicated, and 
analyzed. However, despite these challenges, there are 
some emerging efforts that FEMA's leadership could build on, such as 
consistent and timely regional reporting against performance goals. 

At Interior and NPS we observed a management culture where performance 
information was primarily used for after-the-fact accountability and 
reporting purposes, but not as a tool for improving results. Leaders at 
Interior and NPS were not effectively communicating how, if at all, 
they used performance information to identify performance gaps and 
develop strategies to better achieve results. Instead, the greater 
emphasis on using performance information to hold individuals 
accountable for achieving goals may be contributing to the perception 
that it is being used to punish poor performers rather than to improve 
overall performance. Leaders who do not effectively strike a balance 
among such uses of performance information run the risk of creating 
perverse incentives where managers are afraid to fail rather than 
inspired to succeed. 

Under GPRA and PART, Interior collectively tracked nearly 650 
performance measures, which made it difficult for leadership and 
management at all levels to focus on critical priorities. Even where 
there was management commitment to using performance information 
proactively, some bureau-level managers at NPS and Reclamation said 
GPRA and PART measures were not useful for decision making, either 
because there were too many or they were not credible. A labor- 
intensive, cumbersome performance-information system further hindered 
NPS and Reclamation managers' efforts to use performance information to 
inform their decision making. Interior's more recent focus on key 
performance indicators and NPS' efforts to develop more useful 
performance information for park-level decision making could be the 
foundation for further improvements. 

Our review of selected areas at CMS indicated a possible roadmap for 
agencies seeking to increase their use of performance information to 
improve results. There, it was clear that agency leaders were committed 
to using performance information and they made it a priority to build 
the analytic capacity to do so. Managers credited the use of the very 
same management practices that were weak or inconsistent at our other 
case agencies with helping improve their ability to manage for results. 
Although these managers noted that they could not control ultimate 
quality-of-care outcomes--such as the incidence of pressure ulcers 
among nursing-home patients--they said that communicating performance 
information to states, nursing homes, and other stakeholders helped 
them to work collaboratively to improve results. 

Managing for results will become even more critical under the Recovery 
Act: both Interior and FEMA have seen significant increases in their 
funding; in the case of Reclamation, funding has more than doubled. 
With the economic health of the nation at stake, federal managers of 
these and other agencies will be expected to allocate these resources 
to achieve critical results. 

Recommendations for Executive Action: 

The Secretary of DHS should direct the Administrator of FEMA to take 
the following three actions: 

1. direct agency leadership to demonstrate its commitment to using 
performance information for decision making by reviewing performance 
results with subordinate managers on a regular and recurring basis and 
communicating decisions based on performance information to show that 
performance information is reviewed and acted upon; 

2. augment FEMA's analytic capacity to collect and analyze performance 
information by: 

a. continuing to build upon recent efforts to provide training to 
directorate and regional managers to enhance their use of performance 
information, which includes topics such as strategic planning, 
developing robust performance measures, and analyzing what the 
performance data mean; and: 

b. reviewing performance information systems to address users' needs 
for integrated, timely, and relevant performance information; 

3. improve linkages among agency, program, and individual performance 
by: 

a. continuing to engage program and regional managers in efforts to 
develop, and where appropriate refine, intermediate, measurable 
performance targets that cascade from agency strategic goals; and: 

b. in the absence of a DHS-wide performance-management system, 
developing interim guidance for FEMA's current performance-appraisal 
system, covering supervisors and managers, on how to align individual 
performance objectives with program and agency goals. Such guidance 
could include information on how work plans can be used to align 
individual and agency performance goals and objectives, examples of 
alignment from subunits within FEMA that are already implementing this 
practice, or other approaches to promoting such alignment. 

The Secretary of the Interior should take the following two actions: 

1. direct departmental leadership and the Director of NPS to 
demonstrate their commitment to using performance information for 
decision making by reviewing performance results with subordinate 
managers on a regular and recurring basis and communicating decisions 
based on performance information to show that performance information 
is reviewed and acted upon; 

2. direct departmental leadership, the Director of NPS, and the 
Commissioner of Reclamation in conjunction with OMB to review the 
usefulness of their performance measures and refine or discontinue 
performance measures that are not useful for decision making. The 
review should also consider options for reducing the burden of 
collecting and reporting performance information. This review should 
involve managers at all levels to take into account their differing 
needs for performance information. 

Agency Comments and Our Evaluation: 

We provided a draft of this report to the Secretary of DHS, the 
Secretary of the Interior, and the Secretary of the Department of 
Health and Human Services for comment. We received written comments 
from DHS and Interior, which are reprinted in appendices IV and V. The 
Department of Health and Human Services provided only technical 
comments, which were incorporated as appropriate. 

DHS concurred with two of our three recommendations and partially 
concurred with the third. DHS also noted efforts underway that may 
ultimately address some of these recommendations, such as FEMA's 
recently established Performance Improvement Council, which is working 
in conjunction with other FEMA teams to address performance issues 
across the agency. With regard to our third recommendation concerning 
improved linkages among agency, program, and individual performance 
goals, DHS agreed that FEMA should continue to develop or refine 
intermediate performance targets that cascade from agency strategic 
goals. However, DHS did not concur that, in the absence of a DHS-wide 
performance-management system, FEMA should develop an interim plan to 
allow managers to hold staff accountable for the accomplishment of 
agency strategic goals. DHS commented that an interim plan is not 
advisable at this time because the current FEMA Employee Performance 
System covers both non-bargaining and bargaining unit employees, and an 
interim performance management system would require FEMA to support two 
systems until it could be bargained. Since DHS appeared to interpret 
our recommendation to mean that FEMA should develop and maintain one or 
more interim systems, which was not our intention, we contacted 
officials from both agencies for clarification. Upon further discussion 
with FEMA human capital officials, we revised the language of our 
recommendation to state that the agency should develop interim guidance 
for its current performance appraisal system for managers and 
supervisors. FEMA officials agreed that developing interim guidance for 
the agency's managers and supervisors would not require FEMA to develop 
dual systems or to negotiate with its employee labor unions. They further 
acknowledged that such guidance could help lay the groundwork for 
implementing DHS' future performance management system and indicated 
their willingness to address the recommendation as clarified. 

Interior agreed in principle with the recommendations in the report and 
noted that it is in the process of revising its strategic plan in a 
manner that will be responsive to our recommendations. The department 
also provided additional comments on some of the findings contained in 
the report. First, Interior noted that the Bureau of Reclamation, 
rather than Interior, sets goals related to water delivery. In 
response, we revised our report to indicate that Interior included-- 
rather than set--a goal for this measure for fiscal years 2009 and 
2010, and reported on the results of this goal for the past four years 
in its fiscal year 2008 Performance and Accountability Report (PAR). 
Interior also commented 
that the specific water delivery measure we discussed as lacking 
credibility is provided only for reference at the end of Interior's PAR 
and is not emphasized. While it is true that this measure is shown at 
the end of the report, it is nonetheless labeled as a performance 
measure and Reclamation must collect and report the performance data, 
even though both Interior and Reclamation officials agreed that these 
data were not used in decision making. 

Second, Interior expressed concern related to our finding that at both 
NPS and Reclamation, managers we interviewed described certain 
performance measures as lacking credibility. Interior stated that it 
would like to continue to emphasize outcome measures and suggested that 
our report associated long-term measures with a lack of credibility. In 
addition, Interior commented that our report implies a need for an 
increase in specific, narrower-focused measures to improve credibility 
and use among managers. We disagree that we equated long-term measures 
with low credibility among managers. As we reported, managers we 
interviewed said that credibility issues were a result of measures that 
do not accurately define comparable elements or do not take into 
account different standards across bureaus or units. Furthermore, we 
agree that Interior should continue to emphasize outcome measures, and 
do not believe that our report encourages adoption of additional, 
narrow measures as a solution to the credibility issues we noted. On 
the contrary, one of our findings was that the large volume of 
performance measures may be hindering managers' efforts to relate 
short-term or 
disaggregated performance targets to longer-term outcome goals. As we 
stated in the report, limiting measures to the vital few at each 
organizational level can help ensure that useful, meaningful 
performance data are available to managers and staff across an agency. 

Interior also provided additional perspective on our recommendation 
that it work with NPS, Reclamation, and OMB to review the usefulness of 
their performance measures. 

As agreed with your offices, unless you publicly announce the contents 
of this report earlier, we plan no further distribution until 30 days 
from the date of this letter. We will then send copies of this report 
to the Secretary of DHS, the Secretary of the Interior, and the 
Secretary of the Department of Health and Human Services, and other 
congressional committees interested in results-oriented government and 
management issues at DHS, Interior, and CMS. In addition, the report 
will be available at no charge on GAO's Web site at [hyperlink, 
http://www.gao.gov]. 

If you or your staff have any questions about this report, please 
contact me at (202) 512-6806 or at steinhardtb@gao.gov. Contact points 
for our Offices of Congressional Relations and Public Affairs may be 
found on the last page of this report. Individuals who made key 
contributions to this report are listed in appendix VI. 

Signed by: 

Bernice Steinhardt: 
Director, Strategic Issues: 

[End of section] 

Appendix I: Objectives, Scope, and Methodology: 

To develop a better understanding of what practices may inhibit or 
promote the use of performance information in managerial decision 
making, our objectives were to: (1) identify agencies with relatively 
low use of performance information and the factors that contribute to 
this condition; and (2) examine practices in an agency where there were 
indications of improvement in its use of performance information. We 
conducted a review of three agencies to identify what barriers and 
challenges hinder managers' use of performance information and, in 
addition, for objective two, what practices appear to have contributed 
to agency efforts to improve managerial use of performance information. 
To address both of our objectives, we began by examining the results of 
our four surveys of performance and management issues,[Footnote 37] 
reviewing our prior reports and other relevant materials on results- 
oriented management, the Government Performance and Results Act of 1993 
(GPRA), and the Office of Management and Budget's (OMB) Program 
Assessment Rating Tool (PART). We also compared management practices 
at selected agencies with those we have previously identified as 
enhancing the use of performance information for decision making. 
[Footnote 38] 

To identify agencies for our review, we used our survey results from 
2000 and 2007 since both were designed to provide analysis of data at 
the agency and department level as well as governmentwide. For purposes 
of selecting agencies for review under objective one, we used our 2007 
survey data to calculate an agency average "core uses" index score to 
identify agencies with a lower percentage of managers reporting 
performance information use relative to their counterparts at other 
agencies. See appendix II for detailed information on how we developed 
the core uses index score and a table showing each agency's ranking and 
index score. 

In deciding which agencies to select, we focused on those agencies with 
the lowest ranking. For that subset of agencies, we reviewed additional 
performance-related material such as the Office of Personnel 
Management's Federal Human Capital Survey and OMB's PART assessments 
to aid us in the selection process. Since we expected that our review 
could produce recommendations concerning agency-specific performance 
management issues, 
we also conducted an environmental scan of recent and ongoing GAO 
research at our potential case-study agencies to avoid duplicating 
similar work.[Footnote 39] Based on these considerations, we selected 
the Department of the Interior (Interior) and Federal Emergency 
Management Agency (FEMA). 

In the initial phase of our review of these two agencies, we conducted 
interviews with senior-level officials responsible for operations, 
budget, human resources, and performance-reporting functions at each 
agency to gain an understanding of their performance-based management 
policies and practices established at the top levels of the 
organization. To help guide us in determining where to further focus 
our review, we also asked officials and managers to identify particular 
areas that could provide illustrative examples of the challenges and 
difficulties they faced in using performance information for decision 
making. Based on these initial interviews and other information, we 
then identified programs and operational areas for more in-depth 
review. 

At Interior, we selected the National Park Service (NPS) and Bureau of 
Reclamation (Reclamation) for review. At both NPS and Reclamation, we 
interviewed senior-level officials responsible for operations, budget, 
policy, human resources, and performance-management functions. At NPS, 
we interviewed management and program officials from four of seven 
regions: Pacific West, National Capital, Southeast, and Intermountain 
Regions. We interviewed park superintendents and program managers from 
two parks within each of the four regions for a total of eight parks. 
Of the four regions and eight parks, we conducted site visits at three 
of the regional management offices and four national parks; the other 
interviews were conducted by phone. In our sample, we included a mix 
of small, medium, and large parks, which we defined by the size of 
their operating budget (see the sketch after this list): 

(1) small parks had an operating budget of less than $1 million; 

(2) medium parks had an operating budget of $1 million to $10 million; 
and: 

(3) large parks had an operating budget of more than $10 million. 
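
The size categories can be expressed as a simple rule; the following 
minimal Python sketch uses the dollar thresholds defined above, while 
the treatment of budgets exactly at $1 million or $10 million is our 
assumption.

def park_size(operating_budget: float) -> str:
    # Size categories defined by annual operating budget; treating the
    # $1 million and $10 million boundaries as "medium" is an assumption.
    if operating_budget < 1_000_000:
        return "small"
    if operating_budget <= 10_000_000:
        return "medium"
    return "large"

print(park_size(750_000))     # small
print(park_size(5_000_000))   # medium
print(park_size(25_000_000))  # large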

We also selected parks on the basis of their diversity across resource 
types that included national historic parks, national battlefields, 
national recreation areas, and national seashores. We interviewed 
senior and line managers from three of NPS' five major national 
program areas, including Natural Resource Stewardship and Science 
programs; Cultural Resources programs; and Park Planning, Facilities, 
and Lands programs. 

At Reclamation, of the five regions, we visited the Mid-Pacific Region 
and spoke to officials from two other regions: the Upper Colorado and 
Great Plains Regions. Within these three regions, we interviewed the 
regional director, GPRA Coordinator, and one area or project manager. 
These three regions are geographically diverse and include a range of 
operations and projects including new construction, dams, and power- 
generating plants. 

At FEMA, we followed a similar procedure of meeting with senior-level 
officials and soliciting their suggestions for areas or operations that 
were demonstrative of the challenges and difficulties of using 
performance information in managerial decision making. At FEMA, we 
interviewed top officials and program managers from two of their eight 
Directorates along with regional officials and line managers 
responsible for general operations as well as Mitigation and Disaster 
Assistance Directorate program delivery in 3 of FEMA's 10 regions: 
Region I in Boston, Region IV in Atlanta, and Region IX in San 
Francisco. 

To address our second objective of identifying an agency where 
managers' use of performance information appeared to have improved, we 
used results from a set of selected items that we asked on both the 
2000 and 2007 surveys. For each agency, we calculated 
the difference between the percent of agency managers reporting their 
use of performance information to a great or very great extent in 2000 
and 2007 across a set of nine items that addressed the use of 
performance information in areas of managerial decision making such as 
setting program priorities, allocating resources, and setting or 
revising work goals. We then calculated the average of these nine 
differences as an overall descriptive indicator of change in each 
agency's managers' reporting on their use of performance information. 
[Footnote 40] See figure 19 for the results of this analysis. 
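
To illustrate the calculation itself, here is a minimal Python sketch 
for a single agency; the nine item-level percentages are hypothetical 
values chosen for illustration, not actual survey results.

# Hypothetical percentages of an agency's managers reporting use of
# performance information to a "great" or "very great" extent on each
# of the nine items (illustrative values, not GAO survey data).
pct_2000 = [35.0, 40.0, 28.0, 33.0, 45.0, 30.0, 38.0, 42.0, 36.0]
pct_2007 = [50.0, 52.0, 41.0, 47.0, 58.0, 44.0, 49.0, 55.0, 48.0]

# Difference on each item, 2007 minus 2000.
differences = [p07 - p00 for p07, p00 in zip(pct_2007, pct_2000)]

# Average of the nine differences: the overall descriptive indicator
# of change in the agency's reported use of performance information.
average_change = sum(differences) / len(differences)
print(f"Average change, 2000 to 2007: {average_change:.1f} percentage points")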

Figure 19: Average Change in Percentage of Federal Managers Reporting 
Use of Performance Information to a Great or Very Great Extent, 2000- 
2007: 

[Refer to PDF for image: vertical bar graph] 

Agency: NSF; 
Average change 2000 to 2007: 29.1%. 

Agency: NRC; 
Average change 2000 to 2007: 21.3%. 

Agency: CMS; 
Average change 2000 to 2007: 20.9%. 

Agency: FAA; 
Average change 2000 to 2007: 19.9%. 

Agency: EPA; 
Average change 2000 to 2007: 18.0%. 

Agency: Education; 
Average change 2000 to 2007: 15.9%. 

Agency: Energy; 
Average change 2000 to 2007: 15.8%. 

Agency: IRS; 
Average change 2000 to 2007: 14.6%. 

Agency: SSA; 
Average change 2000 to 2007: 13.6%. 

Agency: FEMA; 
Average change 2000 to 2007: 11.9%. 

Agency: NASA; 
Average change 2000 to 2007: 11.3%. 

Agency: DOT; 
Average change 2000 to 2007: 11.1%. 

Agency: AID; 
Average change 2000 to 2007: 11.1%. 

Agency: State; 
Average change 2000 to 2007: 10.1%. 

Agency: Labor; 
Average change 2000 to 2007: 9.15%. 

Agency: Forest Service; 
Average change 2000 to 2007: 5.6%. 

Agency: VA; 
Average change 2000 to 2007: 5.1%. 

Agency: DOJ; 
Average change 2000 to 2007: 3.9%. 

Agency: Treasury; 
Average change 2000 to 2007: 3.8%. 

Agency: GSA; 
Average change 2000 to 2007: 2.7%. 

Agency: USDA; 
Average change 2000 to 2007: 2.3%. 

Agency: HUD; 
Average change 2000 to 2007: 2.3%. 

Agency: Interior; 
Average change 2000 to 2007: 1.9%. 

Agency: Commerce; 
Average change 2000 to 2007: 1.4%. 

Agency: DOD; 
Average change 2000 to 2007: -0.7%. 

Agency: OPM; 
Average change 2000 to 2007: -1.8%. 

Agency: HHS; 
Average change 2000 to 2007: -3.2%. 

Agency: SBA; 
Average change 2000 to 2007: -9.6%. 

Source: GAO. 

Note: Data are from GAO's 2000 and 2007 surveys. 

[End of figure] 

Similar to our selection procedures for objective one, we focused on 
those agencies showing the greatest positive shift in the percent of 
managers reporting their use of performance information in their 
decision making. We then reviewed additional performance-related 
material for that subset of agencies showing the greatest change. In 
selecting an agency for review, we considered various factors such as 
agency size, mission, and workforce mix with a view that an agency 
facing substantive barriers and challenges to the use of performance 
information would yield good illustrative case examples of change. We 
selected the Centers for Medicare & Medicaid Services (CMS). In our 
2000 survey, on five items asking managers about the extent to which 
they had five different types of performance measures (for example, 
outcome, output, and customer service), CMS[Footnote 41] had the lowest 
ranking of all agencies on four of the five items.[Footnote 42] While 
CMS has shown notable change in its managers' reported use of 
performance information, its relative standing in 2007 as reflected in 
appendix II, figure 20, shows it to be in about the middle of the 
distribution of the 29 agencies we ranked. 

At CMS, we interviewed senior-level officials to gain insight into how 
they were able to improve their managers' use of performance 
information as indicated by our survey results. We asked agency 
officials to help us identify regional- and line-manager interview 
subjects who could articulate their experiences with such a change. 
Based on this input, we interviewed the top officials from two of CMS' 
four Consortia: the Consortium for Quality Improvement and Survey & 
Certification Operations and the Consortium for Financial Management 
and Fee for Service Operations. We also interviewed regional officials 
and managers responsible for Survey & Certification in Region IV and 
regional officials responsible for administration and outreach in 
Region IX. At our interviews, we asked officials to identify the 
barriers and challenges they faced in the use of performance 
information and strategies for overcoming them. 

At all the agencies where we conducted our work, our interviews and 
examination of agency documentation such as strategic plans, GPRA 
measures, and performance agreements incorporated a review of the 
extent to which these organizations implemented practices that our work 
has shown can promote the use of performance information for management 
decision making. We also examined our 2007 survey results for these 
agencies to explore what differences in agency responses could be 
useful in identifying conditions or perceptions that were relevant to 
or elaborated on our observations. Throughout the body of the report, 
the differences in percentages are significant unless noted otherwise. 
Although we reviewed available performance information in the examples 
we cite and describe how performance information was used to make 
decisions, we did not attempt to independently assess the reliability 
of the information or verify that the use resulted in improved 
outcomes, since these steps were not within the scope of our review 
objectives. 

We performed our work in the Washington, D.C., metropolitan area; 
Boston, Massachusetts; San Francisco and Sacramento, California; 
and Atlanta, Georgia, from March 2007 to May 2009 in accordance with 
generally accepted government auditing standards. Those standards 
require that we plan and perform the audit to obtain sufficient, 
appropriate evidence to provide a reasonable basis for our findings and 
conclusions based on our audit objectives. We believe that the evidence 
obtained provides a reasonable basis for our findings and conclusions 
based on our audit objectives. 

[End of section] 

Appendix II: Agency Rankings Based on Index of 2007 Survey Results: 

As part of our analyses of the 2007 survey data, we identified a set of 
nine items from the questionnaire that inquired about uses of 
performance information that we identified in a previous GAO report. 
[Footnote 43] Using those items, we developed an index that reflected 
the extent to which managers perceived their own use of performance 
information for various managerial functions and decisions as well as 
that of other managers in the agency. To obtain an index score of 
reported use of performance information, we computed an average score 
for each respondent across the nine items we identified. We then 
averaged the respondent scores from each agency to produce an overall 
index score for each agency. By using this average index score, which 
yields values in the same range as the 5-point extent scale used on 
each item, we were able to qualitatively characterize index score 
values using the same response categories used for the items 
constituting the index.[Footnote 44] Figure 20 shows the relative 
ranking on the index score for each agency in the 2007 survey. 
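
As a minimal Python sketch of this computation (the agencies and 
1-to-5 ratings below are hypothetical, not actual survey responses), 
each respondent's nine item ratings are averaged, and the respondent 
averages are then averaged within each agency:

# Hypothetical ratings: each inner list holds one manager's answers to
# the nine index items on the 5-point extent scale
# (1 = "to no extent" ... 5 = "to a very great extent").
responses_by_agency = {
    "Agency A": [[3, 4, 3, 5, 4, 3, 4, 4, 3],
                 [2, 3, 3, 4, 3, 3, 2, 3, 3]],
    "Agency B": [[4, 4, 5, 4, 5, 4, 4, 5, 4]],
}

for agency, respondents in responses_by_agency.items():
    # Average each respondent's scores across the nine items ...
    respondent_means = [sum(items) / len(items) for items in respondents]
    # ... then average the respondent means to get the agency's index
    # score, which falls in the same 1-to-5 range as the item scale.
    index_score = sum(respondent_means) / len(respondent_means)
    print(f"{agency}: index score = {index_score:.2f}")

In this sketch, Agency A's index score of about 3.28 would fall in the 
"moderate extent" range under the characterization described in 
footnote 44.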

Figure 20: Agency Ranking Based on 2007 Survey Results on Use of 
Performance Information: 

[Refer to PDF for image: list] 

Rank: 1; 
Agency/Component: National Aeronautics and Space Administration. 

Rank: 2; 
Agency/Component: Nuclear Regulatory Commission. 

Rank: 3; 
Agency/Component: Department of Veterans Affairs. 

Rank: 4; 
Agency/Component: Social Security Administration. 

Rank: 5; 
Agency/Component: National Science Foundation. 

Rank: 6; 
Agency/Component: General Services Administration. 

Rank: 7; 
Agency/Component: Department of Energy. 

Rank: 8; 
Agency/Component: Department of Housing And Urban Development. 

Rank: 9; 
Agency/Component: Department of Education. 

Rank: 10; 
Agency/Component: Department of the Treasury (excluding Internal 
Revenue Service). 

Rank: 11; 
Agency/Component: Internal Revenue Service. 

Rank: 12; 
Agency/Component: Environmental Protection Agency. 

Rank: 13; 
Agency/Component: Small Business Administration. 

Rank: 14; 
Agency/Component: Centers for Medicare & Medicaid Services. 

Rank: 15; 
Agency/Component: Department of Commerce. 

Rank: 16; 
Agency/Component: Office of Personnel Management. 

Rank: 17; 
Agency/Component: Federal Aviation Administration. 

Rank: 18; 
Agency/Component: Agency for International Development. 

Rank: 19; 
Agency/Component: Department of Labor. 

Rank: 20; 
Agency/Component: Department of Agriculture (excluding Forest Service). 

Rank: 21; 
Agency/Component: Department of Homeland Security (excluding Federal 
Emergency Management Agency). 

Rank: 22; 
Agency/Component: Department of Defense. 

Rank: 23; 
Agency/Component: Department of State. 

Rank: 24; 
Agency/Component: Department of Transportation (excluding Federal 
Aviation Administration). 

Rank: 25; 
Agency/Component: Department of Justice. 

Rank: 26; 
Agency/Component: Department of Health and Human Services (excluding 
Centers for Medicare & Medicaid Services). 

Rank: 27; 
Agency/Component: Department of the Interior. 

Rank: 28; 
Agency/Component: Federal Emergency Management Agency. 

Rank: 29; 
Agency/Component: Forest Service. 

Source: GAO. 

[End of figure] 

[End of section] 

Appendix III: Timeline of Major Government Results-Oriented Management 
Reforms: 

1993: Congress enacted the Government Performance and Results Act of 
1993 (GPRA) (Public Law 103-62) to address several broad purposes, 
including improving federal program effectiveness, accountability, and 
service delivery; and enhancing congressional decision making by 
providing more objective information on program performance. GPRA 
requires executive agencies to complete strategic plans in which they 
define their missions, establish results-oriented goals, and identify 
the strategies that will be needed to achieve those goals. GPRA also 
requires executive agencies to prepare annual performance plans that 
articulate goals for the upcoming fiscal year that are aligned with 
their long-term strategic goals. Finally, GPRA requires executive 
agencies to measure performance toward the achievement of the goals in 
the annual performance plan and report annually on their progress in 
program performance reports. 

1993: The Clinton administration launched the National Performance 
Review, later renamed the National Partnership for Reinventing 
Government, which was intended to transform the federal government to 
be more results-oriented, performance-based, and customer-focused by, 
among other things, requiring consideration of employee and customer 
views and 
increasing the use of technology, especially the Internet, for delivery 
of services and information to the public. 

1994-1996: Congress passed the Government Management Reform Act of 1994 
(Public Law 103-356), Paperwork Reduction Act of 1995 (Public Law 104-
13), and Clinger-Cohen Act of 1996 (Public Law 104-106, div. D, E), 
which together provide a framework for developing and integrating 
information about agencies' missions and strategic priorities, the 
results-oriented performance goals that flow from those priorities, 
performance data to show the level of achievement of those goals, and 
the relationship of reliable and audited financial information and 
information technology investments to the achievement of those goals. 

2001: With the President's Management Agenda, the Bush administration 
attempted to resolve long-standing federal management weaknesses by 
establishing five governmentwide management priorities including 
performance-budget integration and improved financial reporting. 

2002: The Office of Management and Budget (OMB) created the Program 
Assessment Rating Tool (PART), a diagnostic tool that is intended to 
provide a consistent approach for evaluating federal programs as part 
of the executive budget formulation process. Through PART, OMB sought 
to create better ties between program performance and the allocation of 
resources. 

2003: OMB issued changes to Circular A-11 requiring agencies to submit 
"performance budgets" in lieu of annual performance plans for their 
fiscal year 2005 budget submission to OMB and Congress. 

[End of section] 

Appendix IV: Comments from the Department of Homeland Security: 

U.S. Department of Homeland Security: 
Washington, DC 20528: 

July 7, 2009: 

Ms. Bernice Steinhardt: 
Director Strategic Issues: 
United States Government Accountability Office: 
441 G Street, NW: 
Washington, DC 20548: 

Dear Ms. Steinhardt: 

Thank you for the opportunity to review and comment on the Government 
Accountability Office's (GAO's) Draft Report GAO-09-676 entitled 
Results Oriented Management: Strengthening Key Practices at FEMA and 
Interior Could Promote Greater Use of Performance Information. 

FEMA has reviewed the Draft Report and concurs with Recommendations 1, 
2 a., 2 b., and 3 a. and non-concurs with Recommendation 3 b. 

The Secretary of DHS should direct the Administrator of FEMA to: 

Recommendation 1: direct agency leadership to demonstrate their 
commitment to using performance information for decision making by 
reviewing performance results with subordinate managers on a regular 
and recurring basis and communicating decisions based on performance 
information to show that performance information is reviewed and acted 
upon. 

Response: Concur. FEMA recently formed a Performance Improvement 
Council (PIC) to address program performance issues across the agency. 
This particular working group is made up of individuals within each 
program, office, and directorate that coordinate all aspects of program 
performance management within their spheres of influence. The PIC works 
in conjunction with FEMA's Investment Working Group (IWG) who, in turn, 
makes recommendations to FEMA's Investment Review Board, which 
comprises senior leadership, on relevant courses of action based on 
input from the PIC. The IWG coordinates many of the agency's cross-
programmatic resource allocation decisions at various stages of the 
budget process. 

Recommendation 2: augment FEMA's analytic capacity to collect and 
analyze performance information by: 

a. continuing to build upon recent efforts to provide training to 
directorate and regional managers to enhance their use of performance 
information, which included topics such as strategic planning, 
developing robust performance measures, and analyzing what the 
performance data mean; and; 

b. reviewing performance information systems to address users' needs 
for integrated, timely, and relevant performance information. 

Response: Concur. The Office of Policy & Program Analysis, Program 
Analysis and Evaluation Division, chairs the FEMA PIC. The PIC is a 
recently developed cross-functional and cross-organizational group 
created to address program performance related issues across the Agency 
in order to improve programmatic outcomes. Most recently, the PIC 
provided performance measurement training to all FEMA Programs, Offices 
and Directorates to enhance that particular competence among these FEMA 
entities. Also, to address paragraph b. above--one of the elements of 
this particular working group's emphasis will be ensuring that 
information systems requirements to capture relevant data and 
information are identified and pursued. 

Recommendation 3: improve linkages among agency, program, and 
individual performance by: 

a. continuing to engage program and regional managers in effort to 
develop, and where appropriate, refine intermediate, measurable 
performance targets that cascade from agency strategic goals, and; 

Response: Concur. 

Recommendation: 

b. secondly, in the absence of a department-wide performance management 
system, developing an interim plan that allows managers to hold staff 
accountable for the accomplishment of agency strategic goals by 
providing them with a tool they can use to measure their staffs' 
performance against goals. 

Response: Non-concur. An interim plan is not advisable at this time, 
since the current FEMA Employee Performance System covers both non-
bargaining and bargaining unit employees. An interim performance 
management system would require FEMA to support
two systems until it could be bargained. 

It should be noted that during FY 2009, DHS Chief Human Capital Officer 
expects both the new Performance Management System Directive and 
Instructions to be sent for vetting to the components and the Request 
for Proposal for a new software system to be submitted for bidding. 

Thank you again for the opportunity to comment on this Draft Report and 
we look forward to working with you on future Homeland Security issues. 

Sincerely, 

Signed by: 
Jerald E. Levine: 
Director: 
Departmental GAO/OIG Liaison Office: 

[End of section] 

Appendix V: Comments from the Department of the Interior: 

Note: Page numbers in the draft report may differ from those in this 
report. 

United States Department of the Interior: 
Office Of The Secretary: 
Washington, DC 20240: 

June 29, 2009: 

Bernice Steinhardt: 
Director, Strategic Issues: 
U.S. Government Accountability Office: 
441 G Street, N.W. 
Washington, D.C 20548: 

Dear Ms. Steinhardt: 

Thank you for providing the Department of the Interior the opportunity 
to review and comment on the draft Government Accountability Office 
Report entitled, "Results-Oriented Management: Strengthening Key 
Practices at FEMA and Interior Could Promote Greater Use of Performance 
Information," (GAO-09-676). 

We agree with the GAO recommendations in principle, so that the 
Department of the Interior will be working to: 

* Demonstrate commitment throughout all levels of management and 
leadership in promoting and practicing the consideration of performance 
as a factor in decision making. 

* Review and revise its performance measures to better align them with 
agency activities and ensure the appropriate balance of output/activity 
based performance measures, that are recognizable to program managers, 
and related to longterm/outcome performance measures. 

The Department is currently in the process of revising its strategic 
plan including a detailed review of the measures in a manner that will 
be responsive to the recommendations. 

The enclosure provides additional comments on the report regarding: 

* A clarification that the bureaus set their own performance targets; 

* Concerns about equating increasingly specialized measures with 
improving credibility, i.e., the value of long-term measures; 

* The need to better balance the application of program measures 
including over 400 PART measures; and; 

* Concerns about how measures are applied through the use of performance
information. 

The time and effort put forth by the GAO team in conducting this 
analysis and preparing these observations and recommendations is much 
appreciated. We hope to pursue these recommendations with recognizable 
results in time for the next GAO report on the use of performance 
information. 

If you have any questions, or need additional information, please 
contact Richard Beck, Director, Office of Planning and Performance 
Management, at (202) 208-1818. 

Sincerely, 

Signed by: 

Pamela K. Haze: 
Deputy Assistant Secretary for Budget and Business Management: 

Enclosure: 

General Comments: 

Page 31, The role of bureaus in setting their own targets: 

* In pursuing effective performance management, it is important at the 
Department of the Interior that programs have as much ownership as 
possible of their performance measure targets and reporting. To improve 
the relevance of performance assessment to the programs being measured, 
programmatic performance targets are set by the bureaus and not by the 
Department. Therefore, the particular reference that "...Interior set 
annual goals for this measure..." should be corrected to recognize that 
the water delivery annual goals being discussed (on page 31) are set by 
the Bureau of Reclamation. 

* It is also important to note that the water delivery measure 
emphasized in the GAO report is not highlighted but rather provided for 
reference purposes in the back of the Performance and Accountability 
Report's (PAR) Part II tables. In the PAR, more emphasis is actually 
placed on the Representative Performance Measure about Reclamation's 
facility reliability, for which program managers have more direct 
influence, in the up front Management Discussion and Analysis section. 

Concerns about equating increasingly specialized measures with 
improving credibility: 

* We would like to continue to emphasize outcome measures. We are 
concerned with the depiction of long-term measures as "lacking 
credibility" and an impediment to day-to-day performance measurement. 
Also, while we agree with the disadvantages of too many measures, the 
report appears to suggest that an increased set of more specialized, 
narrower focused measures, e.g. for different sizes of historic 
structures, could improve their credibility, and use, among managers. 

The need to better balance the application of program measures: 

* As part of determining the most effective cadre of measures, it is 
important that the Department and its bureaus have the final decision 
on which measures to use. The present existence of over 400 PART 
measures needs to be reviewed for their relativity to effective program 
management. 

Concerns about how measures are applied on the use of performance 
information: 

* We also suggest that along with considering the construction of the 
measures, it may be beneficial to focus on managers' concerns about how 
the performance information would be used. We suggest that managers be 
given an opportunity to provide context along with the reported 
performance. We have been including more perspective within our 
performance reports, especially in our PAR Management Discussion and 
Analysis and Part II tables to present explanations of trends in 
performance achievement over time, in the context of what has been 
achieved, the significance relative to future achievement, and 
consideration of future management adjustments. While we believe that 
we have demonstrated some improvement in this type of performance trend 
analysis in our most recent PAR and Citizen's Report, placing 
performance in a perspective beyond just whether or not a single year's 
targets were met or not, will continue to be a focus of our performance 
assessment efforts. 

* Along with the recommendations provided by GAO, we recognize that 
senior leadership and management could help better demonstrate to 
managers that reporting the context surrounding the level of 
performance achieved is as important as the value of the target that 
has been met, not met, or exceeded. 

[End of section] 

Appendix VI: GAO Contact and Staff Acknowledgments: 

GAO Contact: 

Bernice Steinhardt at (202) 512-6806 or steinhardtb@gao.gov: 

Staff Acknowledgments: 

Elizabeth Curda and Laura Miller Craig managed this assignment. Jessica 
Nierenberg, Thomas M. Beall, Nicholas Benne, and Kate Hudson Walker 
made key contributions to all aspects of the report. David Bixler, 
Peter DelToro, Daniel Dunn, Elizabeth Erdmann, Ellen Grady, William O. 
Jenkins, Jr., Kathleen M. King, Barbara Lancaster, Walter Ochinko, 
Melanie Papasian, Mark Ramage, Jerry Sandau, William Trancucci, and 
John Vocino also provided assistance. In addition, A.J. Stephens 
provided legal support and Donna Miller developed the report's 
graphics. 

[End of section] 

Related GAO Products: 

Medicare and Medicaid Participating Facilities: CMS Needs to Reexamine 
Its Approach for Funding State Oversight of Health Care Facilities. GAO-
09-64. Washington, D.C.: February 13, 2009. 

High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 
2009. 

Disaster Recovery: FEMA's Public Assistance Grant Program Experienced 
Challenges with Gulf Coast Rebuilding. GAO-09-129. Washington, D.C.: 
December 18, 2008. 

Government Performance: 2007 Federal Managers Survey on Performance and 
Management Issues, an E-supplement to GAO-08-1026T. GAO-08-1036SP. 
Washington, D.C.: July 24, 2008. 

Government Performance: Lessons Learned for the Next Administration on 
Using Performance Information to Improve Results. GAO-08-1026T. 
Washington, D.C.: July 24, 2008. 

Nursing Home Reform: Continued Attention Is Needed to Improve Quality 
of Care in Small but Significant Share of Homes. GAO-07-794T. 
Washington, D.C.: May 2, 2007. 

President's Management Agenda: Review of OMB's Improved Financial 
Performance Scorecard Process. GAO-07-95. Washington, D.C.: November 
16, 2006. 

National Park Service: Major Operations Funding Trends and How Selected 
Park Units Responded to Those Trends for Fiscal Years 2001 through 
2005. GAO-06-431. Washington, D.C.: March 31, 2006. 

Nursing Homes: Despite Increased Oversight, Challenges Remain in 
Ensuring High-Quality Care and Resident Safety. GAO-06-117. Washington, 
D.C.: December 28, 2005. 

Program Evaluation: OMB's PART Reviews Increased Agencies' Attention to 
Improving Evidence of Program Results. GAO-06-67. Washington, D.C.: 
October 28, 2005. 

Performance Budgeting: PART Focuses Attention on Program Performance, 
but More Can Be Done to Engage Congress. GAO-06-28. Washington, D.C.: 
October 28, 2005. 

Managing for Results: Enhancing Agency Use of Performance Information 
for Management Decision Making. GAO-05-927. Washington, D.C.: September 
9, 2005. 

Human Capital: Senior Executive Performance Management Can Be 
Significantly Strengthened to Achieve Results. GAO-04-614. Washington, 
D.C.: May 26, 2004. 

Results-Oriented Government: GPRA Has Established a Solid Foundation 
for Achieving Greater Results. GAO-04-38. Washington, D.C.: March 10, 
2004. 

Highlights of a GAO Forum: High-Performing Organizations: Metrics, 
Means, and Mechanisms for Achieving High Performance in the 21st 
Century Public Management Environment. GAO-04-343SP. Washington, D.C.: 
February 13, 2004. 

Performance Budgeting: Observations on the Use of OMB's Program 
Assessment Rating Tool for the Fiscal Year 2004 Budget. GAO-04-174. 
Washington, D.C.: January 30, 2004. 

Results-Oriented Government: Using GPRA to Address 21st Century 
Challenges. GAO-03-1166T. Washington, D.C.: September 18, 2003. 

Nursing Home Quality: Prevalence of Serious Problems, While Declining, 
Reinforces Importance of Enhanced Oversight. GAO-03-561. Washington, 
D.C.: July 15, 2003. 

Results-Oriented Cultures: Implementation Steps to Assist Mergers and 
Organizational Transformations. GAO-03-669. Washington, D.C.: July 2, 
2003. 

Program Evaluation: An Evaluation Culture and Collaborative 
Partnerships Help Build Agency Capacity. GAO-03-454. Washington, D.C.: 
May 2, 2003. 

Results-Oriented Cultures: Creating a Clear Linkage between Individual 
Performance and Organizational Success. GAO-03-488. Washington, D.C.: 
March 14, 2003. 

Results-Oriented Cultures: Insights for U.S. Agencies from Other 
Countries' Performance Management Initiatives. GAO-02-862. Washington, 
D.C.: August 2, 2002. 

Results-Oriented Budget Practices in Federal Agencies. GAO-01-1084SP. 
Washington, D.C.: August 2001. 

Managing for Results: Federal Managers' Views on Key Management Issues 
Vary Widely Across Agencies. GAO-01-592. Washington, D.C.: May 25, 
2001. 

Managing for Results: Emerging Benefits From Selected Agencies' Use of 
Performance Agreements. GAO-01-115. Washington, D.C.: October 30, 2000. 

Managing for Results: Federal Managers' Views Show Need for Ensuring 
Top Leadership Skills. GAO-01-127. Washington, D.C.: October 20, 2000. 

Managing for Results: Challenges Agencies Face in Producing Credible 
Performance Information. GAO/GGD-00-52. Washington, D.C.: February 4, 
2000. 

Nursing Homes: Additional Steps Needed to Strengthen Enforcement of 
Federal Quality Standards. GAO/HEHS-99-46. Washington, D.C.: March 18, 
1999. 

The Government Performance and Results Act: 1997 Governmentwide 
Implementation Will Be Uneven. GAO/GGD-97-109. Washington, D.C.: June 
2, 1997. 

Executive Guide: Effectively Implementing the Government Performance 
and Results Act. GAO/GGD-96-118. Washington, D.C.: June 1, 1996. 

Government Reform: Goal-Setting and Performance. GAO/AIMD/GGD-95-130R. 
Washington, D.C.: March 27, 1995. 

[End of section] 

Footnotes: 

[1] Pub. L. No. 103-62, 107 Stat. 285 (Aug. 3, 1993). Congress enacted 
GPRA to address several broad purposes, including improving federal 
program effectiveness, accountability, and service delivery; and 
enhancing congressional decision making by providing more objective 
information on program performance. See appendix III for more 
information on GPRA and other federal management reforms. 

[2] OMB created PART, a diagnostic tool that was intended to provide a 
consistent approach for evaluating federal programs as part of the 
executive budget formulation process. Through PART, OMB sought to 
create better ties between program performance and the allocation of 
resources. Although PART was discontinued as of the change in 
administration in 2009, it is likely that OMB will continue some form 
of agency assessment that will require performance information. See 
appendix III for more information on PART and other federal management 
reforms. 

[3] Our surveys were completed in 1997, 2000, 2003, and 2007 and were 
designed to obtain the observations and perceptions of respondents on 
various aspects of results-oriented management topics such as the 
presence and use of performance measures, hindrances to measuring 
performance and using performance information, and agency climate. Most 
of the items on our surveys asked respondents to rate the strength of 
their perception on a 5-point extent scale ranging from "to no extent" 
at the low end of the scale to "to a very great extent" at the high 
end. For more information on our survey methodology and selected survey 
results see appendix I. 

[4] GAO, Government Performance: Lessons Learned for the Next 
Administration on Using Performance Information to Improve Results, GAO-
08-1026T (Washington, D.C.: July 24, 2008). In addition to our 
testimony, our survey results are also available: GAO, Government 
Performance: 2007 Federal Managers Survey on Performance and Management 
Issues, an E-supplement to [hyperlink, 
http://www.gao.gov/products/GAO-08-1026T], [hyperlink, 
http://www.gao.gov/products/GAO-08-1036SP] (Washington, D.C.: July 24, 
2008). 

[5] [hyperlink, http://www.gao.gov/products/GAO-08-1026T]. 

[6] [hyperlink, http://www.gao.gov/products/GAO-08-1026T]. Hereafter, 
when describing our survey results, we are reporting the percentage of 
federal managers who selected the "great" or "very great extent" 
response to survey items. 

[7] GAO, Managing for Results: Enhancing Agency Use of Performance 
Information for Management Decision Making, [hyperlink, 
http://www.gao.gov/products/GAO-05-927] (Washington, D.C.: Sept. 9, 
2005). 

[8] Disaster Relief and Recovery Supplemental Appropriations Act, 2008, 
Pub. L. No. 110-329, div. B, 122 Stat. 3574, 3592 (Sept. 30, 2008); 
Department of Homeland Security Appropriations Act, 2009, Pub. L. No. 
110-329, div. D, 122 Stat. 3574, 3670-676 (Sept. 30, 2008); Pub. L. No. 
111-5, 123 Stat. 115 (Feb. 17, 2009). The Recovery Act provided stimulus 
funding for preserving and creating jobs and promoting economic 
recovery and for investment in transportation, environmental 
protection, and other infrastructure. The Congressional Budget Office 
(CBO) estimates that the Recovery Act's combined spending and tax 
provisions will cost $787 billion, of which over $580 billion will be 
in additional spending. 

[9] Energy and Water Development and Related Agencies Appropriations 
Act, 2009, Pub. L. No. 111-8, div. C, 123 Stat. 524, 609 (Mar. 11, 
2009); Department of the Interior, Environment, and Related Agencies 
Appropriations Act, 2009, Pub. L. No. 111-8, div. E, 123 Stat. 701-725. 
Interior was funded by continuing resolutions for the 2009 fiscal year 
up until the Omnibus Appropriations Act was enacted. See Pub. L. No. 
111-6, 123 Stat. 522 (Mar. 6, 2009); Consolidated Security, Disaster 
Assistance, and Continuing Appropriations Act, 2009, Pub. L. No. 110- 
329, div. A, Continuing Appropriations Resolution, 2009, 122 Stat. 
3574, 3575, 3593 (Sept. 30, 2008). 

[10] Pub. L. No. 111-5, 123 Stat. 115, 137, 166-168 (Feb. 17, 2009). 

[11] CMS contracts with states to assess the quality of care provided 
by Medicare and Medicaid participating facilities. 

[12] GAO, Nursing Home Reform: Continued Attention Is Needed to Improve 
Quality of Care in Small but Significant Share of Homes, [hyperlink, 
http://www.gao.gov/products/GAO-07-794T] (Washington, D.C.: May 2, 
2007). For a full bibliography of our prior work on nursing homes, see 
GAO, Medicare and Medicaid Participating Facilities: CMS Needs to 
Reexamine Its Approach for Funding State Oversight of Health Care 
Facilities, [hyperlink, http://www.gao.gov/products/GAO-09-64] 
(Washington, D.C.: Feb. 13, 2009). 

[13] GAO, High-Risk Series: An Update, [hyperlink, 
http://www.gao.gov/products/GAO-09-271] (Washington, D.C.: January 
2009). 

[14] [hyperlink, http://www.gao.gov/products/GAO-05-927]. 

[15] A critical step in the Public Assistance program process is the 
completion of a project worksheet, which documents eligible work and 
estimated cost. 

[16] [hyperlink, http://www.gao.gov/products/GAO-05-927]. 

[17] GAO, Results-Oriented Cultures: Creating a Clear Linkage between 
Individual Performance and Organizational Success, [hyperlink, 
http://www.gao.gov/products/GAO-03-488] (Washington, D.C.: Mar. 14, 
2003). 

[18] FEMA's SES managers were covered by DHS' human-capital-management 
system, which required linkages between agency goals and individual 
performance objectives. 

[19] Pub. L. No. 110-329, div. D, § 522. 

[20] [hyperlink, http://www.gao.gov/products/GAO-05-927]. 

[21] In discussing hindrances to using performance information, 
managers we interviewed at Interior generally referred to performance 
information developed to meet GPRA and PART reporting requirements. 
Both GPRA and PART were intended to enhance decision making by 
requiring agencies to develop results-oriented performance goals linked 
to agency missions and report on the results achieved. See appendix III 
for more information on GPRA and PART. 

[22] [hyperlink, http://www.gao.gov/products/GAO-05-927]. 

[23] Funds are obligated when a definite commitment is made that 
creates a legal liability of the government for the payment of goods 
and services ordered or received. An agency may incur an obligation, 
for example, when it places an order, signs a contract, awards a grant, 
or purchases a service. 

[24] More recently, according to another senior headquarters official, 
Interior had discontinued the requirement to include GPRA goals in SES 
individual performance plans. However, NPS guidance concerning its SES 
members' fiscal year 2008 performance plans indicated that GPRA 
performance information would be taken into consideration in individual 
performance evaluations. 

[25] GAO, Government Reform: Goal-Setting and Performance, [hyperlink, 
http://www.gao.gov/products/AIMD/GGD-95-130R] (Washington, D.C.: Mar. 
27, 1995). 

[26] [hyperlink, http://www.gao.gov/products/GAO-05-927]. 

[27] [hyperlink, http://www.gao.gov/products/GAO-05-927]. 

[28] [hyperlink, http://www.gao.gov/products/GAO-05-927]. 

[29] GAO, Executive Guide: Effectively Implementing the Government 
Performance and Results Act, [hyperlink, 
http://www.gao.gov/products/GGD-96-118] (Washington, D.C.: June 1, 
1996). 

[30] Current OMB guidance calls for agencies to combine the annual 
performance report required by GPRA with their financial statement and 
accountability report into a Performance and Accountability Report 
(PAR). 

[31] A headquarters official indicated that in 2007, changes were made 
to PMDS to improve the data-entry process. However, none of the 
managers we interviewed in 2008 through early 2009 commented on these 
changes. 

[32] Pub. L. No. 108-173, 117 Stat. 2066 (Dec. 8, 2003). 

[33] CMS' SES are covered separately by the agency's automated program, 
the Performance Plan System (PPS). The PPS supports the guidelines and 
requirements as outlined in the Department of Health and Human 
Services' Senior Executive and Organizational Performance Management 
System. 

[34] GAO, Nursing Homes: Additional Steps Needed to Strengthen 
Enforcement of Federal Quality Standards, [hyperlink, 
http://www.gao.gov/products/GAO/HEHS-99-46] (Washington, D.C.: Mar. 18, 
1999). 

[35] GAO, Nursing Homes: Despite Increased Oversight, Challenges Remain 
in Ensuring High-Quality Care and Resident Safety, [hyperlink, 
http://www.gao.gov/products/GAO-06-117] (Washington, D.C.: Dec. 28, 
2005). 

[36] GAO, Nursing Home Quality: Prevalence of Serious Problems, While 
Declining, Reinforces Importance of Enhanced Oversight, [hyperlink, 
http://www.gao.gov/products/GAO-03-561] (Washington, D.C.: July 15, 
2003). 

[37] Our surveys were conducted in 1997, 2000, 2003, and 2007. For 
information on the design and administration of each of the four 
surveys, see GAO, The Government Performance and Results Act: 1997 
Governmentwide Implementation Will Be Uneven, [hyperlink, 
http://www.gao.gov/products/GAO/GGD-97-109] (June 2, 1997); Managing 
for Results: Federal Managers' Views on Key Management Issues Vary 
Widely Across Agencies, [hyperlink, http://www.gao.gov/products/GAO-01-
592] (May 25, 2001); Results-Oriented Government: GPRA Has Established 
a Solid Foundation for Achieving Greater Results, [hyperlink, 
http://www.gao.gov/products/GAO-04-38] (Mar. 10, 2004); and most 
recently, Government Performance: Lessons Learned for the Next 
Administration on Using Performance Information to Improve Results, 
[hyperlink, http://www.gao.gov/products/GAO-08-1026T] (Washington, 
D.C.: July 24, 2008). 

[38] GAO, Managing for Results: Enhancing Agency Use of Performance 
Information for Management Decision Making, [hyperlink, 
http://www.gao.gov/products/GAO-05-927] (Washington, D.C.: Sept. 9, 
2005). 

[39] Although the Forest Service had the lowest ranking among all 
federal agencies, our recent work at this agency had already resulted 
in recommendations to address key management issues that we will 
continue to monitor. 

[40] See GAO, Government Performance: 2007 Federal Managers Survey on 
Performance and Management Issues, an E-supplement to [hyperlink, 
http://www.gao.gov/products/GAO-08-1026T], [hyperlink, 
http://www.gao.gov/products/GAO-08-1036SP] (Washington, D.C.: July 24, 
2008), for the wording of the nine items used in computing the average 
agency change. These items were 8b, 8c, 8f, 8g, 8h, 8i, 8j, 8k, and 8l. 
We could not use the core-uses index items for this change analysis 
since it incorporated new items that had been added since the 2000 
survey. 

[41] At the time of the 2000 survey, CMS was known as the Health Care 
Financing Administration. 

[42] For all five items, fewer than a fifth of CMS managers reported 
having any type of measure to a great or very great extent. 

[43] See GAO, Managing for Results: Enhancing Agency Use of Performance 
Information for Management Decision Making, [hyperlink, 
http://www.gao.gov/products/GAO-05-927] (Sept. 9, 2005). See the online 
e-supplement, GAO, Government Performance: 2007 Federal Managers Survey 
on Performance and Management Issues, an E-supplement to [hyperlink, 
http://www.gao.gov/products/GAO-08-1026T], [hyperlink, 
http://www.gao.gov/products/GAO-08-1036SP] (Washington, D.C.: July 24, 
2008) for the wording of the items. The nine items constituting the 
index are questions 8a, 8c, 8d, 8e, 8k, 8m, 10d, 10m, and 11b. 

[44] For example, index score values between 1 and 2.99 were viewed as 
covering the two categories of "small" or "to no extent" while values 
of 3 to 3.99 fit the category "moderate extent" and values between 4 
and 5 encompassed the categories of "great" or "very great" extent. 
Agency averages ranged from 2.94 to 3.64. 

[End of section] 

GAO's Mission: 

The Government Accountability Office, the audit, evaluation and 
investigative arm of Congress, exists to support Congress in meeting 
its constitutional responsibilities and to help improve the performance 
and accountability of the federal government for the American people. 
GAO examines the use of public funds; evaluates federal programs and 
policies; and provides analyses, recommendations, and other assistance 
to help Congress make informed oversight, policy, and funding 
decisions. GAO's commitment to good government is reflected in its core 
values of accountability, integrity, and reliability. 

Obtaining Copies of GAO Reports and Testimony: 

The fastest and easiest way to obtain copies of GAO documents at no 
cost is through GAO's Web site [hyperlink, http://www.gao.gov]. Each 
weekday, GAO posts newly released reports, testimony, and 
correspondence on its Web site. To have GAO e-mail you a list of newly 
posted products every afternoon, go to [hyperlink, http://www.gao.gov] 
and select "E-mail Updates." 

Order by Phone: 

The price of each GAO publication reflects GAO’s actual cost of
production and distribution and depends on the number of pages in the
publication and whether the publication is printed in color or black and
white. Pricing and ordering information is posted on GAO’s Web site, 
[hyperlink, http://www.gao.gov/ordering.htm]. 

Place orders by calling (202) 512-6000, toll free (866) 801-7077, or
TDD (202) 512-2537. 

Orders may be paid for using American Express, Discover Card,
MasterCard, Visa, check, or money order. Call for additional 
information. 

To Report Fraud, Waste, and Abuse in Federal Programs: 

Contact: 

Web site: [hyperlink, http://www.gao.gov/fraudnet/fraudnet.htm]: 
E-mail: fraudnet@gao.gov: 
Automated answering system: (800) 424-5454 or (202) 512-7470: 

Congressional Relations: 

Ralph Dawn, Managing Director, dawnr@gao.gov: 
(202) 512-4400: 
U.S. Government Accountability Office: 
441 G Street NW, Room 7125: 
Washington, D.C. 20548: 

Public Affairs: 

Chuck Young, Managing Director, youngc1@gao.gov: 
(202) 512-4800: 
U.S. Government Accountability Office: 
441 G Street NW, Room 7149: 
Washington, D.C. 20548: