This is the accessible text file for GAO report number GAO-05-1
entitled 'Informing Our Nation: Improving How to Understand and Assess
the USA's Position and Progress' which was released on November 10,
2004.
This text file was formatted by the U.S. Government Accountability
Office (GAO) to be accessible to users with visual impairments, as part
of a longer term project to improve GAO products' accessibility. Every
attempt has been made to maintain the structural and data integrity of
the original printed product. Accessibility features, such as text
descriptions of tables, consecutively numbered footnotes placed at the
end of the file, and the text of agency comment letters, are provided
but may not exactly duplicate the presentation or format of the printed
version. The portable document format (PDF) file is an exact electronic
replica of the printed version. We welcome your feedback. Please E-mail
your comments regarding the contents or accessibility features of this
document to Webmaster@gao.gov.
This is a work of the U.S. government and is not subject to copyright
protection in the United States. It may be reproduced and distributed
in its entirety without further permission from GAO. Because this work
may contain copyrighted images or other material, permission from the
copyright holder may be necessary if you wish to reproduce this
material separately.
Report to the Chairman, Subcommittee on Science, Technology, and Space,
Committee on Commerce, Science, and Transportation, U.S. Senate:
November 2004:
INFORMING OUR NATION:
Improving How to Understand and Assess the USA's Position and Progress:
[Hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-05-1]:
GAO Highlights:
Highlights of GAO-05-1, a report to the Chairman, Subcommittee on
Science, Technology, and Space, Committee on Commerce, Science, and
Transportation, U.S. Senate:
Why GAO Did This Study:
There has been growing activity and interest in developing a system of
key national indicators that would provide an independent, trusted,
reliable, widely available, and usable source of information. Such a
system would facilitate fact-based assessments of the position and
progress of the United States, on both an absolute and relative basis.
This interest emerges from the following perspectives.
* The nation’s complex challenges and decisions require more
sophisticated information resources than are now available.
* Large investments have been made in indicators on a variety of topics
ranging from health and education to the economy and the environment
that could be aggregated and disseminated in ways to better inform the
nation.
* The United States does not have a national system that assembles key
information on economic, environmental, and social and cultural issues.
Congressional and other leaders recognized that they could benefit from
the experiences of others who have already developed and implemented
such key indicator systems. GAO was asked to conduct a study on (1) the
state of the practice in these systems in the United States and around
the world, (2) lessons learned and implications for the nation, and (3)
observations, options, and next steps to be considered if further
action is taken.
What GAO Found:
GAO studied a diverse set of key indicator systems that provide
economic, environmental, social and cultural information for local,
state, or regional jurisdictions covering about 25 percent of the U.S.
population—as well as several systems outside of the United States. GAO
found opportunities to improve how our nation understands and assesses
its position and progress.
Citizens in diverse locations and at all levels of society have key
indicator systems. Building on a wide array of topical bodies of
knowledge in areas such as the economy, education, health, and the
environment, GAO found that individuals and institutions across the
United States, other nations, and international organizations have key
indicator systems to better inform themselves. These systems focus on
providing a public good: a single, freely available source for key
indicators of a jurisdiction’s position and progress that is
disseminated to broad audiences. A broad consortium of public and
private leaders has begun to develop such a system for our nation as a
whole.
These systems are a noteworthy development with potentially broad
applicability. Although indicator systems are diverse, GAO identified
important similarities. For example, they faced common challenges in
areas such as agreeing on the types and number of indicators to include
and securing and maintaining adequate funding. Further, they showed
evidence of positive effects, such as enhancing collaboration to
address public issues, and helping to inform decision making and
improve research. Because these systems exist throughout the United
States, in other nations, and at the supranational level, the potential
for broad applicability exists, although the extent of applicability
has yet to be determined.
Congress and the nation have options to consider for further action.
GAO identified nine key design features to help guide the development
and implementation of an indicator system. For instance, these features
include establishing a clear purpose, defining target audiences and
their needs, and ensuring independence and accountability. Customized
factors will be crucial in adapting such features to any particular
level of society or location. Also, there are several alternative
options for a lead entity to initiate and sustain an indicator system:
publicly led, privately led, or a public-private partnership in either
a new or existing organization.
Observations, Options, and Next Steps:
Key indicator systems merit serious discussion at all levels of
society, including the national level, and clear implementation options
exist from which to choose. Hence, Congress and the nation should
consider how to
* improve awareness of these systems and their implications for the
nation,
* support and pursue further research,
* help to catalyze discussion on further activity at subnational
levels, and
* begin a broader dialogue on the potential for a U.S. key indicator
system.
www.gao.gov/cgi-bin/getrpt?GAO-05-1.
To view the full product, including the scope and methodology, click on
the link above. For more information, contact Christopher Hoenig at
(202) 512-6779 or hoenigc@gao.gov.
[End of section]
Contents:
Letter:
Summary:
Purpose:
Background:
Scope and Methodology:
Results in Brief:
Chapter 1: Introduction:
Indicators and Indicator Systems:
An Illustrative History of National Efforts in the United States:
Current Activities to Inform the Nation through Comprehensive Key
Indicator Systems:
Detailed Scope and Methodology:
Chapter 2: Citizens in Diverse Locations and at All Levels of Society
Have Indicator Systems:
Topical Indicator Systems in the United States Form a Vital Foundation
for Comprehensive Key Indicator Systems:
The Practice of Developing Comprehensive Key Indicator Systems Is
Active and Diverse:
Chapter 3: Comprehensive Key Indicator Systems Are a Noteworthy
Development with Potentially Broad Applicability:
A Diverse Set of Systems Faced Similar Challenges:
Comprehensive Key Indicator Systems Show Evidence of Positive Effects:
Comprehensive Key Indicator Systems Have Potentially Broad
Applicability:
Chapter 4: Congress and the Nation Have Options to Consider in Taking
Further Action:
Certain Design Features Should Guide the Development of Any System,
Including a U.S. National System:
Congress Could Choose from a Range of Organizational Options as
Starting Points for a U.S. National System:
Others Considering Comprehensive Key Indicator Systems Have Similar
Options:
Chapter 5: Observations and Next Steps:
Observations:
Next Steps:
Appendixes:
Appendix I: U.S. National Topical Indicator Systems Included in This
Study:
Appendix II: Overview of Social and Cultural Indicators:
Appendix III: Comprehensive Key Indicator Systems Included in This
Study:
Appendix IV: Timeline and Evolution of the Boston Indicators Project:
Appendix V: Timeline and Evolution of the Oregon Benchmarks:
Appendix VI: The Role of Indicators in the European Union:
Appendix VII: Selected Bibliography on Indicator Systems:
Appendix VIII: GAO Contact and Contributors:
GAO Contact:
Major Contributors:
Other Contributors:
Tables:
Table 1: Comprehensive Key Indicator Systems Selected for GAO's Study:
Table 2: Selected Topical Areas Covered by Federal Statistical
Programs:
Table 3: Selected Highlights of Indicator Traditions in the United
States:
Table 4: Comprehensive Key Indicator Systems Reviewed for This Study,
by Level of Jurisdiction:
Table 5: European Structural Indicators--Headline Indicators:
Table 6: Characteristics, Advantages, and Disadvantages of the Public
Organizational Option:
Table 7: Characteristics, Advantages, and Disadvantages of the Private
Organizational Option:
Table 8: Characteristics, Advantages, and Disadvantages of the Public-
Private Organizational Option:
Table 9: Advantages and Disadvantages of a New Versus an Existing
Organization:
Table 10: Organizational Types of the Systems Studied for Our Review:
Figures:
Figure 1: Possible Topics for a Comprehensive Key Indicator System:
Figure 2: An Economic Indicator Showing World Exports of Goods and
Services as a Percentage of World GDP, 1970-2002:
Figure 3: A Social and Cultural Indicator Showing the Percentage of
Persons Ages 16-24 Who Were Neither Enrolled in School Nor Working, by
Race/Ethnicity (Selected Years 1986-2003):
Figure 4: An Environmental Indicator Showing the Number and Percentage
of Days with an Air Quality Index (AQI) Greater Than 100, 1988-2001:
Figure 5: GPI Per Capita for Burlington, Vermont; Chittenden County,
Vermont; the State of Vermont; and the United States, 1950-2000:
Figure 6: Revisions in the Leading Index of the Business Cycle
Indicators, 1984-1997:
Figure 7: Coronary Heart Disease and Stroke Deaths, by Year, in the
United States, 1979-1998:
Figure 8: Reported Sources of Pollution That Resulted in Beach Closings
or Advisories, 2001:
Figure 9: Percentage of Children Ages 6 to 18 Who Are Overweight, by
Gender, Race, and Mexican-American Origin, Selected Years 1976-1980,
1988-1994, 1999-2000:
Figure 10: Percentage of Medicare Beneficiaries Age 65 or Older Who
Reported Having Had Problems with Access to Health Care, 1992-1996:
Figure 11: Relative Longevity of Selected Comprehensive Key Indicator
Systems in the United States and Abroad:
Figure 12: Boston's Data Items by Source:
Figure 13: Neighborhood Facts Database Sample, Denver:
Figure 14: SAVI Web Site Sample, Indianapolis:
Figure 15: The Boston Indicators Project's Interactive Web Site:
Figure 16: Number of Publicly Traded Gazelle Firms in the Silicon
Valley:
Figure 17: Students Carrying Weapons--Percentage of Students Who Carry
Weapons in Oregon:
Figure 18: Percentage of Working-Age People Who Are Currently Employed
in the United Kingdom by Region for 2000 and 2003:
Figure 19: Long-term Unemployment Rates for Men, 1999-2002:
Figure 20: Median Number of Days It Takes for Homes to Sell in a
Particular Area of Baltimore:
Figure 21: Different Indicators Used to Measure the Success of Public
Schools in Jacksonville, Florida:
Figure 22: SAVI Interactive Tools:
Figure 23: Traffic Congestion in Chicago--Actual 1996 and Projected
2030:
Figure 24: Travel Trends Placing Stress on the Chicago Regional Traffic
System:
Figure 25: Percentage of 9th Graders Reporting Use of Alcohol in the
Last 30 Days:
Figure 26: Oregon State Agencies Whose Programs Are Linked to Child
Abuse or Neglect:
Figure 27: Population Coverage of Select Comprehensive Key Indicator
Systems in the United States:
Letter November 10, 2004:
The Honorable Sam Brownback:
Chairman:
Subcommittee on Science, Technology, and Space:
Committee on Commerce, Science, and Transportation:
United States Senate:
Dear Mr. Chairman:
Since the founding of our republic, the importance of informing the
nation has been an essential component of a healthy democracy. In our
country, power resides with the people and their duly elected
representatives, and knowledge serves to both inform and constrain the
use of power. This idea is embodied in forms ranging from the decennial
census to the notion of annually reporting on the state of the union,
with its history of providing a broad, general picture of the nation's
position and progress, along with the President's agenda for the coming
year.
Our founding fathers recognized that this critical issue needed ongoing
attention. President George Washington, in his first annual message to
Congress on January 8, 1790, said, "Knowledge is in every country the
surest basis of public happiness. In one in which the measures of
government receive their impressions so immediately from the sense of
the community as in ours it is proportionably [sic] essential." Since
that time, there has been a long history--checkered by both success and
failure--of attempts to create ever more advanced ways to inform our
public dialogues and generate a context for civic choices and
democratic governance.
This bedrock principle of informing our nation and its citizens has
maintained its simple, common sense relevance for centuries. Yet, it
has also evolved and adapted over time to encompass new national and
global challenges.
At the time of our nation's founding, information was collected and
disseminated primarily through word of mouth and the printing press,
was drawn from few institutional sources, and traveled at speeds of 10
to 20 miles per hour. The availability of information was
primarily limited to elite groups, and broad general perspectives were
difficult to develop because of a dearth of factual information.
Today, information is collected and disseminated at the speed of light,
is generated in massive amounts from an array of sources, and is
available throughout the world to almost anyone. It is so diverse and
rich that general perspectives are difficult to develop because of a
surfeit of information.
Yet it is just those perspectives we now need in order to work through
the short- and long-term challenges facing our nation, particularly
when, at the federal level, the gap between public expectations and
available resources is expected to widen. There is no substitute for
being able to understand the whole (e.g., the position and progress of
the nation) in order to better assess and act on the parts (e.g., the
various key issues that we face).
The opportunity before us is to build sophisticated information
resources and comprehensive key indicator systems that aggregate vital
information across sectors, levels of society, and institutions.
These would be available to any person or institution, anywhere at any
time, and for any purpose.
They would add a key dimension to how we inform ourselves. We now have
many diverse and extensive bodies of information on issues of limited
focus (e.g., health care). But we could use comprehensive key indicator
systems on a broader array of critical issues to help generate a
broader perspective, clarify problems and opportunities, identify gaps
in what we know, set priorities, test effective solutions, and track
progress towards achieving results. For instance, across the federal
government, such systems could inform a much needed re-examination of
the base of existing programs, policies, functions, and activities.
To be a leading democracy in the information age may very well mean
producing unique public sources of objective, independent,
scientifically grounded, and widely shared quality information so that
we know where the United States stands now and how we are trending, on
both an absolute and relative basis--including comparisons with other
nations. By ensuring that the best facts are made more accessible and
usable by the many different members of our society, we increase the
probability of well-framed problems, good decisions, and effective
solutions.
The stakes are high, including considerations regarding allocations of
scarce public resources, strengthening the economy, creating jobs,
stimulating future industries, enhancing security, promoting safety,
strengthening our competitive edge, sustaining the environment,
preserving our culture, and promoting quality of life. As a result,
Congress has a crucial interest in the evolution of comprehensive key
indicator systems throughout our nation and the world.
Given the variety of activity and interest we observed at all levels of
U.S. society on this issue, this report can benefit not only those
seeking to develop a national key indicator system, but also the local
and state communities who would like to learn more, develop new
systems, or refine their existing efforts. We look forward to working
with you and other leaders in joining the effort to develop new
approaches to informing our nation that will be of truly lasting value
to the American people.
Copies of this report are being sent to appropriate congressional
committees and other interested parties in the United States and around
the world. We will also make copies available to others upon request.
This report will also be available at no charge on the GAO Web site at
[Hyperlink, http://www.gao.gov]. If you or your staff have any questions
about matters discussed in this report, please contact me at (202)
512-5500 or Christopher Hoenig, Managing Director, Strategic Issues, at
(202) 512-6779 or [Hyperlink, hoenigc@gao.gov]. Key contributors are
listed in appendix VIII.
Sincerely yours,
Signed by:
David M. Walker:
Comptroller General of the United States:
[End of section]
Summary:
Purpose:
A substantial amount of activity is taking place throughout the United
States and around the world to develop comprehensive key indicator
systems for communities, cities, states, and nations that include
essential economic, environmental, and social and cultural indicators.
These systems help people and organizations answer vital questions,
such as: How is their community, state, and/or nation as a whole doing,
in fact? How does it compare to others or to prior conditions? And how
does that information help them make better choices? Such systems can
become an essential part of civic dialogue and decision making.
Many in the United States believe that comprehensive key indicator
systems represent a significant and evolving opportunity to improve how
individuals, groups, and institutions inform themselves. This is
because they can enable assessment of the position and progress not
just of a wide range of jurisdictions throughout the country, but also
of the nation as a whole. Figure 1 illustrates the variety of topics
that might be included in such a system.
Figure 1: Possible Topics for a Comprehensive Key Indicator System:
[See PDF for image]
[End of figure]
To begin the process of considering whether or how to develop such a
system at the national level in the United States, congressional and
other leaders have an interest in better understanding the experiences
of those who have already designed and implemented comprehensive key
indicator systems. GAO was not asked to develop a set of national
indicators or conduct an assessment of the position and progress of the
United States, but rather to address the following three questions.
1. What is the state of the practice in developing and implementing
comprehensive key indicator systems in the United States and around the
world?
2. What are the lessons learned from these systems and future
implications?
3. What are some options for Congress to consider in identifying an
organization to develop and implement a national system?
Background:
An indicator is a quantitative measure that describes an economic,
environmental, or social and cultural condition over time. The
unemployment rate, the infant mortality rate, and air quality indexes
are a few examples.
An indicator system is an organized effort to assemble and disseminate
a group of indicators that together tell a story about the position and
progress of a jurisdiction or jurisdictions, such as the City of
Boston, the State of Oregon, or the United States of America. Indicator
systems collect information from suppliers (e.g., individuals who
respond to surveys or institutions that provide data they have
collected), which providers (e.g., the Census Bureau) then package into
products and services for the benefit of users (e.g., leaders,
researchers, planners, and citizens).
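To make the relationship among suppliers, providers, and users more
concrete, the following is a minimal illustrative sketch, written in
Python, of how an indicator and an indicator system might be
represented as data. The sketch is not drawn from any of the systems
GAO studied; all names and values in it are hypothetical.
    # A minimal, hypothetical sketch of an indicator and an indicator system.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Indicator:
        """A quantitative measure of a condition, tracked over time."""
        name: str                # e.g., "Unemployment rate"
        topic: str               # e.g., "economic"
        source: str              # the supplier of the underlying data
        values_by_year: Dict[int, float] = field(default_factory=dict)

    @dataclass
    class IndicatorSystem:
        """A provider that assembles indicators and packages them for users."""
        jurisdiction: str        # e.g., "City of Boston"
        indicators: List[Indicator] = field(default_factory=list)

        def report(self) -> Dict[str, Dict[int, float]]:
            # Package the assembled indicators into a simple product for users.
            return {i.name: i.values_by_year for i in self.indicators}

    # Example: a provider assembles one economic indicator for its users.
    # The values are illustrative only.
    system = IndicatorSystem("Example jurisdiction")
    system.indicators.append(
        Indicator("Unemployment rate", "economic", "Example statistical agency",
                  {2002: 5.8, 2003: 6.0, 2004: 5.5}))
    print(system.report())
In this sketch, the suppliers appear only as the source of each
indicator, the IndicatorSystem plays the role of the provider, and the
report it produces is the product delivered to users.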
Topical indicator systems involve specific or related sets of issues,
such as health, education, public safety, employment, or
transportation. They also form the foundation of information resources
for the general public, the media, professionals, researchers,
institutions, leaders, and policymakers.
Comprehensive key indicator systems pull together only the most
essential indicators on a range of economic, environmental, and social
and cultural issues, as opposed to a group of indicators on one topic.
Comprehensive systems are only as good as the topical systems they draw
from.
Both comprehensive and topical indicator systems use indicators from
public and private sources, and often disseminate this information to
diverse audiences, such as in a report or on a Web site. Ultimately,
however, comprehensive key indicator systems attempt to address
questions that topical systems (which focus on a specific issue) or
current statistical databases (which are detailed and highly technical)
cannot answer for wide and diverse audiences.
Comprehensive key indicator systems can help to identify a
jurisdiction's significant challenges and opportunities, highlight
their importance and urgency, inform choices regarding the allocation
of scarce public resources, assess whether solutions are working, and
make comparisons to other jurisdictions. They exist in a number of
countries, including Australia, Canada, Germany, and the United
Kingdom, as well as supranational entities like the European Union
(EU).[Footnote 1]
There is a long history of considering the need for a national
comprehensive key indicator system in the United States going back at
least to the 1930s. Currently, although a number of cities, states, and
regions in the United States have comprehensive key indicator systems,
there is no such system for the United States as a whole. The federal
government has, however, invested billions of dollars in a rich variety
of topical information that could underpin a national system.[Footnote
2] It also supports various efforts to enhance the availability of that
information, such as FedStats and the Statistical Abstract of the
United States.[Footnote 3]
Currently, a consortium of not-for-profit, private, and public sector
efforts is collaborating to create a comprehensive key indicator system
for the United States.[Footnote 4] This initiative, known as the Key
National Indicators Initiative (KNII), emerged after GAO--in
cooperation with the National Academies--convened a forum in February
2003.[Footnote 5] At this forum, a cross-section of leaders provided
their views on whether and how to develop such a national system and
believed that it was an important idea that should be explored
further.[Footnote 6] They also suggested that it should build on
lessons learned from other efforts both around the country and
worldwide.
The KNII has grown to include a diverse group of over 200 leaders from
government, business, research, and the nonprofit sector. This group
consists of experts as well as representatives from broad-based
institutions throughout the nation. The National Academies currently
houses a secretariat to incubate this effort. It has recently begun to
organize more formally and received initial operational funding. One of
its goals is to create and test a prototype "State of the USA" Web
site.
Scope and Methodology:
This report is a first step in examining how existing comprehensive key
indicator systems are working and their implications for the nation. It
presents information obtained from a select, but not necessarily
representative, group of 29 comprehensive key indicator systems at all
levels of society and in diverse geographic locations, as shown in
table 1. GAO interviewed representatives from each of the selected indicator
systems, as well as a range of experts in the field. In addition, GAO
conducted in-depth reviews--including interviews with officials,
stakeholders, and users--of 5 of these 29 systems: Boston, Oregon,
Germany, the United Kingdom, and the EU. GAO also studied U.S. topical
indicator systems in five areas: the business cycle, science and
engineering, health, children and families, and aging. To explore
options for Congress, GAO drew upon its professional judgment,
historical and legal analysis, fieldwork, and expert interviews.
Table 1: Comprehensive Key Indicator Systems Selected for GAO's Study:
Name of system: U.S. local/regional level: State of the Region (Southern California);
Approximate population: 17,123,000;
Approximate duration (in years): 7.
Name of system: U.S. local/regional level: Chicago Metropolis 2020;
Approximate population: 8,090,000;
Approximate duration (in years): 8.
Name of system: U.S. local/regional level: New York City Social Indicators;
Approximate population: 8,080,000;
Approximate duration (in years): 15.
Name of system: U.S. local/regional level: Index of Silicon Valley (California);
Approximate population: 2,300,000;
Approximate duration (in years): 12.
Name of system: U.S. local/regional level: King County Benchmarks (Washington);
Approximate population: 1,760,000;
Approximate duration (in years): 14.
Name of system: U.S. local/regional level: Social Assets and Vulnerabilities Indicators (Indianapolis);
Approximate population: 1,600,000;
Approximate duration (in years): 11.
Name of system: U.S. local/regional level: Indicators for Progress (Jacksonville, Fla.);
Approximate population: 1,200,000;
Approximate duration (in years): 19.
Name of system: U.S. local/regional level: Hennepin County Community Indicators (Minneapolis);
Approximate population: 1,120,000;
Approximate duration (in years): 9.
Name of system: U.S. local/regional level: Community Atlas (Tampa area, Fla.);
Approximate population: 1,070,000;
Approximate duration (in years): 7.
Name of system: U.S. local/regional level: Compass Index of Sustainability (Orange County, Fla.);
Approximate population: 965,000;
Approximate duration (in years): 12.
Name of system: U.S. local/regional level: Portland Multnomah Benchmarks;
Approximate population: 678,000;
Approximate duration (in years): 11.
Name of system: U.S. local/regional level: Baltimore's Vital Signs;
Approximate population: 640,000;
Approximate duration (in years): 4.
Name of system: U.S. local/regional level: Boston Indicators Project;
Approximate population: 590,000;
Approximate duration (in years): 7.
Name of system: U.S. local/regional level: Milwaukee Neighborhood Data Center;
Approximate population: 590,000;
Approximate duration (in years): 13.
Name of system: U.S. local/regional level: Sustainable Seattle;
Approximate population: 570,000;
Approximate duration (in years): 12.
Name of system: U.S. local/regional level: Denver Neighborhood Facts;
Approximate population: 560,000;
Approximate duration (in years): 10.
Name of system: U.S. local/regional level: Santa Cruz County Community Assessment Project;
Approximate population: 250,000;
Approximate duration (in years): 11.
Name of system: U.S. local/regional level: Benchmarking Municipal and Neighborhood Services in Worcester (Massachusetts);
Approximate population: 175,000;
Approximate duration (in years): 6.
Name of system: U.S. local/regional level: Santa Monica Sustainable City (California);
Approximate population: 84,000;
Approximate duration (in years): 10.
Name of system: U.S. local/regional level: Burlington Legacy Project (Vermont);
Approximate population: 39,000;
Approximate duration (in years): 5.
Name of system: U.S. state level: North Carolina 20/20;
Approximate population: 8,407,000;
Approximate duration (in years): 9.
Name of system: U.S. state level: Minnesota Milestones[A];
Approximate population: 5,059,000;
Approximate duration (in years): 13.
Name of system: U.S. state level: Oregon Benchmarks;
Approximate population: 3,560,000;
Approximate duration (in years): 15.
Name of system: U.S. state level: Results Iowa;
Approximate population: 2,944,000;
Approximate duration (in years): 5.
Name of system: U.S. state level: Maine's Measures of Growth;
Approximate population: 1,306,000;
Approximate duration (in years): 11.
Name of system: U.S. state level: Social Well-Being of Vermonters;
Approximate population: 619,000;
Approximate duration (in years): 11.
Name of system: National level outside the United States: German System of Social Indicators;
Approximate population: 83,000,000;
Approximate duration (in years): 30.
Name of system: National level outside the United States: United Kingdom Sustainable Development Indicators;
Approximate population: 60,000,000;
Approximate duration (in years): 5.
Name of system: Supranational level: European Structural Indicators;
Approximate population: 450,000,000;
Approximate duration (in years): 4.
Source: GAO.
Note: For more information on each of these systems, see app. III of
this report. The Web links for each of these systems can be found at
[Hyperlink, http://www.keyindicators.org].
[A] Since GAO conducted its interviews in fall 2003, Minnesota
Milestones has ceased to be an active system. State officials told us
that the Web site will be maintained but that there are no plans to
update the data in the near future.
[End of table]
GAO selected comprehensive key indicator systems that were recognized
by experts and others as being useful and accessible and that had been
in existence for more than 2 years. Also, GAO asked national associations
representing state and local governments to validate the selections.
The European examples were selected after consultation with OECD,
several European national statistical offices, and other experts. GAO
selected one system in each of the topical areas it reviewed on the
basis of experts' recommendations. GAO also conducted a literature
review. Importantly, GAO has not defined explicit, objective criteria
for the success or failure of a comprehensive key indicator system.
More research is needed in this area because so many situational,
evaluative, and contextual factors influence the determination of such
criteria.
Most of the graphics presented in this report from the indicator
systems GAO studied are included only to illustrate the types of
information and the variety of ways in which it is presented in the
reports or on the Web sites of these systems. The examples are not
intended to highlight or frame discussions of the substantive issues
they convey.
GAO did not, nor was it asked to, catalogue the full universe of the
potentially large number of topical or comprehensive key indicator
systems. Moreover, indicators are only one part of the complex
knowledge base required to inform a nation. For instance, comprehensive
key indicator systems must be supported by detailed databases for those
who want or need to conduct more extensive research or analysis. A
review of these databases and other elements that contribute to an
informed society is beyond the scope of this report.
Given the relatively small number of systems GAO studied in-depth, this
report's findings and conclusions may not be universally applicable.
GAO did not review the entire body of knowledge associated with
indicator systems in either private enterprises or government agencies
and did not perform a formal cost-benefit analysis of the systems
reviewed. Nor did GAO evaluate the federal statistical system and its
related agencies. Most of the indicator system efforts GAO studied are
not necessarily comparable in size and political-economic structure to
the United States, which potentially limits the validity of
generalizations to the U.S. national context.
To gain additional insights, GAO solicited and received comments on a
draft copy of the report from over 60 experts who possess knowledge and
experience in this field, including leaders from the statistical and
scientific communities. Sections of the report were also reviewed by
the systems GAO studied to confirm facts and figures. GAO incorporated
comments where appropriate in this final version. GAO's work was
conducted from July 2003 through September 2004 in accordance with
generally accepted government auditing standards.
Results in Brief:
GAO found that comprehensive key indicator systems are active, diverse,
and evolving. Individuals and institutions from local, state, and
regional levels across the United States--as well as some other nations
and the EU--have comprehensive key indicator systems to better inform
themselves. GAO found enough similarities in the challenges they
encountered and the positive effects they have had to view them as a
coherent, noteworthy development in governance. They also have
potentially broad applicability. Accordingly, GAO has identified key
design features and defined a set of options for Congress and the
nation to consider regarding the further development of comprehensive
key indicator systems at all levels of society, including the U.S.
national level.
State of the Practice: Citizens and Institutions in Diverse Locations
and at All Levels of Society Have Comprehensive Key Indicator Systems:
Jurisdictions throughout this country and around the world are
operating comprehensive key indicator systems and have been for years.
Many recognize that these systems could represent a significant tool to
better inform public and private debate and decision making.
Topical Systems Provide the Foundation for Comprehensive Key Indicator
Systems:
The United States has a wide variety of topical indicator systems at
the national level that provide a resource for comprehensive key
indicator systems to draw upon. The interrelationship between topical
and comprehensive key indicator systems is complementary. Topical
systems form the essential underpinning for aggregating information
into comprehensive key indicator systems. Comprehensive key indicator
systems create a broad picture for users that illuminates the relative
coverage, depth, and sophistication of topical systems. The broader
perspective that comprehensive key indicator systems provide can also
help identify new areas where topical indicators are needed.
One of the U.S. national topical indicator systems is Healthy People (a
federal effort led by the Department of Health and Human Services).
This system provides a set of national health objectives, along with
indicators to measure progress, which are revisited every 10 years. It
also highlights 10 leading health indicators, such as physical
activity, overweight and obesity, tobacco use, and substance abuse.
Since it was established in 1979, Healthy People has engaged a diverse
group of stakeholders throughout the country, including a Healthy
People Consortium. The Healthy People Consortium is a group of public
and private organizations that is dedicated to taking action to achieve
the Healthy People agenda. Further, most states have their own Healthy
People plans.
Comprehensive Key Indicator Systems Are Active, Diverse, and Evolving:
The comprehensive key indicator systems GAO studied each bring together
diverse sources of information to provide an easily accessible and
useful tool for a broad variety of audiences and uses. The Boston
Indicators Project, for example, brings together a set of indicators
from sources such as the U.S. decennial census, state and city
agencies, nonprofit organizations, and universities. It groups the
indicators into categories and has established goals in these
areas.[Footnote 7]
These systems are oriented toward both public and private choices. They
incorporate individual and institutional perspectives and address a
wide range of audiences, including business, nonprofit, government, and
media users, as well as the general public. The owner of a small
business that provides health care services, for example, might use
information from an indicator system to investigate market
opportunities in a particular geographic area or demographic group. A
foundation or nonprofit could use indicators regarding the status of
children's education, health, and family environment to inform
decisions to fund certain grant applications. Information from
comprehensive key indicator systems could be used to help government
leaders establish priorities and allocate scarce public resources.
These systems can also help individuals understand more about issues that affect
their life choices, such as how progress in community development,
public safety, and education could affect where they might want to
live.
Comprehensive Key Indicator Systems Are Oriented Primarily toward
Learning or Outcomes:
GAO found that comprehensive key indicator systems are primarily, but
not exclusively, either learning-oriented or outcome-oriented.[Footnote 8]
Some systems are oriented more toward learning and information
exchange. The indicators in these systems are primarily selected based
upon the information needs of their target audiences and are grouped
into categories without specific links to outcomes or goals.
Information is often presented on Web sites with limited commentary or
analysis of results. The Social Assets and Vulnerabilities Indicators
(SAVI) system in Indianapolis is an example of a learning-oriented
system. It collects, organizes, and presents information on "community
assets," such as schools, libraries, hospitals, and community centers.
It also includes indicators in areas like health, education, and
criminal justice that highlight "vulnerabilities," such as
neighborhoods with high crime or unemployment. Learning-oriented
systems enable citizens, researchers, and leaders to learn more about
and monitor conditions in their jurisdictions and may help inform
decision making.[Footnote 9]
Other comprehensive key indicator systems encompass an outcome-oriented
focus on societal aspirations or goals. These indicator systems are
used to monitor and encourage progress toward a vision for the
future--or in some cases a specific set of goals--which have been established
by the people and institutions within a jurisdiction. Most of the
systems GAO studied were outcome oriented. One of these, the Oregon
Benchmarks system, measures progress toward a strategic vision and
related goals for the state, known as Oregon Shines. It is organized
around three broad goals: (1) quality jobs; (2) safe, caring, and
engaged communities; and (3) healthy and sustainable surroundings, each
of which has specific objectives. Under the goal for safe, caring, and
engaged communities, for example, Oregon has a specific objective to
decrease the number of students carrying weapons, measured by the
percentage of students who report carrying them (based on a statewide
survey).
Attention to Relevant Issues, Aspirations, and Questions Is Important
in the Development and Evolution of Comprehensive Key Indicator
Systems:
GAO's work showed that an orientation toward outcomes--whether outcomes
were formative and implicit or advanced and explicit--had an important
influence on focusing and facilitating the development of the system.
Audiences are more likely to use information if they see how it is
relevant to their aspirations or interests. Therefore, outcome-oriented
systems can help create focused information for their audiences that
may enhance the use of and continuing support for these systems.
Moreover, broad discussions about strategic issues and opportunities
can help to reframe existing problems in new ways or identify important
gaps in knowledge about certain issues or populations. The notion of
progress assumes some agreement on the most important questions,
issues, or opportunities facing a jurisdiction. The civic dialogue and
processes used to reach common ground in the systems GAO studied were
often extensive, complex, and time-intensive. Such processes are a
prerequisite for initiating, and are critical in sustaining, any
comprehensive key indicator system.
Lessons Learned and Implications: Comprehensive Key Indicator Systems
Are a Noteworthy Development with Potentially Broad Applicability:
Comprehensive key indicator systems add a dimension of information
about society that is currently not available to most people. The 29
systems GAO studied showed evidence of positive effects, such as
improving decision making, enhancing collaboration on issues, and
increasing the availability of knowledge. These systems, although very
diverse, encountered similar challenges and applied many of the same
design features. The fact that GAO found systems at all levels of
society, including in other nations, demonstrates the potential for
transferability--meaning that approaches used in other jurisdictions
may be adapted and used elsewhere. Thus, the development and use of
comprehensive key indicator systems has the potential for broad
applicability throughout the United States at the subnational and
national levels.
Comprehensive Key Indicator Systems Showed Some Evidence of Positive
Effects:
GAO found that comprehensive key indicator systems showed evidence of
positive effects in four areas. They enhanced collaboration to address
public issues, provided tools to encourage progress, helped inform
decision making and improve research, and increased public knowledge
about key economic, environmental, and social and cultural issues.
These positive effects are a function of how different stakeholders use
indicators (along with other resources and information) within the
context of various political, economic, and other factors. Individuals,
the media, businesses, non-profits, interest groups, professionals, and
governments, among others, all may play a role in influencing ideas,
choices, and actions. Thus, it is difficult to attribute actions
directly to an indicator system. In several cases, these systems
generated information that appeared to spur action and produce positive
effects in the short term. It can take years, however, for an indicator
system to become a widely used and effective tool.
Enhanced Collaboration to Address Public Issues:
By revealing significant public policy problems or raising the profile
of new, divisive, or poorly understood issues, comprehensive key
indicator systems can help spur or facilitate collaboration. Focusing
attention on a particular condition may bring increased pressure to
bear on diverse parties in the public and private sectors to
collaborate on strategies for change. Providing a common source of
information also facilitates a shared understanding of existing
conditions.
The Chicago Metropolis 2020 indicator report, for example, highlighted
the region's severe traffic congestion and its effects. This report was
a key factor leading to the formation of a task force of public and
private leaders, supported by the state's governor and legislature, to
deal with transportation problems in the Chicago metropolitan region.
The task force recommended actions, now under consideration, that are
intended to transform transportation and planning agencies into a more
coherent regional system.
Provided Tools to Encourage Progress:
Users of comprehensive key indicator systems found that they provide an
effective tool for monitoring and encouraging progress toward a shared
vision or goals. Some jurisdictions used information from these systems
to assess the extent to which various parties, including government
agencies, not-for-profit organizations, and businesses, contributed to
achieving results.
For instance, the European Structural Indicators system helps officials
determine how well countries in the EU are meeting agreed-upon policy
goals that are spelled out in the Lisbon Strategy. Spotlighting each
country's progress, or lack thereof, in an annual, publicly released
report encourages each country to improve its performance, which could
then raise the overall position of the EU.[Footnote 10] When the EU
determines, based on a review of the related indicators, that a member
country has not made sufficient progress toward a particular goal, it
can recommend specific actions to help further that country's progress.
Some countries have changed their policies in response to EU
recommendations, such as Spain, which has agreed to take steps to raise
its employment rate among women.
Helped Inform Decision Making and Improved Research:
Bringing relevant information together in a single resource helps
leaders, researchers, and citizens to easily access and use it.
Therefore, comprehensive key indicator systems--if they are viewed as
credible, relevant, and legitimate--provide the capacity for many to
work from, and make choices based upon, the same source of reliable
information. This also enhances efficiency by eliminating the need for
individuals or institutions to expend additional time and resources
looking for or compiling information from disparate sources.
Researchers, for example, could more easily determine what knowledge
exists to help identify existing or new areas meriting further study.
In Indianapolis, officials from the Social Assets and Vulnerabilities
Indicators system (SAVI) provided input, based upon the system's
economic, public safety, demographic, and program indicators, on where
to locate a new Young Men's Christian Association (YMCA) facility for
the city. SAVI used its indicators to map areas of need and found that
numerous parts of the city were equally in need of better recreation
and educational facilities. That is, no one part of the city was a
clear-cut choice based on analysis of the indicators. As a result, the
YMCA decided not to construct a single new building. Instead, it
created a "YMCA Without Walls" program offering a variety of new
services throughout the city in existing facilities, such as churches,
schools, and community centers.
Increased Knowledge about Key Economic, Environmental, and Social and
Cultural Issues:
Comprehensive key indicator systems allow users to better understand
the interrelationships between issues that may not have been apparent
when viewed separately. New insights may also result from looking at
economic, environmental, and social and cultural information from
crosscutting perspectives (e.g., opportunity, equity).
Further, comprehensive key indicator systems helped expose information
or knowledge gaps about significant issues. These gaps may result from
(1) the absence of information; (2) inadequate knowledge about the
interrelationships among various indicators (e.g., the impact of
economic development on crime rates); or (3) a poor understanding of
the conditions of certain population groups. As a result, indicator
system providers and users can help spur new data collection efforts or
redirect existing efforts to reduce gaps and increase knowledge.
For example, when developing the Compass Index of Sustainability (in
Orange County, Florida), gaps were identified in knowledge about the
county's aging population. Neither government agencies nor other
organizations were collecting adequate data on the health and
well-being of aging residents. The system's report commented on these gaps,
leading county commissioners to appoint a task force. The task force
reviewed existing data collection efforts and recommended improvements
that are now underway, thereby increasing knowledge about a major
segment of the population.
System Costs Are Difficult to Quantify:
Most of the systems GAO studied are located in larger organizations or
agencies, and the costs of developing, implementing, and sustaining
them are difficult to quantify. Because the system managers were able
to borrow or leverage staff and resources from their parent
organizations, the full costs of the time and effort to develop,
implement, and sustain these systems were not captured. In most
cases, one to three persons worked on the project full-time. For
example, one person (in the city's Department of Public Works) manages
Santa Monica's Sustainable City indicator system. Further, because
these systems rely primarily on indicators or data collected by others,
the costs incurred by others to collect data generally are not
reflected as part of an indicator system's costs.
According to officials from the systems GAO reviewed, systems'
significant cost items included acquiring and managing technology,
paying staff and consultants, and printing and distributing reports.
For example, representatives of the Southern California Association of
Governments' State of the Region system said that they dedicated
approximately $200,000 for their system's 2002 annual indicators
report. Of this amount, approximately $25,000 went to printing the
reports, which were distributed to various officials, academia,
businesses, and nonprofit organizations in southern California. The
rest of the funding was for two staff members and related costs to
draft and process the report. This cost structure was for the most part
consistent with that of the other systems GAO studied. However, how
costs vary in relation to the size of the population covered by a
system has not yet been determined.
Certain Design Features Are Needed to Overcome a Range of Key
Challenges:
GAO identified a number of challenges experienced by the 29
comprehensive key indicator systems it reviewed, as well as nine common
design features they exhibited. The nature of these challenges,
as well as the ways in which the design features were applied, varied
based on factors such as the system's size, purpose, target audiences,
and the jurisdiction's political and economic structures.
The primary challenges that systems experienced included (a) gaining
and sustaining stakeholders' support, (b) securing and maintaining
adequate funding, (c) agreeing on the types and numbers of indicators
to include, (d) obtaining indicators or data for the system, and (e)
effectively leveraging information technology. Many of these challenges
are continuous and interrelated. For example, challenges in obtaining
indicators or data for the system are exacerbated when systems have
difficulty maintaining adequate funding.
To address these challenges up front and help ensure a lasting,
well-used system, GAO's work in the United States and around the world
strongly suggests that the development of a comprehensive key indicator
system at any geographic level--including a U.S. national system--would
benefit from considering and applying these nine design features. At
the outset, establishing a clear purpose and defining a target audience
and its needs are most crucial. Decisions about how to incorporate
other important features into the system's design should follow
decisions about purpose and target audience.
1. Establish a Clear Purpose and Define Target Audiences and Their
Needs:
Deciding whether the system will focus primarily on allowing users to
learn more about the conditions of their jurisdiction, or whether it
would also measure progress toward specific outcomes, is a first step
in designing a comprehensive key indicator system. Another important
factor is whether to design the system for a specifically targeted
audience, such as government policymakers, or for a wider audience,
including business leaders, researchers, not-for-profit organizations,
the media, and citizens. The media are an especially critical audience
because of the role they often play in conveying the information
presented in indicator systems to the general public.
2. Ensure Independence and Accountability:
It is important to insulate comprehensive key indicator systems from
political pressures and other sources of potential bias as much as
possible. When indicator systems are perceived as biased toward a
particular ideological or partisan perspective, the indicators are less
likely to have credibility and may lose support from a broad group of
users. Mechanisms for helping to ensure transparency and accountability
to stakeholders include demonstrating that the system's managers are
achieving the indicator system's stated aims, using scarce resources
effectively, remaining independent from political processes, and
emphasizing problem areas or opportunities for improvement.
3. Create a Broad-Based Governing Structure and Actively Involve
Stakeholders:
A comprehensive key indicator system should be governed by a structure
that includes a blend of public and private officials and represents
views from various communities.[Footnote 11] The system's governing
officials typically make decisions about how to apply and implement the
design features and set the policies for the system's staff to follow,
including what products and services will be provided. The challenge of
gaining and sustaining support is continuous, even among systems with
champions or large user bases. A governing structure representing
various interests can help ensure that the system maintains a balanced
perspective to meet diverse needs and avoid "capture" by one party or
particular interest group.
4. Secure Stable and Diversified Funding Sources:
Securing adequate funding to initiate the system and sustain it over
time is a constant challenge. One way to help ensure that funding
remains stable over time--and an important aspect of maintaining
independence of the system--is to diversify the number and types of
funding sources. GAO found that a lack of diversified funding sources
made indicator systems more vulnerable to fiscal constraints. Systems
that relied on multiple funding sources, such as government, corporate,
and non-profit foundations, could make up for reductions from one
source by turning to others.
5. Design Effective Development and Implementation Processes:
It is critical to have transparent, collaborative, and repeatable
processes in place to effectively carry out basic functions of a
comprehensive key indicator system, including, but not limited to:
* developing and modifying an organizing framework for the indicators,
* selecting and revising the indicators on an ongoing basis,
* acquiring indicators or data to compute indicators as needed,
* engaging data providers,
* assessing the quality and reliability of the indicators or data, and
* seeking and maintaining funding.
For example, many of the indicator systems GAO reviewed established
criteria for facilitating the process of selecting indicators, such as
relevance, comparability, and reliability. Selecting indicators is
particularly challenging because it involves making subjective
judgments about, and reaching agreement on, the relative importance of
issues to a jurisdiction.
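As an illustration only, the following sketch, written in Python, shows
one simple way that criteria such as relevance, comparability, and
reliability could be applied to screen candidate indicators. The
criteria names come from this report, but the candidate indicators,
ratings, threshold, and scoring approach are hypothetical and are not
drawn from any system GAO studied.
    # A hypothetical screen of candidate indicators against selection criteria.
    # Ratings (1 = low, 3 = high) reflect subjective judgments by stakeholders.
    candidates = [
        {"name": "Infant mortality rate",
         "relevance": 3, "comparability": 3, "reliability": 3},
        {"name": "Informal neighborhood optimism survey",
         "relevance": 2, "comparability": 1, "reliability": 1},
    ]

    CRITERIA = ("relevance", "comparability", "reliability")
    THRESHOLD = 2  # minimum rating required on every criterion (illustrative)

    def meets_criteria(candidate: dict) -> bool:
        # Retain an indicator only if it rates at or above the threshold
        # on every criterion.
        return all(candidate[c] >= THRESHOLD for c in CRITERIA)

    selected = [c["name"] for c in candidates if meets_criteria(c)]
    print(selected)  # ['Infant mortality rate']
A screen like this does not remove the subjective judgments involved;
it simply makes the agreed-upon criteria explicit and repeatable.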
6. Identify and Obtain Needed Indicators or Data:
Comprehensive key indicator systems often report on indicators or use
data that are originally collected by others. Identifying and gaining
access to indicators or data that are controlled by other organizations
is critical to these systems. Some systems have established formal
processes that specify how they will use the data and when and in what
form they will receive the data from providers. In addition to having
legal authority to access the information, the system should have
responsibility, including legal responsibility, for protecting the
privacy of the information when necessary.
7. Attract and Retain Staff with Appropriate Skills:
Systems cannot operate effectively on a day-to-day basis if they do not
have staff with appropriate skills and abilities. The ability to
collaborate with diverse stakeholders is a fundamental requirement.
Systems also need to involve people with a wide variety of skills and
knowledge in areas including statistics, information technology
management, and marketing. Working knowledge and experience with key
economic, environmental, and social and cultural issues are also
important.
8. Implement Marketing and Communications Strategies for Target
Audiences:
Reaching diverse audiences, including the print and electronic media,
requires multifaceted marketing and communications strategies. These
strategies spread the word about the existence and features of the
system; disseminate information on what the indicator trends are
showing; help to encourage a broader base of individuals and
organizations to make use of the system; and provide training and
assistance to users.
9. Acquire and Leverage Information Technologies:
The development of advanced information technologies (e.g., the World
Wide Web) has transformed the tools available for comprehensive key
indicator systems, although the extent to which the systems GAO
reviewed have leveraged these technologies varied. According to
many of the system managers, effectively using technology, including
the Internet, has made it possible to transfer data quickly,
disseminate it economically, and make it more widely available.
However, gaining access to new technologies can be costly and requires
staff or users to have technical expertise.
Comprehensive Key Indicator Systems Have Potentially Broad
Applicability:
Comprehensive key indicator systems exist across all levels of society,
and GAO's review of selected systems indicates that these systems have
potentially broad applicability. They exhibit similar features that can
be transferred and adapted by other systems, and have years of
experience from which to draw. Further, existing mainstream information
technologies have lowered costs of distribution and increased the
methods available to make information more accessible and usable. Other
developed nations already have comprehensive key indicator systems.
Several specific factors demonstrate the feasibility for a U.S.
national system.
Strong Foundations. Since comprehensive key indicator systems for the
most part aggregate existing indicators to enhance dissemination and
usage, a U.S. system has a large body of indicators from which to
select. An array of existing topical indicator systems are continually
evolving and developing broader conceptions of how to understand and
assess a society's position and progress.
Demonstrated Scalability and Comparability. GAO has found working
systems at all levels of society in the United States and abroad,
including neighborhoods, communities, cities, regions, states,
nations, and supranational entities. They range from small population
scales in the millions to the largest system GAO studied, the EU, at
over 450 million.[Footnote 12] Hence, a system for the U.S. population
of over 290 million is potentially feasible.
Evidence of Transferability. Elements from existing systems are being
adapted by new entities to meet specific needs and interact with one
another, especially at the local levels in the United States. For
example, the Boston Foundation has developed technology and processes
that could be used by other cities, and a group of organizations in
Dallas has developed a comprehensive key indicator system (Dallas
Indicators) that is, in part, based on the Boston Indicators Project.
Hence, there is abundant knowledge and expertise at varying scales that
could be applied, with recognition of unique factors, to a U.S.
national system.
Credible Activity. There is a significant amount of activity and
interest across the United States in further developing and sharing
information on comprehensive key indicator systems that could
contribute to and complement a national system. Moreover, the Key
National Indicators Initiative is currently in the process of planning
a national comprehensive key indicator system for the United States.
Observations and Next Steps: Congress and the Nation Have Options to
Consider in Taking Further Action:
The United States confronts profound challenges resulting from a
variety of factors, including changing security threats, dramatic
shifts in demographic patterns, increasing globalization, and the
accelerating pace of technological change. Addressing these challenges
will likely depend on information resources that better portray a broad
picture of society and its interrelationships.
However, in light of the United States' large supply of topical
indicators, a natural question is: If the nation has so much
information on so many issues, why does it need a comprehensive key
indicator system? One answer to this question is that having
information on all the parts--while important and necessary--is not a
substitute for looking at the whole, whether in life, business,
science, or governance and politics.
A National Indicator System for the United States Merits Serious
Discussion:
It appears feasible to create a comprehensive key indicator system for
the nation that provides independent, objective, and usable information
on the nation's position and progress. If designed and executed well, a
national comprehensive key indicator system could have wide impact--
that is, if American citizens, leaders, and institutions pay attention
to it, access it, and use it to inform their personal and professional
choices. Alternatively, if it is poorly planned and implemented, the
effort could absorb scarce time and resources, fail to meet
expectations, and might even make it more difficult to create such a
system in the future.
The potential positive benefits of a U.S. comprehensive key indicator
system could include the ability to:
* highlight areas in which progress has been made in improving people's
living conditions as well as areas needing new or higher levels of
public attention;
* connect debates about the relative merits of competing demands with
reliable indicators to help make choices among competing priorities and
direct resources where they have the most impact;
* provide information about the possible impact of particular
interventions and policies, thereby providing greater accountability
and learning;
* facilitate comparisons within the United States or of the nation as a
whole with other countries;
* accelerate the identification of important gaps in the nation's
knowledge about important issues and populations;
* enhance fact-based consensus on issues and aspirations, thereby
devoting more time, energy, and resources to discussing priorities and
effective solutions;
* provide more people and institutions with an accessible "window" into
the nation's critical sources of information, thereby increasing the
return on the large investments that have already been made to collect
it; and:
* at the federal level, inform a much-needed re-examination of the base
of existing programs, policies, functions, and activities as well as
the mandated creation of a governmentwide performance plan.
However, there are some pitfalls that a key national indicator system
would need to avoid. First, because there are some areas where
indicators or data may not exist (e.g., certain aspects of the
environment) or are difficult to measure (e.g., certain aspects of
culture), a key U.S. indicator set could have an implicit bias towards
areas with existing measures. It will be important for the nation to
focus on what it needs to measure, not just on what it currently
measures. Second, in the context of such a highly visible system, poor
indicator selection or lack of attention to data quality raises the
risk of misinformation or unintended consequences arising from use of
the system. Finally, exploring a broad range of creative solutions to
the problem of how to better inform the nation--including the
possibility of competing efforts--may help to encourage faster or more
robust development. A single system, if not designed and implemented
to be open and innovative, could restrain innovation.
Comprehensive Key Indicator Systems Could Help Better Inform the Nation
at Many Levels:
One distinguishing characteristic of the United States is unity built
out of diversity. This diversity finds its expressions in the multiple
levels and branches of government, the different sectors of economic
and social activity, the varied geographic regions, and the widely
ranging racial, ethnic, professional, cultural, and other communities
of interest. Accordingly, questions about a national system from a
local, state, or regional perspective might include the following: Can
it provide specific or contextual information, at an appropriate level
of disaggregation (e.g., geographic areas or population subgroups) that
helps localities, states, and regions become better informed?
Alternatively, how could a U.S. national comprehensive key indicator
system help subnational jurisdictions better understand themselves in a
national context?
A comprehensive key indicator system for the entire United States could
be designed in different ways. It could express only national-level
indicators (e.g., the average national unemployment rate) and
coordinate with subnational levels and others as they develop their own
comprehensive key indicator systems with more localized information.
Experts GAO talked with made it clear that this is an achievable aim
and would add value.
Alternatively, a national system could also include some capability for
users to get not only national-level information but also information
for geographic areas and demographic subgroups (e.g., unemployment
rates for metropolitan areas or school achievement levels for certain
population groups). Experts said that, due to availability and
comparability issues, limited progress toward such capabilities would
be possible in the short term. Much more work must be done to determine
how much flexibility in comparison and disaggregation could be built
into a single national system over time, versus what would be available
in separately managed databases.
Congress Could Choose from a Range of Organizational Options as
Starting Points for a U.S. National System:
The basic issue for Congress, or any other entity or jurisdiction
considering a comprehensive key indicator system, concerns who is to
develop, implement, and manage the system. It is important to note that
the specific organizational option Congress or any other decision maker
chooses as a starting point may be less important than ensuring that it
incorporates the nine key design features presented in this report.
GAO identified three basic organizational options for a U.S.
comprehensive key indicator system. Each option would allow for
incorporation of all or most of the nine design features but to varying
degrees: (A) a public organization, (B) a private organization, or (C)
a combination public-private organization. There are advantages and
disadvantages to each option.
Regardless of which option is chosen, the organization would need to
involve public and private individuals and institutions. Assessing the
position and progress of a market-oriented democracy like the United
States would benefit from aggregating both publicly and privately
produced information for two reasons. First, private sector providers
produce much useful information (e.g., attitudinal data on consumer
confidence). Second, much of the information collected by federal
agencies is tied directly to functional or programmatic purposes and,
therefore, is generally focused on areas where the government has
traditionally played a role. As a result, the federal government's
statistical programs could be supplemented with information collected
by others as the nation evolves and attempts to meet emerging
challenges in new ways. In addition, public and private institutions,
individuals, and a wide variety of groups have an interest in being
engaged in a national comprehensive key indicator system so that it
will meet their needs. Finally, public sector institutions that
currently provide indicators rely heavily on data collected from
private individuals or institutions. All of them have an interest in
seeing more available and accessible information in return for their
time, expense, and energy.
Option A: A Public Organization:
A national comprehensive key indicator system could be led by a federal
agency or a component of a larger agency or department. This option
would entail operating as either (1) a new organization within an
existing agency, (2) a completely new agency, or (3) an added
responsibility in the mission and activities of an existing agency. In
terms of advantages, a public organization could build upon the vast
institutional capacity and skills within the federal government.
Difficulties involved in mixing official and unofficial statistical
information would be a disadvantage for a public organization. It could
also be constrained by federal management and human capital policies.
The U.S. Census Bureau illustrates some of the main features of a
publicly led option. It is one of the main federal statistical
agencies, with an extensive statistical infrastructure and skill base.
As such, it provides an example of a potentially viable option for
housing a national system in an existing agency.
Option B: A Private Organization:
Another option would be to identify or charter a private organization
to develop and implement a national system. A private, non-profit
organization would be better suited than a for-profit organization to
develop a widely accessible, independent system. A common type of
congressionally chartered organization that would be an appropriate
venue for a national system is the federal Title 36 corporation. It
provides some degree of prestige and indirect financial benefits in
that it can receive federal funding, along with private gifts and
bequests. Federal supervision of such organizations is very limited as
these organizations are set apart from the executive and legislative
branches. In terms of advantages, a private organization would be more
adaptable and have flexibility in soliciting donations from a range of
sources and developing its management and human capital policies. A
disadvantage is that a private organization would be disconnected from
political appropriations and authorization processes, possibly making
it more difficult to encourage policymakers to accept and use the
indicator system. The National Academy of Sciences (NAS) is an example
of a Title 36 organization chartered by Congress. NAS is noted for its
reputation for providing independent, scientific information to the
nation, and provides an example of a potentially viable option to house
a national system in a private organization.
Option C: A Public-Private Organization:
Under the third option of a public-private organization, Congress would
have a great deal of flexibility in designing a unique organization and
selecting from a range of possible features. Congress would need to
decide which existing laws, such as the Privacy Act, should apply.
Advantages would include the opportunity to build on the capabilities
of the federal government while retaining the ability to more easily
adapt to changing circumstances. The mix of public and private
interests could also help balance the critical need for independence
with important connections to the political process. Of course, public-
private organizations are not immune to political pressures and would
need to build institutional processes and a culture focused on quality
and independence. Further, even if the organization is carefully
structured, there is some risk that it would overlap or compete with
existing federal functions. In designing a public-private
organization, various entities serve as possible models, including the
Smithsonian Institution (although it is not a viable option to house
such a system). The Smithsonian Institution is a hybrid organization
that is publicly supported and privately endowed, illustrating the
degree of flexibility Congress would have in establishing a public-
private partnership to house a national system.
Choosing a New or Existing Organization Carries Certain Advantages and
Disadvantages:
Compared with existing organizations, the most significant
disadvantage for a new organization is the difficulty of incubating
it--that is, getting it off to a successful start. Funding,
establishing networks internally and with key external communities,
developing operating policies and procedures, and addressing human
capital issues are all more difficult in a start-up situation. In
addition, it is more difficult to build awareness, trust, and
credibility. However, a new organization
also provides the opportunity to make a fresh start and design an
organization that suits the key design features and enhances the
likelihood that it will become a long-lasting, well-used indicator
system.
A New Public-Private Organization Could Offer Greater Flexibility to
Apply Design Features:
A new public-private organization could facilitate collaboration among
a variety of communities and combine the best features of federal
support and engagement. Congress could incorporate flexibilities by
selectively determining which federal management and human capital
policies would apply to the organization. A public-private organization
could solicit both public and private funds, or it could be designed to
coordinate the separate actions of a few leading public and private
institutions. Most of the experts GAO interviewed believed that a
public-private partnership would probably be the best venue for a
national system. However, comprehensive key indicator systems could
begin by being housed in any of the three organizational options
discussed in this report. GAO found no significant reason why any
option should be ruled out, especially as a starting point.
From a broader national perspective, other jurisdictions throughout the
United States that are considering development of a comprehensive key
indicator system have similar options from which to choose. Unique
aspects and applications of local, state, and national laws, culture,
economic conditions, and considerations about existing organizations
and operations will affect which organizational option is best suited
for a particular jurisdiction. GAO's work revealed that lasting
comprehensive key indicator systems existed in a range of
organizational formats in jurisdictions throughout the United States,
from strictly public systems, such as the Oregon Benchmarks, to those
housed in private, nonprofit organizations, such as Chicago 2020.
Next Steps for Congress and the Nation:
In addition to Congress and the executive branch at the federal level,
there are many providers and users of information in thousands of
jurisdictions who could benefit from the findings in this report.
Accordingly, GAO's suggested next steps are addressed to a broad
audience around the nation.
Encourage Awareness and Education:
Expanding efforts to make leaders, professionals, and the public more
aware of comprehensive key indicator systems and their implications
could enhance discussions and enrich considerations about their
significance and potential application. Specific actions could include
conducting briefings, workshops, or media events; convening forums or
conferences; or holding congressional hearings.
Pursue Additional Research:
As it is becoming more feasible for jurisdictions to create such
systems, more research should be encouraged. Research conducted thus
far on these systems has shown that many questions remain, such as how
much time, money, and effort are required to create them and whether
they are worth it. A common research agenda, developed among interested
parties,
would be of value. Learning more about large-scale systems, such as
those in other nations, would help inform the development of a possible
U.S. national comprehensive key indicator system.
Support Further Development of Comprehensive Key Indicator Systems:
A high degree of innovation is taking place at local levels, which can
help in building the nation's body of experience and inform
considerations at the state and national levels. One way to improve
existing systems and increase the probability of successful new ones
would be to institutionalize a national network of
practitioners and experts. The regular exchange of knowledge in such a
community of practice could reduce risks, expand opportunities, and
avoid reinventing solutions by leveraging accumulated expertise.
Widen the Dialogue on Options for a U.S. National System:
It is important to initiate a broader dialogue on the possible
development of a national comprehensive key indicator system that would
include Congress, the administration, other levels of government, and
different sectors of society. Such a dialogue should explore potential
benefits, costs, risks, and opportunities involved. Engaging interested
parties across the nation would help ensure collaboration across
boundaries, leverage existing information assets, build on existing
knowledge and experience, and position the nation to make choices about
whether and how to develop a national comprehensive key indicator
system for the United States.
[End of section]
Chapter 1: Introduction:
Difficult decisions related to societal aims, such as improving health
care, enhancing security, or sustaining the environment, require
reliable, unbiased, and useful indicators that are readily accessible
to citizens, the media, advocates, businesses, policymakers, nonprofit
leaders, researchers, and other audiences. While in many ways such
information about the world is more available today than ever before,
too often it is scattered across diverse formats and locations, which
can make it difficult to locate, use effectively, and assemble into a
general picture of a jurisdiction's position and progress. In
addition, it is
not easy to ensure that the most relevant and important information is
accessible, recognized, and used by a wide variety of people and
institutions. As a result, public and private decision making about
issues and solutions may be based on information that is limited,
fragmented, and incomplete.
One example where progress has been made is a single entry point for
federal statistical data ([Hyperlink, http://www.fedstats.gov]), which
gives access to statistics from over 100 federal agencies, available by
both state and topical area. It is a valuable resource for
professionals and those who need information on a specific topic.
However, the site does not provide access to a limited number of
indicators that have been agreed upon as important for understanding
and assessing the position and progress of the United States. Further,
it is not designed to allow a user to easily assemble indicators in
multiple topical areas at the same time, navigate easily through
different areas, or interact with the system for different purposes
(e.g., producing a report). Because the site links directly to agency
Web sites, a wide variety of formats exist and users must also navigate
within each agency's site to find desired information.
The nation's challenges at all levels demand new and more cross-sector,
cross-border responses involving many different individual and
institutional participants in U.S. society. These responses, in turn,
depend on more integrated information resources to support informed
public debate and decisions within and between different levels of
government and society. For example, individuals and institutions play
multiple roles in life (i.e., resident in a particular neighborhood and
borough in New York City, resident of the city itself, resident of the
State of New York, and citizen of the United States), illustrating one
reason why the interrelationships between indicator systems are
important.
Looking at the parts of a society is no substitute for viewing the
whole. Along these lines, there are examples of citizens, institutions,
and leaders, in both private and public roles and settings, that have
comprehensive key indicator systems. Such systems bring together a
select set of indicators that provides information conveniently in one
place on a broad range of topical areas, such as economic development
and employment, air and water quality, and public health and education.
We use the term comprehensive to denote systems that include indicators
from each of the three following domains: economic, environmental, and
social and cultural.[Footnote 13]
Organizers and users of comprehensive key indicator systems attempt to
address questions such as: What are our most significant challenges and
opportunities? What are their relative importance and urgency? Are we
making optimal choices to allocate scarce public resources, create
jobs, stimulate future industries, maintain a global competitive edge,
enhance security, sustain environmental health, and promote quality of
life considerations? Are our solutions working and compared to what?
How do we really know if they are working?
Importantly, indicator systems are oriented toward both public and
private choices; individual and institutional perspectives; business,
nonprofit, government, and media points of view; and leaders, voters,
and employees. Their intent is to improve the availability of quality
information for better decision making and problem solving. For
example, a small business owner could use such a system to investigate
market opportunities in particular geographic areas or among certain
demographic groups. A foundation might use the information on the
status of children's education, health, and family environment to make
decisions about competing grant applications. Policymakers in
government might use such information to inform priorities and allocate
scarce public resources.
Indicators and Indicator Systems:
An indicator is a quantitative measure that describes an economic,
environmental, or social and cultural condition. There are many widely
known indicators, such as the unemployment rate. Yet, there are many
more indicators that are less widely understood but of comparable
importance. For example, the number of patent applications or patents
granted in a particular industry or jurisdiction[Footnote 14] is
sometimes used to measure the degree of "inventiveness." Such an
indicator can be useful to businesses seeking to locate in places with
highly educated and creative potential employees. An indicator such as
this one could also be useful for assessing relative competitive
advantage in research and development.
The indicators related to unemployment and patent applications
illustrate another difference between indicators--direct vs. indirect
or "proxy" indicators. Experts in the field of statistics emphasize
this distinction because it highlights things that are difficult to
measure. A direct indicator measures exactly what it says it does--in
this case the unemployment rate. In contrast, an indirect indicator,
such as the number of patents, cannot directly measure inventiveness.
In fact, it may be impossible to measure such a concept directly; it
may only be approximated through a variety of quantitative proxy
measures.
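To make the distinction concrete, the following minimal sketch (in
Python) shows one common way a proxy indicator such as a patent count
might be scaled for comparison across jurisdictions of different
sizes. The jurisdiction names and figures are hypothetical, and
per-capita scaling is only one of many possible normalizations.

jurisdictions = {
    # name: (patents granted, resident population) -- hypothetical values
    "Jurisdiction A": (1200, 850000),
    "Jurisdiction B": (4500, 3900000),
}

def patents_per_100k(patents, population):
    # Scale a raw patent count to a rate per 100,000 residents so that
    # jurisdictions of different sizes can be compared.
    return patents / population * 100000

for name, (patents, population) in jurisdictions.items():
    rate = patents_per_100k(patents, population)
    print(f"{name}: {rate:.1f} patents per 100,000 residents")

Even after such scaling, the result remains an indirect measure; a
high patent rate may reflect industry mix or filing practices as much
as underlying inventiveness.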
In this report, we define "indicator systems" as systematic efforts to
institutionalize the provision of indicators through various products
and services to satisfy the needs of targeted audiences. Indicator
systems measure many things, including attributes of people,
institutions, industries, and the physical environment, among others.
In terms of management and ownership, many topical indicator systems in
the United States are primarily public in character, such as the
National Income and Product Accounts maintained by the Bureau of
Economic Analysis. Others are privately led, such as the Institute for
Social Research at the University of Michigan, which produces consumer
confidence indicators.
Indicators are based on data collected from suppliers (e.g.,
individuals and institutions that fill out surveys or census forms),
which can then be designed and packaged into products and services by
providers (e.g., the Bureau of Labor Statistics or the Conference
Board) for the benefit of various users (e.g., leaders, researchers,
planners, or voters). Audiences can use the information packaged in an
indicator system for a variety of reasons: to stimulate awareness,
increase understanding, frame points of view on issues, plan
strategically, assess progress, or make choices.
Indicator systems also vary to the degree that they focus on (1)
detailed account structures (e.g., the U.S. National Income and Product
Accounts); (2) portfolios of individual indicators; (3) single
composite indices that are constructed out of many individual
indicators (e.g., the U.S. Index of Leading Economic Indicators); or
(4) some combination of the above.
Further, indicators are only one part of the base of knowledge and
information necessary to inform a nation. They are important for
summarizing, highlighting, and synthesizing what can sometimes be
complex and bewildering information for many audiences. However, they
must be backed by more extensive databases that allow analysts to probe
more deeply into the reasons for movements in certain indicators.
Topical and Comprehensive Key Indicator Systems:
It is useful to distinguish between two types of indicator systems:
topical and comprehensive. "Topical indicator systems" consist of
indicators pertaining to a related set of issues, such as health, water
quality, education, science, technology, or transportation. For
example, a topical system in health might have related indicators like
the prevalence of certain diseases, such as cancer or heart disease;
levels of certain risk behaviors, such as cigarette smoking or drug
use; the number of citizens with access to health insurance; and the
number of doctors or hospitals available for use by citizens in a
particular jurisdiction. Topical indicator systems exist at different
geographical levels, including local, state, regional, national, and
supranational. They are a major source of information for the media,
professionals, researchers, citizens, and policymakers.
In contrast with topical systems, comprehensive key indicator systems
aggregate key economic, environmental, and social and cultural
indicators into a single system that disseminates information products
and services. Comprehensive key indicator systems are built selectively
by members of a jurisdiction from the foundation of many existing
topical indicators. Indicator systems have an institutional foundation
to sustain and improve them over time. Comprehensive key indicator
systems can make it easier to see a more complete, general picture of
the position and progress of a particular jurisdiction without
requiring the review of exhaustive detail. These comprehensive systems
also facilitate analysis and our understanding of how changes in one
domain can affect other domains. For example, public health (which
would be included in the social and cultural domain) may also be
affected by both economic and environmental factors.
Selecting the key aspects or activities of a society that are most
important to measure is a challenge for comprehensive key indicator
systems. Citizens of any jurisdiction view the world differently based
on their culture, geography, aspirations, values, and beliefs, among
other factors. Diverse perspectives and value judgments significantly
affect indicator choices and definitions, which are inherently
subjective. For example, poverty is a characteristic of society that is
frequently monitored, and it can be defined and measured in a number of
ways. The proportion of the population that is low income can be
selected as one indicator of poverty, which frames it in financial
terms. However, other possible indicators, based on nonfinancial
factors like physical, psychological, and spiritual well-being and
education levels, also could be considered as broader indicators of
poverty.
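As a simplified illustration of the financial framing described above,
the following sketch computes the proportion of a population whose
income falls below a chosen low-income threshold. Both the incomes and
the threshold are invented for illustration, and a nonfinancial
framing would require entirely different measures.

# Hypothetical household incomes, in dollars.
incomes = [14000, 22500, 31000, 9800, 55000, 18200, 72000]

# Hypothetical low-income threshold, in dollars.
low_income_threshold = 20000

# Proportion of the population below the threshold.
below = sum(1 for income in incomes if income < low_income_threshold)
poverty_rate = below / len(incomes)

print(f"Proportion of population that is low income: {poverty_rate:.1%}")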
Focus of U.S. National Topical Systems on Specific Issues:
The United States has national-level indicator systems in a variety of
topical areas, most of which are supported by the federal statistical
system. Because of the natural interrelationship between topical and
comprehensive systems, GAO included five U.S. national topical systems
in our study to provide context, including (1) the Conference Board's
Business Cycle Indicators,[Footnote 15] (2) the National Science
Foundation's Science and Engineering Indicators, (3) the Department of
Health and Human Services' Healthy People, (4) the Federal Interagency
Forum on Child and Family Statistics' America's Children: Key National
Indicators of Well-being, and (5) the Federal Interagency Forum on
Aging-Related Statistics' Older Americans: Key Indicators of Well-
being. (See app. I for further details on these systems.) These systems
and others provide a foundation for a national comprehensive key
indicator system as well as lessons learned that would be useful in
developing it. Accordingly, it is important to note the common elements
exhibited as part of the development and implementation of these
topical indicator systems.[Footnote 16] These systems have:
* originated in response to certain national challenges or concerns,
* evolved over time by expanding their scope and refocusing their
activities,
* been used in a variety of ways by the public and private sectors,
* relied heavily upon indicators from the federal statistical system,
* spurred the development of new or different indicators, and:
* enhanced approaches for collecting data.
Economic, Environmental, and Social and Cultural Domains:
The topical indicator systems we examined fell into either the
economic, environmental, or social and cultural domain. For example, at
the national level in the United States, the annual Economic Report of
the President covers several topical areas within the economic domain,
such as business, markets, finance, and employment.[Footnote 17] The
environmental domain includes areas such as natural resources and
ecosystems. The social and cultural domain includes topical areas such
as education and health care.
The following three figures illustrate some indicators that fall under
each domain. First, to illustrate the economic domain, one measure of
growing worldwide interdependence is the total share of world goods and
services that is traded. As shown in figure 2, from 1970 through 2002,
world exports increased from about 12 percent to 24 percent of world
gross domestic product (GDP). Hence, all over the world, people are
depending more and more on other nations to consume the goods they
produce and to produce the goods they, in turn, consume.
Figure 2: An Economic Indicator Showing World Exports of Goods and
Services as a Percentage of World GDP, 1970-2002:
[See PDF for image]
Note: Calculated from International Monetary Fund data.
[End of figure]
To illustrate an indicator in the social and cultural domain, one
indicator of the status of youth in the United States is a measure of
the percentage of persons ages 16 to 24 who are neither enrolled in
school nor working, as shown in figure 3. This indicator provides
information on a transition period for youth when most are finishing
their education and joining the workforce, a critical period for young
people as they are achieving their educational goals and choosing their
career paths. A breakdown of the data by race and ethnic group shows
that the percentage of youth that fall into this category of neither
being in school nor working has been consistently higher for American
Indian, Black, and Hispanic youths than for White and Asian/Pacific
Islander youths since 1986.
Figure 3: A Social and Cultural Indicator Showing the Percentage of
Persons Ages 16-24 Who Were Neither Enrolled in School Nor Working, by
Race/Ethnicity (Selected Years 1986-2003):
[See PDF for image]
Note: Data from Current Population Survey, March Supplement, selected
years 1986-2003, previously unpublished tabulation December 2003.
[End of figure]
As an example from the environmental domain, in 2003 the Environmental
Protection Agency (EPA) published a Draft Report on the Environment
2003 that covered topical areas in this domain, such as air, land, and
water.[Footnote 18] The air quality index, for example, is used for
daily reporting of air quality as related to ozone, particulate matter,
carbon monoxide, nitrogen dioxide, and sulfur dioxide. The higher the
index, the poorer the air quality. When air quality index values are
higher than 100, the air quality is deemed unhealthy for certain
sensitive groups of people. Based on EPA's air quality index data, the
percentage of days across the country on which the index exceeded 100
dropped from almost 10 percent in 1988 to 3 percent in 2001, as shown
in figure 4.
Figure 4: An Environmental Indicator Showing the Number and Percentage
of Days with an Air Quality Index (AQI) Greater Than 100, 1988-2001:
[See PDF for image]
Source: EPA.
Note: Data used to create graphic are drawn from EPA, Office of Air
Quality Planning and Standards. National Air Quality and Emissions
Trends Report, 1997. Table A-15. December 1998; EPA, Office of Air
Quality Planning and Standards; Air Trends: Metropolitan areas trends,
Table A-17, 2001; (February 25, 2003;
[Hyperlink, http://www.epa.gov/airtrends/metro.html]).
[End of figure]
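As a simplified illustration of the percentage-of-days calculation
described above, the sketch below counts how many daily air quality
index readings in a year exceed 100. The readings are randomly
generated stand-ins, not EPA monitoring data.

import random

# Hypothetical daily AQI readings for one year (not EPA data).
random.seed(0)
daily_aqi = [random.randint(20, 160) for _ in range(365)]

# Count days on which the index exceeded 100, the level deemed
# unhealthy for sensitive groups, and express that as a percentage.
days_over_100 = sum(1 for aqi in daily_aqi if aqi > 100)
percent_over_100 = days_over_100 / len(daily_aqi) * 100

print(f"Days with AQI > 100: {days_over_100} ({percent_over_100:.1f}% of days)")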
Significant national-level research has been conducted on topical
systems. For example, the National Academies, which brings together
committees of experts in areas of scientific and technological endeavor
to address critical national issues and advise the federal government
and the public, has conducted extensive research on indicator systems
in the United States and around the world. Specifically, the Academies
has done work in response to several requests from federal agencies
over the past 15 years to develop, evaluate, or propose statistics or
select indicators in fields such as the economy, health, education,
families, the environment, transportation, science, and technology.
Some indicators, however, can be considered under more than one of the
three domains. The number of housing starts, for example, could be
considered under the economic domain, but housing availability also
affects the social and cultural domain, which includes aspects of
quality of life. The health effects resulting from various
environmental conditions provide another example where the distinction
between different domains blurs. A wider perspective is also crucial in
the area of health care, which involves economic as well as social and
cultural indicators. For example, participants in a recent GAO forum on
health care observed that, although a nation's wealth is the principal
driver of health care spending, that wealth alone does not explain the
high level of spending in the United States.[Footnote 19] These
interrelationships point to one of the strengths of comprehensive key
indicator systems--they provide a tool to bring information together
more easily on an ongoing basis. This means they are especially
suitable for assessing increasingly complex, crosscutting issues that
are affected by a wide range of factors.
Comprehensive Systems' Broad Focus on Position and Progress across All
Three Domains:
A comprehensive key indicator system can be defined more specifically
as shown below.
* Comprehensive--Contains information from the three main domains:
economic, environmental, and social and cultural (note that
crosscutting categories such as sustainability do not fit neatly into
one domain). It is comprehensive in the sense that it provides broad
coverage across the three domains.
* Key--A core set of information that a group of citizens has selected
from a much larger range of possibilities. There is no "right" number
of key indicators. How jurisdictions strike the balance between
simplicity and effective coverage can differ widely. An indicator set
can include a few to hundreds of indicators, but it is not intended to
be exhaustive. Because these are a select set, they cannot provide a
full description of the position and progress of a jurisdiction but
rather focus on providing a generally accurate picture of the whole.
* Indicator--Description of an economic, environmental, or social and
cultural condition over time. These indicators can be, but are not
necessarily, tied directly to goals, formulated as objectives, or
associated with specific performance targets.
* System--The products, services, people, processes, and technologies
involved in an organizational form to sustain and adapt the set of
indicators. This refers to a larger set of civic, scientific,
technical, and other processes that involve suppliers (of data),
providers (of indicators), or users (of information).
Although comprehensive key indicator systems are functioning in the
United States at the community, local, state, and regional levels,
limited research appears to have been conducted with comprehensive key
indicator systems themselves as the focus of analysis. Appendix VII
provides a bibliography of some of the existing literature related to
topical and comprehensive key indicator systems.
Figure 1 shown earlier in the summary section of this report
illustrates how a comprehensive key indicator system might integrate
information from the three domains into a single conceptual framework.
Note that this framework also allows for crosscutting indicators that
do not easily fit into one of the three domains. Some comprehensive key
indicator systems are based primarily on broad, crosscutting conceptual
areas, such as quality of life or sustainable development. An example
of an indicator system that is tracking quality of life is the
Burlington Legacy Project of Burlington, Vermont. The Burlington Legacy
Project has calculated a single index of quality of life--referred to
as the genuine progress indicator (GPI)--that is a composite of 26
economic, environmental, and social and cultural indicators. Figure
5 shows the GPI calculated for Burlington, Vermont; Chittenden County,
Vermont; the State of Vermont; and the United States.
Figure 5: GPI Per Capita for Burlington, Vermont; Chittenden County,
Vermont; the State of Vermont; and the United States, 1950-2000:
[See PDF for image]
Note: See also Costanza, et al., "Estimates of the Genuine Progress
Indicator (GPI) for Vermont, Chittenden County, and Burlington, from
1950 to 2000," Ecological Economics.
[End of figure]
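The following sketch illustrates, in highly simplified form, the
general idea behind a GPI-style composite: monetized benefits are
added, monetized costs are subtracted, and the total is expressed per
capita. The component names, dollar values, and population are
hypothetical and do not reproduce the 26 indicators or the methodology
used in the Burlington calculations.

# Hypothetical monetized benefits, in dollars.
benefits = {
    "personal consumption (adjusted)": 900000000,
    "value of volunteer and household work": 150000000,
}

# Hypothetical monetized costs, in dollars.
costs = {
    "cost of crime": 40000000,
    "cost of air and water pollution": 60000000,
}

population = 40000  # hypothetical

# A GPI-style composite nets costs against benefits and is reported
# per capita so that jurisdictions of different sizes can be compared.
gpi_total = sum(benefits.values()) - sum(costs.values())
gpi_per_capita = gpi_total / population

print(f"GPI per capita (hypothetical): ${gpi_per_capita:,.0f}")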
Nations with Comprehensive Key Indicator Systems:
A number of countries, including Australia, Canada, and the United
Kingdom, have comprehensive key indicator systems at the national
level. Some exist at the supranational level, such as the European
Union's (EU) European Structural Indicators system.[Footnote 20]
Although we did not study the Canadian and Australian systems as part
of this review, they nonetheless illustrate how national comprehensive
key indicator systems can be organized.
Canada's Treasury Board maintains an annually updated comprehensive key
indicator system consisting of 20 indicators intended to reflect a
balance of economic, environmental, and social and cultural
conditions.[Footnote 21] This system provides a snapshot of where
Canada stands in comparison with other countries. The Treasury Board's
indicator system complements government departmental reports by giving
Canadians a broad perspective on national performance, providing a
context for assessing the performance of government programs, and
reporting on basic information to support dialogue among Canadians
about future directions in public policy. The Board grouped indicators
into the following four themes.
* Economic opportunities and innovation--real gross domestic product
per capita, real disposable income per capita, innovation, employment,
literacy and educational attainment.
* Health--life expectancy, self-rated health status, infant mortality
and healthy lifestyles.
* Environment--climate change, air quality, water quality,
biodiversity, and toxic substances and the environment.
* Strength and safety of communities--volunteerism, attitudes toward
diversity, cultural participation, political participation, and safety
and security.
Australia's comprehensive system--Measures of Australia's Progress--is
organized around four dimensions of progress with associated topical
areas. System organizers selected a variety of indicators to measure
progress in each of the topical areas. The dimensions and associated
topical areas for the 2004 report are as follows.[Footnote 22]
* Individuals--health, education and training, and work.
* Economy and economic resources--national income, financial hardship,
national wealth, housing, and productivity.
* Environment--the natural landscape, the human environment, oceans and
estuaries, and international environmental concerns.
* Living together--family, community, and social cohesion; crime; and
democracy, governance, and citizenship.
An Illustrative History of National Efforts in the United States:
A consistent message from the many experts and practitioners engaged in
this field has been to look at indicator systems from a historical
perspective. This is not only because such systems typically have
evolved over long periods, but also because some understanding of the
evolution of how U.S. citizens and organizations inform themselves
provides a basic foundation for describing comprehensive key indicator
systems. This history is intended to emphasize a few critical ideas.
First, our substantial information assets have evolved as the nation
confronted great problems or questions and needed to know more. Second,
the topical areas that resulted are the essential foundation for how
the nation informs itself. Third, since early in the 20th century, many
observers have recognized the potential value of a more comprehensive,
objective view of the United States. But it is only now, for a variety
of reasons, becoming potentially feasible to plan, design, and
implement such a resource.
National Challenges and Concerns Led to the Creation of Topical Area
Indicator Systems, Which Have Evolved Over Time:
The indicators required to inform our nation have developed over time
in response to important issues and opportunities. As national-level
indicators developed in the economic, environmental, and social and
cultural domains, each evolved with its own history and traditions. The
call for economic indicators grew out of the nation's experiences
during the Great Depression. Social upheavals after World War II and
the Great Society in the 1960s helped spark a desire for social and
cultural information. Scientific studies that raised concerns about
society's impact on the environment pointed to a need for more
information on environmental conditions. Substantial information
assets now exist in these topical areas--providing a foundation
consisting of thousands of indicators--on which we all depend for
decision making.
The U.S. federal statistical system includes indicators on many
specific topics and consists of numerous agencies and programs, each
established separately in response to different needs. The Office of
Management and Budget (OMB) has identified 70 federal agencies that
each spends at least $500,000 annually on statistical
activities.[Footnote 23] The U.S. federal statistical system is
regarded as a worldwide leader for the sheer volume, scope, and
experience it brings to developing and refining information sets in
particular domains and topical areas. Together, the output of these
agencies
constitutes the federal statistical system. Ten of these agencies are
considered by OMB to be the principal statistical agencies because they
collect, produce, and disseminate statistical information as their
primary missions, while the other agencies that produce and disseminate
statistical data do so as an ancillary part of their missions. Table 2
provides a list of topical areas selected to illustrate the variety of
subjects covered by the federal statistical system.[Footnote 24]
Table 2: Selected Topical Areas Covered by Federal Statistical
Programs:
* Agriculture;
* Food and nutrition;
* Natural resources;
* Education;
* Health;
* International trade;
* Patents and trademarks;
* Energy;
* Occupational safety and health;
* Aging;
* Children and families;
* Homeland security;
* Housing;
* Crime and justice;
* Employment;
* Job training;
* Transportation;
* Science and technology;
* Small business;
* Urban development.
Source: Office of Management and Budget.
[End of table]
Table 3 provides selected highlights of indicator traditions in the
economic, environmental, and social and cultural domains. These
highlights demonstrate three recognizable traditions in the development
of the United States' indicator systems that continue today but are now
being complemented by the development and evolution of comprehensive
systems. These national topical area indicator systems have evolved in
response to needs for new or different types of information, new
challenges, and shifting issues and priorities. They reflect an
investment of billions of dollars to create, maintain, and revise.
Table 3: Selected Highlights of Indicator Traditions in the United
States:
Tradition/domain: Economic indicators;
Illustrative examples: National Income and Product Accounts were
initially formulated to account for the flow of commodities and
services during World War II. They provide a base for key economic
indicators such as gross domestic product.
Tradition/domain: Economic indicators;
Illustrative examples: Business Cycle Indicators were created in the
1930s by the National Bureau of Economic Research and have been
compiled by the Conference Board since 1995. They were first compiled
by the U.S. Census Bureau for government agency use from 1961 to 1968
and then for public use from 1968 to 1972; the Bureau of Economic
Analysis compiled them from 1972 to 1995. The Conference Board
determines the specific data series included in the composite leading,
coincident, and lagging indicators, such as stock prices, employment,
and change in consumer prices for services, respectively.
Tradition/domain: Economic indicators;
Illustrative examples: The Employment Act of 1946[A] committed the
federal government to the goals of full employment and economic
stability. The act created the Council of Economic Advisers, which
released the first Economic Report of the President in 1947. The
Council continues to publish it to this day.
Tradition/domain: Social and cultural indicators;
Illustrative examples: The Department of Labor, Children's Bureau's
Handbook of Federal Statistics on Children,[B] published in 1913,
attempted to bring together "scattered" federal data and other
information on children's welfare. The handbook was an early effort to
develop indicators for consistent monitoring of children and health.
Tradition/domain: Social and cultural indicators;
Illustrative examples: A proposed bill called the Full Opportunity and
Social Accounting Act[C] was first introduced in 1967. Although the
bill was never passed, it called for an annual social report from the
President to Congress and helped focus a national dialogue on social
indicators.
Tradition/domain: Social and cultural indicators;
Illustrative examples: In 1969, the Department of Health, Education and
Welfare published a report on social and cultural indicators called
Toward a Social Report.[D] The report was prepared at the direction of
President Johnson who sought "ways to improve the nation's ability to
chart its social progress." In 1973, federal statistical agencies
published a report on social indicators. Subsequent reports on social
indicators were published in 1976 and 1980.
Tradition/domain: Environmental indicators;
Illustrative examples: The National Environmental Policy Act (NEPA),
[E] signed into law on January 1, 1970, requires federal agencies to
assess the impacts of their decisions on the natural environment. While
NEPA did not establish any specific indicators, it does require that
federal agencies assess major federal actions significantly affecting
the environment. NEPA also established the Council on Environmental
Quality to advise the President on environmental matters.
Tradition/domain: Environmental indicators;
Illustrative examples: During the same year, EPA was created as an
independent agency to establish and enforce federal air standards and
water pollution control laws and to monitor the environment. The Clean
Air Act of 1970[F] also was passed. These initiatives focused national
attention on indicators of environmental quality.
Tradition/domain: Environmental indicators;
Illustrative examples: The Endangered Species Act of 1973[G] suggests
indicators of species viability, such as size and geographical
distribution of species' populations and their habitats. These
indicators can be used as the basis for avoiding the extinction of
species.
Source: GAO.
[A] Pub. L. No. 79-304, 60 Stat. 23 (1946).
[B] Department of Labor, Children's Bureau, Handbook of Federal
Statistics on Children (Washington, D.C.: Government Printing Office,
1913).
[C] 90th Congress, S-843.
[D] Department of Health, Education, and Welfare, Toward a Social
Report (Washington, D.C.: 1969).
[E] 42 U.S.C. §§ 4321-4370f.
[F] 42 U.S.C. §§ 7401-7671q.
[G] 16 U.S.C. §§ 1531-1544.
[End of table]
Economic Indicator Systems:
As the Great Depression deepened in the 1930s, the United States
established mechanisms to improve the collection of indicators on
particular economic and social and cultural conditions, including
national surveys on labor and health issues. During the 1940s and early
1950s, efforts increasingly focused on economic monitoring and
reporting. Key economic indicators, such as the National Income and
Product Accounts, became regularly reported and widely referenced by
policymakers, the business community, researchers, and the
public.[Footnote 25] The United States has been refining these
indicators since the 1930s, and work continues to this day. For
example, our 1997 report on the consumer price index (CPI) identified
more frequent updating of market basket expenditure weights as a way
to significantly improve the accuracy of the index and have a positive
impact on the federal budget deficit.[Footnote 26] Based on this and
other reports, the Bureau of Labor Statistics has made important
improvements in the CPI methodology, including more frequent updating
of the market basket.
An example of a specific topical area within the economic domain is the
Business Cycle Indicators system that is currently maintained by the
Conference Board. It consists of three composite indexes--leading,
coincident, and lagging--and is a well-known tool for forecasting
economic activity.[Footnote 27] The continuity of the
system has been critical for achieving a high level of attention from
national and business leaders.
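As a generic illustration of how several component series might be
combined into a single composite, the sketch below averages
volatility-adjusted month-to-month changes of a few invented
components and cumulates the result into an index. The component names
and values are hypothetical, and the Conference Board's actual
procedures are considerably more elaborate.

components = {
    # Hypothetical monthly component series.
    "stock prices":     [100, 102, 101, 105, 107],
    "new orders":       [200, 198, 203, 207, 206],
    "building permits": [50, 51, 53, 52, 55],
}

def pct_changes(series):
    # Month-to-month percentage changes of a series.
    return [(b - a) / a * 100 for a, b in zip(series, series[1:])]

def volatility_adjusted(changes):
    # Divide each change by the component's average absolute change so
    # that more volatile components do not dominate the composite.
    scale = sum(abs(c) for c in changes) / len(changes)
    return [c / scale for c in changes]

adjusted = [volatility_adjusted(pct_changes(s)) for s in components.values()]

# Average the adjusted changes across components for each month, then
# cumulate them into an index that starts at 100.
avg_changes = [sum(month) / len(month) for month in zip(*adjusted)]
index = [100.0]
for change in avg_changes:
    index.append(index[-1] * (1 + change / 100))

print([round(value, 1) for value in index])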
Like most other U.S. economic indicators, the Business Cycle Indicators
system had its impetus in the dramatic economic transformations of the
Great Depression of the 1930s and World War II and its aftermath.
During the Great Depression, leaders were not able to adequately track
or forecast changes in the business cycle due to significant gaps in
our knowledge of the U.S. economy.
The Business Cycle Indicators system has been developed and refined
through public-private interactions over time. Business cycle indexes
have been published continuously since 1968, albeit with numerous
revisions and substitutions in response to factors such as structural
changes in the economy (for example, increased globalization) and new
understandings of how the business cycle unfolds. Initially,
work on researching what would become the Business Cycle Indicators
came not from the government but from the private sector. Specifically,
this work began during the late 1930s at the private, nonprofit
National Bureau of Economic Research (NBER). NBER initially helped to
identify the most important business issues to measure and the types of
indicators needed. By the 1960s, NBER had refined the Business Cycle
Indicators and, in 1961, the U.S. Census Bureau began to regularly
publish reports based upon the indicators for government agency use. In
1968, the U.S. Census Bureau began publishing a report on the Business
Cycle Indicators not just for government agency use, but also for
public use and did so through 1972. The Bureau of Economic Analysis
then published the indicators from 1972 to 1995, although the program
was scaled back over time. The reports also included a sizeable
chartbook containing underlying economic data, which was eventually
eliminated. By 1995, the Business Cycle Indicators had become well
established, and the federal government granted the Conference Board
exclusive rights to produce the Business Cycle Indicators, which it has
done ever since.
Figure 6 illustrates how an indicator system may change over time. This
illustration shows how two different versions of the leading index--the
old leading index (or "current leading index" in the figure) and the
"new leading index" that replaced it in late 1996--predicted different
patterns for the U.S. economy. Specifically, figure 6 compares two sets
of trends: one based on the original ("current") leading index and the
other based on recalculations using a new, revised index. For example,
the old ("current") leading index provided a "false signal" of an
oncoming recession in 1984, whereas the revised leading index ("new")
provided a much more muted signal.
Figure 6: Revisions in the Leading Index of the Business Cycle
Indicators, 1984-1997:
[See PDF for image]
Note: Data from Business Cycle Indicators, vol.1, no. 11, December
1996.
[End of figure]
Social and Cultural Indicator Systems:
The apparent success of economic indicators in contributing to
discussions and decisions about managing economic policy helped spark
interest in producing indicators on the social and cultural well-being
of the nation and increased institutional support for enhancing the
availability of information to support planning and policy making. In
the 1960s, some believed that economic indicators alone were not
adequate to monitor the dramatic social changes taking place. A
heightened focus and debate on social and cultural indicators led
certain observers to label this effort as a "social indicators
movement"--even though some attempts were made to focus on
environmental indicators as well. (See app. II for more information on
the social and cultural domain.)
There were some attempts during the 1960s to unite economic indicators
with improved social and cultural and environmental indicators in order
to provide a comprehensive view of the position and progress of the
nation. A first step to enhance social and cultural indicators and
report more comprehensively on the position of the nation as a whole
occurred in 1962 when the National Aeronautics and Space Administration
commissioned the American Academy of Arts and Sciences to explore the
potential side effects of space exploration on U.S. society. The
resulting Social Indicators report, published in 1966, found that
adequate information for assessing American life was not as widely
available as economic information was. It called for increased
collection of social and cultural statistics and recommended the
development of a system of national social accounts to help guide
policy decisions.[Footnote 28]
In 1967, several senators proposed legislation calling for the creation
of a national system of social accounting and a Council of Social
Advisers that was to have been comparable to the Council of Economic
Advisers. Hearings were conducted on a proposed bill that would have
established an annual social report similar to the Economic Report of
the President, although the bill did not pass.
In 1969, the Department of Health, Education, and Welfare--now the
Department of Health and Human Services--produced an influential
publication entitled Toward a Social Report. This report was
commissioned by presidential directive to "develop the necessary social
statistics and indicators to supplement those prepared by the Bureau of
Labor Statistics and the Council of Economic Advisers." The report
dealt with various environmental and social and cultural concerns of
American society, such as health and illness; social mobility; the
physical environment; income and poverty; public order and safety;
learning, science, and art; citizen participation; and the perceived
alienation of certain groups of citizens. The report assessed
prevailing conditions on each of these topics, concluded that
indicators on social and cultural conditions were lacking, and
recommended that the executive branch prepare a comprehensive social
report for the nation with emphasis on indicators to measure social
change that could be used in setting policy and goals.[Footnote 29]
There were several other developments in the area of social and
cultural indicators during the 1970s and 1980s. In 1972, the Social
Science Research Council--a non-profit organization--established the
Center for Coordination of Research on Social Indicators.
In 1973, 1977, and 1980, the federal government published three
reference volumes, entitled Social Indicators.[Footnote 30] These
reports presented information on important aspects of the country's
social condition along with underlying historical trends and
developments. Subject areas included population; the family; health and
nutrition; housing; the environment; transportation; public safety;
education and training; work; social security and welfare; income and
productivity; social mobility and participation; and culture, leisure,
and use of time. However, the U.S. government discontinued the Social
Indicators series after the 1980 volume. Moreover, the Center for
Coordination of Research on Social Indicators also closed. Although the
absence of these consolidated efforts creates the appearance that the
production of literature on social and cultural indicators declined,
this is difficult to substantiate. It is equally plausible that such
work simply dispersed and continued to develop within individual
topical areas in academic, governmental, and non-profit settings.
Other developments during the 1970s and 1980s included publication of a
number of works on social indicators and the launch of several periodic
sample population surveys, such as the General Social Survey and the
National Crime Victimization Survey.[Footnote 31] Research on social
and cultural indicators was also under way in other countries and
involved some international organizations. For example, building on the
work completed in the United States, researchers in Germany continued
to develop social indicators. Their work formed the basis for the
German System of Social Indicators, which has been in place for 30
years. Additionally, the Organisation for Economic Co-operation and
Development (OECD) launched a social indicators program in 1970. This
program, with the help of an international network of researchers and
national statisticians, developed a model survey and a list of social
indicators intended to provide systematic indicators for national and
comparative use. OECD's first Programme of Work on Social Indicators
was cancelled after the publication of the first (and only) edition of
the report, Living Conditions in OECD Countries in 1986.[Footnote 32]
OECD began work on its current social indicators project in 1998, which
led to the publication of a 2002 report.[Footnote 33]
Observers have proposed a number of explanations as to why national
attempts to create more integrated social and cultural reporting appear
to have declined. One factor cited was that western industrial
societies experienced an economic crisis in the early 1980s that
continued to focus attention on economic problems. Further, the large
government budget deficits that accumulated during the 1980s reduced
the funding available for social research--along with many other
domestic policy priorities. Others believe that initial expectations
about what social and cultural indicators could accomplish may have
been "oversold." These observers argued that the usefulness of the
existing social and cultural indicators had not been demonstrated to
leaders and that, therefore, the indicators were not directly used in
policy making. Further, social processes were proving to be more
complex and less clearly understood than economic ones, and there was
no theoretical framework comparable to economic theory. An additional
factor may have been that the cost of and effort associated with
collecting and analyzing social data were significant, given the
limited technology available at that time, while the benefits were
unclear.
In fact, the diversity of the ways in which social and cultural
indicators can be conceptualized continues to be a challenge. Many
topical areas that appear to reside clearly within that domain (e.g.,
social equity) turn out, upon further investigation, to be
crosscutting and can only be examined in the context of
interrelationships with the other two domains. The difficulty of work
in the social and cultural domain is accentuated by the fact that it
covers many sensitive moral, racial, or religious issues, among others.
Healthy People, led by the Department of Health and Human Services, is
a specific example of a topical indicator system currently operating in
the social and cultural domain at the U.S. national level.[Footnote 34]
Healthy People originated in the late 1970s during a movement in the
medical, scientific, and public health communities to enhance health
promotion, health protection, and disease prevention in the nation.
Specifically, its purpose is to provide a consensus set of national
objectives related to various health concerns--such as the prevalence
of cigarette smoking and related illnesses among Americans--that the
health community could agree to, obtain data on, and monitor over time.
Healthy People was envisioned as a tool for progress, with a number of
objectives established to provide consistent guidance to the process.
The Healthy People system has increasingly engaged stakeholders at the
subnational levels to assist in progress toward national health goals
and objectives. In 1987 the Healthy People Consortium--an alliance that
now consists of more than 350 organizations and 250 state and local
agencies--was created to forge a coalition that is dedicated to taking
action to achieve the Healthy People objectives, such as reducing
obesity. It facilitates broad participation in the process of
developing the national prevention agenda and engages local chapters
and their members in the provision of community and neighborhood
leadership. The National Medical Association, Wellness Councils of
America, American Hospital Association, and American Medical
Association are examples of Consortium members that use their
expertise, contacts and resources to adopt, promote, and achieve the
Healthy People agenda. The Consortium also seeks to coordinate Healthy
People with state, local, and community level initiatives. Further, 41
states and the District of Columbia have their own Healthy People
plans.
Since 1980, Healthy People has evolved into a series of 10-year
efforts. For each upcoming decade, Healthy People has established new
sets of goal statements, focus areas, and objectives that build upon
the work of the prior decades' efforts. Healthy People 2010:
Understanding and Improving Health, was issued in 2000 and continues
the tradition by setting forth two overarching goals: (1) increasing
the quality and years of healthy life and (2) eliminating health
disparities. These goals are detailed in 28 focus areas that include
467 specific objectives, along with indicators to be used in monitoring
progress.[Footnote 35]
Figure 7 provides an example of current Healthy People indicators--
rates of coronary heart disease and stroke deaths (per 100,000
people)--that measure the objective of improving cardiovascular health
and quality of life through prevention, detection, and treatment of
risk factors; identification and treatment of heart attacks and
strokes; and prevention of recurrences. It shows that the age-adjusted
death rate for heart disease (per 100,000 people) declined throughout
the 1980s and 1990s to 208 in 1998, while the rate of deaths due to
strokes declined to 60.
Figure 7: Coronary Heart Disease and Stroke Deaths, by Year, in the
United States, 1979-1998:
[See PDF for image]
Notes: Data from National Vital Statistics Systems, 1979-98. The rates
are age adjusted by the year 2000 standard population to compensate for
the relative increase in the number of older people in the United
States, who have higher rates of death from coronary heart disease and
strokes.
* Age adjusted to the year 2000 standard population:
[End of figure]
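The age adjustment described in the figure note is, in essence, a
direct standardization: age-specific rates are weighted by each age
group's share of the year 2000 standard population. The sketch below
illustrates only the general arithmetic; the age groups, weights, and
rates shown are hypothetical and do not reproduce the published
figures.
[Begin illustrative code example]
# Illustrative direct age adjustment of a death rate to a standard population.
# The age groups, standard-population weights, and age-specific rates below
# are hypothetical; they do not reproduce the published Healthy People data.

def age_adjusted_rate(age_specific_rates, standard_weights):
    """Weighted average of age-specific rates (per 100,000) using each
    age group's share of the standard population as the weight."""
    assert abs(sum(standard_weights.values()) - 1.0) < 1e-9
    return sum(age_specific_rates[g] * standard_weights[g] for g in standard_weights)

# Hypothetical age-specific heart disease death rates per 100,000 people.
rates = {"<45": 10.0, "45-64": 180.0, "65+": 1500.0}

# Hypothetical shares of each age group in the year 2000 standard population.
weights = {"<45": 0.65, "45-64": 0.22, "65+": 0.13}

print(round(age_adjusted_rate(rates, weights), 1))  # 241.1 per 100,000
[End of code example]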
Another innovation that emerged in the Healthy People 2010 report is
the identification of a smaller set of 10 "Leading Health Indicators,"
which provides a succinct, user-friendly measure of the health of the
U.S. population. These indicators are intended to increase general
public awareness and motivate action at the federal, state, and local
levels. The leading indicators include measures of:
* physical activity,
* overweight and obesity,
* tobacco use,
* substance abuse,
* responsible sexual behavior,
* mental health,
* injury and violence,
* environmental quality,
* immunization, and:
* access to health care.
Environmental Indicator Systems:
Public concerns about the quality of the environment date back to
around the turn of the 20th century but began to reach a critical mass
in the 1960s. Initially, many of these concerns centered on the effects
of pollution. In 1962, Rachel Carson published Silent Spring,
chronicling the effects of bioaccumulation.[Footnote 36] Several
reports raised similar concerns regarding the quality of the nation's
rivers, lakes, and estuaries. For example, the Potomac River was
heavily polluted, beach closures and warnings regarding shellfish
contamination were common events, and the Cuyahoga River in Ohio caught
fire. By the 1970s, the political momentum to protect the environment
and the public from the hazards of pollution led to a number of laws
and initiatives, including creating the EPA, establishing national
standards for drinking water, legislating protections for endangered
species, and enacting air and water pollution control laws.
For example, water quality is one area in which various efforts have
been undertaken to develop and implement environmental policies and
related indicators. Among these actions was the passage of the Federal
Water Pollution Control Act Amendments of 1972, which, as amended, is
commonly known as the Clean Water Act.[Footnote 37] The primary
objective of the act is to "restore and maintain the chemical,
physical, and biological integrity of the Nation's waters." Under the
act, states have primary responsibility for implementing programs to
manage water quality. In particular, state responsibilities include
establishing water quality standards to achieve designated uses (the
purposes that a given body of water is intended to serve),
assessing whether the quality of their waters meets state water quality
standards, and developing and implementing cleanup plans for waters
that do not meet standards.
Monitoring information on water quality--for example, the presence of
chemicals such as chlorine, physical characteristics such as
temperature, and biological characteristics such as the health or
abundance of fish--is the linchpin that allows states to perform their
responsibilities. States generally monitor water quality directly, but
often supplement their efforts with information collected by federal
agencies, volunteer groups, and other entities. For example, many
states use data collected by the U.S. Geological Survey (USGS), which
has a large program for monitoring water quality.
While the use of water quality data is critical to meeting the
objectives of the Clean Water Act, other organizations use water
quality data for a variety of other purposes. Federal land management
agencies (including the Department of the Interior's Fish and Wildlife
Service, National Park Service, and Bureau of Land Management and the
Department of Agriculture's Forest Service) rely upon these data to
fulfill their responsibilities to protect and restore aquatic resources
on federal lands. In addition to these federal agencies, numerous
public and private organizations at the local level rely on water
quality data to ensure that public health and environmental goals are
protected. Many agencies and organizations maintain computerized data
systems to store and manage the water quality data they or others
collect.
Perhaps the largest water quality information system is EPA's storage
and retrieval system (STORET). State, local, and federal agencies and
private entities, such as universities and volunteer monitors, enter
data into STORET. Multiple users can access, analyze, and summarize the
raw data in STORET for many purposes. Data in STORET can now be
accessed via the Internet. States turn their raw data into information
on whether their waters meet water quality standards and report this
information to the EPA biennially.[Footnote 38] EPA then compiles and
analyzes this information in the National Water Quality Inventory--the
primary report for the public about the condition of the nation's
waters--which is often used to characterize the nation's progress in
achieving the goals specified in the Clean Water Act. The report is
used as a basis for making management decisions regarding water
quality, such as how funds are to be allocated among the
states.[Footnote 39]
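To illustrate how raw monitoring data might be turned into the kind of
standards-based assessments states report, the following sketch
summarizes hypothetical samples against assumed thresholds. It does
not reflect STORET's actual data model or EPA's assessment
methodology; the stations, parameters, thresholds, and decision rule
are assumptions.
[Begin illustrative code example]
# Illustrative summary of raw water quality samples against a standard.
# This is not STORET's actual data model or EPA's assessment methodology;
# the stations, parameters, and thresholds below are hypothetical.

from collections import defaultdict

# Raw samples: (station, parameter, measured value).
samples = [
    ("Station-A", "temperature_C", 18.2),
    ("Station-A", "temperature_C", 24.9),
    ("Station-B", "chlorine_mg_L", 0.02),
    ("Station-B", "chlorine_mg_L", 0.30),
]

# Hypothetical state standards: parameter -> maximum allowed value.
standards = {"temperature_C": 22.0, "chlorine_mg_L": 0.25}

# Count exceedances by station and parameter.
exceedances = defaultdict(int)
totals = defaultdict(int)
for station, parameter, value in samples:
    totals[(station, parameter)] += 1
    if value > standards[parameter]:
        exceedances[(station, parameter)] += 1

# Flag a water body segment as "not meeting standards" if more than
# 10 percent of samples exceed the standard (an assumed decision rule).
for key in totals:
    share = exceedances[key] / totals[key]
    status = "not meeting standards" if share > 0.10 else "meeting standards"
    print(key, f"{share:.0%} exceedances ->", status)
[End of code example]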
However, the National Water Quality Inventory provides a limited
national picture of the condition of waters and watersheds in the
United States. A number of factors hinder what the National Water
Quality Inventory data can say about conditions at the national level.
Most states, territories, and tribes collect information on only a
portion of their water bodies. According to the best available data
from EPA, only about one-fifth of the nation's total rivers and stream
miles have been assessed to determine their compliance with state water
quality standards.[Footnote 40] Also, state monitoring programs,
sampling techniques, and standards differ. Inconsistencies are
compounded by the different ways that states submit data to EPA for
inclusion in the system. EPA and other agencies are in the process of
addressing the inconsistencies in the ways states monitor and assess
their waters, which hinder EPA's ability to use the National Water
Quality Inventory report to make comparisons across states.
As part of another effort, EPA collects information from 237 agencies
on beach closings and advisories through its National Health Protection
Survey of Beaches. Reporting under the survey is voluntary and data are
drawn primarily from coastal and Great Lakes beaches rather than inland
beaches, so the survey's reliability as a national indicator is
unknown. Furthermore, monitoring and reporting vary by state. EPA asks
survey respondents to identify the sources of pollution that cause
advisories or closings. Without precise information, respondents use
their best judgment to identify sources. In more than half of the
cases, the source is unknown, as shown in figure 8. The most frequently
identified source is storm water runoff, which contains harmful
contaminants such as bacteria from livestock or pet waste.
Figure 8: Reported Sources of Pollution That Resulted in Beach Closings
or Advisories, 2001:
[See PDF for image]
Note: Data from EPA's Beach Watch Program: 2001 Swimming Season, May
2002.
[End of figure]
Recognition of the Need for Comprehensive Approaches in the United
States:
Although there have been attempts to comprehensively integrate
national-level indicators, no large-scale public effort has endured.
Attempts date back to the early 20th century, when President
Herbert Hoover established the Research Committee on Social Trends to
bring together comprehensive information on the socioeconomic condition
of the country. The Committee's 1933 report, Recent Social Trends in
the United States, addressed many aspects of society, including the
environment, demographics, health, education, recreation, religion,
urban and rural life, the family, labor, crime, and the arts.[Footnote
41] This effort also tried to analyze the interrelationships between
trends to understand the position of the country as a whole; however,
it was never repeated.
Today, efforts are underway to discuss and report on the position and
progress of the nation as a whole, but they have not taken on the
character of a comprehensive key national indicator system. These
efforts are attempting to better organize and enhance the visibility of
the indicators collected or funded by the federal government. However,
they do not integrate private sector indicators, which would allow
public and private sector leaders to rely on the same information and
could potentially increase efficiency of access and use. Examples of
ongoing federal efforts include the following.
* The annual State of the Union message describes the position and
progress of the nation--along with policy priorities for the coming
year--from the perspective of the current administration.[Footnote 42]
* Fedstats is an online effort that provides links to a variety of
statistics from federal agencies.[Footnote 43]
* Online briefing rooms at the White House Web site provide selected
statistics.[Footnote 44]
* The federal government has published The Statistical Abstract of the
United States since the 1870s. This publication contains time series of
estimates for various economic and demographic indicators at the
national level.[Footnote 45]
* The Interagency Council on Federal Statistics--under the leadership
of OMB--exists to enhance coordination and collaboration among federal
agencies that collect and disseminate indicators.
In addition to the recognition of potential value at the national
level, comprehensive key indicator systems have emerged and become
sustainable at much smaller scales in the "laboratories of democracy"
at the local, state, and regional levels.
Current Activities to Inform the Nation through Comprehensive Key
Indicator Systems:
Before moving to a more detailed analysis of the state of the practice,
it is worth noting the current level of activity regarding the
development and interaction of comprehensive key indicator systems.
Broadly speaking, the United States appears to be building a solid
foundation at local levels, with less diversity and activity as one
moves to the state or regional levels. The United States does not have
a national system that assembles key economic, environmental, and
social and cultural indicators.
Activities at the Subnational Level:
Networks of communication and knowledge sharing on comprehensive key
indicator systems exist at the local levels, especially communities and
neighborhoods. The Urban Institute's National Neighborhood Indicators
Partnership, the Community Indicators Consortium, and the Alliance for
Regional Stewardship are good examples of efforts to communicate and
share knowledge. (See app. III for a list of and detailed information
on the comprehensive key indicator systems that we studied.)
Numerous U.S. cities also have comprehensive key indicator systems. It
appears that there are significant opportunities to benefit both
established and newer efforts by sharing knowledge, best practices, and
research results.
Activities at the U.S. National Level:
A number of national leaders and experts have concluded that,
particularly in light of the long-term, crosscutting challenges facing
the nation, the United States should explore establishing a
comprehensive key national indicator system that incorporates
information from the economic, environmental, and social and cultural
domains. There is evidence that the fiscal and policy issues that each
level of government in our system faces are increasingly intertwined.
For example, the retirement of the baby boom generation and rising
health care costs threaten to overwhelm our nation's finances. To
effectively address emerging challenges, many believe the nation needs
to embark upon strategies that are affordable and sustainable and that
consider how best to coordinate and integrate the capabilities of all
levels of government, as well as the private sector, community groups,
and individuals.
Further, a number of trends--including security and preparedness,
globalization, a shift to knowledge-based economies, advances in
science and technology, and an aging population, along with the long-
range fiscal challenges facing the government--drive the need for
transformation. In most federal mission areas--such as homeland
security, affordable housing, and higher education assistance--
national goals are increasingly achieved through the participation of
many organizations. State and local governments, nonprofit
institutions, and private corporations all play vital roles in
formulating and implementing national initiatives. Promoting effective
partnerships with third parties will prove increasingly vital to
achieving national objectives.
Significant efforts have begun to explore ways to move forward in
researching and developing a comprehensive key national indicator
system. GAO, in cooperation with the National Academies, convened a
Forum on Key National Indicators in Washington, D.C., in February 2003
to discuss whether and how to develop a key national indicator system
for the United States.[Footnote 46] Participants included leaders from
the accountability, business, education, not-for-profit, government,
labor, media, minority, scientific, and statistics communities. These
participants were asked to respond to the following questions: How are
the world's leading democracies measuring national performance? What
might the United States do to improve its approach and why? What are
important areas to measure in assessing U.S. national performance? How
might new U.S. approaches be led and implemented? After discussing
these questions at length, participants pointed out the following four
main messages.
* Developing key national indicators for the United States is
important. While there are a variety of indicator efforts in the
nation, there is no generally accepted, comprehensive key indicator
system for the nation as a whole. Participants generally believed that
developing key national indicators is important for taking a more
comprehensive view of the nation's position and progress, both on an
absolute and relative basis. Several models were discussed that offer
lessons for developing a national indicator system, including existing
national topical indicator systems on aging, children, economics, and
health. Participants emphasized that the purpose of measurement, the
process of deciding what to measure, and the process of determining the
audiences are as critical as choosing what and how to measure.
* A broad range of information areas is considered significant. The
range of information assets covers the economic, social and cultural,
and environmental domains. Participants said that a first step is to
assemble "core" indicators from these existing sources. A straw
proposal--"USA Series 0.5"--was presented as a starting point for
building what might eventually become a broadly supported indicator
set. The "USA Series 0.5" included 11 key information areas: community,
crime, ecology, education, governance, health, the macro economy,
security, social support, sustainability, and transparency. In reacting
to "USA Series 0.5," participants suggested numerous refinements and
identified four additional information areas: communications,
diversity, individual values, and socioeconomic mobility.
* A rich history of indicator systems warrants collective research.
There is a long history of efforts throughout the world by leading
democracies to develop and sustain indicator systems. A distinction was
made between comprehensive indicator systems and efforts that focus
on specific topical areas or issues. Research on what can be learned
from these systems is essential for deriving useful information for a
possible U.S. national system. Although comprehensive efforts are
currently under way in other democracies (e.g., Australia and Canada)
as well as in the United States at the regional, state, and local levels,
it appears that few common sources of broad research exist to
facilitate knowledge sharing on comprehensive indicator efforts.
* A U.S. national initiative must build on past lessons and current
efforts. Developing a U.S. national comprehensive key indicator system
requires applying lessons from past efforts and engaging with many
existing efforts. A U.S. system must be flexible and evolve to respond
to economic, social and cultural, and environmental change. A
comprehensive key indicator system for the United States must be of
high quality, focused, independent, and have a definable audience. It
should incorporate diverse perspectives and would require adequate
funding, both in terms of its development and sustainability.
With nearly unanimous endorsement from forum participants of the
importance of pursuing the idea further, an informal National
Coordinating Committee (NCC) of public and private sector institutions
was constituted after the forum. Since then, the Key National
Indicators Initiative (KNII) has grown to include a large, diverse
group of leaders from the government, business, research, and not-for-
profit sectors. In December 2003, an important development occurred
when the National Academies--an independent organization chartered by
Congress to bring together experts in the areas of science and
technology to conduct critical research--became the secretariat to help
incubate the KNII.[Footnote 47]
During 2003 and 2004, an NCC steering committee and subcommittees were
created to continue the KNII discussion and refine the approach to be
taken. The KNII has created a Web site to serve as a clearinghouse for
knowledge on existing efforts under way throughout the country and the
world to help inform and underpin the initiative
([Hyperlink, http://www.keyindicators.org]). The steering committee
meets regularly and has continued to reach out to identify additional
partners in the planning process. These efforts have helped to build
the number of participants to over 200 diverse individuals and
organizations, including leaders in substantive fields (e.g., economics
and the environment) and representatives of major organizations (e.g.,
professional associations, government agencies, and public interest
groups). The NCC developed an action plan and timetable to achieve its
stated aims, which revolve around the creation of a prototype "State of
the USA" Web site to test dissemination of a comprehensive, user-
friendly, and fact-based database. It produced a draft conceptual
framework for the first phase of indicator development, a draft
communications plan to reach target audiences, and a grant proposal.
The NCC is in the process of securing private and/or public financing
to help institutionalize, sustain, and expand the initiative, and
received its first major funding in August 2004.
Increasing International Interest in Indicator Systems:
The past decade has witnessed continued growth in the development of
national indicator systems and in the evolution of national topical
indicator systems into comprehensive ones. International organizations
like the United Nations, the World Bank, and the International Monetary
Fund have supported such efforts, recognizing their importance in an
increasingly interconnected world. However, until recently there has
been no coordinated worldwide effort to study the development and
implications of national indicator systems, although significant
interest exists in exchanging related information about lessons learned
among countries. The OECD has begun such an initiative.
The OECD is an intergovernmental organization in which 30 member
countries, including the United States, discuss, develop, and analyze
policy. It has become one of the world's leaders in developing
indicators to evaluate economic, social and cultural, and environmental
conditions and to assist members in policy making. While all of the
member countries are considered to be economically advanced and
collectively produce two-thirds of the world's goods and services,
membership is limited only by a country's commitment to a market
economy and a pluralistic democracy. The majority of the OECD's work
is performed by its secretariat in Paris, which collects data,
monitors trends, analyzes and forecasts economic developments, and
researches social changes and patterns in trade, the environment,
agriculture, technology, taxation, and more. The core work of the OECD
is organized around the following five main areas--trade and investment
liberalization, policy reform and development, managing new and
evolving technologies, public governance, and social protection. OECD
provides members with studies, technical knowledge, and expertise in
these areas and uses the information to help develop guidelines and
codes.
In keeping with its global leadership role in providing quality data to
member countries, the OECD, in collaboration with the Italian
government, is sponsoring a World Indicators Forum in November 2004 to
promote and sustain a global community of practice on developing
national indicator systems. The forum will provide an opportunity to
coordinate research and information sharing among the 30 member nations
and others, and the OECD hopes it will become an annual event.[Footnote
48]
Detailed Scope and Methodology:
Recognizing that before considering such a large-scale national
comprehensive key indicator system, members of Congress and other
leaders could benefit from a better understanding of the experiences of
those who have already developed and implemented comprehensive key
indicator systems, we were asked to report on the following three
questions.
1. What is the state of the practice in developing and implementing
comprehensive key indicator systems in the United States and around the
world?
2. What are the lessons learned from these systems and future
implications?
3. What are some options for Congress to consider in identifying an
organization to develop and implement a national comprehensive key
indicator system?
To address these questions, we collected and synthesized information
from several lines of effort, including literature reviews on topical
and comprehensive indicator systems in the United States and around the
world; interviews with experts; panel discussions from an expert
session convened by the National Academies; reviews of topical area
indicator systems at the national level in the United States; fieldwork
on comprehensive indicators at the state, local, and regional levels in
the United States and on national and supranational efforts abroad; and
a review and analysis of organizational options for a U.S. national
comprehensive key indicator system.
We conducted a comprehensive literature review and interviewed experts
in the field to get a sense of the main issues related to indicators,
lessons learned, possible challenges and effects of a national
indicator system, knowledge of past and current efforts at the U.S.
national level, and ideas about possible efforts to study in greater
depth within and outside the nation. These experts represented a wide
range of communities, including academic researchers, current and
former government officials, not-for-profit leaders, and noted
practitioners at all levels of government. We also drew upon a
literature review and set of interviews that we had conducted for our
February 2003 forum on key national indicators.[Footnote 49]
We studied indicator systems at the national level in the United States
in the following five topical areas: the business cycle of the economy,
health, children and families, aging, and science and engineering. We
selected one indicator system in each of these five topical areas based
on recommendations from the experts we interviewed, recognizing that
this group does not represent the entire field of indicators and that
other indicator systems exist in each of these topical areas. We
reviewed related documents and conducted interviews with at least three
key stakeholders associated with or knowledgeable about each of these
efforts. We posed a standard set of questions to them that addressed
issues such as their history, uses, and the challenges they have
encountered.
As part of our effort to examine the current state of the practice in
comprehensive key indicator systems, we studied a select group of 29
comprehensive key indicator systems that were in operation in the
United States at the state, local, and regional levels, as well as in
Europe. (See app. III for a list of and additional information on these
29 systems.) These systems were selected based on (1) whether they met
all of the characteristics described below and (2) recommendations from
experts. We selected indicator systems that:
* included a mixture of economic, environmental, and social and
cultural indicators (regardless of whether the indicators were
organized around a particular policy focus or framework, such as
quality of life or sustainable development);
* had a reputation of being used or accessed within a jurisdiction;
and:
* had been in existence for more than 2 years and were currently in
operation.
As a final step in the selection process, we asked national
associations representing state and local governments, including the
National League of Cities and the National Association of State Budget
Officers, to review our selections to determine whether we included
indicator systems that generally reflected the state of the practice in
the United States at the subnational level, and for the most part they
concurred with our selections. The European examples were selected
after consultation with OECD, several European national statistical
offices, and other experts.
We conducted interviews with representatives from each of the 29
comprehensive indicator systems. For the most part, our interviews
focused exclusively on those integrally involved in managing the
system, and we posed a standard set of questions to these
representatives. We conducted separate interview sessions with these
officials by convening U.S. regional interview sessions at four GAO
field offices in Atlanta, Boston, Chicago, and San Francisco.
We also conducted more in-depth reviews of several of the 29
comprehensive key indicator systems we studied in the United States and
Europe.
* In the United States, we conducted focused studies on a state system-
-the State of Oregon--and a city system--Boston. We visited Portland
and Salem, Oregon, and Boston, and conducted interviews with those who
had developed and implemented the systems as well as a broader range of
stakeholders, including users and potential users inside and outside of
government.
* We conducted focused studies outside the United States to get the
perspective of national and supranational indicator systems in Europe.
Specifically, we visited two European countries--Germany and the United
Kingdom--as well as EU offices in Belgium and Luxembourg. We focused on
the comprehensive key indicator systems that exist in each of the two
countries and in the EU, and explored how they interact with each other
to develop and implement these systems.
In all locations, we talked with those who are or had been involved in
developing and implementing comprehensive key indicator systems, along
with users and potential users of the indicator systems. However, we
did not collect systematic and detailed information on the potential
versus actual range of uses by different audiences for making choices.
As a result, the preponderance of our examples of usage and application
may give the impression that the systems are used primarily for public
purposes, as opposed to a much broader range of uses by private
individuals and institutions.
Most of the graphics presented in this report from the indicator
systems we studied are included only to illustrate the types of
information these systems provide and the variety of ways it is
presented in their reports or on their Web sites. The examples are not
intended to highlight or discuss the substantive issues conveyed by
them.
We collected descriptive information on numerous aspects of the various
indicator systems described above, although we did not perform any
independent, formal analyses of these selected systems in terms of
benefits, costs, or risks. Also, the sample of selected systems we
reviewed did not include executive information systems or private
corporate systems. Importantly, we have not defined explicit, objective
criteria for the success or failure of a comprehensive key indicator
system. More research is needed in this area because many situational,
evaluative, and contextual factors influence the determination of such
criteria.
Although the federal statistical system is commented on or mentioned
for the purposes of context throughout the report--because of its
significant role in the issues surrounding topical and comprehensive
indicator systems--we did not audit or evaluate the federal statistical
system and its related agencies as part of our scope. Therefore, we are
not able to comment here on the discussions that take place among the
members of that system on many of the topics referred to in our report.
That body of experience and judgment will be vital to any further
serious dialogue on or implementation of the options and possible steps
discussed in this report. We did, however, coordinate with many of the
leaders within the U.S. statistical system for their expertise and
relied upon their advice. These individuals also were able to comment
fully on the document prior to publication.
As part of our work on all three objectives, we contracted with the
National Academies, Committee on National Statistics, to select a group
of what their staff viewed as the most relevant past studies conducted
by the Academies on topical area, domain, and comprehensive indicator
systems. The Academies' staff reviewed these studies, summarized them,
and convened a meeting of experts who had worked on or been involved
with these studies to discuss the findings and lessons learned, and
implications for how a national comprehensive key indicator system
might be developed and implemented. The Academies' review and
subsequent meeting served to validate many of the findings from our
fieldwork. The meeting of experts was held on January 26 and January
27, 2004.
To identify design features that should be considered when starting or
refining indicator systems, we analyzed the information obtained from
our reviews of the literature and the various indicator systems
described above. We applied our professional judgment to this body of
information in order to develop our observations for Congress, and we
also analyzed the legal requirements involved as part of our
identification of broad options for consideration in developing and
implementing a national effort. We did not conduct any formal cost,
benefit, or risk analyses for any specific option we identified and did
not make any recommendations as to which option, if any, Congress or
other leaders should choose.
While we examined indicators from all domains (economic, environmental,
and social and cultural) as part of our overall review of indicator
systems, we conducted additional work on the domain of social and
cultural indicators. Our review of this domain included a literature
search on past and current efforts to develop social and cultural
indicators in the United States and around the world as well as a
review of information obtained from our interviews with experts in the
indicator field and from practitioner interviews with selected
comprehensive and topical indicator systems.
Although this report is a first step in describing the state of the
practice in comprehensive indicator systems in the United States and
other areas of the world, we recognize that our analyses are based, in
part, on information obtained from the select group of indicator
systems described above. GAO did not, nor was it asked to, catalogue
the full universe of the potentially large number of topical or
comprehensive key indicator systems. Moreover, indicators are only one
part of the complex knowledge base required to inform a nation. For
instance, comprehensive key indicator systems must be supported by more
detailed databases for those who want or need to conduct more extensive
research or analysis. A review of these databases and other elements
that contribute to an informed society is beyond the scope of this
report. When we refer to "most" or "many" indicator systems in this
report, we are referring to those systems we selected to study and not
the larger universe of all indicator systems. We recognize that, given
the relatively small number of systems we studied in detail, our
findings and conclusions may not be applicable to the larger universe
of all indicator systems. The applicability of any generalizations or
extrapolations from our study examples to the U.S. national context may
also be limited.
To gain additional comments and insights, we sent a copy of this report
for review to over 60 representatives of various communities who
possess knowledge and experience in these issues, including
representatives of the scientific and research, public interest and
not-for-profit, and accountability communities. We provided a broad
spectrum of leaders and experts with an opportunity to comment on this
report, from the following categories: (a) sectoral, including
individuals from the government (at all levels), business, and nonprofit
sectors; (b) discipline, including both generalists and specialists
in topical areas like economics, health, the environment, and so forth;
and (c) professional orientation, including scientists, academics, and
practitioners. We also sent sections of our report to representatives
of the systems we mention in the text in order to validate facts and
figures. We incorporated their comments, where appropriate, throughout
the draft. Our work was conducted from July 2003 through September 2004
in accordance with generally accepted government auditing standards.
[End of section]
Chapter 2: Citizens in Diverse Locations and at All Levels of Society
Have Indicator Systems:
Citizens in jurisdictions throughout our country and around the world
are engaged in numerous efforts to develop topical and comprehensive
indicator systems. Some of these individuals act on their own behalf,
but many act on behalf of the public and private institutions they
represent. Diverse interested parties from a wide range of geographic
areas have recognized that monitoring trends over time can provide an
important method for viewing the conditions of their areas and making
comparisons with others, as well as for providing information for
planning and decision making. While opinions can and do differ over
what constitutes position and progress, those involved in each
indicator system have nonetheless found sufficient common ground to
agree that sustained efforts to collect, organize, and disseminate
information in more comprehensive, balanced, and understandable ways
will provide critical information that all can use in discussing
options and making choices.
Currently, the United States has an array of indicator systems in
topical areas (such as aging and health) that describe conditions in
the nation as a whole in those specific areas. In addition, many local,
state, and regional entities throughout the United States--as well as
several European countries and the European Union (EU)--have developed
comprehensive key indicator systems that draw from these topical areas
to create broader, general pictures of society and have made them
widely available--often via the World Wide Web. We reviewed 29 diverse
systems
at all levels of government, in many different parts of the United
States, as well as in Germany, the United Kingdom, and the EU.[Footnote
50]
The systems we studied have similarities in that each provides a public
good by serving as a single, freely available source of key indicators
about the economic, environmental, and social and cultural conditions
of a particular jurisdiction or group of jurisdictions. Each of these
systems has produced information products or services (e.g., an annual
report or a Web site) where the design and marketing of the products
have been geared toward better informing a target audience.
However, beyond this, the comprehensive key indicator systems we
studied differed regarding basic purpose. We found that one group of
systems is oriented more toward learning and information exchange. They
enable citizens, researchers and leaders to learn more about and
monitor conditions in their jurisdictions. Occasionally, these systems
help inform the activities of others, such as making policy and fiscal
decisions. In contrast, the second group of systems takes a step
beyond learning and exchanging information to encompass a more
outcome-oriented focus on goals or aspirations, whether explicit or
implicit. These systems use indicators to monitor and encourage
progress toward an articulated set of goals or a vision for the future
that has been established by the people and institutions within a
jurisdiction. Such systems can help
create more focused, relevant information for their audiences that may,
in turn, enhance the use of and continuing support for these systems.
The interactions over time within and between indicator systems are
complex. For example, some of the learning-oriented systems we reviewed
eventually stimulated civic activity to formulate common aspirations.
Conversely, it is possible that a system focused on too aggressive and
narrow a set of goals might be weakened or fail to survive due to a
lack of legitimacy or to politicization. In some instances, work in a
topical area, such as the environment, has expanded in scope and
become more comprehensive--such as work over the past decade on
sustainable
development, which includes a range of economic, environmental, and
social and cultural issues. Finally, developers of larger-scale efforts
often learn from the innovations being pursued at smaller scales. On
the other hand, smaller-scale efforts can connect their citizens to
larger issues by monitoring and participating in regional, state,
national, supranational, or multinational systems.
Topical Indicator Systems in the United States Form a Vital Foundation
for Comprehensive Key Indicator Systems:
U.S. citizens have a large variety of sources and means by which to
inform themselves about the nation's position and progress. Indicators
that measure various aspects of the nation's conditions come, for the
most part, from a variety of national topical area systems on issues
ranging from health, safety, and water quality to education,
employment, and natural resources. We studied the following national
topical area systems in the United States: (1) the Conference Board's
Business Cycle Indicators,[Footnote 51] (2) the National Science
Foundation's Science and Engineering Indicators, (3) the Department of
Health and Human Services' Healthy People, (4) the Federal Interagency
Forum on Child and Family Statistics' America's Children: Key National
Indicators of Well-being, and (5) the Federal Interagency Forum on
Aging-Related Statistics' Older Americans: Key Indicators of Well-
being. (See app. I for additional information on these five systems.)
National Topical Area Indicator Systems Are Wide-Ranging and Have a
Variety of Uses:
Interested parties use national topical indicator systems in the United
States in a variety of ways. All of these systems provide an important
public good by bringing together diverse sets of information on
particular topics--often collected by different organizations or
agencies--in a single, convenient place to educate or inform the public
and leaders. For example, the biennial Science and Engineering
Indicators report (published by the National Science Board and the
National Science Foundation) provides a one-stop shop for reliable,
regularly updated indicators that are understandable to statisticians
and nonstatisticians alike.[Footnote 52] Some of the public and private
policy makers we interviewed who do not study the multitude of
publications on science and engineering issues said that having all of
this information in one place is valuable. They have used the volume as
background information for formulating policy and developing proposals,
as well as for program planning.
Topical area indicator systems also provide useful information for
monitoring progress by measuring, tracking, and anticipating or
forecasting events. The Business Cycle Indicators system is a key
example--leaders can use this set of indicators as a tool to forecast
business conditions and to take action to deal with expected
fluctuations in the economy before they reach crisis levels. The
Business Cycle Indicators are designed to monitor, signal, and confirm
cyclical changes, such as recessions, in the economy at large--and are
frequently cited by newspapers and television. In addition, the leading
indicators are often used to report on the extent of economic growth
and signal the overall health of the economy.
Topical area indicator systems also can be used to develop and further
a set of policy objectives or a national agenda, in part, through
building consensus and uniting stakeholders around the development of
an indicator set. The underlying concept behind Healthy People, for
example, is to provide a consensus set of national objectives and
indicators to measure progress toward these objectives. The highly
participatory process Healthy People has used in establishing goals and
indicators is an important element that has helped rally awareness and
commitment for the broad set of health objectives at the federal,
state, and local levels for more than 20 years.
Many National Topical Area Indicator Systems Depend on the Federal
Statistical System, as Well as Private Sector Suppliers and Providers:
All of the national topical indicator systems we examined largely
depend on data and indicators gathered by the federal statistical
system--the federal agencies that collect and disseminate statistics as
part of their missions.[Footnote 53] These agencies have been organized
to support specific government activities and congressional needs for
statistics to help inform policy making in their areas of
responsibility. The result is that we have statistical agencies for
labor, health, education, transportation, science, agriculture, and
justice, among others.
The decentralized nature and wide-ranging character of the system is
evidenced by the fact that over 70 agencies conduct statistical
activities. Ten principal federal statistical agencies collect,
analyze, and produce statistics as their primary mission. As with other
federal agencies, the statistical agencies have been established over
time to meet specific needs and so they are diverse. A benefit of the
federal statistical system is its variety of smaller entities, which
presumably can be more adaptable in meeting the needs of specific
audiences. However, this has also been a disadvantage in that at times
it has hindered the sharing of indicators among agencies that serve
similar populations or work on similar issues. For example, many
agencies that collect indicators on similar populations or work on
similar issues have different funding streams and variable levels of
available funding, answer to different congressional oversight and
appropriating committees, were created at different times for different
reasons, and operate under different laws and orders.
New Institutional Approaches Have Enhanced Indicator Development and
Information Collection:
We identified several recent efforts to increase coordination within
the federal statistical system and enhance access to and dissemination
of data across agencies, topical boundaries, and legal limitations that
could also increase the opportunities to leverage federal statistical
information. One major effort to enhance coordination is the
Interagency Council on Federal Statistics, which provides a vehicle for
coordinating statistical work and information when activities and
issues cut across agencies.[Footnote 54] In 1995, Congress provided
explicit statutory authority to include the heads of all the principal
statistical agencies on this Council. Another effort to enhance access
to and dissemination of statistical data is the Confidential
Information Protection and Statistical Efficiency Act of 2002 (CIPSEA),
which established a uniform set of safeguards to protect the privacy of
individually identifiable information acquired for statistical
purposes. CIPSEA permits sharing of certain business data between the
U.S. Census Bureau, the Bureau of Economic Analysis (BEA), and the
Bureau of Labor Statistics (BLS).[Footnote 55] An additional effort to
improve coordination and expand access to federal statistical
information was the establishment of the Fedstats Web site
([Hyperlink, http://www.fedstats.gov]), which provides users access to
statistics from over 100 federal agencies.
Perhaps even more significant is the emergence of interagency forums
that have been designed to enhance public-private partnerships and
increase the federal statistical system's ability to organize
information around broader sets of public concerns. For example, the
Federal Interagency Forum on Aging-Related Statistics and the Federal
Interagency Forum on Child and Family Statistics are designed to
coordinate, collaborate, and integrate federal information to improve
reporting and dissemination of information to the policy community and
the general public; they also try to produce more complete indicators
with more consistent definitions.
Gaps in our knowledge about important national issues and populations
exist in all topical areas, as do inconsistencies in how we collect
information on them. In some cases, these knowledge gaps appear to be
standing concerns, while in other cases new challenges or events have
rendered existing information collections insufficient. Some topical
indicator systems have served as springboards for identifying knowledge
gaps and a means to work on collecting new or different types of
indicators to fill these gaps or enhance consistency, although the
changes have tended to occur incrementally.
Both the Federal Interagency Forum on Child and Family Statistics and
the Federal Interagency Forum on Aging-Related Statistics seek to
identify and remedy knowledge gaps in information about their
respective populations, many of which appear to be long-standing
concerns. Accordingly, their regular reporting includes sections
devoted to presenting a description of measures that are in need of
development. These lists include many important aspects of children's
and older Americans' lives for which regular indicators are lacking or
are in development, such as homelessness; long-term poverty; mental
health; disability; neighborhood environment; and information on the
social, intellectual, and emotional skills of preschoolers. The forums
have been used to discuss ways to collect new measures and improve
existing ones; and in some cases, agencies have fielded surveys to
incorporate new measures. Moreover, in some instances topical area
systems have demonstrated how indicators on similar issues or
populations are collected inconsistently across various agencies,
including different definitions of concepts like homelessness. For
example, the work of the Federal Interagency Forum on Aging-Related Statistics
has led to a number of developments, such as the establishment of the
Study of Asset and Health Dynamics Among the Oldest Old and the
acceptance of more standardized age categories for use across federal
agencies.
The indicator system maintained by the Federal Interagency Forum on
Child and Family Statistics, referred to as America's Children: Key
National Indicators of Well-being, grew from a public policy need to
integrate information on subjects relating to children and their
families, such as economic security, health, behavior and social
environment, and education. It also originated from a need to
understand the problems of the shared populations served by various
federal agencies and stimulate discussions of collaborative solutions.
At the outset, member agencies were concerned that they did not have,
in one place, a comprehensive picture of the health and well-being of
children and that while there was an abundance of indicators, they were
located in too many different places. This forum, which started
informally in 1994, was formally established by presidential executive
order in 1997, and today it comprises over 20 agencies that have some
jurisdiction over children's issues.[Footnote 56]
The Federal Interagency Forum on Child and Family Statistics has
evolved to focus on the development of a set of indicators and led to
an ongoing series of reports on these indicators, which have been
published annually since 1997.[Footnote 57] Generally, efforts are made
to keep indicators the same so that changes over time can be measured;
however, indicators have been added and refined as data have improved
or become available, or based on comments from interested parties. For
example, a new regular indicator added to the health section of the
2003 report was children who are overweight. This indicator reflects
growing national concerns about obesity among Americans. Figure 9
presents one of the indicators used to monitor the numbers and trends of
overweight children and adolescents; it shows a dramatic increase in
the number of children who are overweight today, as well as significant
differences among racial and ethnic groups. In addition, in
some years following publication of each report, a symposium has been
held with representatives from the private sector and academia to seek
feedback and identify any significant gaps in knowledge about
children's issues. Recently, to make better use of its resources, the
Federal Interagency Forum on Child and Family Statistics decided to
update all indicators annually on its Web site
([Hyperlink, http://www.childstats.gov]), and to alternate publishing
the more detailed America's Children report with a new condensed
version--America's Children in Brief: Key National Indicators of Well-
Being--that only highlights selected indicators. Accordingly, in July
2004, the Forum published the brief, and in July 2005 the Forum will
publish the more detailed report.
Figure 9: Percentage of Children Ages 6 to 18 Who Are Overweight, by
Gender, Race, and Mexican-American Origin, Selected Years 1976-1980,
1988-1994, 1999-2000:
[See PDF for image]
Source: Centers for Disease Control and Prevention, National Center for
Health Statistics.
Note: Data from the National Health and Nutrition Examination Survey.
[End of figure]
In determining its list of key indicators for America's Children, the
Federal Interagency Forum on Child and Family Statistics chose
indicators that were easily understood by broad audiences; objectively
based on substantial research connecting them to child well-being and
using reliable data; balanced so that no single area of children's
lives dominates the report; measured regularly so that they can be
updated and show trends over time; and representative of large segments
of the population, rather than one particular group.
The Federal Interagency Forum on Aging-Related Statistics was created
in 1986 to coordinate information related to the aging
population.[Footnote 58] The impetus for the Forum on Aging-Related
Statistics was a need to improve the quality of information on the
aging population, which has been growing and will become an even larger
population with the retirement of the baby boomers. Major topics of
concern include economic security, health status, health risks and
behaviors, and health care. The Federal Interagency Forum on Aging-
Related Statistics encourages collaboration among federal agencies to
ensure that they know as much as possible about the health and well-
being of the aging population.
As with the Federal Interagency Forum on Child and Family Statistics, the
work of the Federal Interagency Forum on Aging-Related Statistics
eventually led to the development of an interagency set of key
indicators on the health and well-being of the aging population,
culminating in the publication of its first, and so far only, report in
2000, entitled Older Americans 2000: Key Indicators of Well-
being.[Footnote 59] Figure 10 provides an example of one of the
indicators contained in this report that is related to the ability of
older Americans to access health care: the percentage of Medicare
beneficiaries age 65 or older who reported having had problems with
access to health care between 1992 and 1996. According to an official
of the Federal Interagency Forum on Aging-Related Statistics, an
updated version of their indicators report is expected in late 2004.
Figure 10: Percentage of Medicare Beneficiaries Age 65 or Older Who
Reported Having Had Problems with Access to Health Care, 1992-1996:
[See PDF for image]
Source: Medicare Current Beneficiary Survey.
[End of figure]
Topical indicator systems are also devising ways to address knowledge
gaps that have been exposed by new challenges, such as changes in the
global economy. For example, in the science and engineering area,
research and development work is increasingly being conducted by a
wider variety of parties, as there have been significant increases in
research and development partnerships, alliances, and
interdisciplinary research. However, it appears that current indicators
in science and engineering are not sufficient to measure the trend of
increased outsourcing of research and development. In response, the
National Science Foundation is carrying out strategies to capture this
information and change some of its data collection systems to address
these data gaps.
Topical area indicator systems have also exposed instances when
indicators are not collected or presented in the same way, which could
cause confusion or pose difficulties in monitoring trends over time.
For instance, across the health and aging areas, there are reportedly
numerous different definitions of disability in federal programs. One
of the primary missions of both the Federal Interagency Forum on Child
and Family Statistics and the Federal Interagency Forum on Aging-
Related Statistics is developing ways to improve consistency in
information collection efforts and in how concepts are defined.
Topical Areas Are Evolving in Different Ways, toward Creating a Broader
Picture of the Nation's Position and Progress:
Many of those working in the topical fields clearly understand the need
both to broaden the scope of their work and ultimately to integrate it
into a broader, more comprehensive view of
society. Hence, the forces working toward more comprehensive indicator
systems include both citizens and professionals in topical and
disciplinary communities. The following are just a few examples of such
efforts.
* Economics and non-market accounts. The Bureau of Economic Analysis
and others are working on a project to apply national economic
accounting methods to sectors not included in the gross domestic
product accounts, such as research and development. This is exemplified
in the Blueprint for an Expanded and Integrated Set of Accounts for the
United States, which was presented at the Conference on Research in
Income and Wealth - New Architecture for the U.S. National
Accounts.[Footnote 60] Further, European statisticians, particularly
in the Netherlands, have developed frameworks for integrating
environmental information into the national accounts, and research in
this area is a priority in the EU.
* Social and cultural indicators. Many private and public sector
efforts currently sponsor either research or regular publications that
bring together information on social and cultural indicators. European
nations, including Germany, the Scandinavian countries, and the
Netherlands in particular, have developed social and cultural indicator
systems that have had an impact on the social policies pursued by their
governments.
* Sustainable development. For at least the past 15 years, the
environmental community (including governments, scientists and
researchers, non-governmental organizations, and businesses) has
struggled worldwide to expand its work to ensure that socioeconomic
development policies include a consideration of the environmental
impact by developing an overall conception of sustainable development.
This was first formalized at the international level, primarily by
governments, when the United Nations sponsored a summit on sustainable
development in Rio de Janeiro in 1992. It was followed by a summit in
Johannesburg 10 years later, which was much broader and attempted to
reach a more diverse community throughout civil society. In addition,
the EU has adopted sustainable development goals and mandated that
member countries develop action plans and a system of indicators for
measuring progress on sustainable
development. As a result, many of the EU member countries are
developing their own sustainable development indicator systems.
* Well-being and happiness. A significant body of academic research
focuses on how to measure overall individual and societal well-being
and happiness, as larger constructs with which to assess society. For
example, researchers in the Netherlands created a World Database of
Happiness, which stores available research findings on happiness and
provides access to related indicators that form the basis of these
findings. Another recent example was a June 2004 Brookings
Institution panel on the relationship between money and happiness,
titled Informing Policy Choices Using the Economics of
Happiness.[Footnote 61]
* Quality of life. Perhaps the broadest set of efforts has to do with
using quality of life as an integrative framework intended to move
beyond the more strictly economic idea of "living standards" to a more
holistic and broader conception of a society's overall status and
progress. For example, the International Society for Quality of Life
Studies has done extensive work in this area and has several related
academic journals, such as Social Indicators
Research and the Journal of Happiness Studies.[Footnote 62]
The next step beyond efforts to broaden the scope within a topical area
or create new crosscutting topical areas leads naturally to
comprehensive key indicator systems, which pull all these together in
an integrated fashion for one or multiple jurisdictions. The
interrelationships between topical and comprehensive key indicator
systems appear to be highly complementary. While topical systems form
the essential underpinning for aggregating information into
comprehensive systems, comprehensive systems create a broad picture
that helps illuminate areas where new topical indicators could be
developed.
The Practice of Developing Comprehensive Key Indicator Systems Is
Active and Diverse:
We found evidence of potentially hundreds of comprehensive key
indicator systems throughout the United States. In this study, we
focused on 26 comprehensive key indicator systems in the United States
at the subnational level that were highly diverse in terms of
geographic location, size of the jurisdiction, level of governance,
culture, situational conditions, political and legal structures, key
public issues, and longevity. In addition, we studied 3 comprehensive
key indicator systems outside the United States at the national and
supranational levels--for a total of 29, as shown in table 4. (See app.
III for more information on the comprehensive systems we studied in the
United States and abroad.)
Table 4: Comprehensive Key Indicator Systems Reviewed for This Study,
by Level of Jurisdiction:
U.S. local/regional level:
* Baltimore's Vital Signs;
* Boston Indicators Project;
* Burlington Legacy Project;
* Chicago Metropolis 2020;
* Neighborhood Facts (Denver);
* Hennepin County Community Indicators (Minneapolis);
* Community Atlas of Hillsborough County (Tampa area, Florida);
* Social Assets and Vulnerabilities Indicators (Indianapolis);
* Indicators for Progress (Jacksonville, Fla.);
* King County Benchmarks (Washington);
* Milwaukee Neighborhood Data Center;
* New York City Social Indicators;
* Compass Index of Sustainability (Orange County, Fla.);
* Portland Multnomah Benchmarks (Oregon);
* Santa Cruz County Community Assessment Project (California);
* Santa Monica Sustainable City Program (California);
* Sustainable Seattle;
* Index of Silicon Valley (California);
* State of the Region (Southern California);
* Benchmarking Municipal and Neighborhood Services in Worcester
(Massachusetts);
U.S. state level:
* Results Iowa;
* Maine's Measures of Growth;
* Minnesota Milestones;
* North Carolina 20/20;
* Oregon Benchmarks;
* Social Well-being of Vermonters;
National level outside the United States:
* German System of Social Indicators;
* United Kingdom Sustainable Development Indicators;
Supranational level:
* European Structural Indicators (European Union).
Source: GAO analysis.
[End of table]
In each case, we found active efforts to assemble indicators and focus
on institutionalizing a new tool for informing the democratic process
in their communities. As shown in figure 11, the longevity of the
efforts we reviewed in the United States and abroad ranged from
approximately 4 to 30 years.
Figure 11: Relative Longevity of Selected Comprehensive Key Indicator
Systems in the United States and Abroad:
[See PDF for image]
[End of figure]
The systems we studied have similarities in that each provides a public
good by serving as a freely available source of key indicators about
economic, environmental, and social and cultural conditions of a
particular jurisdiction or group of jurisdictions. However, the most
significant difference among them lies in their basic orientation and
purpose. One group of systems is oriented more toward learning and
information exchange. These systems enable citizens, researchers, and
leaders to learn more about conditions in their jurisdictions and serve
as instruments for monitoring those conditions. Occasionally, these
systems help to inform the activities of others, such as making policy
and fiscal decisions. Another group of systems is oriented more toward
outcomes, such as goals or aspirations, whether explicit or implicit.
These systems go a step
further by using the indicators as a way to monitor and encourage
progress toward a set of goals or a vision for the future that has been
established by the people and institutions within a jurisdiction. It
appears that outcome-oriented systems tend to create more focused,
relevant information for their audiences, which can aid them in
overcoming some common challenges.
Comprehensive Key Indicator Systems Create a Unique Public Good: A
Single Source of Information about Conditions in a Jurisdiction
Available to Many Audiences:
All the systems we studied have a simple idea in common: bringing
together diverse sources of information into an easily accessible,
useful tool--which can be considered a public good--for a broad variety
of audiences and uses in their jurisdictions. Figure 12 illustrates the
diversity of data sources that a comprehensive indicator system could
aggregate. For example, the Boston Foundation's Boston Indicators
Project brings together indicators from many public and private sources
at all levels of government, including the U.S. Census Bureau, and
city, university, and not-for-profit sources. In addition, systems can
cut across different geographic boundaries and make different
comparisons. Some systems we studied present information at a state or
regional level, while others present information down to the
neighborhood level.
Figure 12: Boston's Data Items by Source:
[See PDF for image]
[End of figure]
Comprehensive Key Indicator Systems Differ Primarily by the Degree to
Which They Are Learning-Oriented or Outcome-Oriented:
Some indicator systems solely provide information for mutual learning
about the economic, environmental, and social and cultural conditions
of a jurisdiction. The indicators in these systems are primarily
selected based upon the information needs of their target audiences and
are grouped into topical area categories without specific links to
jurisdictional or regional goals. The information is often presented on
Web sites with limited commentary or analysis of results. Systems of
this kind that we examined were housed in academic or not-for-profit
organizations. For example, the Social Assets and Vulnerabilities
Indicators (SAVI) system in Indianapolis collects, organizes, and
presents information on "community assets," such as schools, libraries,
places of worship, hospitals, and community centers. The system also
includes indicators on health, education, criminal justice, and welfare
that may highlight what are referred to as "vulnerabilities," such as
neighborhoods with high levels of crime and unemployment.
At the other end of the spectrum are systems that use indicators as a
way to monitor and encourage progress toward outcomes, such as a set of
goals or a vision for the future, that have been established for a
jurisdiction or group of jurisdictions. For example, the Oregon
Benchmarks system measures progress toward a strategic vision and
related goals for the state--known as Oregon Shines. The indicators are
organized around three broad goals: quality jobs; engaged, caring, and
safe communities; and healthy, sustainable surroundings. Each of these
three broad goals has numerous objectives and specific targets
associated with it, and related indicators to measure progress. In most
cases, both types of comprehensive key indicator systems have drawn
upon the rich body of information already developed in topical areas
within the three domains of economic, environmental, and social and
cultural. In addition, some of the systems have evolved by changing in
design or focus to adapt to different circumstances, such as user
demands for more understandable information or other types of
indicators.
We identified less diversity among learning-oriented comprehensive key
indicator systems, and we studied fewer of them than of systems that are
linked to goals or visions. Key
illustrations of learning-oriented comprehensive systems include
Neighborhood Facts, Denver, and the Social Assets and Vulnerabilities
Indicators, Indianapolis.
It should be mentioned that nothing in theory prevents an organization
from having purposes that incorporate aspects of both learning and
outcome orientations. For instance, a learning-oriented indicator
system might be drawn on for the purposes of policy analysis. Or, an
outcome-oriented system could include significant educational and
outreach programs to increase the understanding of its audiences.
Neighborhood Facts, Denver:
The Neighborhood Facts project in Denver provides a comprehensive
source of information on neighborhood conditions in that city, which
has a population of half a million people and is the state capital of
Colorado. It has not established goals or targets for what neighborhood
conditions should be or the levels of progress that are expected. Thus,
a system like Denver's collects selected pieces of information from
diverse sources and organizes them so they are useful to and easily
accessed by its target audience. This system performs a range of
activities, such as publishing regular reports with updated information
on the indicators and maintaining a centralized database. Another
activity is providing training and technical assistance in using the
indicator information to the public or to other organizations,
particularly smaller organizations with fewer resources or less expertise.
Created in 1991, the Neighborhood Facts system describes conditions in
Denver's 77 neighborhoods. The system is managed by the Piton
Foundation, a private foundation funded by a Denver energy company, the
Gary-Williams Energy Corporation. The Piton Foundation was started in
1976 to provide opportunities for families to move from poverty and
dependence to self-reliance. The impetus for Neighborhood Facts was a
desire among public and private leaders to provide citizens,
particularly those in neighborhoods with high concentrations of low-
income individuals, with the information necessary to take action to
improve conditions in their neighborhoods and to become more
independent.
The indicators cover such topical areas as demographics, housing,
economics, health, education, and crime. Neighborhood-level
information can be compared to citywide information. For example, the
system reports, for a particular neighborhood, on the number of renters
who pay more than 30 percent of their income on housing, which Piton
identified as a key indicator of Denver's housing situation. Leaders
could use this information to determine which areas of the city might
be candidates for lower cost options or additional housing units, or
community activists could use it to push for corrective actions.
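To make the mechanics of such an indicator concrete, the following is a
minimal sketch, in Python, of how a housing-cost-burden measure of this
kind might be tabulated for a neighborhood and for the city as a whole.
The record layout, neighborhood names, and dollar figures are
hypothetical illustrations, not the Piton Foundation's actual data or
methodology:

# Hypothetical sketch: tabulating a housing-cost-burden indicator.
# Each record is one renter household; all values are invented.
renter_households = [
    {"neighborhood": "Westwood", "monthly_rent": 900, "monthly_income": 2500},
    {"neighborhood": "Westwood", "monthly_rent": 700, "monthly_income": 3200},
    {"neighborhood": "Five Points", "monthly_rent": 1100, "monthly_income": 2800},
    {"neighborhood": "Five Points", "monthly_rent": 650, "monthly_income": 4000},
]

def share_cost_burdened(households, threshold=0.30):
    """Share of households paying more than `threshold` of income on rent."""
    burdened = sum(1 for h in households
                   if h["monthly_rent"] > threshold * h["monthly_income"])
    return burdened / len(households) if households else 0.0

# Compare one neighborhood's share with the citywide share.
citywide = share_cost_burdened(renter_households)
westwood = share_cost_burdened(
    [h for h in renter_households if h["neighborhood"] == "Westwood"])
print(f"Citywide: {citywide:.0%}; Westwood: {westwood:.0%}")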
Indicator information is obtained from local, state, and federal
sources, such as the U.S. Census Bureau. Neighborhood Facts regularly
updates the information on its Web site and provides periodic e-mail
bulletins to those who sign up online to receive them. Since 1994, it
has published a comprehensive report on indicator results every 5
years. Piton staff provide some training to the public on how to access
and use the information contained in the report and on the Web site,
which is important considering the focus on assisting low-income
residents and small community groups. See figure 13 for a sample of
information in the Neighborhood Facts interactive online database.
Figure 13: Neighborhood Facts Database Sample, Denver:
[See PDF for image]
Source: The Piton Foundation.
Note: See [Hyperlink, http://www.piton.org].
[End of figure]
Social Assets and Vulnerabilities Indicators, Indianapolis:
Initiated in 1993, the Social Assets and Vulnerabilities Indicators
(SAVI) system provides information on the economic, environmental, and
social and cultural conditions in the Indianapolis metropolitan area,
which had a population of over 1.6 million people in 2000. The
Indianapolis metro area is made up of 10 counties with very different
economic structures--Boone, Hamilton, Hendricks, Marion, Hancock,
Morgan, Johnson, Putnam, Brown, and Shelby counties. Marion County is
the center of population in the metropolitan statistical area (MSA) and
in the State of Indiana overall.[Footnote 63]
SAVI began out of an effort to update a community assessment conducted
by the United Way of Central Indiana. The overriding principle of the
project is to increase the accessibility of information about human
services needs, assets, and resources and to provide that information
at a reasonable cost to nonprofit and neighborhood groups. Further,
organizers were concerned that there was too much costly redundancy in
data collection throughout the Indianapolis region, and they wanted
public and private leaders to work from the same information base
about conditions in the metropolitan region. SAVI collects, organizes,
and presents information on "community assets," such as schools,
libraries, places of worship, hospitals, and community centers. The
system also includes indicators on health, education, criminal justice,
and welfare that may highlight what are referred to as
"vulnerabilities," such as neighborhoods with high levels of crime and
unemployment. The system allows users to match assets with
vulnerabilities. For example, if the indicators showed that the most
prevalent ailments in the Indianapolis region are treatable through
outpatient care, yet indicators also show that there is an
overabundance of hospital beds, leaders might be prompted to convert
unused hospital space to outpatient treatment centers.
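Conceptually, the matching that SAVI supports amounts to joining an
asset table and a vulnerability table on a shared neighborhood key. The
short Python sketch below illustrates that idea only; the neighborhood
names, categories, and values are invented and do not reflect SAVI's
actual data model:

# Hypothetical sketch: matching community assets with vulnerability
# indicators by neighborhood. All names and values are invented.
assets = {
    "Near Eastside": {"hospitals": 2, "community_centers": 3},
    "Fountain Square": {"hospitals": 0, "community_centers": 1},
}
vulnerabilities = {
    "Near Eastside": {"unemployment_rate": 0.11, "crimes_per_1000": 45},
    "Fountain Square": {"unemployment_rate": 0.07, "crimes_per_1000": 30},
}

# Join the two tables on the neighborhood key so assets and vulnerabilities
# can be viewed side by side for each area.
for neighborhood in sorted(set(assets) & set(vulnerabilities)):
    profile = {**assets[neighborhood], **vulnerabilities[neighborhood]}
    print(neighborhood, profile)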
SAVI is managed by the Polis Center, a private not-for-profit
organization located in Indiana University-Purdue University at
Indianapolis. The United Way of Central Indiana is the community
trustee of the project. SAVI is funded primarily by local community
foundations, Indiana University-Purdue University at Indianapolis, and
local governments.
SAVI aims to provide a common source of information for community-level
decision making. The system integrates 10 large data sets
(approximately 40 data sets in total) that are collected by others--
mostly by federal, state and local agencies--and processes and presents
the data at the regional and neighborhood levels. An important part of
the program is teaching the public how to use its interactive database
through online support and tutorials. See figure 14 for an example of
the SAVI interactive Web site, which is currently being modified.
Figure 14: SAVI Web Site Sample, Indianapolis:
[See PDF for image]
Source: The Polis Center.
Note: See [Hyperlink, http://www.savi.org/].
[End of figure]
Comprehensive Key Indicator Systems Are Diverse, Particularly Those
That Are Outcome-Oriented:
Most comprehensive key indicator systems we examined in the United
States at the state, local, and regional levels, and in Europe at the
national and supranational levels, are outcome-oriented in that they
monitor the progress of jurisdictions in meeting certain goals or
aspirations for the future and simultaneously provide information on
the condition or position of jurisdictions to a wider group of users.
However, these systems are diverse and vary in several major ways,
including their aims and the activities they perform, their
organizational structures, sources of funding and data, and the
geographic level of data they present.
In most instances, the organizers of these systems selected their
indicators after the goals or visions of a jurisdiction were
established. However, goals or indicators typically undergo periodic
updating through an iterative process of stakeholder review. For
example, the EU developed its European Structural Indicators system to
assess progress in achieving a set of policy goals for the economic,
environmental, and social renewal of the EU that were agreed to by
member countries. The indicators form the basis for a mandated annual
report that policy makers use to monitor progress in achieving the
goals and take appropriate action. Moreover, numeric targets are
sometimes attached to the indicators, specifying the exact degree to
which the indicators are expected to change over time. For example, the
Oregon Benchmarks set a target for crime to decrease by 4 percent over
a 10-year period. Outcome-oriented systems are designed to respond to
the needs or attract the attention of a particular audience of
stakeholders--that is, those who can take action to achieve the goals
or those who are otherwise interested in seeing progress being made
toward them. However, the systems are also available--either through
public reports, a Web site, or both--to other organizations or
individuals to provide information about the condition or position of a
particular jurisdiction, regardless of whether they agree with or are
interested in the goals or visions around which an indicator set is
organized.
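As an aside, the arithmetic behind such a numeric target is simple to
express. The following Python fragment is a minimal sketch, using an
invented baseline value and interim value rather than actual Oregon
Benchmarks data, of how progress toward a 4 percent, 10-year reduction
target might be checked:

# Hypothetical sketch: checking progress against a numeric indicator target.
# Baseline and interim values are invented, not Oregon Benchmarks data.
baseline_rate = 50.0        # offenses per 1,000 residents at the baseline year
target_change = -0.04       # targeted 4 percent decrease over 10 years
target_rate = baseline_rate * (1 + target_change)   # 48.0 per 1,000

current_rate = 49.2         # invented value observed 5 years into the period
halfway_goal = baseline_rate + (target_rate - baseline_rate) * 0.5   # 49.0
print(f"Target: {target_rate:.1f}; current: {current_rate:.1f}; "
      f"at least halfway to target: {current_rate <= halfway_goal}")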
Comprehensive key indicator systems also vary in their aims and the
activities they perform, their organizational structures, and their
funding arrangements. Their various aims include holding others
accountable for agreed-upon policies or strategic goals; raising
awareness of issues revealed through indicator trends to spur action
among leaders inside and outside government; and demonstrating
connections among goals and indicators in crosscutting areas, such as
sustainable development and quality of life. For example, the United
Kingdom's Sustainable Development Indicators system shows various
indicators of social progress, economic growth, and environmental
protection that are related to the country's ability to meet the needs
of present citizens without compromising the ability of future
generations to meet their own needs. Along with working toward multiple
aims, comprehensive key indicators systems perform a variety of
activities, such as regularly reporting to the public on progress being
made toward achieving their goals or vision, or on general conditions
in the jurisdiction. Some organizations choose to report results with
little or no commentary on how much progress has been made. In
contrast, others offer extensive commentary and analysis, such as
assigning grades to signify the level or degree of progress, offering
recommendations for ways to make more progress, or both. Further, a few
systems have acted on their own to address emerging trends that their
indicators have highlighted. Others have provided nonfinancial
assistance, such as training or technical assistance, to help other
organizations or entities use the information and enhance their ability
to take action.
The organizational structures and funding sources of these
comprehensive key indicator systems also varied. Some have been
established within government agencies; within not-for-profit
organizations, such as civic groups, academic institutions, or
foundations; or through
partnerships between public and private organizations. In some cases,
new organizations were created to develop and implement the systems. In
other cases, the systems were initiated in existing organizations.
Sources of funding included exclusively public funds, exclusively private funds, or
a mixture of the two. Diversity exists even within a particular type of
funding. For example, private funding might come from one or more non-
profit foundations or a for-profit corporation. The Index of Silicon
Valley system in California was initiated by a non-profit organization
that is a consortium of leaders in the government, academic, civic, and
business communities, among others; and its funding sources are
similarly diverse. In contrast, the Oregon Benchmarks system was
initiated and is managed by the state government, and receives its
funding exclusively from the state.
Several comprehensive key indicator systems at the state, local, and
regional levels in the United States and at the national and
supranational levels in Europe illustrate the similarities and
differences among systems that are linked to desired outcomes, such as
a jurisdiction's goals or visions for the future. The following are
examples of outcome-oriented systems: the Boston Indicators Project,
Boston; the Index of Silicon Valley, California; the Oregon Benchmarks,
State of Oregon; the Sustainable Development Indicators, the United
Kingdom; and the European Structural Indicators, European Union.
Example of a Comprehensive Key Indicator System at the Local Level--
Boston Indicators Project:
Since 1999, the Boston Indicators Project's system has reported on
progress toward shared goals for Boston, provided comprehensive
information about the city's progress in meeting those goals, and
compared the city's position to that of other cities and the nation as
a whole. Boston is a racially and ethnically diverse city, with a
population of nearly 600,000 (according to the 2000 U.S. census),
making it the 20th largest city in the United States and a major
northeastern hub. It also serves as the capital of the State of
Massachusetts. (See app. IV for additional information on the Boston
Indicators Project.)
Staff of the Boston Foundation, a private, not-for-profit community
foundation, manages the indicators project. The project is funded
through private sources yet receives some in-kind public support. The
project is a collaborative effort between the Boston Foundation, the
Boston Redevelopment Authority, and the Metropolitan Area Planning
Council. The staff's main activities are to use the indicator system
and its reports to raise awareness of emerging issues among public and
private leaders as well as citizens, provide a comprehensive source of
information, train and educate groups and individuals on how to use the
information, provide a common source of information for civic
discourse, and facilitate collaborative strategies to make progress
toward citywide goals. According to organizers, the impetus for the
system was a major change in Boston's economic and social conditions in
the 1990s, including a transition to a more technology-based economy.
Accordingly, government and community leaders called for a convenient
source of information to assess the city's position and progress in a
time of rapid change.
The indicators are organized around 10 goal areas: civic health,
cultural life and the arts, economy, education, environment, housing,
public health, public safety, technology, and transportation. For
example, in the section on the environment, one goal is having
accessible green and recreational spaces, and a related indicator is
the amount of green space available per 1,000 children. This indicator
could be used in a number of ways. If the indicator showed that
Boston's green space acreage was not keeping up with the growth in the
number of children in a particular neighborhood, it could be a sign
that city leaders should consider increasing the amount of public open
space in that neighborhood, among other options.
Further, the project groups some of its indicators by crosscutting
topics, such as children and youth, race/ethnicity, and sustainable
development, to help users see important connections among various
issues and how they might contribute to problems such as poor race
relations or racial disparities. For example, under race/ethnicity, one
could view those indicators related to monitoring the conditions of
Boston's racial and ethnic communities, such as the degree of racial
segregation in Boston's neighborhoods and the unemployment rate by
race. Indicators are drawn primarily from existing statistical sources
and supplemented by a few public opinion surveys that the project
conducts.
The project publishes reports on indicator trends every 2 years and has
published comprehensive reports in 2000 and 2002, with another one
planned for 2004. It also maintains an interactive Web site, which is
illustrated in figure 15. The goals and related indicator measures were
selected through a highly participatory process involving more than 300
residents from diverse public and private organizations, neighborhoods,
and racial and ethnic groups.
Figure 15: The Boston Indicators Project's Interactive Web Site:
[See PDF for image]
Note: See [Hyperlink, http://www.tbf.org/indicators/].
[End of figure]
The Boston Indicators Project is an example of a system that has
evolved over time. Initially, it aimed to promote public awareness of
issues through its indicators report and make information more
accessible to the community. Because of widespread support and use of
the system, managers have expanded their activities to link the
system's broad goals and indicators to the development of a new civic
agenda for action. The project's managers have brought together a group
of local leaders from government, business, academic, and not-for-
profit organizations to develop a mutually agreed-to civic agenda,
including long-term goals and benchmarks, that would include specific
actions to address certain issues identified through the project's
indicators. Managers say they believe such an agenda will allow the
indicator system to have a greater impact on the city and make it more
relevant to the public. The civic agenda will appear for the first time
in the project's 2004 comprehensive report.
Example of a Comprehensive Key Indicator System at the Regional Level-
-Index of Silicon Valley:
Launched in 1995, the Index of Silicon Valley annually reports on
progress in achieving a set of goals--largely related to sustainable
development--for California's Silicon Valley region. The Silicon Valley
is commonly considered to be all of California's Santa Clara County, as
well as part of San Mateo County; Scotts Valley in Santa Cruz County;
and Fremont, Newark, and Union City in Alameda County. With more than
2 million people, the Silicon Valley region has a larger population
than 18 U.S. states.
The indicator system is managed by the Joint Venture: Silicon Valley
Network (JVSV), an independent, private, nonprofit organization
funded by private corporations, individuals, foundations, and local
governments in the region, which also constitute the target audiences.
JVSV has a board of directors consisting of leaders from business,
labor, government, education, nonprofits, and the community. The
impetus for the system was a perceived need for leaders inside and
outside of government to work together toward common goals, since the
Valley itself is so diverse--containing hundreds of business,
educational, and research institutions, as well as myriad local
governments.
The system is organized around four broad themes--innovative economy,
livable environment, inclusive society, and regional stewardship--and
17 goals under these themes. The related indicators deal with topical
areas, such as education, health, housing, the environment, economic
development, workforce preparedness, transportation, and civic
involvement. For example, one goal is for the region's innovative
economy to increase productivity and broaden prosperity. This goal is
measured in part by an indicator of the number of fast-growth companies
in the Silicon Valley. In this case, if the number of fast-growth
companies were shown to be declining, then, depending on the cause of
the decline, this trend could spur collaborative efforts in the region to
attract businesses that create rapid job growth, such as gazelle
companies (especially fast-growing companies) that generate the most
wealth, new technology, and new jobs in the Silicon Valley and across
the United States. Figure 16 is an example of one of the indicators
used in this system--the number of publicly traded gazelle firms in the
Silicon Valley, which has declined from a high point in 1996.
Most indicators are obtained from existing sources, although some
original surveys are conducted. JVSV selected the goals and
accompanying indicators after consulting with thousands of residents
and regional leaders in the public and private sectors. Planning for
the effort began in 1992.
Figure 16: Number of Publicly Traded Gazelle Firms in the Silicon
Valley:
[See PDF for image]
[End of figure]
JVSV aims to raise awareness among public and private leaders of issues
highlighted by the indicator results by communicating results through
an online database, oral presentations, e-mail updates, and mass
mailings of its reports. In addition, JVSV tries to tackle specific
issues that emerge by facilitating various regional collaboration
activities, such as seeking investors to fund and implement efforts to
facilitate progress toward certain regional goals.
Example of a Comprehensive Key Indicator System at the State Level--
Oregon Benchmarks:
Work on the Oregon Benchmarks system was initiated in 1989, and its
intent is to measure progress toward a strategic vision and related
goals for the state as a whole--known as Oregon Shines[Footnote 64]--
and to provide a single source of comprehensive information on
economic, environmental, and social and cultural conditions in Oregon.
The State of Oregon has a population of slightly over 3.5 million, and
it is a mix of high-technology urban areas--with over 530,000 people
concentrated in Portland--and rural, agricultural areas. While the
state benefited from the technology boom of the 1990s and became a
high-technology hub, its economy has also suffered the effects of the
downturn in this industry. The state had one of the highest
unemployment rates in the United States as of July 2004. (See app. V
for additional information on the Oregon Benchmarks.)
The Oregon Shines strategy was developed in the late 1980s, when the
state was recovering from another serious recession. Oregonians helped
to create Oregon Shines as a blueprint for the state's economic
recovery, and the benchmarks system was created shortly thereafter to
monitor the state's progress in achieving it. The system is managed by
the Oregon Progress Board (Board), a unit of the state government that
is chaired by the governor and consists of other appointed leaders
inside and outside government. It also has a small government staff and
is funded by state government appropriations. The Board developed, and
continues to revise, the indicators based on extensive feedback
sessions with other leaders and citizens, including meetings with
residents across the state.
The indicators are organized around three broad goals related to Oregon
Shines: quality jobs; engaged, caring, and safe communities; and
healthy, sustainable surroundings. Under these goals are 90 indicators
regarding the economy, education, civic engagement, social support,
public safety, community development, and the environment. There are
numeric targets attached to each of the indicators. As an example of a
particular goal and indicator, under the "engaged, caring, and safe
communities" goal, "students carrying weapons" is measured by the
percentage of students (grades 9-12) who report carrying them--based on
a statewide survey (see fig. 17). In the case of Oregon, the percentage
of students carrying weapons has declined in the past 10 years. However,
if this indicator showed that the percentage of students carrying
weapons began to increase, leaders might determine that corrective
actions are necessary to address the problem. Oregon's
system provides information at both the county and state levels.
Approximately one-quarter of the indicators are derived from a state
survey and the rest are obtained from existing federal, state, and
local sources.
Figure 17: Students Carrying Weapons--Percentage of Students Who Carry
Weapons in Oregon:
[See PDF for image]
Source: Oregon Progress Board.
[End of figure]
A report on the indicators has been published every 2 years since 1991,
and its target audience is state government officials, other leaders
throughout the state, and residents of the state. The Board promotes
the results throughout the state so that state agencies will have clear
benchmarks to aim for and others outside of government can work to help
the state achieve its indicator targets. In fact, since 2002 the
indicator system has been part of the state government's performance
measurement process and state agencies are required to specify how
their programs and policies will lead to improvement in areas measured
by the indicators.
The Oregon Benchmarks system is another example of one that has
evolved--in this case, from exclusively monitoring and communicating on
the level of progress toward achieving Oregon's high-level, statewide
goals to also facilitating the state government's performance
measurement system. Specifically, the main mission of the Progress
Board and its staff has become facilitating the state's performance
measurement process and providing information to help various leaders
hold state government agencies accountable for making progress toward
indicator targets. However, organizers told us that they do not want to
lose their statewide visioning focus. Legislation enacted in 2001
mandated that the Board establish guidelines for state agencies to link
their performance measures to the indicators and develop a set of best
practices for doing so. Further, the Board has established a system for
reporting progress on performance measures that are linked to the
Oregon Benchmarks, although each agency is responsible for reporting on
its individual performance. These changes were made largely in response
to calls from political leaders to make the system more relevant to the
policy-making process and justify its continued existence in the midst
of a serious downturn in the state government's fiscal situation.
Example of a Comprehensive Key Indicator System at the National Level-
-United Kingdom's Sustainable Development Indicators:
Since 1999, the United Kingdom's Sustainable Development Indicators
system has measured progress toward the government's sustainable
development strategy in the areas of social progress, economic growth,
and environmental protection. The United Kingdom is a constitutional
monarchy with a parliamentary system of government. It has the fourth
largest economy in the world, and its population in 2002 was nearly 60
million--the third largest in the EU and the 21st largest in the world.
Its overall population density is one of the highest in the world, as
its population is concentrated in an area of land that is about the
same size as Oregon. The United Kingdom's capital, London, is by far
the largest city in the country with over 7.2 million people, making it
the 13th largest city in the world.
In the late 1990s, the ruling government committed itself to the goal
of achieving a better quality of life for U.K. citizens and, in 1999,
developed a comprehensive sustainable development strategy for pursuing
that goal. A set of indicators was developed alongside the strategy to
monitor progress. The published strategy document identified a core set
of 147 indicators and committed the government to report annually on
progress against a set of 15 headline indicators--the so-called quality
of life barometer. This strategy is intended to ensure that the
government meets the needs of present citizens without compromising the
ability of future generations to meet their own needs.
The Department for Environment, Food, and Rural Affairs (DEFRA) manages
the comprehensive sustainable development strategy, along with the
indicator system, on a day-to-day basis, although DEFRA must closely
coordinate with other ministries of the government that have
jurisdiction over other areas in the strategy. The indicator system is
funded entirely by the national government.
The system contains 15 "headline" indicators in areas related to social
progress, economic growth, and environmental protection, such as
health, jobs, crime, air quality, traffic, housing, educational
achievement, and wildlife, as well as 132 other indicators in these
areas. Indicators are obtained primarily from national government
agencies with jurisdiction over the various topical areas, including
DEFRA.
For example, one headline indicator measuring progress toward the goal
of maintaining high and stable levels of economic growth and employment
is the percentage of people of working age who are currently employed
(see fig. 18). If this indicator showed that the percentage of
working-age individuals who are employed had started to decline, it
could raise
questions and spur efforts to identify the root causes of the decline
(which could range from cyclical conditions or demographic shifts to
competitiveness issues). Then, the government or others could determine
whether there was a need to design solutions to fit the nature of the
problem. For example, they might consider enhancing job training
programs or conclude that incentives to encourage businesses to
increase hiring were needed to boost employment, or they might decide
not to intervene. It is interesting to note that the U.K. system
reports employment in a positive light (as opposed to "unemployment" as
in the United States).
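For illustration only, the short Python fragment below shows the basic
arithmetic behind an employment-rate headline indicator of this kind;
the population figures are invented and the calculation is a
simplification of the official survey-based methodology:

# Hypothetical sketch: a working-age employment rate as a headline indicator.
# Figures are invented; official estimates come from labor force surveys.
working_age_population = 37_500_000   # people of working age
working_age_employed = 28_100_000     # of those, the number in employment

employment_rate = working_age_employed / working_age_population
print(f"Working-age employment rate: {employment_rate:.1%}")
# Reported as employment (a positive framing) rather than unemployment; a
# sustained decline in this rate could prompt analysis of underlying causes.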
Figure 18: Percentage of Working-Age People Who Are Currently Employed
in the United Kingdom by Region for 2000 and 2003:
[See PDF for image]
[End of figure]
The system provides information on the indicators at the national
level. Where possible, the definitions used are consistent with
international definitions, allowing comparisons with other countries to
be made. Regional versions of the 15 headline indicators are also
published annually. In addition, the national indicators have
influenced other regional indicators and indicator development at a
sub-regional level. The first national report, a comprehensive baseline
assessment for all 147 indicators, was issued in 1999 and fully updated
in 2004; reports assessing progress based on the 15 headline indicators
are issued annually. Also, a Web site contains updated indicators. The
system was designed with the intention that the United Kingdom would
use the information to modify its policies and budgets to achieve the
goals contained in the strategy, particularly in areas in which the
United Kingdom is not making sufficient progress or is lagging behind
other countries.
Example of a Comprehensive Key Indicator System at the Supranational
Level--European Union's European Structural Indicators:
Since 2001, the European Structural Indicators system of the EU has
measured progress toward goals for the economic, environmental, and
social renewal of all of Europe, which were established in an agreement
that was ratified by member countries. The EU is the latest stage in
the ongoing process of European integration begun after World War II to
promote peace and economic prosperity. The EU is a treaty-based,
institutional framework that defines and manages economic and political
cooperation among its 25 member states: Austria, Belgium, Cyprus, the
Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece,
Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the
Netherlands, Poland, Portugal, Slovakia, Slovenia, Spain, Sweden, and
the United Kingdom. The EU member countries have a total population of
over 450 million people, compared to over 290 million in the United
States. Standards of living, as measured by GDP per capita, are around 30
percent below U.S. levels. Since the 1950s, European integration has
expanded to encompass other economic sectors; a customs union; a single
market in which goods, people, and capital move freely; and a common
agricultural policy. Some EU countries have also adopted a common
currency (the euro). The EU has also adopted a range of social policies
related to reducing inequalities and promoting social cohesion. Over
the last decade, EU member states have taken additional steps toward
political integration, with decisions to develop a common foreign
policy and closer police and judicial cooperation. The EU sees
enlargement as crucial to promoting stability and prosperity and
furthering the peaceful integration of the European continent; it also
has several candidate countries that are expected to join in the coming
years.[Footnote 65]
The goals for the renewal of the EU were outlined in the Lisbon
Strategy of 2000 (and modified in 2001), a 10-year blueprint to promote
sustainable economic growth, social cohesion, and environmental
protection that member countries agreed to work toward by implementing
related policies within their own borders. The impetus for creating the
European Structural Indicators system was the need to track the
progress of member countries in achieving the ambitious goals of the
Lisbon Strategy and to identify areas that need improvement. The system
is managed by the European Commission (EC), the EU's executive
apparatus, which is partially funded by contributions from member
countries. A European Council, which consists of representatives of
member countries, makes decisions about the general direction of the
system and which indicators to include.
The indicators are organized into five key areas: employment,
innovation and research, economic reform, social cohesion, and the
environment. For example, there is an indicator for the long-term (12
months or more) unemployment rate for men as a percentage of the
working male population. Figure 19 illustrates the tremendous variation
in the male unemployment rates among the EU countries, as well as among
other non-EU countries, such as the United States. Indicators are
presented at the national level to facilitate comparisons among member
countries. This indicator could be used to show which EU countries have
the highest male long-term unemployment rates in comparison to other
members, potentially dragging down the EU's overall performance. It
could also point out which countries need to take action to boost
employment within their borders, and thereby contribute to the overall
social cohesion and economic security of all of Europe. Data for the
indicators are obtained from countries and coordinated by Eurostat, the
EC's statistical agency. The EC is required to report each year to the
Council on progress in meeting the Lisbon Strategy. The progress report
based on the structural indicators (and accompanying analyses) has been
published every year since 2001.
Figure 19: Long-term Unemployment Rates for Men, 1999-2002:
[See PDF for image]
Note: Data from Update of the Statistical Annex, 2004 Report from the
Commission to the Spring European Council: Structural Indicators.
[End of figure]
In response to changing circumstances, this indicator system was
recently redesigned to improve its utility in monitoring and reporting
on progress toward the Lisbon Strategy's goals and to encourage leaders
of member countries to take action to meet those goals. Leaders from
member countries agreed that the system needed to focus attention on a
limited number of what were considered the most important indicators.
However, the number of indicators kept increasing, and some changed
from year to year, making it difficult to focus on a few important
challenges or monitor progress toward the Lisbon Strategy over time. As
a result, the EC reduced the number of indicators that appeared in its
2004 report to a few headline indicators, and EC officials told us that
the indicators that will be reported to member country leaders annually
will not change for at least 3 years. Eurostat continues to maintain
and update the full set of about 100 indicators on its Web site for the
benefit of other interested parties who want more detail.
Outcome-Oriented Systems May Be More Relevant to Target Audiences:
Outcome-oriented systems can help create focused and relevant
information for their audiences that may enhance the use of and
continuing support for these systems. Audiences could be more likely to
use the information if it is relevant to decisions that affect their
lives and work. Relevancy is difficult, but not impossible, to
determine if there is no focus on outcomes. For instance, a learning-
oriented system that was especially disciplined and focused on
determining the relevancy of information for its audiences could evolve
towards an outcome orientation while gaining the initial advantage of
building early momentum without battles over determining common aims.
Relevancy and quality also affect use. The more the information is
used, the easier it is to create a cycle of stakeholder support and
funding that can eventually lead to positive effects. Similarly,
developers of indicator systems are more likely to identify the most
significant and appropriately constructed indicators if they can,
through civic dialogue and research, define a set of common aims and
aspirations for their jurisdictions.
Although we did not select them on that basis, many of the 29 systems
we examined were focused on outcomes in one way or another. Our interviews with
officials representing these systems revealed that an outcome
orientation--whether outcomes were formative and implicit or well
advanced and explicit--had an impact on the system by making it
somewhat easier to select indicators that were relevant to the system's
audiences.
The vocabulary surrounding discussions of outcomes is sometimes
inconsistent, and thus potentially confusing, because outcomes can be
defined in forms ranging from:
1. the general--what could be called an aim, vision, or aspiration
(e.g., a healthy population); to:
2. a more focused articulation of intent with direct implications for
existing institutions and programs--what could be called a goal or
objective (e.g., reduce the nation's level of obesity); to:
3. a specific objective--what could be called a target (e.g., reducing
teen pregnancy in a city by 10 percent from its current level over a 4-
year period).
Whether outcomes are stated in general or specific terms is not
necessarily a reflection on their utility or legitimacy. A very
specific but unrealistic goal can create problems, whereas a vague,
general aspiration that has broad support can build common ground. For
instance, a frequently observed phenomenon associated with systems that
try to measure performance and make links to results is the
manipulation of data in order to meet specified goals, targets, or
mandated requirements.
Different methods of developing an outcome-orientation can also be
highly interrelated. Positive or negative experiences with targets
(e.g., the inability to effectively measure an area like the fine arts)
could lead a jurisdiction to back off to more general goal statements.
Building consensus around aspirations could, over time, lead
progressively to statements of goals and then eventually to targets as
a jurisdiction gains the confidence and experience in managing to
greater levels of specificity and detail. It is important to clarify
terminology and recognize these interrelationships in any discussion of
comprehensive key indicator systems.
[End of section]
Chapter 3: Comprehensive Key Indicator Systems Are a Noteworthy
Development with Potentially Broad Applicability:
The implications of comprehensive key indicator systems for the United
States are significant. Our work covered a diverse set of systems in
different geographic regions of the United States and abroad, from
small scale (under 1 million in population) to large scale (over 450
million), with widely differing demographics, cultures, political
dynamics, and economic structures. Although the comprehensive key
indicator systems we reviewed were diverse in many respects, our
analysis revealed similarities in the challenges they faced and the
types of positive effects they experienced.
These similarities provide evidence of a pattern in development and
implementation that can provide useful lessons learned for others who
are considering establishing or enhancing such systems. Further,
comprehensive key indicator systems represent a positive step in the
evolution of measurement practices. Prior efforts, including developing
useful data on a wide range of topics and systematic efforts to measure
performance, form the basis for developing more comprehensive
information systems to address increasingly complex and interrelated
issues. It appears that comprehensive key indicator systems have broad
applicability to all levels of society and forms of governance--from
neighborhoods to nations as a whole. However, the commonalities we
discuss here should not be interpreted as a "one-size-fits-all"
approach. Local factors would have to be taken into account.
A Diverse Set of Systems Faced Similar Challenges:
Despite the diversity of the comprehensive key indicator systems we
studied across the United States and around the world, we found that
similar challenges existed when developing and implementing these
systems. The five common challenges we identified involved some issues
that were difficult to overcome, took years to address, or both. In
addition, some challenges require ongoing attention. The exact nature
and magnitude of the challenges varied from place to place based on
various factors, including the system's purpose and target audiences as
well as the features of the particular jurisdiction, such as its
political and economic structures. The common challenges we identified
in the course of our work are:
* gaining and sustaining stakeholders' support for a system,
* securing and maintaining adequate funding,
* agreeing on the types and number of indicators to include,
* obtaining indicators or data for the system, and:
* effectively leveraging information technology.
Gaining and Sustaining Stakeholders' Support:
The challenge of gaining and sustaining support is continuous, even
among systems we reviewed that already had strong levels of political
and financial support and large user bases. For instance, we found that
organizers faced challenges due to concerns about how the indicators
might be perceived and used. Some systems that were able to garner the
strong support needed to start an effort experienced difficulties in
maintaining that support over time. It was also challenging to ensure
that leaders, policymakers, and a wide range of interested parties
viewed the indicator systems as relevant and useful.
Seeking broad support and commitment helped comprehensive key indicator
systems avoid "capture" by one party or particular interest group. Some
systems have instituted broad-based governing structures at the outset
to address this issue. For example, the members of the North Carolina
Progress Board (which runs North Carolina 20/20) are appointed by the
governor, leaders of the legislature, and the Progress Board itself.
Further, to keep its operations as independent as possible, the
indicator system's board represents a cross-section of the state and
includes a former governor and representatives from the academic
community. The Progress Board reports directly to the Board of
Governors of the State University system.
Involving a range of stakeholders helped ensure a mix of interested
parties would use the system over time and identify needed refinements
to ensure its continued relevance. For example, the Portland Multnomah
Progress Board--organizers of a city/county comprehensive indicator
system in Oregon--has benefited from having strong support from the
county chairperson and the mayor. However, uncertainties regarding who
would continue to champion the indicator system in the future when
these elected officials might change represented a continuing
challenge, according to Progress Board officials. They highlighted the
importance of ongoing communications to build continuing support and to
explain what the indicators measure and why they are useful. This can be
accomplished by briefing policymakers and reaching out to businesses,
community leaders, and other interested parties on the usefulness of
having a single, convenient source of information on the economic,
environmental, and social and cultural conditions of their
jurisdictions.
Officials we interviewed identified several specific types of
challenges they encountered in gaining and sustaining support for their
comprehensive key indicator systems, including (1) perceptions of bias
or a lack of independence because the indicator system was initiated or
supported by a particular official or political party and (2) questions
about comprehensive systems being out of touch and not used in policy
making.
Perceptions of bias or a lack of independence. Support for a
comprehensive key indicator system can be undermined if it is viewed as
being nonobjective and biased because of its association with a
particular political leader or party.
While leaders' support can help an indicator system come into existence
and survive for a time, an indicator system that is viewed as one
administration's or one party's initiative can be vulnerable to changes
or elimination as administrations or circumstances change. Several of
the state-level comprehensive indicator systems that we examined were
closely associated with a particular governor and experienced
challenges related to securing and maintaining political support over
time, particularly among legislative bodies or those of the opposite
political party. This perception of a lack of independence played a
role in the history, development, and near demise of the Oregon
Benchmarks system, which is managed by the Oregon Progress Board. Four
successive governors of the same political party have championed this
system. When it came into existence in the late 1980s, the then-
governor's political party controlled the state legislature. However,
by 1994 the opposing political party had gained control of the entire
legislature, and some of the new legislators were suspicious of the
goals and targets of the indicator system. They believed the targeted
levels set for many of the benchmarks were part of a strategy to
increase public funding for the other party's favored programs. In
1995, the legislature allowed the authorization for the Progress Board
to expire, although the newly elected governor reestablished it by
using executive authority.
A strategy used by the Oregon Progress Board's executive director to
encourage the legislature to restore the authorization for the Board
and the benchmarks was to demonstrate the value of the system through
education about what the indicators measure and how they could be used.
Eventually, management was able to gain the support of two key
legislators, who were appointed to the Board. The Board also instituted
a broad-based structure to ensure greater independence and bipartisan
support from multiple communities. The Board and the indicator system
were eventually reauthorized by the legislature on a permanent basis in
1997. The system has since refocused its efforts to become more usable
and relevant to leaders and to put itself on a more stable course. To
justify its continued existence during a severe state fiscal crisis,
the Oregon Benchmarks system has become a formal part of the state
government's performance measurement system, and agencies are required
to link their individual performance reports to the higher-level
indicators.
Questions of relevance and usefulness. Ongoing support for a
comprehensive indicator system could be compromised by questions about
the value of and need for an indicator system that brings together
indicators in particular domains or topical areas that are already
available elsewhere.
Policy in the United States and around the world tends to be considered
and made within individual topical areas or domains, such as tax,
health, and education policy. Governments at all levels, including
executive branch agencies and legislative committees, also tend to be
organized along the lines of specific topical areas. A comprehensive
indicator system designed around a crosscutting area, such as a
sustainable development framework dealing with economic development,
environmental quality, and social and cultural concerns, would,
therefore, not have a built-in audience. This increases the difficulty
of encouraging leaders to think about issues in that framework, and to
use the indicator system as a tool for doing so. For example,
organizers of the United Kingdom's Sustainable Development Indicators
said it was unclear whether their system has prompted leaders to focus
on comprehensive sustainable development strategies, even with support
from the Prime Minister. They have undertaken an ongoing communications
strategy, including an annual national report and media events,
although they acknowledge that changing the way policymakers use
information in making decisions will be an evolving process.
Leaders may continue to reach out for information already available in
their individual topical areas and make policy accordingly, possibly
rendering a comprehensive indicator system underutilized at best or
irrelevant at worst. To overcome this challenge, comprehensive systems
have found it necessary to conduct extensive outreach to make sure the
public is aware of and understands what the indicators monitor, and how
this information could be used by different individuals and groups.
This has been accomplished in a variety of ways, including
presentations and training, or even redesigning their systems to appeal
to their target audiences.
For example, organizers of Baltimore's Vital Signs indicator system
told us they continually make presentations and conduct training
sessions for citywide stakeholders, including the Baltimore City
Council, the Mayor's staff, the Baltimore City Department of Housing,
and the Association of Baltimore Area Grantmakers. The purpose of this
outreach is to make sure leaders, neighborhood groups, and citizens
understand what the indicators are and what they measure so everyone
can be on the same page about which economic, environmental, and social
and cultural conditions are changing, or not changing, in the
community. Further, several of their stakeholder organizations,
including the Association of Baltimore Area Grantmakers, have sent the
Vital Signs report to their members to promote wider use of the
indicators.[Footnote 66] Figure 20 provides an example of an indicator
from the Vital Signs system--the median number of days it takes for
homes to sell in a particular area of Baltimore.
Figure 20: Median Number of Days It Takes for Homes to Sell in a
Particular Area of Baltimore:
[See PDF for image]
[End of figure]
In addition, other factors can affect perceptions about a system's
relevance and usefulness. These factors include situations when
information does not match the comprehension level of the target
audience (such as being overly technical), or the system does not cover
areas that are meaningful or important to key stakeholders.
Due to questions about relevance, the Burlington Legacy Project (BLP)
system in Vermont is being refocused based on feedback from and
underutilization by public and private leaders in the city.
Essentially, critics said that the system's indicators were not linked
to the information that Burlington leaders and residents needed and
were unable to answer the basic question--how are we doing in improving
quality of life and sustainability? Organizers decided that they needed
an index to serve as a comprehensive measure that accounts for and
links economic as well as social and environmental health, which they
felt was fundamental to assessing quality of life and sustainability.
In response, BLP is redesigning the system so that it no longer reports
exclusively on individual indicators and is instead developing a single
index of the quality of life in Burlington--consisting of data from 26
topical areas. Managers believe this index will attract wider attention from
leaders, the public, and the media, and will become more relevant to
them.[Footnote 67] (For a graphic of the index, see fig. 5 of this
report.)
Securing and Maintaining Adequate Funding:
Securing and maintaining adequate funding can be difficult,
particularly in light of current and growing fiscal challenges. In some
cases managers have been forced to curtail the system's activities and
in a few instances operations were nearly shut down due to fiscal
constraints. For example, the Benchmarking Municipal and Neighborhood
Services in Worcester (Massachusetts) system had to scale back the
number of neighborhoods it covers with one of its survey tools because
the data are too resource intensive to collect.[Footnote 68] Other
systems, like the Oregon Benchmarks and Minnesota Milestones that
relied solely on their state governments for funding, have been subject
to funding crises. The Oregon and Minnesota indicator systems were
nearly abolished when their states experienced economic downturns. In
Oregon, funding for the system was abolished by the legislature but was
later reinstated at a lower level. The Minnesota state legislature
eliminated line item funding for Minnesota's system, but for a time it
was able to continue with a reduced level of funding within the state's
operating budget; today, it is no longer an active system. We found
that a lack of diversified funding sources made indicator systems more
vulnerable to fiscal constraints due to their dependence on one source
for most or all of their funding.
Systems that relied on multiple funding sources, such as government,
corporate, and not-for-profit foundations, could make up for reductions
from one source by turning to others for additional funding or possibly
by reaching out to new funding sources. For example, corporate funding
for the Index of Silicon Valley system, which operates in a geographic
area that was hit hard by the downturn in the technology industry in
the early 2000s, was reduced. By relying upon multiple sources,
managers were able to make up for the declining corporate funding by
seeking additional support from others. Specifically, several local
governments increased their funding to make up for it--despite their
own fiscal constraints--because these governments saw the system as a
valuable tool for enhancing collaboration on issues of mutual concern,
such as transportation.
Agreeing on the Types and Number of Indicators to Include:
Agreeing on which indicators to include, and how many to include, in a
system can be challenging, particularly when starting up a new system.
However, these issues continue to present challenges as indicator sets
are revised over time. The challenge arises because selecting the key
issues and conditions that are important to a jurisdiction, and
choosing which specific indicators to use, involves a high level of
subjectivity and value judgment. This is coupled with a need to be
continually responsive to emerging issues and demands.
The number of possible indicators that could be selected to measure key
issues and conditions is generally quite large. Accordingly, selecting
indicators is not a value neutral activity, and different individuals
and organizations sometimes prefer different indicators. For example,
an indicator concerning higher education can be measured in different
ways, such as by the number of students who enroll in college, or the
number who actually graduate. Further, there are numerous ways to
measure whether public education is successful. For example, the
Indicators for Progress system in Jacksonville, Florida, published a
report in 2003 that discusses different ways to measure public school
success.[Footnote 69] Figure 21 shows several of the indicators
mentioned in that report.[Footnote 70] According to an official of the
Jacksonville Community Council Incorporated (JCCI), this figure also
shows that the indicators JCCI tracks can be used as part of a citizen-
based advocacy process to catalyze community improvements.
Figure 21: Different Indicators Used to Measure the Success of Public
Schools in Jacksonville, Florida:
[See PDF for image]
Source: Jacksonville Community Council, Inc.
[End of figure]
In addition, in some cases, stakeholders have debated whether to
express indicators in positive or negative terms. During the
development of the Boston Indicators Project, for instance, organizers
avoided using deficit, or negative, measures, such as the prevalence of
school violence. Instead, the system used indicators expressed in terms
of desired, positive objectives, such as graduation rates. Similarly,
the New York City Social Indicators comprehensive system elected to
report almost exclusively on conditions that are related to positive
objectives. Baltimore's Vital Signs system reports on births at
satisfactory weights--as an indicator of maternal and child health--
rather than low birth weight births, which is an indicator of maternal
and child risk.
Some organizations have sought to involve a wide community of public
and private stakeholders in developing and revising their indicator
systems, particularly those with a community-wide focus. For example,
Baltimore's Vital Signs system included 200 residents of the city and
over 200 other leaders from various communities (e.g., business,
funders, and policy makers) in its indicator selection process.
However, officials cautioned that when indicator systems involve a
diverse group of stakeholders, it is important to build sufficient time
into the process of selecting indicators to allow stakeholders to
address differences and reach consensus, and it usually is an iterative
process. The process of identifying and agreeing on indicators took
over six months for both the Boston Indicators Project and the Compass
Index of Sustainability in Orange County, Florida. Developing consensus
necessitated a series of large and small community meetings along with
reaching agreement among various committees of public and private
stakeholders. The officials believed that the inclusive nature of the
process vastly increased the number of potential users of a system as
well as its overall quality. They told us that bringing various groups
or individuals into the process and involving them in its development
and evolution makes these groups and individuals more likely to use the
indicators regularly and to encourage others to do so as well.
Organizers of many of the comprehensive indicator systems we studied
also found it challenging to limit the total number of indicators
included in the system. In order to reach agreement and limit tensions
among stakeholders, one tendency can be to simply increase the number
of indicators so most of the stakeholders are content because their
preferred indicators have been included. However, doing so can make the
system unwieldy and overly complex, thereby decreasing the chances it
will be used and referenced by policymakers, the media, and others who
often prefer a limited, simple set of key indicators that they can
monitor over time. Managers of indicator systems and experts emphasized
that, to the extent possible, a system should try to keep the total
number of indicators to a minimum. Some systems have put strict limits
on the number of indicators right from the start. For example, the
North Carolina Progress Board, which oversees the statewide North
Carolina 20/20, limits the number of indicators to four for each of its
goals, although it is continuing to refine goals and performance
targets.
In some cases, indicator systems have added more indicators over time
as updates occurred, only to later go through a simplification process
based on feedback from users.
The EU recently redesigned its European Structural Indicators system to
improve its effectiveness in (1) monitoring and reporting on progress
being made toward the Lisbon Strategy's economic, environmental, and
social goals and (2) encouraging leaders of member countries to take
action to meet those goals. The number of indicators had increased over
time and some of the indicators changed from year to year. Leaders from
member countries had expressed concern that the growing number of
indicators made it difficult to identify and focus on the most
important indicators for effectively monitoring emerging trends over
time. To address these concerns, the EU decided to identify and report
on a limited set of 14 headline indicators, as shown in table 5. The
number of structural indicators included in the 2004 annual report was
reduced from 42 to 14 indicators so that country leaders could more
easily focus their attention on and understand progress toward goals.
In addition, these indicators have been fixed for a 3-year period
to facilitate benchmarking and monitoring. The full set of indicators
is still available online to those users who want more detail.
Table 5: European Structural Indicators--Headline Indicators:
* GDP per capita;
* Labor productivity;
* Employment rate;
* Employment rate of older workers;
* Educational attainment (20-24);
* Research and development expenditure;
* Comparative price levels;
* Business investment;
* At-risk-of-poverty rate;
* Long-term unemployment rate;
* Dispersion of regional employment rates;
* Greenhouse gas emissions;
* Energy intensity of the economy;
* Volume of freight transport.
Source: European Commission.
[End of table]
According to the officials we interviewed, using a set of selection
criteria all stakeholders agree to in advance helps ensure that the
indicator selection process works effectively from the outset and keeps
the total number of indicators under control. Applying these criteria
can help facilitate decisions not to use some of the potential
indicators right from the start, and it can also be used to rank a
possible list of indicators. Many of the groups we reviewed have
developed such criteria, and from these, we identified six common
criteria used for selecting indicators. Specifically, selected
indicators should be:
* relevant to key issues, policies, or goals,
* easy to understand and meaningful to a variety of audiences,
* drawn from reliable sources,
* available from existing sources or not resource intensive to collect,
* updated regularly, and:
* comparable across geographic areas or various population groups.
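To make the screening step concrete, the minimal sketch below (in Python) shows one way candidate indicators could be scored against criteria like the six above and ranked before stakeholders debate the final set. It is purely an illustration of the general technique and is not drawn from any system we reviewed; the criteria keys, the minimum-score threshold, and the sample candidate indicators are hypothetical assumptions.

# Hypothetical sketch: screen candidate indicators against agreed selection
# criteria. The criteria keys, threshold, and sample candidates below are
# illustrative assumptions, not taken from any system GAO reviewed.

CRITERIA = [
    "relevant",         # relevant to key issues, policies, or goals
    "understandable",   # easy to understand and meaningful to audiences
    "reliable_source",  # drawn from reliable sources
    "available",        # available or not resource intensive to collect
    "updated",          # updated regularly
    "comparable",       # comparable across areas or population groups
]

def score(candidate):
    """Count how many of the agreed criteria a candidate indicator meets."""
    return sum(1 for criterion in CRITERIA if candidate.get(criterion, False))

def screen_and_rank(candidates, minimum=4):
    """Drop candidates meeting fewer than `minimum` criteria; rank the rest."""
    kept = [c for c in candidates if score(c) >= minimum]
    return sorted(kept, key=score, reverse=True)

if __name__ == "__main__":
    candidates = [
        {"name": "High school graduation rate", "relevant": True,
         "understandable": True, "reliable_source": True,
         "available": True, "updated": True, "comparable": True},
        {"name": "Perceived neighborhood safety (new survey needed)",
         "relevant": True, "understandable": True, "reliable_source": False,
         "available": False, "updated": False, "comparable": False},
    ]
    for c in screen_and_rank(candidates):
        print(f"{c['name']}: meets {score(c)} of {len(CRITERIA)} criteria")

In practice, such scoring would supplement rather than replace the stakeholder deliberation described above.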
Obtaining Indicators or Data for the System:
Challenges related to obtaining indicators, or aggregating data to
compute them, are particularly critical because most comprehensive key
indicator systems rely heavily (or in many cases exclusively) on
indicators and data that are already available from other public and
private organizations. Specifically, officials identified challenges
in (1) obtaining existing indicators or data from the organizations
that collect them, (2) addressing quality or comparability problems,
and (3) finding that indicators or data are not available to measure
key issues or trends.
Obtaining Existing Data:
Organizers of comprehensive key indicator systems have encountered
challenges in obtaining existing indicators or data to compute the
indicators from entities that collect them, particularly when these
data have not been previously or routinely released to the public or
posted on the Internet. System organizers told us that such challenges
are most prevalent at the beginning of a system's development and
experienced primarily by systems that are not officially part of a
governmental unit. For example, Baltimore's Vital Signs effort had
difficulty obtaining data from the city's police department and public
school system, although the problems were eventually resolved through
negotiation with key officials. The Orange County, Florida, Compass
Index of Sustainability also experienced problems in getting data from
agencies, although once the system's first report was released,
agencies and local leaders benefited from its use and are now more
supportive. Officials said that a lack of cooperation from data
producers stemmed from concerns that the data might be used in
unintended ways or would be used to assess an agency's operations;
limited time or resources to make the data more usable to an indicator
system; and the data producers' concerns about privacy.
Some system organizers said that an effective way to increase
cooperation by data producers is to include them as key stakeholders in
the design and implementation of the system, including the process of
selecting the indicators. One system established formal memorandums of
understanding. Indianapolis's Social Assets and Vulnerabilities
Indicators (SAVI) system negotiated and ratified agreements with its
data providers--laying out terms and conditions for both parties as to
what the organizations will provide and when and how the data will be
used--in order to forge a formal, ongoing relationship. Further, the
Hillsborough County Community Atlas system conducted an assessment
(involving public and private organizations) to determine data needs in
the community and the capabilities of local organizations to contribute
to a Web-based data sharing system.
Addressing Quality or Comparability Issues:
The indicator systems we reviewed across the United States rely, for
the most part, on data-producing organizations to ensure that they are
providing valid, quality data, although some system managers told us
that they sometimes try to work with agencies to improve data quality
or encourage them to collect other types of data. Indicator systems
themselves generally have limited data quality control processes. For example,
managers of the Southern California Association of Governments' State
of the Region system told us they have set the standard that they will
only accept indicators and data from official sources--particularly
government agencies or organizations with track records of producing
reliable data.
Organizers of comprehensive key indicator systems have encountered
quality and comparability problems that prevented the use of some
indicators without devoting substantial resources to improve the
quality and comparability of the data. In some cases, reliable, quality
data are simply unavailable. The Jacksonville, Florida, Indicators for
Progress system, for example, found it difficult to obtain reliable
measures of water quality in the region.
Another problem faced by system managers has been that available data
have been collected by different agencies or jurisdictions, and in some
instances these agencies and jurisdictions have not used common or
consistent definitions or units of measurement. As a result, much of
this information becomes unusable or irrelevant in a comprehensive key
indicator system. The EU continually faces problems trying to harmonize
indicators across countries and utilize consistent terms and concepts
of measurement, which tend to vary by country. The international
statistical community, including the OECD, IMF, World Bank, and UN, has
ongoing efforts aimed at improving the comparability of indicators.
Further, sometimes data are not disaggregated to the smaller geographic
levels that systems want to report on, or they are not disaggregated by
other socioeconomic variables of interest, such as race, age, or
gender. For example, in the case of the Boston Indicators Project, the
police department reported crime statistics by district or precinct,
using four-block areas, while educational data were available by
neighborhood or school. This made it difficult to analyze possible
interrelationships between crime and educational factors.
Lack of Available Indicators or Data:
In some instances, the indicators necessary to measure key issues are
not available at all or are not available in a timely fashion. These
gaps are frequently identified during the initial development of
indicator systems. The most commonly identified areas where gaps exist
across the indicator systems we reviewed were health insurance and
health care, child care, the aging population, crime, and educational
data, as well as some topical areas in the environmental domain. In
addition, one of the major sources of demographic information is the
decennial U.S. census, which is conducted once every 10 years. Many
subnational indicator systems in the United States rely heavily on the
Census Bureau, but by the end of the 10-year period, these data may
significantly lag behind actual changes in the population. Officials
provided several specific examples of gaps they had identified, such as
those listed below.
* The Portland Multnomah County Benchmarks system officials reported
that data were not available for about 12 issues that they would like
to include. They hope to be able to find data and are encouraging
agencies and other organizations to collect data on issues such as the
environment. According to organizers of this system, over the past
several years, they have been able to whittle the number of data gaps
from 20 down to 12, as local agencies have improved their performance
measurement efforts.
* The Compass Index of Sustainability in Florida's Orange County wanted
to report on a variety of issues related to its large retired and aging
population. In the process of developing this system, however,
organizers found that the county did not have sufficient data to
monitor the health and well-being of the aging population. The first
indicator report by this system commented on this lack of data, which
resulted in improved data collection efforts throughout the county,
including an extensive survey of the aging population in Orange County.
To overcome the challenge of indicators or data not being available, or
not being regularly updated, organizers of indicator systems have
turned to several remedies, such as collecting their own data or
spurring additional data collection efforts. For example, the Maine Economic Growth Council
(MEGC), which oversees the statewide Maine's Measures of Growth system,
has developed proxy, or substitute, indicators on occasion. In one
instance, MEGC used an indicator on the estimated loon population as a
proxy measure of the extent of contamination in Maine's lakes. Also,
data for 8 of the 61 indicators that the MEGC system tracks are derived
from surveys of citizens and businesses that it conducts itself.
The U.S. Census Bureau is in the process of implementing the American
Community Survey, which will collect and disseminate census information
more frequently. Most U.S. subnational indicator systems currently rely
heavily on the "long form" data from the decennial census. Every U.S.
household receives the short form, which has limited utility for
indicators, as it includes only the demographic basics of race,
ethnicity, and age. In the census year, one in six households receives
the long form, which asks a detailed series of questions regarding such
topics as income, occupation, education, and journey to work. This is
valuable information to support the creation and maintenance of
indicators at all levels of society. The U.S. Census Bureau American
Community Survey (ACS) provides data annually and has been implemented
on a nationwide basis since 2000 for all states and for all counties
and metro areas with more than 250,000 residents. Current plans,
contingent on continued congressional support and funding, could
quadruple the sample size in 2005 and eventually allow for presentation
of data at the census tract and block levels. At present, 800,000
households are surveyed annually; in 2005, the number is expected to
increase to 3 million. A substantial investment in data, such as for
the ACS, could make even more information widely available to support
the development of comprehensive key indicator systems in the United
States.
Effectively Leveraging Information Technology:
The development of advanced computer information technologies has
transformed the tools available for comprehensive key indicator
systems, although the extent to which various systems have leveraged
these technologies varied. According to many of the system managers,
effectively using technology, including the Internet, has made it
possible for comprehensive key indicator systems to transfer data
quickly, made key information more widely available, and helped foster
dialogue among groups with mutual interests. For example, on its Web
site, Indianapolis's SAVI presents a set of tools that enable users to
interact with the data in different ways. Figure 22 lists the various
tools on SAVI's interactive Web site.
Figure 22: SAVI Interactive Tools:
[See PDF for image]
Source: The Polis Center.
Note: See http://www.savi.org.
[End of figure]
Although new information technologies may make it faster, easier, and
cheaper to collect and share data, gaining access to new technologies
can be costly. Costs were one factor that limited the extent to which
some organizations have been able to take advantage of new
technologies, and some systems had to scale back on planned technology
initiatives due to resource constraints. For example, the statewide
Social Well-being of Vermonters system briefly used Geographic
Information Systems (GIS) to display the results of some indicators
(e.g., children's health indicators were analyzed by county), but the
effort was put on hold mainly because of resource constraints.
However, several systems have been transformed over time by new
technologies, and many of the systems' officials told us that they
would like to do more innovative things than they are doing now. The
evolution of the Minnesota Milestones state-level system illustrates
how improved information technologies can transform indicator systems'
operations over time.[Footnote 71] The system progressed from issuing a
printed report to an interactive system where individuals can
manipulate the data themselves, including sorting them by geographic
area, subject, or indicator and creating customized reports.
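As a minimal illustration of this kind of interactive capability--a hypothetical sketch in Python, not the Minnesota Milestones implementation, with all field names and sample values invented for the example--the code below filters a small set of indicator records by geographic area or subject and sorts them to produce a customized listing.

# Hypothetical sketch of filtering and sorting indicator records to build a
# customized report; the records and field names are illustrative only.

RECORDS = [
    {"area": "County A", "subject": "Education",
     "indicator": "High school graduation rate", "value": 84.0, "year": 2002},
    {"area": "County A", "subject": "Health",
     "indicator": "Births at satisfactory weight", "value": 92.1, "year": 2002},
    {"area": "County B", "subject": "Education",
     "indicator": "High school graduation rate", "value": 78.5, "year": 2002},
]

def custom_report(records, area=None, subject=None, sort_by="indicator"):
    """Return records matching the requested area and/or subject, sorted."""
    selected = [r for r in records
                if (area is None or r["area"] == area)
                and (subject is None or r["subject"] == subject)]
    return sorted(selected, key=lambda r: r[sort_by])

if __name__ == "__main__":
    # Example: a report of education indicators, sorted by geographic area.
    for row in custom_report(RECORDS, subject="Education", sort_by="area"):
        print(f"{row['area']}: {row['indicator']} = {row['value']} ({row['year']})")

In essence, interactive query tools like those described here layer a user interface over this kind of selection and sorting operation.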
Organizers of some comprehensive key indicator systems see potential in
other developing technologies. For example, the Boston Indicators
Project expressed interest in work being done by organizations such as
the Massachusetts Institute of Technology on data warehousing and the
interoperability of different data systems to facilitate sharing
between systems. The Boston Foundation is also collaborating with
Boston's Metropolitan Area Planning Council to develop a regional data
repository project for community planning and research, which would
create a deep, searchable database (a data warehouse and portal) with
mapping capacity. Other officials are looking into improved tools for
developing interactive query capabilities so that users of indicator
Web sites could directly manipulate and analyze the data behind the
indicators.
Comprehensive Key Indicator Systems Show Evidence of Positive Effects:
The diverse systems we reviewed showed evidence of common types of
positive effects, such as improved decision making, enhanced
collaboration on issues, and increased availability of knowledge.
Even though we found anecdotal evidence of positive effects on their
respective jurisdictions, this information must be interpreted with a
number of considerations in mind, which are discussed below.
* These positive effects are a function of how different stakeholders
use indicator information along with other resources and information to
inform their decisions made within the context of various political,
economic, and other factors. Because the information they produce can
be used by individuals, the media, businesses, nonprofits, interest
groups, professionals, and governments (among others), the variety of
uses and possible benefits is theoretically wide ranging.
* Determining a cause and effect relationship between the use of
indicator systems, better decisions, and improved problem solving is
beyond the scope of this report. On the basis of common sense, it is
not an unreasonable link to make. But in reality, so many different
factors affect decision making that teasing out the role of indicator
systems as a single causal factor necessitates further research.
* We did not perform complete cost, benefit, risk, and options analysis
for any of the systems reviewed. Nor did we find that other systems had
done so. Hence, the question of how to evaluate the value of these
systems and what their value is relative to other possible uses of
public and private funds remains open.
In spite of these analytical difficulties, our work shows that numerous
investments have been made and sustained over significant periods of
time. Specifically, comprehensive key indicator systems have:
* enhanced collaboration among diverse parties to address public
issues;
* provided a tool to encourage stakeholders to make progress toward
economic, environmental, and social and cultural outcomes;
* informed and facilitated policy making, program planning, fiscal
decision making, and improved research; and:
* increased knowledge about key economic, environmental, and social and
cultural issues, as well as the conditions of certain populations.
It can take years for an indicator system to become a widely used and
effective tool for identifying and monitoring conditions, and tracking
long-term trends that are most important to citizens of a jurisdiction.
For example, based on indicator results that showed declining
graduation rates, leaders of the Indicators for Progress system in
Jacksonville, Florida, partnered with a variety of mutually interested
business leaders and school system officials to press for educational
reform in that jurisdiction. Their collaborative efforts resulted in
the county school board implementing several new initiatives.
Enhancing Collaboration to Address Public Issues:
Comprehensive key indicator systems can reveal significant public
policy problems and help to address them by facilitating collaboration
among various parties inside and outside of government. These systems
serve as useful tools for highlighting economic, environmental, and
social and cultural trends to broader audiences that can include
elected officials, agency heads, the media, and the public. The focused
attention that an indicator system or corresponding report can bring to
certain conditions may increase pressure on diverse parties in the
public and private sectors to collaborate on strategies to address
those conditions. Some indicator system managers have even
convened groups that work on collective strategies to address areas of
common interest. Accordingly, these kinds of efforts help break down
traditional boundaries between various actors and organizations and
encourage recognition of interconnections among various domains as well
as ways that crosscutting approaches could provide solutions to long-
term challenges. Some key illustrations follow.
Chicago Metropolis 2020. This indicator system's report highlighted the
extent to which the Chicago metropolitan region suffered from severe
traffic congestion. Without the profile and attention given to it by a
key indicator system, information on traffic congestion might not have
had the same level of impact on public debate. Figure 23 presents
actual traffic congestion levels for 1996, as well as projected levels
for 2030 if current trends continue without intervention. The report
and subsequent public attention were key factors that led the governor
to sign legislation creating a task force, whose recommendations are
aimed at transforming transportation and planning agencies into a more
coherent regional system that considers the impact of decisions on
other jurisdictions and on a broader range of economic, environmental,
and social and cultural concerns. Because authority over transportation
policy in the region was fragmented among several state agencies and a
variety of city and suburban governments, no single entity, including
the city of Chicago, had been able to act on these problems in a
holistic and crosscutting manner until this task force was created.
Figure 23: Traffic Congestion in Chicago--Actual 1996 and Projected
2030:
[See PDF for image]
[End of figure]
Chicago Metropolis 2020 continues to monitor traffic congestion and
urban transportation trends, as illustrated by testimony presented by
the organization's leadership to the Regional Transportation Task Force
in early 2004 (see fig. 24).
Figure 24: Travel Trends Placing Stress on the Chicago Regional Traffic
System:
[See PDF for image]
[End of figure]
Index of Silicon Valley. This system highlighted shared regional
problems that negatively affected economic growth by hindering new
businesses and development. The Smart Permit Initiative was organized
to tackle these problems. The organizers of this initiative worked with
business and government leaders to create a regulatory streamlining
council whose efforts led to officials in 27 cities and 2 counties
agreeing to standardize their building, plumbing, electrical, and
mechanical codes and related regulatory requirements for new
businesses. The council agreed to reduce approximately 400 local
amendments to these codes to 11. According to officials, these changes
have reduced confusion in building codes among cities and counties,
saved businesses time in getting products to market, and lowered
construction costs for new projects.
Indicators for Progress (Jacksonville, Fla.). The leader of this
system--the Jacksonville Community Council, Inc. (JCCI)--encouraged
regional officials and members of the local media to focus on
significant problems in the county's public school system that had been
highlighted by its indicator report. JCCI leaders produced a separate
report in 1993 on the implications of Jacksonville's public education
problems and recommended ways to address them.[Footnote 72] The effort
tried to demonstrate linkages such as those between indicators for
education excellence and other quality of life indicators, including
job growth and crime. Using these findings, JCCI leaders initiated a
high degree of collaboration among public and private officials.
Eventually, its report and several years of advocacy by JCCI officials,
citizen volunteers, business leaders, public school officials, and
others led the school board to create a commission that outlined over
150 recommended improvements, many of which have been put into action.
Santa Cruz County Community Assessment Project (CAP). This system was
designed to monitor and improve quality of life in this county in
California and reports on 128 indicators related to the economy,
education, health, public safety, social environment, and the natural
environment. A summary of the system's report is sent to every
household in the county. CAP results led to eight new community-wide
efforts, including projects to reduce child injuries, child abuse and
neglect, school absenteeism, juvenile arrests, and childhood obesity.
One key project was to limit youth alcohol and drug abuse. The CAP had
shown growing alcohol and drug abuse by youth in the Santa Cruz area,
which affected other conditions measured by indicators, such as school
achievement, college readiness, and crime. After spotlighting the
connection between these indicators and securing communitywide
recognition of the problems, CAP leaders established a coalition of 110
representatives from public schools, county services, the county
sheriff's department and four city police departments, businesses,
public officials, not-for-profit organizations, parents, and students.
The coalition created a coordinated alcohol and drug prevention
strategy for Santa Cruz. Following collaborative efforts to implement
this strategy, CAP indicators showed that juvenile felony drug arrests,
juvenile arrests for driving under the influence, as well as youth
alcohol and drug abuse, started to decline (see fig. 25).
Figure 25: Percentage of 9th Graders Reporting Use of Alcohol in the
Last 30 Days:
[See PDF for image]
[End of figure]
Providing Tools to Encourage Progress:
Among jurisdictions that established a set of desired economic,
environmental, and social and cultural outcomes in the form of goals or
targets or shared aspirations for the future, those that used
comprehensive key indicator systems found them to be effective devices
for monitoring and encouraging progress toward these outcomes. Some
jurisdictions used comprehensive key indicator systems as
accountability tools to assess the extent to which various parties,
including government agencies, not-for-profit organizations, and
businesses, contributed to achieving results. Indicator systems and
their reports have also been used to highlight for a broader audience
instances when progress is not being made and to encourage interested
parties and stakeholders to take action. Some key illustrations follow.
Santa Monica Sustainable City. This comprehensive key indicator system
for the City of Santa Monica, California, provided information on a
range of indicators that officials used to assess the extent to which
city departments and others contributed to a 1994 citywide plan for
reaching sustainable development goals. Indicators are used for
assessing both city government operations and the community as a whole
in achieving these community-wide goals. In response to what the
indicators were showing, the City Council developed a service
improvement program to increase bus ridership. The council also surveyed the
public to identify needed improvements in services. The city's
transportation department restructured its bus program along these
lines and eventually increased bus ridership by 25 percent, 15 percent
greater than targets established prior to this coordinated effort.
Oregon Benchmarks. The State of Oregon's comprehensive key indicator
system continues to evolve as a tool to help agencies collect and
report information to the legislature and the governor. The use of
these indicators can help demonstrate agencies' contributions toward
statewide goals set forth in Oregon Shines and enhance agencies'
accountability for achieving these goals. Chaired by the Governor, the
Oregon Progress Board sets up the system for reporting progress on
performance measures that are linked to benchmarks. State agencies are
required by law to link their annual performance measures to the Oregon
Benchmarks. The intent is to better align agencies' policies so they
work in concert and focus on moving the indicators in a desired
direction. This provides a mechanism to help encourage state officials
to focus on each agency's contributions toward key objectives and, in
some cases, has spurred policy discussions from a more holistic,
integrated perspective. As shown in figure 26, for example, numerous
state agencies contributed to a benchmark related to child abuse and
neglect, demonstrating the shared nature of many challenges.[Footnote
73]
Figure 26: Oregon State Agencies Whose Programs Are Linked to Child
Abuse or Neglect:
[See PDF for image]
Source: Oregon Progress Board.
[End of figure]
European Structural Indicators. This system provides a tool that is
used to determine how well member countries are meeting policy goals
spelled out in the 2000 Lisbon Strategy for the economic,
environmental, and social renewal of the EU. When the EU's executive
apparatus determines, based on a review of the related indicators, that
a member country has not made sufficient progress toward a particular
goal, it can recommend specific actions in the areas of the economy and
employment to be undertaken by a particular country to help further its
progress. In addition, each country's progress--or lack thereof--is
spotlighted in an annual, publicly released report.[Footnote 74] EU
officials told us that recommendations have been adopted by member
countries and have led to changes in those countries' policies. For
example, in response to EU recommendations, Spain has agreed to
implement new policies to help raise its employment rate among women,
which had been much lower than that for men. This would contribute to
the EU goal of reducing social and economic disparities between men and
women. Further, officials from the EU and some member countries told us
that merely publishing the information and providing the annual report
to the leaders of all member countries helps influence them to improve
performance, thereby contributing to the improved performance of Europe
as a whole and in relation to other nations to which EU members compare
themselves, including the United States.
Informing Decision Making and Improving Research:
Various public and private organizations use indicator systems to
facilitate better-informed and more fact-based policy making, program
planning, and fiscal decision making, as well as to improve the quality
of research on key economic, environmental, and social and cultural
issues. Indicator systems facilitate these processes by bringing
together relevant information in a centralized, reliable location, and
allowing leaders and citizens to easily access it. Because
comprehensive key indicator systems provide indicators on a wide range
of topical areas, they enhance opportunities to identify
interrelationships and analyze crosscutting issues. These systems also
provide the capacity for all leaders to work from the same information
set and make decisions based on it. Finally, a system can provide
economies of scale by eliminating the need for other organizations or
individuals to spend time and resources pulling together information
from numerous disparate sources. Several examples from our fieldwork
illustrate these positive effects.
Boston Indicators Project. This system provides comprehensive
information on Boston, and many community-level organizations have used
its Web site and reports to better inform their decision making and
program planning. For example, grant-making organizations, such as the
project's main organizer, the Boston Foundation (Foundation) itself,
have used this indicator system when reviewing proposals to verify the
data presented in the proposals as well as for making decisions. One of
the factors that the Foundation considers when assessing the merits of
grant proposals is whether the proposal targets a topic for which
indicators show negative trends or is aimed at filling existing
knowledge gaps. Similarly, grant seekers may use the indicators when
selecting topics for research because they realize that grant managers
routinely refer to this system. Accordingly, the Boston Indicators
Project saves all of
these organizations and individuals time and money because they do not
have to collect or aggregate this information on their own. More
importantly, it facilitates coordination of research and helps shape
fact-based decision making that is focused on meeting priorities and
contributing to continued progress.
In one specific example related to the Boston Indicators Project,
leaders of the Nuestra Comunidad Development Corporation
(Corporation)--dedicated to improving the Roxbury section of Boston--
used an array of the project's indicators to provide evidence to a
national foundation of the plight of housing units owned by senior
citizens, many of which were in poor condition. The national foundation funded
this proposal, and the Corporation has implemented a program that helps
seniors rehabilitate housing units in Roxbury, including rentable units
owned by seniors.
Social Assets and Vulnerabilities Indicators (SAVI). SAVI is used by
community planners, neighborhood groups, researchers, and state and
local government agencies in the Indianapolis metro area to inform
policy and program planning and fiscal decision making. For example,
SAVI helped the Indianapolis YMCA Board of Directors make an important
funding decision by using the system's indicators. The YMCA's Board of
Directors asked SAVI officials for help in applying the system's
economic, public safety, demographic, and program indicators to provide
input on where to locate a proposed new YMCA building. SAVI used its
indicators to map areas of need and found that numerous parts of
Indianapolis were equally in need of better recreation and educational
facilities--that is, no one part of the city was a clear-cut choice
based on analysis of the indicators. As a result, the YMCA decided not
to construct a single new building but instead created a
"YMCA Without Walls" program offering a variety of new services
throughout the city in existing facilities, such as churches, schools,
and community centers. The YMCA also used SAVI indicators to determine
which services to locate in various parts of the city, such as locating
after-school programs in parts of the city with high concentrations of
low-income children.
United Kingdom's Sustainable Development Indicators. This system's
national report helped focus attention on several problems, such as the
growing amount of household waste being produced in the country. The
Sustainable Development Indicators reports over several years showed
that household waste in the United Kingdom was growing at a rate of 2
to 3 percent per year.[Footnote 75] These reports highlighted this
existing indicator to a broader audience. The vast majority of this
waste is disposed of in landfill sites, raising broader environmental
concerns because landfills are responsible for about one-fifth of the
country's emissions of methane--a major greenhouse gas that also
worsens air quality. The Prime Minister eventually directed his
Strategy Unit to analyze options to address these issues, and action
has been taken on a number of the options outlined in the report, such
as increasing taxes at landfills. Also, appropriate tasks and targets,
aligned with the newly developed waste strategy, have been integrated
within agency officials' performance agreements. The amount of
household waste not recycled or composted decreased in 2002-2003, the
first such decline in recent years.
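To illustrate the scale implied by the reported growth rate, the
following minimal sketch (in Python, using an assumed baseline quantity
rather than actual U.K. figures) shows how waste volume compounds at 2
to 3 percent per year over a decade.

# Minimal sketch: compound growth at the reported 2 to 3 percent annual
# rates. The baseline of 100 units is assumed for illustration only.
baseline = 100.0
for rate in (0.02, 0.03):
    projected = baseline * (1 + rate) ** 10
    increase_pct = (projected / baseline - 1) * 100
    print(f"At {rate:.0%} per year, waste grows about {increase_pct:.0f}% over 10 years.")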
Increasing Knowledge about Key Economic, Environmental, and Social and
Cultural Issues:
Through the process of selecting indicators and reviewing data sources,
stakeholders and organizations that manage comprehensive indicator
systems sometimes identify areas in which their jurisdictions have gaps
in knowledge about key economic, environmental, and social and cultural
issues. In addition, comprehensive systems may highlight gaps regarding
knowledge about the interrelationships among various indicators and the
development of solutions to crosscutting problems. In some cases, gaps
are also exposed in knowledge of the conditions of certain population
groups, such as the aging population. Once the knowledge gaps are
discovered, the indicator system can help spur new data collection
efforts or the redirecting of existing efforts. Several illustrations
are provided below.
Compass Index of Sustainability (Orange County, Florida). When
developing this system, organizers identified significant gaps in the
county's knowledge about its aging population, a large group in that
jurisdiction. System planners discovered that agencies and other
organizations did not collect basic data on the health and well-being
of the aging population. The system's first report commented on these
knowledge gaps and helped spur county commissioners to appoint a task
force to review existing data collection efforts. This task force
recommended ways to enhance information about the conditions of the
aging population. More information on the aging population will soon be
available from the AdvantAge Initiative Study funded by the Winter Park
Health Foundation in collaboration with the Orange County Commission on
Aging and the Delta Leadership Council of the Senior Resource Alliance.
The survey will establish indicators that the system can continue to
follow into the future.[Footnote 76]
European Structural Indicators. This system has evolved through an
iterative process. Each year participants identify potential indicators
that need to be developed or improved in order to meet the criteria for
the structural indicators. For example, the EU had noted the lack of
indicators on e-commerce and requested that member countries collect
new types of data. These indicators are now included in an online
database of structural indicators. Eurostat has also identified 20
indicators that have yet to be fully developed.[Footnote 77] The EU
uses the following criteria to develop and revise its indicators.
Indicators must be:
* mutually consistent;
* policy relevant (linked to policy goals already established);
* easily understood by the target audience;
* available in a timely fashion;
* available for all or nearly all member countries;
* comparable among these countries as well as to external parties such
as the United States;
* selected from reliable, official sources; and:
* easy to collect and not unduly burdensome on member countries.
Some specific areas in which the EU would like to see progress made are
innovation and research, as well as social cohesion. Although
organizers of the system have sought to include relevant indicators,
they found that for the most part, member countries did not collect
adequate or sufficiently up-to-date information in these areas. To
address these knowledge gaps, the EU has asked member countries to
increase their collection of data on innovation and research in 2004--
for example, on the amount of information technology investment and
public and private expenditure on human capital--and to make this
information available more quickly.
System Costs Are Difficult to Quantify:
We found it difficult to discern the accurate, full costs for
developing, implementing, and sustaining a comprehensive key indicator
system because many of these costs do not appear as line items in the
budgets of the organizations that house them. Many of the systems we
studied are located in larger organizations or agencies. The managers
of these systems tend to borrow or leverage staff and resources from
throughout those organizations or agencies. As such, the full costs of
their time and effort go largely uncaptured. For example, managers of
the Boston Indicators Project, which is housed in the larger Boston
Foundation, told us that they make use of the Foundation's resources,
such as working with its communications department to leverage its
significant media and publishing expertise; organizers also leverage
assistance from the project's partners. Further, because these systems
rely primarily on indicators or data collected by others, the costs
incurred by others to collect data are generally not reflected as part
of an indicator system's own costs.
According to the systems we studied, cost items included printing and
distributing reports, paying staff and consultants, and, for those
systems that employed more innovative technology, acquiring and
managing that technology. For example, organizers of the Southern California
Association of Governments' State of the Region system--which consists
of governments in 8 counties, including Los Angeles County, covering a
population of over 17 million people--told us that the association
dedicated approximately $200,000 for its 2002 annual indicators report.
Of this amount, approximately $25,000 went to printing the reports,
which are distributed to various officials, academia, businesses, and
nonprofit organizations in southern California--and are available to
the general public upon request. The rest of the funding was dedicated
to two staff members who were responsible for drafting and processing
the report.
In a different instance, those responsible for Baltimore's Vital Signs
system--which covers a population of over 600,000 people--told us that
they had three full-time staff dedicated to the project, with an annual
budget of approximately $350,000. These figures are for the
organization that runs the system--the Baltimore Neighborhood
Indicators Alliance--although the bulk of its work relates to the
indicator system. Further, organizers told us that they receive a
significant amount of in-kind support from their partners, which is not
reflected in the budget. The one-time start-up costs were approximately
$450,000. Baltimore's Vital Signs is an example of a system that is
working aggressively with technology, in particular geographic
information systems (GIS). Further, in many of the systems we studied,
one to three persons were dedicated full-time to the project. For
example, the Santa Monica Sustainable City indicator system is managed
by one person in the city's Department of Public Works.
The Maine's Measures of Growth system further illustrates these points.
The system is overseen by the Maine Economic Growth Council, which is
affiliated with the broader Maine Development Foundation. The Maine
Development Foundation has a board of directors drawn from its
approximately 300 members, who represent companies, educational
institutions, municipalities, government agencies, and nonprofit
organizations throughout the state. The Maine Development Foundation
has a full-time staff of nine professionals, although it makes
extensive use of volunteers, loaned executives from members, and
consultants to deliver its various core programs. One program director
staffs the Maine Economic Growth Council and runs the Maine's Measures
of Growth indicator system; that director's work is overseen by the
chief executive officer of the Maine Development Foundation. More
research will need to be done to understand the true costs of these
systems and how they vary based on issues such as scale of population
and use of technology.
Comprehensive Key Indicator Systems Have Potentially Broad
Applicability:
Viewed in historical perspective, comprehensive key indicator systems
appear to be a coherent and noteworthy development of the 20th century,
with potentially broad applicability during the 21st century. They
represent a logical next step in the
evolution from indicator systems for enterprises to performance
measurement systems for governmental institutions to indicator systems
for entire jurisdictions. The most activity and the best organized
communities of practice and knowledge sharing appear to be at the local
level, where the "laboratories of democracy" can generate larger
numbers of efforts at smaller, more manageable scales. However, because
there is also activity at the state and national levels, more research
and sharing of knowledge would be beneficial.
The Systems We Studied Appear to Be a Next Step in the Evolution of
Measurement Practices:
From the beginning of our republic, ideas about measuring conditions
and using information in a democracy were embedded in notions ranging
from the U.S. decennial census and the need for the president to report
on the state of the union, to wider ranging rights concerning freedom
of speech and the press. It was in the 20th century that indicators in
the major topical areas and domains were initiated and fully developed
through public and private cooperation. Many of these bodies of
knowledge have matured over periods ranging from 50 to 75 years
into the indicators we now read about in the newspaper every day.
Comprehensive key indicator systems started their evolution later.
Private sector organizations, academic institutions, and individual
authors have, from time to time, addressed issues of how to assess the
position and progress of a jurisdiction, whether it be a city or a
nation. An example is the widely read volume The State of the Nation by
Derek Bok, President Emeritus of Harvard University.[Footnote 78]
Sustainable, repeatable key indicator systems have appeared in
different jurisdictions with sets of indicators grounded in an
intellectual framework, a diverse set of multi-sector stakeholders, a
group of products and services, and institutional support to sustain
and evolve them.
For at least two reasons, the emergence of these comprehensive
indicator systems represents a next step in the evolution of
measurement and information management practices. First, they take
advantage of an innovative set of new information technologies; and,
second, they build on previous efforts at smaller scales and extend
them to larger scales.
Technology as an Enabling Factor in Indicator System Development:
Changes brought about by the revolution in distributed data collection,
management, and dissemination technologies over the last two decades
have altered the economics of information. Prior to the Internet, the
Web, and the whole set of distributed, open systems that have been
developed, the aggregation, management, and dissemination of
information from disparate sources required a substantial investment.
In the last two decades, the marginal cost of dissemination has
decreased. This means that more groups can take advantage of
investments in sophisticated measurement and information systems that
would not have been feasible before. The change in the economics of
information dissemination has created meaningful new opportunities to
increase the return on investments already made in data by
dramatically increasing the number of people who have easy access to
those data in a usable form.
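This shift can be illustrated with a minimal sketch (in Python) that
uses assumed, purely illustrative cost figures: when the marginal cost
of reaching one more user approaches zero, the average cost per user
falls rapidly as the audience grows.

# Minimal sketch of the changed economics of dissemination. The fixed
# and marginal costs below are assumed values for illustration only.
fixed_cost = 250_000.0    # cost to assemble, verify, and publish the data
marginal_cost = 0.01      # near-zero cost to serve one additional user online
for users in (1_000, 100_000, 10_000_000):
    average_cost = (fixed_cost + marginal_cost * users) / users
    print(f"{users:>10,} users -> average cost per user: ${average_cost:,.2f}")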
Measuring Jurisdictions as the Next Step after Measuring Institutions
and Governments:
At the institutional level, the private sector, and business
enterprises in particular, were the first to begin the process of
systematically measuring their performance, which became widespread
during the era of Total Quality Management in the 1970s and 1980s and
then developed into International Organization for Standardization
(ISO) standards and "Six Sigma" practices that feed the executive
information and financial
systems in wide use today.[Footnote 79] Starting in the 1980s and
1990s, this movement spread to government entities, which were arguably
more complex to measure and, at times, larger in scale than business
enterprises.
At the government level, examples of measurement reform are the Chief
Financial Officers (CFO) Act of 1990[Footnote 80] and the Government
Performance and Results Act of 1993 (GPRA).[Footnote 81] The CFO Act
spelled out an ambitious agenda to help the government remedy its lack
of timely, reliable, useful, and consistent financial information. For
example, it requires agencies to prepare audited financial statements
annually, thereby improving accountability over government
operations.[Footnote 82] Among the purposes of GPRA cited by Congress
was to improve federal program effectiveness and service delivery by
promoting a new focus on results, service quality, and customer
satisfaction through setting program goals, measuring performance
against those goals, and reporting publicly on progress.
Moving beyond enterprise and government indicator systems are indicator
systems covered in this report at the jurisdictional level. These are
systems that substantially increase in scale and complexity as they
attempt to assess the position and progress of multi-sector, multi-
entity jurisdictions (e.g., a city, a region, or a state). As mentioned
previously, this next step by definition creates a wide range of
potential audiences and uses of an indicator system because of the many
different types of individuals, institutions, and communities in a
jurisdiction as opposed to a single business, nonprofit organization,
or government agency.
Working Systems Exist at All Levels of Society and Show Evidence of
Replicability:
We found working systems in jurisdictions at all levels of society,
from neighborhoods to nations with populations in the millions. We studied a
set of systems for local, state, or regional jurisdictions covering
about 25 percent of the U.S. population. Figure 27 shows the population
coverage of the systems we studied in the United States at the
subnational level. Although each system faces unique challenges, has a
unique history, and exists in a unique geographic, political, cultural,
and situational context, the existence of such systems with similar
features suggests potential applicability elsewhere.
Figure 27: Population Coverage of Select Comprehensive Key Indicator
Systems in the United States:
[See PDF for image]
[End of figure]
There are most likely more systems in existence that we were not able
to include. At local levels, there is evidence of replicability, as
jurisdictions copy, adapt, or purchase ideas, civic processes,
indicator frameworks, or technology from others. Networks of
practitioners have focused on sharing knowledge and practices about
indicator systems.
This replicability is occurring not only through the well-established
community and neighborhood networks, but also at other levels. For
example, the Boston Indicators Project is not only developing
technology that could be used by other cities and metropolitan regions,
but it has garnered interest from around the country in its
intellectual framework, indicator set, and advanced product or service
design. The City of Dallas, with different demographics than Boston, is
using many of the Boston features in developing its own indicator
system through a public-private partnership (Dallas Indicators), while
adding many elements that fit its population, geography, and political
and economic structures.[Footnote 83]
This notion of potentially broad applicability is important because it
is likely that in spite of the progress made to date, many
jurisdictions in the United States do not yet have such systems. If
these systems eventually demonstrate a high net risk-adjusted return on
investment, and continue to show replicable features and develop more
organized networks for their propagation, then the potential benefit
for the nation could be large.
Evidence Suggests That a System for the United States as a Whole Is
Potentially Feasible:
The existence of meaningful activity at all levels and general features
that demonstrate transferability suggests the potential feasibility of
such a system for the nation as a whole. The fact that other developed
nations have such systems also demonstrates feasibility. The following
factors specifically suggest potential feasibility for a U.S. national
system.
Demonstrated Scalability. We have found working systems at all levels
of society, including neighborhoods, communities, cities, regions,
states, and nations, as already mentioned. They also exist at the
supranational level (e.g., the European Union) and for the world as a
whole (e.g., the United Nations' Millennium Development Goals). In one example that
bears further research, the OECD plays a role for its member nations
comparable to what might be expected of an institution dedicated to a
comprehensive key national indicator system in the United States.
Evidence of Transferability. We have found elements of existing systems
that are being adapted by other entities (e.g., Dallas and Boston) as
systems share and transfer practices, processes, information,
intellectual frameworks, and/or technology to better meet specific
needs and interact with one another, especially at the local levels.
Also, as a result of the EU adopting policies, such as the Lisbon
Strategy and Sustainable Development, which require member countries to
provide standardized data for indicator systems to measure progress in
achieving agreed-upon goals, many members are now developing related
goals and indicator systems for their own countries.
Demonstrated Comparability. We found working systems covering
populations, such as that of the EU, equal to or greater than that of
the United States, making them roughly comparable in terms of size and
complexity. However, significant differences remain in terms of
political and economic structures, geographic location, demographics,
and culture.
Credible Activity. There is a significant amount of activity across the
United States in terms of both population coverage and geographic
locations. Furthermore, there currently exists a broad-based coalition
of leading individuals and institutions that is planning how to create
and implement a key national indicator system for the United States--
known as the Key National Indicators Initiative.
Even accounting for the unique geographic, political, economic,
cultural, and situational factors in the United States, this evidence
of demonstrated scalability, transferability, comparability, and
credible activity suggests that a U.S. system may not only be feasible
but may actually be in the early stages of development.
More Information Is Needed on Costs, Effects, and Other Issues:
Comprehensive key indicator systems appear to be a noteworthy
development in governance and demonstrate potentially broad
applicability. However, this should not be interpreted to mean that
they are a fully mature and packaged solution ready for implementation
anywhere, with known costs and benefits, risks, and possibilities that
allow for systematic decisions on whether to invest in them or not.
Organizers of systems appeared to make decisions to develop and
implement indicator systems based on various rationales. Some
emphasized the importance of having better public information,
available to a broad range of people, to support better decision making
and public problem solving. Others started their projects to obtain
better information on where the real problems exist, so that they could
make better policy and fiscal choices.
At this stage of development, there are as many unanswered questions
about these systems as there are areas of knowledge and information,
but one question in particular is important: What is the return on the
investment realized by jurisdictions that have invested in these
systems? As previously mentioned in this report, we have not found
enough evidence to make any sort of definitive determination on the
return on investment. And, given the difficulty of valuing information
and its impacts, such rational economic determinations will take years
to emerge, as they have in the private sector. Hence, return on
investment is an important area for further research and evaluation.
Still, such knowledge should not necessarily be seen as a precondition
for starting new indicator efforts. In many cases, it is a common sense
idea to want better, more easily usable and broadly available
information for the public and other audiences on the position and
progress of a particular community. Further, the lessons learned in
this report may be enough to warrant initial expenditures that explore
the possibilities of comprehensive key indicator systems in new areas
around the United States and the world.
[End of section]
Chapter 4: Congress and the Nation Have Options to Consider in Taking
Further Action:
If Congress or another entity chooses to support the development of a
comprehensive key indicator system, certain features should be applied
to its design and organization. Purpose and target audience are
the most important design features to consider at the outset. Other
features, including creating a broad-based governing structure,
ensuring independence and accountability, acquiring diversified
funding sources, and developing strategies to obtain needed indicators
or data, will also need to be considered, decided upon, and factored
into the design and organization of a system.
These design features can be achieved by starting with any of three
different organizational options--public, private, and public-private.
The comprehensive key systems that we reviewed could be classified into
one of these three types of organizations. Therefore, we identified
these three types of organizations as potential options for a national
comprehensive key indicator system in the United States. Most of the
efforts we studied tended to take on a public-private character over
time, regardless of the option with which they began, in part because
they had both public and private audiences and stakeholders.
Some of the systems we reviewed also presented indicators based upon
both public and private information sources.
However, beyond these general features, there are a host of contextual
factors that are critical in the implementation of any system, ranging
from geography and demographic characteristics to cultural and
situational considerations. A healthy implementation approach will take
into account both general and customized factors and weigh them
appropriately in any particular implementation.
Certain Design Features Should Guide the Development of Any System,
Including a U.S. National System:
Our work in the United States and around the world strongly suggests
that the development of a national comprehensive key indicator system-
-or a comprehensive system at any geographic level for that matter--
would greatly benefit from considering and applying several critical
design features to its organization. The features below were drawn from
our research, but there are other, complementary sources of design
criteria for indicator systems which are worthy of note. Specifically,
countries have followed the so-called "Bellagio Principles" in
developing their overall indicator systems.[Footnote 84] In addition,
communities in the United States appear to commonly use information and
guidelines in The Community Indicators Handbook.[Footnote 85]
At the outset, establishing a clear purpose and identifying a defined
target audience and its needs are most crucial. Decisions about how to
incorporate other important features into the system's design should
follow decisions about purpose and target audience. Specifically,
organizers of a comprehensive key indicator system will then need to
consider and make decisions about how to:
* ensure independence and accountability,
* create a broad-based governing structure and actively involve key
stakeholders,
* secure stable and diversified funding sources,
* design effective development and implementation processes,
* identify and obtain needed indicators or data,
* attract and retain staff with appropriate skills,
* implement marketing and communications strategies for target
audiences, and:
* acquire and leverage information technologies.
The importance of each feature, and decisions regarding its application
to a U.S. national system, will be greatly influenced and challenged by
the scale, magnitude, and complexity of the jurisdiction within which
such an effort takes place. For example, a national effort covering 290
million people would be affected by a more diverse and fragmented group
of stakeholders, increased political conflict, and greater
organizational and legal constraints than a city, region, or state.
Also, it would likely necessitate a larger amount of fiscal and
personnel resources than an effort at a smaller scale.
Establish a Clear Purpose and Define Target Audiences and Their Needs:
Organizers should decide whether the system would be intended to focus
on providing information allowing users to learn more about the
conditions of their jurisdiction, or whether it would also have an
outcome orientation and measure progress toward specific goals or a
shared vision for the future. Additionally, the decision about audience
focus forms the underlying construct for the entire system. This could
be a choice to focus on the needs of a smaller audience, such as civic
leaders, versus a broader audience that includes individuals and
institutions in the private and public sectors. Most of the systems we
reviewed purposely chose broader audiences but have had differing
degrees of success in reaching and attracting these audiences.
Paramount to the design of any system is the establishment of a clear
purpose and a defined target audience. Once decisions about purpose and
target audience are set, decisions about the incorporation of other
important design features, such as sources of funding and appropriate
governance structures, will naturally follow. Related decisions include
the activities that the managing organization will perform, and the
products and services it will deliver. For example, a system that is
aimed at monitoring and spurring progress toward a set of specific
policy goals with targets attached to them would need to ensure that it
had a governance structure, as well as development and implementation
processes, that incorporates those officials who are positioned to take
action to meet those targets, such as the heads of key agencies or
legislative leaders. The specificity of a system's purpose is directly
related to its ability to define success or failure, to make
corrections or document best practices, and ultimately to evaluate the
value of the effort for both users and stakeholders.
In contrast, if the system is not structured based on outcomes but is
designed primarily to help various groups learn more about the
conditions of their jurisdiction, then a more inclusive, collaborative
governance structure and processes that include user, provider, and
supplier communities--such as accountability, finance, business, and
statistical groups--would be more appropriate. This would help ensure
that the indicators included in the system reflected a broad-based
consensus on the key economic, environmental, and social and cultural
conditions to track and may increase the likelihood that the system
will be widely used.
Organizers could elect to design the system for a specifically targeted
audience, such as government policymakers, or a wider audience, to
include not only government policymakers but business leaders,
researchers, not-for-profit organizations, advocacy groups, the media,
and citizens. This decision also forms the underlying construct for the
entire system, including what implementation processes are needed, how
it will be funded, and which indicators will be selected. For example,
an indicator system aimed at a wide variety of communities, including
government policymakers, business leaders, researchers, not-for-profit
organizations, and statistical agencies and organizations would need to be
developed and implemented using highly collaborative processes to
ensure that diverse viewpoints are incorporated. Further, if organizers
decided to develop such a system, it would need to have a great deal of
independence so that it could have broad appeal and relevance to those
with differing ideologies, economic situations, religions,
ethnicities, and races.
Ensure Independence and Accountability:
A comprehensive key indicator system should be insulated from political
pressures and other sources of potential bias to the greatest extent
possible. If the indicator system is perceived as biased toward a
particular ideological or partisan perspective, or perceived as less
than transparent, the information it presents is less likely to have
credibility and legitimacy among many users. To attract as diverse a
group of stakeholders as possible, it is critical for the indicator
system and its managers to be seen as credible, trusted conveners who
have successfully coordinated a participatory process for developing
and revising the system over time. Without the credibility that comes
from a strong degree of independence, some users may lose trust in the
accuracy and objectivity of the information.
Furthermore, experts and practitioners commented that the system should
be designed so that debates among leaders are about what the indicator
trends are showing, alternative interpretations and solutions, and how
to address issues and opportunities. A well-designed system should
require only a minimal, ongoing level of discussion about whether its
organizational processes are delivering quality information with
appropriate transparency.
One way to ensure independence and accountability would be to make the
actions and key decisions of the organization managing the system
accountable and transparent to the organizing entity, donors, other
funders, and the public. Without this, the credibility and independence
of the organization could be called into question. For example, a
managing organization should be required to submit an annual report and
audited annual financial statements to its major funders. Similarly, a
U.S. national system could be required to submit a report to Congress
if it received federal funding. These documents and the organization's
use of funds should be subject to external review to avoid questions
about credibility, integrity, and independence.
Ensuring independence and accountability would be even more critical at
the U.S. national level, which operates in a highly partisan
environment, and has a much greater diversity of stakeholders who are
often fragmented along the lines of ideology, wealth, race, gender,
ethnicity, and sexual orientation.
Create a Broad-Based Governing Structure and Actively Involve
Stakeholders:
A comprehensive key indicator system should be governed by a structure
that includes a blend of public and private officials and represents
views from various communities of practice, including the
accountability, statistical, scientific and research, business, media,
leadership, finance, public interest, and not-for-profit communities.
They are the individuals who will make decisions about how to apply and
implement the various design features and set the policies for the
indicator system's staff to follow. They will also make decisions
regarding the overall direction of the system, including the services
and products that the managing organization will deliver. For example,
comprehensive systems that represent large geographic areas, such as
states, have found it useful to create broad-based governing boards
appointed by governors, legislative leaders, and the boards themselves.
These members can include representatives of business, educational
organizations, labor organizations and other nonprofit organizations;
executive branch officials; and state legislators. Members should
ideally be chosen in a transparent, reliable way. A broad-based
governing structure is important because it could help build interest
and acceptance among diverse possible users of an indicator system and
increase access to needed indicators or data.
Our fieldwork shows that such diverse involvement from leaders of
different communities can help to build consensus around a set of
selected indicators and increase use of the system by different groups.
In fact, the single best way to ensure active involvement from an array
of diverse stakeholders is to incorporate leaders from key communities
as part of the management of the system. Moreover, this governing
structure could benefit from having subcommittees that are dedicated to
tackling specific aspects of developing and managing a system, such as
securing funding or designing strategies to communicate the results of
the indicators and value of the system to others.
In addition, recognizing that most systems will be revised over time,
organizers will continue to benefit from soliciting views from a broad
range of citizens, elected officials, government staff, business
leaders, advocacy groups, academic institutions, and not-for-profit
organizations in developing the system and identifying or revising the
indicators. Increased stakeholder involvement generally strengthens
the support for and use of a comprehensive indicator system and
enhances its overall credibility and quality. Having diverse
representation in its governance structure will be even more crucial in
a national effort because of the range of different interests and
viewpoints that exist across the country.
Secure Stable and Diversified Funding Sources:
Securing adequate, stable funding to run the system both at the
outset, when costs are higher, and later, when costs sometimes level
off, is crucial to a system's long-term sustainability. Accordingly, an
indicator system could draw upon
funding from a vast number of possible sources, including federal,
state, and local agencies; private corporations; not-for-profit
foundations; and academic institutions. Such opportunities would be
even greater at the national level. As described earlier, securing and
sustaining funding has been a major challenge for some comprehensive
key indicator systems, particularly those that depend on a single
source of funds, as these systems can be vulnerable to fluctuations in
a particular source.
One way to ensure stability is to diversify the number and types of
funding sources. Doing so can potentially reduce an indicator system's
vulnerability to funding uncertainties or cuts. Seeking funding from
both public and private sources also allows more varied stakeholders,
or funders, to be brought into the system and encourages the diverse
communities they represent to use the system. Moreover, diversity even
within one type of funding is also helpful. For example, public funding
could be drawn from sources such as direct appropriations, government
agency contracts and grants, or a combination of these. The extent to which
organizers can diversify funding varies and depends in part on
applicable legal constraints.
Design Effective Development and Implementation Processes:
Having well-defined and effective processes and systems in place to
carry out the basic functions of the organization and the system's
design is important for comprehensive key indicator systems to operate
effectively. Specifically, it is important to have transparent,
collaborative, and repeatable processes in place to develop and modify
an organizing framework for the indicators, select and revise the
indicators, acquire indicators or data to compute indicators, engage
data producers, assess the quality and reliability of the indicators or
data, seek and maintain funding, and develop and implement
communications and marketing strategies, among other things. Issues
regarding the quality of indicators and their supporting data are
especially important because of the high profile given to information
in a comprehensive key indicator system.
For example, a comprehensive indicator system should have a defined,
agreed-upon process for selecting and revising the indicators to be
included in the system. This process should be guided by criteria for
selecting indicators--criteria that have been agreed to by the system's
governing leaders and are acceptable to the communities they represent,
as well as other potential users. Such criteria guide the selection
process, help to reduce tensions among stakeholders, and help achieve
consensus among them. Many of the indicator systems we analyzed in the
United States and around the world have established such transparent
criteria. Some of the common criteria that have been used by these
systems, and could be replicated by a national system for selecting its
indicator set, include:
* relevant to target audiences,
* aligned with the goals or key issues that the system wants to
monitor,
* easily understood and meaningful to a variety of audiences,
* drawn from reliable sources,
* easily available from existing sources,
* not resource intensive to obtain,
* updated regularly, and:
* comparable across geographic areas or various population groups.
While transparent processes, such as criteria for selecting indicators,
are important, a system's leaders should also have sufficient
flexibility to modify the system's processes as situations change and
some become irrelevant or counter-productive, or as more effective ones
are discovered.
Identify and Obtain Needed Indicators or Data:
Most comprehensive indicator systems report indicators or use data that
are originally collected by other organizations. Identifying and having
the ability to gain access to indicators or data that are provided by
other organizations, including government agencies and the private
sector, is critical to these systems' survival. A national system would
also benefit from being able to combine both public and private sources
of information, assuming the existence of agreed-upon quality assurance
criteria, standards, and processes. In addition to having legal
authority to access the information, the system should have
responsibility, including legal responsibility, for protecting the
confidentiality of the information.
Further, some organizations are reluctant to share information if they
believe that data might be misrepresented or used to make a particular
program or agency look bad. To overcome these and other constraints,
comprehensive indicator systems should establish collaborative
relationships with data producers to convince them to share information
in a timely manner, particularly information that is not readily
available to the public. One effective way to ensure that the system
obtains needed information is to incorporate data producers or key
representatives of the data and scientific communities into the
system's leadership. At the very least it is helpful to have these
representatives at the table when decisions are being made about which
indicators to select as part of the system.
A system's leadership should also develop clear procedures for fair
treatment of data providers. To do this, some systems have established
more formal processes, such as memorandums of agreement that specify
how the data will be used and when and in what form the producers will
provide these data. In addition, if a national system is developed, it
will be necessary to establish access and privacy rights by statute.
Attract and Retain Staff with Appropriate Skills:
A number of human capital issues need to be addressed for an
organization that houses a comprehensive key indicator system. The most
basic would be to establish the nature of the position of chief
executive officer, who would lead the system's staff, and select a
highly qualified person for this position. Because of the high
visibility of the position and the complexity of the organization's
work, particularly at the national level, a person with significant
stature and expertise would be needed.
Having staff with appropriate skills is also critical to ensuring that
the system is operated effectively on a day-to-day basis and that staff
can work with the system's leadership to carry out its decisions. A system's
staff would need to include individuals with a wide variety of skills
and knowledge in areas including statistics, information technology
management, economics, accounting, and marketing and communications, as
well as working knowledge of key economic, environmental, and social
and cultural issues. In addition, these individuals must bring highly
collaborative skills to the table, including experience in facilitating
group processes and consensus-based decision making. Such skills are
important for staff since they would be responsible for managing
processes to continually engage key stakeholders and ensure the
effective running of the system, including cooperation from data
providers.
In addition, other concerns to consider include the exact types and
number of employees, the salaries they would be paid, the benefits they
would receive, and the protections that would apply to them. An
additional human capital issue concerns temporary staff. It is useful
if the system's staff could rely on occasional outside assistance to
supplement the permanent staff, for example, through fellowships,
interagency personnel agreements, internships, and exchanges with other
organizations and government agencies. This element would help to break
down potential barriers, promote a better understanding of the needs of
various statistical entities, and help build public-private
partnerships.
Implement Marketing and Communications Strategies for Target Audiences:
A comprehensive key indicator system would need to have multifaceted
marketing and communications strategies that are tailored to diverse
target audiences. Marketing and communications strategies are intended
to spread the word about the existence and features of the system;
disseminate information on what the indicator trends are showing
regarding economic, environmental, and social and cultural conditions
and trends; and encourage a broader base of individuals and
organizations to make use of the system. Effective marketing and
communication strategies are critical to ensuring widespread
understanding and use of the system, as well as ongoing political and
funding support for it.
In particular, the media, whether print or electronic, are a critical
audience for a system because they play a vital role as both users of
indicators and providers of information to diverse audiences throughout
all segments and levels of society. They can help spread the word about
what the broad indicator set communicates, what specific indicators or
sets of indicators measure and what they mean, how they can be used by
various audiences, and what major trends may be worth paying attention
to on a regular basis.
Based on the experience of others, some specific aspects of an all-
encompassing strategy might include:
* conducting briefings and demonstrations for key legislators, agency
officials, and their staffs;
* maintaining an interactive Web site;
* making presentations at the conferences of various communities, such
as the accountability, statistical, scientific and research, business,
media, leadership, finance, and public interest and not-for-profit
communities;
* reaching out to the media so that they report on the system;
* publishing a variety of comprehensive and topical or domain-specific
reports on indicator trends;
* holding open workshops for leaders and their staffs, as well as for
citizens;
* providing training sessions and other learning opportunities;
* making technical assistance available to users by phone or e-mail;
and:
* conducting media events for the release of new reports or major
updates, featuring notable leaders as spokespersons.
Acquire and Leverage Information Technologies:
In the past decade, technology has made it much easier and less
resource intensive to collect, coordinate, and exchange data among
various organizations, and disseminate information to a broader
audience. For example, the Internet has revolutionized the way
indicators and base data are made available to the public; some federal
agencies post thousands of pieces of data free of charge on their Web
sites. Innovative technology could also facilitate widespread use of a
comprehensive key indicator system. For example, a highly interactive
Web site would make the indicators widely available and accessible to
public and private leaders as well as citizens. It would also enhance a
system's relevance by allowing users to review certain indicators
selectively, or illustrate indicator trends in different ways, such as
cutting them by geographic regions, race, or gender. In acquiring and
applying technology, a national system in particular could look to a
number of existing systems in the United States and around the world
that are on the cutting edge.
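As a minimal sketch of the kind of selective, disaggregated views such
a Web site might offer, the following Python fragment filters a small
indicator table by region and reports the trend for each group; the
indicator, regions, years, and values are invented for illustration.

# Hypothetical sketch: slicing a small indicator table by region.
# All regions, years, and values are invented for illustration.
records = [
    {"region": "Northeast", "year": 2001, "value": 14.2},
    {"region": "Northeast", "year": 2002, "value": 13.8},
    {"region": "South", "year": 2001, "value": 16.5},
    {"region": "South", "year": 2002, "value": 16.9},
]

def trend(region):
    series = sorted((r["year"], r["value"]) for r in records if r["region"] == region)
    (first_year, first), (last_year, last) = series[0], series[-1]
    direction = "down" if last < first else "up"
    return f"{region}: {first} ({first_year}) to {last} ({last_year}), trending {direction}"

for region in ("Northeast", "South"):
    print(trend(region))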
While a national system (or any system for that matter) would benefit
from employing the latest technology, doing so requires extensive
fiscal resources, particularly at the outset. Specifically, a system
would need adequate resources to purchase the technology and upgrade it
over time, as it changes rapidly. A system would also need to have
adequate resources and flexibility to attract and retain technical
staff with relevant expertise to manage the information technology
systems.
Congress Could Choose from a Range of Organizational Options as
Starting Points for a U.S. National System:
If Congress decides to establish a national comprehensive key indicator
system and identify an organization to house it, a number of
organizational options are available to choose from, including public,
private, and combination public-private entities. There are advantages
and disadvantages associated with each option. These basic options, to
a significant degree, also hold for any neighborhood, community, city,
region, or state that is considering a comprehensive key indicator
system.
It is important to note that the specific organizational option
Congress or any other decision maker chooses as a starting point may be
less important than ensuring that key design features are incorporated
into it. This would include considering ways in which multiple
solutions might coordinate with one another until the time is right to
create an overarching institutional structure. Because most of these
systems tend to involve public-private interactions over time, the
public-private option ultimately appears to offer the highest degree of
flexibility in applying the common design features.
Any Viable Comprehensive Key National Indicator System for the United
States Will Eventually Involve Substantial Public and Private
Interaction:
In terms of organizational implementation, most of the efforts we
reviewed had some public-private character--either formal or informal-
-that provided certain flexibility in terms of many of the key design
features we identified.
* First, assessing the position and progress of a jurisdiction in a
market-oriented democracy like ours would benefit from aggregation of
both publicly and privately produced data, as there is a great deal of
information that is produced by private sector providers.
* Second, both public and private institutions, as well as individuals
and a wide variety of groups, make up any jurisdiction that is being
measured and, thus, have an interest in being engaged.
* Third, much federal government data are tied directly to functional
or programmatic purposes and restricted to areas in which the
government has a vested interest. This represents a built-in constraint
on funding and/or including indicators that are not directly associated
with any federal function or program.
* Fourth, public sector institutions that provide data and indicators
today in most cases collect them from private individuals or
institutions, who may have an interest in seeing more available and
accessible information in return for the burden of their time, expense,
and energy.
As a result, there is little question that any comprehensive key
indicator system would have a public-private character. The issue for
any jurisdiction considering a system is where to start, which is a
complex decision that needs to be made on a case-by-case basis.
Publicly Led, Privately Led, or Public-Private Organizations Are
Options Congress Could Consider as a Starting Point:
We identified three primary organizational options that Congress could
consider if it decides to initiate a national comprehensive key
indicator system. Each option would allow for incorporation of all or
most of the key design features, but to varying degrees. These three
organizational options are (A) a public entity, (B) a private entity,
and (C) a combination public-private entity. Our work revealed that
lasting comprehensive key indicator systems--showing positive effects-
-existed in a number of organizational formats, ranging from strictly
public systems, such as Oregon Benchmarks, to systems housed in
private, nonprofit organizations, such as Chicago 2020. There are
advantages and disadvantages to each option, as well as a great deal of
variety in their basic characteristics. Any key indicator system that
uses information not already in the public domain needs to have the
authority to access it, as well as the responsibility for protecting
privacy and addressing other related concerns.
We present three options below that lay out some significant advantages
and disadvantages. We also identified existing national organizations
to highlight various characteristics of each organizational option.
Option A: A Public Organization:
Congress could choose a federal agency, or component of a larger agency
or department, to lead the development and implementation of a national
comprehensive key indicator system. Table 6 provides additional detail
on the characteristics, advantages, and disadvantages of the public
option.
Table 6: Characteristics, Advantages, and Disadvantages of the Public
Organizational Option:
A publicly led system would be housed in a federal agency, operating as
either (1) a new organization within an existing agency, (2) a
completely new agency, or (3) an added responsibility in the mission
and activities of an existing agency. Federal statistical agencies
could be required to provide a new system with access to data. Existing
organizational relationships and processes could be leveraged, such as
ensuring the full participation of federal statistical agencies and
working with successful forums and other models for engaging public and
private external stakeholders, such as advisory committees maintained
by some of the principal statistical agencies.
Advantages:
* A public organization could build upon the significant institutional
capabilities and cultures of professionalism and independence within
the federal statistical system;
* The federal government is already the center of gravity for national
statistics and a public organization could build on this base;
* A public organization could help ameliorate concerns regarding
access to and use of federal statistical information;
* Successful forums and other models currently exist in the federal
system to incorporate stakeholders from inside and outside government,
and could be replicated;
* A public organization could use lessons learned from federal
government experiences in implementing federal laws concerning
transparency and accountability;
Disadvantages:
* Few federal agencies have broad enough scope to house a comprehensive
national system (the U.S. Census Bureau may be an exception);
* Difficulties exist in mixing official and unofficial statistical
information;
* It is an ongoing challenge for information providers to maintain
independence within the national political context;
* A public organization could limit private sector contributions of
funding or staffing by volunteers;
* A public organization could make it easier for funding displacement
to occur, as the organization could have constraints on seeking outside
funding;
* A public organization could be constrained by the federal management
and human capital structures that may apply to it, potentially
affecting the availability of needed talent.
Source: GAO.
[End of table]
To illustrate some of the main features of a publicly led option at the
national level, we selected the U.S. Census Bureau. The Census Bureau
is one of the main federal statistical agencies, as it collects a wide
variety of information across the economic, environmental, and social
and cultural domains. It is a major participant in the federal
statistical system, with an extensive statistical infrastructure and
skill base. As such, it is a viable option for taking a lead role in
developing a national system.
U.S. Census Bureau as an Example of a Public Organization:
The U.S. Census Bureau is a federal agency that has broad statutory
authority to collect and report on statistical information in the
economic, environmental, and social and cultural domains. A primary
responsibility of the Census Bureau is to conduct the decennial census
of Americans. This census has been conducted every 10 years since 1790.
In addition to the decennial census, it conducts more than 100 other
surveys every year. Federal law contains provisions to keep
confidential the information obtained by the Census Bureau.
The Census Bureau is under the jurisdiction of the Department of
Commerce. It is headed by a director, who is appointed by the
President, with the advice and consent of the Senate. There is no
specified term for the director under the statute. It has over 10,000
employees and is funded through federal appropriations. It can be paid
for special analytical products produced at the request of private or
public parties. The agency's workforce expands dramatically when the
decennial census is taken every 10 years--approximately 860,000
temporary workers were hired for the 2000 census. The Census Bureau is
not authorized to receive outside donations, or otherwise obtain
nonappropriated funds. However, it is specifically authorized to obtain
information from any other department, agency, or establishment of the
federal government or of the Government of the District of Columbia.
The agency has 12 regional offices located throughout the country.
Under Title 13, the Census Bureau has authority to access individual
data from other agencies and could use these data to create new
indicator series. One such example under development is the
Longitudinal Employer-Household Dynamics program, which uses data from
BLS, the Social Security Administration, and the Internal Revenue
Service to produce "workforce indicators."
The Census Bureau is working to make comprehensive information more
available to diverse audiences. For example, the American FactFinder is
an electronic system for access and dissemination of Census Bureau data
on the Internet. The American FactFinder offers prepackaged data
products and user-selected data tables and maps from the 2000 U.S.
census, the 1990 Census of Population and Housing, the 1997 Economic
Census, and the American Community Survey.
Option B: A Private Organization:
Another option would be for Congress to identify or charter a private
organization to develop and implement a national comprehensive key
indicator system. This organization could either be for-profit or
nonprofit. Because too strong a profit motive could significantly
affect a system's perceived or actual independence, credibility, and
legitimacy, a nonprofit organization is probably better suited
to develop a widely accessible system integrating diverse information
on the position and progress of the United States in the economic,
environmental, and social and cultural domains. Table 7 discusses the
option of a private organization in greater detail.
Table 7: Characteristics, Advantages, and Disadvantages of the Private
Organizational Option:
A private, not-for-profit organization chartered by Congress would
provide semiofficial status to a national system, yet set it apart from
the administration or Congress. A common type of congressionally
chartered organization that would be an appropriate and likely venue
for a national system is a Title 36 corporation, which is listed in
Title 36 of the U.S. Code. Noteworthy examples of Title 36 corporations
include the National Academy of Sciences and the National Academy of
Public Administration. Chartered corporations listed in Title 36 are
not agencies of the United States. For example, the corporation's debt
is not guaranteed, explicitly or implicitly, by the full faith and
credit of the United States;
Title 36 status for national organizations tends to provide an
"official" imprimatur to their activities, and may provide them some
degree of prestige and indirect financial benefits. Federal supervision
of congressionally chartered not-for-profit organizations is limited.
Among the few federal requirements for Title 36 corporations are that
they must have independent audits done annually and have the audit
reports submitted to Congress. The House Committee on the Judiciary
forwards all audits received to GAO for review. Title 36 organizations
can receive appropriated funds in the form of federal contracts,
grants-in-aid, and other forms of financial agreement with executive
departments and agencies. These organizations may also receive private
gifts and bequests, although they are not intended to operate for a
profit.
Advantages:
* A private organization would be more adaptable and exposed directly
to competitive market forces;
* A private organization would have a high degree of flexibility in
developing management and human capital policies;
* A private organization would offer the potential to develop
affiliations with a wide variety of groups;
* A private organization could have the ability to take actions subject
to fewer constraints than organizations that are subject to
governmental processes and politics;
* A private organization could solicit funds from a wider range of
potential donors and retain voluntary staff;
* A private organization could be more independent of the political
process than a purely public organization;
Disadvantages:
* A private organization would be separate from the management control
system of the federal government, which could compromise
accountability and integrity;
* A private organization would be disconnected from the political
appropriations and authorization processes, possibly making it more
difficult to encourage policy makers to support it;
* A private organization could have a smaller skill base and
infrastructure to start;
* Housing it in a private organization could lead to competition in
the marketplace, detracting from its status as a public good;
* Private organizations, unless they have highly diversified
stakeholders or strong institutional cultures and processes, can be as
subject to bias or politicization as government organizations.
Source: GAO.
[End of table]
To illustrate how a private organization chartered by Congress might
operate, we selected the National Academy of Sciences (NAS). NAS, which is
noted for its reputation of providing independent, scientifically
grounded analysis, advice, and recommendations to the nation, could
viably take a lead role in developing and implementing a national
system.[Footnote 86]
National Academy of Sciences as an Example of a Private Organization:
NAS is part of the National Academies and is itself a society of
distinguished scholars engaged in scientific and engineering research.
Specifically, it serves to investigate, examine, experiment, and report
on any subject of science or art when called upon to do so by any
department of the government. Collectively, four research organizations
are known as the National Academies, an umbrella structure for these
organizations. NAS was the first of the four to be
created, in 1863, and was later joined by the National Research Council
in 1916, the National Academy of Engineering (NAE) in 1964, and the
Institute of Medicine (IOM) in 1970. NAS is a congressionally
chartered, not-for-profit corporation under Title 36 of the U.S. Code.
NAS includes about 1,800 members, the NAE about 1,900, and the IOM
about 1,200 members. NAS, NAE, and IOM consist of members elected by
peers in recognition of distinguished achievement in their respective
fields.
Congress chartered NAS in March 1863. It is defined officially as a
private, not-for-profit, "self-perpetuating society of distinguished
scholars engaged in scientific and engineering research, upon the
authority of the charter granted to it by Congress." NAS is exempt from
federal taxation and does not receive direct federal appropriations for
its work. Studies undertaken for the government by NAS usually are
funded out of appropriations made available to federal agencies by
Congress.
Option C: A Public-Private Organization:
A third option for Congress is to employ a public-private organization,
which would combine attributes of both a federal government agency,
like the Census Bureau, and a private, not-for-profit organization,
like NAS. Table 8 describes the public-private option in greater
detail.
Table 8: Characteristics, Advantages, and Disadvantages of the Public-
Private Organizational Option:
The public-private option can vary tremendously in organizational
design, funding arrangements, and the existing laws that apply. In
fact, existing organizations, often referred to as "quasi-official
agencies," have little in common with each other, as they were all
created at different times for different reasons. As a result, it is
difficult to find common elements among them. Congress would have a
great deal of flexibility in chartering a public-private organization
and delegating various responsibilities to it for the purpose of
developing a national comprehensive key indicator system. However, such
"quasi-official agencies" are often subject to political and funding
pressures not dissimilar to those encountered by regular executive and
legislative branch agencies. In designing a public-private
organization, Congress would need to decide which existing laws would
apply to the organization, such as the Government Performance and
Results Act, the Privacy Act, or the Inspector General Act. Unlike the
strictly public or private options, for which organizational constructs
are well established, Congress would need to design a new, unique
public-private organization by selecting from a menu of available
features.
Advantages:
* A public-private organization could build on the existing capability
of the federal government but retain a degree of flexibility to adapt
to changing circumstances;
* Establishing a broader base that builds upon both public and private
interests could enhance the ability to form an effective constituency
in Congress;
* The mix of public and private interests could help balance
independence with crucial connections to the political process;
* A public-private organization could solicit donations and retain
volunteer staff;
Disadvantages:
* Because it requires a new organization, it faces difficulties
inherent in starting up;
* There are risks of competing or overlapping with existing federal
functions in an unconstructive fashion if it is not carefully
structured;
* Public-private organizations are not immune to political pressures
and would have to build institutional processes and a culture focused
on quality and independence.
Source: GAO.
[End of table]
In designing a public-private organization, Congress could look at a
list of diverse national organizations for ideas. One key example is
the Smithsonian Institution (Smithsonian), which is a unique hybrid
organization in that it is both publicly supported and privately
endowed, and has a mixture of federal and private employees. We
selected the Smithsonian to illustrate the tremendous amount of
flexibility Congress would have in establishing a public-private
partnership and some key characteristics of one--although it is not a
viable option for taking a lead role in a U.S. national system.
Smithsonian Institution as an Example of a Public-Private Organization:
The Smithsonian is identified in the U.S. Government Manual as a
"quasi-official agency" and its purposes are to conduct scientific and
scholarly research, share resources with communities throughout the
nation, and engage in educational programming and national and
international cooperative research.[Footnote 87] It is the world's
largest museum complex, comprising 14 museums and a national zoological
park in Washington, D.C., and two museums in New York City.
Started in 1846, the Smithsonian is a unique creation of Congress that
is both publicly supported and privately endowed. Specifically, the
Smithsonian is financed in part by trust funds and by federal
government appropriated funds.[Footnote 88] In fiscal year 2003, for
example, the Smithsonian's budget was $786 million, consisting of $559
million in federal appropriations and an estimated $227 million in
private trust funds. Congress does not provide direction or have
control over the trust funds. Federal funds are used for purposes
authorized by Congress, while trust funds are generally used more
freely for collection, acquisition, and the salaries of trust fund
employees. The Smithsonian is unusual in that it has two types of
employees: federal employees who are part of the civil service system
and nonfederal employees (or "trust fund employees"), whose salaries
and benefits are paid from the trust fund. In 1995, the Smithsonian had
6,537 employees--4,492 federal and 2,045 trust fund employees, along
with thousands of volunteers.
The Smithsonian Institution is administered by a Board of Regents and a
Secretary. The Board of Regents includes the Vice President of the
United States, the Chief Justice of the United States, three Senators,
three Members of the House of Representatives, and nine other persons (two Washington, D.C.
residents and seven residents of other states, but no two from the same
state). The President Pro Tempore of the Senate appoints the senators,
the Speaker of the House appoints the members of the House, and the
nine other persons are appointed by a joint resolution of the Congress.
Their terms of office range from 2 to 6 years. The Board appoints the
Secretary of the Smithsonian, who serves as the organization's chief
executive officer. To date, the Secretary has always been a trust fund
employee. Each member of the board is reimbursed for his or her
necessary traveling and other actual expenses, but is not paid a
salary.
Varying federal laws and attributes apply to the Smithsonian. For
example, it has a majority federal employee workforce, receives
representation from the United States Attorney's Office, enjoys
absolute governmental immunity in libel suits, receives a large amount
of federal funding, enjoys federal status in taxes and property
transfers, publishes its rules and regulations in the Federal Register
and Code of Federal Regulations, is required to have an inspector
general, and is subject to GAO audits.
Choosing a New or Existing Organization Carries Certain Advantages and
Disadvantages:
A further consideration in designing an organization to house a
comprehensive key indicator system is whether a new or existing entity
is most appropriate, and there are advantages and disadvantages to
each. Compared with an existing organization, the most significant
disadvantage of a new organization is the difficulty of incubating it:
funding, establishing networks internally and with key external
communities, and developing new operating policies and procedures are
all challenging in a start-up situation. In addition, it is more
difficult to build brand awareness, trust, and credibility. However,
a new organization offers the opportunity to begin fresh and to design
an organization that suits exactly the key design features that might
lead to a long-lasting, well-used indicator system.
On the other hand, at the national level, there may be few, if any,
existing organizations with the necessary size, scope, skill base, and
infrastructure to effectively support an effort of such complexity,
scale, and scope. Two of the organizations we selected for illustrative
purposes--the Census Bureau and the National Academy of Sciences--
appear to satisfy some of the characteristics necessary to support a
national indicator effort, although they may not be sufficient in all
regards. A few other organizations may lend themselves equally well to
a U.S. national indicator system, although not all features of these
organizations may be directly applicable. The advantages and
disadvantages of a new or existing organization are illustrated in
table 9.
Table 9: Advantages and Disadvantages of a New Versus an Existing
Organization:
New organization; Advantages:
* A new organization could be designed in alignment with a system's
purpose and target audiences;
* A new organization would be able to incorporate all design features
with few restrictions;
New organization; Disadvantages:
* A new organization could be difficult to establish and incubate;
* It could be difficult to obtain seed capital with no known
reputation;
* Establishing new networks of stakeholders and users is difficult;
* Establishing new operating policies and procedures is challenging;
* Hiring and training a new workforce can be difficult;
* Building trust and credibility from scratch is challenging.
Existing organization; Advantages:
* An existing organization would likely have well-established networks
of stakeholders and users;
* An existing organization would likely have an established reputation,
prestige, trust, and credibility;
* Funding sources and channels would have already been established;
* Operating policies and procedures would already be in place;
* An existing organization could leverage existing facilities and
information technology;
Existing organization; Disadvantages:
* Few existing organizations would have the necessary scope, skill
base, and infrastructure to support such an effort;
* The system would have to compete with an existing organization's
other projects and programs;
* The system would have to deal with policies and procedures already in
place;
* Organizers would have less flexibility to design a system that is
aligned with its purpose and target audiences;
* Organizers would have less flexibility to incorporate all design
features.
Source: GAO.
[End of table]
A New Public-Private Organization Could Offer Greater Flexibility to
Apply Design Features:
The public-private organizational option could provide Congress and
organizers with a great deal of flexibility to apply effectively and
more easily all of the key design features that we identified as
critical to a lasting, well-used indicator system: ensuring
independence and accountability, creating a broad-based governing
structure and actively involving key stakeholders, securing diversified
funding, designing effective development and implementation
strategies, identifying and obtaining needed indicators or data,
attracting and retaining staff with appropriate skills, implementing
marketing and communications strategies for target audiences, and
acquiring and managing information technology. It could also allow
Congress to combine the best features of both public and private
organizations while minimizing their disadvantages. Further, most of
the experts we interviewed believed that such an organization would be
the best venue for a national system. However, we found no significant
reason why the other options should be ruled out, especially as
potential starting points that might eventually help lay the foundation
for the creation of a public-private partnership.
A public-private organization appears to offer the best possibility of
customizing a design to interact formally with significant public and
private actors in the accountability, statistical, scientific and
research, business, media, leadership, finance, public interest, and
not-for-profit communities. It could combine the best features of
federal support and engagement, while minimizing restrictions of
federal management policies by selectively subjecting the organization
to only certain laws and controls, and allowing it to solicit a wider
variety of public and private funds while having the ability to retain
voluntary staff.
A public-private partnership could also build on existing capabilities
and retain flexibility to incorporate competitive human capital and
other policies, including fewer restrictions on compensation,
marketing, communications strategies, and acquiring and utilizing
innovative technology. Further, it offers a better balance of
independence and connection to the political process. Finally, a
public-private organization affords the best opportunity to construct a
governing structure with a balanced representation from the major
communities and topical areas of knowledge, thus helping to ensure the
organization's credibility and its ability to involve various public
and private entities in its oversight and evolution.
Others Considering Comprehensive Key Indicator Systems Have Similar
Options:
Unique aspects of national, state, and local laws will affect the
specific organizational forms that a comprehensive key indicator system
might take in any one jurisdiction. However, the three basic
alternative starting points and options analysis discussed for a U.S.
national system also apply elsewhere. As shown in table 10, all the
systems we studied had an organizational form that for the most part
fits the categories discussed. Again, any organizational type tends to
take on a public-private character in terms of the stakeholders with
which they informally or formally interact, the types of indicators
they use, and the funds they receive, among other things.
Table 10: Organizational Types of the Systems Studied for Our Review:
Publicly led: European Structural Indicators;
Publicly led: Hennepin County Community Indicators (Minneapolis);
Publicly led: King County Benchmarks (Washington);
Publicly led: Minnesota Milestones;
Publicly led: New York City Social Indicators;
Publicly led: North Carolina 20/20;
Publicly led: Oregon Benchmarks;
Publicly led: Portland Multnomah Benchmarks (Oregon);
Publicly led: Results Iowa;
Publicly led: Santa Monica Sustainable City Program (California);
Publicly led: Social Well-being of Vermonters;
Publicly led: United Kingdom Sustainable Development Indicators.
Privately led: Benchmarking Municipal Neighborhood Services in
Worcester (Massachusetts);
Privately led: Chicago Metropolis 2020;
Privately led: Compass Index of Sustainability (Orange County, Fla.);
Privately led: Index of Silicon Valley (California);
Privately led: Milwaukee Neighborhood Data Center;
Privately led: Neighborhood Facts (Denver);
Privately led: Sustainable Seattle.
Led by public-private partnership: Baltimore's Vital Signs;
Led by public-private partnership: Boston Indicators Project;
Led by public-private partnership: Burlington Legacy Project
(Vermont);
Led by public-private partnership: Community Atlas (Tampa area, Fla.);
Led by public-private partnership: German System of Social Indicators;
Led by public-private partnership: Indicators for Progress
(Jacksonville, Fla.);
Led by public-private partnership: Maine's Measures of Growth;
Led by public-private partnership: Santa Cruz County Assessment
Project (California);
Led by public-private partnership: Social Assets and Vulnerabilities
Indicators (Indianapolis);
Led by public-private partnership: State of the Region (Southern
California).
Source: GAO.
[End of table]
An important advantage for officials at the local level is that they
have many different comparable entities around the country to learn
from in deciding how to construct systems of their own, as well as
organized communities of practice that can help translate general
lessons into specific guidance for a particular jurisdiction.
[End of section]
Chapter 5: Observations and Next Steps:
Observations:
We have identified several areas where we believe that observations are
merited and where we can note certain potential implications. These
observations are supported by our work and the work of others as
reinforced in discussions with many experts and practitioners in the
field. Nevertheless, it is important to take into account that even the
smallest indicator system represents a complex interaction among
people, institutions, sectors, culture, and other contextual factors,
making such systems difficult to evaluate.
A Comprehensive Key Indicator System for the United States Merits
Serious Discussion:
The nation as a whole confronts profound challenges and opportunities
resulting from a variety of factors, including changing security
threats, dramatic shifts in demographic patterns, the multidimensional
processes of globalization, and the accelerating pace of technological
change. However, public debate over the nation's agenda is often based
on information that is limited, fragmented, and incomplete. Difficult
decisions to confront these challenges and opportunities require
reliable, useful, and shared sources of information that are readily
accessible to citizens, advocates, policymakers, and the media.
The United States already has a large supply of data and indicators in
topical areas. So, the natural question asked by many who are initially
exposed to the idea of a national comprehensive key indicator system
is: If we have so much information, on so many issues, from a variety
of different points of view, why do we need a national comprehensive
key indicator system? The common sense answer to this question is that
having information on all the parts is not a substitute for looking at
the whole, whether in life, business, science, or self-governance and
politics. What Abraham Lincoln once said is truer than ever today: "If
we could first know where we are, and whither we are tending, we could
better judge what to do, and how to do it."
The fact that it is possible to get a great deal of information on U.S.
society if one is skilled enough to seek it out, collect it, and
analyze it is helpful if one's purpose is to solve a specific problem or
answer a specific question. However, that same large amount of
information in many different places and many different forms is a
hindrance if one's purpose is to take stock of all the problems and
opportunities a jurisdiction faces.
Looking regularly at the most important aspects of the whole is
critical to assessing how we are doing and whether we are moving toward
important aims and aspirations. The same logic that explains why we go
for annual check-ups, where physicians evaluate common key indicators of
individual health (e.g., blood pressure or cholesterol), also explains
the essential rationale used by the systems we studied. These
indicators help identify the most important problems, help set
priorities to address them, highlight areas where more information is
needed, communicate a perspective about overall well-being, and inform
us about potential choices. As our work has shown, this logic is now
being extended from neighborhoods to communities, to states, nations,
and to the global level. Without a key indicator system for the nation,
it is difficult to see the relationships among issues, frame problems
in an overall context, or assess the country's position and progress as
a whole.
A comprehensive key indicator system could be used in a variety of ways
to better inform constituencies. For example, businesses could use the
system to access data to help inform market strategies, or individuals
could better understand areas of national life that could inform their
educational or career choices. These constituencies and others, such as
policy makers, the media, and specific communities of interest (e.g.,
the disabled), could use a national comprehensive key indicator system
to:
* highlight areas in which progress has been made in improving people's
living conditions;
* connect debates about the relative merits of competing demands to
reliable data about actual conditions to help determine priorities and
make difficult choices among competing agendas;
* provide information about changes over time, which would contribute
to assessments about the impact of particular interventions and
policies, thereby providing greater accountability and learning;
* facilitate comparisons within or among the states, or of the nation as
a whole with other countries, which are central to understanding the U.S.
role in the global community and informing decisions about how to best
address emerging issues;
* accelerate the identification of important gaps in the nation's
knowledge of itself and the quality of that knowledge through regular
collaboration and dialogue with other comprehensive key and topical
indicator systems;
* expand the level of knowledge throughout the country as users of
comprehensive key indicator systems pursue more detailed information
from topical indicator systems;
* improve the degree of fact-based consensus on common aspirations,
which could help shift scarce time, energy, and resources from debating
facts and aims to discussing priorities and building bodies of evidence
for the most effective solutions;
* allow various individuals and institutions within a particular
jurisdiction to see themselves in the context of a larger social unit
(e.g., how state issues interrelate with national issues), to compare
themselves to other jurisdictions (e.g., states comparing themselves
with others), and in relation to other communities and neighborhoods;
* if implemented electronically via the World Wide Web, provide many
more people and institutions around the country an accessible and
usable "window" into the nation's critical sources of data, thus
increasing the return on the large investments already made and
leveraging ongoing investments to collect more data more frequently;
* at the federal level, inform a much-needed re-examination of program
effectiveness and the mandated creation of a governmentwide performance
plan.
To take one example, a debate is now emerging on how the nation will
respond to its long-term fiscal challenges. As we and other
experts have pointed out, there is a growing gap between the projected
cost of providing currently promised benefits under Social
Security, Medicare, and certain other federal programs and the projected
financial resources that will be available to deliver them. This gap is
affected by predictable changes in the demographics of the U.S.
population. Resolving such issues will involve many different parties
defining, analyzing, modeling, and interpreting statistical indicators
on demographics, incomes, jobs, savings, health care, taxation, and a
variety of other issues. Providing a common base of facts from many
different topical areas on a strategic issue for the country such as
this one illustrates the value that a national comprehensive key
indicator system could provide.
A National Effort May Face Significant Challenges:
If Congress considers supporting the development of a comprehensive key
indicator system for the nation as a whole, it must carefully decide
upon the best direction to take such a large-scale, challenging effort.
If designed and executed well, such a national system could have wide
impact when American citizens, leaders, and institutions pay attention
to it, access it, and use key indicators to inform their personal and
professional choices. Building in the design features discussed in this
report, as well as the flexibility to learn from and adopt innovative
approaches, would be important. However, it is difficult to ascertain
how certain design features and organizational options would play out
in the context of a system for the entire nation.
Alternatively, if an effort is poorly planned and implemented, it could
absorb scarce time and resources, fail to meet expectations, and make it
more difficult to create such a system in the future. Although any U.S.
system will be imperfect from the start and continuously evolving, a
certain threshold of quality will be important in achieving the
relevance, legitimacy, and utility needed to build momentum and
continuously improve over time.
The challenges of developing and implementing a comprehensive key
indicator system would be great at the national level in the United
States due to a range of significant factors. Because of the scale and
complexity of a national effort, organizers of a national system should
take into account--and develop contingency plans to address--the
following major challenges in addition to those already noted for
smaller-scale efforts.
* Securing and maintaining adequate and stable funding could be
difficult in the current environment of existing and emerging fiscal
challenges and the need to address multiple national priorities.
* Deciding on the purpose and audience will require significant debate.
From one point of view, some common ground on the most important aims
for the nation would have to be found initially, while a broader-based
consensus would evolve over many years. From another point of view, the
system could be designed around the idea of multiple audiences and
simply identify a broad range of important aims.
* Building an audience would require overcoming inertia and some
entrenched interests. Because national leaders have traditionally
considered information and made policies in discrete topical areas, a
national comprehensive key indicator system would not necessarily have
a built-in audience. This increases the difficulty of encouraging
leaders to think about national issues in a comprehensive framework and
use a comprehensive key indicator system for doing so.
* Agreeing on the types and number of indicators would likely require a
long, contentious process to adequately involve and consider the
diverse views of a wide range of public and private stakeholders.
Highlighting certain data in a key indicator system could have
the negative consequences of upsetting certain constituencies and
possibly eroding support for collecting data.
* Obtaining consistent and comparable indicators from a vast array of
sources would be challenging at all levels due to the different ways in
which information is collected, organized, updated, and disseminated,
along with varying degrees of quality and reliability. The long-term
utility of a national system would be significantly enhanced by--and
perhaps even depend on--the ability to:
* disaggregate indicators from a larger scale (e.g., the average
unemployment rate for the nation) to smaller scales of society where
action can be taken (e.g., the unemployment rate in one's city or
community) and:
* aggregate and/or compare indicators from smaller scales (e.g.,
educational achievement in a school district) to larger scales (e.g.,
educational achievement in the United States as compared with other
nations). A simple illustrative sketch of such aggregation and
disaggregation appears after this list.
* Because there are some areas where data simply may not exist (e.g.,
certain aspects of the environment) or are very difficult to measure
(e.g., certain aspects of culture), a U.S. national indicator system
may have an implicit bias toward information that is quantitative and
readily measurable. From the outset, this would have
to be recognized by acknowledging measurement limitations and knowledge
gaps. Poor indicator selection or lack of attention to quality, in the
context of a highly visible system, raises the stakes in terms of
misinformation or unintended consequences that might arise.
* Developing new indicators requires the statutory authority to access
the necessary information and should include the legal responsibility
to protect privacy.
* Leveraging costly innovative technology to provide an online, user-
friendly resource would be crucial for the success of such an effort.
Implementing effective human capital management strategies, such as
recruiting and retaining advanced technical and scientific staff, is a
key element in the success of any high-performing organization or
national initiative like this one.
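As a concrete, if simplified, picture of the aggregation and disaggregation
challenge noted in the list above, the following sketch (written in Python,
with invented area names and figures) shows how an indicator such as the
unemployment rate rolls up from smaller scales to a national figure as a
labor-force-weighted combination, and how the same indicator can be reported
back at the scale where action is taken.

    # Illustrative sketch only: all figures are invented.
    areas = {
        # area: (labor force, unemployed)
        "City A": (500_000, 30_000),
        "City B": (200_000, 8_000),
        "Rest of nation": (9_300_000, 520_000),
    }

    total_labor_force = sum(lf for lf, _ in areas.values())
    total_unemployed = sum(u for _, u in areas.values())

    # Aggregation: the national rate is total unemployed over total labor
    # force, i.e., a labor-force-weighted combination of the area rates,
    # not a simple average of them.
    national_rate = total_unemployed / total_labor_force
    print(f"National unemployment rate: {national_rate:.1%}")

    # Disaggregation: the same indicator reported for each smaller scale.
    for area, (lf, u) in areas.items():
        print(f"{area}: {u / lf:.1%}")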
Key Indicator Systems Could Help Better Inform the Nation at Many
Levels:
One of our nation's distinguishing characteristics is unity built out
of diversity. This diversity finds its expression in the multiple
levels and branches of government, the different sectors of economic
and social activity (i.e., business, nonprofit and government), the
varied geographic regions, and the widely ranging ethnic, professional,
cultural, and other communities of interest. Another way of putting
this is that every individual plays multiple roles in U.S. society
(e.g., resident of a city and state, member of an interest group,
or employee working in a sector). In each role, the information needs of
individuals will differ significantly. Therefore, it is vital to
recognize that a key indicator system for the entire nation would
either:
* express only U.S.-level indicators (e.g., the average national
unemployment rate) and coordinate with these elements of our society as
they develop indicator systems from their own point of view, or:
* include a capability for the people who use the system to obtain not
only U.S.-level information, but also information for their community,
sector, city, state, or region (e.g., state demographics or
unemployment rates for metropolitan areas).
The nation's leaders and concerned citizens are realizing they require
better knowledge of what is happening and where we are going in order to support
improved public choices. Although the constituent elements of U.S.
society view emerging challenges and opportunities, as well as their
choices, from unique and varied points of view, the time may be at hand
when it is feasible for many different elements of society to organize
information into comprehensive key indicator systems. As this report
has demonstrated, citizens, public and private sector groups, and their
leaders are encouraging and creating a better overall understanding of
their communities, cities, states, and the nation, thereby strengthening our society's
competitive advantage and capacity to define and respond to challenges
and opportunities.
The nation as a whole could benefit from additional elements of society
opting to develop and implement key indicator systems to better
understand economic, environmental, and social and cultural conditions;
trends; levels of progress; and emerging challenges. This could include
the identification of knowledge gaps and development of new indicators,
identification of trends, and generally a richer information base. A
wider range of creative and successful individual efforts would provide
a fuller set of experiences and lessons learned so that the nation
could learn from successes and avoid common mistakes. Moreover, at the
federal level, a comprehensive system could inform a much-needed re-
examination of the base of existing programs, policies, functions, and
activities. It could also inform the mandated creation of a
governmentwide performance plan.
The country can learn a great deal from work already being done. There
are likely to be significant gains in efficiency and effectiveness if
these systems learn how to coordinate, share, and leverage
experiences and lessons learned. There are critical interrelationships
among such systems that need to be recognized and better understood.
Many public policies are implemented primarily at the local level,
where information is translated into action in areas such as schools,
jobs, and public safety. Thus, a primary question about a national
system for anyone from a local point of view will be: can it provide
specific or contextual information, at an appropriate level of
disaggregation (e.g., neighborhoods, census tracts, or blocks), that
can help my community be better informed?
In addition to pursuing information that can be disaggregated below the
national level to elements of U.S. society, it is also important to
aggregate information above the national level to obtain a fuller
understanding of our nation's position and progress in a global
environment and an increasingly globalized economy and society. To see
U.S. issues in a global context and to facilitate comparisons with
other nations on issues like education, innovation, or health care is
likely to require assiduous efforts to develop indicators that can be
aggregated at the supranational or global levels, as well as indicators
providing comparable information across countries. Many entities within
and outside of the United States have been hard at work for years on
developing and implementing such indicator systems, especially in the
international statistical and scientific communities. Their lessons
learned would provide a building block for efforts to develop key
indicator systems throughout the United States.
Next Steps:
It appears that in addition to Congress and the executive branch, users
and providers of information in jurisdictions throughout the United
States (e.g., cities, counties, states, and regions) could benefit from
the findings in this report. Our work in this area may also be of value
to audiences in other nations. Accordingly, our suggested next steps
are addressed both specifically to Congress and more generally to these
broader audiences.
Encourage Awareness and Education:
A substantial effort should be made by various interested parties to
make leaders, professionals, and the public more aware of comprehensive
key indicator systems and to understand the potential implications for
their jurisdiction of interest. Such understandings and awareness could
underpin a broader and more informed dialogue on what current systems
are contributing and what new systems might contribute to informing our
nation. Most importantly, these systems have emerged and endured
because concerned citizens and institutions are beginning to come to
grips with how to define and make choices on the most important issues
and opportunities they face, based on common agreement about their
societal aspirations, and a single source of shared factual knowledge.
Specific actions to encourage awareness and education could include the
following:
* Convening workshops and briefings for public and private sector
leaders.
* Holding public hearings around the country to highlight alternative
points of view on potential costs and benefits, desired uses, risks,
and possibilities.
* Developing a Web-based national clearinghouse on key indicator
systems so that interested parties can conveniently access published
documents or link directly to Web sites to familiarize themselves with
what is currently available.
* Strengthening partnerships between key indicator systems and relevant
media, private information providers, and other organizations that have
an interest in the dissemination of quality information.
Pursue Additional Research:
Even though some comprehensive indicator systems have been in existence
for decades, developments over the last decade in information
technology (e.g., the World Wide Web) and information management (e.g.,
open systems architectures with enhanced data flexibility) have created
significant opportunities to build and sustain key indicator systems.
In theory, the possibilities for interested parties to learn from and
use public information have increased at the same time as the up-front
capital investments required for data aggregation and the costs of
maintenance and dissemination continue to decline.
The formal research on key indicator systems has still left many
questions unanswered and, therefore, more research is essential to
reducing the risks of failure and increasing the probabilities of
success for undertaking such an endeavor. Among these outstanding
issues are the following four major categories of questions.
* How much is really known about the design of key indicator systems?
For instance, does existing research on topical indicator systems
provide lessons for designing them? Is there a predictable model that
shows how a well-designed system would develop over time?
* How can key indicator systems be effectively implemented? What are
the major differences between implementing one for a small population
group as opposed to a large one? How have people used these
systems and for what purposes?
* What value do key indicator systems provide? For example, how much
time, money, and effort are required to create them, and are they worth
it compared to other needed investments? How does one define the
success or failure of a system?
* How significant are key indicator systems for market-based democratic
governance? For example, could they change how policy-makers, nonprofit
foundations, and even citizens set priorities and make decisions,
ranging from resource allocations to career and voting choices?
As it is becoming more feasible for jurisdictions to create such
systems, formal research should accelerate. Taking steps to provide
support for such research could substantially aid those involved in
considering or designing and implementing comprehensive key indicator
systems. Specific actions could include the following.
* Coordinating among various interested parties to identify a common
research agenda for the field of key indicator systems to help increase
the synergy of existing work and guide the direction of future research
efforts.
* Creating a comprehensive inventory of past and current research
efforts on key indicator systems, including those in other countries.
* Identifying major gaps in the nation's knowledge about key issues and
opportunities that can be brought to the attention of leaders and
policymakers.
* Generating working prototypes of what a key national indicator system
for the United States would look like to flush out the risks and
opportunities involved in building such a system.
* Investigating questions that may be specific to the development of a
national system for the United States. For example, what will be the
respective roles of the federal government, state and local
governments, business, and the nonprofit sector in the system? How will key indicator systems
developed at different levels of society complement one another?
Support New Initiatives to Develop Key Indicator Systems:
A high degree of innovation can take place at local levels, which can
help build the nation's body of experience. Local efforts have been
particularly creative, for example, in developing indicator systems,
such as those focused on quality of life issues, that cut across more
traditional topical areas. One possible way to begin creating and
developing more comprehensive key indicator systems may be to
institutionalize a network or networks of interested practitioners as a
"community of practice." Then, as people become more educated about
these systems, they would have an organized resource available to tap
into accumulated expertise. Such a community of practice or a
clearinghouse could help speed learning curves, reduce risks, and avoid
reinventing solutions. Specific actions could include the following:
* Developing a national community of practice of those who study and
implement key indicator systems at all levels to keep practitioners up
to date on the latest research.
* Participating in an international community of practice, like the
first World Indicators Forum being sponsored by the OECD, to learn from
what is going on abroad and share the U.S. experience with others.
* Identifying criteria to define what success means for a system, as
well as specific best practices and evaluation techniques, all of which could
be included in a sourcebook or practice guide for indicator system
development that would distill existing knowledge for the benefit of
those new to the field.
* Considering funding an effort within the federal statistical system,
under the aegis of the Interagency Council on Federal Statistics or the
Census Bureau, to aggregate a common set of key official statistics--
based on the advice of an independent panel of experts--that would
build on the lessons of the White House Briefing Room, American
FactFinder, and FedStats. Although such a system could not include
private sector sources, it would represent a major advance toward a
national comprehensive key indicator system.
Widen the Dialogue on Options for a U.S. National System:
At this stage, it is important for a broader dialogue to begin that
includes Congress, the administration, and other major suppliers,
users, and providers of information. Such a dialogue could provide an
avenue for exploring complex issues, such as the potential benefits,
costs, and risks involved, in a meaningful way. Involving interested
members of Congress and the executive branch would be critical to
ensuring collaboration across boundaries, facilitating ongoing
attention to strategically leveraging national information assets, and
positioning the nation to better meet emerging challenges and take
advantage of upcoming opportunities. Specific actions could include the
following.
* Holding public hearings or private forums to discuss and debate options
pertaining to a key national indicator system for the United States.
* Convening a national conference of practitioners and potential
stakeholders to (a) share knowledge on existing systems, (b) debate and
discuss whether and how to develop a U.S. system, and (c) help identify
the major topical areas that would be included in a possible national
system.
* Charging the Interagency Council on Federal Statistics with
coordinating a series of discussions between those developing
comprehensive key indicator systems and those who operate topical
systems on issues of mutual concern and interest.
* Encouraging discussions between the private groups now undertaking the
development of a national comprehensive key indicator system, members
of Congress, and executive branch officials on the role of the federal
government in investigating and potentially supporting such a system.
[End of section]
Appendixes:
Appendix I: U.S. National Topical Indicator Systems Included in This
Study:
Topical area and name of indicator system: Economy: Business Cycle
Indicators;
History and purpose: First published for government use in 1961 and for
public use in 1968, and currently updated monthly, its purpose is to
forecast and analyze the onset of and recovery from economic
recessions;
Description of indicator system: This system provides the official U.S.
composite leading, coincident, and lagging indexes (three summary
statistics for the U.S. economy). The indexes represent key elements of
an analytic system designed to signal peaks and troughs in the business
cycle, each consisting of 4 to 10 individual indicator series.[A];
Managing organization and key stakeholders: Managed by the Conference
Board (CB) since 1995.[B]; CB has an advisory panel of academic,
government, and private sector experts providing guidance in all areas
relating to the Business Cycle Indicators. CB gathers data from many
sources, about 80 percent from official U.S. government sources, such
as the Bureau of Economic Analysis, the U.S. Census Bureau, the Bureau
of Labor Statistics, and the Federal Reserve Board; and the remainder
from private sources, including Standard & Poor's and the University
of Michigan.
Topical area and name of indicator system: Science and Engineering:
Science and Engineering Indicators;
History and purpose: First published in 1973 and updated every two
years, its purpose is to provide information on the status of U.S.
science, engineering, and technology;
Description of indicator system: This system provides a broad-based
set of quantitative information about U.S. science, engineering, and
technology. Indicators are grouped under eight topical headings, such
as science and engineering labor force; and industry, technology, and
the global marketplace.[C];
Managing organization and key stakeholders: Managed by the National
Science Board[D] (the board of the National Science Foundation);
Members of the National Science Board are selected to be broadly
representative of the views of national science and engineering
leadership based on their distinguished service in these areas.
Topical area and name of indicator system: Health: Healthy People;
History and purpose: First issued in 1979 and updated in 1980, it has
been revised once every decade since then. Its purpose is to provide a
comprehensive set of disease prevention and health promotion objectives
for the nation to achieve, and indicators with which to measure
progress toward them.[E];
Description of indicator system: Provides a national approach to health
improvement that integrates a comprehensive system of two overarching
goals as well as objectives in 28 focus areas, such as cancer, for
improving Americans' health. The overarching goals are to increase
quality and years of healthy life and eliminate health disparities.
Healthy People currently focuses on 10 leading health indicators to
highlight major health priorities, including physical activity.[F];
Managing organization and key stakeholders: Managed by the Department
of Health and Human Services (HHS); HHS uses a participatory process to
stimulate broad multisector involvement by federal, state, local, and
community agencies, as well as the private sector, through the Healthy
People Consortium and its local chapters.[G]; Most states have
replicated the Healthy People process and have their own plans.
Topical area and name of indicator system: Children and Families:
America's Children: Key National Indicators of Well-Being;
History and purpose: Initiated in 1997, its purpose is to provide
comprehensive information on the health and well-being of children. The
full report is updated every two years with brief updates on select
indicators issued in between. All data on its Web site are updated
annually;
Description of indicator system: Provides a comprehensive set of 25 key
indicators measuring critical aspects of children's lives, grouped in
four sections: economic security, health, behavior and social
environment, and education. Also includes nine "contextual measures"
describing the population, family, and environmental context in which
children are living.[H];
Managing organization and key stakeholders: Managed by the Federal
Interagency Forum on Child and Family Statistics, which consists of 20
federal agencies that deal with children's issues.[I].
Topical area and name of indicator system: Aging: Older Americans: Key
Indicators of Well-Being;
History and purpose: Initiated in 2000 with occasional planned updates,
its purpose is to track the health and well-being of Americans aged 65
and over. (Next update is expected in November 2004.);
Description of indicator system: Provides a set of 31 key indicators to
measure critical aspects of older Americans' lives. Indicators are
presented in five sections: population, economics, health status,
health risks and behaviors, and health care.[J];
Managing organization and key stakeholders: Managed by the Federal
Interagency Forum on Aging-Related Statistics, which consists of
numerous federal agencies that deal with aging issues.[K].
Source: GAO analysis.
[A] The composite business cycle indexes include 21 component series.
The 10 leading index indicators are average weekly hours,
manufacturing; average weekly initial claims for unemployment
insurance; manufacturers' new orders, consumer goods and materials;
vendor performance, slower deliveries diffusion index; manufacturers'
new orders, nondefense capital goods; building permits, new private
housing units; stock prices, 500 common stocks; money supply (M2);
interest rate spread, 10-year Treasury bonds less federal funds
(percentage); and index of consumer expectations. The 4 coincident
index indicators are employees on nonagricultural payrolls; personal
income less transfer payments; index of industrial production; and
manufacturing and trade sales. The 7 lagging index indicators are
average duration of unemployment; inventories to sales ratio,
manufacturing and trade; change in labor cost per unit of output,
manufacturing (percentage); average prime rate charged by banks
(percentage); commercial and industrial loans outstanding; consumer
installment credit outstanding to personal income ratio; and change in
consumer price index for services (percentage). Historically, cyclical
turning points in the leading index occur before, turning points in the
coincident index occur at about the same time as, and turning points in
the lagging index occur after those in aggregate economic activity.
[B] The CB is a private research and business membership group of over
2,700 corporate and other members that was chosen by the Bureau of
Economic Analysis (BEA), after a bidding process, to be custodian of
the official Business Cycle Indicators. Assuming responsibility for
computing them was deemed, by the CB, to support its mission to improve
the business enterprise system and to enhance the contribution of
business to society. The CB's first independent release was on January
17, 1996. From October through December 1995, CB and BEA released the
indicators jointly.
[C] The report consists of two volumes. Volume 1 contains topical
analytic essays on key trends in science and technology. Volume 2 is an
appendix of tables that contains 225 statistical measures. Both the
essays and the statistical measures are grouped under the same eight
headings: (1) elementary and secondary education; (2) higher education
in science and engineering; (3) science and engineering labor force;
(4) U.S. and international research and development (R&D): funds and
technology linkages; (5) academic R&D; (6) industry, technology, and the
global marketplace; (7) science and technology: public attitudes and
understanding; and (8) state indicators.
[D] The National Science Board is responsible, under amendments to the
National Science Foundation Act of 1950, for developing this biennial
report to be rendered to the President for submission to Congress.
[E] Originally published in 1979 as Healthy People: The Surgeon
General's Report, and updated in 1980 as Promoting Health/Preventing
Disease: Objectives for the Nation, and in 1990 as Healthy People 2000:
National Health Promotion and Disease Prevention Objectives.
[F] Healthy People 2010: Understanding and Improving Health (issued in
2000), sets forth two overarching goals--(1) increase quality and years
of healthy life and (2) eliminate health disparities--with 467 specific
objectives to improve the health of Americans that are organized into
28 focus areas. It also consists of 10 leading health indicators to
highlight progress toward major health priorities. The focus areas are:
access to quality health care; arthritis, osteoporosis, and chronic
back conditions; cancer; chronic kidney disease; diabetes; disability
and secondary conditions; educational and community-based programs;
environmental health; family planning; food safety; health
communication; heart disease and stroke; HIV; immunization and
infectious diseases; injury and violence prevention; maternal, infant,
and child health; medical product safety; mental health and mental
disorders; nutrition and overweight; occupational safety and health;
oral health; physical activity and fitness; public health
infrastructure; respiratory diseases; sexually transmitted diseases;
substance abuse; tobacco use; and vision and hearing. The 10 leading
indicators are physical activity, overweight and obesity, tobacco use,
substance abuse, responsible sexual behavior, mental health, injury and
violence, environmental quality, immunization, and access to health
care.
[G] A central principle of Healthy People is its participatory process,
which stimulates broad multisector involvement in defining and
implementing objectives by federal, state, local, and community
agencies as well as private, voluntary, and other community-based
organizations. At the federal level, lead responsibility for each of
the 28 focus areas is assigned to an agency of HHS's Public Health
Service. These lead agencies have responsibility for engaging multiple
agencies in attaining objectives, forging partnerships with states and
the private and voluntary sectors, and monitoring progress by
collecting necessary data. States are encouraged to develop state-
specific goals and objectives tailored to their individual needs and
conditions, and at the local level, model standards, linked to Healthy
People objectives, provide public health agencies with tools to
determine community health issues.
[H] Nine contextual measures describe the changing population, family,
and environmental context in which children are living, and 25
indicators depict the well-being of children in the areas of economic
security, health, behavior and social environment, and education. The
indicators, grouped by domain, are: (1) population and family
characteristics: child population, children as a proportion of the
population, racial and ethnic composition, children of at least one
foreign-born parent, difficulty speaking English, family structure and
children's living arrangements, births to unmarried women, child care,
and children's environments; (2) indicators of children's well-being:
(a) economic security indicators: child poverty and family income,
secure parental employment, housing problems, food security and diet
quality, and access to health care; (b) health indicators: general
health status, activity limitation, overweight, childhood
immunization, low birth weight, infant mortality, child mortality,
adolescent mortality, and adolescent births; (c) behavior and social
environment indicators: regular cigarette smoking, alcohol use, illicit
drug use, and youth victims and perpetrators of serious violent crimes;
(d) education indicators: family reading to young children, early
childhood care and education, mathematics and reading achievement, high
school academic course taking, high school completion, youth neither
enrolled in school nor working, and higher education.
[I] Members of the Federal Interagency Forum on Child and Family
Statistics include officials from 20 federal agencies, including the
Departments of Agriculture (Food and Nutrition Service), Commerce
(Census Bureau), Defense (Defense Manpower Data Center), Education
(National Center for Education Statistics), HHS (Administration for
Children and Families, Agency for Healthcare Research and Quality,
National Center for Health Statistics and three other agencies),
Housing and Urban Development (Office of Policy Development and
Research), Justice (Bureau of Justice Statistics and two other
agencies), Labor (Bureau of Labor Statistics and one other agency), and
Transportation (National Highway Traffic Safety Administration), plus
the Environmental Protection Agency (Office of Environmental
Information), National Science Foundation (Division of Science
Resources Statistics), and the Office of Management and Budget.
[J] The 31 indicators, grouped into five areas, are: (1) population: number of
older Americans, racial and ethnic composition, marital status,
educational attainment, living arrangements; (2) economics: poverty,
income distribution, sources of income, net worth, participation in the
labor force, and housing expenditures; (3) health status: life
expectancy, mortality, chronic health conditions, memory impairment,
depressive symptoms, self-rated health status, and disability; (4)
health risks and behaviors: social activity, sedentary lifestyle,
vaccinations, mammography, dietary quality, and criminal
victimization; and (5) health care: health care expenditures,
components of health care expenditures, out-of-pocket health care
expenditures, access to health care, use of health care services,
nursing home utilization, and home care.
[K] Members of the Federal Interagency Forum on Aging-Related
Statistics include the original core agencies--U.S. Census Bureau,
HHS's National Center for Health Statistics and the National Institute
on Aging--along with HHS's Administration on Aging, Centers for
Medicare & Medicaid Services, and the Office of the Assistant Secretary
for Planning and Evaluation; Bureau of Labor Statistics; Social
Security Administration; and the Office of Management and Budget.
[End of table]
[End of section]
Appendix II: Overview of Social and Cultural Indicators:
The United States does not have a national report on key indicators
that covers the entire social and cultural domain, which includes
health, education, and public safety, among other topical areas.
Moreover, there is no regular, broad reporting at the national level
that looks across various social and cultural indicators to describe
the overall social and cultural conditions of the nation, nor are there
official mechanisms to review, analyze, and interpret diverse social
and cultural indicators as they relate to each other and their
implications for society.
However, an array of diverse social and cultural indicators can be
found in the United States as parts of topical area systems at the
national level, such as Healthy People. In addition, many comprehensive
key indicator systems below the national level in the United States
report on a host of key social and cultural indicators, as do a variety
of systems outside the nation, including numerous European countries
and multinational and supranational entities, such as the United
Nations Development Programme's reporting on the Human Development
Index in the Human Development Report, which measures countries'
overall achievements in longevity, knowledge, and standard of
living.[Footnote 89] These systems define the social and cultural
domain in different ways: some define the domain narrowly, excluding
indicators about the economy and the environment, while others define
it broadly, to encompass society as a whole, including economic and
environmental indicators.
difficult to reach consensus on social and cultural issues in part due
to the value judgments that surround them. As described in chapter 1,
in the past there was a decade-long effort in the United States to
produce a national societal indicators report, but that effort did not
endure beyond the early 1980s and has not been attempted since.
National-Level Indicator Systems with a Focus on Social and Cultural
Information:
The United States has national-level indicator systems and statistical
volumes that report on select indicators in specific topical areas
within the social and cultural domain, although there is no national-
level indicator report covering this entire domain. An example is
Healthy People, which is led by the Department of Health and Human
Services (HHS) and includes 10 leading health indicators that are used
to measure the health of the nation over a 10-year period.[Footnote 90]
Each of the 10 Leading Health Indicators has one or more objectives
associated with it, which are intended to reflect the major health
concerns in the United States at the beginning of the 21st century.
Leading health indicators include physical activity, overweight and
obesity, tobacco use, substance abuse, and responsible sexual behavior.
Another example of an indicator system is the Federal Interagency Forum
on Aging-Related Statistics. This forum was initially established in
1986 with the goal of bringing together federal agencies to collaborate
on improving aging-related indicators and includes the National
Institute on Aging, the National Center for Health Statistics, the U.S.
Census Bureau, the Administration on Aging, the Social Security
Administration, the Centers for Medicare & Medicaid Services, and the
Office of Management and Budget, among other agencies. The forum
published its first report in 2000, Older Americans 2000: Key
Indicators of Well-Being, which focuses on important indicators in the
lives of older people along topics such as population, economics,
health status, health risks and behaviors, and health care.[Footnote
91]
In addition, various federal agencies produce periodic reports that
present indicators of national trends and social and cultural
conditions in American society, such as The Condition of Education,
which is produced by the National Center for Education Statistics at
the Department of Education,[Footnote 92] and Crime in the United
States, which is produced by the Federal Bureau of Investigation (FBI)
based on the Uniform Crime Reporting system.[Footnote 93] The Census
Bureau also collects data and produces many publications that pertain
to social and cultural issues. For example, the Census Bureau
administers the following surveys: the National Crime Victimization
Survey, conducted on behalf of the Bureau of Justice Statistics, which
collects information from households on the frequency, characteristics,
and impact of criminal victimization; the Current Population Survey,
which is conducted for the Bureau of Labor Statistics and provides the
primary source of information on labor force characteristics of the
U.S. population; and the Survey of Income and Program Participation,
which collects information on income, the labor force, program
participation and eligibility, and demographics to measure the
effectiveness of existing government programs, estimate future costs
and coverage for government programs (such as Food Stamps), and provide
improved income distribution statistics. Further, in 2002, the Census
Bureau conducted the Survey of Public Participation in the Arts. More
than 17,000 adults age 18 or older were asked whether they had read novels,
short stories, poetry, or plays in the last 12 months that were not
required for work or school. Similar surveys were conducted in 1982 and
1992.
Some private research organizations and policy institutes produce
national-level reports on social and cultural indicators in various
subject areas in the United States. For example, the Annie E. Casey
Foundation, a private charitable organization, produces the annual Kids
Count Data Book, which presents national- and state-level indicators on
the status of America's children.[Footnote 94] The report's key
indicators reflect a wide range of factors affecting the well-being of
children, such as health, income, and educational attainment. In
addition, HHS's Office of the Assistant Secretary for Planning and
Evaluation recently provided funding so that the not-for-profit
organization Child Trends, Inc., could produce a forthcoming indicators
report describing the conditions of children and families in the United
States as a whole, entitled Social Indicators: Measures of Children,
Family, and Community Connections. This report measures family
conditions and outcomes along the lines of several "domains," including
family structure, school involvement and civic engagement, and social
connections.
Social and Cultural Indicators as Parts of Comprehensive Key Indicators
Systems:
Many comprehensive key indicator systems at the subnational level in
the United States report on a host of key social and cultural
indicators. The Boston Indicators Project is an example of a citywide
comprehensive key indicator system that includes a variety of such
indicators. Specifically, the project tracks numerous indicators that
are grouped into 10 categories, and a number of the categories are in
the social and cultural domain: civic health, cultural life and the
arts, education, housing, public health, and public safety. Examples of
specific social and cultural indicators tracked in the Boston
Indicators Project include measures of racial and ethnic diversity,
residents' trust in neighbors, voter participation, strength of the
not-for-profit sector, a "creativity index," and attendance at cultural
events.
Further, the Jacksonville Community Council, Inc. (JCCI) maintains a
regional comprehensive key indicator system--called Indicators of
Progress--that includes numerous social indicators for five counties in
northeastern Florida. In the 2003 Quality of Life Progress Report, JCCI
reported on 115 indicators that reflect trends in nine areas, of which
several are part of the social and cultural domain: achieving
educational excellence; promoting social well-being and harmony;
enjoying arts, culture, and recreation; sustaining a healthy community;
maintaining responsive government; and keeping the community
safe.[Footnote 95] Some examples of social indicators tracked in these
areas include the extent of racism, the divorce rate, library use,
attendance at arts events, health care and public health indicators,
voter registration, crime, and motor vehicle accidents. For example, a
goal related to social and cultural issues in the report is keeping the
community safe, and one of the measures of this goal is the index of
crimes per 100,000 people.
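For readers unfamiliar with rate-based measures such as the index of
crimes per 100,000 people, the short Python sketch below illustrates
the underlying arithmetic; the crime count and population shown are
hypothetical and are not figures from the JCCI report.

# Illustrative sketch: computing an incidence rate per 100,000 residents,
# the general form of the crime index described above. Figures are hypothetical.
def rate_per_100k(event_count: int, population: int) -> float:
    """Return the number of events per 100,000 residents."""
    return event_count / population * 100_000

# Hypothetical example: 54,000 index crimes in a region of 1,200,000 residents.
print(round(rate_per_100k(54_000, 1_200_000), 1))  # prints 4500.0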
An example of a state-level comprehensive key indicator system that
includes extensive social indicators is the State of Minnesota's
Minnesota Milestones system. The indicators in Minnesota Milestones are
grouped into four goal categories, and two of the four relate directly
to social and cultural conditions. Each of these four categories has
four to five specific goals under it. For example, under the first
category, "people," is the goal "families will provide a stable,
supportive environment for their children," which is measured by
indicators such as satisfaction with child care, child abuse and
neglect, and teen pregnancy. Under the second category, "community and
democracy," is the goal "our communities will be safe, friendly, and
caring," which is measured by indicators such as sense of safety,
violent and property crime, and volunteer work.[Footnote 96], [Footnote 97]
Broad-Based Social Indicator Systems Outside the United States:
Unlike the United States, many other countries have implemented broad-
based social reporting systems, as have some multinational and
supranational entities, such as the United Nations and the World Bank.
Country examples include Germany, the United Kingdom, Australia,
Canada, the Netherlands, and France.[Footnote 98] The national social
reporting systems vary in terms of the extent to which they include
analysis, discussion of implications for public policy, or targeted
goals for future social change. For example, Germany's Datenreport is
based on indicators drawn from the German System of Social Indicators,
which was first developed by the Center for Survey Research and
Methodology (ZUMA) in the 1970s.[Footnote 99] The purpose of this
system is to continually monitor the state of and changes in objective
living conditions and subjective quality of life in German society
along the lines of 13 "life domains" plus an overall "total life
situation" category.[Footnote 100] Some observers have concluded that
some European countries have developed broad-based social indicators
systems at the national level due to factors such as the existence of
extensive, long-standing social welfare policies; a more centralized
tradition of government, including centralized statistical agencies; a
history of reporting on various social conditions nationally; and
concentrations of people in smaller geographic areas.
Moreover, under a contract from the European Union (EU), Germany's ZUMA
developed a European System of Social Indicators, to be used to monitor
social changes in Europe along 14 life domains, including, among
others, population, households, and families; housing; transport;
leisure, media and culture; social and political participation and
integration; and education and vocational training. The system covers
15 EU member states plus Norway, Switzerland, the Czech Republic,
Hungary, and Poland, and, for comparative purposes, Japan and the
United States.
The EU is placing increased emphasis on social indicators and social
reporting, due to the great diversity of ethnic, racial, and religious
populations throughout Europe, along with vast differences in the
levels of economic development among the countries.
For example, the EU has a comprehensive European Structural Indicators
system consisting of a broad range of key indicators from the social
and cultural, economic, and environmental domains, which is designed to
measure progress toward the 2000 Lisbon Strategy.[Footnote 101] The EU
is also developing a comprehensive sustainable development indicator
system, which will include extensive key social indicators.
Work on social and cultural reporting, and related indicators, has also
been conducted by multinational entities like the United Nations and
the World Bank. The United Nations Secretariat's Statistics Division
compiles social indicators from national and international sources for
a wide range of subject matter fields. The United Nations Research
Institute for Social Development (UNRISD) was created in 1963 as part
of the first United Nations Development Decade, which stressed a "new
approach" to development based on the idea that economic indicators
were insufficient to measure the effects of progress in developing
countries.
The annual Human Development Report of the United Nations Development
Programme--first produced in 1990--introduced the Human Development
Index (HDI), which includes social measures. The HDI is a summary
composite index that measures a country's average achievements in three
aspects of human development: longevity (as measured by life expectancy
at birth), knowledge (as measured by a combination of the adult
literacy rate and the combined primary, secondary, and tertiary gross
enrollment ratio), and standard of living (measured by gross domestic
product per capita). In addition, the World Bank annually publishes the
World Development Report, which includes data on social indicators for
many countries, and maintains "social indicators of development," a set
of social indicators for over 170 economies, which is intended to
describe the social effects of worldwide economic development.[Footnote
102]
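To illustrate how a summary composite such as the HDI can be assembled
from its components, the Python sketch below rescales each dimension to
an index between 0 and 1 and averages the three indices. The goalpost
values, the logarithmic treatment of income, and the equal weighting
are simplifying assumptions made for illustration; the UNDP's published
methodology governs the actual calculation.

# Illustrative sketch of an HDI-style composite index (not the official UNDP formula).
# Each dimension is rescaled to [0, 1] against assumed "goalposts," and the
# three dimension indices are averaged with equal weights.
import math

def dimension_index(value: float, low: float, high: float) -> float:
    """Rescale a raw value to [0, 1] given assumed minimum and maximum goalposts."""
    return (value - low) / (high - low)

def composite_index(life_expectancy: float, education_index: float,
                    gdp_per_capita: float) -> float:
    longevity = dimension_index(life_expectancy, 25, 85)  # assumed goalposts, in years
    # Income is treated logarithmically so that very high incomes add progressively less.
    income = dimension_index(math.log(gdp_per_capita),
                             math.log(100), math.log(40_000))  # assumed goalposts, in dollars
    return (longevity + education_index + income) / 3

# Hypothetical country: 77-year life expectancy, education index of 0.95,
# and GDP per capita of $30,000 yields a composite of roughly 0.92.
print(round(composite_index(77, 0.95, 30_000), 3))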
Defining and Gaining Consensus on Social Indicators:
We also observed variation in terms of the topical areas that different
organizations include as social and cultural indicators. The social and
cultural domain can be defined narrowly, to exclude economic and
environmental indicators, or broadly to include indicators from the
economic and environmental domains. For instance, a comprehensive
system might define the social and cultural domain to include only
indicators pertaining to health, public safety, social welfare, the
arts, children, and aging.
An example is Australia's comprehensive key indicator system--Measures
of Australia's Progress--which organizes its social and cultural domain
around areas of social concern such as health, education
and training, work, housing, financial hardship, family and community,
crime, governance, democracy, and citizenship. The other components of
Australia's comprehensive system are indicators relating to the
economic and environmental domains. The EU's European Structural
Indicators system also makes a distinction between economic,
environmental, and social and cultural issues.
In contrast, a number of social indicator systems are based on broadly
defining the term to mean indicators that pertain to any dimension of
society, even including economic and environmental indicators. For
example, both the European System of Social Indicators and the German
System of Social Indicators define the social and cultural domain
broadly to include a variety of economic and environmental indicators
along with what are typically considered social and cultural
indicators, such as public safety or health. In the past, the social
domain was conceptualized more broadly in the United States than it is
today. The United States social indicators movement of the 1960s and
1970s developed in some respects as a response to the dominance of
economic indicators, based on the claim that economic indicators alone
were inadequate to monitor society comprehensively. Specifically, the
Social Indicators III report[Footnote 103] (the last of the three
social reports published by the U.S. government) defined 11 subject
areas related to social conditions in the United States, and included
environment ("housing and the environment") and economic ("work" and
"income and productivity") topics among them.
The term cultural indicator is sometimes used interchangeably with the
term social indicator. It also has a variety of meanings as it has been
used by different groups over time. Some indicator systems have
conceptualized cultural indicators as being related to the arts and the
humanities. For example, Social Indicators III took the approach of
describing cultural conditions through indicators related to the arts,
such as attendance at performing arts events and visits to
museums.[Footnote 104]
Another effort that uses an arts and humanities-based interpretation of
cultural indicators is the Arts and Culture Indicators in the Community
Building Project (ACIP), which was launched in 1996 by the Urban
Institute and the National Neighborhood Indicators Partnership (NNIP),
with support from the Rockefeller Foundation. ACIP is an effort to
develop neighborhood-level indicators of arts and culture for use in
local planning, policy making, and community building, and seeks to
integrate arts and culture into quality of life measures.
The 2002 Creative Community Index of Cultural Initiatives Silicon
Valley provides an additional example of cultural indicators in which
the arts and cultural activities are important.[Footnote 105] The
Creative Community Index resulted from a research project to develop
quantitative measures of cultural participation and creativity in the
region. It contains over 30 indicators designed to measure the health
and vitality of cultural activities and the importance of creativity to
the region's vitality.
In contrast to the use of cultural indicators as pertaining to the
arts, William J. Bennett's Index of Leading Cultural Indicators 2001
takes a broader view, treating culture as the overall condition of American society. This
work reports on a wide range of topics pertaining to the state of
American society and culture, such as out-of-wedlock births, crime,
illegal drug use, marriage and divorce, educational achievement, child
poverty, youth behaviors, civic participation, popular culture, and
religion.[Footnote 106]
Further, the General Social Survey (GSS) is designed to measure and
report on the views and attitudes of Americans across a wide range of
topics and the state of our culture and society.[Footnote 107] It is
collected approximately every 2 years by NORC, a national organization
for research at the University of Chicago (formerly known as the
National Opinion Research Center), and has been administered 24 times
since 1972. Specifically, its millennium survey wave in 2000 covered
topics such as use of the Internet, assessments of external and
internal security threats and the balancing of security and civil
liberties, self-assessments of physical and mental health, sexual
behavior and drug use, and evaluations of the functions of local churches.
The GSS has been sponsored by a number of public and private
organizations, including the National Science Foundation, the Centers
for Disease Control and Prevention, and the MacArthur Foundation, among
others.
Programs of the United Nations Educational, Scientific, and Cultural
Organization (UNESCO) and other United Nations agencies demonstrate
another approach to cultural indicators. For example, UNESCO sponsored
a culture and development project with the United Nations Research
Institute for Social Development from 1996 through 1997. The purpose of
the project was to promote better understanding of the relationship
between various countries' cultures and their development, and it
included research on cultural indicators of development. Cultural
indicators and statistics are also included in UNESCO's World Culture
Report, issued in 1998 and 2000.[Footnote 108]
Accordingly, it could be difficult for organizers of an indicator
system to reach consensus on the scope of a social and cultural
indicator system, or on what variables to measure. The diversity of the
ways in which social and cultural indicators have been conceptualized
and used could complicate efforts to develop a national social and
cultural indicator system in the United States, as it appears to have
done in the past.
particular characteristics of society are considered positive
attributes as opposed to undesired outcomes. For example, obtaining
agreement on a select set of social and cultural indicators has tended
to be controversial because some of them deal with sensitive moral,
racial, or religious issues, such as teen pregnancy and drug use.
Selecting the societal conditions that should be measured or included
in a system involves some value judgments and subjectivity, and is
often colored by factors such as religious or moral beliefs. Moreover,
questions exist as to how to define the parameters of the social and
cultural domain, ranging from narrow to broad definitions, and whether
to include cultural elements.
[End of section]
Appendix III: Comprehensive Key Indicator Systems Included in This
Study:
Name of system/report: Baltimore's Vital Signs;
Jurisdiction(s) and population: Baltimore, Md; Approximately 640,000;
Description: Baltimore's Vital Signs indicators measure progress toward
a shared vision and desired outcomes for strong neighborhoods in
Baltimore. Indicators are grouped as follows: housing and community
development; children and family health; safety and well-being;
workforce and economic development; sanitation; urban environment;
transit, education, and youth; and neighborhood action and sense of
community. In addition, the One Stop Shop program provides access to
the Vital Signs data and other data about Baltimore and its
neighborhoods from a variety of sources;
Managing organization(s): (Public/Private); Baltimore Neighborhood
Indicators Alliance--a collaborative of several private and public
organizations;
Date begun/first reported: Initiative began in 2000. First indicators
report in 2002;
Frequency of updates: Reported annually.
Name of system/report: Boston Indicators Project;
Jurisdiction(s) and population: Boston, Mass; Approximately 590,000
(for the City of Boston);
Description: Indicators measure progress toward shared goals for Boston
and provide comprehensive information about Boston's progress in
meeting goals in civic health, cultural life and the arts, economy,
education, environment, housing, public health, public safety,
technology, and transportation. Crosscutting indicators are presented
in neighborhoods, children and youth, competitive edge, race and
ethnicity, and sustainable development. Indicators also compare some
issues to the state as a whole and to those in selected U.S. cities;
Managing organization(s): (Public/Private); Boston Foundation, a large
not-for-profit community foundation, in partnership with three public
organizations: the City of Boston, Boston Redevelopment Authority, and
Metropolitan Area Planning Council;
Date begun/first reported: Initiative began in 1997. First report in
2000;
Frequency of updates: Reported every 2 years; Periodic updates of Web
site information.
Name of system/report: Burlington Legacy Project;
Jurisdiction(s) and population: Burlington, Vt; Approx. 39,000;
Description: Indicators measure progress and monitor trends in areas
(e.g., economy, neighborhoods, governance, youth and life skills, and
environment) that citizens of Burlington value based upon a
comprehensive plan to guide change for the economic, environmental, and
social health of Burlington;
Managing organization(s): (Public/Private); Burlington Mayor's Office,
Community Economic Development division. There is also in-kind support
from a partnership with the University of Vermont;
Date begun/first reported: Initiative began in 1999. First report in
2000;
Frequency of updates: Reported annually; Periodic updates of Web site
information.
Name of system/report: Chicago Metropolis 2020;
Jurisdiction(s) and population: Chicago metropolitan area; Approx.
8,090,000;
Description: Indicators assess progress toward quality of life goals
for the Chicago metropolitan area (e.g., regional economy,
transportation and land use, housing, community life, education, and
the natural environment). Indicators also serve as benchmarks for
decision makers to consider what actions are needed to sustain
Chicago's status as a globally competitive region;
Managing organization(s): (Private); Chicago Metropolis 2020, a not-
for-profit organization, initiated by the Commercial Club of Chicago, a
membership organization of leading area business and civic leaders,
with an executive council;
Date begun/first reported: Initiative began in 1996. First report in
1999;
Frequency of updates: Reported annually through 2002; Frequency of
future reports is uncertain.
Name of system/report: Neighborhood Facts;
Jurisdiction(s) and population: Denver, Colo; Approx. 560,000;
Description: Provides detailed information and indicators on Denver's
77 neighborhoods. Information resources include data tables, maps, and
graphs about each neighborhood's population, housing, economic, and
education characteristics, and the health and safety of its residents;
Managing organization(s): (Private); Piton Foundation--a corporate
foundation of the Denver-based Gary-Williams Energy Corporation;
Date begun/first reported: First report issued in 1994;
Frequency of updates: Reported every 5 years; Web site information
updated on a quarterly basis.
Name of system/report: Hennepin County Community Indicators;
Jurisdiction(s) and population: Hennepin County (Minn.); Approx.
1,120,000;
Description: Indicators are linked to the mission, vision, and goals of
Hennepin County government to measure progress (i.e., people are
healthy, protected and safe, self-reliant, assured due process, mobile,
and engaged in the community); identify areas for improvement; and
foster a dialogue among businesses, not-for-profit organizations,
faith-based communities, and other units of government;
Managing organization(s): (Public); Hennepin County Office of Planning
and Development;
Date begun/first reported: Initiative began in 1995;
Frequency of updates: Reported annually until 2000; Reported every 2
years since 2000.
Name of system/report: Community Atlas;
Jurisdiction(s) and population: Hillsborough County, Fla. (Tampa Bay,
Fla. area); Approx. 1,070,000;
Description: Indicators measure quality of life at the neighborhood
level to assist various community stakeholders, including citizens,
government, business representatives, and academics in community
planning. Indicators cover economics, infrastructure, information
sharing, civic engagement, arts and culture, diversity, education,
government, health, the environment, and visual/physical design;
Managing organization(s): (Public/Private); Collaborative effort led
by the University of South Florida Center of Community Design and
Research. Partners include faculty from the University's Department of
Geography, the University's College of Arts and Sciences' University
Community Initiative, and "Tomorrow Matters!"--a local citizen's group;
Date begun/first reported: Initiated in 1997;
Frequency of updates: Planned report; Web site information updated
periodically.
Name of system/report: Social Assets and Vulnerabilities Indicators
(SAVI);
Jurisdiction(s) and population: Indianapolis, Ind., region; Approx.
1,600,000;
Description: Indicators and related data provide comprehensive and
accessible information on "assets" (e.g., agencies, programs, and
facilities in the community) and "vulnerabilities" (e.g., demographics
and social characteristics of the community) for the Indianapolis
metropolitan area. By creating a common source for reference
geographies, such as school districts, transportation routes, health
department districts and service areas, SAVI reduces redundancy in data
development efforts and ensures that stakeholders (e.g., local level
officials and planners) are working with the same reference
information;
Managing organization(s): (Public/Private); Polis Center, an affiliate
of Indiana University-Purdue University at Indianapolis;
Date begun/first reported: Project initiated in 1993;
Frequency of updates: No formal report; Continual updating of Web site
information.
Name of system/report: Indicators for Progress;
Jurisdiction(s) and population: Five counties that comprise the
Jacksonville, Fla. region; Approximately 1,200,000;
Description: The indicators help monitor progress toward a quality of
life vision for the Jacksonville, Fla. metropolitan area. Goals and
related indicators cover the following topics: achieving educational
excellence; growing a vibrant economy; preserving the natural
environment; promoting social well-being and harmony; enjoying arts,
culture, and recreation; sustaining a healthy community; maintaining
responsive government; moving around efficiently; and keeping the
community safe. Trends are analyzed and action is taken to address
issues. The project engages diverse citizens groups in open dialogue,
research, consensus building, and leadership development to improve
quality of life;
Managing organization(s): (Public/Private); Jacksonville Community
Council, Inc. (JCCI)--a not-for-profit organization. JCCI partners with
the City of Jacksonville, the regional United Way, and the Chamber of
Commerce;
Date begun/first reported: Project initiated in 1985;
Frequency of updates: Reported annually.
Name of system/report: King County Benchmarks;
Jurisdiction(s) and population: King County, Wash. (Seattle); Approx.
1,760,000;
Description: Indicators monitor progress toward countywide planning
goals for the economy, environment, affordable housing, land use, and
transportation, to improve the quality of life in King County.
Indicators are reported at the national, state, and county levels to
offer insights into the direction and extent of changes in the region
for policy, planning, and budget decisions;
Managing organization(s): (Public); King County Office of Budget;
Date begun/first reported: Project initiated in 1990. First report in
1996;
Frequency of updates: Reported annually from 1996 through 2002; Since
2003, reports are shorter and published on specific indicator topics
throughout the year.
Name of system/report: Milwaukee Neighborhood Data Center;
Jurisdiction(s) and population: Milwaukee, Wis. metropolitan area;
Approx. 590,000;
Description: Provides comprehensive local-level statistics and
indicators and analysis serving the Milwaukee area. Topical areas
include housing, employment, education, school readiness, health,
family economic status, and civic engagement. The center helps
community organizations understand data to better target their own
resources or to assess program outcomes;
Managing organization(s): (Private); Non-Profit Data Center of
Milwaukee;
Date begun/first reported: Project initiated in 1991;
Frequency of updates: No formal report; Information updated
periodically.
Name of system/report: New York City Social Indicators;
Jurisdiction(s) and population: New York City; Approx. 8,080,000;
Description: Indicators provide information about New York City (i.e.,
demographics, economy and employment, public safety, health, education
and culture, poverty and social services, housing and infrastructure,
and the environment), trends over the current and previous 5 years and
comparisons with other areas, and a narrative summarizing the economic,
social and cultural, and environmental health of the city;
Managing organization(s): (Public); New York City Department of City
Planning;
Date begun/first reported: Initiated in 1989. First report in 1992;
Frequency of updates: Reported annually.
Name of system/report: Compass Index of Sustainability;
Jurisdiction(s) and population: Orange County, Fla. (Greater Orlando);
Approx. 965,000;
Description: Indicators measure progress toward sustainable
development goals for the region and for the health and vitality of the
community. Sustainable development goals aim to show the
interconnectedness of the following: nature (i.e., environmental
quality, ecosystem health, natural resources and beauty); economy
(i.e., production of goods and services that make livelihoods possible
and lives comfortable, including transportation, infrastructure,
employment, and economic security); society (i.e., collective dimension
of human life, including government, schools, public safety, and
stability); and well-being (i.e., health, long life, satisfaction and
optimism, and social relationships);
Managing organization(s): (Private); Healthy Community Initiative--a
private not-for-profit organization;
Date begun/first reported: Initiated in 1992;
Frequency of updates: Reported every 2 years.
Name of system/report: Portland Multnomah Benchmarks;
Jurisdiction(s) and population: Portland, Oreg., and Multnomah County,
Oreg; Approx. 678,000;
Description: The benchmarks, based upon the statewide Oregon Benchmarks
program, gauge conditions in the community and measure progress related
to the visions of the City of Portland and Multnomah County in the
economy, education, environment, governance and civic participation,
health and families, public safety, and urban vitality. Benchmarks are
developed to encourage community organizations to focus on outcomes and
increase collaboration;
Managing organization(s): (Public); City Auditor's Office of Portland;
Date begun/first reported: Initiated in 1993;
Frequency of updates: Reported every 2 years.
Name of system/report: Santa Cruz County Community Assessment Project
(CAP);
Jurisdiction(s) and population: Santa Cruz County, Calif; Approx.
250,000;
Description: Indicators measure progress toward quality of life goals
(e.g., economic, education, environment, health, public safety, social,
and natural environment) for Santa Cruz County and raise awareness of
changing trends and emerging issues as well as provide information to
human services agencies and the organizations that fund them. CAP also
supports action plans to achieve its goals;
Managing organization(s): (Public/Private); Managed through a
collaboration of individuals and community groups, including the United
Way of Santa Cruz and Dominican Hospital. Applied Survey Research, a
not-for-profit consulting company, is responsible for the research
component of the system;
Date begun/first reported: Initiated in 1993. First set of indicators
presented in 1995;
Frequency of updates: Reported annually.
Name of system/report: Santa Monica Sustainable City;
Jurisdiction(s) and population: City of Santa Monica, Calif; Approx.
84,000;
Description: Indicators measure progress toward city goals and
strategies for all sectors of the community aimed to conserve and
enhance local resources, safeguard human health and the environment,
maintain a healthy and diverse economy, and improve the livability and
quality of life for all community members in Santa Monica. Goal and
indicator categories are resource conservation, environmental and
public health, transportation, economic development, economic
diversity, open space and land use, housing, community education and
civic participation, and human dignity;
Managing organization(s): (Public); City of Santa Monica's
Environmental Programs Division, Public Works Department;
Date begun/first reported: Initiated in 1994;
Frequency of updates: Reported every 2 years.
Name of system/report: Sustainable Seattle;
Jurisdiction(s) and population: Seattle, Wash; Approx. 570,000;
Description: Indicators promote sustainable development at a local and
regional scale to help solve fundamental development problems and
foster long-term social change through policy advocacy, education, and
civic action. Indicators are provided in the following topical areas:
environment, population and resources, economy, youth and education,
and health and community;
Managing organization(s): (Private); Sustainable Seattle--a not-for-
profit organization;
Date begun/first reported: Initiated in 1992. First report in 1993;
Frequency of updates: Most recent full report in 1998.
Name of system/report: Index of Silicon Valley (California);
Jurisdiction(s) and population: Silicon Valley region of Northern
California; Approx. 2,300,000;
Description: Indicators report on progress toward achieving goals
primarily related to sustainable development and quality of life for
California's Silicon Valley region (e.g., in the areas of environment,
population and resources, economy, youth and education, and health and
community). The project addresses issues raised from indicator results
through collaborative action;
Managing organization(s): (Private); Joint Venture: Silicon Valley
Network--an independent, not-for-profit organization with some public-
private partnerships;
Date begun/first reported: Initiated in 1992; First report in 1995;
Frequency of updates: Reported annually.
Name of system/report: State of the Region (Southern California);
Jurisdiction(s) and population: Local governments in southern
California; Approx. 17,123,000;
Description: Indicators track progress toward a regional comprehensive
plan and goals for the southern California region (i.e., in the areas
of population, the economy, housing, transportation, the environment,
and quality of life). It also serves as a guide for local government
planning in the region;
Managing organization(s): (Public/Private); Southern California
Association of Governments;
Date begun/first reported: Initiated in 1997; First report in 1998;
Frequency of updates: Reported annually.
Name of system/report: Benchmarking Municipal and Neighborhood Services
in Worcester;
Jurisdiction(s) and population: Worcester, Mass; Approx. 175,000;
Description: Indicators measure progress toward strategic goals in the
areas of public safety, education, economic development, municipal and
neighborhood services, and youth services. The effort informs city
agency officials (for management of city services), as well as
interested citizens;
Managing organization(s): (Private); Worcester Regional Research
Bureau, a private, not-for-profit organization;
Date begun/first reported: Initiated in 1998;
Frequency of updates: Reported annually.
Name of system/report: Results Iowa;
Jurisdiction(s) and population: State of Iowa; Approx. 2,944,000;
Description: Indicators are linked to statewide goals (i.e., the areas
of new economy, education, health, safe communities, and environment)
to provide Iowa state government officials with benchmark information
for planning and budgeting;
Managing organization(s): (Public); Iowa Department of Management;
Date begun/first reported: Project initiated around 1999;
Frequency of updates: Reported annually.
Name of system/report: Maine's Measures of Growth;
Jurisdiction(s) and population: State of Maine; Approx. 1,306,000;
Description: Indicators track progress toward a long-term economic
growth policy for the state of Maine through quality of life measures
on the economy (i.e., prosperity, business innovation, business
climate, and skilled and educated workers); community (i.e., civic
assets, disparities, and health and safety); and environment (i.e.,
preservation, access, and stewardship);
Managing organization(s): (Public/Private); Maine Economic Growth
Council, an independent entity chartered by the state legislature;
Date begun/first reported: Initiative began in 1993. First reported in
1996;
Frequency of updates: Reported annually.
Name of system/report: Minnesota Milestones;
Jurisdiction(s) and population: State of Minnesota; Approx. 5,059,000;
Description: Indicators track progress toward 19 quality of life goals
for the state (e.g., Minnesotans will excel in basic and challenging
academic skills and knowledge; have sustainable, strong economic
growth; and improve the quality of the air, water, and earth). Also,
provides accessible information to make planning and budget decisions;
Managing organization(s): (Public); Minnesota Department of
Administration;
Date begun/first reported: Project initiated in 1991; First reported on
in 1993;
Frequency of updates: Reported every 2 or 3 years; Also periodically
updates data on Web site.[A]
Name of system/report: North Carolina 20/20;
Jurisdiction(s) and population: State of North Carolina; Approx.
8,407,000;
Description: Indicators measure progress toward goals in multiple
domains over a 20-year period to assess the strengths, needs, and
challenges in North Carolina. Goals are linked to the economic
competitiveness of the state and fall within the following categories:
healthy children and families, safe and vibrant communities, quality
education for all, a high-performance workforce, a sustainable
environment, a prosperous economy, a 21st century infrastructure, and
active citizenship and accountable government;
Managing organization(s): (Public); North Carolina Progress Board--
reporting to the state university system's Board of Governors;
Date begun/first reported: Initiated in 1995;
Frequency of updates: Reported every few years, along with other
interim reports.
Name of system/report: Oregon Benchmarks;
Jurisdiction(s) and population: State of Oregon; Approx. 3,560,000;
Description: Indicators measure progress toward a strategic vision for
the State of Oregon and related goals. Indicators fall within the goal
categories of (1) quality jobs for all Oregonians, (2) safe, caring,
and engaged communities, and (3) healthy, sustainable surroundings;
Managing organization(s): (Public); Oregon Progress Board--a unit of
state government reporting to a board comprised of the governor and
other leaders inside and outside of government;
Date begun/first reported: Initiated in 1989; First report in 1991;
Frequency of updates: Reported every 2 years.
Name of system/report: Social Well-Being of Vermonters;
Jurisdiction(s) and population: State of Vermont; Approx. 619,000;
Description: Indicators serve as benchmarks to measure outcomes to
improve well-being of children, families, and individuals. Outcomes are
grouped under (1) families, youth, and individuals are engaged in their
community's decisions and activities; (2) pregnant women and young
children thrive; (3) children are ready for school; (4) children
succeed in school; (5) children live in stable, supported families; (6)
youth choose healthy behaviors; (7) youth successfully transition to
adulthood; (8) adults lead healthy and productive lives, and elders and
people with disabilities live with dignity and independence in settings
they prefer; and (9) communities provide safety and support to families
and individuals;
Managing organization(s): (Public); Vermont Agency of Human Services;
Date begun/first reported: Project initiated in 1993;
Frequency of updates: Reported annually.
Name of system/report: German System of Social Indicators;
Jurisdiction(s) and population: Germany; Approx. 83 million;
Description: Indicators monitor the state of and changes in living
conditions and quality of life, covering 14 life domains (including the
economic, environmental, and social and cultural domains). Includes
almost 400 indicators and 3,000 time series;
Managing organization(s): (Public/Private); Center for Survey
Research and Methodology (ZUMA), a government-funded research
institution in Mannheim, Germany;
Date begun/first reported: Development began in the 1970s. Data are
available online from ZUMA;
Frequency of updates: Indicators continually maintained and updated;
Biennial data report is published with the Federal Statistical Office
of Germany.
Name of system/report: United Kingdom Sustainable Development
Indicators;
Jurisdiction(s) and population: United Kingdom; Approx. 60 million;
Description: Indicators measure progress toward the government's
sustainable development strategy in the areas of social progress, economic growth,
and environmental protection. Includes 15 headline indicators to give a
broad overview and 132 core indicators to focus on specific issues and
identify areas for action;
Managing organization(s): (Public); Department of Environment, Food,
and Rural Affairs;
Date begun/first reported: In 1999, the U.K. government published a
strategy for sustainable development and included baselines for the
indicators;
Frequency of updates: Since 2000, reported annually with the latest
information on progress, including all the headline indicators; Major
updates every 5 years.
Name of system/report: European Structural Indicators;
Jurisdiction(s) and population: European Union; Approx. 450 million;
Description: Indicators track progress toward strategic goals for the
economic, social, and environmental renewal of Europe, which are
detailed in the Lisbon Strategy. The indicator system covers the
following topics: employment, innovation and research, economic reform,
social cohesion, and the environment. Starting in 2004, the EU reports
on 14 headline indicators, although the more detailed set of indicators
will be maintained in a publicly available database;
Managing organization(s): (Public); European Commission;
Date begun/first reported: Lisbon Strategy was adopted in 2000 (and
modified in 2001); the structural indicators began in 2001;
Frequency of updates: Reported on annually to the European Council.
Source: GAO analysis.
Note: The World Wide Web links for these systems can be found at
[Hyperlink, http://www.keyindicators.org].
[A] Since we concluded our interviews in the fall of 2003, Minnesota
Milestones ceased to be an active system. State officials told us that
the Minnesota Milestones Web site will be maintained but there are no
plans to update the data in the near future.
[End of table]
[End of section]
Appendix IV: Timeline and Evolution of the Boston Indicators Project:
The Boston Indicators Project is coordinated by the Boston Foundation
in partnership with the City of Boston, the Boston Redevelopment
Authority, and the Metropolitan Area Planning Council. The goal of the
project is to engage the general public, civic and community-based
institutions, media, business, and government in better understanding
Boston's key challenges and opportunities. The project aims to:
* "democratize data" (by creating a container for local data, research,
and reports);
* create a common ground for civic discourse and collaborative
strategies;
* track progress on shared goals along the lines of civic health,
cultural life and the arts, economy, education, environment, housing,
public health, public safety, technology, and transportation; and:
* disseminate results and best practices to a wide audience.
The project took years to develop and has evolved and expanded its
focus in several distinct phases, although there is some overlap
between them.
Participatory Development of the Indicator System (1997-99):
The Boston Foundation and the City of Boston launched the project in
1997, with additional support from the Urban Institute in Washington,
D.C.[Footnote 109] It was intended to engage the community in
developing indicators of sustainability that would measure natural
assets, economic well-being, and human development for the City of
Boston and its neighborhoods. The project developed an open,
participatory approach involving a wide range of practitioners,
academics, policymakers, and other private and not-for-profit sector
leaders. It attempted to take advantage of lessons learned from past
efforts in the United States, and adopt successful practices used by
others who had implemented comprehensive key indicator systems.
An initial planning meeting took place in January 1997, involving about
12 individuals from various community organizations, in addition to
officials from the city's planning and development offices. Planning
focused on the need for a clear vision for the project, as well as some
of the limitations of indicators that had traditionally been used to
measure change in urban communities. Individuals from additional groups
and agencies were invited to subsequent meetings so that by late spring
1997, the group had grown to include about 75 participants who had
developed a broad framework for the project, including a vision, goals,
and a process for developing indicator categories.
The next step involved identifying indicator categories as well as the
indicators themselves. This step involved about 150 individuals working
in both large and small group settings, and the process took about 6
months. As the effort evolved, participants formed a steering group and
various subcommittees, developed criteria for selecting indicators,
began to identify data sources, and continued to consult widely with
similar projects to try to learn from their experiences. Project
participants decided that indicators should ideally be expressed in
positive or asset-based terms (such as the number of third graders who
can read at grade level or the percentage of healthy babies born).
Accordingly, the Project attempted to identify strengths and focus on
desired positive outcomes, rather than focusing on deficit or negative
terms (such as the school dropout rate or the percentage of low birth
weight babies).
By early 1998, participants in the project had identified over 150
proposed indicators. They began to try to reduce the number of proposed
indicators and identify and collect data, which was difficult and time-
consuming. Over 300 individuals from diverse sectors, neighborhoods,
levels of government, and racial and ethnic groups participated in
working sessions to conceptualize and develop the indicator system.
Even more individuals and organizations assisted with data collection
and analysis, and the initial phase was largely finished by the fall of
1998.
Initial Reporting on the Indicators (1999-2002):
In June 1999, a draft report on the indicators was released at a Boston
Citizen Seminar hosted by Boston College. The mayor of Boston gave the
keynote address at this event, and approximately 250 people attended.
The seminar included a panel of civic leaders, a presentation on the
indicators, and small group discussions. Subsequently, more than 700
copies of the draft report were distributed to senior government
officials, state legislators, and interested organizations and
individuals for review, and their comments were incorporated.
The Boston Foundation and other organizations[Footnote 110] worked with
the Center for Survey Research at the University of Massachusetts in
Boston to design and fund an annual survey to produce qualitative data
for some measures for which data had not been consistently available.
They conducted the first survey in the summer of 2000 in the
metropolitan region, the city, and four Boston neighborhoods. The
project released the final indicators report, The Wisdom of Our
Choices: Boston's Indicators of Progress, Change and Sustainability
2000, in the fall of 2000 at another Boston Citizen Seminar that about
350 people attended. The project distributed 7,500 copies of the
report.[Footnote 111]
Pursuing Two Tracks: Reporting on Indicators and Working toward a Civic
Agenda (2002 through the Present):
In recent years, the Boston Indicators Project has begun to follow two
distinct tracks. One track has continued to produce the indicators
reports every 2 years to measure progress toward a vision for
2030.[Footnote 112] This track has involved maintaining and improving
the project's Web site ([Hyperlink, http://www.bostonindicators.org]);
improving data and creating tools for accessing data; developing an
educational curriculum and a seminar series; and conducting briefings
for media professionals. The other main track involves developing a
civic agenda for Boston and has begun efforts to build consensus on
such an agenda. The agenda is to consist of short-
term, achievable outcomes that are linked to high-level, long-term
goals. These efforts are also intended to build support from
stakeholders by incorporating various organizations' goals and
encouraging organizations to align their own resources and activities
with the shared civic agenda.
Under the first track, the project released an updated indicators
report, entitled Creativity and Innovation: A Bridge to the Future, in
February 2003.[Footnote 113] Like the first report, it was released at
a Boston Citizen Seminar hosted by Boston College and attended by
hundreds of civic leaders. The highly interactive Web site for the
Project was also launched at this time. The structure of the Web site
allows users to search for information by goal categories or by one of
five crosscutting filters. These filters include Boston neighborhoods,
children and youth, competitive edge, race/ethnicity, and sustainable
development. They allow users to pull relevant information from
different areas of the Web site, identify connections across sectors,
and show local conditions in a citywide, regional, and global context.
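As a hypothetical illustration of this design, rather than a
description of the project's actual implementation, the Python sketch
below tags each indicator with a goal category and a set of
crosscutting filters, so the same indicator can be retrieved along
either dimension.

# Hypothetical sketch of crosscutting filters over an indicator set.
# Each indicator carries one goal category plus zero or more crosscutting tags,
# so users can pull it up either by category or by filter. The entries are illustrative.
indicators = [
    {"name": "Voter participation", "category": "Civic health",
     "filters": {"race/ethnicity"}},
    {"name": "Third graders reading at grade level", "category": "Education",
     "filters": {"children and youth"}},
    {"name": "Transit ridership", "category": "Transportation",
     "filters": {"sustainable development"}},
]

def by_filter(items, tag):
    """Return the names of indicators carrying the given crosscutting filter."""
    return [item["name"] for item in items if tag in item["filters"]]

print(by_filter(indicators, "children and youth"))  # ['Third graders reading at grade level']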
The concept for a civic agenda was developed as part of the work of a
leadership group established by the Boston Indicators Project. This
group--composed of individuals (many in leadership positions) from
diverse organizations and sectors, including academia, nonprofit
organizations, foundations, the Boston public school system, and
businesses--meets periodically to discuss issues such as dissemination
strategies for the indicators report, and whether and how the project
could contribute to connecting leaders in Boston and shaping the public
dialogue on important issues. Members had agreed that once the
indicators were in place, the group's next step would be to try to
leverage change by strengthening civic leadership. The group formed a
subcommittee to develop recommendations, criteria, and a strategy for
developing a civic agenda to be released as part of the 2004 indicators
report. The group intends to articulate a set of long-term goals based
on a preferred future scenario, and then create specific strategies, or
pathways of change, to reach these long-term goals. The project also
plans to use measurable benchmarks in tracking and reporting on
progress toward the short-term aspects of the civic agenda.
Key Themes from the Boston Experience:
The project's more than 7 years of experience demonstrate the importance
and value of using collaborative and highly participatory processes in
developing an indicator system and revising it as circumstances change
or new indicators become available. It also provides an illustration of
the extent to which an indicator system can expand its focus over time
and shows the value of learning from others and sharing information on
successful practices and technologies.
* Collaborative and participatory processes are important. The Boston
experience illustrates that involving a diverse and large group of
public and private leaders and citizens can pay off in terms of
widespread buy-in and use. From the outset, the project relied on widely
consultative and participatory processes, involving public and private
leaders, for developing concepts and making decisions. A large number
of individuals from diverse sectors, neighborhoods, levels of
government, and racial and ethnic groups participated in working
sessions to conceptualize the first draft indicators report. The final
versions of the 2000 indicators report, as well as the 2002 report,
were released at public events attended by diverse audiences, and the
draft version of the 2000 report was distributed to 700 different
individuals and organizations for comment. The project's leadership
group includes leaders from many diverse sectors.
* A system's focus can expand over time. The Boston experience also
shows that, with sufficient support and buy-in, an indicator system can
expand its focus and become an agent for change. According to project
officials, a major motivation for the first indicators report in 2000
was to provide access to objective data. Project staff explained that
the first report was immense and contained a huge amount of data. They
said they received feedback that it was too much for potential users to
"get their arms around," and that the data and report needed to be
further interpreted and synthesized to be more understandable.
Following the release of the 2000 report, the project began a new phase
with two tracks, one of which implemented a different approach with the
2002 report, although the project did not abandon its original focus on
reporting on and widely disseminating indicators. The second report contained more
interpretation of data, comparisons, and identification of important
trends. While greater interpretation of data may be more useful to
potential users, it may also lead to more friction among leaders. The
civic agenda was based on the idea of analyzing indicators to develop
specific strategies for achieving selected goals. Observers we spoke
with noted that selecting and reaching consensus on strategies involves
more subjectivity and may be harder to accomplish than just reporting
objective information. The implications of moving toward greater
interpretation of data and strategy development are not likely to fully
unfold for some time, as the Project is dynamic and still evolving.
Change can also be noted in terms of the specific indicators used over
time. For example, of the 16 specific indicators of civic health
included in the 2000 report, about half of them were no longer included
among the 23 indicators shown on the project's Web site in June 2004.
* Learning from others and sharing information are valuable. The project
illustrates the value of learning from the experiences of others when developing a
system, and once the system has been developed, sharing successful
practices and technology with others. In recent years, the project has
made it a priority to share information and lessons learned with groups
from other cities that are interested in using the Boston indicator
system as a model. Project officials believe this will facilitate
replication elsewhere in the United States. For example, a new
comprehensive key indicator system in Dallas, Texas (Dallas Indicators)
has borrowed heavily from the experiences of the Boston Indicators
Project. In addition, project staff have made the Web site architecture
available for licensing, and have received queries from several
interested organizations.
[End of section]
Appendix V: Timeline and Evolution of the Oregon Benchmarks:
The State of Oregon's comprehensive indicator system, known as the
Oregon Benchmarks, had its roots in a strategic planning exercise that
was launched in response to a serious economic downturn in the early to
mid-1980s. This system has evolved over time in several phases, which
are described in detail below, although there is some overlap between
them. The Oregon Benchmarks system started as a way to monitor and
encourage statewide progress toward a set of policy goals and targets-
-and explicitly aimed to be a system for all of Oregon, not just the
state government. In recent years, the system has narrowed its focus
somewhat and become an integral part of the state government's
performance measurement and improvement process.
Oregon Shines Strategic Planning Initiative (1988-89):
In 1988, Governor Neil Goldschmidt launched a statewide economic
planning initiative, based on a vision of Oregon as a diverse economy
built on a foundation of an educated workforce and a high quality of
life. Sixteen committees made up of approximately 180 leaders from the
business, labor, education, and government communities were involved in
drafting reports that the governor's office and the Economic
Development Department used to shape the comprehensive strategic plan.
The resulting document, Oregon Shines: An Economic Strategy for the
Pacific Century (commonly known as Oregon Shines), was issued in May
1989 and laid out an economic strategy for the next two decades.
The strategy was based upon the concept of a "circle of prosperity,"
which held that quality communities and a prosperous private sector
reinforce one another, and could be strengthened by pursuing several
initiatives: a well-educated, skilled workforce; an attractive quality
of life achieved through maintaining the natural environment; and an
internationally oriented business and cultural climate attractive to
global commerce.
Establishing the Oregon Progress Board and Oregon Benchmarks (1989-91):
The Oregon Benchmarks system was developed as a complement to Oregon
Shines, as a tool for following up on the long-range strategy and
assessing progress made toward achieving its broad goals. In the summer
of 1989, the legislature approved the creation of the Oregon Progress
Board as a statutory agency located within the governor's office. The
governor was chair of the Progress Board and appointed all of its nine
volunteer members, who were to translate the strategic vision of Oregon
Shines into a set of measurable indicators. Specifically, the Progress
Board was supposed to develop a set of benchmarks for legislative
approval in 1991, and then report on progress toward the benchmarks
every 2 years. Benchmarks were intended to be broad indicators of the
overall economic, social, and environmental health of the state as a
whole and not simply performance measures for state agencies. Achieving
the benchmarks would be beyond the reach of state and local governments
alone, and would require the combined efforts of citizens, businesses,
advocacy groups, charitable organizations, and academic researchers.
While the Oregon Progress Board was intended to be bipartisan, the
governor and the majority of members of both houses of the state
legislature were Democrats when it was established.
A variety of citizen groups participated throughout 1990 in identifying
the Oregon Benchmarks, and the Progress Board met monthly to oversee
the process. The Oregon Shines strategy was divided into six topics,
with teams of citizens assigned to develop and identify preliminary
indicators. The preliminary recommendations were then presented to the
public in statewide meetings attended by about 500 citizens, and
another 200 organizations and individuals contributed written comments.
Based on that input, the Progress Board developed a master list of 158
benchmarks. Where possible, for each benchmark, historical data were
presented for 1970 and 1980; baseline data were presented for 1990; and
future short- and long-term goal or target levels were set for 1995,
2000, and 2010. In general, the long-term goal levels were ambitious,
based on aspirations for society, and not necessarily realistic.
Examples of ambitious target levels for 2010 included that there should
be no children living below the federal poverty line, there should be
no pregnant women using illicit drugs, and 97 percent of teenagers
should graduate from high school (up from 87 percent in 1990). Early in
1991, the Progress Board sent the set of 158 benchmarks to 18
legislative committees, which recommended some amendments. The state
legislature unanimously adopted the Oregon Benchmarks in 1991.
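As a simplified illustration of how a single benchmark record described
above could be organized--historical values for 1970 and 1980, a 1990
baseline, and targets for 1995, 2000, and 2010--the following Python
sketch uses the high school graduation benchmark, with its 87 percent
baseline and 97 percent target for 2010 cited in the text; the interim
targets and the field names are hypothetical.
    from dataclasses import dataclass
    from typing import Dict, Optional
    @dataclass
    class Benchmark:
        name: str
        historical: Dict[int, Optional[float]]   # 1970 and 1980, where available
        baseline_1990: Optional[float]
        targets: Dict[int, float]                # targets for 1995, 2000, 2010
        def gap_to_target(self, year, current_value):
            # Percentage points separating a current value from the target.
            return self.targets[year] - current_value
    graduation = Benchmark(
        name="Percent of teenagers graduating from high school",
        historical={1970: None, 1980: None},           # not always available
        baseline_1990=87.0,                            # baseline cited in text
        targets={1995: 90.0, 2000: 93.0, 2010: 97.0},  # interim targets invented
    )
    print(graduation.gap_to_target(2010, graduation.baseline_1990))  # 10.0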
Beginning to Link Government Programs to the Benchmarks (1991-93):
After the 1990 elections, a number of developments related to the state
political environment and the budget began to affect the Oregon
Progress Board and the use of the benchmarks. Republicans gained
control of the state House of Representatives for the first time in 20
years, although Democrats still controlled the Senate and the
governorship. In 1990, Oregon voters also approved a property tax cap
(known as Measure 5), which required that state funds replace any resulting lost
revenues for local school districts. As a result, it was anticipated
that more state funds would need to be allocated for education, which
would make it necessary to reduce noneducation spending. The new
governor, Barbara Roberts, was a strong supporter of the Progress
Board. In anticipation of the need to make budget reductions, she tried
to use the benchmarks as a tool to help set priorities during the 1993
budget preparation process. Under this approach, state agencies had to
submit base budgets at only 80 percent of the level of the prior year,
but could receive higher percentages if they could show that their
programs contributed to lead benchmarks. The Progress Board estimated
that the policy shifted approximately $130 million in the budget toward
programs aimed at the lead benchmarks.
The Oregon Benchmarks became more important to state government
agencies, although since the benchmarks had not been developed to
reflect agency programs or structures, there was not always a good fit
between services provided by the agencies and the benchmarks. One
result of this disconnect between agency programs and the benchmarks
was upward pressure on the number of benchmarks, as agencies and
special interest groups pressed the Progress Board to add benchmarks to
reflect their specific areas of work. By 1993, the number of benchmarks
had increased from 158 to 272. The Progress Board, in conjunction with
the Oregon Business Council, sponsored 29 community meetings across the
state that engaged about 2,000 citizens in reviewing the strategic
vision and benchmarks. The state legislature approved the Oregon
Benchmarks again in 1993, although not unanimously this time.
Oregon Progress Board and Benchmarks Affected by Politics (1994-95):
In the 1994 elections, Republicans gained control of both chambers of
Oregon's legislature, a development that eventually had serious
implications for the future of the Progress Board and Oregon
Benchmarks. The statute that had established the Oregon Progress Board
required that the state legislature vote to reauthorize it in 1995, or
it would automatically "sunset." John Kitzhaber, the new Governor who
took office in 1995, supported the Oregon Progress Board and Oregon
Benchmarks system. The Oregon Benchmarks also influenced several local
governments (for example, through the Portland Multnomah County
Benchmarks), a few private statewide agencies within Oregon, and the
states of Minnesota and Florida in developing their own benchmark
initiatives, and won recognition from a number of prestigious
organizations.[Footnote 114] However, some legislators, particularly
Republicans, perceived that state agencies were trying to use the
ambitious target levels set for the Benchmarks to argue for increased
funding for their programs. Some felt that the Oregon Benchmarks
represented a partisan and ideological agenda. Overall, the legislature
was evidently not persuaded of the value of the Oregon Benchmarks
system, because in 1995, a Republican caucus did not approve the bill
to reauthorize the Progress Board. However, a 2-year budget for the
Progress Board had already been approved by the legislature, and
Governor Kitzhaber decided to keep it alive via an executive order.
This effectively meant that the Oregon Progress Board received a 2-year
reprieve during which it could try to regain the support of the
legislature for reauthorization.
New Directions (1995-2001):
In 1995, the Oregon Progress Board received a new executive director,
who worked to address concerns expressed by legislative critics and to
win more support by making changes to the Oregon Benchmarks and the
Progress Board. In 1996, the governor instructed state agencies to
identify benchmark linkages
in their budgets and describe how proposed programs would contribute to
achieving benchmark targets. The Progress Board's director advocated
updating the Oregon Shines strategy, based on the argument that
Oregon's economic situation had substantially improved and new issues
had become relevant since the original strategy was issued in 1989. To
update the strategy, the governor established a 45-member task force
consisting of past and present Progress Board members, a Republican
senator, a Democratic representative, local politicians, independent
citizen leaders, and individuals from universities and nonprofit
organizations. In 1997, Oregon Shines II: Updating Oregon's Strategic
Plan was released. Emphasis was also placed on increasing support from
state legislators. As part of this process, the indicators were
revisited and the total number was reduced to 92, and target levels
were made more realistic. Around the same time, the Oregon Progress
Board released the first report card on progress toward achieving the
benchmarks. The Progress Board staff succeeded in winning support for
the Oregon Benchmarks system from several Republican legislators, and
met individually with all of the state senators and other key leaders.
These efforts paid off when, in the spring of 1997, the state
legislature permanently reauthorized the Oregon Progress Board.
With the newly reauthorized Progress Board came a new emphasis on using
the Oregon Benchmarks system as an accountability tool, although the
Progress Board did not want the system to lose its value as a visioning
tool. According to
some observers, the work of the Progress Board was moving toward
performance measurement of state agency programs in order to maintain
the support of key legislators. In March 1999, the Progress Board
presented a benchmark report to the legislature, which assessed
progress toward achievement of each benchmark with a letter grade.
Efforts were made to align the activities of state agencies with the
benchmarks. In 2001, legislation moved the Progress Board to the
Department of Administrative Services (the central administrative
agency of state government, responsible for budget development) and
added a significant focus on helping state agencies link their
performance measures to Oregon Benchmarks. The bill also mandated that
the Progress Board write guidelines on performance measures for state
agencies, and added one legislator from each chamber of the legislature
as a voting member of the Progress Board. The Progress Board issued a
report showing to which benchmarks particular state agencies were
contributing.
Recent Developments (2002 through the Present):
In a special legislative session in the fall of 2002, the Oregon
Progress Board lost all its funding when state government spending was
drastically cut to deal with an ongoing state fiscal downturn. The
current governor, Theodore Kulongoski, managed to set aside some funds
to keep the Progress Board going through the end of the 2001-2003
budget period. As of the fall of 2003, the statute authorizing the
Oregon Progress Board was still in effect, and there was authority for
three staffing slots and modest funding for 2 years. According to the
director, the Progress Board has only managed to survive because it is
so involved in doing performance measurement work that the legislature
considers important. Another observer said it was very difficult to
keep the board "alive" during the last legislative session, because
there is not that much interest in it. The Oregon Benchmarks system
continues to evolve in the direction of serving as a performance
measurement tool for state government. Many leaders we interviewed
believe that this new focus might make the Oregon Benchmarks more
relevant and useful. Recently, the Progress Board assisted the state's
Department of Administrative Services and the governor's budget office
in reviewing the programs of all 87 state agencies and assessing how
the goals and performance measures in their strategic plans link to the
Oregon Benchmarks. The Progress Board also helped state agencies to
develop performance measures as part of their budget requests. In the
future, agencies will be required to explain how their programs tie to
benchmarks in their annual performance measure reports.
Key Themes from the Oregon Experience:
The nearly 15 years of the Oregon Benchmarks experience highlight
several themes, including the importance of having bipartisan and
broad-based support, the extent to which a system can evolve from its
original purpose, and the advantages and disadvantages of being a
completely government-led and funded system.
* Bipartisan and broad-based support is important. The Oregon
experience suggests that support for an indicator system could be
vulnerable if it is perceived as being the creation of a particular
political party, a particular leader, or a single branch of government.
When the Oregon Progress Board was first created, the governor and
majorities in both chambers of the state legislature were from the same
political party. The Progress Board and the Oregon Benchmarks system
continued to enjoy support from the next three governors in succession,
who also belonged to the same party. It was clearly perceived as driven
by the executive branch and the governors' political party. Support for
the indicator system from the legislature decreased after the opposing
political party gained the majority in the legislature. The Progress
Board and the Oregon Benchmarks have come close to being eliminated
twice, due at least in part to perceptions of political partisanship.
Recently, attempts have been made to broaden support across party lines
and increase collaboration with the legislature.
* Indicator systems evolve over time. The Oregon experience also
illustrates that an indicator system can change significantly over time
as its organizers and supporters respond to changes in political or
economic circumstances. Today, the Oregon Progress Board continues its
work, monitoring and reporting on benchmark indicators that track
progress toward future targets. In the 15 years since the state
legislature first established the Oregon Progress Board (13 years since
it approved the first set of Oregon Benchmarks), the system has evolved
from a participatory visioning process intended to develop an economic
strategy and broad goals for the kind of society Oregonians aspired to
have, to its present emphasis on performance measurement and linking
the programs of state agencies to achieving the benchmarks. This shift
reflects an effort to increase the system's relevance and usefulness to
state executive and legislative branch officials.
* Public sector status has advantages and disadvantages. The Oregon
Progress Board and the Oregon Benchmark system have been funded by and
housed within the state government--specifically the executive branch-
-from the beginning. The Oregon experience demonstrates that being led
and financed by the government can have advantages and disadvantages.
Having the support of a high-level public official, such as the
governor, can lead to a great deal of exposure and initial use for an
indicator system. However, such support can also make a system
vulnerable once that leader leaves office or government fiscal
priorities change. Several different governors championed the Oregon
Benchmark system, which helped secure funding and resources for the
program. A downside of the governors' patronage, however, has been the
perception of political partisanship, as described above. In addition,
reliance on state funding made the Progress Board vulnerable to
elimination during the severe fiscal downturn that began in 2001. Since
then, the state, which has one of the
highest unemployment rates in the country, has been forced to make
large budget cuts, placing programs that are perceived to be
nonessential, like the Oregon Benchmarks, in jeopardy.
[End of section]
Appendix VI: The Role of Indicators in the European Union:
Over the past 50 years, efforts to create an integrated European Union
(EU) have expanded from an agreement among six countries to form a coal
and steel common market to a union of 25 countries with a wide array of
common policies and institutions. Indicators and related systems have
played an important role in helping to monitor the position and
progress of member countries and to assess Europe in relation to other
democracies throughout the world, including the United States. The EU
has numerous well-developed and accepted indicator systems specific to
topical areas and domains, as well as those that recognize the
relationships among economic, social and cultural, and environmental
indicators. The European Structural Indicators system, which is linked
to the Lisbon Strategy, is widely accepted as the largest scale, most
comprehensive indicator effort at the EU level.
Background on the EU:
The EU is a treaty-based, institutional framework that facilitates
economic and political cooperation among its current 25 member states-
-Austria, Belgium, Cyprus, the Czech Republic, Denmark, Estonia,
Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia,
Lithuania, Luxembourg, Malta, the Netherlands, Poland, Portugal,
Slovakia, Slovenia, Spain, Sweden, and the United Kingdom.[Footnote
115] The EU is the latest stage in a gradual process of European
integration that began after World War II to promote peace and economic
prosperity in Europe. Its founders hoped that by creating communities
of shared sovereignty--initially in areas of coal and steel production,
trade, and nuclear energy--another war in Europe would be prevented.
In the last decade, EU member states have taken significant steps
toward political integration as well, with decisions to develop a
common foreign policy and closer police and judicial cooperation. EU
members work together through common institutions. The EU has been
built through a series of binding treaties.
The institutions of the EU are divided into three "pillars," and
decision-making processes differ in each. Pillar one is the European
Community, which encompasses economic, trade, and social policies
ranging from agriculture to education. In pillar one areas--by far the
most developed and far-reaching--members have largely pooled their
national sovereignty and work together in EU institutions to set policy
and promote their collective interests. Decisions in pillar one often
have a supranational character and most are made by a majority voting
system. Pillar two aims to establish a common foreign and security
policy to permit joint action in foreign and security affairs. Pillar
three seeks to create a justice and home affairs policy to foster
common internal security measures and closer police and judicial
coordination. Under pillars two and three, members have agreed to
cooperate but decision making is intergovernmental and by consensus.
Thus, members retain more discretion and the right to veto certain
measures.
The EU is governed by several institutions. They do not correspond
exactly to the traditional division of powers in democratic
governments. Rather, they embody the EU's dual supranational and
intergovernmental character.
The European Council brings together the heads of state or government
of the member states and the Commission President at least twice a
year. It acts principally as a strategic guide and driving force for EU
policy.
The Council of the European Union (Council of Ministers) consists of
ministers from the national governments. As the main decision-making
body, the council enacts legislation based on proposals put forward by
the European Commission (described below). Different ministers
participate depending on the subject under consideration (e.g., finance
ministers could convene to discuss budgetary policy). Most decisions
are made by majority vote, but some areas, such as taxation, require
unanimity. The presidency of the council rotates among the member
states every 6 months.
The European Commission (EC) is essentially the EU's executive
apparatus and has the sole right of legislative initiative. It upholds
the interests of the Union as a whole and ensures that the provisions
of the EU treaties are carried out properly. The 25 commissioners are
appointed by the member states for 5-year terms. Each commissioner
holds a distinct portfolio, for example, agriculture. The EC represents
the EU internationally and negotiates with other countries primarily in
areas falling under pillar one. However, the EC is primarily an
administrative entity that serves the Council of Ministers.
The European Parliament consisted of 732 members as of June 2004. They
are directly elected in each member state for 5-year terms under a
system of proportional representation based on population. Unlike
national parliaments, the Parliament cannot initiate legislation, but
it shares "co-decision" power with the Council of Ministers in a number
of areas and can amend or reject the EU's budget.
The Court of Justice interprets EU law and its rulings are binding. A
Court of Auditors monitors the EU's financial management. A number of
other advisory bodies represent economic, social, and regional
interests.
The European Central Bank (ECB) was established in 1998, under the
Treaty on European Union, to introduce and manage the new common
European currency (the euro), shared by 12 of the member countries. The
ECB is also responsible for framing and implementing the EU's economic
and monetary policy.
Evolution of Indicators in the European Union:
From the beginning, the European Union and its governing institutions
have used indicators as the basis for monitoring conditions, tracking
progress, and making decisions. As the EU has expanded into new areas,
indicators have played an increasingly important role. Many of the
policy agreements among member countries are accompanied by agreements
to develop indicators to measure progress toward achieving the goals
and objectives to which members have agreed. Because of the limited
powers of the EU compared to those of the sovereign member countries,
and because of the great diversity among the member countries, the EU
has promoted evidence-based decision making and the use of high-
quality, impartial, and comparable information as a way of enhancing
the prospects for making progress toward EU-wide goals and objectives.
Closely monitoring levels of progress and encouraging action toward
these goals and objectives are important functions of the EC because in
most cases it is up to individual, sovereign countries to determine how
and to what extent to pursue them.
Economic indicators serve as the basis for a number of key decisions
within the EU. In fact, use of some indicators is written into
important treaties. In one specific example, the Maastricht Treaty laid
out criteria to determine when countries are ready to adopt the euro--
the single European currency that 12 members currently use. Among other
things, the treaty specifies that the annual government deficit of the
country, defined as the ratio of the annual deficit to gross domestic
product, must not exceed 3 percent at the end of the preceding
financial year. The treaty also stipulates that the country must
achieve a high degree of price stability. To monitor these treaty-
driven criteria, the EU uses sets of key indicators. For example, the
European Central Bank has worked with the EC to develop a harmonized
index of consumer prices, a key indicator for monetary policy and the
monitoring of inflation. The EU has also developed a set of euro-
indicators to measure economic development in member countries.
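As a simple illustration of how the deficit criterion could be checked,
the following Python sketch computes the deficit-to-GDP ratio and
compares it with the 3 percent ceiling; the country figures are
invented, and other convergence criteria, such as price stability, are
not modeled.
    def deficit_ratio(annual_deficit, gdp):
        # Annual government deficit as a share of gross domestic product.
        return annual_deficit / gdp
    def meets_deficit_criterion(annual_deficit, gdp, ceiling=0.03):
        # True if the deficit ratio does not exceed the 3 percent ceiling.
        return deficit_ratio(annual_deficit, gdp) <= ceiling
    # Hypothetical example: a 25 billion euro deficit against a 1 trillion
    # euro GDP yields a 2.5 percent ratio, which would satisfy the criterion.
    print(meets_deficit_criterion(25e9, 1e12))   # prints True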
Eurostat, which is a component of the EC, is the statistical agency of
the EU. Eurostat does not collect much data on its own--instead it
relies on data collected by member countries. Accordingly, Eurostat
works with the national statistical offices of the member countries to
obtain the required information. Eurostat has worked with member
countries to harmonize indicator data, improve the quality of
indicators, expand coverage for acceding countries, conduct
methodological work on new indicators, maintain databases, and provide
technical assistance in the development and use of indicators. Because
of the great diversity of the member countries in a wide variety of
areas, including the maturity of their statistical systems, the task of
obtaining high-quality, comparable, and harmonized data for indicator
systems has been a major challenge.
The increasing demands for indicators and data and, in particular,
their increasing use for monitoring EU policies, called for a more
formalized structure for European statistics. In 1997, the EU agreed to
include an article on statistics in the Treaty of Amsterdam. It
supplements the Statistical Law of 1997, which provides the legal
framework for EC statistics and sets out the division of tasks between
the national statistical institutes and Eurostat. The Statistical Law
publicly and legally established basic principles for compiling and
disseminating statistics, in particular those of impartiality and
independence, and it guarantees confidentiality.
Development and Implementation of the European Structural Indicators:
The development of the comprehensive European Structural Indicators
system was a major milestone in the evolution of indicators at the EU
level. It was at a meeting of the European Council in Lisbon, Portugal,
in 2000 that the EU established a strategic goal for the next decade: "
…becoming the most competitive and dynamic knowledge-based economy in
the world capable of sustainable economic growth with more and better
jobs and greater social cohesion." Moreover, the member countries
agreed to a series of more specific objectives and targets. Their
agreements are laid out in the Lisbon Strategy. For example, one
objective was to raise the employment rate to 70 percent by 2010 and
increase the number of women in employment to more than 60 percent by
2010. Another less quantitative goal was the creation of an information
society for all businesses and citizens.
The European Council also acknowledged the need for regular discussion
and assessment of the progress made in achieving the Lisbon Strategy's
goal and related objectives on the basis of commonly agreed-to European
Structural Indicators. To this end, it invited the EC to draw up an
annual synthesis report on progress on the basis of indicators relating
to employment, innovation, economic reform, and social cohesion. The
Lisbon Strategy acknowledged the links between the economic and social
arenas and the necessity for more comprehensive indicators to measure
progress. In the following year, the Gothenburg European Council added
the domain of the environment to the areas already covered by the
Lisbon Strategy and the European Structural Indicators, thereby making
it a comprehensive strategy cutting across the economic, environmental,
and social and cultural domains.
In the months that followed, the EC and the Council implemented a
structured process for defining, creating, and using the European
Structural Indicators needed for this policy process. The evolving
process has proved to be an important vehicle for achieving consensus
within the EC directorates and among member countries. To identify the
indicators and develop an indicator system, the EC convened a series of
meetings through which it established processes that continue to be
used to this day. Specifically, it established a committee consisting
of officials from all relevant EC directorates (e.g., research,
education and culture, environment, and employment and social affairs)
whose purpose is to discuss which indicators to include and develop a
draft slate of indicators. The economic and financial affairs
directorate coordinates the European Structural Indicators selection
process. Eurostat participates in this committee primarily as a
technical advisor. For example, Eurostat staff advises on what
indicators or data exist and their levels of quality and reliability.
In some cases, indicators already exist, while in others they do not.
For example, identifying indicators of employment to include in the
European Structural Indicators was relatively easy because a well-
developed employment indicator system already existed. In contrast,
identifying appropriate indicators in the area of science and
technology has been more challenging. The EU attempts to reach
consensus regarding indicator selection by applying certain criteria
and balancing the number of indicators among the various policy goals.
As a part of its work to identify appropriate indicators, the EU has
adopted a set of criteria for selection of the European Structural
Indicators. Indicators that are part of the system should be mutually
consistent; policy relevant, or linked to policy goals already
established; easy to understand by the target audience; available in a
timely fashion; available for all or nearly all member states, acceding
states, and candidate states; comparable among these states as well as
to external parties, such as the United States; selected from reliable,
official sources; and should not impose undue data collection burdens
on members.
Once the committee has achieved consensus, the EC forwards a proposal
to the Council of Ministers, which consists of officials from member
countries. The Council of Ministers discusses the proposed indicators
and works with the EC to agree on a final list. Once agreement on the
list of indicators has been ratified by the Council, a separate EC
committee, led by Eurostat, works to obtain indicators or data to
compute the desired indicators. Each year, the EC issues a spring
report to the Council that discusses the results of the indicators.
The list of key indicators has been reassessed every year, taking into
account political priorities as well as progress with regard to
development of indicators. The initial list of European Structural
Indicators adopted in 2000 for the 2001 spring report contained 35
indicators. For the 2002 report, the list grew to 42 indicators with
107 subindicators. In the 2003 report, the list of indicators remained
the same, but the coverage was extended to include the 10 candidate
countries to the extent indicators were available. Concerned about the
growing list of proposed indicators, the EC agreed in 2003 to designate
14 indicators as headline indicators for the 2004 report, allowing
leaders to focus on the most important measures of progress in the
Lisbon Strategy. Further, they decided to revise the selection of key
indicators every 3 years rather than annually, making it easier to
assess levels of progress over time. The EC continues to collect and
maintain the larger database of indicators. While the discussion on
progress made towards the Lisbon objectives in the annual spring report
focuses on the headline indicators, reference is made to indicators in
the database when appropriate.
Participants from both the EC and member countries agree that the
process of collaboration is working well. In fact, the processes and
practices established for the European Structural Indicators system,
such as the selection criteria, are increasingly being utilized as a
model for other EU indicator systems.
Key Themes from the EU Experience:
Several themes emerge from studying the experience of the EU and its
governing institutions in developing and implementing indicator
systems, particularly the European Structural Indicators. This system
was started for a specific purpose, with a defined target audience in
mind, and was designed accordingly. The EU's experience demonstrates
the usefulness of having transparent, repeatable processes in place for
coordinating the work of all participants in the selection and revision
of the indicator set and in the analysis and reporting of results. In
addition, the European Structural Indicators system is increasingly
being used as a best practices model for the rest of the EU's new and
existing indicator systems.
* Identifying specific purposes and target audiences is important. The
European Structural Indicators system--like nearly all key topical area
or domain-specific indicator systems--is linked to goals and
objectives that have been ratified in various treaties or otherwise
agreed to by member countries. EU officials told us their key
indicators generally serve to assess progress in meeting these Union-
wide goals and objectives and then to encourage lower-performing
countries to take action to better meet them. The EC can issue country-
specific recommendations and does so regularly if, based on a review of
the key indicators, it finds a particular country is not making
sufficient progress. Merely publishing the comparative figures on
performance of EU member countries helps influence leaders to improve
performance. Accordingly, indicators have become an effective policy
and political tool for the EU, and in recent years the use of
indicators and the demand for data has increased. Moreover, the
targeted audience of EU indicator systems, including the European
Structural Indicators, is fairly narrow. EU officials readily
acknowledge that the intended audiences of their key indicator systems are
primarily policymakers in the EU and member countries--not necessarily
the public, advocacy groups, or researchers. For this targeted
audience, indicators have increased in importance. However, other
possible user groups may access the information, as it is publicly
disseminated on the Eurostat Web site
([Hyperlink, http://europa.eu.int/comm/eurostat/]).
* Structured and transparent processes are also important. Ensuring
coordination among diverse stakeholders in selecting indicators and
assessing progress has been essential to the development of the
European Structural Indicators and other EU indicator systems as well.
This structured and transparent process of collaboration provides for
regular participation of representatives of the EU and member
countries, and the EC and the Council, including political decision
makers and policy and technical experts. To make this all work, EU and
country officials stressed that the selection of indicators and the
selection of data to feed into those indicators should be well-
coordinated processes, with indicator decisions left up to elected and
appointed officials while data selection, collection, and coordination
is left up to the appropriate experts. In addition, it is important for
the coordinating mechanism (in this case, the EC directorates) to have
highly dedicated, intelligent, experienced, and collaborative staff with
substantive knowledge of the subject areas and training in relevant
disciplines (e.g., statistics, economics, or law). The initiatives to
develop selection criteria, harmonize data from vastly different
countries with different statistical systems, and improve the quality
of available data require a significant investment of time and effort
initially, but become easier over time. The view from selected member
countries we interviewed is that these processes work well.
* The system serves as a best practices model for other efforts. The EU
is also using the European Structural Indicators to better coordinate
other indicator efforts, and is trying to make practices designed for
this system serve as a framework for other efforts to develop
indicators of progress. Specifically, EU officials are expanding their
efforts to establish common definitions, data collection standards,
quality standards, and criteria for selecting indicators--as they have
done in the development of the European Structural Indicators. In fact,
if any directorate is proposing to establish new indicators in its
particular policy domain, it must now submit the indicators for comment
among other EC directorates. According to EC officials, the European
Structural Indicators system is an effective model because it is viewed
as an objective, trustworthy measure of progress. The professional work
of the EC and member countries has led to significant progress in
comparing heterogeneous jurisdictions, harmonizing the indicators to
ensure comparability and quality, moving from a national to a European
level, dealing with different levels of resources and maturity of
statistical systems, and balancing national priorities among the member
countries.
[End of section]
Appendix VII: Selected Bibliography on Indicator Systems:
Comprehensive Key Indicator Systems:
Australian Bureau of Statistics. Measures of Australia's Progress 2004.
Canberra: 2004.
Measures of Australia's Progress uses a discussion of human capital,
social capital, natural capital, and financial capital indicators to
assess the extent to which Australia has progressed.
Bok, Derek. The State of the Nation. Cambridge, Mass.: Harvard
University Press, 1996.
The State of the Nation examines the areas of economic prosperity,
quality of life, equality of opportunity, personal security, and
societal values, and compares the progress made in these areas with
progress made in other countries.
The Boston Foundation. Creativity and Innovation: A Bridge to the
Future: Boston Indicators Report 2002. Boston: 2002.
This report provides indicators of civic involvement, the economy,
education, public health, and other measures of well-being.
Chicago Metropolis 2020. 2002 Metropolis Index. Chicago: 2002.
The 2002 Metropolis Index is intended to give residents of the region
benchmarks to assess how the region is doing, and to help them consider
what must be done to sustain its status as a globally competitive
region.
Committee on Geography, Committee on Identifying Data Needs for Place-
Based Decision Making. Community and Quality of Life: Data Needs for
Informed Decision Making. Washington, D.C.: National Academy Press,
2002.
Community and Quality of Life examines the concept of livable
communities, the selection of livability indicators, data needs, and
measurement and analysis issues related to the indicators.
Conference Board of Canada. Performance and Potential 2003-2004.
Ottawa: 2004.
This report identifies issues that need to be addressed in order to
maintain and improve Canada's quality of life.
Global Reporting Initiative. 2002 Sustainability Reporting Guidelines.
Amsterdam: 2003.
The 2002 Sustainability Reporting Guidelines organizes "sustainability
reporting" in terms of economic, environmental, and social performance
(also known as the triple bottom line).
Jacksonville Community Council, Inc. Quality of Life Progress Report: A
Guide for Building a Better Community. Jacksonville, Fla.: 2003.
This report measures progress toward goals covering 10 quality of life
topics for the Jacksonville, Florida area.
Maine Economic Growth Council. Measures of Growth 2004. Augusta, Maine:
2004.
Measures of Growth 2004 provides the results of 58 indicators in the
areas of the economy, community, and the environment.
Miringoff, Marc and Marque-Luisa Miringoff. The Social Health of the
Nation: How America is Really Doing. New York: Oxford University Press,
1999.
The Social Health of the Nation presents a variety of indicators of
social well-being over several decades.
National Audit Office, United Kingdom. Good Practice in Performance
Reporting in Executive Agencies and Non-Departmental Public Bodies.
London: 2000.
Good Practice in Performance Reporting in Executive Agencies and Non-
Departmental Public Bodies discusses good practices in government
performance reporting to ensure transparent, accountable, and efficient
government services.
New York City Department of City Planning. 2002 Report on Social
Indicators. New York: 2002.
2002 Report on Social Indicators is a compendium of data on the
economic, social, physical, and environmental health of the city. The
data are compiled from city, state, and federal sources and summarized
on either a calendar or fiscal year basis.
Oregon Progress Board. Is Oregon Making Progress? The 2003 Benchmark
Performance Report. Salem, Oregon: 2003.
Is Oregon Making Progress? is a report on the comprehensive effort to
describe progress Oregonians have made in achieving their targets for
90 benchmarks.
President of the Treasury Board of Canada. Canada's Performance 2003.
Ottawa: 2003.
Canada's Performance 2003 reports on the quality of life for Canadians.
Southern California Association of Local Governments. State of the
Region 2003: Measuring Progress in the 21st Century. Los Angeles: 2003.
State of the Region 2003 assesses Southern California's performance
with respect to three overall goals: raise the standard of living,
enhance the quality of life, and foster equal access to resources.
Steering Committee Review of Commonwealth/State Services, Australia.
Report on Government Services 2004. Canberra: 2004.
Report on Government Services 2004 details the performance of
government service provision in Australia in education, health,
justice, emergency management, community services, and housing.
United Nations General Assembly. Implementation of the United Nations
Millennium Declaration: Follow up to the Outcome of the Millennium
Summit. New York: 2002.
Implementation of the United Nations Millennium Declaration: Follow up
to the Outcome of the Millennium Summit details the progress that the
United Nations has made on its millennium development goals, which are
to (1) halve extreme poverty and hunger, (2) achieve universal primary
education, (3) empower women and promote equality between women and men,
(4) reduce under-five mortality by two-thirds, (5) reduce maternal
mortality by three-quarters, (6) reverse the spread of diseases,
especially HIV/AIDS and malaria, (7) ensure environmental sustainability,
and (8) create a global partnership for development with targets for
aid, trade, and debt relief.
University at Buffalo Institute for Local Governance and Regional
Growth. State of the Region Progress Report 2002. Buffalo, New York:
SUNY University at Buffalo Institute for Local Governance and Regional
Growth, 2002.
State of the Region Progress Report 2002 offers a second update of the
1999 baseline report, with two components--one focused on the data-
driven performance measures, the other a second look at the
opportunities and challenges that will shape Buffalo-Niagara's progress
into the new century.
Topical Area Indicator Systems:
Chrvala, Carole A. and Roger J. Bulger, eds. Leading Health Indicators
for Healthy People 2010: Final Report, Committee on Leading Health
Indicators for Healthy People 2010, Division of Health Promotion and
Disease Prevention, Institute of Medicine. Washington, D.C.: National
Academy Press, 1999.
Leading Health Indicators for Healthy People 2010 describes the efforts of the
Committee on Leading Health Indicators to develop leading health
indicator sets that could focus on health and social issues and evoke a
response and action from the general public and the traditional
audiences for Healthy People.
Committee to Evaluate Indicators for Monitoring Aquatic and Terrestrial
Environment, Board on Environmental Studies and Toxicology, Water
Science and Technology Board, Commission on Geosciences, Environment,
and Resources, National Research Council. Ecological Indicators for the
Nation. Washington, D.C.: National Academy Press, 2000.
Ecological Indicators for the Nation suggests criteria for selecting
useful ecological indicators, provides methods for integrating complex
ecological information into useful indicators, proposes
indicators that would meet these criteria, examines the state of data
that would be used to calculate these indicators, and offers guidance on
communicating and storing ecological indicators.
Council of Economic Advisers, Executive Office of the President. The
Economic Report of the President. Washington, D.C.: 2004.
The Economic Report of the President is a discussion of selected
economic issues prepared by the Council of Economic Advisers and tables
of economic data.
Department of Health and Human Services. Healthy People 2010:
Understanding and Improving Health. Washington, D.C.: Department of
Health and Human Services, 2000.
Healthy People 2010 provides a comprehensive set of disease prevention
and health promotion objectives for the United States to achieve by
2010, with related indicators.
Federal Interagency Forum on Aging Related Statistics. Older Americans
2000: Key Indicators of Well-being. Washington, D.C.: 2000.
Older Americans 2000 contains statistics regarding the population,
economics, health status, health risks and behaviors, and health care
of older U.S. citizens.
Federal Interagency Forum on Child and Family Statistics. America's
Children: Key National Indicators of Well-Being 2003. Washington, D.C.:
Federal Interagency Forum on Child and Family Statistics, 2003.
America's Children provides 25 key indicators on the well-being of
children in the areas of economic security, health, behavior and social
environment, and education.
H. John Heinz III Center for Science, Economics, and the Environment.
The State of the Nation's Ecosystems: Measuring the Lands, Waters, and
Living Resources of the United States. Cambridge, United Kingdom:
Cambridge University Press, 2002.
The State of the Nation's Ecosystems is a blueprint for periodic
reporting on the condition and use of ecosystems in the United States.
Kids Count. Kids Count Data Book 2004. Baltimore: Annie E. Casey
Foundation, 2004.
Kids Count Data Book provides information about the physical health,
mental health, economic well-being, and educational achievements of
children in the United States. Data are available nationwide and for
each state.
National Research Council. Grading the Nation's Report Card: Evaluating
NAEP and Transforming the Assessment of Educational Progress, Committee
on the Evaluation of National and State Assessment, Commission on
Behavioral and Social Sciences and Education, National Research
Council. Washington, D.C.: National Academy Press, 1999.
Grading the Nation's Report Card describes the National Assessment of
Educational Progress' national assessment, the state assessment
program, the student performance standards, and the extent to which the
results are reasonable, valid, and informative to the public.
Norwood, Janet and Jamie Casey, eds. Key Transportation Indicators: A
Summary of a Workshop, Committee on National Statistics, Division of
Behavioral and Social Sciences and Education, National Research
Council. Washington, D.C.: National Academy Press, 2002.
Key Transportation Indicators discusses efforts to review current
transportation indicators and issues associated with their uses, as
well as what kinds of additional indicators are needed.
Starke, Linda, ed. State of the World 2004: Richer, Fatter, and Not
Much Happier. New York: W.W. Norton and Co., 2004.
State of the World 2004 provides information on a variety of issues in
sustainable development such as climate change, farming, toxic
chemicals, and other areas.
UNICEF. The State of the World's Children 2003. New York: 2003.
The State of the World's Children 2003 contains a comprehensive set of
economic and social indicators on the well-being of children worldwide.
World Health Organization. World Health Report 2002. New York: 2002.
World Health Report 2002 measures the amount of disease, disability, and
health that can be attributed to certain risks and calculates how much
of the burden is preventable.
Background Sources:
Berry, David, Patrice Flynn, and Theodore Heintz. "Sustainability and
Quality of Life Indicators: Toward the Integration of Economic, Social
and Environmental Measures," Indicators: The Journal of Social Health,
vol. 1, no. 4 (Fall 2002).
"Sustainability and Quality of Life Indicators" provides discussion of
approaches to integrate social, economic and environmental indicators
and expanding the scope of our national data system.
Caplow, Theodore, Louis Hicks, and Ben J. Wattenberg. The First
Measured Century. Washington, D.C.: AEI Press, 2001.
The First Measured Century describes how using statistics to measure
social conditions gained importance throughout the United States
between 1900 and 2000.
Department of Health, Education, and Welfare. Toward a Social Report.
Washington, D.C.: 1969.
Toward a Social Report discusses how social reporting could improve the
nation's ability to chart its social progress and to promote more
informed policy decisions.
Gross, Bertram M. Social Intelligence for America's Future: Explorations
in Societal Problems. Boston, Mass.: Allyn and Bacon, Inc., 1969.
Social Intelligence for America's Future is part of a "trial run"
social report ranging from learning and health to crime and the arts.
It discusses information methodology and the use of data to guide
public policy.
For more background information and materials on indicator systems, see
[Hyperlink, http://www.gao.gov/npi/]. These materials were assembled
in advance of the February 27, 2003, forum on Key National Performance
Indicators both to provide background on the subject of national
indicators and to support post-forum efforts.
[End of section]
Appendix VIII: GAO Contact and Contributors:
GAO Contact:
Christopher Hoenig, Managing Director, Strategic Issues, (202) 512-
6779, [Hyperlink, hoenigc@gao.gov]:
Major Contributors:
Ann Calvaresi Barr, Acting Director, Acquisition and Sourcing
Management:
Susan Ragland, Assistant Director, Strategic Issues:
Tom Yatsco, Senior Analyst-in-Charge, Strategic Issues:
Anne Inserra, Senior Analyst, Strategic Issues:
Elizabeth Powell, Senior Analyst, Strategic Issues:
Bob Yetvin, Senior Analyst, Strategic Issues:
Katherine Wulff, Analyst, Strategic Issues:
Peter Zwanzig, Analyst, Strategic Issues:
Elizabeth Morris, Analyst, Norfolk Field Office:
Michael Volpe, Assistant General Counsel, Office of the General
Counsel:
Andrea Levine, Senior Attorney, Office of the General Counsel:
Other Contributors:
Robert Parker, Chief Statistician:
Allen Lomax, Senior Analyst, Strategic Issues:
Bill Trancucci, Senior Analyst, Strategic Issues:
Chase Huntley, Senior Analyst, Natural Resources and Environment:
Atlanta Field Office:
Bernice Benta:
James Cook:
Laura Czohara:
Catherine Myrick:
Boston Field Office:
Kate Bittinger:
Anne Cangi:
Betty Clark:
Melissa Emery-Arras:
Josh Habib:
Christine Houle:
Shirley Hwang:
Denise Hunter:
Chris Murray:
Nico Sloss:
Tatiana Winger:
Chicago Field Office:
Jackie Garza:
Libby Halperin:
Andrew Nelson:
Tarek Mahmassani:
Cory Roman:
Julianne Stephens:
San Francisco Field Office:
Jeff Arkin:
Elizabeth Fan:
Jeanine Lavender:
Janet Lewis:
Mark Metcalfe:
Susan Sato:
(450228):
FOOTNOTES
[1] In preparation for its World Indicators Forum in November 2004, the
Organisation for Economic Co-operation and Development (OECD)--one of
the major international institutions devoted to indicators, statistical
data, and policy analysis--is developing a "Knowledge Base on National
and International Experiences" of existing and developing national
systems in the 30 OECD member nations as well as others like Brazil,
China, and India. See http://www.oecd.org/oecdworldforum.
[2] Expenditures for federal statistical programs were approximately
$4.7 billion in fiscal year 2004.
[3] Fedstats is an on-line effort that provides links to statistical
information from numerous federal agencies. See http://
www.fedstats.gov. The statistical abstract is available online from the
Census Bureau at http://www.census.gov/statab/www.
[4] While federal agencies (e.g., GAO, the Office of Management and
Budget, and major federal statistical agencies) do not play a formal
role in the effort, they regularly communicate, coordinate, offer
routine advice, observe meetings, and exchange professional knowledge.
[5] The National Academies is the umbrella organization for four of the
nation's premier scientific organizations: the National Research
Council, the National Academy of Sciences, the Institute of Medicine,
and the National Academy of Engineering.
[6] GAO, Forum on Key National Indicators: Assessing the Nation's
Position and Progress, GAO-03-672SP (Washington, D.C.: May 2003).
[7] The Boston Indicators Project's categories include civic health,
cultural life and the arts, the economy, education, the environment,
housing, public health, public safety, technology, and transportation.
[8] The term outcome-oriented refers to a general concern with impact
on the conditions of society. Outcome statements range from broad
aspirations (e.g., a healthy population) to specific objectives or
targets for change over a specified time period (e.g., increasing
available jobs by 10 percent over a 4-year period).
[9] The term jurisdiction is used throughout this report to refer to
neighborhoods, communities, cities, regions, states, nations, or other
entities that, by definition, cover a geographic area and incorporate
both public and private activities.
[10] Commission of the European Communities, Report from the European
Commission to the Spring European Council: Delivering Lisbon Reforms
for the Enlarged Union (Brussels: 2004).
[11] Such communities could include, but not necessarily be limited to,
the accountability, statistical, scientific and research, business,
media, civic, leadership, finance, and not-for-profit communities,
including key geographic and demographic groups.
[12] The United States is not directly comparable to the EU, however,
on a range of political, economic, cultural, and geographic dimensions.
[13] These three domains are widely used in the United States and
around the world.
[14] We are using the term "jurisdiction" in this report in the
broadest sense--it could be a neighborhood or community, a state or
local government, a region, or a nation. Therefore, a jurisdiction
could be defined by political or geographical boundaries.
[15] The Conference Board is a nonprofit organization that creates and
disseminates knowledge about management and the marketplace. It works
as a global, independent membership organization in the public
interest.
[16] Other indicator systems exist in each of the three domains. We did
not focus on environmental indicator systems because another
forthcoming GAO product will focus on this domain.
[17] Council of Economic Advisers/Executive Office of the President,
Economic Report of the President (Washington, D.C.: February 2004).
[18] Environmental Protection Agency, Draft Report on the Environment
2003 (Washington, D.C.: 2003).
[19] GAO, Highlights of a GAO Forum: Health Care: Unsustainable Trends
Necessitate Comprehensive and Fundamental Reforms to Control Spending
and Improve Value, GAO-04-793SP (Washington, D.C.: May 2004).
[20] In preparation for its World Indicators Forum in November 2004,
the OECD is developing a "Knowledge Base on National and International
Experiences" of existing and developing national systems in the 30 OECD
member nations, and others like Brazil, China, and India.
[21] The most recent report is Canada's Performance 2003. For more
information, see http://www.tbs-sct.gc.ca/report/govrev/03/cp-
rc1_e.asp#_Toc54511340. Updated reports are issued annually.
[22] The most recent report--Measures of Australia's Progress 2004--was
issued in April 2004. For more information, see http://www.abs.gov.au/
Ausstats/abs@.nsf/0/398ab89dbd6cba6fca256e7d00002636?OpenDocument.
[23] According to OMB estimates, funding for federal statistical
agencies that collect and disseminate information, including many
indicators in nearly every topical area, amounted to over $4.7 billion
for fiscal year 2004.
[24] See Office of Management and Budget, Statistical Programs of the
United States Government, Fiscal Year 2004 (Washington, D.C.: 2003).
[25] The measurement of incomes earned in the United States was a joint
effort by the Department of Commerce and the National Bureau of
Economic Research. The Department of Commerce subsequently assumed all
the work and provided the first measure of U.S. production during World
War II.
[26] GAO, Consumer Price Index: More Frequent Updating of Market Basket
Expenditure Weights Is Needed, GAO/GGD/OCE-98-2 (Washington, D.C.: Oct.
9, 1997).
[27] The composite indexes are the key elements in an analytic system
designed to signal peaks and troughs in the business cycle. The
leading, coincident, and lagging indexes are essentially composite
averages of between 4 and 10 individual leading, coincident, or lagging
indicators. They are constructed to summarize and reveal common turning
point patterns in economic data in a clearer and more convincing manner
than any individual component--primarily because a number of indicators
taken together as a single index has more information than any one
indicator.
[28] Raymond Bauer, ed. Social Indicators (Cambridge, Mass.: MIT Press,
1966).
[29] Department of Health, Education, and Welfare, Toward a Social
Report (Washington, D.C.: 1969).
[30] Office of Management and Budget, Statistical Policy Division,
Social Indicators 1973: Selected Statistics on Social Conditions and
Trends in the United States (Washington, D.C.: 1973); Department of
Commerce, Social Indicators 1976: Selected Data on Social Conditions
and Trends in the United States (Washington, D.C.: 1977); and
Department of Commerce, Social Indicators III: Selected Data on Social
Conditions and Trends in the United States (Washington, D.C.: 1980).
[31] The General Social Survey has been conducted by NORC (formerly
known as the National Opinion Research Center) since 1972. The Bureau
of Justice Statistics has sponsored the National Crime Victimization
Survey since 1973, although it is conducted by the Census Bureau.
[32] Organisation for Economic Co-operation and Development, Living
Conditions In OECD Countries: A Compendium Of Social Indicators (Paris:
1986).
[33] Organisation for Economic Co-operation and Development, Society at
a Glance: OECD Social Indicators 2002 Edition (Paris: 2002).
[34] See http://www.healthypeople.gov.
[35] Department of Health and Human Services, Healthy People 2010:
Understanding and Improving Health (Washington, D.C.: 2000).
[36] Rachel Carson, Silent Spring (Boston, Mass.: Houghton Mifflin,
1962).
[37] Pub. L. No. 92-500, 86 Stat. 816 (codified as amended in 33 U.S.C.
§§ 1251-1387).
[38] Section 305(b) of the Clean Water Act of 1972, 33 U.S.C. § 1315
(b).
[39] See http://www.epa.gov/305b for EPA's past National Water Quality
Inventory reports.
[40] Environmental Protection Agency, National Water Quality Inventory:
2000 Report (Washington, D.C.: 2002).
[41] President's Research Committee on Social Trends, Recent Social
Trends in the United States (Washington, D.C.: 1933).
[42] See http://www.whitehouse.gov/stateoftheunion/2004.
[43] See http://www.fedstats.gov.
[44] See http://www.whitehouse.gov/news/fsbr.html.
[45] U.S. Census Bureau, The Statistical Abstract of the United States
(Washington, D.C.: 2004). See http://www.census.gov/statab/www.
[46] GAO, Forum on Key National Indicators: Assessing the Nation's
Position and Progress, GAO-03-672SP (Washington, D.C.: 2003). Also see
http://www.gao.gov/npi/ for more information.
[47] Since helping to catalyze the effort through the initial forum in
February 2003, GAO has not played a formal role in this effort.
However, GAO and other federal government entities (e.g., OMB and the
White House Council on Environmental Quality) continue to attend
meetings and participate in the ongoing exchange of professional
information and ideas, and to ensure coordination across federal
agencies.
[48] For more information, see http://www.oecd.org/oecdworldforum.
[49] GAO-03-672SP.
[50] See app. III for additional information on the comprehensive
indicator systems we studied for this report.
[51] The Conference Board is a nonprofit organization that creates and
disseminates knowledge about management and the marketplace. It works
as a global, independent membership organization in the public
interest.
[52] National Science Board-National Science Foundation, Science and
Engineering Indicators 2004 (Arlington, Va.: 2004).
[53] Of the national topical area indicator systems we examined, only
one, the Business Cycle Indicators, is not produced by a federal
agency. It is an extension of official indexes previously produced by
the Department of Commerce's Bureau of Economic Analysis that continues
to depend, in part, upon federal statistical information.
[54] 44 U.S.C. § 3504(e)(8).
[55] 44 U.S.C. § 3501 note.
[56] Executive Order No. 13045, 62 Fed. Reg. 19885 (1997).
[57] For more information, see http://www.childstats.gov.
[58] In addition to the original three core agencies--U.S. Census
Bureau, National Center for Health Statistics, and National Institute
on Aging--the members of the Forum on Aging-Related Statistics now
include senior officials from the Administration on Aging, Agency for
Healthcare Research and Quality, Bureau of Labor Statistics, Centers
for Medicare & Medicaid Services, Department of Veterans Affairs,
Environmental Protection Agency, Office of Management and Budget,
Office of the Assistant Secretary for Planning and Evaluation in the
Department of Health and Human Services, and the Social Security
Administration.
[59] Federal Interagency Forum on Aging-Related Statistics, Older
Americans 2000: Key Indicators of Well-being (Washington, D.C.: 2000).
For more information, see http://www.agingstats.gov.
[60] Dale Jorgenson and Steve Landefeld, Blueprint for an Expanded and
Integrated Set of Accounts for the United States, presented at the
Conference on a New Architecture for the U.S. National Accounts, April
16, 2004, Washington, D.C. For more information, see http://
www.nber.org/CRIW/CRIWs04/CRIWs04prog.html.
[61] For a summary of the event, which was held on June 3, 2004, see
http://www.brookings.edu/comm/op-ed/20040603happiness.htm.
[62] For more information, see http://market1.cob.vt.edu/isqols.
[63] Following the 1990 U.S. census, the Indianapolis MSA was defined
as 9 counties. Following the 2000 U.S. census, the MSA was redefined as
10 counties--adding Brown and Putnam and dropping Madison. SAVI
includes 11 counties--all counties from the 1990 and 2000 MSA
definitions.
[64] Oregon Economic Development Department, Oregon Shines (Salem,
Oreg.: 1989). Oregon Progress Board, Oregon Shines II (Salem, Oreg.:
1997).
[65] Ten of the current 25 member states (Cyprus, the Czech Republic,
Estonia, Hungary, Latvia, Lithuania, Malta, Poland, Slovakia, and
Slovenia) joined the EU on May 1, 2004. Two other candidates, Bulgaria
and Romania, hope to complete negotiations and be able to join the EU
by 2007. Another candidate, Turkey, remains in a separate category as
it seeks to comply fully with the EU's political and economic criteria
for membership.
[66] This effort should not be confused with the Baltimore city
government's CitiStat, which is an accountability tool used by the
Mayor and city officials to hold city managers accountable and to
measure government results.
[67] Organizers told us that the general progress index still allows
people to drill down into the individual data points to identify which
factors most hold down the quality-of-life index number and which
factors most positively affect it.
[68] The Worcester Indicator Project in Massachusetts uses a
computerized neighborhood tracking program, adapted from one used by
the City of New York, to collect data on municipal and neighborhood
services. The ComNet survey involves a systematic, neighborhood-by-
neighborhood review of specific physical conditions, such as the state
of sidewalks and street signs, which are recorded and mapped.
[69] These 3 indicators are among the approximately 115 indicators
included in Jacksonville's Indicators for Progress indicator set, each
of which is measured and reported on annually.
[70] Jacksonville Community Council, Inc., Public Education Reform:
Phase I-Assessing Progress (Jacksonville, Fla.: 2003).
[71] Several other comprehensive indicator systems, such as the Boston
Indicators Project, also maintain interactive Web sites where users can
search for data by different characteristics; on the Boston Indicators
Project site, users can pull out data by sector or by one of the
crosscutting filters (including race and ethnicity, children and youth,
sustainable development, and Boston neighborhoods).
[72] Jacksonville Community Council, Inc., Public Education: The Cost
of Quality (Jacksonville, Fla.: 1993).
[73] This graphic was presented in a special publication of the Oregon
Progress Board--the 2001 Benchmark Blue Book--which has not been
updated since then because Oregon has moved to a new performance
measure reporting system.
[74] Commission of the European Communities, Report from the European
Commission to the Spring European Council: Delivering Lisbon Reforms
for the Enlarged Union (Brussels: 2004).
[75] The government published a separate report in November 2002 on
waste-related issues titled Waste Not, Want Not. Strategy Unit, Waste
Not, Want Not: A Strategy for Tackling the Waste Problem in England
(London: November 2002).
[76] See http://indicators.hciflorida.org/indicators.cfm?id=78 for
more information.
[77] Eurostat, the EU's statistical organization, has responsibility
for ensuring development of standard concepts, methods, and technical
standards for the indicators; working with the national statistical
offices of the member countries to obtain data; and consolidating and
harmonizing data to ensure comparability across the member countries.
[78] Derek Bok, The State of the Nation (Cambridge, Mass.: Harvard
University Press, 1996).
[79] The term "Six-Sigma" is now generally used throughout the business
community to refer to comprehensive quality assurance systems that are
focused on continuously increasing the quality of an institution's
products and services through ever more sophisticated systems of
quantitative measurement and organizational improvement.
[80] Pub. L. No. 101-576, § 303.
[81] Pub. L. No. 103-62, 107 Stat. 285 (1993). See GAO, Results-
Oriented Government: GPRA Has Established a Solid Foundation for
Achieving Results, GAO-04-38 (Washington, D.C.: Mar. 10, 2004).
[82] The Government Management Reform Act of 1994, Pub. L. No. 103-356,
§ 405; the Federal Financial Management Improvement Act of 1996, Pub.
L. No. 104-208, § 803; and the Accountability of Tax Dollars Act of
2002, Pub. L. No. 107-289, § 2, have expanded on the reforms enacted by
the CFO Act.
[83] The Dallas Indicators system is a comprehensive database of key
community indicators. It is an effort led by the Dallas Foundation and
the Foundation for Community Empowerment, in collaboration with the
Boston Consulting Group, Belo Corp., and the Dallas Citizens Council.
[84] These principles were developed as guidelines for the whole
process--choice and design of indicators, their interpretation, and
communication of results--to measure and assess progress toward
sustainable development. However, they apply more broadly to
comprehensive key indicator systems irrespective of any organizing
framework. They were developed in 1996 at an international meeting of
measurement practitioners at the Rockefeller Foundation's Study and
Conference Center in Bellagio, Italy. The 10 principles state that an
indicator system should: (1) be guided by a clear vision and goals, (2)
review the whole system as well as its parts and recognize interaction
among the parts, (3) consider equity and disparity within the current
population and over generations, (4) have adequate scope, (5) have a
practical focus, (6) involve openness, (7) have effective
communication, (8) involve broad participation, (9) provide ongoing
assessment, and (10) have institutional capacity.
[85] Alan AtKisson and Tyler Norris et al., The Community Indicators
Handbook (Oakland, Calif.: Redefining Progress, 1997).
[86] The National Academy of Sciences presently houses the Key National
Indicators Initiative--the ongoing U.S. effort to begin laying the
groundwork for a national comprehensive key indicator system.
[87] The U.S. Government Manual lists four entities as "quasi-official
agencies": the Legal Services Corporation, the State Justice Institute,
the United States Institute of Peace, and the Smithsonian Institution.
[88] The Smithsonian was created by an August 10, 1846, act to carry
out the terms of the will of British scientist James Smithson, who had
bequeathed his entire estate to the United States "to found at
Washington, under the name of the Smithsonian Institution, an
establishment for the increase and diffusion of knowledge." His bequest
was $541,379.63.
[89] United Nations Development Programme, Human Development Report
2003 (New York: 2003). For more information, see http://hdr.undp.org/
reports/global/2003.
[90] For more information, see http://www.healthypeople.gov.
[91] Federal Interagency Forum on Aging-Related Statistics, Older
Americans 2000 (Washington, D.C.: 2000). For more information, see
http://www.agingstats.gov.
[92] National Center for Education Statistics, The Condition of
Education 2004 (Washington, D.C.: 2004). For more information, see
http://nces.ed.gov/programs/coe/.
[93] The latest available report (as of Sept. 2004) is Federal Bureau
of Investigation, Crime in the United States 2002 (Washington, D.C.:
2002). For more information, see http://www.fbi.gov/ucr/ucr.htm.
[94] Kids Count, Kids Count Databook 2004 (Baltimore: Annie E. Casey
Foundation, 2004). For more information, see http://www.aecf.org/
kidscount/.
[95] Jacksonville Community Council, Inc., 2003 Quality of Life
Progress Report (Jacksonville, Fla.: 2003).
[96] The two other goal categories are "economy" and "environment."
[97] Since we completed our work, Minnesota Milestones ceased to be an
active system. State officials told us that the Minnesota Milestones
Web site will be maintained but there are no plans to update the data
in the near future.
[98] The Australian Bureau of Statistics began to publish Australian
Social Trends in 1994, Statistics Canada began to publish Canadian
Social Trends in 1986, France's Institut National de la Statistique et
des Études Économiques began to produce Données Sociales in 1973, the German
government began producing Datenreport in 1983, the Netherlands' Social
and Cultural Planning Office began to produce the Social and Cultural
Report in 1974, and the United Kingdom's Central Statistical Office
began to produce Social Trends in 1970.
[99] Federal Statistical Office of Germany, Datenreport 2004
(Wiesbaden, Germany: 2004).
[100] Six of the 13 life domains consist of sectors that are considered
part of the economic and environmental domains for this report
(socioeconomic status and subjective class identification, labor market
and working conditions, income and income distribution, consumption,
transportation, and environment). The other seven life domains fall
into what we have termed the overall social and cultural domain:
population; housing; health; education; social and political
participation; crime and public safety; and leisure, time use, and
media consumption.
[101] The Lisbon Strategy is an agreement among EU member countries
that laid out goals and objectives for all EU members. The Lisbon
Strategy is dedicated to economic, social, and environmental renewal in
the EU and contains goals that were agreed to by member countries. The
EU reports on progress toward achieving these goals every spring.
[102] World Bank Group, World Development Report 2004: Making Services
Work for Poor People (Washington, D.C.: 2004).
[103] Department of Commerce, Social Indicators III: Selected Data on
Social Conditions and Trends in the United States (Washington, D.C.:
December 1980).
[104] The indicators chosen to measure cultural activity in the U.S.
federal government social reports changed from the second to the third
report. In the second report, Social Indicators 1976, the cultural
indicators did not just center on the arts. The indicators included, in
addition to the number of concerts played and attendance at concerts,
the number of persons employed in knowledge-producing or knowledge-
disseminating occupations, the proportion of women in those
occupations, the percentage of the civilian labor force made up of
scientists and engineers, and book production (disaggregated by subject
area).
[105] Cultural Initiatives Silicon Valley, Creative Community Index
(San Jose, Calif.: 2002). For more information, see http://www.ci-
sv.org/cna_index.shtml.
[106] William J. Bennett, Index of Leading Cultural Indicators (New
York: 2001).
[107] For more information, see http://www.norc.uchicago.edu/projects/
gensoc.asp.
[108] See http://www.unesco.org/culture/worldreport.
[109] The specific programs at each of these organizations that helped
establish the Boston Indicators Project included the Community Building
Network at the Boston Foundation, the Sustainable Boston Initiative of
the City of Boston, and the Urban Institute's National Neighborhood
Indicators Partnership.
[110] The other groups included the Boston Redevelopment Authority,
Action for Boston Community Development, the United Way of
Massachusetts Bay, the Harvard School of Public Health, and the
Metropolitan Area Planning Council.
[111] The Boston Foundation, The Wisdom of Our Choices: Boston's
Indicators of Progress, Change and Sustainability 2000 (Boston: 2000).
[112] Boston's 400th anniversary will be in the year 2030.
[113] Boston Foundation, Creativity and Innovation: A Bridge to the
Future (Boston: 2002).
[114] In 1994 the Oregon Benchmarks was one of the winners of an
Innovations in Government contest sponsored by Harvard University's
Kennedy School of Government and the Ford Foundation. The Oregon
Benchmarks also received positive notice from the National Governors
Association and the federal government's National Performance Review.
[115] Ten of the 25 current member states (Cyprus, the Czech Republic,
Estonia, Hungary, Latvia, Lithuania, Malta, Poland, Slovakia, and
Slovenia) joined the EU on May 1, 2004. Two other states, Bulgaria and
Romania, hope to complete accession negotiations and be able to join
the EU by 2007. Accession negotiations establish the terms under which
applicants will meet and enforce EU rules and regulations in a host of
areas ranging from agriculture to competition to trade. Turkey was
formally recognized as an EU candidate in 1999, but remains in a
separate category as it seeks to comply fully with the EU's political
and economic criteria for membership. No firm date has been set for
beginning accession talks with Turkey. Croatia and Macedonia have also
applied for EU membership.
GAO's Mission:
The Government Accountability Office, the investigative arm of
Congress, exists to support Congress in meeting its constitutional
responsibilities and to help improve the performance and accountability
of the federal government for the American people. GAO examines the use
of public funds; evaluates federal programs and policies; and provides
analyses, recommendations, and other assistance to help Congress make
informed oversight, policy, and funding decisions. GAO's commitment to
good government is reflected in its core values of accountability,
integrity, and reliability.
Obtaining Copies of GAO Reports and Testimony:
The fastest and easiest way to obtain copies of GAO documents at no
cost is through the Internet. GAO's Web site ( www.gao.gov ) contains
abstracts and full-text files of current reports and testimony and an
expanding archive of older products. The Web site features a search
engine to help you locate documents using key words and phrases. You
can print these documents in their entirety, including charts and other
graphics.
Each day, GAO issues a list of newly released reports, testimony, and
correspondence. GAO posts this list, known as "Today's Reports," on its
Web site daily. The list contains links to the full-text document
files. To have GAO e-mail this list to you every afternoon, go to
www.gao.gov and select "Subscribe to e-mail alerts" under the "Order
GAO Products" heading.
Order by Mail or Phone:
The first copy of each printed report is free. Additional copies are $2
each. A check or money order should be made out to the Superintendent
of Documents. GAO also accepts VISA and Mastercard. Orders for 100 or
more copies mailed to a single address are discounted 25 percent.
Orders should be sent to:
U.S. Government Accountability Office
441 G Street NW, Room LM
Washington, D.C. 20548:
To order by Phone:
Voice: (202) 512-6000:
TDD: (202) 512-2537:
Fax: (202) 512-6061:
To Report Fraud, Waste, and Abuse in Federal Programs:
Contact:
Web site: www.gao.gov/fraudnet/fraudnet.htm
E-mail: fraudnet@gao.gov
Automated answering system: (800) 424-5454 or (202) 512-7470:
Public Affairs:
Jeff Nelligan, managing director,
NelliganJ@gao.gov
(202) 512-4800
U.S. Government Accountability Office,
441 G Street NW, Room 7149
Washington, D.C. 20548: