This is the accessible text file for GAO report number GAO-03-222 
entitled 'Medicaid and SCHIP: States Use Varying Approaches to Monitor 
Children's Access to Care' which was released on February 14, 2003.



This text file was formatted by the U.S. General Accounting Office 

(GAO) to be accessible to users with visual impairments, as part of a 

longer term project to improve GAO products’ accessibility. Every 

attempt has been made to maintain the structural and data integrity of 

the original printed product. Accessibility features, such as text 

descriptions of tables, consecutively numbered footnotes placed at the 

end of the file, and the text of agency comment letters, are provided 

but may not exactly duplicate the presentation or format of the printed 

version. The portable document format (PDF) file is an exact electronic 

replica of the printed version. We welcome your feedback. Please E-mail 

your comments regarding the contents or accessibility features of this 

document to Webmaster@gao.gov.



Report to Congressional Requesters:



United States General Accounting Office:



GAO:



January 2003:



MEDICAID AND SCHIP:



States Use Varying Approaches to Monitor Children’s Access to Care:



Medicaid and SCHIP Access:



GAO-03-222:



Highlights:



Highlights of GAO-03-222, a report to Representatives Sherrod Brown; 

John Conyers, Jr.; Diana DeGette; John D. Dingell; Gene Green; 

William J. Jefferson; Sander M. Levin; and Ted Strickland



MEDICAID AND SCHIP

States Use Varying Approaches to Monitor Children’s Access to Care



Why GAO Did This Study:



Over 25 million children have health insurance coverage through 

Medicaid or 

the State Children’s Health Insurance Program (SCHIP). Coverage 

alone, however, 

does not guarantee that services will be available or that 

children will receive 

needed care.  GAO was asked to evaluate states’ efforts to 

facilitate and monitor 

access to primary and preventive services for children in these 

jointly funded 

federal-state programs.  The study surveyed 16 states, covering 

over 65 percent 

of the Medicaid and SCHIP population.  GAO analyzed requirements 

relevant to managed 

care and fee-for-service (FFS) delivery systems, including the 

number and location 

of physicians and their availability to see beneficiaries, monitoring 

of health plan 

or physician compliance with these requirements, and collection and 

analysis of 

beneficiary service utilization data.



What GAO Found:



Overall, states imposed more access-related requirements on 

participating providers 

and more actively monitored children’s use of services in their 

Medicaid managed care 

programs than in their Medicaid FFS or SCHIP programs. Medicaid 

managed care: State 

requirements for managed care plans ranged from very broad 

provisions that health 

plans must have “adequate” physician networks for serving their 

enrolled members to 

very specific standards, such as the number and geographic 

proximity of physicians 

and maximum time frames within which a new beneficiary receives 

a first appointment. 

States less often verified data that plans submitted to show 

compliance with these 

requirements or independently monitored physicians’ availability.  

In one instance 

of verification, a state found that a third of a health plan’s 

physician network was 

not accepting new Medicaid patients, thus limiting access for 

new beneficiaries. The 

value of plan-submitted data that states used to monitor 

children’s use of services 

was often compromised by continuing problems with their 

completeness and reliability. 

Furthermore, information derived from beneficiary satisfaction 

surveys was not necessarily 

representative of all Medicaid managed care beneficiaries. 

Medicaid FFS: Most states 

did not set goals for or analyze the availability of 

participating primary care physicians 

even though a majority of Medicaid-eligible children in half of 

the states reviewed are 

still served in FFS programs.  In most FFS programs, 

beneficiaries may seek care from 

any providers participating in the Medicaid program and may 

change providers at any time 

if they are dissatisfied.  However, when FFS payment rates 

are lower than those paid by 

other purchasers—which was the case in most states reviewed—

providers can be discouraged 

from participating in Medicaid and thus restrict 

beneficiaries’ access.  States did little 

to monitor the use of services by Medicaid-eligible children 

in FFS programs despite 

having a ready source of data in their claims payment systems. 

SCHIP: Nine of the 16 states 

used the same providers, administrative systems, and 

monitoring approaches for their 

SCHIP programs as they did for Medicaid.  The remaining 7 states, 

whose SCHIP programs were 

distinct from Medicaid and used managed care almost exclusively, 

set few requirements for 

or monitored providers’ availability to SCHIP-eligible children.  

States with distinct SCHIP 

programs also reported fewer efforts to monitor children’s use of 

services than in their 

Medicaid programs. Comments on our report from the Department of 

Health and Human Services 

highlighted new federal requirements for state oversight of 

managed care, and design differences 

between Medicaid and SCHIP that can affect monitoring approaches.  

States we reviewed provided 

clarifying or technical comments regarding their oversight of 

access, which we incorporated 

in the report as appropriate.



To view the full report, including the scope

and methodology, click on the link above.

For more information, contact Kathryn G. Allen at (202) 512-7118.

 

Contents:



Letter:



Results in Brief:



Background:



In Medicaid Managed Care, States Focused More on Setting Plan Network 

Requirements than on Monitoring Plans or Analyzing Service Utilization:



For Medicaid FFS, State Requirements for Providers and Monitoring of 

Service Utilization Were More Limited:



Distinct SCHIP Programs Had Fewer Network Requirements and Less 

Monitoring of Service Utilization:



Agency and State Comments and Our Evaluation:



Appendix I: Medicaid HEDIS Measures Related to Service 

Utilization:



Appendix II: Managed Care Plan Withdrawals from Medicaid in 

Four States:



Massachusetts:



Ohio:



Tennessee:



Texas:



Appendix III: Analysis of Medicaid FFS Payment Rates in 

Selected States:



Methodology for Comparison of FFS Payment Rates:



Appendix IV: Comments from the Department of Health and 

Human Services:



Appendix V: GAO Contact and Staff Acknowledgments:



Related GAO Products:



Tables:



Table 1: SCHIP Design Choices for 16 States, as of March 2002:



Table 2: Share of Children Enrolled in Medicaid and Separate SCHIP 

Programs, by Service Delivery Method, for 16 States:



Table 3: Examples of Specific State Standards for Plan Networks and 

Appointment Waiting Times:



Table 4: Examples of Medicaid HEDIS Measures Related to Service 

Utilization for Children:



Table 5: Estimated Percentage of Medicaid Children Excluded from HEDIS:



Table 6: Examples of Beneficiary Satisfaction Questions for Children 

Covered by CAHPS:



Table 7: Estimated Percentage of Medicaid Children Excluded from CAHPS:



Table 8: Length of Medicaid Enrollment Required for Selected HEDIS 

Measures for Children’s and Adolescents’ Use of Services:



Table 9: Examples of Plan Withdrawal Transition Activities Conducted by 

Four State Medicaid Programs:



Table 10: Medicaid FFS Payment Rates, Expressed as a Percentage of 

Medicare Payments, in 13 States with Traditional FFS or Primary Care 

Case Manager Delivery Systems That Serve Children:



Table 11: CPT 4 Codes Used in Comparing Medicaid and Medicare Fees:



Figures:



Figure 1: Selected Medicaid Managed Care Plan Network Requirements and 

Standards in 14 States:



Figure 2: Variation in 14 States’ Monitoring of Medicaid Managed Care 

Plans’ Provider Information:



Figure 3: Selected Requirements for Medicaid PCCM Providers in Seven 

States:



Figure 4: Comparison of Seven States’ Requirements and Standards for 

Providers in Medicaid and SCHIP Managed Care:



Abbreviations:



ACG ambulatory care group:



AHRQ Agency for Healthcare Research and Quality:



CAHPS Consumer Assessment of Health Plans:



CMS Centers for Medicare & Medicaid Services:



CPT 4 Current Procedural Terminology, 4th edition:



EPSDT Early and Periodic Screening, Diagnostic and Treatment:



FFS fee-for-service:



HEDIS Health Plan Employer Data and Information Set:



HHS Department of Health and Human Services:



HRSA Health Resources and Services Administration:



NCQA National Committee for Quality Assurance:



PCCM primary care case manager:



PCP primary care provider:



SCHIP State Children’s Health Insurance Program:



United States General Accounting Office:



Washington, DC 20548:



January 14, 2003:



The Honorable Sherrod Brown

The Honorable John Conyers, Jr.

The Honorable Diana DeGette

The Honorable John D. Dingell

The Honorable Gene Green

The Honorable William J. Jefferson

The Honorable Sander M. Levin

The Honorable Ted Strickland

House of Representatives:



Over 25 million children have health care coverage through Medicaid or 

the State Children’s Health Insurance Program (SCHIP), joint federal-

state programs that finance health insurance for certain low-income 

adults and children. Medicaid and SCHIP provide the financial means for 

low-income children to receive primary, preventive, and specialty care, 

which are important to ensuring a healthy child and adolescent 

population. Having a regular provider, or usual source of care, also 

can help reduce the use of services from high-cost sources such as 

emergency rooms and inpatient hospital care.[Footnote 1]



While health insurance coverage can provide the financial means to 

obtain care, it does not by itself guarantee that health services will 

be available and accessible or that beneficiaries will receive needed 

care. Access to primary care services is significantly affected by 

local factors that vary across and within states, such as physician 

supply, location, and willingness to participate in a state’s Medicaid 

and SCHIP programs. While federal law establishes general requirements 

to ensure that Medicaid and SCHIP beneficiaries have access to covered 

health services, the extent to which children actually receive these 

health care services is influenced by how states implement their 

programs and monitor access at the state and local levels.



The type of service delivery and financing system that states use in 

their Medicaid and SCHIP programs potentially affects beneficiaries’ 

ability to locate and obtain services. Managed care, which often 

entails states making capitation payments to managed care plans to 

provide or arrange for all services for enrolled beneficiaries, 

encourages participating plans to offer and coordinate primary and 

specialty care for beneficiaries. Managed care also may promote 

efficiency by attempting to ensure that only necessary services are 

provided in the most appropriate setting. Appropriate safeguards are 

important, however, as capitation payments can also create an incentive 

to underserve or even deny beneficiaries access to needed care since 

plans and, in some cases, providers can profit from not delivering 

services for which they have already received payment. In contrast, 

beneficiaries in fee-for-service (FFS) systems, including those 

receiving care in a primary care case manager (PCCM) system,[Footnote 

2] may be at risk for the overprovision of services as providers seek 

to increase revenue. However, if FFS payment levels are too low, 

physicians may underserve their patients or be unwilling to participate 

at all.



Our prior work has shown that access to care in Medicaid has been 

problematic for certain services--such as health screening for 

children, oral health, and mental health--and for particular 

populations, such as children with special needs.[Footnote 3] Recent 

reports that some physicians are unwilling to take more Medicaid 

patients and that some managed care plans are exiting from the Medicaid 

program have raised additional concerns about adequate access for 

eligible children. Now that SCHIP is beginning its sixth year of 

implementation, a related concern is the experiences of children in 

accessing care under SCHIP, where states have greater flexibility to 

decide whom to cover, what services to provide, and how to pay for 

services, including required beneficiary cost sharing. Accordingly, you 

asked us to evaluate states’ efforts to routinely monitor access to 

primary and preventive care services in (1) Medicaid managed care, 

including actions selected states took when participating health plans 

withdrew from the program, (2) Medicaid FFS-based delivery systems, 

including PCCM systems, and (3) SCHIP.



To examine these issues, we analyzed 16 states’ approaches to 

monitoring access to primary and preventive health care services in 

their Medicaid and SCHIP programs. These states were Arkansas, 

California, Colorado, Florida, Illinois, Louisiana, Maryland, 

Massachusetts, Michigan, Nevada, New York, Ohio, Pennsylvania, 

Tennessee, Texas, and Washington. We selected these states to obtain 

wide representation of geographic regions, managed care and FFS 

systems, and SCHIP program designs.[Footnote 4] Over 65 percent of all 

Medicaid and SCHIP beneficiaries resided in these 16 states. To 

evaluate state approaches to monitoring access to care, we focused our 

analysis of states’ managed care and FFS delivery systems on three key 

areas:



* specific requirements for participating managed care plans and 

physicians to help ensure sufficient physician capacity and 

accessibility for eligible beneficiaries;



* actions to independently verify or otherwise monitor provider 

participation; and:



* routine data collection and analysis of information on beneficiaries’ 

actual service utilization, including patient satisfaction surveys.



For states’ Medicaid and SCHIP managed care programs, these service 

utilization data included encounter data, which are individual-level 

data on service use that plans are required to collect and report to 

the state; the Health Plan Employer Data and Information Set (HEDIS), 

developed by the National Committee for Quality Assurance (NCQA) to 

help purchasers and consumers compare the performance of health plans 

in providing selected services; and the Consumer Assessment of Health 

Plans (CAHPS), which is a standardized patient satisfaction survey 

developed by the federal Agency for Healthcare Research and Quality 

(AHRQ). We conducted site visits in four states where managed care plan 

withdrawals had been reported--Massachusetts, Ohio, Tennessee, and 

Texas--and analyzed information from primary care providers 

(PCP);[Footnote 5] representatives of advocacy groups; state insurance 

departments; and managed care plans participating in Medicaid, SCHIP, 

or both. At the federal level, we interviewed officials at the Centers 

for Medicare & Medicaid Services (CMS), which oversees states’ Medicaid 

and SCHIP programs, and the Health Resources and Services 

Administration (HRSA), which has responsibility for analyzing issues 

related to access to care, as well as joint responsibility for 

oversight of SCHIP. We reviewed relevant documents, including federal 

laws, federal regulations, state contracts with managed care 

organizations, and various federal and state reports related to access. 

We conducted our work from June 2001 through December 2002 in 

accordance with generally accepted government auditing standards.



Results in Brief:



Each of the states we reviewed with Medicaid managed care programs set 

requirements for participating plans’ provider networks, which include 

the physicians and specialists who have agreed to deliver or arrange 

for health care services to beneficiaries enrolled in a health plan. 

These state requirements ranged from broad provisions that health plans 

must have “adequate” networks for serving their enrolled members, to 

very specific standards that set, for example, a maximum number of 

beneficiaries per primary care physician or maximum time frames within 

which a provider must see a new beneficiary for a first appointment. 

The states less frequently verified data that plans submitted to them 

or independently collected or analyzed data to ascertain compliance 

with the requirements. States that did routinely monitor plans’ 

compliance with network requirements often identified potential access 

problems and took steps to address them. For example, a state review of 

the physicians listed in a plan’s network found that many physicians 

were not accepting new Medicaid patients, resulting in too few 

physicians accessible to such patients. Beyond setting requirements for 

or monitoring plans’ network size and availability, states attempted to 

assess the extent to which beneficiaries were actually receiving 

services through three key routine data sources: encounter data that 

states require plans to submit on individual-level service use, 

assessments of managed care plans’ performance on specified measures, 

and periodic beneficiary satisfaction surveys. However, the value of 

these data was compromised by continuing problems in most states with 

encounter data’s completeness and reliability; additionally, 

standardized data on plan performance and beneficiary satisfaction 

surveys were not representative of all Medicaid managed care 

beneficiaries. Managed care plan withdrawals from Medicaid in the four 

states we visited raised potential access-to-care issues for 

significantly different shares of eligible beneficiaries, 

ranging from about 1 percent of Medicaid beneficiaries in Texas to 

almost 50 percent in Tennessee. While these four states had taken 

various steps to help minimize disruption in access to care for 

beneficiaries affected by plan withdrawals, it was not clear to what 

extent these efforts had been successful in helping beneficiaries 

transition smoothly to new health plans and avoid access-to-care 

problems.



States did considerably less in their Medicaid FFS programs--which 

still serve the majority of children in half of the states we reviewed-

-to establish requirements for or monitor provider availability or to 

assess beneficiaries’ utilization of services than in their managed 

care programs. For traditional FFS programs, beneficiaries may seek 

care from any providers participating in the Medicaid program and may 

change providers at any time if they are dissatisfied. However, 

Medicaid beneficiaries’ ability to easily change providers depends on 

the number, type, and location of providers willing to take new 

Medicaid patients, which in turn is strongly influenced by Medicaid 

payment rates and associated administrative processes. We found that 

FFS payment rates in most states we reviewed were significantly lower 

than those paid by other purchasers for comparable services, which can 

discourage providers from participating in the program and thus 

restrict beneficiaries’ access to a broad supply of providers. 

Officials in several of the states we contacted with Medicaid FFS 

programs said that anecdotal information and complaint data suggested 

that low payment rates, slow payment, and other administrative issues 

deterred physicians in primary care or in some specialties from 

participating in the program. Most of the seven states we reviewed with 

PCCM programs set certain requirements for participating physicians, 

such as limiting the number of beneficiaries that a PCCM could enroll 

in an effort to ensure that physicians had the capacity to serve each 

beneficiary. However, these states did little to monitor the extent to 

which beneficiaries were successful in obtaining appointments as 

needed. In regard to routine data collection and analyses, all but one 

of these seven states analyzed their FFS claims data and provided PCCMs 

with comparative data on service utilization patterns for their own 

practices and for other PCCMs. However, these comparative data often 

focused on higher-cost services, such as inpatient hospitalization or 

emergency room use.



The majority of the states we reviewed--9 of the 16--designed their 

SCHIP programs to be an expansion of their Medicaid programs or modeled 

them after Medicaid, with the same providers and administrative 

systems. Therefore, in these states, the requirements for, and 

monitoring of, SCHIP provider participation and beneficiary service 

utilization mirrored that of their Medicaid programs. In contrast to 

these states, 7 states chose to serve all or most of their SCHIP 

beneficiaries through programs that were distinct from Medicaid. These 

states did significantly less in their distinct SCHIP programs in terms 

of setting requirements for, or monitoring, participating providers or 

beneficiary service use than they did for their Medicaid programs.



We received comments on a draft of this report from the Department of 

Health and Human Services (HHS), as well as from 13 of the 16 states 

that were included in our review. In response to our findings, HHS 

highlighted new federal requirements for state oversight of Medicaid 

managed care that are to be fully implemented by August 2003. HHS also 

pointed out that design differences between Medicaid and SCHIP may 

affect states’ approaches to monitoring access to care. State officials 

provided clarifying and technical comments regarding their oversight of 

access to care, which we incorporated as appropriate throughout this 

report.



Background:



States’ Medicaid and SCHIP programs are governed by various federal 

requirements regarding eligibility, covered services, and access to 

care. Under these requirements, states generally have some discretion 

in determining the amount, duration, and scope of services their 

programs will provide, and the delivery and financing systems through 

which beneficiaries will receive care--that is, FFS, managed care, or 

both. Federal requirements relating to Medicaid beneficiaries’ access 

to care are established in statute; for managed care service delivery 

systems, detailed federal regulations regarding access were recently 

issued.[Footnote 6] SCHIP requirements are also set out in statute but 

are less specific than those for Medicaid and do not include detailed 

managed care requirements or regulations comparable to those for 

Medicaid.



Populations Covered and Program Characteristics:



Since 1965, Medicaid has financed health care coverage for certain 

categories of low-income individuals--including over 22 million 

children in 2000. Federal law requires states to extend Medicaid 

eligibility to children aged 5 and under if their family incomes are at 

or below 133 percent of the federal poverty level and to children aged 

6 through 18 in families with incomes at or below the federal poverty 

level. At their discretion, most states have set income eligibility 

thresholds for families with children that expand their Medicaid 

programs beyond the minimum federal statutory levels.



In 1997, the Congress established SCHIP, which provides health care 

coverage to low-income, uninsured children living in families whose 

incomes exceed the eligibility limits for Medicaid. SCHIP covered over 

4.6 million children in fiscal year 2001, generally targeting children 

in families with incomes up to 200 percent of the federal poverty 

level.[Footnote 7] Compared with Medicaid, which has specific minimum 

federal eligibility and benefit requirements, the SCHIP legislation 

provides states more flexibility in how they choose to structure their 

programs. States have three options in designing SCHIP: They may expand 

their Medicaid programs, develop a separate child health program that 

functions independently of Medicaid, or create a combination of the two 

approaches. (See table 1 for the program designs of the 16 states in 

our sample.) While Medicaid expansion programs under SCHIP must use 

Medicaid’s provider networks and delivery systems, SCHIP separate child 

health programs may depart from Medicaid requirements particularly with 

regard to covered benefits and the plans, providers, and delivery 

systems available to beneficiaries.[Footnote 8]



Table 1: SCHIP Design Choices for 16 States, as of March 2002:



Design: Medicaid expansion; State: Arkansas,[A] Louisiana, Ohio, and 

Tennessee.



Design: Separate program; State: Colorado, Nevada, Pennsylvania, and 

Washington.



Design: Combination; State: California, Florida, Illinois, 

Maryland,[B]; Massachusetts,[B] Michigan, New York, and Texas.





Source: CMS.



[A] In February 2001, Arkansas received approval from CMS to implement 

a separate SCHIP program; however, this program had not been 

implemented as of February 2002.



[B] The state’s separate SCHIP portion of its combination program 

provides coverage either through 

(1) a premium assistance program for families with access to private 

insurance coverage or (2) Medicaid providers and services. Premium 

assistance programs were not included in our study.



[End of table]



Medicaid and SCHIP differ in terms of the share of their program 

expenditures that come from federal funds. No overall federal budget 

limit exists for the Medicaid program; it is an open-ended entitlement 

whereby state expenditures for services that are provided under a CMS-

approved state Medicaid plan are matched by the federal government 

using a formula that results in federal shares that currently range 

from 50 to 76 percent of expenditures, depending on a state’s per 

capita income in relation to the national average. The federal 

share of Medicaid expenditures is about 57 percent. In contrast to 

Medicaid, federal funding for SCHIP is limited. The Congress 

appropriated $40 billion over 10 years (from fiscal years 1998 to 

2007), with a specified amount allocated annually to each of the 50 

states, the District of Columbia, Puerto Rico, and 4 U.S. territories. 

State SCHIP expenditures are matched by federal payments up to the 

state’s annual appropriated allotment.[Footnote 9] The SCHIP statute 

provides for an “enhanced” federal matching rate, with each state’s 

SCHIP rate exceeding its Medicaid rate. The federal share of each 

state’s SCHIP expenditures ranges from 65 to 83 percent; the federal 

share of total SCHIP expenditures is about 72 percent.



Delivery Systems:



States provide Medicaid and SCHIP services through two distinct service 

delivery and financing systems--managed care and FFS, with the latter 

including PCCM.[Footnote 10] Under a capitated managed care model, 

states contract with managed care organizations and prospectively pay 

the plans a fixed monthly fee per patient to provide or arrange for 

most health services. Plans, in turn, pay providers either 

retrospectively for each service delivered on a FFS basis or through 

prospective capitation payment arrangements. In contrast, in a 

traditional FFS delivery system, the Medicaid program reimburses 

providers directly and on a retrospective basis for each service 

delivered. The PCCM model is similar to a traditional FFS arrangement 

except that PCCMs are paid a monthly, per capita case management fee, 

usually around $3, to coordinate care for beneficiaries, in addition to 

FFS reimbursement for any health care services they provide. PCCMs, 

which are selected by beneficiaries upon enrollment, are responsible 

for treating and coordinating the care for those beneficiaries. 

Coordination may involve referrals to specialists and other providers. 

In some cases, receipt of specialty and other services may require PCCM 

approval.



The 16 states we reviewed often structured their Medicaid and SCHIP 

service delivery systems differently. As shown in table 2, the 

exclusive use of managed care was less prevalent in Medicaid than in 

separate SCHIP programs (3 and 6 states, respectively), with 3 states-

-Maryland, Michigan, and Tennessee--using managed care for virtually 

all children in both Medicaid and SCHIP. The states were more likely to 

use a combination of managed care and FFS approaches for their Medicaid 

programs than for SCHIP (11 and 5 states, respectively). Despite the 

recent growth in states’ use of managed care, FFS is still a major 

component of many states’ programs, especially for Medicaid.



Table 2: Share of Children Enrolled in Medicaid and Separate SCHIP 

Programs, by Service Delivery Method, for 16 States:



State: Arkansas; Medicaid: FFS-based: Traditional FFS: [B]; Medicaid: 

FFS-based: PCCM: 100%; Medicaid: Managed care: --; [Empty]; Separate 

SCHIP program[A]: FFS-based: Traditional FFS: [C]; Separate SCHIP 

program[A]: FFS-based: PCCM: [C]; Separate SCHIP program[A]: Managed 

care: --.



State: California; Medicaid: FFS-based: Traditional FFS: 29%; Medicaid: 

FFS-based: PCCM: [B]; Medicaid: Managed care: 71%; [Empty]; Separate 

SCHIP program[A]: FFS-based: Traditional FFS: --; Separate SCHIP 

program[A]: FFS-based: PCCM: --; Separate SCHIP program[A]: Managed 

care: 100%.



State: Colorado; Medicaid: FFS-based: Traditional FFS: 28%; Medicaid: 

FFS-based: PCCM: 18%; Medicaid: Managed care: 54%; [Empty]; Separate 

SCHIP program[A]: FFS-based: Traditional FFS: --; Separate SCHIP 

program[A]: FFS-based: PCCM: --; Separate SCHIP program[A]: Managed 

care: 100%.



State: Florida; Medicaid: FFS-based: Traditional FFS: [D]; Medicaid: 

FFS-based: PCCM: 53%; Medicaid: Managed care: 47%; [Empty]; Separate 

SCHIP program[A]: FFS-based: Traditional FFS: [B]; Separate SCHIP 

program[A]: FFS-based: PCCM: 2%[E]; Separate SCHIP program[A]: Managed 

care: 98%.



State: Illinois; Medicaid: FFS-based: Traditional FFS: 87%; Medicaid: 

FFS-based: PCCM: --; Medicaid: Managed care: 13%; [Empty]; Separate 

SCHIP program[A]: FFS-based: Traditional FFS: 99%; Separate SCHIP 

program[A]: FFS-based: PCCM: --; Separate SCHIP program[A]: Managed 

care: 1%.



State: Louisiana; Medicaid: FFS-based: Traditional FFS: 88%; Medicaid: 

FFS-based: PCCM: 12%; Medicaid: Managed care: --; [Empty]; Separate 

SCHIP program[A]: FFS-based: Traditional FFS: [C]; Separate SCHIP 

program[A]: FFS-based: PCCM: [C]; Separate SCHIP program[A]: Managed 

care: --.



State: Maryland; Medicaid: FFS-based: Traditional FFS: [D]; Medicaid: 

FFS-based: PCCM: --; Medicaid: Managed care: 100%; [Empty]; Separate 

SCHIP program[A]: FFS-based: Traditional FFS: --; Separate SCHIP 

program[A]: FFS-based: PCCM: --; Separate SCHIP program[A]: Managed 

care: [F].



State: Massachusetts; Medicaid: FFS-based: Traditional FFS: [D]; 

Medicaid: FFS-based: PCCM: 63%; Medicaid: Managed care: 37%; [Empty]; 

Separate SCHIP program[A]: FFS-based: Traditional FFS: --; Separate 

SCHIP program[A]: FFS-based: PCCM: 65%; Separate SCHIP program[A]: 

Managed care: 35%.



State: Michigan; Medicaid: FFS-based: Traditional FFS: [D]; Medicaid: 

FFS-based: PCCM: --; Medicaid: Managed care: 100%; [Empty]; Separate 

SCHIP program[A]: FFS-based: Traditional FFS: --; Separate SCHIP 

program[A]: FFS-based: PCCM: --; Separate SCHIP program[A]: Managed 

care: 100%.



State: Nevada; Medicaid: FFS-based: Traditional FFS: 41%; Medicaid: 

FFS-based: PCCM: --; Medicaid: Managed care: 59%; [Empty]; Separate 

SCHIP program[A]: FFS-based: Traditional FFS: 13%; Separate SCHIP 

program[A]: FFS-based: PCCM: --; Separate SCHIP program[A]: Managed 

care: 87%.



State: New York; Medicaid: FFS-based: Traditional FFS: 61%; Medicaid: 

FFS-based: PCCM: --; Medicaid: Managed care: 39%; [Empty]; Separate 

SCHIP program[A]: FFS-based: Traditional FFS: [B]; Separate SCHIP 

program[A]: FFS-based: PCCM: --; Separate SCHIP program[A]: Managed 

care: 100%.



State: Ohio; Medicaid: FFS-based: Traditional FFS: 67%; Medicaid: FFS-

based: PCCM: --; Medicaid: Managed care: 33%; [Empty]; Separate SCHIP 

program[A]: FFS-based: Traditional FFS: [C]; Separate SCHIP program[A]: 

FFS-based: PCCM: --; Separate SCHIP program[A]: Managed care: [C].



State: Pennsylvania; Medicaid: FFS-based: Traditional FFS: [D]; 

Medicaid: FFS-based: PCCM: 24%; Medicaid: Managed care: 76%; [Empty]; 

Separate SCHIP program[A]: FFS-based: Traditional FFS: --; Separate 

SCHIP program[A]: FFS-based: PCCM: --; Separate SCHIP program[A]: 

Managed care: 100%.



State: Tennessee; Medicaid: FFS-based: Traditional FFS: --; Medicaid: 

FFS-based: PCCM: --; Medicaid: Managed care: 100%; [Empty]; Separate 

SCHIP program[A]: FFS-based: Traditional FFS: --; Separate SCHIP 

program[A]: FFS-based: PCCM: --; Separate SCHIP program[A]: Managed 

care: [C].



State: Texas; Medicaid: FFS-based: Traditional FFS: 42%; Medicaid: FFS-

based: PCCM: 21%; Medicaid: Managed care: 37%; [Empty]; Separate SCHIP 

program[A]: FFS-based: Traditional FFS: --; Separate SCHIP program[A]: 

FFS-based: PCCM: --; Separate SCHIP program[A]: Managed care: 100%.



State: Washington; Medicaid: FFS-based: Traditional FFS: 34%; Medicaid: 

FFS-based: PCCM: [B]; Medicaid: Managed care: 66%; [Empty]; Separate 

SCHIP program[A]: FFS-based: Traditional FFS: 57%; Separate SCHIP 

program[A]: FFS-based: PCCM: [B]; Separate SCHIP program[A]: Managed 

care: 43%.



State: Number of states using system; Medicaid: FFS-based: Traditional 

FFS: 9; Medicaid: FFS-based: PCCM: 7; Medicaid: Managed care: 14; 

[Empty]; Separate SCHIP program[A]: FFS-based: Traditional FFS: 3; 

Separate SCHIP program[A]: FFS-based: PCCM: 2; Separate SCHIP 

program[A]: Managed care: 11.



Source: State data, as of December 2001, except for New York data, 

which are as of September 2001.



[A] Includes the separate child health programs in states with 

combination or separate SCHIP programs.



[B] Although this delivery system exists in the state, it includes less 

than 1 percent of children enrolled in Medicaid or SCHIP and thus was 

not included in our study.



[C] Not applicable. State’s SCHIP program is a Medicaid expansion; 

thus, delivery systems are the same as those in the state’s Medicaid 

program.



[D] Delivery system exists for children with special needs, which is 

outside the scope of this study. Additionally, state enrolls children 

in FFS until they transition to a managed care or a PCCM delivery 

system. For example, families with eligible children in Florida are 

allowed 90 days in which to select a PCP in managed care or a PCCM; 

during these 90 days, they are enrolled in Medicaid FFS.



[E] In Florida, delivery systems under SCHIP vary by age. Families with 

children under age 5 can select between managed care and a PCCM, while 

older children are limited to managed care service delivery.



[F] Not applicable; Maryland’s separate SCHIP portion of its 

combination program was not operational when our study began and thus 

was not included in the study.



[End of table]



Federal Requirements for Access to Care:



A state is required by federal statute to ensure that its payment and 

delivery systems will afford beneficiaries access to services similar 

to that of its general population;[Footnote 11] further, medical 

assistance must be provided with reasonable promptness.[Footnote 12] 

While Medicaid traditional FFS delivery systems have no additional 

access requirements, managed care and PCCM delivery systems do. States 

are required to ensure that beneficiaries’ access to care in managed 

care and PCCM is equal to that available to beneficiaries in 

traditional FFS. On June 14, 2002, CMS published final rules to 

implement new provisions the Balanced Budget Act of 1997 set out for 

states’ Medicaid managed care programs. These new rules address the 

requirements, prohibitions, and procedures for the provision of 

different types of Medicaid managed care and PCCM delivery systems. 

Under these rules, which became effective August 13, 2002, states have 

until August 13, 2003, to bring all aspects of their state managed care 

programs into compliance with the new requirements.[Footnote 13]



States that wish to use managed care and PCCMs to deliver Medicaid 

services must have CMS approval to do so. CMS approval is in part 

intended to ensure that adequate protections are in place to safeguard 

the interests of beneficiaries enrolled in managed care who may find 

their freedom to seek the care of any participating provider at any 

time more restricted than in FFS. In managed care, states may “lock in” 

beneficiaries to one managed care plan and its network of providers for 

up to 1 year in order to provide the plan sufficient time and 

opportunity to manage the care of its enrollees most efficiently and 

appropriately. States request CMS approval for their managed care 

programs through one of two methods: (1) as a waiver from certain 

statutory requirements or (2) as an amendment to the state’s Medicaid 

plan.[Footnote 14] Fifteen of the 16 states we reviewed received CMS 

approval to provide managed care through two types of waivers of 

statutory provisions--program and demonstration waivers--while one 

state--Nevada--received approval through a state plan amendment.



Of the states we reviewed, 12 had approved “freedom-of-choice” program 

waivers, under section 1915(b) of the Social Security Act, which 

permitted them to direct beneficiaries to enroll in a managed care 

system.[Footnote 15] In reviewing and approving program waivers, CMS 

requires states that wish to limit beneficiaries’ enrollment to managed 

care to offer a choice of at least two managed care plans or allow 

beneficiaries to choose between one managed care plan and a PCCM 

system. CMS also requires states to ensure that (1) managed care plans’ 

physician networks under the waiver include approximately the same 

number or more physicians than were available before the waiver’s 

implementation and (2) services under program waivers are provided 

within reasonable time frames and are furnished within reasonable 

distances for the beneficiaries to travel. As a condition of waiver 

approval, during the period of our review CMS asked states to specify 

whether they had established access-related requirements for 

participating plans in areas such as provider capacity, or maximum 

time frames for beneficiaries to schedule appointments, travel to 

physicians’ offices, or wait in physicians’ offices to be seen. CMS did 

not require states to establish specific requirements in these areas, 

but if they did, they were asked to describe in their waiver 

applications how they planned to monitor compliance with any 

established requirements. Initial approval of a program waiver is for a 

2-year period, at which time the waiver can be reviewed and approved 

for renewal by CMS. Waiver renewals can result in changes in specific 

requirements for states.



Six of the states we reviewed--Arkansas, California, Maryland, 

Massachusetts, New York, and Tennessee--had approved comprehensive 

research and demonstration waivers, authorized by section 1115 of the 

Social Security Act, to test concepts likely to further program 

objectives.[Footnote 16] A demonstration waiver provides a state with 

greater flexibility to design its Medicaid programs in areas such as 

eligibility standards, covered benefits, and reimbursement rules. In 

reviewing and approving demonstration waivers, CMS often establishes 

terms and conditions with which states must comply that are more 

prescriptive than requirements for program waivers. For example, in 

approving demonstration waivers, CMS has required states to (1) specify 

ratios that set the maximum number of enrolled beneficiaries per 

participating PCP, (2) establish the maximum time or distance for 

beneficiaries to travel to a physician’s office, and 

(3) limit beneficiaries’ waiting times when scheduling appointments for 

urgent, routine, or specialty care. Initial approval of a demonstration 

waiver is for a 5-year period, at which time the waiver can be reviewed 

and approved for renewal by CMS.



In contrast to Medicaid, in their SCHIP programs states may require 

beneficiary enrollment in managed care without offering a 

choice among health plans. Federal SCHIP access-related requirements 

are also less extensive than those for Medicaid. The SCHIP statute 

requires that states have methods in place to ensure access to covered 

services, including emergency services, but does not specify precise 

requirements.[Footnote 17] States must describe their methods to ensure 

access to covered services, including any monitoring procedures, in 

their SCHIP state plans. In addition, the SCHIP statute required each 

state to submit to the Secretary of HHS a one-time program evaluation 

in March 2000.[Footnote 18] States must also submit to the Secretary 

annual reports that show their progress toward reaching their strategic 

objectives and performance goals, some of which may relate to access to 

care.



In Medicaid Managed Care, States Focused More on Setting Plan Network 

Requirements than on Monitoring Plans or Analyzing Service Utilization:



In attempting to ensure access to care in Medicaid managed care, states 

focused more on setting requirements for managed care plans than on 

monitoring compliance with these requirements or on analyzing 

beneficiaries’ use of services. The 14 states we reviewed with Medicaid 

managed care programs reported varying levels of effort (1) to 

establish certain requirements and standards for participating plans’ 

physician networks and to monitor their implementation and (2) to 

collect and analyze data on service utilization, such as encounter data 

from participating plans and beneficiary satisfaction surveys. State 

requirements for plans’ physician networks varied widely in their 

specificity, from broad statements that health plans must have 

“adequate” physician networks serving their enrolled members to very 

specific standards that set, for example, a maximum average number of 

beneficiaries per PCP or a maximum time frame for scheduling a first 

appointment. All but 1 of the 14 states required managed care plans to 

routinely submit lists of physicians participating in their networks, 

ranging from weekly to quarterly reporting. However, fewer states 

independently verified or routinely monitored aspects of the submitted 

data on managed care plans’ provider networks. For example, 8 of the 13 

states receiving plans’ routine lists of participating physicians 

periodically verified the number of physicians accepting new Medicaid 

patients, but only 5 states analyzed the number of physicians to 

identify those participating in multiple plans, which could overstate 

overall physician capacity. Moreover, only 5 states routinely or 

independently assessed plans’ compliance with maximum waiting times for 

beneficiaries’ scheduling appointments. In some cases, states left it 

to plans to establish time frames for scheduling appointments, rather 

than setting statewide standards for all plans.



Beyond network-related requirements and any associated monitoring, 

states attempted to assess beneficiaries’ actual use of services 

through various routine data sources and occasional special studies. 

Routine data sources included encounter data, where states require 

health plans to submit data for each service provided to each enrollee, 

periodic assessments of plans’ performance against standardized 

measures, and beneficiary satisfaction surveys. But continuing problems 

with the reliability of encounter data--and the fact that standardized 

data on plan performance and beneficiary satisfaction surveys were not 

representative of all Medicaid managed care enrollees--tended to 

undermine the utility of these data sources in describing the 

experiences of beneficiaries and their service utilization. The four 

states we visited that had experienced the withdrawal of managed care 

plans from their Medicaid programs had taken various steps to help 

minimize disruption in care for affected beneficiaries. However, it is 

not clear to what extent these states monitored service utilization for 

beneficiaries affected by such changes and their experiences in 

transitioning to new plans and physicians.



Most States Set Plan Network Requirements but Less Frequently Monitored 

Plans’ Compliance:



To oversee access to care in their Medicaid managed care programs, the 

states we reviewed established requirements for participating plans 

that most often focused on the size and structure of their physician 

networks, such as the number and geographic location of PCPs and 

specialists, and beneficiaries’ ability to schedule appointments. Some 

states, such as Colorado, Texas, and Washington, had broad requirements 

that physician networks must be adequate to serve beneficiaries, as 

shown in figure 1. Among the 14 states that used managed care in their 

Medicaid programs, broad network requirements were more prevalent for 

specialists than for PCPs. In contrast, 11 of 14 states set specific 

standards or ratios relating to the number of enrolled beneficiaries 

per PCP, and 13 set standards for providers’ geographic proximity to 

beneficiaries, such as the maximum distance or travel time for a 

beneficiary to reach a provider’s office. More variation was evident in 

states’ requirements for plans in terms of appointment scheduling for 

beneficiaries. All 14 states set maximum time frames to schedule 

routine and urgent appointments, while 6 states also set maximum time 

frames for a newly enrolled beneficiary’s first appointment and 8 

states set maximum in-office waiting times.



Figure 1: Selected Medicaid Managed Care Plan Network Requirements and 

Standards in 14 States:



[See PDF for image]



[End of figure]





[A] State does not have a specific standard but does require plans to 

monitor this measure.



[B] State only has a standard for selected populations, such as 

children with special needs.



States took varying approaches in setting their requirements for plan 

networks and appointment waiting times, as shown in table 3. For 

example, Florida required physicians to certify that their overall 

practice did not exceed 3,000 patients, whereas other states 

established specific Medicaid beneficiary-to-PCP ratios ranging from 

1,000 to 1 in Pennsylvania to 2,500 to 1 in Tennessee. With regard to 

appointment waiting times, some states required plans to set their own 

standards rather than establishing a consistent statewide standard.



Table 3: Examples of Specific State Standards for Plan Networks and 

Appointment Waiting Times:



Availability measure: Plan network: PCPs; Examples of standards used: 

Plan network: * Florida requires physicians to certify that their 

overall practice does not exceed 3,000 patients.; * Maryland requires 

each health plan to have enrolled beneficiaries-to-PCP ratios that do 

not exceed 2,000:1 for adults and PCPs should have no more than 1,500 

beneficiaries under age 21. Tennessee uses a maximum 2,500:1 

beneficiaries-to-PCP ratio.; * Ohio’s contracts with health plans 

specify a required number of PCPs based on the number of beneficiaries 

and plans in a county..



Availability measure: Plan network: Specialists; Examples of standards 

used: Plan network: * New York requires each participating plan to have 

30 specialties: 14 with specific ratios of enrolled beneficiaries to 

specialists and 16 specialties for which health plans must have at 

least two providers.; * Ohio requires health plans to have a specified 

number of 6 types of specialists, including dentists, allergists, and 

general surgeons, per county or service area.; * Pennsylvania requires 

health plans to provide beneficiaries with a choice of at least 2 

appropriate specialists within a reasonable geographic distance..



Availability measure: Plan network: Geographic distribution; Examples 

of standards used: Plan network: * Ohio requires health plans to ensure 

that 40 percent of beneficiaries reside within 10 miles of a PCP.; * 

Texas requires health plans to have a PCP within 30 miles of a 

beneficiary’s residence and specialty care within 75 miles.; * 

Washington requires health plans in urban areas to have two PCPs within 

10 miles of 90 percent of beneficiaries; plans in rural areas must have 

one PCP within 25 miles of most beneficiaries..



Availability measure: Plan network: Appointment waiting times.



Availability measure: Plan network: First visit; Examples of standards 

used: Plan network: * California requires that the first visit of newly 

enrolled beneficiaries be within 120 days.; * Michigan requires health 

plans to set a standard for when new beneficiaries should first visit a 

PCP.; * Pennsylvania requires that health assessments, general physical 

examinations, or first examinations be scheduled within 3 weeks of 

enrollment..



Availability measure: Plan network: Appointment scheduling; Examples of 

standards used: Plan network: * California requires health plans to 

provide urgent care within 24 hours and to set a standard for routine 

appointments.; * Nevada requires appointments for urgent care within 2 

days, and that routine care be scheduled within 2 weeks of request.; * 

New York requires that appointments be scheduled within 24 hours for 

urgent care, 4 weeks for routine and preventive care, and 4 to 6 weeks 

for specialist care..



Availability measure: Plan network: In-office waiting time; Examples of 

standards used: Plan network: * Florida requires that explanations be 

given to beneficiaries if they must wait more than 30 minutes; if the 

wait will exceed an hour, the provider is to reschedule the 

appointment.; * Michigan requires health plans to set an in-office 

waiting time standard.; * Pennsylvania requires that beneficiaries wait 

no more than 20 minutes on average or 1 hour maximum past their 

scheduled appointment times..



Source: GAO analysis of states’ data, as of December 2001.



[End of table]



Routinely monitoring plan performance, especially compliance with established 

network requirements and standards, is critical because providers can-

-and do--change their participation in Medicaid managed care, which in 

turn can affect beneficiaries’ access to care. In some cases, a state 

may not have set a specific network requirement but nonetheless 

independently monitors plan performance. States that monitor the extent 

to which participating plans’ network providers are actually available 

to beneficiaries are better able to systematically identify and respond 

to access problems. For example, see the following.



* In 1999, Tennessee reviewed each managed care plan’s contracts with 

its providers and contacted providers directly to independently verify 

their participation with the plan and whether they were open to new 

Medicaid patients. The state found that, for one health plan, only 44 

percent of the participating PCPs accepted new Medicaid patients; of 

the remaining 56 percent of PCPs, 33 percent had Medicaid patients but 

would not accept any new ones, and 23 percent either did not accept any 

Medicaid patients or could not be reached by telephone. Determining 

that the plan did not comply with requirements for PCP availability, 

the state required the plan to add providers who would accept new 

Medicaid beneficiaries before assigning any additional beneficiaries to 

the plan. State officials also reported that they now conduct a regular 

telephone survey of providers to verify the provider data that 

participating plans submit.



* Washington has a broad requirement that physician networks be 

adequate to serve enrolled beneficiaries but does not set as many 

additional specific standards as do some other states. The state does, 

however, require participating plans to routinely report which 

physicians are participating in their Medicaid networks and 

independently verifies plan reports by periodically placing test calls 

to physicians. Washington also compiles the physician-level information 

into a centralized database to review physician participation across 

health plans in order to better ensure that capacity is not overstated.



To monitor plan performance in terms of provider availability to 

beneficiaries, 13 of the 14 states we reviewed routinely obtained 

periodic data from participating plans on the number of physicians in 

their networks, ranging from weekly reports in Maryland to quarterly 

reports in California, Massachusetts, and New York. As a part of this 

routine data collection, 12 states also reviewed the geographic 

distribution of physicians in their networks. Fewer states, however, 

took additional steps to determine, on an ongoing basis, whether the 

plan-submitted data adequately reflected network capacity to serve 

Medicaid beneficiaries. For example, in 9 states the plans’ provider 

lists identified those physicians who were accepting new Medicaid 

patients, which would help indicate the extent to which plan networks 

were open to new public beneficiaries, and 7 states independently 

verified the accuracy of the submitted provider lists. Five states 

analyzed information across the plans’ provider lists to help identify 

the unduplicated number of PCPs available to the Medicaid managed care 

population and to help avoid overstating overall physician 

availability. (See fig. 2.):



Figure 2: Variation in 14 States’ Monitoring of Medicaid Managed Care 

Plans’ Provider Information:



[See PDF for image]



[End of figure]





[A] In some counties in California, plans are required to update their 

provider lists semiannually.



[B] The state imposed specific standards regarding the number of PCPs 

in a health plan’s network (such as beneficiaries-to-PCP ratios) but 

did not account for providers that may be enrolled in multiple plans.



[C] The state plans to reinstitute requirements for health plans to 

submit provider information quarterly in the next contract period.



[D] The state requires health plans to limit the total number of 

patients a physician may have across all lines of business (for 

example, private pay, Medicaid, and other types of insurance coverage), 

but the state does not monitor compliance with this limit.



[E] The state does not obtain lists from health plans that indicate the 

number of providers accepting new patients. Instead, it tracks the 

number of patients each provider is willing to accept through a health 

plan and compares this information to the number of beneficiaries 

enrolled with a particular physician. Based on this comparison, the 

state identifies which physicians should be accepting new patients.



Compared to state monitoring of provider network information, even 

fewer states monitored compliance with their requirements for 

appointment waiting times. Five of the 14 states with Medicaid managed 

care--California, Massachusetts, New York, Tennessee, and Washington--

routinely collected data or otherwise independently verified health 

plans’ compliance with specific appointment-related standards such as 

maximum time frames to schedule an initial health assessment (first 

visit) or routine-care appointment and in-office waiting times. To 

determine whether beneficiaries newly enrolled in a plan received 

initial health assessments within 120 days of enrollment, California 

regularly reviews health plan reports and physician office medical 

records for a sample of new beneficiaries in each plan. To verify 

physician compliance with appointment scheduling standards, New York 

makes random calls to physicians (200 offices per plan service area per 

year), requesting information on the next available appointment for a 

specified need, such as routine care, urgent care, or after-hours care. 

In contrast, Massachusetts directs plans to develop and monitor 

compliance with their own appointment scheduling requirements, and the 

state annually reviews and critiques the methodology and results 

reported by each plan.



Absent routine verification or monitoring of plans’ compliance with 

network and availability requirements, states do not have an adequate 

assurance that such requirements are having their intended effect on 

beneficiaries’ access to managed care providers. Officials in one state 

that did not verify requirements indicated that the standards served as 

a basis for legal recourse in the event that beneficiaries raised 

complaints regarding appointment availability. Undertaking additional 

measures to verify plan compliance, as Tennessee did, can identify more 

comprehensive network problems that limit access to care for Medicaid 

beneficiaries and that might otherwise go undetected.



New Regulations May Alter States’ Approaches to Monitoring Managed 

Care:



The new Medicaid managed care regulations, effective August 13, 2002, 

and to be fully implemented by August 13, 2003, will likely require 

some states to alter their approaches to requirements for their 

participating plans and provider networks. In general, the regulations 

require that states ensure--through their contracts with managed care 

plans--that participating plans demonstrate their capacity to serve the

needs of their enrollees in accordance with any specific standards that

states set for access to care. Among other things, the regulations

require states to

ensure that participating plans:



* maintain and monitor their networks of providers to provide adequate 

and timely access to all services covered under their contracts with 

the states, including monitoring the numbers of network providers who 

are not accepting new Medicaid patients;



* ensure that network providers offer hours of operation that are no 

less than the hours offered to commercial enrollees or comparable to 

those of Medicaid FFS;



* make services included in the contract available 24 hours a day, 7 

days a week, when medically necessary; and:



* establish mechanisms to ensure compliance by providers with state 

standards for access to care.



The regulations require states to certify to CMS--at the time the state 

enters into a contract with a plan or when there are significant 

changes that would affect the ability of plans to provide adequate 

capacity or services--that plans have complied with state requirements 

for the availability of services covered by managed care contracts. To 

the extent that states have not specified requirements regarding health

plan physician network capacity or assurances of access to care, states 

may need to revise their contracts with plans to comply with this new 

requirement. States that verify or monitor participating plans’ actual 

compliance with the terms of their contracts will likely have greater 

direct and routine information on whether the access-related 

requirements they have set out for participating plans are achieving 

their intended benefit for covered beneficiaries.



Routine Monitoring of Service Utilization Often Handicapped by Poor 

Data:



Determining the extent to which Medicaid beneficiaries are utilizing--

and are satisfied with--covered program services is an important test 

of the effectiveness of any state requirements for managed care plans’ 

network capacity and accessibility. To assess beneficiaries’ service 

utilization and satisfaction, the states we reviewed generally required 

participating plans to routinely provide data from two key sources: 

encounter data, which are individual-level data on service use, and 

HEDIS, which provides comparative information across participating 

plans for designated service measures. Most states also administered 

CAHPS, which is a standardized beneficiary satisfaction survey. 

However, for the majority of states we reviewed, the utility of these 

data for routine monitoring was often handicapped because of the 

frequent failure of plans to submit reliable encounter data and the 

exclusion of significant shares of beneficiaries from the HEDIS and 

CAHPS data. CAHPS survey results were further limited by poor response 

rates in most states. A few states reported making sufficient progress 

in their efforts to improve the quality of their encounter data that 

they could use them to routinely analyze service utilization in their 

Medicaid managed care programs. In addition to these routine data 

sources, a few states reported conducting occasional special studies 

that enabled them to identify and focus on access issues pertaining to 

beneficiaries’ use of services or satisfaction with services received.



Encounter Data:



Encounter data are intended to capture information on beneficiaries’ 

use of primary and preventive care as well as other services, such as 

emergency room visits. These data can help states identify patterns of 

care along several dimensions, such as by type of visit or patient 

(such as well-child visits by age), by health condition or disorder 

(such as asthma or diabetes), and by plan. As a condition of their 

approved federal managed care waivers, states must require Medicaid 

managed care plans to submit encounter data. But obtaining reliable and 

useful encounter data has proven to be a difficult undertaking, as we 

have earlier reported.[Footnote 19] According to CMS and several of the 

states we reviewed, many states continue to struggle with obtaining 

reliable and complete encounter data. One state we contacted found that 

the lack of standardized provider coding and formatting procedures 

resulted in missing and incomplete data. As a result, only 16 percent 

of the provider identifiers in the submitted encounter data could be 

matched to the state’s Medicaid provider master file. Another state 

noted that its encounter data were of limited use because many health 

plans were unable to obtain complete data from their providers. Two of 

the 14 states we reviewed reported that obtaining complete encounter 

data was more problematic for health plans that paid their physicians a 

monthly capitated payment that is not linked to the delivery of 

specific services.
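

The first state’s finding--that only 16 percent of provider identifiers could
be matched to its provider master file--illustrates a check that amounts to a
simple lookup of submitted identifiers against the state’s enrollment file.
The Python sketch below shows the general form of that match-rate calculation;
the identifiers are hypothetical.

# Illustrative only: hypothetical provider identifiers.
master_file_ids = {"A100", "A200", "A300"}  # providers enrolled with the state

# Provider identifiers as they appear on plan-submitted encounter records;
# blanks and nonstandard codes are common when coding is not standardized.
encounter_provider_ids = ["A100", "B999", "", "A200", "a300", "C123"]

matched = sum(1 for pid in encounter_provider_ids if pid in master_file_ids)
match_rate = matched / len(encounter_provider_ids)
print(f"Provider identifiers matched to the master file: {match_rate:.0%}")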



For states providing Medicaid services through managed care, encounter 

data often are the basis for states’ responses to federal reporting 

requirements under Medicaid’s Early and Periodic Screening, Diagnostic 

and Treatment (EPSDT) services. EPSDT is designed to provide children 

and adolescents with access to comprehensive, periodic evaluations of 

health, developmental, and nutritional status, as well as hearing, 

vision, and dental services.[Footnote 20] The EPSDT annual reports that 

states must submit to CMS are designed to capture, by age group, 

information such as the number of children who (1) received EPSDT 

health screenings,[Footnote 21] (2) were referred for corrective 

treatment, (3) received dental treatment or preventive services, and 

(4) were enrolled in managed care plans. However, we have previously 

reported that managed care plans, particularly those that pay their 

participating physicians on a capitated basis, often had difficulty 

collecting and reporting complete and accurate EPSDT data.[Footnote 22] 

Thus, EPSDT reports that are based on encounter data are often 

incomplete or inaccurate, compromising the reliability of states’ data 

on use of these services.



Despite these widespread problems with encounter data, a few states we 

reviewed noted that the reliability and usefulness of their encounter 

data have improved over time. Maryland, New York, and Michigan, for 

example, reported sufficient progress with improving the quality of 

their encounter data that they are now able to use them to analyze 

service utilization in their Medicaid managed care programs, as 

indicated below.



* Maryland officials noted that, after the state spent several years

developing and refining its system for obtaining encounter data, it can

now use the data as the basis for making risk-adjusted payments to plans

and for routinely assessing Medicaid managed care beneficiaries’

utilization of well-child, ambulatory, and emergency room visits. The

state publicly reports performance information by health plan, creating 

a strong incentive for health plans to ensure that all encounters are 

reported. To ensure that the reported encounter data accurately portray 

services delivered, the state conducts validation studies on the data 

submitted by health plans. The state also reviews the distribution and 

frequency of diagnoses reported through the encounter data over time to 

monitor whether the mix of diagnoses across the population changes.



* New York established a data warehouse for Medicaid managed care in 

1997. The warehouse includes encounter data submitted by health plans 

as well as data from other providers’ FFS claims for reimbursement for 

services provided to managed care beneficiaries outside of their health 

plan. A variety of reports on utilization and access data are generated 

and shared with plans on a restricted access web site.



* Michigan is developing a data warehouse that will combine managed 

care encounter data with FFS claims and public health data, such as 

vital statistics and immunization records, into a single information 

system that it will use to analyze beneficiaries’ service utilization. 

The data warehouse will also be able to create utilization profiles by 

managed care plan. The state has begun testing the data warehouse, which

is expected to be operational within the next year.



HEDIS:



HEDIS is a set of standardized performance measures that helps 

purchasers and consumers compare the performance of managed health care 

plans.[Footnote 23] HEDIS performance measures are organized into eight 

categories, four of which include measures directly related to 

beneficiary service utilization.[Footnote 24] The Medicaid version of 

HEDIS includes various access-related measures that attempt to capture 

beneficiaries’ use, often by age, of various preventive and other 

services from specified providers, as illustrated in table 4. (See app. 

I for a more detailed list of Medicaid HEDIS measures related to 

service utilization.):



Table 4: Examples of Medicaid HEDIS Measures Related to Service 

Utilization for Children:



General HEDIS category: Effectiveness of care; Specific HEDIS measure: 

Childhood immunization status; Description: The percentage of enrolled 

children who turned 2 years old during the measurement year, who were 

continuously enrolled for 12 months preceding their second birthdays 

and who were identified as having the recommended number of specific 

immunizations by their second birthdays.



Specific HEDIS measure: Adolescent immunization status; Description: 

The percentage of enrolled adolescents who turned 13 during the 

measurement year, who were continuously enrolled for 12 months 

immediately preceding their 13th birthdays and who were identified as 

having had the recommended number of specific immunizations by their 

13th birthdays.



Specific HEDIS measure: Use of appropriate medications for people with 

asthma; Description: Whether members with persistent asthma are being 

prescribed medications acceptable as primary therapy for long-term 

control of asthma.



General HEDIS category: Access/ availability of care; Specific HEDIS 

measure: Children’s access to PCPs; Description: The percentage of 

enrolled members age 12 months through 24 months, 25 months through 6 

years, and 7 years through 11 years who had a visit with a network 

PCP.



Specific HEDIS measure: Annual dental visit; Description: The 

percentage of enrolled members age 4 through 21 who were continuously 

enrolled during the measurement year and who had at least one dental 

visit during the measurement year (when dental services are a covered 

benefit under Medicaid).



General HEDIS category: Use of services; Specific HEDIS measure: Well-

child visits in years 3, 4, 5, and 6 of life; Description: The 

percentage of members who were 3, 4, 5, or 6 years old during the 

measurement year, who were continuously enrolled during the measurement 

year, and who received one or more well-child visits with a primary 

care practitioner during the measurement year.



Specific HEDIS measure: Adolescent well-care visits; Description: The 

percentage of enrolled members who were age 12 through 21 years during 

the measurement year, who were continuously enrolled during the 

measurement year, and who had at least one comprehensive well-care 

visit with a PCP or an obstetrician/gynecologist practitioner during 

the measurement year.





Source: NCQA, HEDIS 2000: Technical Specifications (Washington, D.C.: 

1999).



[End of table]
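

Measures such as those in table 4 are calculated as rates: the denominator is
the set of children meeting the age and continuous-enrollment criteria, and
the numerator is the subset who received the qualifying service. The Python
sketch below illustrates the general form of such a calculation for a measure
resembling well-child visits in years 3 through 6 of life; the records are
hypothetical, and the actual HEDIS technical specifications are considerably
more detailed.

# Illustrative only: hypothetical enrollment and visit records.
children = [
    {"id": "c1", "age": 4, "months_enrolled": 12, "well_child_visits": 2},
    {"id": "c2", "age": 5, "months_enrolled": 7,  "well_child_visits": 1},
    {"id": "c3", "age": 3, "months_enrolled": 12, "well_child_visits": 0},
    {"id": "c4", "age": 6, "months_enrolled": 12, "well_child_visits": 1},
]

# Denominator: children age 3 through 6 who were continuously enrolled for
# the measurement year. Note that the enrollment criterion drops c2 entirely.
denominator = [c for c in children if 3 <= c["age"] <= 6 and c["months_enrolled"] >= 12]

# Numerator: children in the denominator with at least one well-child visit.
numerator = [c for c in denominator if c["well_child_visits"] >= 1]

print(f"Well-child visit rate: {len(numerator) / len(denominator):.0%}")
print(f"Children excluded by the enrollment criterion: "
      f"{len(children) - len(denominator)} of {len(children)}")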



Twelve of the 14 states we reviewed used HEDIS measures to help assess 

Medicaid beneficiaries’ utilization of services. Eleven states required 

participating plans to submit HEDIS performance results, while 1 state-

-Ohio--conducted its own HEDIS analysis using encounter data submitted 

by plans.[Footnote 25] Some states used the full set of HEDIS measures, 

while others used selected measures corresponding to areas of interest.



Despite the potential of HEDIS to provide valuable information 

regarding beneficiaries’ use of managed care services, its narrow

criteria for identifying beneficiaries to be included in the assessments

often limit the ability to generalize results to all beneficiaries within a

plan or within a state. Many HEDIS measures require beneficiaries to 

have 12 months of continuous enrollment in a single managed care plan 

in the assessment year in order to be included in the 

measures.[Footnote 26] As a measurement criterion, the continuous 

enrollment requirement is intended to ensure that comparisons of 

performance across health plans are made on the basis of sample 

populations that have been enrolled for similar periods of time. 

However, because beneficiaries’ average length of time in the Medicaid 

program can be less than 12 months--ranging from 6 to 9 months in three 

states--this 12-month enrollment requirement excluded at least one 

quarter of Medicaid beneficiaries in most of the states we reviewed

and more than half in four states, as shown in table 5.[Footnote 27] 

Consequently, HEDIS measures may not provide a representative measure 

of service utilization for a significant share of children covered by 

Medicaid managed care. Another limitation of the HEDIS measures is that 

they are often based on encounter data and are thus subject to the 

reliability concerns previously raised.



Table 5: Estimated Percentage of Medicaid Children Excluded from HEDIS:



State[A]: Colorado; Percentage excluded from HEDIS[B]: 79.



State[A]: Ohio; Percentage excluded from HEDIS[B]: 75.



State[A]: Washington; Percentage excluded from HEDIS[B]: 52.



State[A]: Florida; Percentage excluded from HEDIS[B]: 51.



State[A]: New York; Percentage excluded from HEDIS[B]: 40.



State[A]: Michigan; Percentage excluded from HEDIS[B]: 33.



State[A]: Maryland; Percentage excluded from HEDIS[B]: 32.



State[A]: Pennsylvania; Percentage excluded from HEDIS[B]: 31.



State[A]: Massachusetts; Percentage excluded from HEDIS[B]: 29.



State[A]: California; Percentage excluded from HEDIS[B]: 26.



State[A]: Illinois; Percentage excluded from HEDIS[B]: 24.



State[A]: Nevada; Percentage excluded from HEDIS[B]: [C].



State[A]: Tennessee; Percentage excluded from HEDIS[B]: [D].



State[A]: Texas; Percentage excluded from HEDIS[B]: [D].





Source: GAO analysis of states’ data, as of December 2001.



[A] States were asked for enrollment information for the most recent 

year for which data were available, which was generally 2001.



[B] Percentages represent the portion of the population excluded from 

the required sample for some of the HEDIS measures because they were 

enrolled for less than 12 months.



[C] State could not provide exclusion data.



[D] State does not use HEDIS data.



[End of table]



Several states we reviewed provided examples of how they used the HEDIS 

data they received from participating plans. These included using the 

information to compare each plan’s performance against national 

Medicaid averages for selected measures and developing report cards to 

compare results across plans. Given issues we identified with the 

completeness of the data, however, such uses and comparisons may not be 

reliable indicators of beneficiaries’ use of services and may render a 

false impression of beneficiaries’ actual experience in service 

utilization.



CAHPS:



State Medicaid managed care programs are required to have an internal 

quality assurance system, which can involve administering beneficiary 

satisfaction surveys.[Footnote 28] Thirteen of the 14 states we 

reviewed reported using CAHPS to assess beneficiaries’ experiences with 

their Medicaid managed care plans.[Footnote 29] CAHPS is a standardized 

survey designed to compare the performance of managed care plans on the 

basis of beneficiaries’ perceptions regarding the care they received 

through their plans.[Footnote 30] The CAHPS survey covers a range of 

topics related to service utilization, including appointment 

scheduling, waiting time in a physician’s office, and the use of 

specialty services, as shown in table 6.



Table 6: Examples of Beneficiary Satisfaction Questions for Children 

Covered by CAHPS:



Measure: First visit; Question: * Did you get an appointment for your 

child’s first visit to a doctor or other health care provider for a 

checkup, or for shots or drops, as soon as you wanted?.



Measure: Appointment scheduling; Question: * In the last 6 months, how 

often did your child get an appointment for regular or routine health 

care as soon as you wanted?; * In the last 6 months, when your child 

needed care right away for an illness or injury, how often did your 

child get care as soon as you wanted?.



Measure: Ambulatory care; Question: * In the last 6 months (not 

counting times your child went to an emergency room), how many times 

did your child go to a doctor’s office or clinic?.



Measure: In-office waiting time; Question: * In the last 6 months, how 

often did your child wait in the doctor’s office or clinic more than 15 

minutes past the appointment time to see the person your child went to 

see?.



Measure: Referral to specialist; Question: * In the last 6 months, how 

much of a problem, if any, was it to get a referral to a specialist 

that your child needed to see?.



Source: CAHPS 2.0, Child Medicaid Managed Care Questionnaire and Child 

Supplemental Questions, (Rockville, Md.: AHRQ, 1998).



[End of table]



Like HEDIS, however, information from CAHPS is only gathered from a 

subset of beneficiaries. CAHPS has a 6-month continuous enrollment 

requirement for Medicaid beneficiaries to be included in the survey 

sample. While this is a shorter minimum enrollment period than for 

HEDIS, it still resulted in excluding about one quarter or more of 

covered beneficiaries in five states we reviewed, and nearly half or 

more in two states, as shown in table 7. Moreover, several states using 

CAHPS reported that they had low response rates from the sampled 

population; in some cases, surveys targeted only those beneficiaries 

with telephones, a practice that has the potential to bias the results

by excluding beneficiaries who could not be reached by that method.[Footnote 31]



Table 7: Estimated Percentage of Medicaid Children Excluded from CAHPS:



State[A]: Colorado; Percentage excluded from CAHPS[B]: 61.



State[A]: Ohio; Percentage excluded from CAHPS[B]: 49.



State[A]: Texas; Percentage excluded from CAHPS[B]: 31.



State[A]: Florida; Percentage excluded from CAHPS[B]: 30.



State[A]: Washington; Percentage excluded from CAHPS[B]: 24.



State[A]: Michigan; Percentage excluded from CAHPS[B]: 17.



State[A]: Maryland; Percentage excluded from CAHPS[B]: 16.



State[A]: Massachusetts; Percentage excluded from CAHPS[B]: 15.



State[A]: New York; Percentage excluded from CAHPS[B]: 15.



State[A]: Pennsylvania; Percentage excluded from CAHPS[B]: 13.



State[A]: California; Percentage excluded from CAHPS[B]: 11.



State[A]: Illinois; Percentage excluded from CAHPS[B]: 10.



State[A]: Nevada; Percentage excluded from CAHPS[B]: [C].



State[A]: Tennessee; Percentage excluded from CAHPS[B]: [D].





Source: GAO analysis of states’ data, as of December 2001.



[A] States were asked for enrollment information for the most recent 

year for which data were available, which was generally 2001.



[B] Percentages represent the portion of the population excluded from 

CAHPS because this group was enrolled for less than 6 months.



[C] State could not provide data.



[D] Tennessee does not use CAHPS, although it does conduct a state-

designed survey of a sample of all state residents about insurance 

coverage and satisfaction with services, including access to care.



[End of table]



Gauging beneficiary satisfaction with services solely through a 

satisfaction survey is an inherently difficult process, especially when 

a sample is not representative or response rates are low. To augment 

information on beneficiary satisfaction, states also had available the 

results of their complaints and grievances processes, which they are 

required to have as a condition of their managed care programs. Nearly 

all of the states we reviewed with Medicaid managed care operated a 

central hotline or complaint number, where beneficiaries could obtain 

program information or request assistance locating providers in 

addition to filing complaints. The states we reviewed generally focused 

on ensuring that complaints and questions raised by beneficiaries’ 

calls were addressed. For example, five states--Ohio, Maryland, 

Michigan, New York and Washington--had information databases that 

tracked complaints from their inception to their resolution. New York, 

Ohio, and Washington complaint reports were also analyzed by managed 

care plan, which allowed officials to identify any trends in 

beneficiary complaints.
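

Tracking complaints by plan, as these states reported doing, amounts to
tabulating complaint counts by plan and category over time. A minimal
illustrative Python sketch follows; the complaint records are hypothetical.

from collections import Counter

# Illustrative only: hypothetical records from a complaint-tracking database.
complaints = [
    {"plan": "Plan A", "category": "appointment access"},
    {"plan": "Plan A", "category": "provider availability"},
    {"plan": "Plan B", "category": "appointment access"},
    {"plan": "Plan A", "category": "appointment access"},
]

by_plan_and_category = Counter((c["plan"], c["category"]) for c in complaints)
for (plan, category), count in by_plan_and_category.most_common():
    print(plan, "-", category, ":", count)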



As a tool to assess overall problems with access to care, records of 

complaints and grievances had several limitations. In some cases, 

states’ hotline or complaint data did not distinguish between requests 

for assistance and complaints about provider services, thus making it 

difficult to assess the extent of any systemic access problems. In 

addition, a small number of complaints could be difficult to interpret 

at face value; while few complaints or grievances could indicate 

overall satisfaction with care, it could also indicate a general lack 

of knowledge about or ability to file a formal complaint or grievance. 

A small number of complaints also could limit the state’s ability to 

identify any specific trends of systemic problems with access to care 

with a specific plan or within a state’s Medicaid managed care program 

as a whole.



Special Studies:



A few states we contacted reported that they occasionally conducted 

special studies, in addition to any routine monitoring they did, to 

assess service utilization issues for their Medicaid beneficiaries. For 

example, Maryland’s 4-year evaluation of its Medicaid managed care 

program, published in January 2002, concluded that providers and 

consumers felt that PCP networks were “under stress” in certain areas 

of the state, with a notable lack of physicians in rural areas of the 

state.[Footnote 32] The evaluation also identified significant 

inaccuracies with plan-submitted data on physician providers, including 

duplicate provider entries, incorrect provider affiliation status with 

participating plans, and missing information. As a result, the state 

took steps to develop and implement more rigorous methods of monitoring 

plan-submitted data. In particular, the state now monitors plans’ PCP 

networks, including verification calls to samples of physicians, so 

that PCP shortage areas can be identified and addressed. In the future, 

the state also plans to develop more specific standards for commonly 

used specialists and monitor plans’ compliance with these standards.



In 2001, Washington conducted a survey of new Medicaid managed care 

beneficiaries in 14 counties as a result of concerns raised by 

beneficiary advocates in light of managed care plans’ withdrawal from 

program participation. The study examined these beneficiaries’ 

experiences with accessing medical care, including emergency room use. 

The study found that 90 percent of new beneficiaries reported having a 

PCP after enrollment in Medicaid, compared to 62 percent having a PCP 

before enrollment in Medicaid managed care. Additionally, there were no 

significant differences between the experiences of managed care and FFS 

beneficiaries who had obtained medical, specialist, or emergency room 

care. However, the study did find that the majority of beneficiaries 

were unfamiliar with several key processes concerning their managed 

care plans, such as how to change PCPs within a health plan, contact 

their health plans when questions or problems arose, and make 

complaints. Based on these findings, the state plans to work with 

managed care plans to improve beneficiaries’ awareness regarding PCP 

selection and communication with their health plans.



States Attempted to Minimize Impact of Managed Care Plan Withdrawals on 

Access to Care, but Effect on Beneficiaries Is Uncertain:



The managed care industry--in the commercial as well as public sectors-

-has experienced considerable changes in recent years following periods 

of rapid entry of multiple managed care plans in certain markets and 

subsequent retrenchment based on plans’ willingness or ability to 

compete in those markets. Many communities and states have experienced 

changes in the number of managed care plans as a result of numerous 

health plan mergers, acquisitions, and closures as the managed care 

industry has evolved and matured. State Medicaid programs have often 

been affected by the withdrawal of some managed care plans from their 

programs; in some cases, states have intentionally acted to reduce the 

number of participating plans. The four states we visited--

Massachusetts, Ohio, Tennessee, and Texas--had experienced such changes 

to varying degrees. These states had taken various measures to help 

minimize any adverse effects on beneficiaries’ access to care due to 

participating plans leaving the Medicaid program. It is not clear, 

however, to what extent these states’ efforts had been successful in 

helping beneficiaries transition smoothly to new health plans and 

physicians and thus avoid problems with access to care.



The potential amount of disruption that occurs when a health plan 

withdraws from a state or community can vary considerably, depending on 

a number of circumstances. Health plan mergers can result in minimal 

changes for beneficiaries if they are able to maintain established 

relationships with their providers and can even strengthen the network 

of available providers within a plan. In Massachusetts, for example, a 

merger between two health plans in the early 1990s was considered by 

state officials to have increased physician availability for Medicaid 

beneficiaries enrolled in managed care. Officials in Massachusetts also 

noted that reductions in the number of health plans ensured that 

participating plans have enough enrolled beneficiaries to spread the 

costs and risks associated with capitation payments. In other cases, 

however, the extent of disruption may be more severe, particularly when 

large numbers of beneficiaries are affected and a significant number of 

plans struggle to remain financially viable.



Each of the four states we visited experienced varying levels of health 

plan withdrawals from their Medicaid managed care programs. Plan 

withdrawals over several years have affected almost 50 percent of 

beneficiaries in Tennessee and over 15 percent in Ohio, raising 

concerns about the accessibility of care to beneficiaries in these 

states. The magnitude of health plan withdrawals in Tennessee 

necessitated state efforts to recruit additional plans, at least one of 

which was later found by the state to have deficiencies related to 

failure to pay physicians accurately and promptly. In Ohio, at least 

one health plan that withdrew from the state program also failed to pay 

some of its network providers for services already rendered. In such 

cases, delayed reimbursement by managed care plans can seriously 

jeopardize providers’ willingness to continue participating in the 

Medicaid program and provide services to eligible beneficiaries. In 

contrast to Tennessee and Ohio, plan withdrawals in Massachusetts and 

Texas have affected a smaller share of beneficiaries. Massachusetts 

estimated that about 4 percent of its beneficiaries were affected by an 

early period of plan fluctuation as the state was implementing its 

mandatory managed care program; since 2000, however, the program has 

been stable with the same four managed care plans participating. In 

Texas, approximately 1 percent of beneficiaries have been affected by 

withdrawals of participating plans since the state implemented managed 

care in 1996.



To avoid disruptions in care for beneficiaries when plans ceased their 

participation in the Medicaid program, these states had implemented 

various procedures to help smooth beneficiaries’ transition to other 

plans or providers. For example, in cases where a withdrawing health

plan intends to sell its membership to another plan, Ohio first

compares the provider network of the withdrawing plan with that of the

plan purchasing the membership. The state does not approve the

sale of membership unless most of the providers participating in the 

withdrawing plan also participate in the purchasing plan. In other 

cases, a state’s contract with its managed care plans required certain 

actions. Ohio’s contract, for example, requires a minimum of 75 days 

advance notice of a plan’s intention to terminate its participation in 

the program and includes provisions to collect a monetary assurance 

from the withdrawing plan or to withhold payments until all contractual 

requirements are completed, including required payments to network 

providers. Ohio and Texas provided examples of efforts to inform 

beneficiaries directly affected by a plan’s withdrawal about options 

available to them to continue care, such as information on other 

participating plans and how to choose another plan. These four states 

indicated that they believed their efforts to respond to changes in 

managed care participation were sufficient to minimize disruption to 

care for Medicaid-eligible beneficiaries. However, the extent to which 

the states’ efforts adequately ensured beneficiaries’ access to 

continuous care was uncertain. Appendix II provides more detail on 

managed care plan withdrawals in these four states.



For Medicaid FFS, State Requirements for Providers and Monitoring of 

Service Utilization Were More Limited:



For states’ FFS-based Medicaid delivery systems, which continue to 

serve the majority of children in half of the states we reviewed, 

requirements for participating providers and monitoring of provider 

availability were significantly more limited than for managed care. 

State analysis of service utilization data to assess the frequency and 

patterns of care that beneficiaries received was also more limited, 

despite the ready availability of such data through states’ claims 

payment systems. For traditional FFS programs, beneficiaries may seek 

care from any providers participating in the Medicaid program and may 

change providers at any time if they are dissatisfied. However, 

Medicaid beneficiaries’ ability to easily change providers is dependent 

on the number, type, and location of providers willing to take new 

Medicaid patients, which in turn is strongly influenced by Medicaid 

payment rates and associated administrative processes. We found that 

Medicaid FFS payment rates were significantly lower than rates for 

comparable Medicare services in the majority of states we reviewed, 

which can discourage provider participation and thus restrict 

beneficiaries’ access to a broad supply of providers. States that used 

PCCM programs as part of their FFS service delivery systems were 

somewhat more prone to set certain requirements for participating 

PCCMs, such as a maximum number of assigned beneficiaries and their 

geographic proximity to beneficiaries, than were states with 

traditional FFS systems. States with FFS programs generally did not set 

requirements for specialists or for physicians’ appointments, such as 

maximum waiting times to schedule an appointment, as they did for their 

managed care plans. States were more likely to conduct beneficiary 

satisfaction surveys for their PCCM programs than for their traditional 

FFS systems; the survey results, however, had the same constraints as 

previously discussed for managed care due to the limited share of 

beneficiaries participating in the surveys and low response rates.



Low FFS Payment Rates Can Reduce Provider Participation and Restrict 

Access to Care:



States are required to ensure that their Medicaid service delivery and 

payment systems will afford beneficiaries access to services similar to 

those provided to the state’s general population. To do this, states 

determine which providers may enroll in the Medicaid program to provide 

services, set payment rates for covered services, and pay claims that 

providers submit for the services they provide. In several of the 

states we reviewed with Medicaid FFS programs, program officials said 

that provider survey information and beneficiary complaint data 

suggested that low payment rates, slow payment, and other 

administrative issues deterred physicians in primary care or in some 

specialties from participating in the program. As we reported earlier, 

if payment rates decline to the point that they cause physicians to 

leave Medicaid or to reduce the number of beneficiaries they serve, 

then beneficiary access may be restricted.[Footnote 33]



Our analysis of payment rates indicated that Medicaid FFS payments to 

physicians for primary and preventive services for children were often 

significantly lower than what Medicare paid for comparable services in 

many of the states we reviewed. For the 13 states that paid physicians 

on a FFS basis for Medicaid-eligible children, payment rates ranged 

from 32 percent to 89 percent of Medicare rates. Nine of these states’ 

Medicaid rates were two-thirds or less of Medicare rates for comparable 

services. (See app. III for more detail.) Officials in many of these 

states said that Medicaid rates were also below those of commercial 

payers, although they generally had not conducted systematic studies to 

document these differences.
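

Comparisons of this kind are straightforward ratios of the two fee schedules
for the same services. The Python sketch below shows the form of the
calculation only; the services and dollar amounts are hypothetical and are
not the rates underlying the analysis described above.

# Illustrative only: hypothetical fee-schedule amounts for two services.
medicaid_rates = {"office visit": 28.00, "preventive exam": 35.00}
medicare_rates = {"office visit": 52.00, "preventive exam": 65.00}

for service, medicaid_rate in medicaid_rates.items():
    ratio = medicaid_rate / medicare_rates[service]
    print(f"{service}: Medicaid pays {ratio:.0%} of the Medicare rate")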



Most Traditional FFS Programs Set Few Goals Regarding the Number of 

Providers, and Conducted Minimal Monitoring of Service Utilization:



Despite the potential for low FFS rates to limit the number of 

providers willing to participate in the program, the nine states we 

reviewed with traditional FFS programs did not set specific goals for 

the number of physicians participating in their Medicaid programs and 

did not actively monitor the number and location of providers.[Footnote 

34] While states had lists of physicians who were enrolled as Medicaid 

providers and who submitted claims for services provided, in most cases 

these lists were not frequently or comprehensively updated and thus did 

not provide an accurate count of actively participating physicians. 

Some states’ Medicaid physician databases included physicians who had 

not provided services to Medicaid patients for years. In one state, the 

database double-counted providers who had more than one service location

or billing identifier. In addition, although states have claims data 

that serve as the basis for paying providers for services rendered, 

only some analyzed this information to identify PCPs, specialists, or 

other providers who were actively treating Medicaid beneficiaries. Even 

when they did, states often defined “active” providers to include those 

who submitted a single claim during the past year. With respect to 

appointments, such as maximum waiting times to schedule a routine or 

urgent appointment, none of the states we reviewed with traditional FFS 

programs had specific standards comparable to those we saw for managed 

care programs.



States also did little to monitor service utilization by Medicaid 

beneficiaries participating in traditional FFS care despite having a 

ready source of data in their claims payment systems. Claims data 

contain the type and frequency of services Medicaid beneficiaries have 

received and the type of provider delivering the care, which can be 

used to analyze service utilization. States did report using claims 

data to develop utilization statistics to meet federal requirements for 

annual reporting on EPSDT services for children. However, we have 

reported earlier that state EPSDT reports are often incomplete and 

unreliable, thus compromising their utility in assessing whether 

children are receiving required services.[Footnote 35] Beyond EPSDT, 

only one state with a Medicaid traditional FFS system reported 

analyzing claims data to evaluate access to primary and

preventive services, such as annual well-child and dental 

visits.[Footnote 36] Rather than evaluate access to primary care, at 

least three states used claims data to assess inappropriate utilization 

of higher-cost services, such as emergency room care. For example, 

Texas collects and analyzes information on beneficiaries who 

potentially overuse care--defined as those at or above the 90th 

percentile of use for particular services, including physician, 

emergency room, and pharmacy services. Patients suspected of misusing 

services may be restricted to using a specific physician or pharmacy, 

with the goal of reducing their use of services to a more appropriate 

level.
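

An approach like the one Texas described can be implemented as a percentile
cutoff on per-beneficiary service counts. The Python sketch below is
illustrative only; the visit counts are hypothetical, and the percentile
calculation is deliberately simplified.

# Illustrative only: hypothetical emergency room visit counts per beneficiary.
er_visits = {"bene_01": 1, "bene_02": 0, "bene_03": 14, "bene_04": 2,
             "bene_05": 3, "bene_06": 0, "bene_07": 1, "bene_08": 9,
             "bene_09": 2, "bene_10": 1}

# Simplified 90th-percentile cutoff: the value 90 percent of the way through
# the sorted counts.
counts = sorted(er_visits.values())
cutoff = counts[int(0.9 * (len(counts) - 1))]

flagged = [bene for bene, visits in er_visits.items() if visits >= cutoff]
print("Review threshold (visits):", cutoff)
print("Beneficiaries flagged for review:", flagged)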



Four of the nine states with traditional FFS systems reported 

periodically using beneficiary satisfaction surveys, such as CAHPS, to 

help assess issues regarding access to care. These states were 

Colorado, Illinois, Ohio, and Washington. As with Medicaid managed 

care, however, the utility of these surveys is diminished when there 

are low response rates and a lack of beneficiary representation in the 

sample selection. In one state, the survey sample was limited to 

individuals who had received at least one service in the prior 6 

months, thus excluding individuals who may have tried but failed to 

obtain services. Another state reporting a low beneficiary response 

rate found that while the cooperation rate was high among those who 

were reached, many potential respondents in the survey sample could not 

be contacted because of address or telephone number changes.



PCCM Programs Had Some Requirements for Providers, but Monitoring of 

Service Utilization Was Limited:



States’ PCCM programs are a hybrid of FFS and managed care service 

delivery approaches. They emulate FFS programs in the sense that the 

state has a direct relationship with providers who are enrolled to 

participate in the program and paid retrospectively for services 

actually delivered. PCCM programs share characteristics of managed care 

in the sense that beneficiaries are assigned to a PCCM--a physician, or 

a practice or other entity--that is responsible for coordinating their 

care as a case manager. The seven states we reviewed with PCCM programs 

had more requirements for participating PCCMs than they did for 

providers in traditional FFS programs, but fewer than PCPs in managed 

care programs.[Footnote 37] Similar trends were evident in terms of 

states’ routine monitoring of PCCM availability and beneficiaries’ 

service utilization: more than FFS, less than managed care.



The states we reviewed with Medicaid PCCM programs most often set 

requirements for the maximum number of beneficiaries that a PCCM could 

serve and the geographic proximity of PCCMs to their enrolled 

beneficiaries. None set limits on the number of beneficiaries a 

specialist could serve, and few set specific standards for appointment 

waiting times with their PCCMs; overall, PCCM programs had fewer 

standards than those imposed under managed care, as shown in figure 3.



Figure 3: Selected Requirements for Medicaid PCCM Providers in Seven 

States:



[See PDF for image]



[End of figure]



[A] This type of standard exists in this state’s managed care program but

not its PCCM program.



[B] This type of standard exists in the state’s PCCM program, but not 

its managed care program.



States’ PCCM capacity requirements were most often based on setting a 

maximum number of beneficiaries that a PCCM or practice could serve, 

ranging from 1,000 beneficiaries per PCCM in Arkansas and Pennsylvania 

to 1,500 in Florida and Massachusetts. Louisiana set a limit of 1,200 

beneficiaries per PCCM, or 4,800 for a group practice, and allowed an 

additional 300 beneficiaries to be enrolled for each nurse 

practitioner. With regard to geographic requirements, all five of the 

PCCM programs that had requirements for this standard specified a basic 

maximum of 30 minutes or 30 miles for beneficiaries to reach their 

PCCMs. Four of these states set a higher maximum for rural areas--such 

as 50 miles in Colorado or 60 minutes in Pennsylvania--or allowed 

general exceptions to the 30-minute standard for beneficiaries living 

in some rural areas.
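

The Louisiana limits cited above lend themselves to a simple capacity
calculation of the sort states could apply when deciding whether a PCCM may
be assigned additional beneficiaries. The Python sketch below reflects one
possible reading of those limits (1,200 beneficiaries per PCCM, a
4,800-beneficiary ceiling for a group practice, and 300 more for each nurse
practitioner); the function and its example inputs are hypothetical.

def max_panel_size(pccm_physicians: int, nurse_practitioners: int) -> int:
    # One possible reading of the cited limits: 1,200 beneficiaries per PCCM,
    # capped at 4,800 for a group practice, plus 300 per nurse practitioner.
    physician_capacity = min(pccm_physicians * 1200, 4800)
    return physician_capacity + nurse_practitioners * 300

# A solo PCCM with one nurse practitioner: up to 1,500 assigned beneficiaries.
print(max_panel_size(pccm_physicians=1, nurse_practitioners=1))

# A five-physician group with two nurse practitioners: 4,800 plus 600.
print(max_panel_size(pccm_physicians=5, nurse_practitioners=2))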



States typically monitored provider participation in their PCCM 

programs by compiling weekly or monthly lists of participating PCCMs 

and the number of beneficiaries each PCCM was assigned, which could 

serve as the basis for paying the monthly PCCM fee. Monitoring these 

relative numbers also allowed states to ascertain whether PCCMs could 

be assigned additional beneficiaries. States therefore had current 

information on those providers actively participating as PCCMs and the 

numbers of assigned beneficiaries. This information alone, however, 

would not yield insights into how easily beneficiaries could see their 

PCCMs.



When states had both managed care and PCCM delivery systems, they less 

frequently set requirements for PCCM appointment waiting times than 

they did for managed care.[Footnote 38] Three states that operated both 

PCCM and managed care programs--Colorado, Florida, and Pennsylvania--

did not set any appointment waiting time standards for PCCMs as they 

did for managed care. In contrast, Massachusetts required its PCCMs to 

see new patients within a specific time frame in its PCCM program, but 

not in managed care. Of the four states that did set specific 

requirements for appointment waiting times, only Texas reported 

conducting routine monitoring to assess PCCM compliance with those 

requirements. Texas officials reported conducting audits of a random 

sample of 20 PCCMs per quarter per service area to evaluate compliance 

with respect to appointment scheduling and in-office waiting time.



To monitor service utilization within their PCCM programs, states most 

often relied on analyses of their FFS claims data. Six of the seven 

states with PCCM programs provided PCCMs that had a certain minimum 

number of assigned beneficiaries with periodic data profiles that 

compare service utilization patterns in their Medicaid practices with 

those of the overall program or other PCCMs.[Footnote 39] These data 

profiles often focused on high-cost services or those at risk of 

overutilization, such as inpatient hospitalization or emergency room 

use. Three states also included information related to primary and 

preventive services use. For example, see the following.



* Massachusetts provided PCCM practices that had 200 or more enrolled 

beneficiaries with practice-specific and comparative information about 

the percentage of children who received a recommended number of well-

child visits, by age group. The state further identified, for each 

practice, individual patients who had not received the recommended 

number of well-child visits. State program staff members met with each 

provider twice annually to discuss approaches to address problems 

identified in these data that may indicate limited access.



* Texas provided participating PCCMs with comparative information on 

selected services per beneficiary, including EPSDT visits, family 

planning, and immunizations.



In contrast, states typically did not monitor the utilization of 

services provided by specialists, although several state PCCM programs 

required documentation of PCCM referrals to specialists. Officials in 

several states were aware of problems with access to some types of 

providers and specialists in their PCCM programs, including dentists, 

dermatologists, and pediatric neurosurgeons. In an attempt to address 

such problems, Arkansas conducted a survey of dentists and Florida 

conducted a survey of physicians to identify obstacles to their 

willingness to accept Medicaid patients. While such one-time surveys 

can provide insightful information about problems and potential 

solutions in a specific period, they do not take the place of routine 

or targeted monitoring that can more systematically pinpoint problems 

for particular specialties, geographic areas, or beneficiaries.



Each of the states we reviewed with PCCM programs conducted beneficiary 

satisfaction surveys. In addition, Colorado administers its CAHPS 

survey to individuals participating in all three of the state’s

Medicaid service delivery systems--managed care, traditional FFS, and 

PCCM--in order to help assess experiences of program beneficiaries 

relative to one another. However, given the shortcomings identified 

earlier--low response rates and exclusions of certain beneficiaries 

from sample selection--states could not with confidence generalize the 

results of these beneficiary surveys to the larger population.



Distinct SCHIP Programs Had Fewer Network Requirements and Less 

Monitoring of Service Utilization:



States have used the flexibility provided by SCHIP to take varying 

approaches for their service delivery systems for eligible children. Of 

the 16 states we reviewed, 9 states chose to serve their SCHIP 

beneficiaries through programs that were primarily designed as 

expansions of Medicaid or modeled on their Medicaid programs in terms 

of benefits and provider networks.[Footnote 40] These 9 states used the 

same health plan contracts for Medicaid and SCHIP managed care, and the 

same provider lists for both programs’ FFS-based delivery systems. In 

these cases, the extent of SCHIP monitoring would mirror that of the 

states’ Medicaid programs. On the other hand, 7 states designed at 

least part of their SCHIP programs to be distinct from Medicaid. These 

programs relied almost exclusively on managed care to deliver services. 

Although most of these states also had significant shares of their 

Medicaid beneficiaries in managed care, they set significantly fewer 

provider network requirements for their distinct SCHIP programs than 

for Medicaid and did less monitoring of providers enrolled in their 

SCHIP programs and of children’s use of services in SCHIP. In general, 

few states with distinct SCHIP programs routinely collected and 

analyzed data to ensure that SCHIP-eligible children were receiving 

covered services.



States with SCHIP Programs Distinct from Medicaid Set Few Provider 

Requirements:



The seven states that chose to serve all or most of their SCHIP 

beneficiaries through programs that were distinct from Medicaid used 

managed care delivery systems almost exclusively.[Footnote 41] These 

states were not bound by access-related requirements comparable to 

those for Medicaid PCCM or managed care programs. As such, they set 

provider network requirements and monitored service utilization less 

often in their distinct SCHIP managed care programs than they did in 

their Medicaid managed care programs. As shown in figure 4, only two of 

these seven states set specific beneficiary-to-PCP ratios for SCHIP, 

compared to five states for Medicaid, and no state set specific 

requirements for specialists, compared to three states for Medicaid. 

Similarly, only one of the seven states with distinct SCHIP programs 

set a maximum waiting time for a first appointment with a PCP and none 

had a requirement for in-office waiting times; in contrast, six of these

states’ Medicaid managed care programs set specific requirements for 

one or both of these access measures. Only four of the distinct SCHIP 

programs in these states set any specific standards for appointment 

scheduling, compared to all seven of the states’ Medicaid managed care 

programs.



Figure 4: Comparison of Seven States’ Requirements and Standards for 

Providers in Medicaid and SCHIP Managed Care:



[See PDF for image]



[End of figure]



Note: Table does not include Medicaid and SCHIP programs that have only 

a general requirement that health plans’ networks be adequate to serve 

their members.



[A] Although the state did not have a specific standard, it did require 

plans to monitor this measure.



[B] Florida’s separate SCHIP programs vary by beneficiary age category. 

The SCHIP column in this table refers to the program for older 

children, as the program for children under age 5 is modeled after 

Medicaid and thus has the same standards as the Medicaid program.



[C] Michigan’s data reflect its arrangement with all participating 

health plans except for one plan, which operates under different 

requirements.



The seven states with distinct SCHIP programs also monitored the 

availability of PCPs in plan provider networks less frequently than in 

Medicaid. In contrast to Medicaid managed care, where nearly all states

monitored providers at least quarterly, just three states required 

plans to submit provider lists periodically throughout the year--

Colorado, New York, and Texas. To confirm provider information 

submitted by plans participating in SCHIP, only New York systematically 

contacted physicians to verify information about whether network PCPs 

were accepting new SCHIP patients.[Footnote 42] Four states required 

SCHIP plans to submit physician data annually or every several years 

during state licensure reviews or for the contract renewal process. 

Among these, California’s SCHIP program required plans to indicate the 

number and percentage of PCPs and specialists accepting new patients 

and also to notify the state when there was a change in the provider 

network that resulted in disruption of 25 or more beneficiaries.



The extent of states’ monitoring of participating plans’ SCHIP provider 

networks did not appear to be related to whether SCHIP-eligible 

beneficiaries had access to commercial or noncommercial networks within 

the plans. Some states--such as New York and Texas--did not know 

whether SCHIP-eligible beneficiaries had access to the same providers 

as were participating in plans’ commercial networks. Other states--such 

as Florida, Michigan, and Pennsylvania--stated that most if not all of 

their SCHIP populations did have access to the same providers as in the 

plans’ commercial networks. However, without direct monitoring of PCPs 

enrolled in SCHIP plan networks, states had little or no direct 

knowledge of the extent to which PCPs would see SCHIP beneficiaries, 

including whether enrolled PCPs would accept new SCHIP patients at all 

or limited their practice to only a small number.



Distinct SCHIP Programs Monitored Service Utilization Less than 

Medicaid:



States with SCHIP programs distinct from Medicaid reported fewer 

efforts to monitor children’s utilization of services than in their 

Medicaid managed care programs. This held true for their use of 

encounter data as well as for HEDIS measures and CAHPS beneficiary 

satisfaction survey data.



CMS does not require states to collect encounter data from managed care 

plans participating in SCHIP, as it does in Medicaid managed care. Of 

the states we reviewed with distinct SCHIP programs, we found that two 

states--Florida and Texas--were attempting to collect as well as 

analyze encounter data for SCHIP-eligible children in order to assess 

the type and frequency of services they received. Florida’s distinct 

SCHIP program uses encounter data to compare the number of ambulatory 

visits made by SCHIP beneficiaries to the number of visits that would 

be expected for those children based on their diagnoses.[Footnote 43] 

Texas’ distinct SCHIP program, which was initiated in 2000, has used 

encounter data to compare immunization rates by plan with rates in 

commercial plans.



Four of the seven states--California, Michigan, New York, and 

Pennsylvania--required plans to submit HEDIS data so that the states 

could assess plans’ performance with respect to access to various 

preventive and other services.[Footnote 44] Compared to Medicaid, these 

HEDIS data may be more complete in three of these states--California, 

Michigan, and Pennsylvania--because they had opted to provide SCHIP-

eligible children with continuous eligibility for a 12-month period, 

thus increasing the likelihood that a more representative share of 

eligible children and their families would be included in the 

assessments.



Five of the seven states--California, Florida, Michigan, Pennsylvania, 

and Texas--used CAHPS to assess beneficiaries’ satisfaction with care. 

Compared to Medicaid, the CAHPS data for four of these states’ SCHIP 

programs may be more complete than for their Medicaid programs because 

these states provide continuous eligibility for a 12-month period.



Agency and State Comments and Our Evaluation:



We provided a draft of our report for comment to HHS, as well as to 

Medicaid and SCHIP officials in the 16 states included in our analysis. 

We received comments from HHS and from 13 states. Three states did not 

respond with comments.



HHS Comments:



With regard to states’ Medicaid managed care programs, HHS highlighted 

new requirements included in CMS’s June 2002 regulation implementing 

Medicaid managed care provisions of the Balanced Budget Act of 1997. 

HHS commented that, among other things, the regulation requires states 

to develop a quality strategy setting access standards for network 

adequacy and timeliness of access to care. HHS described this new 

regulation as also making clear the states’ responsibility to 

continually monitor plans’ compliance with these standards. While many 

states, including 13 of the 14 states we reviewed with Medicaid managed 

care delivery systems, were already subject to certain access 

requirements as a condition of receiving waivers of federal Medicaid 

requirements to operate their managed care programs, these requirements 

were not consistent from state to state. This new regulation, which 

must be fully implemented by August 13, 2003, has the potential to 

bring a more systematic approach to access requirements. More 

importantly, its emphasis on state monitoring could better ensure that 

such requirements are achieving their intended purposes.



For states’ Medicaid FFS delivery systems, HHS acknowledged the 

relationship between reimbursement rates and provider participation, 

noting that states can increase payment rates in geographic areas and 

specialties where access has been demonstrated to be a problem. Beyond 

reimbursement rates, HHS commented that our draft report pointed out a 

lack of data to quantify whether there is an access problem in Medicaid 

FFS. To the contrary, our report indicates that despite a ready source 

of information--claims data--for evaluating access to care in an FFS

environment, states generally did not use these data to do so.



HHS agreed that our placement of PCCM programs in the FFS category was 

accurate from a reimbursement standpoint, but stated that PCCM should 

be considered a managed care delivery system because PCPs are expected 

to coordinate care. We continue to believe that a PCCM program is 

better described as an FFS-based delivery system because the 

differences between PCCM and managed care reimbursement approaches can affect providers’ incentives to deliver covered services differently. Our report does distinguish, however, the degree to which 

managed care, traditional FFS, and PCCM programs employ access 

standards and monitoring. Overall, states with PCCM programs tended to 

establish more standards and conduct somewhat more monitoring than for 

their traditional FFS programs, but less than for their managed care 

programs.



With regard to our finding that states with distinct SCHIP programs did 

significantly less to monitor access to care than for their Medicaid 

managed care programs, HHS stated there was a key difference in design 

and intent by the Congress between SCHIP and Medicaid. HHS commented 

that SCHIP allows states to have the flexibility to design programs 

that mirror private insurance and rely on private insurance mechanisms 

to ensure access to and quality of care, rather than laying out 

specific requirements. Acknowledging that states may not have 

comparable requirements for monitoring provider participation and beneficiary service utilization in SCHIP and Medicaid, HHS said that states 

are monitoring enrollment, health access, and outcomes in their SCHIP 

programs. However, with regard to access, we found that few states with 

distinct SCHIP programs monitored provider network participation or 

routinely collected and analyzed data to ensure that SCHIP-eligible 

children were receiving covered services. We did not intend to suggest 

that states should use the same processes for their SCHIP and Medicaid 

programs, but rather simply to contrast states’ monitoring of access to 

care for low-income children eligible for these two programs.



HHS’s comments are reprinted in appendix IV. Additionally, HHS provided 

technical comments, which we incorporated as appropriate.



State Comments:



Several states provided clarifying comments regarding their oversight 

of access to care in Medicaid and SCHIP. These comments generally 

pertained to additional factors affecting access to care, the 

relationship between monitoring and access, and the extent of 

monitoring in traditional FFS and distinct SCHIP programs.



Two states identified factors that affect access to care within their 

Medicaid and SCHIP programs but are not easily controlled by the 

states. One state noted that the supply of physicians is severely 

limited in some states and in some regions of states, affecting all 

payers, including commercial payers as well as Medicaid and SCHIP. 

Another state raised the point that the extent to which children 

receive health care services is influenced by how well their parents or 

guardians understand and comply with recommended levels of health care 

set by providers or by the Medicaid program. We agree that provider 

supply and parental decision making are important determinants in 

children’s access to care and can be difficult factors for state 

programs to address. However, the type of monitoring activities 

addressed in this report can help to identify such factors and areas or 

locations where problems may be more pronounced, thus leading to more 

targeted solutions.



Four states identified certain activities that they believed 

facilitated access to care, but were not addressed in the report. One 

state, for example, noted that its Medicaid program helped 

beneficiaries locate a source of medical care, and another state 

described an initiative to send letters to parents of beneficiaries 

reminding them to schedule medical appointments. Although we recognize 

that these activities may help promote access to care for the Medicaid 

and SCHIP populations, this report did not address activities that 

primarily facilitated access, such as providing outreach to 

beneficiaries or offering provider payment incentives. Instead, we 

focused on states’ efforts to (1) establish and monitor requirements 

for provider availability and (2) gather and analyze data on receipt of 

care. In this regard, one state commented that the report had a “narrow 

perspective” on what constitutes monitoring in managed care and cited a 

range of indicators that it used, including beneficiary complaints, 

grievance reports, state fair-hearing requests, utilization data, and 

immunization rates. While such sources of data and activities hold 

strong potential for providing information concerning access to care, 

this report identified certain shortcomings of some of these indicators 

as programwide measures of access. For example, complaint and grievance 

system data can yield important information about problematic providers 

or services, but are not reliable measures of programwide access.



Four states cautioned against what they saw as a correlation made in 

the report between the amount of monitoring that a state does and the 

degree of access to care for program beneficiaries. For example, one 

state said that the report suggested that if monitoring is limited, 

access is also limited, and disagreed that this is necessarily the 

case. We did not intend to present such a direct correlation. However, 

if a state does not monitor data related to its access standards and to 

utilization of services, it may not know the extent to which 

beneficiaries encounter problems locating and obtaining services. 

During the course of our work, we identified instances where state data 

collection and monitoring revealed access problems that were then 

addressed to improve beneficiary access.



A few states emphasized that they considered HEDIS and CAHPS important 

tools that had helped them monitor health plan performance or achieve 

improvements in quality of care for Medicaid and/or SCHIP 

beneficiaries. One state noted that HEDIS was important in identifying 

and helping to reduce gaps between commercial and Medicaid plan 

performance. Another state questioned whether the continuous enrollment 

requirements for HEDIS (12 months) and CAHPS (6 months) would in fact 

bias the results of any analysis of beneficiaries’ access to care 

because they exclude some beneficiaries. In particular, this state 

believed that the benefits of improvements made by health plans are not 

limited to individuals enrolled for the full 6- or 12-month period. We 

agree that HEDIS and CAHPS are important tools in monitoring and 

comparing performance across plans, which necessitates that the sample 

population be defined by a comparable enrollment period. However, we do 

not believe that states can assume that all beneficiaries have access 

to care on the basis of HEDIS and CAHPS results that exclude a 

significant portion of the program population from their samples.



Two states discussed the extent to which they monitored access in 

Medicaid traditional FFS compared with Medicaid managed care delivery 

systems. One state said it analyzes data on key health outcomes for 

children, such as ambulatory-sensitive hospital admissions and trends 

in health care utilization. Both states specifically noted their 

efforts to comply with federally required reporting of EPSDT 

utilization for their FFS programs. Nevertheless, most of the states in 

our sample had few or no goals regarding the number of providers 

available to FFS beneficiaries and, with the exception of federally 

required EPSDT reporting, few analyzed data related to access to 

primary care.



Similar to HHS’s view, one state noted that the report did not account 

for the fact that distinct SCHIP programs may choose approaches to 

program design and monitoring that differ from Medicaid, including 

approaches used in monitoring states’ commercial managed care plans. 

For example, this state and others reported relying on state insurance 

department licensure of health plans as the means of monitoring 

provider network adequacy, rather than imposing additional SCHIP-

specific requirements. We acknowledge in our report that states’ SCHIP 

programs may rely on different design and monitoring options than 

Medicaid. Overall, however, states with distinct SCHIP programs 

reported fewer efforts to monitor children’s access and use of services 

than in their Medicaid managed care programs.



Several states also provided technical comments, which we incorporated 

as appropriate.



As arranged with your offices, unless you release its contents earlier, 

we plan no further distribution of this report until 30 days after the 

issue date. At that time, we will send copies of this report to the 

Administrator of the Centers for Medicare & Medicaid Services and the 

Administrator of the Health Resources and Services Administration. We 

also will make copies available to others upon request. In addition, 

the report will be available at no charge on the GAO Web site at http://www.gao.gov.



If you or members of your staffs have any questions regarding this 

report, please contact me on (202) 512-7118. Other contributors to this 

report are listed in appendix V.



Kathryn G. Allen

Director, Health Care--Medicaid 

and Private Health Insurance Issues:



[End of section]



Appendix I: Medicaid HEDIS Measures Related to Service Utilization:



Four of the eight general categories of the Health Plan Employer Data 

and Information Set (HEDIS) measures for Medicaid managed care plan 

performance relate directly to beneficiary service utilization. These 

four categories include effectiveness of care, access/availability of 

care, use of services, and satisfaction with the experience of 

care.[Footnote 45] Many of the measures in these categories require 

beneficiaries to be continuously enrolled for some period, often 12 

months, in order to be assessed. Table 8 details selected HEDIS 

measures that pertain to service utilization for children and 

adolescents enrolled in Medicaid managed care programs and the length 

of continuous enrollment required.
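
To illustrate how such a continuous enrollment requirement operates in practice, the following sketch checks hypothetical enrollment records against the rule described in footnote 26: a beneficiary counts toward a 12-month measure only if enrolled in the same health plan for the full measurement year, with at most one gap in enrollment of 45 days or less. The records, dates, and function name are illustrative assumptions, not data or code used by GAO or the states.

from datetime import date, timedelta

def continuously_enrolled(spans, year_start, year_end, max_gap_days=45):
    """spans: list of (start, end) enrollment periods in a single health plan.
    Returns True if the beneficiary meets a 12-month continuous enrollment
    requirement with at most one gap of max_gap_days or less (the HEDIS
    allowance noted in footnote 26)."""
    spans = sorted(spans)
    if not spans or spans[0][0] > year_start:
        return False  # enrollment must begin by the start of the measurement year
    gaps = 0
    covered_until = spans[0][1]
    for start, end in spans[1:]:
        if start > covered_until + timedelta(days=1):  # a break in enrollment
            gap_days = (start - covered_until).days - 1
            if gap_days > max_gap_days or gaps >= 1:
                return False  # gap too long, or more than one gap
            gaps += 1
        covered_until = max(covered_until, end)
    return covered_until >= year_end  # must remain enrolled through year end

# Example: a single 30-day gap in midyear still satisfies the requirement.
spans = [(date(2001, 1, 1), date(2001, 6, 30)),
         (date(2001, 7, 31), date(2001, 12, 31))]
print(continuously_enrolled(spans, date(2001, 1, 1), date(2001, 12, 31)))  # True

Under this rule, children who change plans or who have longer interruptions in coverage drop out of the measured sample, which is why HEDIS results may not represent all children enrolled during the year.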



Table 8: Length of Medicaid Enrollment Required for Selected HEDIS 

Measures for Children’s and Adolescents’ Use of Services:



Category: Effectiveness of care; Measure name: Childhood immunization 

status; Length of continuous enrollment[A]: 12 months.



Measure name: Adolescent immunization status; Length of continuous 

enrollment[A]: 12 months.



Measure name: Cervical cancer screening; Length of continuous 

enrollment[A]: 12 months.



Measure name: Chlamydia screening; Length of continuous enrollment[A]: 

12 months.



Measure name: Prenatal care in first trimester; Length of continuous 

enrollment[A]: About 9 months prior to delivery.



Measure name: Checkups after delivery; Length of continuous 

enrollment[A]: About 2 months after delivery.



Measure name: Comprehensive diabetes care; Length of continuous 

enrollment[A]: 12 months.



Measure name: Use of appropriate medications for people with asthma; 

Length of continuous enrollment[A]: 24 months.



Measure name: Follow-up after mental illness hospitalization; Length of 

continuous enrollment[A]: 1 month after discharge.



Measure name: Antidepressant medication management; Length of 

continuous enrollment[A]: 12 months.



Measure name: Advising smokers to quit; Length of continuous 

enrollment[A]: 6 months.



Category: Access/availability of care; Measure name: Children’s access 

to primary care providers; Length of continuous enrollment[A]: 12 

months[B].



Measure name: Initiation of prenatal care; Length of continuous 

enrollment[A]: From 1 to 9 months prior to delivery[C].



Measure name: Annual dental visit; Length of continuous enrollment[A]: 

12 months.



Measure name: Availability of language interpretation services; Length 

of continuous enrollment[A]: None.



Category: Use of services; Measure name: Frequency of ongoing prenatal 

care; Length of continuous enrollment[A]: None.



Measure name: Well-child visits in the first 15 months of life; Length 

of continuous enrollment[A]: From 1 to 15 months of age[D].



Measure name: Well-child visits in the third, fourth, fifth and sixth 

year of life; Length of continuous enrollment[A]: 12 months.



Measure name: Adolescent well-care visits; Length of continuous 

enrollment[A]: 12 months.



Measure name: Inpatient utilization--general hospital/acute care; 

Length of continuous enrollment[A]: None.



Measure name: Ambulatory care; Length of continuous enrollment[A]: 

None.



Measure name: Inpatient utilization--nonacute care; Length of 

continuous enrollment[A]: None.



Measure name: Discharges and average length of stay--maternity care; 

Length of continuous enrollment[A]: None.



Measure name: Cesarean section rate; Length of continuous 

enrollment[A]: None.



Measure name: Vaginal birth after cesarean section rate; Length of 

continuous enrollment[A]: None.



Measure name: Births and average length of stay, newborns; Length of 

continuous enrollment[A]: None.



Measure name: Mental health utilization--inpatient discharges and 

average length of stay; Length of continuous enrollment[A]: None.



Measure name: Mental health utilization--percentage of members 

receiving inpatient, day/night care, and ambulatory services; Length of 

continuous enrollment[A]: None.



Measure name: Chemical dependency utilization--percentage of members 

receiving inpatient, day/night care, and ambulatory services; Length of 

continuous enrollment[A]: None.



Measure name: Outpatient drug utilization; Length of continuous 

enrollment[A]: None.



Category: Satisfaction with the experience of care; Measure name: 

Consumer Assessment of Health Plans (adults and children); Length of 

continuous enrollment[A]: 6 months.





Source: National Committee for Quality Assurance, HEDIS 2000: Technical 

Specifications (Washington, D.C.: 1999).



[A] For measures listed with a continuous enrollment requirement, HEDIS 

guidelines indicate that the managed care entity must assess on a 

measure-by-measure basis whether the measure may be reported in the 

current measurement year. Partial year reporting for the measures in 

this table was considered acceptable or possible by the HEDIS 

guidelines.



[B] Older age groups (7 to 11 years) require 24 months of enrollment.



[C] Measure requires continuous enrollment of at least 43 days prior to 

delivery but no more than 279 days.



[D] Measure requires that child is enrolled from 31 days through 15 

months of age.



[End of table]



[End of section]



Appendix II: Managed Care Plan Withdrawals from Medicaid in Four 
States:



Four states we visited--Massachusetts, Ohio, Tennessee, and Texas--had 

varying experiences in terms of the number and impact of managed care 

plan withdrawals from their Medicaid managed care programs. In some 

cases, as in Massachusetts, the changes occurred early in the states’ 

implementation of their programs and the number of plans has been 

stable in recent years; in other cases, as in Ohio and Tennessee, the 

changes in participating plans continued over time and presented 

ongoing challenges to the states in managing their programs and 

ensuring appropriate access to care for their beneficiaries. The 

proportion of Medicaid beneficiaries affected by withdrawals of 

participating managed care plans ranged from about 1 percent in Texas 

to almost 50 percent in Tennessee. Following is a brief description of 

managed care plan withdrawals in each of the four states and examples 

of some of the measures states took to minimize disruption to 

beneficiaries’ care as a result of the changes.



Massachusetts:



Health plan participation in Massachusetts’ Medicaid managed care 

program has slowly stabilized, with four plans participating in the 

program since 2000. Earlier fluctuations occurred, however, as the 

state shaped its program to limit the number of participating plans and 

as some health plans decided to consolidate or leave the market. These 

early changes in participating plans affected about 4 percent of the 

state’s Medicaid population.



Massachusetts began its current Medicaid managed care program in July 

1997 with nine participating health plans.[Footnote 46] Two of the 

health plans, created by hospital systems that had traditionally 

provided services to lower income individuals, were formed specifically 

for this program. Of the remaining seven plans participating in the 

state’s Medicaid program, many were commercially available. Within the 

first 2 years of the program, however, the number of participating 

health plans declined to five. This reduction was partially a result of 

the state’s decision to contract with fewer health plans and to provide 

each health plan with a greater volume of beneficiaries. As a result of 

this decision, contracts were not renewed with two health plans and 

approximately 42,000 beneficiaries (about 4 percent of the state’s 

Medicaid population) had to select other plans. In addition, during 

this period several health plans merged and at least one plan left the 

Massachusetts health care market altogether. State officials reported 

that some plans lost interest in participating because of Medicaid’s 

administrative and reporting requirements. Additionally, commercial 

plans found that the Medicaid benefit package included certain 

services--such as behavioral health services--that the health plans did 

not provide their other members. This meant that health plans had to 

establish networks specifically for Medicaid beneficiaries; without a 

“critical mass” of Medicaid beneficiaries, however, health plans had 

difficulty remaining financially viable in the program. When 

participating plans withdrew from the state’s Medicaid program, state 

officials said that beneficiaries enrolled in the affected plans were 

informed that the plans would no longer be participating in the program 

and were provided an opportunity to choose other plans or enroll in the 

state’s primary care case manager program.



Ohio:



Since the inception of its mandatory managed care program in 1996, Ohio 

has faced a large number of health plan withdrawals. As of January 

2002, 10 plans had completely withdrawn from program participation, 

while 3 additional plans had withdrawn from specific counties in the 

state. Over 224,000 Medicaid beneficiaries--over 15 percent of the 

state’s Medicaid population--were affected by plan withdrawals. As a 

result of these withdrawals and providers’ growing reluctance to 

participate in managed care, Ohio changed from mandatory to voluntary 

managed care enrollment in some counties and fee-for-service (FFS) in 

others. As of April 2002, Ohio had 7 managed care plans serving 

Medicaid beneficiaries in 15 counties, with mandatory enrollment in 

only 4 of the counties.



Ohio Medicaid officials expected to see some fluctuations in plan 

participation in the early years of its program. They anticipated that 

some plans would withdraw due to the state’s requirement that plans 

that did not have significant enrollment--from 10 to 15 percent of the 

eligible population--within 2 years of the program’s inception would be 

required to leave the program. Several reasons were provided for the 

number of plans that eventually withdrew from the program, including 

voluntary withdrawal and court-ordered liquidations. In many cases, 

health plans sold their Medicaid membership to other plans. State 

officials acknowledged that the relatively large number of plan 

withdrawals affected individuals’ perception of the program and led to 

changes in the state’s managed care enrollment policy, with some 

counties switching from mandatory to voluntary managed care enrollment. 

Concerns about the program’s viability and stability were increased 

when the state insurance department liquidated one Medicaid health plan 

in 1998 and some of its network providers did not receive compensation 

from the plan.



State officials did not believe that beneficiaries’ access to care was 

affected by these plan withdrawals. In cases where a health plan’s 

membership was sold to another plan, the state attempted to ensure 

continuity of care by requiring that at least 90 percent of the current 

plan’s primary care providers (PCP) were included in the provider 

network of the purchasing plan.[Footnote 47] In other cases, we were 

told, beneficiaries were notified of their health plan’s impending 

withdrawal and provided an opportunity to select another plan if 

available. If a beneficiary did not select a health plan, or there was 

no alternative plan available, then the beneficiary returned to the 

state’s FFS program. In areas with mandatory managed care enrollment, 

however, beneficiaries were not allowed to remain in FFS indefinitely; 

they were required to select another plan or be automatically assigned 

to one.



Tennessee:



In establishing its mandatory managed care program in January 1994, 

Tennessee expanded Medicaid eligibility to hundreds of thousands of 

previously uninsured individuals and enrolled them into 1 of 12 

capitated managed care plans. Four plans left the program or were sold 

from 1994 through 1999.[Footnote 48] Since 2001, plan withdrawals have 

increasingly been an area of concern, with large numbers of Medicaid 

beneficiaries affected by changes to the state’s 2 largest health 

plans. For example, in 2001, almost 580,000 beneficiaries, or 41 

percent of the state’s Medicaid population, were affected when 1 plan 

withdrew from the western and central portions of the state and a 

second plan’s contract was terminated due to solvency issues. In 

response to the first of these two withdrawals, the state took two 

actions: (1) it recruited two new health plans to join the market and 

(2) it created a self-insured plan to serve as a backup in areas of the 

state where beneficiaries could not be adequately served by other 

health plans.



As of April 2002, 10 health plans were participating in Tennessee’s 

Medicaid managed care program although 2 plans, covering 21 percent of 

the state’s Medicaid beneficiaries, were considered to be at financial 

risk. The state announced its intention in March 2002 to terminate its 

contract with 1 health plan, which would necessitate the transfer of 

approximately 135,000 beneficiaries to other health plans.[Footnote 49] 

A second plan, with over 160,000 beneficiaries, was under 

rehabilitation by the state’s insurance department.



In view of the instability of the program and participating plans, 

Tennessee has taken several steps to help ensure continuous access to 

care for Medicaid beneficiaries. In order to provide time to plan ahead 

in the event of plan withdrawals, the state’s contract with 

participating plans requires 6 months of advance notice of an intended 

withdrawal and a transition plan to assure uninterrupted care to 

beneficiaries. When plans stopped participating in the program, 

beneficiaries were either provided the option to select new health 

plans or were assigned to health plans.



Texas:



Texas began its capitated Medicaid managed care program in 1996 in four areas of the state. Since 1996, managed care has been expanded to three 

additional service areas and now exists in 46 of the state’s 254 

counties. Since the rollout of managed care began, only three plans 

have withdrawn from participation in Texas’ Medicaid managed care 

program, affecting fewer than 20,000 beneficiaries, or approximately 1 

percent of the state’s Medicaid population.[Footnote 50] Two of these 

withdrawals were from the same service delivery area, leaving three 

plans participating in that area.[Footnote 51] However, the state 

contends that prior to the withdrawals there was a saturation of health 

plans in that service delivery area. As of July 2002, 11 plans were 

participating in the Medicaid managed care program.



In one instance, a participating plan gave the state less than 3 weeks’ 

notice of its intent to leave the program. Because of the limited 

notice, beneficiaries were automatically assigned to other plans in 

order to minimize disruption in their access to care. Although these 

assignments were initially made without direct input from the affected 

beneficiaries, their prior PCP and specialist utilization patterns were 

taken into account during this assignment process and beneficiaries 

were later given an opportunity to change plans. The state paid 

particular attention to the number of complaints during these 

transition periods and did not see a dramatic change. As such, state 

officials believe that the transitions went smoothly. Texas has a 

number of other measures in place to facilitate beneficiaries’ 

enrollment in alternative plans when their plans leave the program, as 

illustrated in table 9 along with additional examples from other states 

we visited.



Table 9: Examples of Plan Withdrawal Transition Activities Conducted by 

Four State Medicaid Programs:



Type of action: Contractual requirements of managed care plans; 

Examples of action: * Massachusetts takes responsibility for notifying 

beneficiaries of the health plan’s withdrawal from the program and the 

process beneficiaries must undergo to continue to receive services; 

however, health plans must continue to provide services until the 

beneficiary is disenrolled and participating in another plan.; * Ohio’s 

contract requires the collection of monetary assurance or the 

withholding of payments from withdrawing health plans until all 

contract requirements are completed.; * Texas requires health plans to provide the state 90 days’ notice of their intention to terminate participation, Ohio requires 75 days’ notice, and Tennessee requires 6 months’ advance notice.; * Tennessee’s contract requires withdrawing 

health plans to submit transition plans to ensure uninterrupted care to 

beneficiaries..



Type of action: Notification of beneficiaries and other stakeholders; 

Examples of action: * Ohio, Tennessee, and Texas send letters to 

beneficiaries informing them of the health plan’s withdrawal. The 

letters may include a list of the other health plans, important 

telephone numbers, and actions beneficiaries must take.; * Texas 

notifies stakeholders, including the enrollment broker and other health 

plans, of the impending withdrawal. Remaining health plans in the area 

are provided with a list of PCPs that are only participating with the 

exiting plan. Additionally, the state or health plan notifies providers 

of the plan’s intention to withdraw from the program..



Type of action: Coordination between plans; Examples of action: * In 

Texas, the withdrawing plan identifies individuals with special needs 

and a dialogue between the current and future case managers begins. In 

addition, the withdrawing health plan provides instructions for 

providers on seeking authorization for continued services from the new 

health plan..



Source: GAO analysis of states’ data, December 2001.



[End of table]



[End of section]



Appendix III: Analysis of Medicaid FFS Payment Rates in Selected 
States:



Nationally, low Medicaid physician fees have been a long-standing area 

of concern because they can affect the degree to which physicians 

participate in Medicaid, and thereby affect beneficiaries’ access to 

care. The relative fees paid by different insurers--Medicare, Medicaid, 

and SCHIP--can also affect providers’ willingness to participate in 

these programs. Since many children in Medicaid remain in fee-for-

service (FFS)-based programs, we compared Medicaid fees for selected 

office visit and pediatric preventive medical care services to the 

corresponding Medicare fees. While Medicare is a federal health 

insurance program primarily for the elderly and persons with 

disabilities, some children do receive Medicare benefits and thus its 

fee schedule includes fees for pediatric medical services. Among the 13 

states we reviewed that used FFS-based delivery systems as a key care 

delivery system for Medicaid children,[Footnote 52] Medicaid fees for 

primary and preventive care ranged from 32 percent to 89 percent of 

what Medicare would pay for similar services. (See table 10.) Concerns 

with the adequacy of Medicaid physician payment levels were also 

identified in studies of Medicaid physician payment in California, 

Washington, and Maryland.[Footnote 53]



Table 10: Medicaid FFS Payment Rates, Expressed as a Percentage of 

Medicare Payments, in 13 States with Traditional FFS or Primary Care 

Case Manager Delivery Systems That Serve Children:



State[A]: Massachusetts[B]; Medicaid FFS payments as a percentage of 

Medicare FFS payments (weighted): 89.



State[A]: Arkansas; Medicaid FFS payments as a percentage of Medicare 

FFS payments (weighted): 71.



State[A]: Florida; Medicaid FFS payments as a percentage of Medicare 

FFS payments (weighted): 71.



State[A]: Texas; Medicaid FFS payments as a percentage of Medicare FFS 

payments (weighted): 71.



State[A]: Nevada; Medicaid FFS payments as a percentage of Medicare FFS 

payments (weighted): 66.



State[A]: Ohio; Medicaid FFS payments as a percentage of Medicare FFS 

payments (weighted): 64.



State[A]: Illinois[B]; Medicaid FFS payments as a percentage of 

Medicare FFS payments (weighted): 61.



State[A]: Washington; Medicaid FFS payments as a percentage of Medicare 

FFS payments (weighted): 60.



State[A]: Colorado; Medicaid FFS payments as a percentage of Medicare 

FFS payments (weighted): 57.



State[A]: Louisiana; Medicaid FFS payments as a percentage of Medicare 

FFS payments (weighted): 57.



State[A]: New York[B]; Medicaid FFS payments as a percentage of 

Medicare FFS payments (weighted): 54.



State[A]: California; Medicaid FFS payments as a percentage of Medicare 

FFS payments (weighted): 48.



State[A]: Pennsylvania; Medicaid FFS payments as a percentage of 

Medicare FFS payments (weighted): 32.





Source: GAO analysis of Medicare data and states’ data, as of December 

2001.



[A] Other study states not shown on this table include the following: 

Tennessee enrolls nearly all beneficiaries in capitated managed care, 

and therefore, we did not collect a Medicaid FFS payment schedule that 

can be compared to Medicare rates. Maryland uses an FFS-based delivery 

system for less than 5 percent of children and includes only those 

children requiring case management for rare and expensive conditions, 

or who are technology dependent. Michigan uses FFS-based care only for 

children in an eligibility category for special needs.



[B] Illinois, Massachusetts, and New York provide payment enhancements 

for some services, in addition to the regular fee for the service; 

where appropriate, these enhancements were included in the analysis.



[End of table]



Methodology for Comparison of FFS Payment Rates:



For our comparative analysis of Medicaid and Medicare FFS payments, we 

obtained fee schedules from 13 of the 16 states we reviewed, compiling 

fees for 12 medical services using selected codes from a commonly used 

procedural coding system--the standard Physicians Current Procedural 

Terminology, 4th edition (CPT 4). (See table 11.) For each state, we 

weighted the Medicaid and corresponding lowest Medicare fees[Footnote 

54] for that state by the relative utilization of the service among 

pediatricians, identified from a 1999 American Academy of Pediatrics 

survey.[Footnote 55] The sum of the weighted Medicaid fees was then 

expressed as a percentage of the sum of the Medicare payments in order 

to develop a single, weighted payment rate.
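
A minimal numerical sketch of this weighting calculation follows. The CPT 4 codes are drawn from table 11, but the fee amounts and utilization weights are hypothetical placeholders rather than the state fee schedules or American Academy of Pediatrics survey data used in the actual analysis; where a state has more than one Medicare locality rate for a code, the lowest rate is used, as described above.

# Hypothetical illustration of the weighted Medicaid-to-Medicare comparison.
utilization_weights = {   # relative utilization among pediatricians (assumed)
    "99213": 0.40,        # established patient office visit, 15 minutes
    "99392": 0.35,        # preventive visit, established patient, 1 to 4 years
    "99203": 0.25,        # new patient office visit, 30 minutes
}
medicaid_fees = {"99213": 24.00, "99392": 35.00, "99203": 41.00}  # assumed
medicare_fees_by_locality = {                                     # assumed
    "99213": [45.00, 48.50],
    "99392": [62.00, 65.75],
    "99203": [72.00, 76.25],
}

def weighted_payment_ratio(medicaid, medicare_by_locality, weights):
    """Medicaid FFS payments as a percentage of Medicare payments, with each
    CPT code weighted by its relative utilization and the lowest Medicare
    locality rate used where a state has more than one rate."""
    weighted_medicaid = sum(weights[c] * medicaid[c] for c in weights)
    weighted_medicare = sum(weights[c] * min(medicare_by_locality[c])
                            for c in weights)
    return 100.0 * weighted_medicaid / weighted_medicare

print(round(weighted_payment_ratio(medicaid_fees, medicare_fees_by_locality,
                                   utilization_weights)))  # 56 percent with these placeholder values

Table 10 reports the result of this type of computation for each state, applied to the full set of 12 services listed in table 11.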



Table 11: CPT 4 Codes Used in Comparing Medicaid and Medicare Fees:



CPT 4 code: Office or other outpatient visit.



CPT 4 code: 99201; Description: Office or other outpatient visit: New 

patient, 10 minute visit.



CPT 4 code: 99202; Description: Office or other outpatient visit: New 

patient, 20 minute visit.



CPT 4 code: 99203; Description: Office or other outpatient visit: New 

patient, 30 minute visit.



CPT 4 code: 99213; Description: Office or other outpatient visit: 

Established patient, 15 minute visit.



CPT 4 code: 99214; Description: Office or other outpatient visit: 

Established patient, 25 minute visit.



CPT 4 code: Preventive medical services.



CPT 4 code: 99381; Description: Preventive medicine visit: New patient, under 1 year.

CPT 4 code: 99382; Description: Preventive medicine visit: New patient, 1 to 4 years.

CPT 4 code: 99383; Description: Preventive medicine visit: New patient, 5 to 11 years.

CPT 4 code: 99391; Description: Preventive medicine visit: Established patient, under 1 year.

CPT 4 code: 99392; Description: Preventive medicine visit: Established patient, 1 to 4 years.

CPT 4 code: 99393; Description: Preventive medicine visit: Established patient, 5 to 11 years.

CPT 4 code: 99394; Description: Preventive medicine visit: Established patient, 12 to 17 years.



Source: CPT 4.



[End of table]



[End of section]



Appendix IV: Comments from the Department of Health and Human Services:



DEPARTMENT OF HEALTH AND HUMAN SERVICES	

Office of Inspector General:



Washington, D.C. 20201:



DEC 10 2002:



Ms. Kathryn G. Allen:



Director, Health Care - Medicaid and Private Health Insurance Issues:

United States General Accounting Office:

Washington, D.C. 20548:



Dear Ms. Allen:



Enclosed are the department’s comments on your draft report entitled, 

“Medicaid and SCHIP: States Use Varying Approaches to Monitor 

Children’s Access to Care.” The comments represent the tentative 

position of the department and are subject to reevaluation when the 

final version of this report is received.



The department also provided several technical comments directly to 

your staff.



The department appreciates the opportunity to comment on this draft 

report before its publication.



Sincerely,



Janet Rehnquist: 

Inspector General:

Signed by Janet Rehnquist:



Enclosure:



The Office of Inspector General (OIG) is transmitting the department’s 

response to this draft report in our capacity as the department’s 

designated focal point and coordinator for General Accounting Office 

reports. The OIG has not conducted an independent assessment of these 

comments and therefore expresses no opinion on them.



Comments of the Department of Health and Human Services on the General 

Accounting Office’s Draft Report, “Medicaid and SCHIP: States Use 

Varying Approaches to Monitor Children’s Access to Care” (GAO-03-222):



The Department of Health and Human Services (department) appreciates 

the opportunity to comment on this draft report.



Medicaid Managed Care:



The department would like to note that on June 14, 2002, the Centers 

for Medicare and Medicaid Services (CMS) published a final rule 

implementing the Medicaid managed care provisions of the Balanced 

Budget Act of 1997. The regulation provides clear guidance for states 

on access standards and monitoring. Prior to this new regulation, there 

were no specific access standards for managed care. With the new 

regulation, states will have to develop a quality strategy that 

includes establishing access standards for network adequacy and 

timeliness of access to care. For network adequacy, the standards must 

take into account anticipated enrollment; expected utilization of 

services; numbers and types of providers needed; number of providers 

not accepting new patients; geographic availability; and physical 

access for enrollees with disabilities. The regulation also creates new 

standards for direct beneficiary access to women’s health specialists 

and second opinions, as well as timeliness of access to services. The 

regulation also directs states to develop and enforce standards for 

coordination and continuity of care for primary care and for enrollees 

with special health care needs. Also, states must set standards for 

coverage and authorization of services, including timeframes for 

authorization decisions.



In addition to establishing explicit access standards, the new 

regulation makes clear the responsibility of the state to monitor, on 

an ongoing basis, plans’ compliance with these standards. There will 

also be increased expectations on state monitoring, and clear guidance 

on the requirements for an External Quality Review (EQR) of plans are 

expected to be published in the near future. As with access standards, 

current requirements for EQR are minimal. Section 4705 of the Balanced 

Budget Act of 1997 (BBA) directed CMS to develop standards for an 

“external independent review ... of the quality outcomes and timeliness 

of, and access to, the items and services for which the organization is 

responsible under the contract.” The CMS has already published detailed 

protocols for these reviews:



(see <http://www.cms.hhs.gov/medicaid/managedcare/mcegrhmp.asp>).



Medicaid Fee-for-Service (FFS):



The report indicates that low physician reimbursement rates under 

Medicaid FFS have the potential to reduce provider participation in the 

program, which can then negatively impact beneficiary access to 

providers. The report further indicates that states are doing little to 

monitor service utilization. However, as the report also notes, states 

have the responsibility to ensure sufficient access to services. 

Because states have the discretion to set reimbursement levels in 

response to particular program goals and within budgetary constraints, this has permitted states to increase payment rates in 

those geographic areas and for those particular specialties where 

access has been demonstrated to be a problem.



As the report points out, there is a lack of data to quantify whether 

there is or is not an access problem in Medicaid FFS. We note, however, 

that enrollment in Medicaid managed care continues to increase so more 

children are being covered under managed care arrangements where 

assurances of adequate access to care is required through appropriate 

documentation.



State Children’s Health Insurance Program (SCHIP):



The report concludes that separate SCHIP programs “did significantly less in their distinct SCHIP programs in terms of setting requirements for, or monitoring, participating providers or beneficiary service use than they did for their Medicaid programs.” In part, GAO seems to miss the key difference in design and intent by Congress between SCHIP and Medicaid. The SCHIP allows states to have the flexibility to design 

programs that mirror private insurance and rely on private insurance 

mechanisms to assure access and quality of care. Rather than laying out 

specific requirements, title XXI is built on accountability by states 

by requiring that each state describe their strategic objectives, 

performance goals and performance measures. The strategic objectives, 

by law, must relate to increasing the extent of creditable health 

coverage among targeted low-income children and other low-income 

children. As a result, states may not have comparable requirements 

between SCHIP and Medicaid for monitoring of provider participation and 

beneficiary service utilization. Rather, states are monitoring 

enrollment into the program and health access and outcomes. Every state 

has established a set of performance goals and measures, which are 

generally related to enrollment, access and outcome.



Many states have set performance goals related to quality and 

satisfaction of care. Information from state plans and annual reports 

in July 2001 indicated that only 5 states do not use any Health Plan 

Employer Data Information Sets (HEDIS) measures. All other states use 

all or part of the HEDIS set of measures. Most states collect data on 

immunizations and well child visits. In addition, CMS is working with 

states to establish the Performance Measurement Partnership Project, 

which will create a uniform set of indicators across Medicaid and SCHIP to improve performance monitoring across all states. The SCHIP focuses 

on outcomes, as opposed to process, and the core set of performance 

measures will allow us to measure and compare access and outcomes 

across states.



Specific Comments:



The department notes that the draft report places Primary Care Case 

Managers (PCCMs) in the fee-for-service category, which is accurate 

from a reimbursement standpoint. However, we believe that it is more 

appropriate to categorize PCCMs as a managed care delivery system, as 

participating primary care physicians are expected to coordinate needed 

care and services as well as act as a gatekeeper/referral mechanism for 

specialty care. Accordingly, states are increasing their oversight of and 

expectations for PCCM programs. Two examples are Massachusetts and 

Florida.



Additionally, the final rule implementing the Medicaid managed care 

provisions of the BBA does include at section 438.6(k) specific 

contract requirements for PCCMs that includes adequate hours of 

operation, restricting enrollment to recipients who reside sufficiently 

near one of the manager’s delivery sites and providing for arrangements 

with a sufficient number of physicians and other practitioners to 

ensure that services are furnished promptly and without compromise to 

quality of care.



[End of section]



Appendix V: GAO Contact and Staff Acknowledgments:



GAO Contact:



Carolyn L. Yocom, (202) 512-4931:



Acknowledgments:



Catina Bradley, Karen Doran, Laura Sutton Elsberg, Mary Giffin, 

Michelle Rosenberg, and Ann Tynan made key contributions to this 

report.



[End of section]



Related GAO Products:



Mental Health Services: Effectiveness of Insurance Coverage and Federal 

Programs for Children Who Have Experienced Trauma Largely Unknown. GAO-

02-813. Washington, D.C.: August 22, 2002.



Children’s Health Insurance: Inspector General Reviews Should Be 

Expanded to Further Inform the Congress. GAO-02-512. Washington, D.C.: 

March 29, 2002.



Medicaid and SCHIP: States’ Enrollment and Payment Policies Can Affect 

Children’s Access to Care. GAO-01-883. Washington, D.C.: September 10, 

2001.



Medicaid: Stronger Efforts Needed to Ensure Children’s Access to Health 

Screening Services. GAO-01-749. Washington, D.C.: July 13, 2001.



Medicaid Managed Care: States’ Safeguards for Children With Special 

Needs Vary Significantly. GAO/HEHS-00-169. Washington, D.C.: September 

29, 2000.



Oral Health: Factors Contributing to Low Use of Dental Services by Low-

Income Populations. GAO/HEHS-00-149. Washington, D.C.: 

September 11, 2000.



Medicaid and SCHIP: Comparisons of Outreach, Enrollment Practices, and 

Benefits. GAO/HEHS-00-86. Washington, D.C.: April 14, 2000.



Oral Health: Dental Disease Is a Chronic Problem Among Low-Income 

Populations. GAO/HEHS-00-72. Washington, D.C.: April 12, 2000.



Children’s Health Insurance Program: State Implementation Approaches 

Are Evolving. GAO/HEHS-99-65. Washington, D.C.: May 14, 1999.



Medicaid Managed Care: Challenge of Holding Plans Accountable Requires 

Greater State Effort. GAO/HEHS-97-86. Washington, D.C.: 

May 16, 1997.






FOOTNOTES



[1] A provider may be a physician, a group of physicians practicing 

together, or an outpatient clinic with physician services. 



[2] FFS systems include traditional FFS, in which a provider bills the 

program for services provided to an eligible beneficiary, and PCCM 

systems, in which a physician, physician group practice, or similar 

entity contracts with the state to locate, coordinate, and monitor 

primary health services for Medicaid beneficiaries for a nominal 

monthly, per capita case management fee (usually around $3). 



[3] See related GAO products listed at the end of this report. 



[4] States can take three approaches in designing their SCHIP programs: 

(1) expand Medicaid, (2) construct separate child health programs 

distinct from Medicaid, or (3) use a combination of both approaches. 



[5] For purposes of this report, PCPs are usually physicians trained in 

internal medicine, pediatrics, family medicine, or obstetrics and 

gynecology who participate in PCCM or managed care programs in Medicaid 

or SCHIP. 



[6] 67 Fed. Reg. 40989 (2002).



[7] The SCHIP statute allows a state to expand eligibility to 200 

percent of the poverty level or up to 50 percentage points above its 

Medicaid eligibility standard as of March 31, 1997. As of January 2002, 

states’ upper income eligibility thresholds for SCHIP ranged from 133 

to 350 percent of the federal poverty level. 



[8] Throughout this report, SCHIP beneficiaries enrolled in Medicaid 

expansion programs are included in the discussion of Medicaid.



[9] Annual allotments are made to states for use over a 3-year period. 

For SCHIP annual allotments that remain unspent after 3 years, the 

Secretary of HHS is required to determine an appropriate procedure for 

redistributing any unused SCHIP funds to states that have exhausted 

their allotments.



[10] We included PCCMs as FFS-based arrangements because participating 

providers are predominately paid on a FFS basis. Thus, throughout this 

report, the term managed care only refers to capitated managed care 

arrangements.



[11] See 42 U.S.C. § 1396a(a)(30); 42 C.F.R § 438.2.



[12] See 42 U.S.C. §1396a(a)(8).



[13] The new Medicaid managed care rules have more detailed 

requirements for states than in the past, such as requiring assurances 

from participating plans concerning the availability of services, 

adequate capacity and services, coordination and continuity of care, 

and coverage and authorization of services.



[14] Implementing managed care service delivery by amending a state’s 

Medicaid plan has been an option for states since passage of the 

Balanced Budget Act of 1997. 



[15] The freedom of choice waiver is established by section 1915(b) of 

the Social Security Act and is set forth at 42 U.S.C. §1396n(b). The 12 

states that we reviewed with program waivers were Arkansas, California, 

Colorado, Florida, Louisiana, Michigan, New York, Ohio, Pennsylvania, 

Tennessee, Texas, and Washington. 



[16] The demonstration waiver is set forth at 42 U.S.C. § 1315(a). In 

addition to comprehensive waivers, states can also use section 1115 

waivers for specific populations or services, such as pharmacy or 

extending coverage to parents. Four of the 15 states--California, 

Colorado, Florida, and Illinois--have noncomprehensive section 1115 

waivers.



[17] See 42 U.S.C. § 1397bb.



[18] A state’s SCHIP evaluation was required to address several areas, 

including (1) the quality of health coverage provided, (2) choices of 

heath benefits coverage, (3) activities to coordinate SCHIP with other 

public and private programs, (4) changes in trends in the states that 

affect the provision of health insurance, and (5) recommendations for 

improving SCHIP. 



[19] See U.S. General Accounting Office, Medicaid Managed Care: 

Challenge of Holding Plans Accountable Requires Greater State Effort, 

GAO/HEHS-97-86 (Washington, D.C.: May 16, 1997). 



[20] Federal law requires that EPSDT include services that are 

necessary to correct or ameliorate defects and physical and mental 

illnesses and conditions discovered through screening, regardless of 

whether those services are covered by the state’s Medicaid plan. 



[21] The components of an EPSDT health screening include a 

comprehensive health and developmental history, a comprehensive 

unclothed physical exam, appropriate immunizations, laboratory tests 

(including a blood lead-level assessment), and health education. 



[22] See U.S. General Accounting Office, Medicaid: Stronger Efforts 

Needed to Ensure Children’s Access to Health Screening Services, 

GAO-01-749 (Washington, D.C.: July 13, 2001).



[23] NCQA, an independent foundation, has managed HEDIS since 1992. 

Originally designed for private employers as purchasers of health care, 

it has been adapted for public purchasers, regulators, and consumers, 

including Medicaid.



[24] The four general HEDIS categories that directly relate to service 

utilization are effectiveness of care, access/availability of care, use 

of services, and satisfaction with the experience of care. The 

remaining four general categories are health plan stability, cost of 

care, informed health care choices, and health plan descriptive 

information.



[25] Tennessee and Texas did not use HEDIS to assess plan performance. 



[26] Although some HEDIS measures have a 12-month continuous enrollment 

requirement, individuals with one gap in enrollment of up to 45 days or 

less can be included in the sample. However, to be included, 

individuals must remain with the same health plan after a break in 

enrollment.



[27] According to one report, at least one state--Iowa--analyzed HEDIS 

measures for individuals that were continuously enrolled in Medicaid 

for less than 12 months. See NCQA, Medicaid HMO and Fee-For-Service 

Comparison Strategy: Methodological Issues (Washington, D.C.: NCQA, 

n.d.). http://www.ncqa.org/Programs/qsg/medicaidcomparison.html 

(downloaded July 8, 2002).



[28] 42 C.F.R. § 434.34.



[29] Tennessee opted to use a state-designed beneficiary satisfaction 

survey rather than CAHPS. In most cases, the states we reviewed 

administered CAHPS directly or through the use of an independent 

contractor. Three states--Colorado, Illinois, and Pennsylvania--

required participating plans to administer CAHPS.



[30] CAHPS was developed in 1995 by the federal AHRQ to provide 

information to help beneficiaries compare health plans. 



[31] Among the nine states in our sample that reported their response 

rates, the response rates for the CAHPS survey of families with 

children ranged from 27 percent in Nevada to 85 percent in Illinois.



[32] Maryland Department of Health and Mental Hygiene, HealthChoice 

Evaluation, Final Report and Recommendations (Washington, D.C.: Jan. 

15, 2002).



[33] See U.S. General Accounting Office, Medicaid and SCHIP: States’ 

Enrollment and Payment Policies Can Affect Children’s Access to Care, 

GAO-01-883 (Washington, D.C.: Sept. 10, 2001).



[34] These nine states were California, Colorado, Illinois, Louisiana, 

Nevada, New York, Ohio, Texas, and Washington. The share of Medicaid-

eligible children participating in these states’ traditional FFS 

programs ranged from a low of about 30 percent in California and 

Colorado to about 90 percent in Illinois and Louisiana. (See table 2 

for more detail by state.)



[35] GAO-01-749.



[36] Since 1995, Ohio has used HEDIS primary care access measures for 

beneficiaries in its traditional FFS program. 



[37] The seven states we reviewed with PCCM programs were Arkansas, 

Colorado, Florida, Louisiana, Massachusetts, Pennsylvania, and Texas. 

Arkansas and Louisiana do not have Medicaid managed care programs other 

than PCCM, whereas the other five states do. The share of Medicaid-

eligible children participating in the seven states’ PCCM programs 

ranged from a low of 12 percent in Louisiana to a high of 100 percent 

in Arkansas. (See table 2 for more detail.)



[38] Some states’ contracts with PCCMs may include a general 

requirement that PCCMs provide care on a “timely basis.”



[39] Targeting such profiles and analyses to PCPs with a certain 

minimum volume of beneficiaries allows more meaningful data comparisons 

with the program and other PCPs than would be possible for PCPs with 

only a few beneficiaries. 



[40] These states were Arkansas, Illinois, Louisiana, Maryland, 

Massachusetts, Nevada, Ohio, Tennessee, and Washington. 



[41] These states were California, Colorado, Florida, Michigan, New 

York, Pennsylvania, and Texas. With the exception of Florida, all of 

the states used managed care delivery systems for all of their SCHIP 

programs; Florida enrolled a small number of SCHIP children into a PCCM 

program.



[42] To achieve this purpose, the state contacted a sample of 50 to 200 

providers for each plan participating in Medicaid and SCHIP, twice a 

year. 



[43] Florida’s distinct SCHIP program uses the Ambulatory Care Groups 

(ACG) Case-Mix Adjustment System to assign beneficiaries to 1 of 53 ACG 

categories for the purpose of this analysis.



[44] Although these four states used HEDIS in both their separate SCHIP 

and Medicaid programs, only New York reported comparing the results 

across the two programs.



[45] The remaining four general categories are health plan stability, 

cost of care, informed health care choices, and health plan descriptive 

information.



[46] Prior to 1997, Massachusetts had a managed care program with 

voluntary enrollment for most Medicaid beneficiaries. As many as 13 

health plans participated in the state’s Medicaid managed care program 

during the early 1990s. However, since enrollment was not mandatory, 

only a small number of Medicaid beneficiaries joined health plans. 

These low enrollment figures, coupled with health plan consolidations, 

resulted in some plans leaving the Medicaid program. 



[47] Of the beneficiaries affected by plan withdrawals in Ohio from 

1996 to January 2002, nearly half were involved in withdrawals that 

were the result of a plan selling its membership to another plan.



[48] Plan withdrawals during this period affected approximately 105,000 

beneficiaries, about 7 percent of the state’s Medicaid population. 



[49] According to the state, the decision to terminate the contract was 

based on problems including the plan’s financial solvency and failure 

to pay accurate and timely claims. As of May 1, 2002, the state was 

working with the health plan in an attempt to resolve these problems. 



[50] Over 50,000 additional beneficiaries were affected when their 

health plan was acquired by another participating health plan. 



[51] There are seven service areas, each consisting of multiple 

counties, in Texas’ Medicaid capitated managed care program. Health 

plans are contracted by service area, with some health plans having 

contracts in multiple service areas. 



[52] Tennessee enrolls nearly all beneficiaries in managed care; 

therefore, we did not collect a Medicaid FFS payment schedule that could 

be compared to Medicare rates. 



[53] See PricewaterhouseCoopers, Comparing CPT Code Payments for Medi-

Cal and Other California Payers (Oakland, Calif.: June 2001) and 

University of Washington, State Primary Care Provider Study, Health 

Policy Analysis Program (Seattle, Wash.: February 2001). A study was 

also conducted in Maryland because, even though most beneficiaries are 

served through managed care, the state Medicaid program’s FFS payment 

rates for some groups of beneficiaries are considered to affect what 

managed care plans pay physicians. See State of Maryland Department of 

Health and Mental Hygiene, Report on the Maryland Medical Assistance 

Program and Maryland Children’s Health Program - Reimbursement Rates 

Fairness Act (Baltimore, Md.: September 2001).



[54] States can have more than one Medicare payment rate for a service, 

varying by locality. 



[55] Monique Morris and Suk-fong Tang, Pediatric Service Utilization, 

Fees and Managed Care Arrangements: 2001 Report Based on 1999 Data (Elk 

Grove Village, Ill.: American Academy of Pediatrics, 2001).



GAO’s Mission:



The General Accounting Office, the investigative arm of Congress, 

exists to support Congress in meeting its constitutional 

responsibilities and to help improve the performance and accountability 

of the federal government for the American people. GAO examines the use 

of public funds; evaluates federal programs and policies; and provides 

analyses, recommendations, and other assistance to help Congress make 

informed oversight, policy, and funding decisions. GAO’s commitment to 

good government is reflected in its core values of accountability, 

integrity, and reliability.



Obtaining Copies of GAO Reports and Testimony:



The fastest and easiest way to obtain copies of GAO documents at no 

cost is through the Internet. GAO’s Web site (www.gao.gov) contains 

abstracts and full-text files of current reports and testimony and an 

expanding archive of older products. The Web site features a search 

engine to help you locate documents using key words and phrases. You 

can print these documents in their entirety, including charts and other 

graphics.



Each day, GAO issues a list of newly released reports, testimony, and 

correspondence. GAO posts this list, known as “Today’s Reports,” on its 

Web site daily. The list contains links to the full-text document 

files. To have GAO e-mail this list to you every afternoon, go to 

www.gao.gov and select “Subscribe to daily E-mail alert for newly 

released products” under the GAO Reports heading.



Order by Mail or Phone:



The first copy of each printed report is free. Additional copies are $2 

each. A check or money order should be made out to the Superintendent 

of Documents. GAO also accepts VISA and MasterCard. Orders for 100 or 

more copies mailed to a single address are discounted 25 percent. 

Orders should be sent to:



U.S. General Accounting Office

441 G Street NW, Room LM

Washington, D.C. 20548:



To order by phone:

Voice: (202) 512-6000:

TDD: (202) 512-2537:

Fax: (202) 512-6061:



To Report Fraud, Waste, and Abuse in Federal Programs:



Contact:



Web site: www.gao.gov/fraudnet/fraudnet.htm

E-mail: fraudnet@gao.gov



Automated answering system: (800) 424-5454 or (202) 512-7470:



Public Affairs:



Jeff Nelligan, Managing Director, NelliganJ@gao.gov, (202) 512-4800

U.S. General Accounting Office, 441 G Street NW, Room 7149

Washington, D.C. 20548: