The Status of Immunization Measurement and Feedback in the United States

Charles W. LeBaron, MD; Mehran Massoudi, PhD, MPH; John Stevenson, MA; Hoang Dang, BS; Bridget Lyons, MPH

From the National Immunization Program, Centers for Disease Control and Prevention, Atlanta, Ga.


Arch Pediatr Adolesc Med. 2000;154(8):832-836. doi:10.1001/archpedi.154.8.832.

Background  A large body of scientific and programmatic data has demonstrated that provider measurement and feedback raises immunization coverage. Starting in 1995, Congress required that all states measure childhood immunization coverage in all public clinics, and federal grant guidelines encourage private practice measurements.

Objectives  To determine state immunization measurement rates and examine risk factors for high rates.

Methods  Review of 1997 state reports, with correlation of measurement rates to birth cohort and provider numbers, public/private proportions, and vaccine distribution systems.

Results  Of the 9505 public clinics, 48% were measured; 4 states measured all clinics; 29 measured a majority. Measurement rates were highest for Health Department clinics (67%), lower for community/migrant health centers (39%), and lowest for other clinics (22%). Rates were highly correlated among categories of clinics (r>+0.308, P<.03), and the fewer the clinics, the higher the measurement rates (r = −0.351, P = .01), but other factors were not significant. Of the 41,378 private practices, 6% were measured; no state measured all its practices; 1 measured a majority. Private practice measurement rates were not correlated to public clinic measurement rates or other factors examined. Of the 50,883 total providers, 14% were measured; no state measured all providers; 2 measured a majority. A trend toward higher measurement rates was found in states with fewer providers (r = −0.266, P = .06).

Conclusions  Three years after the congressional mandate, only a minority of public clinics and very few private practices had their immunization coverage measured. Greater efforts will be needed to assure implementation of the intervention.

IMMUNIZATION coverage rose from 53% to 89% over 7 years in Georgia public clinics following implementation of a measurement and feedback intervention involving low direct costs ($80,000 per year), which was characterized as Assessment, Feedback, Incentives, and eXchange of information (AFIX).1 Other states and cities adapted the Georgia AFIX model to local conditions, achieving comparable results at comparable costs,2,3 and successful efforts have been made to export the approach to private practices.4

Based on these data and a substantial body of scientific literature,5-8 annual provider-based measurement and feedback received strong formal endorsement from a wide range of public and private organizations. It was made one of the Standards for Pediatric Immunization Practices by the American Academy of Pediatrics, Elk Grove Village, Ill, and the American Medical Association, Chicago, Ill.9 It was the subject of a special recommendation by the Advisory Committee on Immunization Practices.10 It was strongly recommended by the Task Force on Community Preventive Services.11 It was one of the key strategies identified by the National Vaccine Advisory Committee, Washington, DC.12

Starting in 1995, Congress directed the Centers for Disease Control and Prevention (CDC) to "ensure that all states receiving IAP [federal immunization] funds conduct annual provider site assessments in all public clinics, using CDC-approved methodology."13 Federal immunization grant application guidelines (§8a) additionally "encourage periodic private provider assessments." In part because provider denominators were not readily available, the extent to which measurements actually took place was not systematically monitored. Recently, we used state and other data to obtain provider denominators for 1997,14 and we now report the first evaluation of the implementation of provider-based immunization measurements in the United States.

DATA SOURCES

All data were for 1997 and aggregated by state (District of Columbia treated as a state). Provider denominators were estimated using Vaccines for Children (VFC) and National Immunization Survey data, by methods previously described.14 Briefly, VFC supplies health care providers with federally purchased vaccines for administration to children who are uninsured, Medicaid eligible, Native American, or Alaska Native. Vaccines for Children instructions define a provider site as a health care facility at which routine vaccinations are administered to children and where medical records are kept. From each state's annual VFC report, we abstracted counts of sites that were enrolled in VFC. To estimate the total number of provider sites in each state, VFC counts were adjusted upward based on state-specific estimates of the proportion of providers who were enrolled in VFC according to the National Immunization Survey.
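As a rough illustration (not the authors' actual procedure), the adjustment described above amounts to dividing each state's count of VFC-enrolled sites by the National Immunization Survey estimate of the proportion of providers enrolled in VFC; the function name, variable names, and numbers below are assumptions made for the sketch.

```python
# Minimal sketch of the provider-denominator estimate described above.
# This illustrates the adjustment only; it is not the authors' code.

def estimate_total_sites(vfc_enrolled_sites: int, proportion_enrolled_in_vfc: float) -> float:
    """Inflate the VFC site count to approximate all provider sites in a state."""
    if not 0 < proportion_enrolled_in_vfc <= 1:
        raise ValueError("proportion enrolled must be in (0, 1]")
    return vfc_enrolled_sites / proportion_enrolled_in_vfc

# Hypothetical example: 400 VFC-enrolled sites with an estimated 80% of
# providers enrolled in VFC implies roughly 500 provider sites statewide.
print(estimate_total_sites(400, 0.80))  # 500.0
```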

Public/Private Proportions

The National Immunization Survey also furnished information on the proportion of infants vaccinated in the public/private sector. We assigned children with mixed sector vaccination histories (25%) to the private sector as their likely medical home.
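A minimal sketch of this sector-assignment rule follows, assuming a simple set of sector labels per child's vaccination history; the labels and function name are illustrative, not the National Immunization Survey coding.

```python
# Illustrative sketch of the assignment rule above: any child with a mixed
# public/private vaccination history is counted in the private sector, which
# is treated as the likely medical home. Labels are assumptions.

def assign_sector(vaccination_sectors):
    """Return 'private' if any vaccination occurred in the private sector, else 'public'."""
    return "private" if "private" in vaccination_sectors else "public"

assert assign_sector({"public", "private"}) == "private"  # mixed history -> private
assert assign_sector({"public"}) == "public"
```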

Measurement Rates

From each state's annual report to CDC, we abstracted the number of provider sites whose immunization coverage had been measured by CDC-approved methods. We faxed these data to each state for review and correction, followed up by telephone, and then faxed the final data back for confirmation. We restricted our study to measurements performed by state immunization programs and did not attempt to examine the extent to which private bodies, such as managed care organizations, measured immunization rates among providers.

DATA ANALYSIS

All analyses were performed using the computer program SAS 6.12 (SAS Institute, Cary, NC). The unit of analysis was the state, and the outcome was the rate of measurement. We used the Spearman rank correlation test to examine the correlation of measurement rates to the number of providers, size of the birth cohort, and public/private sector proportions. We used the Wilcoxon rank sum test to examine the association of measurement rates with different state vaccine distribution systems: (1) Universal (state distribution of all Advisory Committee on Immunization Practices–approved routine childhood vaccines to all providers), (2) VFC full (VFC available to both public and private providers), and (3) VFC public (VFC available only to public providers).
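For readers who want to reproduce this kind of state-level analysis, the sketch below shows the corresponding tests in Python's SciPy rather than SAS 6.12, which was used for the analyses reported here; all values in the sketch are hypothetical placeholders, not study data.

```python
# Hedged sketch of the state-level analysis described above, using SciPy in
# place of SAS 6.12. All values are hypothetical placeholders, not study data.
from scipy.stats import spearmanr, ranksums

# One record per state: measurement rate and number of provider sites.
measurement_rate = [0.48, 0.67, 0.22, 0.90, 0.35]
n_provider_sites = [1200, 300, 2500, 150, 1800]

# Spearman rank correlation of measurement rates with provider counts.
rho, p_value = spearmanr(measurement_rate, n_provider_sites)
print(f"Spearman r = {rho:.3f}, P = {p_value:.2f}")

# Wilcoxon rank sum test comparing measurement rates between two vaccine
# distribution systems (e.g., universal vs VFC public); groupings illustrative.
universal_rates = [0.70, 0.55, 0.62]
vfc_public_rates = [0.40, 0.35, 0.48]
statistic, p = ranksums(universal_rates, vfc_public_rates)
print(f"Wilcoxon rank sum statistic = {statistic:.2f}, P = {p:.2f}")
```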

PUBLIC CLINIC MEASUREMENTS

Of the 9505 total clinics, 48% were measured; 4 states (8%) measured all clinics; 29 (57%), a majority (Table 1). The rate of measurement was highest (67%) for Health Department clinics (22 states measured all; 38, a majority), lower (39%) for community/migrant health centers (14 states measured all; 24, a majority), and lowest (22%) for other clinics (5 states measured all; 12, a majority). Measurement rates were highly correlated among categories of clinics (r>+0.308, P<.03). The fewer the clinics, the higher the rate of measurement (r = −0.351, P = .01), but measurement rates were not associated with size of the birth cohort (P = .69), proportion of children vaccinated in the public sector (P = .42), or vaccine distribution system (P = .50).

Table 1. Public Clinic Immunization Measurement Rates, 1997
PRIVATE PRACTICE MEASUREMENTS

Of the 41,378 private practices, 2436 (6%) were measured; no state measured all its practices; 1 (2%) measured a majority (Table 2). Rates of private practice measurement were not associated with rates of public clinic measurement (P = .19), number of practices (P = .58), or the other factors examined (P>.18).

Table 2. Private Practice Immunization Measurement Rates, 1997
TOTAL (PUBLIC AND PRIVATE) PROVIDER MEASUREMENTS

Of the 50,883 total providers, 6957 (14%) were measured; no state measured all; 2 (4%) measured a majority. A trend toward higher measurement rates was found in states with fewer providers (r = −0.266, P = .06), but rates were not associated with the other factors examined (P>.12).

COMMENT

Three years after Congress mandated measurement of immunization coverage in all public clinics, 48% of public clinics were measured, and 4 states reported complete compliance. Since measurement of a majority of clinics may be sufficient to achieve coverage impact, as many as 29 states may have exposed their public sector children to some potential benefit from the intervention.

Higher clinic measurement rates correlated with fewer public clinics but not with birth cohort size or public sector proportion, suggesting that low burden of effort may have been more important to implementation than impact potential or access to federal resources (size of the cohort is a major determinant of federal immunization grant funding). Reinforcing this notion, measurement rates were highest (67%) for Health Department clinics, probably because of easier access of immunization program staff to sites under direct government control. Nevertheless, the high correlation of measurement rates among different clinic categories suggests that once an organizational commitment was made to measure clinic coverage, the intervention tended to be carried out across the public sector.

The potential impact of public clinic measurements should not be underestimated: 43% of all children were vaccinated entirely (18%) or in part (25%) in the public sector in 1997, and numerous studies have suggested that children vaccinated in the public sector have lower immunization rates than those vaccinated in the private sector.15-18 Demonstrated success in implementing measurement and feedback in the public sector—where sites are fewer and easily enumerated, where large numbers of undervaccinated children are more likely to be found, and where a congressional mandate exists—would seem to be the necessary prelude to expansion of the intervention to the more complicated environment of the private sector.

Apparently, most state programs agreed, since only 6% of the nation's private practices were measured, and just 1 state measured a majority of its private practices, despite the fact that 82% of the birth cohort likely had its medical home in the private sector, and private practice measurements were encouraged by federal grant guidelines. Actual exposure of private sector children to the intervention may have been higher in certain states; for example, in 1996 one state measured all its private providers, found that about 30% of practices administered about 80% of vaccinations, and in 1997 concentrated on these sites.4 However, data do not suggest that many states followed this targeted approach.

Our study demonstrates that the weight of a large body of scientific evidence, repeated demonstrations of "real-world" effectiveness, widespread support from medical bodies and advisory panels, and even a congressional mandate may not be sufficient to assure swift generalization of a low-cost intervention, particularly in the absence of monitoring. Based on these findings and advice from states and private provider groups, CDC has been considering an initiative to improve measurement rates across the nation with features that include intervention-specific funding, use of VFC program data to help focus efforts on providers who vaccinate large numbers of children, simplification of measurement methods, and timely monitoring and feedback of measurement rates. The last factor may be most important, given the intervention's publicized premise of "What gets measured gets done."

Our study focused on implementation of measurement and feedback during 1 year and thus could not examine the impact of the intervention on raising immunization rates over time, though this is clearly the ultimate objective of any intervention monitoring system. Furthermore, though we were able to calculate provider site measurement rates, we could not determine the proportion of children exposed to the intervention, a more fundamental index of success. Finally, we relied on state self-reporting, which seems more likely to overestimate than underestimate measurement rates in a context of required compliance with a fiscal mandate.

Previous studies have suggested that measurement and feedback can initially produce dramatic apparent rises in individual provider-measured coverage (up to 20-35 percentage points) generated by improvements in individual provider record keeping but that the intervention's effect on population-based immunization rates is more likely to be a steady and incremental increase of 5 percentage points annually.2 Hence, it may be some years before the impact of even a fully implemented program is apparent in a state's immunization coverage. Impact on US immunization rates would require widespread implementation across states, as was required by Congress and encouraged by federal immunization grant guidelines but not actually carried out.

Measurement and feedback involving less than 15% of providers is unlikely to have any considerable effect nationally. A public program based on "What gets measured gets done" has had to learn for itself that lack of monitoring of an intervention can be associated with lack of implementation.

Accepted for publication February 28, 2000.

We wish to thank all the state and local immunization program staff who contributed their time and effort to providing the data for this report, and without whose tireless efforts the implementation of measurement and feedback would not take place.

Corresponding author: Charles W. LeBaron, MD, Mail: MS E-61, NIP, CDC, Atlanta, GA 30333; Federal Express: Room 5314, 12 Corporate Square Blvd, Atlanta, GA 30329 (e-mail: cel3@cdc.gov).

REFERENCES

1. LeBaron CW, Chaney M, Baughman AL, et al. Impact of measurement and feedback on vaccination coverage in public clinics, 1988-1994. JAMA. 1997;277:631-635.
2. LeBaron CW, Mercer JT, Massoudi MS, et al. Changes in clinic vaccination coverage after institution of measurement and feedback in 4 states and 2 cities. Arch Pediatr Adolesc Med. 1999;153:879-886.
3. Schlenker T, Sukhan S, Swenson C. Improving vaccination coverage through accelerated measurement and feedback [letter]. JAMA. 1998;280:1482-1483.
4. Massoudi MS, Walsh J, Stokley S, et al. Assessing immunization performance of private practitioners in Maine: impact of the assessment, feedback, incentives, and exchange strategy. Pediatrics. 1999;103:1218-1223.
5. Buffington J, Bell KM, LaForce FM, and the Genesee Hospital Medical Staff. A target-based model for increasing influenza immunizations in private practice. J Gen Intern Med. 1991;6:204-209.
6. Kouides RW, Lewis B, Bennett NM, et al. A performance-based incentive program for influenza immunization in the elderly. Am J Prev Med. 1993;9:250-254.
7. Morrow RW, Gooding AD, Clark C. Improving physicians' preventive health care behavior through peer review and financial incentives. Arch Fam Med. 1995;4:165-169.
8. Thompson RS, Taplin SH, McAfee TA, Mandelson MT, Smith AE. Primary and secondary prevention services in clinical practice: twenty years' experience in development, implementation, and evaluation. JAMA. 1995;273:1130-1135.
9. Ad Hoc Working Group for the Development of Standards of Pediatric Immunization Practices. Standards for pediatric immunization practices. JAMA. 1993;269:1817-1822.
10. Recommendations of the Advisory Committee on Immunization Practices: programmatic strategies to increase vaccination rates: assessment and feedback of provider-based vaccination coverage information. MMWR Morb Mortal Wkly Rep. 1996;45:219-220.
11. Centers for Disease Control and Prevention. Vaccine-preventable diseases: improving vaccination coverage in children, adolescents, and adults: a report on recommendations of the Task Force on Community Preventive Services. MMWR Morb Mortal Wkly Rep. 1999;48(RR-8):1-15.
12. National Vaccine Advisory Committee. Strategies to sustain success in childhood immunizations. JAMA. 1999;282:363-370.
13. Departments of Labor, Health and Human Services, and Education and Related Agencies Appropriation Bill: 1995. 103rd Cong, 2nd Sess (1994). Report 103-318:57.
14. LeBaron CW, Lyons B, Massoudi M, Stevenson J. The childhood vaccination infrastructure of the United States. Programs and abstracts of the 2000 Pediatric Academic Societies and American Academy of Pediatrics joint meeting; May 15, 2000; Boston, Mass. Abstract 1206.
15. Bobo J, Gale J, Tharpa P, Wassilak S. Risk factors for delayed immunization in a random sample of 1163 children from Oregon and Washington. Pediatrics. 1993;91:308-314.
16. Morrow A, Rosenthal J, Lakkis H. A population-based study of access to immunization among urban Virginia children served by public, private, and military health care systems. Pediatrics [serial online]. 1998;101:e5. Available at: http://www.pediatrics.org/cgi/content/full/101/2/e5?. Accessed June 9, 2000.
17. Massoudi M, LeBaron CW, Stokley S, Dini E, Belmont L, Schultz R. Vaccination levels and access to care during a pertussis outbreak in a rural population. Program and abstracts of the Ambulatory Pediatric Association Meeting; May 3, 1998; New Orleans, La. Abstract 109.
18. Maes E, Rodewald LE, Coronado V, Battaglia M, Izrael D, Ezzati-Rice T. Immunization providers for impoverished preschool children: results from the 1997 National Immunization Survey. Program and abstracts of the Ambulatory Pediatric Association Meeting; May 5, 1998; New Orleans, La. Abstract 365.
