Educational Intervention

The Effect of a Teaching Award on the Quality of Continuing Medical Education Participant Evaluations

Thomas R. Welch, MD; Mary Jane Bullen, CMP

From the Division of Nephrology and Hypertension, Children's Hospital Research Foundation (Dr Welch), and the Department of Continuing Medical Education, Children's Hospital Medical Center (Dr Welch and Ms Bullen), Cincinnati, Ohio.


Arch Pediatr Adolesc Med. 2000;154(1):81-82. doi:10.1001/pubs.Pediatr Adolesc Med.-ISSN-1072-4710-154-1-pei90185.

Objective  To improve compliance with the completion of speakers' evaluation forms in a pediatric hospital continuing medical education program.

Design  Preintervention and postintervention analysis.

Setting  Pediatric hospital in Cincinnati, Ohio.

Participants  Attendees at pediatric grand rounds programs.

Main Outcome Measure  Analysis of speaker evaluation forms for each of 20 pediatric grand rounds programs before and after such evaluations were used as the basis for a speakers' award.

Results  Spontaneous written comments were found on a mean of 7.3 evaluations per preintervention program and 13.5 evaluations per postintervention program (P<.01). The distribution of objective scores in 3 items examined was wider postintervention than preintervention (P<.01).

Conclusion  When participants in continuing medical education programs know that their evaluations of an activity are used as the basis for an educational award, they may be more reflective in completing such evaluations.

The fifth essential of the Essentials and Guidelines for Accreditation of Sponsors of Continuing Medical Education (CME)1 of the Accreditation Council for CME mandates the evaluation of CME activity by participants.2 Many programs satisfy this requirement by incorporating a questionnaire-format evaluation form into the attendance record, so that participants must complete an evaluation of the activity to claim credit.

It has been our experience, especially with recurring programs, that such forms frequently are completed with minimal reflection. This, in turn, has minimized their utility in providing helpful feedback to speakers and program planners. To address this problem, and to offer recognition for outstanding CME providers, we instituted a system by which participants' evaluations were tied to a teaching award.

SETTING

Children's Hospital Medical Center, Cincinnati, Ohio, is a 330-bed tertiary care pediatric hospital serving a referral region with a population of 1½ million. The hospital's CME department provides programs for virtually all of the pediatricians in the region, as well as for many family physicians and general practitioners who care for children.

The major ongoing CME activity for this group is a series of weekly pediatric grand rounds programs. The program is attended by an average of 140 participants, both at the base hospital and at multiple off-campus, satellite-networked sites.

The participant evaluation instrument for this program includes an objective rating of 5 items—(1) objectives met, (2) educational aids, (3) pertinence to practice, (4) presentation quality, and (5) knowledge gained—on a scale of 0 (lowest) to 5 (highest). Space for individual comments is also included. The CME office staff tabulates these evaluations, and the results are reported to individual speakers as well as to the Children's Hospital Medical Center CME committee.
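For concreteness, a single completed form can be modeled as a small record. The following is a minimal Python sketch; the class and field names are our own illustration, not the CME office's actual data format.

```python
# A minimal sketch of one evaluation record; names are hypothetical,
# not the CME office's actual form or database schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GrandRoundsEvaluation:
    # Each item is rated on the instrument's 0 (lowest) to 5 (highest) scale.
    objectives_met: int
    educational_aids: int
    pertinence_to_practice: int
    presentation_quality: int
    knowledge_gained: int
    comment: Optional[str] = None  # free-text space for individual comments

# Example: a single completed evaluation.
form = GrandRoundsEvaluation(5, 4, 5, 4, 3, comment="Strong case examples.")
```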

INTERVENTION

Beginning in January 1998, it was announced to attendees of the grand rounds programs that participant evaluations would be used by the CME committee to choose the recipient of a quarterly CME teaching award. This information was also provided in a newsletter.

MEASUREMENTS

The hypothesis was that participants would complete evaluations more reflectively after the intervention. Because reflection is a somewhat subjective quality, several surrogate measures were chosen.

First, evaluations for the 20 consecutive grand rounds programs preceding the intervention and the 20 following it were reviewed. Next, for each item, the number of responses at each of the 6 possible scores (0-5) was tabulated; the assumption was that scores would be more widely distributed if participants were being more reflective. Finally, the number of evaluations containing individual written comments was tabulated, on the assumption that such comments indicated more thorough consideration by participants. Comparisons between the preintervention and postintervention periods were made with the χ2 test.
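The tabulation itself is straightforward. A minimal Python sketch follows, with hypothetical evaluation records standing in for the real forms.

```python
# A sketch of the tabulation described above; the evaluation data
# below are hypothetical, for illustration only.
from collections import Counter

evaluations = [
    {"objectives_met": 4, "presentation_quality": 5, "knowledge_gained": 4,
     "comment": "Good use of real cases."},
    {"objectives_met": 5, "presentation_quality": 4, "knowledge_gained": 3,
     "comment": ""},
    {"objectives_met": 3, "presentation_quality": 4, "knowledge_gained": 4,
     "comment": "Ran long; fewer slides next time."},
]

# Surrogate 1: how many responses fell at each possible score (0-5)?
score_counts = {
    item: Counter(e[item] for e in evaluations)
    for item in ("objectives_met", "presentation_quality", "knowledge_gained")
}

# Surrogate 2: how many evaluations carried a spontaneous written comment?
n_comments = sum(1 for e in evaluations if e["comment"].strip())

print(score_counts["objectives_met"])  # Counter({4: 1, 5: 1, 3: 1})
print(n_comments)                      # 2
```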

PROGRAMS

The preintervention series consisted of the first 20 weekly grand rounds programs starting in January 1997. The postintervention series consisted of the first 20 weekly grand rounds programs starting in January 1998.

Average attendance at the programs was not significantly different between the series (preintervention, 128 participants per week; postintervention, 119 per week). The 2 series did not differ significantly in obvious ways such as faculty locale (preintervention, 5 visiting speakers; postintervention, 7) or professional status (4 presentations in each group included nonphysician professionals).

SPONTANEOUS COMMENTS

In the preintervention group, an average of 7.3 evaluations per week included spontaneous written comments about the program. In the postintervention group, an average of 13.5 evaluations per week had such comments (P<.01).

DISTRIBUTION OF OBJECTIVE SCORES

Three of the 5 items on which the objective evaluation of programs is based (objectives met, presentation quality, and knowledge gained) were chosen for more detailed analysis. The other 2 (educational aids and pertinence to practice) tended to be completed less reliably and were highly event specific.

For each of these items, the number of responses at each of the 6 possible scores (0-5) was tabulated, yielding a 6 × 2 table (preintervention vs postintervention) for each item, to which a χ2 test was applied. In each case the distribution of scores was significantly different in the postintervention period (P<.01).
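This comparison corresponds to a standard χ2 test of independence on each item's 6 × 2 table. The sketch below uses scipy.stats.chi2_contingency with hypothetical counts; it illustrates the method and does not reproduce the study's data.

```python
# A sketch of the 6 x 2 chi-square comparison described above, using
# scipy.stats.chi2_contingency. All counts are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: scores 0-5 for one item (e.g., presentation quality).
# Columns: preintervention and postintervention response counts.
table = np.array([
    [   2,    5],  # score 0
    [   4,   12],  # score 1
    [  10,   30],  # score 2
    [  60,   90],  # score 3
    [ 900,  700],  # score 4
    [1500, 1400],  # score 5
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {dof}, P = {p:.3g}")
```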

COMMENT

Evaluation of CME activities is a cornerstone of the Accreditation Council for CME accreditation process.1,2 Planning of future programs and selection of future speakers are typically based on the results of program evaluations. Additionally, summaries of comments from evaluation instruments are used to provide constructive feedback to teachers. All of these uses of evaluation instruments are predicated on their careful, reflective completion by participants. Although this has not, to the best of our knowledge, been studied systematically, a look at the audience in many CME meetings is revealing: evaluation instruments are sometimes completed in a cursory fashion, occasionally even before a program has finished.

It is easy to speculate on the reasons for such behavior. The regulatory requirements of contemporary medical practice, including those regarding CME, are increasingly viewed as onerous. Participants who perceive no direct benefit or outcome from completing a form may have minimal reason to do so with care. By tying the responses to these forms to a speakers' award system, however, participants are able to see a direct, highly visible outcome of their efforts in providing evaluations. To reinforce this further, the teaching awards are presented every 3 months at the grand rounds programs themselves. Thus, participants receive regular reinforcement of the importance of their behavior.

This system benefits many parties. Good teachers receive formal recognition in front of their peers, and other speakers receive more useful feedback on their teaching style. Our CME planning committee receives more thoughtful evaluations that, in turn, inform future program planning. Finally, the participants themselves benefit if the overall quality of educational programs rises.

Editor's Note: Put gold at the end and the rainbow becomes brighter.—Catherine D. DeAngelis, MD

Accepted for publication July 29, 1999.

We thank Mead Johnson Pharmaceuticals Inc, Cincinnati, Ohio, and Dave Fulkerson for sponsoring the CME teaching award.

Reprints: Thomas R. Welch, MD, Division of Nephrology and Hypertension, Children's Hospital Research Foundation, 3333 Burnet Ave, TCHRF5, Cincinnati, OH 45229-3039 (e-mail: welct0@chmcc.org).

REFERENCES

1. Accreditation Council for Continuing Medical Education. Essentials and Guidelines for Accreditation of Sponsors of Continuing Medical Education. Lake Bluff, Ill: Accreditation Council for Continuing Medical Education; 1984.
2. Evaluation. In: Rosof AB, Felch WB, eds. Continuing Medical Education: A Primer. Westport, Conn: Praeger Publishers; 1992.
