Educational Intervention

A Pediatric Clinical Skills Assessment Using Children as Standardized Patients

J. Lindsey Lane, MD; Amitai Ziv, MD; John R. Boulet, PhD
Author Affiliations

From the Department of Pediatrics, Medical College of Pennsylvania and Hahnemann University (Dr Lane), and the Educational Commission for Foreign Medical Graduates (Dr Boulet), Philadelphia, Pa; and Department of Pediatrics, Hadassah, Hebrew University Medical Center, Jerusalem, Israel (Dr Ziv). Dr Lane is now with the Department of Pediatrics, Jefferson Medical College, Children's Health Center, Philadelphia.


Arch Pediatr Adolesc Med. 1999;153(6):637-644. doi:10.1001/archpedi.153.6.637.

Objectives  To develop and implement a pediatric clinical skills assessment (PCSA) for residents, using children as standardized patients (SPs); to assess the psychometric adequacy of the PCSA and use it to evaluate the performance of residents; and to evaluate the feasibility of using child SPs and the response of the residents and the child SPs to participation in the PCSA.

Methods  Ten 22-minute complete patient encounters were developed, 7 with child SPs. Fifty-six residents (10 second-year pediatric residents, 29 first-year pediatric residents, and 17 first-year family practice residents) were evaluated on the following clinical skills: history taking, physical examination, interpersonal skills, and documentation and interpretation of clinical data/patient note.

Main Outcome Measures  Patient encounter checklists, focus groups, and questionnaires.

Results  Average skill scores for the 56 residents were 68% (SD, 12%) for history taking, 56% (SD, 26%) for physical examination, 46% (SD, 12%) for patient note, and 68% (SD, 16%) for interpersonal skills. Second-year pediatric residents scored significantly higher on history taking than first-year pediatric and first-year family practice residents; first-year pediatric residents scored significantly higher on interpersonal skills than second-year pediatric and first-year family practice residents; and first- and second-year pediatric residents scored significantly higher on the patient note component than first-year family practice residents. All differences noted were significant at P<.05. There were no significant differences on physical examination between the groups. Reliabilities were 0.69 for history taking, 0.64 for physical examination, 0.76 for interpersonal skills, and 0.81 for the patient note component. On a Likert scale (5 indicates high; 1, low), residents rated the PCSA 3.9 for realism, 4.1 for challenge, 3.1 for enjoyment, and 2.9 for fairness. Child SPs found the experience positive. No negative effects on the children were identified by their real parents or their SP parents.

Conclusions  Our development method gives content validity to our PCSA, and resident scores give indication of PCSA construct validity. Reliabilities are in the acceptable range. Residents found the PCSA challenging and realistic but less than enjoyable and fair. Use of child SPs is feasible. Resident performance scores were low relative to the performance criteria of the PCSA development group. The adequacy of clinical skills teaching and assessment in residency programs needs to be reviewed. Deficits in specific skills and overall performance of residents identified by a PCSA could be used to guide individual remediation and curricular change.


The American Board of Pediatrics1 has defined the clinical skills in which each graduate of a pediatric residency should have competence. They are (1) attitudes, (2) factual knowledge, (3) interpersonal skills (IPS), (4) technical skills (which include physical examination [PE]), and (5) clinical judgment. It is easy to reliably and validly assess factual knowledge but difficult to do the same for attitudes, IPS, data gathering (history taking [HX] and PE) skills, and clinical judgment. Attitudes and IPS are known to have a critical effect on medical outcomes and patient satisfaction,2 and because of this, they clearly should be evaluated. Data gathering and clinical judgment, which have a direct, easily recognizable effect on medical outcomes, should also be evaluated during residency training. In response to the need to assess clinical skills and performance, different assessment formats using standardized patients (SPs) have been developed and refined over the past 20 years.3-5 One format, generally referred to as an objective structured clinical examination (OSCE), requires the performance of a single task (eg, examine the heart, obtain a social history, interpret a radiograph) at multiple short "stations." Another format, generally referred to as a clinical skills assessment, requires the performance of several complete patient encounters and the sequential use of HX skills, PE skills, information-giving skills, and patient management skills, as well as the continuous use of interpersonal and communication skills during each encounter. The SPs used for these encounters may be actual patients with a medical problem or individuals who have been trained to portray a particular medical problem, symptom, or physical finding; they are educated to always respond to the same question or PE maneuver in the same way.

The use of SPs has made it possible to evaluate those clinical skills that cannot be assessed using paper-and-pencil evaluation techniques. The scores derived from SP-based assessments have been shown to have encouraging psychometric properties6-11; according to the Association of American Medical Colleges Curriculum Directory (1996-1997), 96% of American medical schools are teaching clinical skills using various SP formats. The directory also shows that 60% of US medical schools use some form of examination with either SPs or real patients to evaluate medical students' clinical skills. The Medical Council of Canada has used an assessment of clinical skills as part of the licensure examination since 1993,12 and in the United States, the Educational Commission for Foreign Medical Graduates13 introduced an SP-based clinical skills assessment in July 1998. The National Board of Medical Examiners14 is planning to introduce a similar assessment for US licensure in the near future. However, despite the widespread use of clinical skills teaching and assessment programs in undergraduate medical education and the widespread use of SPs in these programs, such methods are used much less frequently in graduate medical education.

Some of the first reports on the use of SPs come from pediatrics, where "simulated mothers" were introduced as a tool for teaching medical interviewing and IPS.15 Three centers in the United Kingdom developed OSCEs in pediatrics for medical students in the early 1980s.16-18 However, it was not until this decade that pediatric OSCEs (with the short-station format and use of some SPs) were used to evaluate the clinical skills of residents.19-22

There is a need to further investigate SP methods for use in pediatric training. The development of a pediatric clinical skills assessment (PCSA) that uses children as SPs in a complete patient encounter format is described herein. The psychometric properties of the PCSA and the findings on resident performance are also described. Qualitative data are provided about how the residents perceived the PCSA and about how the children who were trained as SPs responded to the experience.

The first objective of this study was to develop and implement a PCSA using children as SPs. The second objective was to evaluate the psychometric adequacy of the scores derived from our SP assessment. The third objective was to obtain and analyze data on resident performance on the PCSA. The components of the residents' performance to be assessed were data gathering skills (HX and PE skills), IPS, and the ability to organize and interpret clinical data gathered during a complete patient encounter in a written patient note (PN). The fourth objective was to obtain qualitative data concerning the response of the residents to the test, the appropriateness and feasibility of using children as SPs, and the response of the children to the experience of being SPs. By fulfilling these objectives, we hoped to highlight the educational potential of using a clinical skills assessment with child and adult SPs in pediatric residency programs.

PCSA BLUEPRINT DEVELOPMENT

A PCSA development group with 8 members was recruited. It was composed of 6 general pediatricians, 1 pediatric nurse practitioner, and 1 medical educator. The group defined the skill areas to be evaluated by the PCSA: HX, PE, IPS, and documentation and interpretation of clinical data (PN). To achieve acceptable score reliabilities on a PCSA, a broad sampling of clinical situations and several hours of testing time are necessary.7 Therefore, we planned to develop 10 complete patient encounters providing 220 minutes of testing time for our PCSA. A general concept for each complete patient encounter was developed by taking into account the prevalence and importance of different pediatric medical problems, the desirability of representing various specialty areas within pediatrics, the appropriateness of certain cases for the proficiency level of the examinees, and the goal of having a representative range of patient ages.

PATIENT ENCOUNTER DEVELOPMENT

Using this blueprint, the PCSA development group members worked in pairs to develop the specific content of each clinical encounter. A description of the SP clinical encounters is presented in Table 1. These SP encounters represent a sample of the possible clinical problems that a physician would be likely to see in practice. Three of the clinical problems involved the preschool age group and, because a young child or an infant cannot be trained to perform as an SP, these cases used adult SPs only. This meant that HX, IPS, and PN skills were measured in every encounter, whereas PE skills were measured only in the 7 encounters in which a child SP was present. Each clinical problem was divided into 2 parts: a 15-minute patient encounter, followed by a 7-minute documentation of the encounter in the medical record. Each encounter was written to include a communication challenge that was specifically designed to trigger an empathetic or counseling response from the resident. For example, in the asthma case, the child says that he wants to move back to Arizona because he misses his friends, which should elicit an empathetic response from the resident. In this way, each encounter provided the SPs with an additional specific opportunity to assess the resident's IPS.

Table 1. Summary of Pediatric Clinical Skills Assessment Patient Encounters
SP RECRUITMENT AND TRAINING

Eleven children between 7 and 16 years of age were recruited as SPs. All the children had permission from their school to participate and were required to submit a report of their experience. Three children were trained for clinical encounter 7 (dark urine), and 3 adolescents were trained for clinical encounter 9 (headache), because each had permission to miss only 2 days of school. One adolescent and one child were able to attend all 6 days of the PCSA. The other 3 children were home schooled, and participation as an SP was incorporated into their curriculum as a special educational project. Nine adults were recruited to portray the parents. The child and adult SPs were paid an hourly rate.

Standardized patient training followed a structured program: (1) orientation to the PCSA process, (2) delineation of core concepts essential to being an SP, (3) explanation of checklists and rating scales, and (4) case training in 3 or 4 sessions of 1½ to 2 hours each. The SPs were trained to be consistent in their portrayal of patient cases. They were also trained to evaluate the residents' performance by filling out checklists for the HX and PE skills and to rate the IPS component using a Likert scale. Training continued until the SPs' checklist and rating scale completion were consistent with the performance portrayed by the trainer. The adult and adolescent SPs completed the HX and PE checklists and the IPS ratings, and gave a global rating of their overall patient satisfaction with the encounter. In the encounters in which a child SP was paired with an adult SP (simulating a parent), the children only completed the rating of their overall patient satisfaction with the encounter. Once all the cases had been developed and the SPs trained, the cases were validated for accuracy and realism by a pediatrician who had not participated in the development process. Minor modifications to case content, case checklists, and SP portrayal were made on the recommendation of the validating physician.

STUDY SAMPLE

Residency program directors of 2 pediatric programs and 4 family practice programs agreed to have their residents participate in this study. The resident sample consisted of 29 first-year pediatric residents (PL-1s) with 4 months of pediatric experience after medical school, 17 first-year family practice residents (FP-1s) with no pediatric experience after medical school, and 10 second-year pediatric residents (PL-2s) with 16 months of pediatric experience after medical school.

PILOT TEST

The 10 PL-2s participated in the piloting of the PCSA. Observers from the PCSA development group evaluated all aspects of the test, including logistics. Feedback from the SPs, administrative personnel, and examinees was also used to assess the adequacy of the PCSA. After the pilot, the only major change made to the PCSA procedure was to expand the scope and length of the orientation for the participating residents.

TEST ADMINISTRATION

Following the pilot test, the first-year residents (n=46) were tested over 5 days. Residents had an hour-long orientation to the PCSA several days before they were due to take the assessment. Up to 10 residents participated each day, and they received a 15-minute focused orientation just before commencing the PCSA. The PCSA was run at the clinical skills assessment center of the Medical College of Pennsylvania and Hahnemann University School of Medicine, Philadelphia. All the resident encounters were videotaped, and SP performance was observed on video monitors or directly through 1-way mirrors. The residents received instructions and information about the clinical problem before entering the room. The same information was also placed on the desk in the examination room. The door instructions for clinical problem 6 are shown in Table 2. After leaving the examination room, each resident spent 7 minutes writing the PN and was instructed to document the pertinent positive and negative history and physical findings, to list a differential diagnosis, and to outline a management workup plan. The PCSA, including orientation, refreshment breaks, and completion of resident feedback questionnaires, took 6 hours to complete.

Table 2. Door Instructions for Clinical Problem 6
DATA COLLECTION
Checklists

History-taking and PE checklists were specifically written for each patient encounter. Each item on the checklists was carefully selected by the PCSA development team, and each item represented a question that would be asked or a PE maneuver that would be performed by a competent pediatrician seeing the same patient. A generic IPS checklist was developed with IPS subsets in interviewing skills, counseling skills, ability to establish rapport, and personal manner. The adult SPs completed the yes/no items on the HX and PE checklists and completed the IPS checklist giving a Likert scale rating (1-5) of the IPS subsets and an overall patient satisfaction rating of the encounter. The child SPs also gave an overall patient satisfaction rating of the encounter. The HX, PE, and IPS performance scores of the residents were derived based on the 3 checklists. A key word checklist to score the PN was created by the PCSA development group. The key words were the pertinent positives and negatives in the HX and PE components, the differential diagnosis, and the immediate management plan that a competent pediatrician would document. The residents' PN scores were derived based on the key word checklist developed for each case.

Resident Feedback Questionnaire

The residents completed a feedback questionnaire at the end of the PCSA sessions in which they rated each case for realism and challenge and the entire PCSA for enjoyment and fairness. This was done using a Likert scale rating (1-5).

SP Focus Groups

Shortly after the PCSA, 2 focus groups were held. One focus group consisted of the child and adolescent SPs; the other consisted of the SP parents and the real parents of the children. Discussion was directed by the facilitator with the goal of obtaining information about the effect of participation as an SP on the children.

DATA ANALYSIS

Component scores for HX and PE were calculated by summing the HX and PE checklist items performed by the examinee and dividing by the total number of checklist items. This was done individually for each of the 10 clinical encounters. The clinical encounter scores were then averaged over the 10 stations. Scores on the PN were derived based on the key word checklist developed for each case. Ten percent of the PNs for each of the 10 clinical encounters were scored by one of us (J.L.L.) and another pediatrician. Almost 100% concordance was achieved between the 2 scorers, and the remaining 90% of the PNs were scored solely by the pediatrician not involved in the study. Patient note scores were reported as percentage correct, reflecting the proportion of key word items credited. The individual clinical encounter IPS score was derived from the average of the 4 IPS subset ratings (Likert scale, 1-5) required of the SP for each case. Averaging these clinical encounter scores over the 10 stations generated a total IPS score. This total IPS score, which could range from 1 to 5, was converted to a percentage score through a linear transformation. Statistical comparisons among the scores for the 3 resident groups were completed using 1-way analysis of variance. These analyses were done separately for each of the PCSA components. If the omnibus test of significance indicated that there were significant differences among the groups (P<.05), step-down tests, with appropriate Bonferroni adjustments, were used to establish where the differences in mean scores existed. The reproducibility of the resident scores, across the 10 encounters, was established by calculating α coefficients. This provided a measure of interstation reliability for the PCSA component scores. Descriptive statistics (means and SDs) were generated based on the responses to the feedback questionnaires. A statistical analysis software program (SAS Institute Inc, Cary, NC) was used for all analyses.
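
To make these scoring and reliability steps concrete, the following is a minimal sketch, not the authors' code, of how the checklist percentages, the IPS percentage transform, and the interstation α coefficient described above could be computed in Python. The exact IPS transform, all function names, and the example numbers are assumptions for illustration only.

import numpy as np

def checklist_percentage(items_done, items_total):
    # Percentage of HX or PE checklist items performed in one encounter.
    return 100.0 * items_done / items_total

def ips_to_percentage(mean_rating, scale_max=5.0):
    # Assumed linear transform of the mean 1-5 IPS rating to a percentage;
    # dividing by the scale maximum reproduces the reported 3.4 -> 68%.
    return 100.0 * mean_rating / scale_max

def cronbach_alpha(scores):
    # Interstation reliability (α): rows are residents, columns are the 10 encounters.
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    station_vars = scores.var(axis=0, ddof=1)    # variance of each station's scores
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1.0 - station_vars.sum() / total_var)

# Example (hypothetical counts): one resident's HX component score is the
# mean of the 10 encounter checklist percentages.
hx_encounters = [checklist_percentage(done, total) for done, total in
                 [(12, 18), (10, 15), (14, 20), (9, 16), (11, 14),
                  (13, 19), (8, 12), (10, 17), (12, 15), (9, 13)]]
hx_component = float(np.mean(hx_encounters))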

RESIDENT PCSA PERFORMANCE PROFILE

Each resident received a profile of 4 scores: (1) data gathering, which included HX and PE; (2) documentation and interpretation of clinical data (PN); (3) integrated clinical encounter, which was a simple average of (1) and (2); and (4) IPS. In addition to the score report, residents were given the videotape of their PCSA. The videotape was to be reviewed with the residents by their individual program director and to serve as an additional formative evaluation tool. A list of checklist items that were missed in each clinical encounter by individual residents was available for review by the residency program directors.
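
As a small illustration of this 4-score profile, the sketch below combines component percentages into the reported profile. The equal weighting of HX and PE within the data gathering score is an assumption not stated in the text, and the function name is hypothetical; the example inputs are the group-average component scores reported in the Results.

def resident_profile(hx, pe, pn, ips):
    data_gathering = (hx + pe) / 2.0            # assumed simple average of HX and PE
    integrated = (data_gathering + pn) / 2.0    # "simple average of (1) and (2)" per the text
    return {
        "data_gathering": data_gathering,
        "patient_note": pn,
        "integrated_clinical_encounter": integrated,
        "interpersonal_skills": ips,
    }

# Example using the average component scores (percentages) reported below.
print(resident_profile(hx=68, pe=56, pn=46, ips=68))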

A summary of the PCSA development and implementation process is shown in Table 3.

Table 3. Summary of Pediatric Clinical Skills Assessment (PCSA) Development and Implementation

Because a blueprint was used for the development of the patient encounters, our PCSA comprised clinical problems that reflected common pediatric clinical encounters and required the use of "authentic" clinical skills to complete the encounter successfully. We can, therefore, assume some degree of content validity for the assessment. Other evidence of the psychometric adequacy of the PCSA was gathered by calculating the reliability of component scores across the 10 patient encounters. The reliability of the scores for the HX, PE, PN, and IPS components was 0.69, 0.64, 0.81, and 0.76, respectively. These figures were based on the entire sample (N=56) of residents.

The clinical skills scores of the 3 groups of residents (PL-2s, PL-1s, and FP-1s) are displayed in Table 4. The summary values are presented as percentage scores for all clinical skills components in Figure 1. The PL-2s scored significantly higher (P<.05) than the other 2 groups on the HX component of the assessment. On the PE component, there were no significant differences in performance between the 3 groups. On the PN component, which assesses the ability of a candidate to summarize and interpret, in writing, the relevant information gathered in the clinical encounter, the FP-1s scored significantly lower (P<.05) than either of the other 2 groups. There was no significant difference in performance between the PL-2s and the PL-1s on the PN component. On the IPS component, the PL-1s scored significantly higher (P<.05) than the FP-1s and the PL-2s. The global ratings of overall patient satisfaction given by the adult and child SPs for the 3 groups of residents are shown in Figure 2. Overall patient satisfaction ratings correlate with the IPS scores (Figure 1), and the global ratings of satisfaction given by the child SPs are concordant with those given by the adult SPs.

Table 4. Component Scores by Resident Group*

Figure 1. Comparison of history taking (HX), physical examination (PE), documentation and interpretation of data/patient note (PN), and interpersonal skills (IPS) scores. The asterisk indicates a significant (P<.05) difference.

Figure 2. Overall patient satisfaction. The asterisk indicates a significant (P<.05) difference.
RESIDENT FEEDBACK QUESTIONNAIRE

The residents rated the 10 clinical encounters individually between 3.6 and 4.0 for realism using the Likert scale. The overall rating of the PCSA was 3.9 for realism, 4.1 for challenge, 3.1 for enjoyment, and 2.9 for fairness. Figure 3 shows the overall ratings.

Figure 3. Resident feedback on the pediatric clinical skills assessment clinical encounters.
FOCUS GROUPS

All the real parents of the child SPs agreed that they would allow their children to be SPs again, and they believed that the experience had been a privilege for the children. They observed that their children were proud of participating and believed that they had worked hard at a "real" job to earn their money. The real and SP parents believed that the children had learned a lot. The SP parents believed that the children were aware of the interpersonal dynamics of an encounter and reported that during the PCSA the children had strong negative reactions if a resident ignored them or talked down to them. The SP parents found that the children had an excellent memory for the checklist items even though they were not required to complete the checklists. The children said that they found the experience at times exciting, at times nerve-racking, and at times boring; and they said that they were tired at the end of the 6-hour PCSA session. They believed that they learned some biology and got an idea about how physicians are trained and what physicians do. All the children liked earning money.

Even though the use of SPs throughout medical education has burgeoned, and hundreds of studies have been published, the implementation of SP-based assessments in pediatrics has been limited. The "complete encounter" format of our assessment contrasts with the assessment of selected skills at multiple short stations of the traditional OSCE20,21 in that our PCSA requires residents to sequence and integrate their clinical skills in the context of a complete patient encounter.

From a psychometric perspective, some evidence of the reliability and validity of the PCSA was derived. The reliabilities found in our study are within the acceptable ranges for similar clinical assessments.23 The content validity of the SP-based assessment is established by using a blueprint in that authentic skills are being measured across common clinical encounters. The clinical encounters were all developed in a structured, standardized manner, and common clinical problems representing a wide range of patient ages and disciplines within the field of pediatrics were selected. The use of children as SPs undoubtedly enhanced the realism and the content validity, and it also required the residents to perform the social-linguistic task of changing "registers" or "frames" when addressing the child.24 Because we believed we could not use children younger than 7 years in this PCSA format, we were forced to develop clinical problems that were "second opinions" and that used only adult SPs. The PE component was not measured in these encounters, compromising, to some degree, the representativeness of our clinical problem pool. Overall, the feedback received from the examinees on case realism was encouraging, and even the stations without children were still evaluated to be as realistic as the other stations. While the authenticity of the skills being measured, combined with the selection of the clinical encounters being modeled, does provide evidence of the validity of the assessment, other support for validity was also derived. Even though not all of the differences among the 3 groups of residents in HX and PN were significant, the trend was for residents' scores to increase with increasing pediatric experience. In general, one would expect that residents with more pediatric experience would ask more pertinent questions and be better able to collect and synthesize information. The fact that our tool was sensitive enough to identify some expected differences in clinical performance between more and less experienced residents provides some evidence to suggest that meaningful inferences can be derived from the PCSA scores.

Residents with marked deficiencies in IPS were identified based on scores from this performance assessment. Because medical outcomes are so closely linked with effective communication, the establishment of rapport by a physician, and a patient's perception of the physician as caring, the importance of being able to identify and remediate individual residents with deficiencies in this domain early in their training cannot be overemphasized.25 In the domain of IPS, it was interesting to find that the PL-1s performed significantly better than the PL-2s or the FP-1s. The finding that the more experienced pediatric residents had significantly lower IPS scores than the less experienced pediatric residents could be attributed to the small sample size used in this study, but it may also be a worrisome indicator of some degree of deterioration of IPS in the course of medical education. Whether this deterioration results from the large clinical care load carried by residents in many programs and the sleep deprivation from frequent nights on call, which leave limited time and diminish the inclination to really communicate, is a matter for speculation. There have, however, been published reports26-28 of deterioration in IPS as training progresses.

The examinee scores, in an absolute sense, may also provide some insights into the complexities of medical training. The average skill scores were 68% for HX, 56% for PE, 46% for documentation and interpretation of clinical data (PN), and 3.4 (of 5) for IPS. If we assume that the checklists developed by the expert panel of faculty reflect realistic expectations for performance at this level of residency training, we find, in our sample of residents, a large gap between expectations and performance. Joorabchi and Devries21 reported a similar finding in their resident OSCE. Our residents' most startling deficiencies were in PE skills and in the PN component. This finding could be explained in 2 ways. First, the difference reflects actual performance deficits in our residents, and may follow from a decreased emphasis in medical education on physical diagnosis and clinical reasoning skills, compared with knowledge-based skills. Second, the discrepancy may reflect a methodological difficulty in the case development process, in that by evaluating residents with specific checklist items we may be including unimportant items and failing to include important items to measure performance. This would mean that the more subtle, and perhaps more critical, factors in the clinical encounter might not be evaluated.29 The use of global, or holistic, ratings in future PCSA assessments may alleviate this potential problem.30

The residents' feedback about the PCSA indicated that they found the exercise challenging but less than fair or enjoyable. As residents in training are used to receiving preceptor input to manage each clinical problem, the fact that they had to complete 10 clinical encounters in succession without input may be why the PCSA was evaluated by the residents as challenging. Because of the general lack of formal clinical skills assessment for graduate medical education trainees and because most of these residents had never had their clinical skills evaluated before, the PCSA may also have represented a challenge to their self-concept of being clinically competent physicians. Their view that the assessment was less than fair would be a normal defensive response, and makes it unlikely that their rating of enjoyment of the PCSA would be high. Whether this negative attitude may have adversely affected any resident's performance is difficult to say. It does, however, highlight the importance of ensuring that the participants in a PCSA have a full understanding of the purpose of the performance-based assessment, including the benefit of being able to develop and improve their clinical skills based on appropriate formative feedback.

Our experience with recruiting, training, and then running the PCSA exercise with children as SPs brought into focus some important practical considerations and has led us to believe that children selected as SPs must be intelligent and self-confident but not necessarily extroverted. Our child SPs were no more difficult to train than our adult SPs, and by observing the encounters through a 1-way mirror, we found that the children were able to maintain consistency in the portrayal of their roles. The child SPs' overall satisfaction rating of each resident was concordant with the overall satisfaction rating given by the adult SPs, showing that, despite their young age, these children had the same perception of the residents' performance as the adults. Whether all children are as perceptive or whether these children were more perceptive because they had completed the SP training is an interesting question. From our focus groups with the children, the real parents, and the SP parents, we were not able to identify any negative effects that could be attributed to performance as an SP; these findings are consistent with previous studies31 on the use of children as SPs. All the child SPs agreed that overall the experience had been a positive one. However, all our children came from stable, supportive families, and neither the children nor their families had experienced adverse or negative interactions with the medical profession. We believe that SP children need to be carefully selected by screening the families before recruitment, given careful, in-depth training before the exercise, and debriefed after the exercise by individuals experienced in communicating with children.

A PCSA is not only a valid and reliable way to summatively assess an individual resident's performance; it is also able to identify specific skill deficits. Our PCSA identified general and individual deficits both in residents who gave an overall poor performance and in residents who gave an overall better performance. For example, a general skill deficit in PE was identified in the child abuse case when more than half the residents auscultated through the child's shirt, thereby missing the whip marks on the child's back. This kind of information can be used to guide individual remediation and, when large numbers of residents have the same skill deficits, to guide curricular change and the development of clinical skills teaching programs in which residents are regularly observed and given formative feedback.

Studies32-34 have shown that, although programs for graduate and undergraduate medical education state that trainees are observed, the reality is different. In addition, even when residents report that they are observed, the amount of "observation" may be limited to one HX and PE component per year. Even if this observation is followed by excellent formative feedback, it is clearly an inadequate amount of structured teaching in the area of clinical skills. Unstructured and informal clinical skills teaching will vary among programs and among residents, with some residents receiving no teaching at all, a situation that would not be acceptable to program directors or residents if it affected the teaching of the knowledge required to pass the pediatric board examination. Indeed, it seems that the assessment of resident competence in the areas of attitudes, IPS, data collection, PE, and information giving is still based mainly on the "halo effect" and on the rating of skills that are never directly observed.

Evaluation of 3 groups of residents using our PCSA revealed marked deficiencies in HX skills, PE skills, IPS, and the ability to interpret and document data. Residency programs need to develop and incorporate ongoing clinical skills teaching and formative feedback methods into the curriculum to ensure that all residents have adequate clinical skills. Otherwise, residents who may have inadequate skills in the areas defined by the American Board of Pediatrics as necessary for certification will continue to graduate.

Please note that a PCSA encounter with checklist is available from the authors for review.

Accepted for publication November 23, 1998.

This study was supported by the Office of Medical Education, Medical College of Pennsylvania and Hahnemann University School of Medicine, Philadelphia; the Educational Commission for Foreign Medical Graduates, Philadelphia; and a faculty development grant from the Robert Wood Johnson Foundation, Princeton, NJ.

Presented in part at the Association of American Medical Colleges Annual Meetings as RIME (Research in Medical Education) posters in Washington, DC, October 27 to November 2, 1995, and in San Francisco, Calif, November 6-12, 1996; and at the 1998 Ambulatory Pediatric Association National Meeting as a small group presentation, New Orleans, La, May 3, 1998.

We thank Sue Spachmann, CPNP, for help with case development and standardized patient training; Deborah Sandrock, MD, for help with case development; Mara Crans, MD, for validating the cases; Nazrat Mirza, MD, for scoring the patient notes; Barbara Hudecki, Diane Cohen, and Benita Brown for their administrative support and for running the pediatric clinical skills assessment; Miriam Friedman Ben-David, PhD, for her guidance and expertise; and Gerald J. Kelliher, PhD, for his support during this project.

Corresponding author: J. Lindsey Lane, MD, duPont at Jefferson, Children's Health Center, 841 Chestnut St, Philadelphia, PA 19107-4414 (e-mail: lindsey.lane@mail.tju.edu).

Purpose: This section is intended to share information concerning educational efforts in the broad field of pediatrics. We welcome studies on the following topics: undergraduate and graduate education in medicine and allied health occupations; continuing education of health professionals; education of patients and families; and health education for the general public, the community, and organizations that contribute to the promotion and improvement of the health of children and adolescents.

Editor's Note: Standardized patients are wonderful teaching models. Can you imagine what it would be like if all patients were standardized? BORING!—Catherine D. DeAngelis, MD

References

1. American Board of Pediatrics Inc. Foundations for Evaluating the Competency of Pediatricians. Chapel Hill, NC: American Board of Pediatrics Inc; 1978.
2. Bartlett EE, Grayson M, Barker R, Levine DM, Golden A, Libber S. The effects of physician communication skills on patient satisfaction, recall and adherence. J Chronic Dis. 1984;37:755-764.
3. Harden RM, Stevenson M, Wilson-Downie W, Wilson GM. Assessment of clinical competence using objective structured examination. BMJ. 1975;1:447-451.
4. Stillman PL, Swanson DB. Ensuring the clinical competence of medical school graduates through standardized patients. Arch Intern Med. 1987;147:1049-1052.
5. Barrows HS. An overview of the uses of standardized patients for teaching and evaluating clinical skills. Acad Med. 1993;68:443-453.
6. Stillman PL, Swanson DB, Smee S, et al. Assessing clinical skills of residents with standardized patients. Ann Intern Med. 1986;105:762-771.
7. Newble DI, Swanson DB. Psychometric characteristics of the objective structured clinical examination. Med Educ. 1988;22:325-334.
8. Vu NV, Marcy MM, Colliver JA, Verhulst SJ, Travis TA, Barrows HS. Standardized (simulated) patients' accuracy in recording clinical performance check-list items. Med Educ. 1992;26:99-104.
9. van der Vleuten CPM, Swanson DB. Assessment of clinical skills with standardized patients: state of the art. Teaching Learning Med. 1990;2:58-76.
10. Tamblyn RM, Klass DJ, Schnabl GK, Kopelow ML. Sources of unreliability and bias in standardized-patient rating. Teaching Learning Med. 1991;3:74-85.
11. Colliver JA, Williams RG. Technical issues: test application. Acad Med. 1993;68:454-460.
12. Reznick R, Smee S, Rothman A, et al. An objective structured clinical examination for the licentiate: report of the pilot project of the Medical Council of Canada. Acad Med. 1992;67:487-494.
13. Ziv A, Friedman Ben-David M, Sutnick AI, Gary NE. Lessons learned from six years of international administrations of the ECFMG's SP-based clinical skills assessment. Acad Med. 1998;73:84-91.
14. Clauser BE, Ripkey D, Fletcher B, King A, Klass D, Orr N. A comparison of pass/fail classifications made with scores from the NBME standardized-patient examination and part II examination. Acad Med. 1993;68(suppl 10):S7-S9.
15. Helfer RE, Black MA, Teitelbaum H. A comparison of pediatric interviewing skills using real and simulated mothers. Pediatrics. 1975;55:397-400.
16. Waterston T, Cater JI, Mitchell RG. An objective undergraduate clinical examination in child health. Arch Dis Child. 1980;55:917-922.
17. Watson AR, Houston IB, Close GC. Evaluation of an objective structured clinical examination. Arch Dis Child. 1982;57:390-398.
18. Smith LJ, Price DA, Houston IB. Objective structured clinical examination compared with other forms of student assessment. Arch Dis Child. 1984;59:1173-1176.
19. Matsell DG, Wolfish NM, Hsu E. Reliability and validity of the objective structured clinical examination in paediatrics. Med Educ. 1991;25:293-299.
20. Joorabchi B. Objective structured clinical examination in a pediatric residency program. AJDC. 1991;145:757-762.
21. Joorabchi B, Devries JM. Evaluation of clinical competence: the gap between expectation and performance. Pediatrics. 1996;97:179-184.
22. Hilliard RI, Tallett SE. The use of an objective structured clinical examination with postgraduate residents in pediatrics. Arch Pediatr Adolesc Med. 1998;152:74-78.
23. Norman GR, van der Vleuten CPM, De Graaff E. Pitfalls in the pursuit of objectivity: issues of reliability. Med Educ. 1991;25:119-126.
24. Tannen D, Wallat C. Interactive frames and knowledge schemas in interaction: examples from a medical examination/interview. In: Tannen D, ed. Framing in Discourse. New York, NY: Oxford University Press; 1993:57-76.
25. Simpson M, Buckman R, Stewart M, et al. Doctor-patient communication: the Toronto consensus statement. BMJ. 1991;303:1385-1387.
26. Craig JL. Retention of interviewing skills learned by first-year medical students: a longitudinal study. Med Educ. 1992;26:276-281.
27. Engler CM, Saltzman GA, Walker ML, Wolf FM. Medical student acquisition and retention of communication and interviewing skills. J Med Educ. 1981;56:572-579.
28. Kraan HF, Crijnen AAM, De Vries MW, Zuidweg J, Imbos T, van der Vleuten CP. To what extent are medical interviewing skills teachable? Med Teacher. 1990;12:315-328.
29. Colliver JA. Validation of standardized-patient assessment: a meaning for clinical competence. Acad Med. 1995;70:1062-1064.
30. Boulet JR, Friedman Ben-David M, Hambleton RK, Burdick WP, Ziv A. Assessing the adequacy of post-encounter written scores in standardized patient exams. In: Scherpbier AJJA, van der Vleuten CPM, Rethans JJ, van der Steeg AFW, eds. Advances in Medical Education: Proceedings From the Seventh Ottawa Conference on Medical Education and Assessment, Maastricht, Netherlands. Dordrecht, the Netherlands: Kluwer Academic Publishers; 1997:410-412.
31. Woodward CA, Gliva-McConvey G. Children as standardized patients: initial assessment of effects. Teaching Learning Med. 1995;7:188-191.
32. Heins M, Ruggill J, Baker H. Education of residents. AJDC. 1983;137:691-695.
33. Day SC, Grosso LJ, Norcini JJ, Blank LL, Swanson DB, Horne MH. Residents' perception of evaluation procedures used by their training program. J Gen Intern Med. 1990;5:421-426.
34. The Role of Faculty Observation in Assessing Students' Clinical Skills: Contemporary Issues in Medical Education. Vol 1. Washington, DC: Association of American Medical Colleges; 1997.
