OSCE and Clinical Skills Handbook PDF

As an attending physician working with a student for a week, you receive a form that asks you to evaluate the student's fund of knowledge, procedural skills, professionalism, interest in learning, and “systems-based practice.” You wonder which of these attributes you can reliably assess and how the data you provide will be used to further the student's education. You also wonder whether other tests of knowledge and competence that students must undergo before they enter practice are equally problematic. In one way or another, most practicing physicians are involved in assessing the competence of trainees, peers, and other health professionals. As the example above suggests, however, they may not be as comfortable using educational assessment tools as they are using more clinically focused diagnostic tests. This article provides a conceptual framework for, and a brief update on, commonly used and emerging methods of assessment, discusses the strengths and limitations of each method, and identifies several challenges in the assessment of physicians' professional competence and performance.

Competence and Performance

Elsewhere, Hundert and I have defined competence in medicine as “the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individuals and communities being served.” In the United States, the assessment of medical residents, and increasingly of medical students, is largely based on a model that was developed by the Accreditation Council for Graduate Medical Education (ACGME).

This model uses six interrelated domains of competence: medical knowledge, patient care, professionalism, communication and interpersonal skills, practice-based learning and improvement, and systems-based practice. Competence is not an achievement but rather a habit of lifelong learning; assessment plays an integral role in helping physicians identify and respond to their own learning needs. Ideally, the assessment of competence (what the student or physician is able to do) should provide insight into actual performance (what he or she does habitually when not observed), as well as the capacity to adapt to change, find and generate new knowledge, and improve overall performance. Competence is contextual, reflecting the relationship between a person's abilities and the tasks he or she is required to perform in a particular situation in the real world. Common contextual factors include the practice setting, the local prevalence of disease, the nature of the patient's presenting symptoms, the patient's educational level, and other demographic characteristics of the patient and of the physician.

By Katrina F. Hurley, MD, MHI, FRCPC (Author), with Peter Green and Rose P. Mengual (Contributors). In an effort to standardize the clinical evaluation of these skills, North American medical schools use Objective Structured Clinical Examinations (OSCEs).

Many aspects of competence, such as history taking and clinical reasoning, are also content-specific and not necessarily generalizable to all situations. A student's clinical reasoning may appear to be competent in areas in which his or her base of knowledge is well organized and accessible but may appear to be much less competent in unfamiliar territory. However, some important skills (e.g., the ability to form therapeutic relationships) may be less dependent on content. Competence is also developmental.

Habits of mind and behavior and practical wisdom are gained through deliberate practice and reflection on experience. Students begin their training at a novice level, using abstract, rule-based formulas that are removed from actual practice. At higher levels, students apply these rules differentially to specific situations. During residency, trainees make judgments that reflect a holistic view of a situation and eventually take diagnostic shortcuts based on a deeper understanding of underlying principles. Experts are able to make rapid, context-based judgments in ambiguous real-life situations and have sufficient awareness of their own cognitive processes to articulate and explain how they recognize situations in which deliberation is essential. Development of competence in different contexts and content areas may proceed at different rates.

Context and developmental level also interact. Although all clinicians may perform at a lower level of competence when they are tired, distracted, or annoyed, the competence of less experienced clinicians may be particularly susceptible to the influence of stress.

Goals of Assessment

Over the past decade, medical schools, postgraduate training programs, and licensing bodies have made new efforts to provide accurate, reliable, and timely assessments of the competence of trainees and practicing physicians.

Such assessments have three main goals: to optimize the capabilities of all learners and practitioners by providing motivation and direction for future learning, to protect the public by identifying incompetent physicians, and to provide a basis for choosing applicants for advanced training. Assessment can be formative (guiding future learning, providing reassurance, promoting reflection, and shaping values) or summative (making an overall judgment about competence, fitness to practice, or qualification for advancement to higher levels of responsibility). Formative assessments provide benchmarks to orient the learner who is approaching a relatively unstructured body of knowledge. They can reinforce students' intrinsic motivation to learn and inspire them to set higher standards for themselves. Although summative assessments are intended to provide professional self-regulation and accountability, they may also act as a barrier to further practice or training. A distinction should be made between assessments that are suitable only for formative use and those that have sufficient psychometric rigor for summative use. This distinction is especially important in selecting a method of evaluating competence for high-stakes assessments (i.e., licensing and certification examinations).

Correspondingly, summative assessments may not provide sufficient feedback to drive learning. However, because students tend to study what they expect to be tested on, summative assessment may influence learning even in the absence of feedback.

Assessment Methods

All methods of assessment have strengths and intrinsic flaws (Table 1, Commonly Used Methods of Assessment).

The use of multiple observations and several different assessment methods over time can partially compensate for flaws in any one method. Van der Vleuten describes five criteria for determining the usefulness of a particular method of assessment: reliability (the degree to which the measurement is accurate and reproducible), validity (whether the assessment measures what it claims to measure), impact on future learning and practice, acceptability to learners and faculty, and costs (to the individual trainee, the institution, and society at large).
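
Van der Vleuten's criteria are often read as multiplicative: an assessment that fails badly on any one criterion has little overall utility, however strong it is elsewhere. The short Python sketch below illustrates that reading; the five criteria come from the text, but the weighting scheme and all numeric scores are invented for illustration.

```python
# Toy sketch of van der Vleuten's utility criteria as a multiplicative
# index. The five criteria are named in the text; the exponent weights
# and the example scores are invented for illustration only.

CRITERIA = ("reliability", "validity", "educational_impact",
            "acceptability", "cost_efficiency")

def utility(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted multiplicative utility: a criterion scored near zero
    drags the whole index down, unlike a simple average."""
    u = 1.0
    for c in CRITERIA:
        u *= scores[c] ** weights[c]
    return u

# Hypothetical comparison of two formats (all numbers illustrative).
mcq  = dict(reliability=0.9, validity=0.6, educational_impact=0.4,
            acceptability=0.8, cost_efficiency=0.9)
osce = dict(reliability=0.8, validity=0.8, educational_impact=0.7,
            acceptability=0.7, cost_efficiency=0.4)
w = dict.fromkeys(CRITERIA, 1.0)  # equal emphasis on every criterion

print(f"MCQ utility:  {utility(mcq, w):.2f}")
print(f"OSCE utility: {utility(osce, w):.2f}")
```

Because the index is a product, a near-zero score on, say, acceptability collapses overall utility in a way a simple average would mask.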

Written Examinations

Written examination questions are typically classified according to whether they are open-ended or multiple choice. In addition, questions can be “context rich” or “context poor.” Questions with rich descriptions of the clinical context invite the more complex cognitive processes that are characteristic of clinical practice.

Conversely, context-poor questions can test basic factual knowledge but not its transferability to real clinical problems. Multiple-choice questions are commonly used for assessment because they can provide a large number of examination items that encompass many content areas, can be administered in a relatively short period, and can be graded by computer. These factors make the administration of the examination to large numbers of trainees straightforward and standardized. Formats that ask the student to choose the best answer from a list of possible answers are most commonly used. However, newer formats may better assess processes of diagnostic reasoning.

Key-feature items focus on critical decisions in particular clinical cases. Script-concordance items present a situation (e.g., vaginal discharge in a patient), add a piece of information (dysuria), and ask the examinee to assess the degree to which this new information increases or decreases the probability of a particular outcome (acute salpingitis due to Chlamydia trachomatis). Because the situations portrayed are ambiguous, script-concordance items may provide insight into clinical judgment in the real world. Answers to such items have been shown to correlate with the examinee's level of training and to predict future performance on oral examinations of clinical reasoning. Multiple-choice questions that are rich in context are difficult to write, and those who write them tend to avoid topics — such as ethical dilemmas or cultural ambiguities — that cannot be asked about easily. Multiple-choice questions may also create situations in which an examinee can answer a question by recognizing the correct option, but could not have answered it in the absence of options. This effect, called cueing, is especially problematic when diagnostic reasoning is being assessed, because premature closure — arriving at a decision before the correct diagnosis has been considered — is a common reason for diagnostic errors in clinical practice.
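
To make the scoring of script-concordance items concrete, here is a minimal Python sketch of the commonly described aggregate scoring method, in which an examinee earns partial credit in proportion to how many reference-panel experts chose the same response. The panel data, scale anchors, and worked numbers below are hypothetical.

```python
# Minimal sketch of aggregate scoring for a script-concordance item:
# credit for an answer is the number of reference-panel experts who
# chose it, divided by the count for the modal (most popular) answer.
# The panel responses here are invented for illustration.

from collections import Counter

# Likert anchors: -2 (much less likely) ... +2 (much more likely).
# Hypothetical panel of 10 experts judging how the new finding shifts
# the probability of the proposed diagnosis.
panel_responses = [+1, +1, +2, +1, 0, +1, +2, +1, 0, +1]

def sct_credit(examinee_answer: int, panel: list[int]) -> float:
    counts = Counter(panel)
    modal = max(counts.values())
    return counts.get(examinee_answer, 0) / modal

print(sct_credit(+1, panel_responses))  # 6/6 -> 1.0 (modal answer)
print(sct_credit(+2, panel_responses))  # 2/6 -> ~0.33 (partial credit)
print(sct_credit(-2, panel_responses))  # 0/6 -> 0.0
```

Partial credit for minority-but-defensible answers is what lets these items reward judgment in ambiguous situations rather than a single keyed response.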

Extended matching items (several questions, all with the same long list of possible answers), as well as open-ended short-answer questions, can minimize cueing. Structured essays also preclude cueing. In addition, they involve more complex cognitive processes and allow for more contextualized answers than do multiple-choice questions. When clear grading guidelines are in place, structured essays can be psychometrically robust.

Assessments by Supervising Clinicians

Supervising clinicians' observations and impressions of students over a specific period remain the most common tool used to evaluate performance with patients.

Students and residents most commonly receive global ratings at the end of a rotation, with comments from a variety of supervising physicians. Although subjectivity can be a problem in the absence of clearly articulated standards, a more important issue is that direct observation of trainees while they are interacting with patients is too infrequent.

Direct Observation or Video Review

The “long case” and the “mini–clinical-evaluation exercise” (mini-CEX) have been developed so that learners will be directly observed more frequently. In these assessments, a supervising physician observes while a trainee performs a focused history taking and physical examination over a period of 10 to 20 minutes. The trainee then presents a diagnosis and a treatment plan, and the faculty member rates the resident and may provide educational feedback. Structured exercises with actual patients under the observation of the supervising physician can have the same level of reliability as structured examinations using standardized patients yet encompass a wider range of problems, physical findings, and clinical settings.

Direct observation of trainees in clinical settings can be coupled with exercises that trainees perform after their encounters with patients, such as oral case presentations, written exercises that assess clinical reasoning, and literature searches. In addition, review of videos of encounters with patients offers a powerful means of evaluating and providing feedback on trainees' skills in clinical interactions.

Clinical Simulations

Standardized patients — actors who are trained to portray patients consistently on repeated occasions — are often incorporated into objective structured clinical examinations (OSCEs), which consist of a series of timed “stations,” each one focused on a different task. Since 2004, these examinations have been part of the U.S. Medical Licensing Examination that all senior medical students take. The observing faculty member or the standardized patient uses either a checklist of specific behaviors or a global rating form to evaluate the student's performance.

The checklist might include items such as “asked if the patient smoked” and “checked ankle reflexes.” The global rating form might ask for a rating of how well the visit was organized and whether the student was appropriately empathetic. A minimum of 10 stations, which the student usually visits over the course of 3 to 4 hours, is necessary to achieve a reliability of 0.85 to 0.90. Under these conditions, structured assessments with the use of standardized patients are as reliable as ratings of directly observed encounters with real patients and take about the same amount of time.
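
The trade-off between the number of stations and overall reliability is usually modeled with the Spearman-Brown prophecy formula. The Python sketch below shows that the 10-station figure is consistent with a modest per-station reliability; the single-station values used here are assumptions for illustration, not figures from the source. The inverse formula also suggests why, as noted later in the discussion of multisource feedback, dozens of patient surveys but only a handful of nurse ratings may be needed for a stable estimate.

```python
# Spearman-Brown prophecy formula: how the reliability of a composite
# score grows with the number of stations (or raters). The per-station
# reliabilities below are assumed for illustration.

def composite_reliability(r1: float, k: int) -> float:
    """Reliability of the mean of k parallel stations/raters:
    k*r1 / (1 + (k - 1)*r1)."""
    return k * r1 / (1 + (k - 1) * r1)

def n_needed(r1: float, target: float) -> float:
    """Inverse formula: how many stations/raters are needed to reach
    a target composite reliability."""
    return target * (1 - r1) / (r1 * (1 - target))

print(composite_reliability(0.37, 10))  # ~0.85 with 10 stations
print(composite_reliability(0.45, 10))  # ~0.89
print(n_needed(0.07, 0.80))             # ~53 raters if each is very noisy
print(n_needed(0.30, 0.80))             # ~9 raters if each is more reliable
```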

Interactions with standardized patients can be tailored to meet specific educational goals, and the actors who portray the patients can reliably rate students' performance with respect to history taking and physical examinations. Faculty members who observe encounters with standardized patients can offer additional insights on trainees' clinical judgment and the overall coherence of the history taking or physical examination. Unannounced standardized patients, who with the examinees' prior approval present incognito in actual clinical settings, have been used in health services research to evaluate examinees' diagnostic reasoning, treatment decisions, and communication skills. The use of unannounced standardized patients may prove to be particularly valuable in the assessment of higher-level trainees and physicians in practice. The use of simulation to assess trainees' clinical skills in intensive care and surgical settings is on the rise. Simulations involving sophisticated mannequins with heart sounds, respirations, oximeter readings, and pulses that respond to a variety of interventions can be used to assess how individuals or teams manage unstable vital signs.

Surgical simulation centers now routinely use high-fidelity computer graphics and hands-on manipulation of surgical instruments to create a multisensory environment. High-technology simulation is seen increasingly as an important learning aid and may prove to be useful in the assessment of knowledge, clinical reasoning, and teamwork.

Multisource (“360-Degree”) Assessments

Assessments by peers, other members of the clinical team, and patients can provide insight into trainees' work habits, capacity for teamwork, and interpersonal sensitivity. Although there are few published data on outcomes of multisource feedback in medical settings, several large programs are being developed, including one for all first- and second-year house officers in the United Kingdom and another for all physicians undergoing recertification in internal medicine in the United States. Multisource feedback is most effective when it includes narrative comments as well as statistical data, when the sources are recognized as credible, when the feedback is framed constructively, and when the entire process is accompanied by good mentoring and follow-up. Recent studies of peer assessments suggest that when trainees receive thoughtful ratings and comments by peers in a timely and confidential manner, along with support from advisers to help them reflect on the reports, they find the process powerful, insightful, and instructive.
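
As a concrete, entirely hypothetical illustration of how such multisource data might be organized, the sketch below keeps per-group statistics and narrative comments side by side, since the text notes that feedback works best when both are present. All field names, ratings, and comments are invented.

```python
# Hypothetical shape of a multisource ("360-degree") feedback report:
# numeric ratings summarized per rater group, with narrative comments
# kept alongside the statistics. All data here are invented.

from statistics import mean

ratings = {  # 1-5 scale, grouped by source
    "peers":    [4, 5, 4, 4],
    "nurses":   [3, 4, 4],
    "patients": [5, 5, 4, 5, 5],
}
comments = {
    "peers":  ["Reliable on call", "Could delegate more"],
    "nurses": ["Responds quickly to pages"],
}

report = {
    group: {"n": len(vals), "mean": round(mean(vals), 2),
            "comments": comments.get(group, [])}
    for group, vals in ratings.items()
}

for group, summary in report.items():
    print(group, summary)
```

Keeping the source attribution explicit also makes it possible to weigh groups differently, since, as noted below, different rater groups require very different numbers of reports for a reliable estimate.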

Peer assessments have been shown to be consistent regardless of the way the raters are selected. Such assessments are stable from year to year and predict subsequent class rankings as well as subsequent ratings by supervisors. Peer assessments depend on trust and require scrupulous attention to confidentiality.

Otherwise they can be undermining, destructive, and divisive. Although patients' ratings of clinical performance are valuable in principle, they pose several problems. As many as 50 patient surveys may be necessary to achieve satisfactory reliability. Patients who are seriously ill often do not complete surveys; those who do tend to rate physicians less favorably than do patients who have milder conditions. Furthermore, patients are not always able to discriminate among the elements of clinical practice, and their ratings are typically high.

These limitations make it difficult to use patient reports as the only tool for assessing clinical performance. However, ratings by nurses can be valuable. Such ratings have been found to be reliable with as few as 6 to 10 reports, and they correlate with both patients' and faculty members' ratings of the interpersonal aspects of trainees' performance. Fundamental cognitive limitations in the ability of humans to know themselves as others see them restrict the usefulness of self-assessment. Furthermore, rating oneself on prior clinical performance may not achieve another important goal of self-assessment: the ability to monitor oneself from moment to moment during clinical practice. A physician must possess this ability in order to meet patients' changing needs, to recognize the limits of his or her own competence, and to manage unexpected situations.

Portfolios

Portfolios include documentation of and reflection about specific areas of a trainee's competence.

This evidence is combined with self-reflection. In medicine, just as in the visual arts, portfolios demonstrate a trainee's development and technical capacity.

They can include chart notes, referral letters, procedure logs, videotaped consultations, peer assessments, patient surveys, literature searches, quality-improvement projects, and any other type of learning material. Portfolios also frequently include self-assessments, learning plans, and reflective essays. For portfolios to be maximally effective, close mentoring is required in the assembly and interpretation of the contents; considerable time can be expended in this effort. Portfolios are most commonly used in formative assessments, but their use for summative evaluations and high-stakes decisions about advancement is increasing.

New Domains of Assessment

There are several domains in which assessment is in its infancy and remains problematic. Quality of care and patient safety depend on effective teamwork, and teamwork training is emphasized as an essential element of several areas of competence specified by the ACGME, yet there is no validated method of assessing teamwork. Experts do not agree on how to define professionalism — let alone how best to measure it.

Dozens of scales that rate communication are used in medical education and research, yet there is little evidence that any one scale is better than another; furthermore, the experiences that patients report often differ considerably from ratings given by experts.

Multimethod and Longitudinal Assessment

The use of multiple methods of assessment can overcome many of the limitations of individual assessment formats. Variation of the clinical context allows for broader insights into competence, the use of multiple formats provides greater variety in the areas of content that are evaluated, and input from multiple observers provides information on distinct aspects of a trainee's performance.

Longitudinal assessment avoids excessive testing at any one point in time and serves as the foundation for monitoring ongoing professional development.

Standardization of Assessment

Although accrediting organizations specify broad areas that the curriculum should cover and assess, for the most part individual medical schools make their own decisions about methods and standards of assessment. This model may have the advantage of ensuring consistency between the curriculum and assessment, but it also makes it difficult to compare students across medical schools for the purpose of subsequent training. The ideal balance between nationally standardized and school-specific assessment remains to be determined. Furthermore, within a given medical school, all students may not require the same package of assessments — for example, initial screening examinations may be followed by more extensive testing for those who have difficulties.

Assessment and Learning

It is generally acknowledged that assessment drives learning; however, assessment can have both intended and unintended consequences. Students study more thoughtfully when they anticipate certain examination formats, and changes in the format can shift their focus to clinical rather than theoretical issues.

Assessment by peers seems to promote professionalism, teamwork, and communication. The unintended effects of assessment include the tendency for students to cram for examinations and to substitute superficial knowledge for reflective learning.

Assessment of Expertise

The assessment of trainees and physicians who have higher levels of expertise presents particular challenges. Expertise is characterized by unique, elaborated, and well-organized bodies of knowledge that are often revealed only when they are triggered by characteristic clinical patterns. Thus, experts who are unable to access their knowledge in artificial testing situations but who make sound judgments in practice may do poorly on some tests that are designed to assess communication skills, knowledge, or reasoning.

Furthermore, clinical expertise implies the practical wisdom to manage ambiguous and unstructured problems, balance competing explanations, avoid premature closure, note exceptions to rules and principles, and — even when under stress — choose one of the several courses of action that are acceptable but imperfect. Testing either inductive thinking (the organization of data to generate possible interpretations) or deductive thinking (the analysis of data to discern among possibilities) in situations in which there is no consensus on a single correct answer presents formidable psychometric challenges.

Assessment and Future Performance

The evidence that assessment protects the public from poor-quality care is both indirect and scarce; it consists of a few studies that show correlations between assessment programs that use multiple methods and relatively crude estimates of quality such as diagnostic testing, prescribing, and referral patterns. Correlating assessment with future performance is difficult not only because of inadequacies in the assessment process itself but also because relevant, robust measures of outcome that can be directly attributed to the effects of training have not been defined.

Current efforts to measure the overall quality of care include patient surveys and analyses of institutional and practice databases. When these new tools are refined, they may provide a more solid foundation for research on educational outcomes.

Conclusions

Considering all these challenges, current assessment practices would be enhanced if the principles summarized in Table 2 (Principles of Assessment) were kept clearly in mind. The content, format, and frequency of assessment, as well as the timing and format of feedback, should follow from the specific goals of the medical education program.

The various domains of competence should be assessed in an integrated, coherent, and longitudinal fashion with the use of multiple methods and provision of frequent and constructive feedback. Educators should be mindful of the impact of assessment on learning, the potential unintended effects of assessment, the limitations of each method (including cost), and the prevailing culture of the program or institution in which the assessment is occurring. Assessment is entering every phase of professional development. It is now used during the medical school application process, at the start of residency training, and as part of the “maintenance of certification” requirements that several medical boards have adopted. Multiple methods of assessment implemented longitudinally can provide the data that are needed to assess trainees' learning needs and to identify and remediate suboptimal performance by clinicians.

Decisions about whether to use formative or summative assessment formats, how frequently assessments should be made, and what standards should be in place remain challenging. Educators also face the challenge of developing tools for the assessment of qualities such as professionalism, teamwork, and expertise that have been difficult to define and quantify.

References

1. Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA 2002;287:226-235.
2. Batalden P, Leach D, Swing S, Dreyfus H, Dreyfus S. General competencies and accreditation in graduate medical education. Health Aff (Millwood) 2002;21:103-111.
3. Leach DC. Competence is a habit. JAMA 2002;287:243-244.
4. Fraser SW, Greenhalgh T. Coping with complexity: educating for capability. BMJ 2001;323:799-803.
5. Klass D. Reevaluation of clinical competency. Am J Phys Med Rehabil 2000;79:481-486.
6. Bordage G, Zacks R. The structure of medical knowledge in the memories of medical students and general practitioners: categories and prototypes. Med Educ 1984;18:406-416.
7. Gruppen LD, Frohna AZ. Clinical reasoning. In: Norman GR, Van der Vleuten CP, Newble DI, eds. International handbook of research in medical education. Dordrecht, the Netherlands: Kluwer Academic, 2002:205-30.
8. Epstein RM, Dannefer EF, Nofziger AC, et al. Comprehensive assessment of professional competence: the Rochester experiment. Teach Learn Med 2004;16:186-196.
9. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med 2004;79:Suppl:S70-S81.
10. Epstein RM. Mindful practice. JAMA 1999;282:833-839.
11. Schon DA. Educating the reflective practitioner. San Francisco: Jossey-Bass, 1987.
12. Epstein RM. Mindful practice in action. Cultivating habits of mind. Fam Syst Health 2003;21:11-17.
13. Dreyfus HL. On the Internet (thinking in action). New York: Routledge, 2001.
14. Eraut M. Learning professional processes: public knowledge and personal experience. In: Eraut M, ed. Developing professional knowledge and competence. London: Falmer Press, 1994:100-22.
15. Shanafelt TD, Bradley KA, Wipf JE, Back AL. Burnout and self-reported patient care in an internal medicine residency program. Ann Intern Med 2002;136:358-367.
16. Borrell-Carrio F, Epstein RM. Preventing errors in clinical practice: a call for self-awareness. Ann Fam Med 2004;2:310-316.
17. Leung WC. Competency based medical training: review. BMJ 2002;325:693-696.
18. Friedman Ben-David M. The role of assessment in expanding professional horizons. Med Teach 2000;22:472-477.
19. Sullivan W. Work and integrity: the crisis and promise of professionalism in America. San Francisco: Jossey-Bass, 2005.
20. Schuwirth L, van der Vleuten C. Merging views on assessment. Med Educ 2004;38:1208-1210.
21. Wass V, Van der Vleuten C, Shatzer J, Jones R. Assessment of clinical competence. Lancet 2001;357:945-949.
22. Van der Vleuten CPM. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ 1996;1:41-67.
23. Schuwirth LW, van der Vleuten CP. Different written assessment methods: what can be said about their strengths and weaknesses? Med Educ 2004;38:974-979.
24. Schuwirth LW, Verheggen MM, van der Vleuten CP, Boshuizen HP, Dinant GJ. Do short cases elicit different thinking processes than factual knowledge questions do? Med Educ 2001;35:348-356.
25. Case S, Swanson D. Constructing written test questions for the basic and clinical sciences. Philadelphia: National Board of Medical Examiners, 2000.
26. Farmer EA, Page G. A practical guide to assessing clinical decision-making skills using the key features approach. Med Educ 2005;39:1188-1194.
27. Charlin B, Roy L, Brailovsky C, Goulet F, van der Vleuten C. The Script Concordance test: a tool to assess the reflective clinician. Teach Learn Med 2000;12:189-195.
28. Brailovsky C, Charlin B, Beausoleil S, Cote S, Van der Vleuten C. Measurement of clinical reflective capacity early in training as a predictor of clinical reasoning performance at the end of residency: an experimental study on the script concordance test. Med Educ 2001;35:430-436.
29. Frederiksen N. The real test bias: influences of testing on teaching and learning. Am Psychol 1984;39:193-202.
30. Schuwirth LW.

"This will serve as an excellent guide for medical and allied health students when preparing for the OSCE because it places the content and how to perform various procedures before the cases." (Doody's Book Review Service)

Overall, this book didn't really stand out as spectacular for me. The book is very basic: black text with occasional use of green in the layout.

The content is also nothing out of the ordinary compared with the other clinical skills books on the market. However, this book does a really good job of summarizing OSCE stations and OSCE topics in a format that's really exam-oriented. There are loads of mnemonics to help with memorizing things to look for in any particular station (both clinical and history-taking). I also particularly liked the checklists written in each section.

If you like learning or revising clinical skills with a 'tick-the-box' method of getting OSCE marks, this book is amazing for helping you remember cues for the exam. Otherwise, I would prefer learning from other clinical skills books and use this one as a revision guide, or when I want to revise examinations for specific diseases/signs/symptoms.