EDUCATION FORUM | Year : 2023 | Volume : 69 | Issue : 1 | Page : 35-40

Competency-based medical education and the McNamara fallacy: Assessing the important or making the assessed important?

T Singh1, N Shah2
1 Center for Health Professions Education, Adesh University, Bathinda, Punjab, India
2 Department of Psychiatry, Smt. NHL Municipal Medical College and SVPIMSR, Ahmedabad, Gujarat, India

Date of Submission: 15-Apr-2022 | Date of Decision: 06-Jun-2022 | Date of Acceptance: 15-Jun-2022 | Date of Web Publication: 12-Oct-2022

Correspondence Address:
Dr. N Shah
Department of Psychiatry, Smt. NHL Municipal Medical College and SVPIMSR, Ahmedabad, Gujarat
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jpgm.jpgm_337_22



The McNamara fallacy refers to the tendency to focus on numbers, metrics, and quantifiable data while disregarding meaningful qualitative aspects. This paper reviews the existence of such a fallacy in medical education. Competency-based medical education (CBME) has been introduced in India with the goal of producing Indian Medical Graduates competent in five different roles – Clinician, Communicator, Leader and member of the health care team, Professional, and Lifelong learner. If we focus only on numbers and structure to assess the competencies pertaining to these roles, we will fall prey to the McNamara fallacy. To assess these roles in the real sense, we need to embrace qualitative assessment methods and appreciate their value in competency-based education. This can be done by using various workplace-based assessments, choosing tools for their educational impact rather than their psychometric properties, using narratives and descriptive evaluation, giving grades instead of marks, and improving the quality of the questions asked in various examinations. There are challenges in adopting qualitative assessment: moving past the objective–subjective debate, developing expertise in conducting and documenting such assessments, and adding the rigor of qualitative research methods to enhance their credibility. The perspective on assessment thus needs a paradigm shift – we need to assess the important rather than just making the assessed important; this will be crucial for the success of the CBME curriculum.

Keywords: Assessment, competency, McNamara fallacy, narrative


How to cite this article:
Singh T, Shah N. Competency-based medical education and the McNamara fallacy: Assessing the important or making the assessed important? J Postgrad Med 2023;69:35-40
How to cite this URL:
Singh T, Shah N. Competency-based medical education and the McNamara fallacy: Assessing the important or making the assessed important? J Postgrad Med [serial online] 2023 [cited 2023 Jan 11];69:35-40. Available from: https://www.jpgmonline.com/text.asp?2023/69/1/35/358459

 :: Introduction

Robert McNamara studied at Harvard Business School and later worked for the Ford Motor Company, helping it recover from losses and reach great heights. He achieved this through meticulous data analysis. He and his team were known as the “whiz kids” who could turn around any situation with suggestions based on data analysis. Given his track record, he was appointed the US Secretary of Defense during the Vietnam War.[1]

McNamara applied the same mathematical models to the analysis of the war. He considered, for example, the “dead body counts” on the enemy side versus his own, and the area under occupation, as signs of winning the war, while turning a blind eye to the actual dynamics of the situation: the spirit of the people fighting the war, their resistance and guerrilla warfare, the feelings of the rural Vietnamese people, and the local forest conditions. As a result, for a long time he kept conveying to the USA that it was winning the war, although the ground reality in Vietnam was entirely different.[1] Many years have passed, but this style of thinking continues to haunt us.

Several years later, Daniel Yankelovich, a sociologist, coined the phrase “the McNamara fallacy,” also known as the quantitative fallacy: relying heavily on metrics and numbers to draw conclusions while ignoring ground-level realities.[2] He described four stages of belief: measuring whatever is easily measurable, disregarding what cannot be easily measured, considering what cannot be measured unimportant, and finally treating it as non-existent! The problem is exacerbated whenever we deal with issues that are non-linear, complex, and unpredictable. Like war, medicine[3] and medical education are classic examples of non-linearity, complexity, and unpredictability. Medical education in the psychometric era appears to be highly influenced by McNamara's thinking. [Table 1] depicts how, for various decisions, we depend on numbers while ignoring important aspects. Even though we have seen the fallibility of such a practice, the tendency to believe in numbers seems to be the norm.

Table 1: A glimpse into the focus on numbers in the medical education system


Competency-based medical education (CBME): A new era

CBME was introduced in India in 2019[7] with the goal of producing an Indian Medical Graduate (IMG) who is competent in fulfilling the health needs of society. Five roles of the IMG have been defined: Clinician, Communicator, Leader and member of the health care team, Professional, and Lifelong learner.[7] The global and subject-level competencies have been listed,[8] and the curriculum provides specific learning opportunities for the various roles, for example, the learner-doctor program in clinical postings (to function as a member of the health care team), self-directed learning (to impart skills for lifelong learning), and the Attitude, Ethics, and Communication module[9] that spans the curriculum.

However, defining roles and competencies and providing learning opportunities alone will not lead us to the goal if our assessment is not “fit for purpose.” Assessment in CBME is concerned not just with the product but also with the process; not just with proving but also with improving. The major obstacle here is going to be our overdependence on numbers – making only the measurable important rather than measuring the important. Our obsession with timetables, teaching hours, proportions of integration, checklists, and marks illustrates this aptly.

Although psychometric rigor is presumed to have advantages, such as ensuring fairness, standardization, and comparability,[10] it is notable that of the five envisaged roles of the IMG, at least four (leader, lifelong learner, professional, and communicator) are not amenable to measurement by numbers. If we continue to assess the numeric way, we will end up assessing only a certain part of the student's learning, and that too inaccurately. Just as McNamara, counting dead bodies, dwelt in the illusion that the war was being won, we too may assume, by the yardstick of numbers, that we are producing competent IMGs, although the reality may manifest itself only many years down the line.

The consequences of being obsessed with numbers: A deep trap

Let us look at what exactly happens when the entire focus of assessment is on generating marks: we undermine the richness of information and reduce it to plain, hollow numbers. Schuwirth and van der Vleuten[11] have explained very well how we transition from rich information about students' performance to mere dichotomous pass/fail decisions.

In addition, the focus on structure, standardization, and uniformity hinders the assessment of competency in the real sense. Competency is the “habitual and judicious use of knowledge, skills, attitudes and communication for the welfare of a patient.”[12] Competency assessment is therefore more than stacking marks in individual components; there is a vast difference, for example, between inserting a nasogastric tube in a mannequin (skill) and doing so in a struggling child brought in by anxious parents after ingesting a poison (competency). Although both skill and competency literally mean the “ability” to do something, educationally, competency is contextual and contributes to positive patient outcomes. It includes the use of one's values and responsiveness.[12] Competency assessment must thus focus, in addition to technical details, on context and outcome, both of which vary across situations. It must be much more than a checklist-based or structured exercise. The value of checklists, however, can be improved by providing narrative comments, especially for skills and competencies that are not objectively measurable.[13]

The focus on numbers alone sends the wrong message to students as well, misdirecting their efforts from actual learning to collecting marks and promoting test-wiseness and performance orientation.[14] There also remains a risk of cheating or manipulation. Sometimes the numbers are intentionally manipulated (e.g., using an Item Response Theory framework and/or removing items to improve reliability). Unfortunately, manipulating the numbers does not change the reality.[15],[16]

That numbers can be manipulated is one problem, but believing that numbers, if accurate, give us a complete picture is a bigger one. If that were so, there would be no need for a “Discussion” section in research papers, and everyone would comprehend and conclude exactly what the authors wanted to convey![16]

Even so, the commitment to numbers is so deeply ingrained in our psyche that when it becomes apparent that numbers capture only certain aspects of the desired qualities, instead of exploring other ways of assessing, these other aspects are somehow measured and forcibly fitted into numbers. This is called “objectification”: a process by which abstract concepts are treated as if they were physical things.[17],[18] This makes matters worse, as it only gives rise to a higher level of test-wiseness, with students focusing solely on scoring well and moving farther away from the real attainment of competencies.

This exemplifies Goodhart's law,[19] which states that when a measure becomes a target, it ceases to be a good measure. The original purpose of measurement is to find out something worthwhile, but the fact that something is going to be measured gives rise to distortions and corruptions to ensure that the measurement “looks good” or that the “criteria are somehow met.” This behavior defeats the original purpose of measurement.[20] The focus on numbers turns a decision-making issue into a measurement issue. These aspects of focusing on numbers are depicted in [Figure 1].

How to redeem ourselves from this fallacy in medical education

Consider a very common scenario of a girl being taught how to cook by her mother. After the initial few sessions, the mother stands with the daughter, observing her make the food. She does not just leave the kitchen saying “4 out of 10”; rather, she tastes the food and tells her to add a bit of salt, reduce the flame, boil it longer, and serve with garnishing. A father does something similar while coaching his daughter to drive a car. We have all learnt and taught many things very well this way, yet we fall back on numbers alone when teaching medicine to students.

We know that what is not assessed is not learnt. The need to ensure that students become competent in all five roles cannot be overstated. To believe that we can assess the role of clinician by various numeric assessments would be level one of the fallacy – measuring all that can be measured and believing that it suffices. The other roles – communicator, leader, lifelong learner, and professional – are not amenable to measurement (at least with our current understanding). To disregard these roles because of their unquantifiability, to consider them unimportant, or to treat them as non-existent would be falling prey to levels two, three, and four of the fallacy, respectively, which is dangerous. These roles are major contributors to the quality of patient care and are often decisive of success or failure in practice. To label them “core” but not assess them, or to treat them as “non-core” as far as assessment goes, is a sure recipe for killing the competencies corresponding to these roles. Some assessment – even if by narratives – is better than no assessment, as it conveys the importance of these competencies to the students. Fortunately, the role of such assessments in CBME is increasingly being recognized.[21]

The new curriculum provides many opportunities to move beyond numbers in the form of formative and internal assessment. The grades/scores in these assessments are not added to the final scores, and this opens many untrodden paths, allowing a meaningful process of collecting rich information about students' learning and then guiding them appropriately to facilitate the attainment of competencies. Many medical schools in India are working in this direction. One example is a case-based blended learning ecosystem in an Indian medical school, where undergraduate students post reflections on the cases they have seen to a blog; these reflections are used for formative assessment, contributing to internal marks.[22]

Our initial perception of anything is qualitative rather than numerical, and examiners first make a qualitative assessment before converting it to marks. When the student then converts these marks back into meaningful information to understand his or her performance, there is a loss of fidelity. It would probably make more sense to convey the qualitative information directly in the first place rather than go through this dual conversion.[23]

The choice of tools should be based on their educational impact rather than on psychometric properties alone. With some tools, we may be able to generate and defend numbers (marks) very easily, but they promote rote or surface learning.[24] Tools such as the mini-Clinical Evaluation Exercise, on the other hand, have great educational impact because of the expert feedback based on direct observation in a real-life setting.[25] A toolbox of assessment methods covering all the roles of an IMG has recently been published.[26]

Assessment in artificial or standardized settings may be good in the initial phase of learning or for practicing skills; however, the real-life practice scenario is never a “standard” one. Hence, it is important to assess performance in real-life scenarios – a noisy OPD, an uncooperative patient, an inconsistent history, confusing laboratory reports, a diagnostic dilemma, and so on. How a student handles such situations is a matter of qualitative judgement. Such assessment can be done using the various workplace-based assessment tools.[27] Because assessment with these tools happens in unstandardized workplace situations, a narrative provides better data for feedback and learning. It also helps students become reflective learners, which in turn improves their performance – unlike a mark sheet or score that tells only whether a student passed or not.[28]

The new curriculum mandates the use of a logbook and recommends the use of a portfolio.[29] Maintaining such documents and engaging in authentic self-assessment would provide opportunities for meaningful teacher–student interactions, optimizing students' learning. Assessing the reflections recorded in students' portfolios would give teachers deep insight into their learning trajectories and needs.[30]

To redeem ourselves from the McNamara fallacy, we shall have to embrace qualitative methods of assessment and embed them in the system. There is plenty of evidence that narratives without grades/marks have a better influence on student learning.[31] Individualized narratives seem to work even better.[32]

[Figure 2] lists the key points we need to keep in mind to avoid becoming victims of the McNamara fallacy.

Challenges and the way forward

Incorporating a qualitative component into an assessment system that has relied heavily on numbers for decades is not going to be easy. The following are important points to consider:

Getting past the subjective–objective debate

An inevitable entrant to any discussion on non-numerical assessment is the debate on subjectivity and objectivity. We do not wish to enter that debate here; enough has been written on the role and utility of expert subjective judgments and their equivalence to objective measures.[21],[33]

Conducting a good qualitative assessment

Good qualitative assessment requires considerable time and resources. Assessors shall have to be trained for the task. They will need expertise not only in the competency being assessed but also in the observational and recording tasks intrinsic to the assessor's role.[34] They need to keep in mind the goals of assessment and be familiar with the various tools used for the purpose. Students shall have to be sensitized too, for which the foundation course can provide a useful opportunity.

Qualitative assessments may involve observing many tasks performed in the workplace and/or reviewing large amounts of data in a portfolio. Technology may help with data management, for example, videotaping clinical consultations or maintaining e-portfolios instead of physical copies.[35]

From marks to grades

We are so accustomed to giving marks that shifting to grades may be a challenge. The problem with marks is that some examiners may use only the 45–60 part of the scale, whereas others may use 10–70. This shrinking or expanding of the scale results in “contamination” of marks.[23] Using grades in place of marks is an effective intervention to mitigate the problem of numbers. It would make the interpretation of results clearer and allow comparability across subjects, students, and universities.[36]

Developing a vocabulary and documenting qualitative assessments

It is important to develop a vocabulary to describe advancement in competence. For example, the Uniformed Services University has the “RIME” framework, which describes levels of clinical competence as Reporter, Interpreter, Manager, and Educator.[35] The use of rubrics may also help in this regard. Virk et al.[21] have described the characteristics of an effective rubric and how to construct rubrics to enhance the rigor and acceptability of subjective assessments. Documentation of qualitative assessments must include the rules followed, the evidence, the thought process that ensued, and the reasons for the decisions made.[34],[37] This is especially important when qualitative assessments are used to make high-stakes decisions.

Increasing the acceptability of qualitative assessments

Qualitative assessments can capture nuances that even the most psychometrically sophisticated and mathematically robust assessments cannot.[21],[33] Yet it is “numbers,” despite all their flaws, and “standardization,” despite its significantly higher cost,[38] that have so far held unconditional appeal for all stakeholders. This perspective needs to change. Narrative comments alone have been shown to be an extremely reliable means of assessing residents in a competency-based internal medicine program. In fact, rich and meaningful comments can reveal weaknesses not otherwise picked up by various “scores,” thereby helping to overcome the well-described phenomenon of “failure to fail”[39] and giving learners more guidance on how to improve.[40]

To make qualitative methods more acceptable, Valentine et al.[41] suggested that the lens through which an assessment is judged appropriate should change from objectivity to fairness. Additionally, to add rigor to qualitative assessments, we shall have to apply the criteria of qualitative research methods. This can be achieved through practical steps such as clarifying the intent of the assessment and its justification, considering the reflexivity of assessors, adopting an appropriate sampling strategy, ensuring richness of raw data and synthesis, using multiple and different settings and observers, using a clear methodological framework, and demonstrating the impact on the learner, the assessor, and the system.[42]

 :: Conclusion: The final words

Robert McNamara died an old man in 2009, having had the opportunity to reflect on his long life. His obituary in The Economist[43] records:

“He was haunted by the thought that amid all the objective-setting and evaluating, the careful counting and the cost-benefit analysis, stood the ordinary human beings. They simply behaved unpredictably.”

We are at a crucial moment in history. We have already taken the plunge into the competency-based model without making a corresponding change in assessment. We need a paradigm shift in our perspective on assessment: to steer clear of number dependency and to evaluate the dynamics of students' learning and attainment of competencies qualitatively. We need a system in which such assessment is documented, defended, and considered not only fair, transparent, and credible but also highly valuable. This will be critical to the success of CBME. At the same time, this should not be seen as a plea to discard numbers; they have their place. It is only an attempt to emphasize the value of narratives in assessment and to use them, especially in formative and internal assessment.

It is worthwhile remembering that “not everything that can be counted counts, and not everything that counts can be counted.”[44]

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

 

 :: References
1. Rosenzweig P. Robert S. McNamara and the evolution of modern management. Harvard Business Review 2010;91:87-93. Available from: https://hbr.org/2010/12/robert-s-mcnamara-and-the-evolution-of-modern-management. [Last accessed on 2022 Jul 29].
2. Yankelovich D. Corporate Priorities: A Continuing Study of the New Demands on Business. Stamford, CT: Yankelovich Inc; 1972.
3. O'Mahony S. Medicine and the McNamara fallacy. J R Coll Physicians Edinb 2017;47:281-7.
4. Singh T, Modi JN, Kumar V, Dhaliwal U, Gupta P, Sood R. Admission to undergraduate and postgraduate medical courses: Looking beyond single entrance examinations. Indian Pediatr 2017;54:231-8.
5. Medical Council of India. Minimum Qualifications for Teachers in Medical Institutions Regulations, 1998 (amended up to 8th June, 2017). New Delhi. Available from: https://www.nmc.org.in/wp-content/uploads/2017/10/Teachers-Eligibility-Qualifications-Rgulations-1998.pdf. [Last accessed on 2022 Apr 09].
6. National Medical Commission. Standard assessment form for AY 21-22. Available from: https://www.nmc.org.in/AppForms_nmc/UGforms/1_STANDARD_SAF_A_&_B_UG_FINAL.pdf. [Last accessed on 2022 Apr 09].
7. Regulations on Graduate Medical Education (Amendment), 2019. Board of Governors in Super-session of Medical Council of India. Amendment Notification. The Gazette of India, Extraordinary, Part III, Section 4. New Delhi; Nov 6, 2019. Available from: https://www.nmc.org.in/ActivitiWebClient/open/getDocument?path=/Documents/Public/Portal/Gazette/GME-06.110.2019.pdf. [Last accessed on 2022 Apr 09].
8. Medical Council of India. Competency Based Undergraduate Curriculum for the Indian Medical Graduate. New Delhi; 2019. Available from: https://www.nmc.org.in/wp-content/uploads/2020/01/UG-Curriculum-Vol-I.pdf. [Last accessed on 2022 Mar 15].
9. Medical Council of India. Attitude, Ethics and Communication Competencies for the Indian Medical Graduate. 2018. Available from: https://www.nmc.org.in/wp-content/uploads/2020/01/AETCOM_book.pdf. [Last accessed on 2022 Apr 09].
10. van der Vleuten CP, Norman GR, De Graaff E. Pitfalls in the pursuit of objectivity: Issues of reliability. Med Educ 1991;25:110-8.
11. Schuwirth LW, van der Vleuten CP. Current assessment in medical education: Programmatic assessment. J Appl Test Technol 2019;20:2-10.
12. Ten Cate O, Schumacher DJ. Entrustable professional activities versus competencies and skills: Exploring why different concepts are often conflated. Adv Health Sci Educ Theory Pract 2022;27:491-9.
13. van Nuland M, van den Noortgate W, van der Vleuten CP, Jo G. Optimizing the utility of communication OSCEs: Omit station-specific checklists and provide students with narrative feedback. Patient Educ Couns 2012;88:106-12.
14. Kool A, Mainhard T, Brekelmans M, van Beukelen P, Jaarsma D. Goal orientations of health profession students throughout the undergraduate program: A multilevel study. BMC Med Educ 2016;16:100.
15. Saiyad S, Bhagat P, Virk A, Mahajan R, Singh T. Changing assessment scenarios: Lessons for changing practice. Int J Appl Basic Med Res 2021;11:206-13.
16. Schuwirth LW, Ash J. Assessing tomorrow's learners: In competency-based education only a radically different holistic method of assessment will work. Six things we could forget. Med Teach 2013;35:555-9.
17. Eva KW, Hodges BD. Scylla or Charybdis? Can we navigate between objectification and judgement in assessment? Med Educ 2012;46:914-9.
18. Norman GR, van der Vleuten CP, De Graaff E. Pitfalls in the pursuit of objectivity: Issues of validity, efficiency and acceptability. Med Educ 1991;25:119-26.
19. Goodhart C. Problems of monetary management: The UK experience. In: Papers in Monetary Economics. Sydney: Reserve Bank of Australia; 1975.
20. Mattson C, Bushardt RL, Artino AR Jr. When a measure becomes a target, it ceases to be a good measure. J Grad Med Educ 2021;13:2-5.
21. Virk A, Joshi A, Mahajan R, Singh T. The power of subjectivity in competency-based assessment. J Postgrad Med 2020;66:200-5.
22. Department of General Medicine. Bimonthly formative and summative assessment of 3rd semester MBBS batch. Available from: https://generalmedicinedepartment.blogspot.com/2021/07/bimonthly-formative-and-summative.html?m=1. [Last accessed on 2022 May 29].
23. IGNOU. Developing valid grading procedures. Unit 15. Available from: https://egyankosh.ac.in/bitstream/123456789/7317/1/Unit-15.pdf. [Last accessed on 2022 Apr 13].
24. Rooney PJ, Allen SW, Dodd P, Norman GR, Powles AC, Rosenfeld J, et al. Can objective assessment of clinical skills be contained within the small group setting? In: Research in Medical Education, 1989: Proceedings of the Twenty-Eighth Annual Conference. Washington, DC: Association of American Medical Colleges; 1989. p. 1-273.
25. Lörwald AC, Lahner FM, Nouns ZM, Berendonk C, Norcini J, Greif R, et al. The educational impact of Mini-Clinical Evaluation Exercise (Mini-CEX) and Direct Observation of Procedural Skills (DOPS) and its association with implementation: A systematic review and meta-analysis. PLoS One 2018;13:e0198009.
26. Singh T, Saiyad S, Virk A, Kalra J, Mahajan R. Assessment toolbox for Indian medical graduate competencies. J Postgrad Med 2021;67:80-90.
27. Singh T, Sood R. Workplace-based assessment: Measuring and shaping clinical learning. Natl Med J India 2013;26:42-6.
28. Humphrey-Murto S, Wood TJ, Ross S, Tavares W, Kvern B, Sidhu R, et al. Assessment pearls for competency-based medical education. J Grad Med Educ 2017;9:688-91.
29. Medical Council of India. Guidelines for Preparing Logbook for Undergraduate Medical Education Program. New Delhi. Available from: https://www.nmc.org.in/wp-content/uploads/2020/08/Logbook-Guidelines_17.01.2020.pdf. [Last accessed on 2022 Apr 09].
30. Shah N, Singh T. The promising role of the logbook and portfolio in the new competency driven medical curriculum in India. South East Asian J Med Educ 2021;15:18-25.
31. Lipnevich AA, Smith JK. Response to assessment feedback: The effects of grades, praise, and source of information. ETS Res Rep Series 2008:i-57. Available from: https://files.eric.ed.gov/fulltext/EJ1111274.pdf. [Last accessed on 2022 Jul 29].
32. Stewart LG, White MA. Teacher comments, letter grades, and student performance: What do we really know? J Educ Psychol 1976;68:488-500.
33. Rotthoff T. Standing up for subjectivity in the assessment of competencies. GMS J Med Educ 2018;35:Doc29.
34. Lockyer J, Carraccio C, Chan MK, Hart D, Smee S, Touchie C, et al. Core principles of assessment in competency-based medical education. Med Teach 2017;39:609-16.
35. Pangaro LN. Investing in descriptive evaluation: A vision for the future of assessment. Med Teach 2000;22:478-81.
36. Singh T, Gupta P, Singh D. From marks to grades. In: Singh T, Gupta P, Singh D, editors. Principles of Medical Education. 5th ed. New Delhi: Jaypee Brothers; 2022.
37. van Enk A, Ten Cate O. “Languaging” tacit judgment in formal postgraduate assessment: The documentation of ad hoc and summative entrustment decisions. Perspect Med Educ 2020;9:373-8.
38. Nelson H. Testing More, Teaching Less: What America's Obsession with Student Testing Costs in Money and Lost Instructional Time. American Federation of Teachers, AFL-CIO; 2013. Available from: https://www.aft.org/sites/default/files/news/testingmore2013.pdf. [Last accessed on 2022 Apr 14].
39. Dudek NL, Marks MB, Regehr G. Failure to fail: The perspectives of clinical supervisors. Acad Med 2005;80:S84-7.
40. Ginsburg S, van der Vleuten CP, Eva KW. The hidden value of narrative comments for assessment: A quantitative reliability analysis of qualitative data. Acad Med 2017;92:1617-21.
41. Valentine N, Durning SJ, Shanahan EM, van der Vleuten CP, Schuwirth LW. The pursuit of fairness in assessment: Looking beyond the objective. Med Teach 2022;44:353-9.
42. Cook DA, Kuper A, Hatala R, Ginsburg S. When assessment data are words: Validity evidence for qualitative educational assessments. Acad Med 2016;91:1359-69.
43. Robert McNamara [obituary]. The Economist; 2009 Jul 09. Available from: https://www.economist.com/obituary/2009/07/09/robert-mcnamara. [Last accessed on 2022 Apr 09].
44. Cameron WB. Informal Sociology: A Casual Introduction to Sociological Thinking. New York: Random House; 1963.
    