Evaluating Leadership Development Competencies of Clinicians to Build Health Equity in America

In the growing complexity of the United States (US) health care system, patients need higher quality and more integrated care at lower costs.1 Beginning with a global push by the World Health Organization's Commission on Social Determinants of Health (SDoH) in 2008, SDoH have been increasingly recognized as critical drivers of health conditions and outcomes.2 The commission recognized that “…inequities in health…arise because of the circumstances in which people grow, live, work, and age, and the systems put in place to deal with illness.”2 The move toward population health interventions that address SDoH requires improved interdisciplinary collaboration and clinical leadership to address complex health issues effectively. However, education in health-related fields has not responded at scale to the need to educate health care leaders on SDoH, and to our knowledge, few curricula address it sufficiently.3,4

Health care must be grounded in understanding SDoH to achieve more equitable health outcomes across the broader population. Although leadership development focused on SDoH may be mentioned in traditional educational settings in the health care field, it is generally not a targeted area of focus. Several national organizations have made recent efforts to incorporate SDoH into clinical care. For example, in 2019, the American Medical Association (AMA) launched an SDoH module as part of a larger Health Systems Science Learning Series. This online education resource is free and available to all levels of health care providers.5 Although the AMA resource is an essential step toward developing the skills of established medical providers, training is also needed in interdisciplinary settings where clinicians can expand SDoH skills alongside leaders from diverse disciplinary backgrounds as they work collaboratively to treat the whole patient. Leaders in health care require skills that allow them to engage beyond the clinical setting with the populations and communities they serve.

Developing leadership skills has long been considered critical to the success of public health and health care practitioners.6 Increasingly, recommendations call for interdisciplinary training and cross-sectoral collaboration.7,8 In addition to learning competencies specific to individual disciplinary or organizational success, leaders must also understand how individuals and communities work together in a system. Successful leadership among health care professionals requires training beyond siloed clinical fields. Professional development opportunities are necessary to explore and address upstream, systems-level challenges affecting patient health. Leaders who wish to effect change at the systems level must be competent in personal, interpersonal, organizational, and community-level skills.

In response to this need for clinicians to address SDoH within cross-sectoral settings, we developed an equity-centered leadership development approach that broadly expands the skills of interprofessional teams. We included competencies to create lasting impacts and advance health equity among the individuals, organizations, and communities with which program participants work. This study describes self-reported ratings across three dimensions of competence within four domains over the course of this equity-centered leadership development program.

METHODS

Setting

The Clinical Scholars (CS) program develops clinicians as leaders who collaborate with community partners to address complex health problems. A new cohort begins training each Fall, with up to 35 individuals comprising teams of 2 to 5 clinicians. Interprofessional teams enter the program with a proposal to address a complex problem in the population they serve. The first cohort of participants began in September 2016. The search for applicants is nationally based. Program staff partner with local and national professional organizations, community-based organizations, and associations, and recruit at conferences, to ensure a broad recruitment reach and to target disciplines that may have been underrepresented in previous cohorts (Table 1). Applicants are self-selected and organize themselves into teams. Participants must be clinically active to be eligible for enrollment. Participants in each cohort represent various career levels ranging from early career to advanced (Table 1). Applications are reviewed in stages by program staff and external stakeholders. Supplemental Digital Content 1 (see Figure, https://links.lww.com/JCEHP/A258) provides a summary of each stage of the review cycle, and Supplemental Digital Content I (see Appendix, https://links.lww.com/JCEHP/A262) summarizes the rubric reviewers use to assess applications. Over three years, CS participants engage in onsite learning retreats, distance learning modules, and individual and team executive coaching. Supplemental Digital Content 2 (see Figure, https://links.lww.com/JCEHP/A259) shows a summary of the curriculum pedagogy. Learning modules incorporate 25 core competencies critical to leadership in health care settings (see Appendix, Supplemental Digital Content II, https://links.lww.com/JCEHP/A262).
The set of core competencies used in CS were identified by reviewing competencies used in other similar leadership development programs,9–13 with a focus on ensuring equity-focused competencies were integrated throughout the curriculum.14–16 Based on the content and intended leadership impact of each competency, the CS program staff assigned each competency to one of four leadership domains that mirror the socioecological model: personal, interpersonal, organizational, and community and systems impact.17 Detailed information related to the full pedagogical underpinnings of the training program have been published elsewhere.16,18

TABLE 1. - Demographics of Clinical Scholar Participants

                                                  n
Licensed Health Profession
  Physician                                       56
  Nurse/Nurse Practitioner                        33
  Social Worker                                   22
  Psychologist                                    18
  Pharmacist                                      8
  Clinical Counselor/Therapist                    4
  Occupational/Physical Therapist                 4
  Veterinarian                                    4
  Dentist                                         3
  Dietician/Nutritionist                          3
  Physician Assistant                             3
  Other                                           3
Gender
  Male                                            37
  Female                                          115
  Other                                           0
Career level (years of professional experience)
  Early (5 or less)                               17
  Mid (6–14)                                      81
  Advanced (15+)                                  64

Throughout the three-year program, participants translate the skills and knowledge gained through the program directly into their communities by implementing Wicked Problem Impact Projects (WPIPs). Teams of participants design WPIPs to address a complex, systems-level problem burdening their service population's health. Supplemental Digital Content III (see Appendix, https://links.lww.com/JCEHP/A262) outlines the titles of team projects from each of the five cohorts of participants. The term “Wicked Problem,” coined by Horst Rittel in the 1970s, describes evolving, socially complex, multicausal, and interdependent problems with no clear solution.19 Wicked problems are resistant to straightforward solutions, highlighting the importance of a multifaceted and context-driven leadership development program for clinicians.20

Sample

The sample for this analysis consists of 169 participants in the CS program. We enrolled five cohorts of participants in CS—the first cohort started in 2016 and the final cohort began in 2020. Cohort sizes ranged from 29 to 35 participants. This evaluation research was conducted with human volunteers and approved by an institutional review board.

Data Collection

Kirkpatrick's four-level model guides the evaluation of the CS training program.21–23 Level 1 (reaction) is assessed through a process evaluation of onsite leadership training institutes that are held twice during each program year. Learnings associated with Level 2 (learning) are captured through the process evaluation, competency assessment, and concept mapping activity. This study presents analysis from the competency assessment, which includes each of the 25 competencies taught in CS. Behavior (Level 3) is explored in part through the competency assessment, and further via social network analysis, most significant change stories, and an alumni evaluation. Kirkpatrick's final level (4; results) is assessed through most significant change stories, concept mapping, social network analysis, and alumni evaluation. Other publications have presented evaluation results aligned with Kirkpatrick's Level 1,24 and Levels 3 and 4.16,25 The competency assessment results and discussion presented here represent a singular evaluation activity within the CS program's comprehensive evaluation approach.

Data were collected from participants at four time points during the program: at baseline (ie, when the program started), 6, 18, and 36 months (ie, at completion of the program). At each data collection point, participants rated their competencies across three dimensions: (1) knowledge, (2) self-efficacy (ie, ability to use in a leadership role), and (3) use (ie, likelihood of using). Each of the 25 competencies falls under one of four overarching domains: (1) personal; (2) interpersonal; (3) organizational; and (4) community and systems (see Appendix, Supplemental Digital Content II, https://links.lww.com/JCEHP/A262). Each item was rated using a validated 7-point Likert scale (1 = lowest rating, 7 = highest rating) adapted from rating scales identified in the research literature (see Appendix, Supplemental Digital Content IV, https://links.lww.com/JCEHP/A262).26–32 The survey was distributed through Qualtrics software33 using unique email links in 2016 and 2017, then via Research Electronic Data Capture (REDCap) tools hosted at the university34 starting in 2018.

The evaluation team modified the instrumentation and distribution for clarity in response to learning from data analysis and participant feedback. The team revised the wording for the skill usage dimension starting in 2017. Because of this change, we have excluded Cohort 1's 6-month measure from the analysis. In addition, we added baseline (time point 0) for cohorts beginning in 2018. Thus, we include data from 14 time points across the five cohorts (Table 2).

TABLE 2. - Data Collection by Cohort, by Timepoint*

           0 mo   6 mo   18 mo                     36 mo
Cohort 1          ✓      ✓                         ✓
Cohort 2          ✓      ✓                         (✓)
Cohort 3   ✓      ✓      (✓)                       Anticipated Fall 2021
Cohort 4   ✓      ✓      (✓)                       Anticipated Fall 2022
Cohort 5   ✓      ✓      Anticipated Spring 2022   Anticipated Fall 2023

*(✓) indicates data collected on a Fellow's “current” rating.


Data Analysis

Analysis was conducted using the R software.35–38 We used descriptive statistics to characterize our sample. To compute scores for domains, we averaged the ratings made by each participant for the competencies categorized under the specified domain (see categorization in Appendix, Supplemental Digital Content II, https://links.lww.com/JCEHP/A262). Competency ratings were averaged by participant for a given question, on a given measurement occasion (ie, 0, 6, 18, 36 months), and for a particular point in time asked about (ie, now or six months ago). Within each participant, all responses within a given domain (ie, personal, interpersonal, organizational, or community and systems impact), on a given measurement occasion (ie, at 0, 6, 18, or 36 months), and within a given dimension (ie, knowledge, self-efficacy, or use) were averaged to yield a score for that participant for that combination of attributes (ie, domain × time × dimension). Thus, each participant contributes 12 scores (four domains × three dimensions) at each of up to four time points. A paired-sample t test was conducted to compare domain ratings from Cohorts 1 and 2 at baseline and endpoint. Similarly, a paired-sample t test was used to compare ratings from Cohorts 3 and 4 at baseline and midpoint.
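The per-participant averaging scheme can be sketched briefly in code. This is an illustrative sketch only: the actual analysis was performed in R, and the competency names, domain assignments, and ratings below are hypothetical.

```python
# Illustrative sketch of the domain/dimension scoring described above:
# within one participant and one measurement occasion, ratings for all
# competencies in a domain are averaged separately for each dimension.
from collections import defaultdict
from statistics import mean

# Hypothetical competency-to-domain mapping (the real program assigns
# 25 competencies across the four domains).
DOMAIN_OF = {
    "self-awareness": "personal",
    "resilience": "personal",
    "communication": "interpersonal",
    "coalition building": "community and systems",
}

def domain_scores(ratings):
    """ratings: (competency, dimension, value) triples for one participant
    at one measurement occasion; returns the mean rating for each
    (domain, dimension) combination present in the data."""
    buckets = defaultdict(list)
    for competency, dimension, value in ratings:
        buckets[(DOMAIN_OF[competency], dimension)].append(value)
    return {key: mean(values) for key, values in buckets.items()}

scores = domain_scores([
    ("self-awareness", "knowledge", 5),
    ("resilience", "knowledge", 6),
    ("communication", "knowledge", 4),
])
print(scores[("personal", "knowledge")])  # mean of 5 and 6 -> 5.5
```

Repeating this for every domain × dimension combination at each measurement occasion yields the 12 scores per participant per occasion described above.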

Our primary evaluation question is whether average self-reported ratings of competencies within the domains changed throughout the training program. We fit a linear mixed-effects model for the ratings as a function of time, domain, and dimension of question while controlling for cohort. We used random intercepts to account for the nonindependence of repeated measures. Because there is no reason, a priori, to assume that ratings for different domains or dimensions change at the same rate over time, we included interactions between these three variables. We tested variables with likelihood ratio tests. Given the complexity that interactions create, if the P value for an interaction was >.10, we dropped the interaction and refit the model to simplify the results for greater interpretability. We present covariate-adjusted least-squares means and mean trends with Tukey-adjusted 95% confidence intervals (CIs).
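The model structure can be illustrated with a small synthetic example. The original analysis was performed in R; the sketch below mirrors the reduced specification (random intercepts per participant, with time × domain and time × dimension interactions) using Python's statsmodels and fabricated data with a known slope of 0.02 points per month. All names and values here are invented for illustration.

```python
# Sketch of a random-intercept mixed model analogous to the one described
# above, fit to synthetic data. Illustrative only; not the authors' R code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for pid in range(40):                            # 40 synthetic participants
    intercept = 5.0 + rng.normal(0, 0.5)         # participant-level intercept
    for months in (0, 6, 18, 36):                # measurement occasions
        for domain in ("personal", "interpersonal"):
            for dimension in ("knowledge", "efficacy", "use"):
                rating = intercept + 0.02 * months + rng.normal(0, 0.3)
                rows.append(dict(pid=pid, months=months, domain=domain,
                                 dimension=dimension, rating=rating))
df = pd.DataFrame(rows)

# Random intercepts by participant; time x domain and time x dimension terms.
model = smf.mixedlm("rating ~ months * domain + months * dimension",
                    df, groups=df["pid"])
result = model.fit()
print(round(result.params["months"], 3))  # recovers a slope near 0.02
```

Because the synthetic data contain no true interactions, the fitted interaction terms are near zero and the main time effect recovers the simulated slope; in the real analysis, the interaction terms capture domain- and dimension-specific rates of change.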

Ethics Approval

The following evaluation research was conducted on human subjects volunteers and approved by the institutional review board at the University of North Carolina at Chapel Hill: UNC IRB Study #16-1817.

RESULTS

Table 1 provides a descriptive overview of the participant characteristics. Participants (n = 169) from five cohorts contributed 65,968 individual ratings of competencies, yielding 13,355 averaged scores across the various combinations of domain, dimension, and time point.

The three-way interaction between time, domain, and dimension was nonsignificant (χ2 = 2.2, df = 6, P = .90), so we dropped it and refit the model with the three two-way interactions (ie, time × domain, time × dimension, and domain × dimension). The two-way interaction between dimension and domain was likewise not significant (χ2 = 7.2, df = 6, P = .30), so we also dropped that interaction and refit the model with only the time × domain and time × dimension interactions while controlling for cohort. The time × domain interaction was significant (χ2 = 24.3, df = 3, P < .001), as was the time × dimension interaction (χ2 = 7.9, df = 2, P = .019). Cohort (χ2 = 48.8, df = 4, P < .001) was also significant.

These findings imply that the change in ratings over time differed by domain and by dimension of question. The full model for the predicted rating is:

5.5 (SE = 0.13)
+ 0.012 (0.002) × months since start
+ 0.36 (0.031) if the question is about self-efficacy
− 0.16 (0.032) if the question is about use
− 0.40 (0.037) if the question is about the interpersonal domain
− 0.61 (0.037) if the question is about the organizational domain
− 0.48 (0.037) if the question is about the community domain
+ 0.003 (0.171) for respondents in the 2017 cohort
− 0.46 (0.169) for respondents in the 2018 cohort
− 0.83 (0.17) for respondents in the 2019 cohort
− 0.94 (0.171) for respondents in the 2020 cohort
+ 0.005 (0.002) × months if the question is about self-efficacy
+ 0.001 (0.002) × months if the question is about use
+ 0.001 (0.002) × months if the question is about the interpersonal domain
+ 0.006 (0.002) × months if the question is about the organizational domain
+ 0.009 (0.002) × months if the question is about the community domain.

The standard deviation of the random intercepts is 0.68, and the residual SD is 0.61. Because the model is complex, we present covariate-adjusted (“least squares”) average slopes (with Tukey-adjusted 95% CIs) by dimension and by domain in Table 3 to facilitate understanding. Positive slopes indicate a positive linear relationship: as time in the program increases, so do participants' self-ratings in the competency dimension and domain. It is worth noting that none of the CIs includes 0, meaning that we can be confident that self-ratings changed, and they improved in every case.
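To show how the marginal slopes in Table 3 follow from the fitted coefficients, the short sketch below (our illustration in Python, not the original R analysis) adds the time main effect to each time-interaction coefficient and averages over the levels of the other factor:

```python
# Marginal (least-squares) slopes implied by the fitted mixed-model
# coefficients reported above. Illustrative recomputation only.

base_slope = 0.012  # months-since-start main effect

# months x dimension interaction terms (knowledge is the reference level)
dim_int = {"knowledge": 0.0, "self-efficacy": 0.005, "use": 0.001}

# months x domain interaction terms (personal is the reference level)
dom_int = {"personal": 0.0, "interpersonal": 0.001,
           "organizational": 0.006, "community": 0.009}

def slope_for_dimension(dim):
    """Slope for one dimension, averaged over the four domains."""
    avg_dom = sum(dom_int.values()) / len(dom_int)
    return base_slope + dim_int[dim] + avg_dom

def slope_for_domain(dom):
    """Slope for one domain, averaged over the three dimensions."""
    avg_dim = sum(dim_int.values()) / len(dim_int)
    return base_slope + dom_int[dom] + avg_dim

for dim in dim_int:
    print(f"{dim}: {slope_for_dimension(dim):.3f}")
for dom in dom_int:
    print(f"{dom}: {slope_for_domain(dom):.3f}")
```

These values reproduce the Table 3 slopes to rounding (eg, knowledge 0.016, self-efficacy 0.021, organizational 0.020, community 0.023; the interpersonal domain computes to 0.015 here versus the published 0.014, consistent with rounding of the reported coefficients).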

TABLE 3. - The Model's Predicted Increase in Mean Ratings per Month Since the Start of the Program, Averaged Over Cohorts

                  Slope   Lower Limit   Upper Limit
Dimensions
  Knowledge       0.016   0.012         0.020
  Efficacy        0.021   0.017         0.025
  Use             0.017   0.013         0.020
Domains
  Personal        0.014   0.010         0.018
  Interpersonal   0.014   0.010         0.019
  Organizational  0.020   0.016         0.024
  Community       0.023   0.019         0.028

Results by Domain (Personal, Interpersonal, Organizational, Community and Systems)

The average rating of each competency domain increased from baseline to endpoint data collection (Table 4). Furthermore, there was significant growth between baseline and endpoint ratings using combined data from Cohorts 1 and 2, and significant growth between baseline and midpoint ratings in Cohorts 3 and 4 (Table 5).

TABLE 4. - Average Domain Rating by Cohort, by Timepoint*

Cohort   Domain                Baseline   Midpoint   Endpoint
2016     Personal              4.82       5.92       6.08
         Interpersonal         4.54       5.85       5.76
         Organizational        4.37       5.74       5.78
         Community & Systems   4.47       5.84       5.87
2017     Personal              4.86       5.90       6.01
         Interpersonal         4.59       5.43       5.59
         Organizational        4.39       5.17       5.46
         Community & Systems   4.52       5.49       5.89
2018     Personal              4.65       5.84       —
         Interpersonal         4.20       5.27       —
         Organizational        4.06       5.34       —
         Community & Systems   4.24       5.34       —
2019     Personal              4.26       5.73       —
         Interpersonal         3.76       5.15       —
         Organizational        3.85       5.20       —
         Community & Systems   3.72       5.09       —

*Baseline: 6 months pre | Midpoint: 18 months current | Endpoint: 36 months current.

†“—”: data to be collected in the future.


TABLE 5. - Paired-Sample t test Results

Domain                   Time       Mean   SD     t (df)       P
Cohorts 1 and 2
  Personal               Baseline   4.91   0.89   −5.79 (42)   0.00
                         Endpoint   6.03   0.72
  Interpersonal          Baseline   4.60   0.88   −5.27 (42)   0.00
                         Endpoint   5.63   0.77
  Organizational         Baseline   4.34   0.93   −6.25 (39)   0.00
                         Endpoint   5.56   0.83
  Community and systems  Baseline   4.48   0.99   −6.58 (37)   0.00
                         Endpoint   5.81   0.75
Cohorts 3 and 4
  Personal               Baseline   4.77   1.04   −6.72 (31)   0.00
                         Midpoint   5.78   0.54
  Interpersonal          Baseline   4.09   1.06   −7.84 (31)   0.00
                         Midpoint   5.18   0.59
  Organizational         Baseline   4.18   1.04   −7.80 (30)   0.00
                         Midpoint   5.23   0.68
  Community and systems  Baseline   4.18   1.04   −9.66 (29)   0.00
                         Midpoint   5.19   0.66

Results by Dimension (Knowledge, Self-Efficacy, Use)

The slope for improvement in self-reported self-efficacy over time (β = 0.021, 95% CI = 0.018–0.024) was larger than for use (β = 0.017, 95% CI = 0.013–0.020) or knowledge (β = 0.016, 95% CI = 0.013–0.019). We found that ratings for each competency dimension improved over time (see Figure, Supplemental Digital Content 3, https://links.lww.com/JCEHP/A260). The most dramatic improvement in each dimension of growth occurred between 0 and 6 months of program participation.

DISCUSSION

One of the purposes of this equity-centered leadership development program is to train health care professionals to become leaders who think about and can act on health challenges using a systems-thinking and a health-equity lens. The program's curriculum is designed around two focus areas—leadership development and equity, diversity and inclusion (EDI)—working in tandem to develop leaders with the skills and mindset needed to approach their work more equitably and to challenge the archetypes they see that create or cause health disparities.14 Participants learn and practice the program's 25 competencies through lectures, small and large group discussions, practice scenarios, simulations, and case study debriefs. Sometimes these activities focus on leadership or EDI skills; other times these areas are woven together to provide a fuller training experience. This is reflected in the table in Supplemental Digital Content II (see Appendix, https://links.lww.com/JCEHP/A262) that lists and defines the competencies, highlighting those that co-occur in leadership and EDI domains.

Participants enrolled in this leadership program reported competency growth across four leadership competency domains, with the most significant increase over time in ratings for the community and systems domain (see Figure, Supplemental Digital Content 4, https://links.lww.com/JCEHP/A261 and Table 4).

Learning-by-doing is a well-established pedagogical approach in adult learning theory.39 A particular strength of our model was the inclusion of the WPIP strategy, which added an implementation science component to participants' learning as they partnered with communities to address health disparities. We designed the program to provide a diverse support system for participants' professional development, including team and executive coaching and multiple, consecutive curricular components that continually reinforced skills.40 We found that participants reported increased knowledge and use of competencies across all four leadership development domains. Our evaluation findings support both the need for and the implementation of a clinician-oriented, equity-centered leadership development model. We speculate that clinicians may be hungry for leadership training that helps them impact the communities they serve.

In addition to looking at the 25 leadership development competencies by domain, we assessed them by three dimensions (ie, knowledge, self-efficacy, and use). We found the most significant increase over time for self-efficacy. We asked participants to rate their level of perceived self-efficacy for each of the 25 competencies at each time point of data collection.27–29,41 We designed our equity-centered leadership development program to complement technical expertise by focusing on leadership skill development and applicability. As the data in Table 3 and Supplemental Digital Content 3 (see Figure, https://links.lww.com/JCEHP/A260) indicate, the steepest slope was for participants' perceived ability to apply skills in their clinical roles, suggesting that the program successfully develops health care professionals with both the knowledge and the ability to be leaders. This statistical evidence of increased self-efficacy corroborates our findings from a qualitative evaluation with Cohort 2016 participants, in which participants documented their experiences of increased self-confidence resulting from the CS program using the most significant change method.25 It also echoes findings from the literature suggesting that increased self-efficacy may lead to increased leadership impacts.12,13

Engagement in the CS program requires application of sophisticated concepts in action-learning settings, calling on participants to analyze situations, evaluate contextual factors, and collaboratively create approaches to address the structural factors contributing to the health disparities they targeted through their community-based, health-equity-focused projects. The fact that the largest improvement in slope was for the self-efficacy dimension suggests that the program contributed to producing leaders confident in their mastery of these skills.

There are notable strengths to our evaluation of leadership development competencies. First, we had a robust dataset to test for significant growth over time, including multiple cohorts across multiple timepoints. Second, we analyzed growth across 25 competencies aggregated into four domains and three dimensions of leadership development, which provided a detailed assessment of this unique curricular approach. Third, the participants in our sample represented a diversity of health care professions, and our data showed that the equity-centered program and training model benefited each cohort. Additionally, participants completed the program in interprofessional teams, which replicates real-world application of skills more closely than homogeneous team structures would, lending strength to our findings.

There are also a few limitations to the evaluation design. Perhaps the most significant limitation is the self-reported nature of the participants' competency data. We cannot control for differences between participants that may have affected their rating choices. Although we defined each of the seven response options on the Likert scale, there may be a disconnect between what level a participant believes they are at (eg, “expert,” 7/7 on the Likert scale) and the level they have actually attained. To address this in the analysis, we modeled the data within participant, which provides reassurance about the consistency of the final results.

Second, we recognize it is possible that external influences affected the internal validity of some of our findings. Participants provided self-ratings of their knowledge, self-efficacy, and intent to use the 25 competencies three or four times over their program years. During the years in which data were collected, there were changes to the US health care system, national- and state-level election cycles, and the COVID-19 pandemic. It is possible that any single or combination of these and other external factors influenced how participants viewed their skills at a given time.

In the future, we recognize the need for a similar evaluation that explores EDI-focused competencies. Although the CS training curriculum included components of EDI across each of the four domains, conducting an EDI-specific analysis of the data was beyond the scope of this evaluation. Such an evaluation may explore how health care leaders use their skills in their communities to address health disparities, creating measurable differences that help close the equity gap in a particular area. Another future direction of related leadership development research is investigating how WPIPs (in the CS context) become sustainable and scalable, which is perhaps the ultimate measure of the self-efficacy of participants' newly learned and enhanced leadership skills.

CONCLUSION

The findings presented in this manuscript indicate that the clinically active health professionals enrolled in the CS program reported statistically significant growth in competency levels across all 25 leadership competencies addressed in the program. The most remarkable change occurred in the competencies in the Community and Systems domain. In addition, participants reported the most significant growth over time in their self-efficacy across competencies. Such findings suggest that investment in equity-centered leadership development efforts can enhance the leadership competencies needed to advance health equity and build the self-efficacy essential for leaders to implement their learnings in their organizations, communities, and systems. Fellows engaged in extensive training in the social determinants of health, health equity, building organizational capacity for advancing health equity, and social justice, all of which also represent competencies undergirding the program. Through the combination of this training and their action- and application-based learning, we hypothesized that participants gained significant skills to address the social determinants of health that lead to health inequity; the data presented here support that hypothesis. Further analysis of data collected to evaluate different outcomes and impact measures will explore the extent and impact of applying the competencies addressed in this equity-centered leadership development program. Another much-needed area for future research and evaluation is assessing the impact of leadership training specifically on advancing health equity, beyond the advancement of skills presented in this manuscript.

Lessons for Practice

■ Health care providers perceive benefit from an equity-centered curriculum in leadership training.
■ The Clinical Scholars program model is effective in improving knowledge, self-efficacy, and use of personal, interpersonal, organizational, and community and systems leadership competencies.

REFERENCES

1. Trastek VF, Hamilton NW, Niles EE. Leadership models in health care: a case for servant leadership. Mayo Clin Proc. 2014;89:374–381.
2. Commission on Social Determinants of Health. Closing the Gap in a Generation: Health Equity Through Action on the Social Determinants of Health. Final Report, Executive Summary. Geneva, Switzerland: World Health Organization; 2008.
3. Siegel J, Coleman DL, James T. Integrating social determinants of health into graduate medical education: a call to action. Acad Med. 2018;93:159–162.
4. Hunter K, Thomson B. A scoping review of social determinants of health curricula in post-graduate medical education. Can Med Educ J. 2019;10:62.
5. American Medical Association. Social determinants of health: what medical students need to know. Available at: https://www.ama-assn.org/delivering-care/patient-support-advocacy/social-determinants-health-what-medical-students-need-know. Accessed August 22, 2021.
6. Fernandez C, Steffen D, Upshaw V. Leadership for public health. In: Shi L, Johnson J, eds. Public Health Administration: Principles for Population-Based Management. 4th ed. Burlington, MA: Jones and Bartlett; 2020.
7. Institute of Medicine (US). The Future of Public Health. Washington, DC: National Academies Press; 1988.
8. US Department of Health and Human Services. Healthy People 2020; 2017. Available at: https://www.healthypeople.gov/2020/About-Healthy-People.
9. Fernandez CSP, Steffen D. Leadership for public health. In: Shi L, Johnson JA, eds. Novick and Morrow's Public Health Administration: Principles for Population Based Management. 3rd ed. Burlington, MA: Jones and Bartlett Publishers; 2013:241–265.
10. Fernandez CSP, Noble CC, Jensen ET, et al. Improving leadership skills in physicians: a 6-month retrospective study. J Leadersh Stud. 2016;9:6–19.
11. Fernandez CSP, Peterson HB, Holmstrom SW, et al. Developing emotional intelligence for health care leaders. In: di Fabio A, ed. Emotional Intelligence: New Perspectives and Applications. London, United Kingdom: InTech; 2012:239–260.
12. Fernandez C, Noble C, Jensen E, et al. A retrospective study of academic leadership skill development, retention and use: the experience of the food systems leadership institute. J Leadersh Educ. 2016;15:150–171.
13. Fernandez CSP, Noble CC, Jensen E, et al. Moving the needle: a retrospective pre- and post-analysis of improving perceived abilities across 20 leadership skills. Matern Child Health J. 2015;19:343–352.
14. Brandert K, Corbie-Smith G, Berthiuame R, et al. Clinical scholars: making equity, diversity and inclusion learning an integral part of leadership development. In: Fernandez CSP, Corbie-Smith G, eds. Leading Community Based Changes in the Culture of Health in the US: Experiences in Developing the Team and Impacting the Community. London, United Kingdom: InTech; 2021:29–51.
15. Upshaw V, Rice D, Cipriani K, et al. Diversity and inclusion in public health practice. In: Shi L, Johnson JS, eds. Novick and Morrow's Public Health Administration: Principles for Population-Based Management. 4th ed. Burlington, MA: Jones and Bartlett Publishers.
