Implicit and explicit: a scoping review exploring the contribution of anthropological practice in implementation science

The database searches produced a combined 3450 results after duplicates were removed, which were uploaded to Rayyan for two rounds of screening by title and abstract. A total of 487 articles advanced to full-text screening. Of these, 227 were included as describing anthropological practice in implementation science and underwent data extraction; the extracted data were recorded and analyzed with descriptive statistics. The PRISMA workflow diagram (Fig. 1) details the complete search, review, and selection process. Because the screening process was updated between the first and second searches, we have included dates (first search in 2021, second in 2022) to clarify the process.

Study characteristics

For the 227 included articles (Supplemental file 3), we recorded and analyzed three main domains with descriptive statistics: (1) the use of data collection and analysis methods; (2) implementation science methods; and (3) study context (Table 1). The table in Supplemental file 4 lists additional selected attributes of the included articles. These three categories of abstracted data, together with our bibliometric analyses, allowed us to answer our primary questions about who is conducting implementation research with anthropological approaches, how they describe what they have done, and where and with whom they do this work.

Table 1 Characteristics of included articles

The included studies varied considerably in the description of their overall design (Supplemental file 4); 29% of the manuscripts (n = 67) reported some form of mixed methods (e.g., convergent parallel mixed methods, hybrid mixed methods, mixed methods time-motion study) as their study design. Many study designs were case studies (n = 46, 20%), and these also encompassed a broad range of qualifying descriptions, including qualitative, descriptive-explanatory case study; prospective observational case study; narrative case study; multiple case study with nested levels of analysis; and comparative, qualitative, explanatory embedded case study design. These descriptors were also used in many different combinations. Ethnography (n = 34, 15%), including focused, institutional, rapid, interpretive, and autoethnography; evaluation (n = 33, 15%), including formative, mixed methods, process, prospective, realist, longitudinal, and partially mixed sequential dominant status evaluation; and participatory designs (n = 9, 4%) were common descriptions of study designs, although there were many others (e.g., critical pedagogy, situational analysis, quality improvement), and several (n = 14, 6%) included no study design description that our team could identify. Interestingly, eight studies (4%) described their design simply as qualitative. The variety and heterogeneity of terms, and of their combinations, to describe research suggest a discomfort or mismatch between the relativist epistemology of anthropological practice and the need to describe the work in terms considered methodologically rigorous.

Hidden anthropology?

In answer to our primary research question about whether and how anthropology is practiced in implementation research, ethnography is not usually mentioned explicitly, but components of the anthropological methodological toolkit often are. A third of the included texts (n = 73, 32%) mention ethnography explicitly, with 30 (13%) describing the overall study design as ethnographic, yet only 18 (8%) mention anthropology either as a discipline or as the identity of one or more research team members. Anticipating this invisibility, or lack of explicit identification of anthropological background, we based our inclusion criteria on the overall design, data collection, and analysis, reasoning that even “hidden” anthropology, conceptualized more as an epistemic sensibility, would emerge with careful consideration.

The individual data collection and analysis methods that compose ethnography appear more frequently: 38% of the articles (n = 86) explicitly mention observation as a data collection method, while another 70 articles (31%) imply observation through data collection methods such as site visits. Field notes, the hallmark of long-term participant observation in anthropology, appear explicitly in most descriptions of data collection and analysis methods (n = 134, 59%), but those studies did not usually systematically report findings from field notes in their results. The most frequently used method was some form of interview (i.e., structured, semi-structured, unstructured, informal), in both data collection (n = 198, 87%) and presentation of results, with 194 articles (85%) including direct quotations from interviews. In general, most articles (n = 217, 96%) used multiple data collection and analytic methods, with an average of 3.5 specified data collection methods (not including “other methods”) and 20% of the articles (n = 49) using five or more. Articles that used observation were also more likely to use multiple data collection methods (Fig. 2). Interestingly, only one article used all seven data collection methods, and one was categorized as describing none of the traditional collection methods; it was an autoethnography written as a first-person narrative that described data collection as such.

Fig. 2

Co-occurrence of data collection methods. a Number and percentage of all articles (n = 227) reporting the use of document analysis, focus groups, and/or site visits. Forty-four articles (19%) reported no use of any of these methods. b Number and percentage of all articles (n = 227) reporting the use of focus groups, observations and/or surveys. Fifty-five articles (24%) reported no use of any of these methods. c Number and percentage of all articles (n = 227) reporting the use of surveys, focus groups, observations and/or site visits. Twenty-two articles (10%) reported no use of any of these methods. d Number and percentage of all articles (n = 227) reporting the use of surveys, focus groups, observations and/or interviews. Four articles (2%) reported no use of any of these methods
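Counts like those reported for Fig. 2 follow from tallying, per article, which subset of methods it reports. A minimal sketch of that tally (the method names and per-article records below are illustrative toy data, not the review's extraction records):

```python
from itertools import combinations
from collections import Counter

# Toy extraction records: one set of reported data collection methods per article
articles = [
    {"interviews", "observation", "field notes"},
    {"interviews", "surveys"},
    {"interviews", "observation", "surveys", "focus groups"},
    {"document analysis"},
]

# How often each pair of methods is reported together across articles
pair_counts = Counter(
    pair
    for methods in articles
    for pair in combinations(sorted(methods), 2)
)

# Articles reporting none of a panel's methods (as in each Fig. 2 panel)
panel = {"focus groups", "observation", "surveys"}
none_of_panel = sum(1 for methods in articles if not (methods & panel))
```

The same tally, run over the 227 extraction records with the seven coded methods, yields both the pairwise overlaps and the "no use of any of these methods" counts reported for each panel.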

The use of data collection methods described in the included manuscripts changed slightly over time. We did not apply date limits in our searches, so the included articles span the duration of implementation science as a field and even a little before: the first included manuscript was published in 2000, and Implementation Science’s first issue was published in 2006. The number of included articles increased steadily over time until our final search in September 2022 (Fig. 3), which is not surprising given the growth of the field. The Cochran-Armitage trend test, which assesses whether there is an association between a two-level categorical variable (use of a given data collection method) and an ordinal categorical variable (time) [24], gave us more detail on individual data collection methods (Fig. 4). The analysis showed that the frequency of observation and focus groups did not change significantly over time. Articles mentioning interviews and surveys increased over time, while field notes, document review, and site visits decreased.
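The trend test can be reproduced from per-period counts of articles using a method versus all articles in that period. A minimal sketch of the standard score-based Cochran-Armitage statistic, using stdlib only and toy counts (not the review's actual yearly data):

```python
from math import erf, sqrt

def cochran_armitage(successes, totals, scores=None):
    """Two-sided Cochran-Armitage trend test for a 2 x k table.

    successes[i] = articles using the method in period i,
    totals[i]    = all articles in period i,
    scores[i]    = ordinal score for period i (default 0, 1, 2, ...).
    Returns (z, p); a positive z indicates an increasing trend.
    """
    k = len(successes)
    t = scores if scores is not None else list(range(k))
    N = sum(totals)
    p_bar = sum(successes) / N  # pooled proportion of use
    # Score-weighted deviation of observed from expected successes
    T = sum(t[i] * (successes[i] - totals[i] * p_bar) for i in range(k))
    var = p_bar * (1 - p_bar) * (
        sum(totals[i] * t[i] ** 2 for i in range(k))
        - sum(totals[i] * t[i] for i in range(k)) ** 2 / N
    )
    z = T / sqrt(var)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal p-value
    return z, p

# Toy example: a method's use rising across four time periods
z, p = cochran_armitage([1, 3, 7, 9], [10, 10, 10, 10])
```

With these toy counts the statistic is strongly positive (rising use) and the p-value is well below 0.05; a flat series such as [5, 5, 5, 5] gives z = 0.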

Fig. 3

Number of articles included in review by year

Fig. 4

Frequency of data collection methods over time

Analytic methods and techniques were described with an even wider variety of terminology overall. Most of the included articles mentioned some form of thematic or content analysis (n = 193, 85%), and more than half used field notes explicitly in the analysis or results sections (n = 123, 54%). However, descriptions of the overall analytic strategy, when explicit (28 articles, 12%, included no description of an overarching analytic approach), showed the most heterogeneity. Those that did describe an analytic approach included mixed or multiple methods, deductive, inductive, grounded theory, triangulation, immersion crystallization, rapid ethnography, concept analysis, interpretive descriptive analysis, and framework analysis, among others, and most articles described more than one analytic technique. As with the heterogeneity in study design descriptions, there was no association between particular designs and analytic methods, although “qualitative” appeared most often across the design, collection, and analysis domains.

Does context matter?

One of our questions was whether anthropological approaches were used more often in one type or aspect of implementation research. Our results show that research describing facilitators and barriers appears in 70% of included manuscripts (n = 159), which aligns with our expectation that facilitators and barriers are more often investigated with qualitative methods in implementation science than as a topic in the anthropological literature. However, only slightly fewer (n = 146, 64%) described one or more standard implementation science outcomes (i.e., acceptability, adoption, feasibility, fidelity, reach, implementation cost, maintenance, and sustainability). Clinical settings were well distributed, although fewer studies were conducted in long-term care and the emergency department, likely reflecting the distribution of implementation research overall. The top 4 countries where research was conducted were, in rank order, the USA, Canada, England, and Australia, but the included studies encompassed 50 countries.

Whose voice?

The bibliometric analyses conducted through Scopus, SciVal, and VOSviewer contributed to an overall understanding of who is publishing this type of work, where, and the relationships between them. Figure 5 shows the cluster density visualization created by VOSviewer, and Table 2 shows details for the top 15 authors. The visualization shows co-authorship links between authors of the included publications; an author’s links represent the number of co-authors that person has in the dataset, and the total link strength is the number of co-authored publications the author has in the dataset. Of the 15 authors with the most links and the strongest total links, only three are trained as anthropologists, and two of those work for the US Department of Veterans Affairs (VA), which is known for its large number of medical anthropologist researchers [25]. The training disciplines of the others include public health, sociology, psychology, medicine, nursing, and computer and information science. Not all of the top 15 were first authors on included publications, but their strong associations reflect the team science nature of implementation research.
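Both co-authorship measures can be computed directly from an author list per publication. A minimal sketch under the definitions stated above (links = distinct co-authors in the dataset; total link strength = co-authored publications summed over those co-authors); the author names are invented toy data:

```python
from collections import defaultdict
from itertools import combinations

# Toy dataset: author lists of included publications
publications = [
    ["Smith", "Jones", "Lee"],
    ["Smith", "Jones"],
    ["Lee", "Garcia"],
]

# strength[a][b] = number of publications authors a and b co-authored
strength = defaultdict(lambda: defaultdict(int))
for authors in publications:
    for a, b in combinations(set(authors), 2):
        strength[a][b] += 1
        strength[b][a] += 1

def links(author):
    """Number of distinct co-authors in the dataset."""
    return len(strength[author])

def total_link_strength(author):
    """Co-authored publications summed over all co-authors."""
    return sum(strength[author].values())
```

Here Smith has two links (Jones and Lee) but a total link strength of three, because two of the publications are shared with Jones; this is the distinction the visualization encodes as node size versus connection weight.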

Fig. 5

VOSviewer cluster density visualization

Table 2 Details of the top 15 authors in the VOSviewer cluster density visualization

Only 220 articles were included in the Scopus analysis because seven did not have a PMID; Scopus is nonetheless considered a trustworthy bibliometric data source for research assessments, research landscape studies, science policy evaluations, and university rankings [26]. The top 5 journals in which included articles were published were BMC Health Services Research (n = 22), Implementation Science (n = 14), BMJ Open (n = 7), BMJ Quality and Safety (n = 6), and Social Science and Medicine (n = 6); the 220 articles were published across 121 unique journals. The top 4 of the 158 funding sources captured in Scopus were the National Institutes of Health (n = 22), the Canadian Institutes of Health Research (n = 16), the National Institute for Health Research (n = 14), and the National Health and Medical Research Council (n = 7). Scopus also analyzes author affiliations, the top 5 of which were the University of Toronto (n = 11), University of Edinburgh (n = 10), University of Alberta (n = 10), University of Sydney (n = 8), and University of Montreal (n = 7), among 159 institutional affiliations listed overall.
