Planning for and Assessing Rigor in Rapid Qualitative Analysis (PARRQA): a consensus-based framework for designing, conducting, and reporting

Table 2 presents the PARRQA framework and considerations to support rigor and validity of a rapid qualitative project throughout all phases. The framework intentionally focuses on semi-structured qualitative data collection methods, as RQA is designed for these methods (i.e., not for unstructured qualitative data collection methods).

Table 2 Planning for and Assessing Rigor in Rapid Qualitative Analysis (PARRQA): consensus-based framework for designing, conducting, and reporting

Rigorous design

Articulate the research question(s) and finite purpose of the project

In implementation and health services research, most core elements of study design emerge from addressing the project’s central research or evaluation question(s). This is also true for projects using rapid qualitative analysis. During initial project planning, it is important to articulate and document the research question(s) guiding qualitative data collection and analysis, from designing data collection instrument(s) through development of summary templates and matrices, and reporting. Furthermore, it is helpful to specify time-sensitive project goals, e.g., a product needs to be generated within a specified timeline, including executive summaries, brief presentations, synthesized reports for team members and/or implementation partners. This specification provides appropriate focus and scope for the rapid qualitative project but does not preclude the team from generating additional products, perhaps outside of the immediate timeline. The research question should be revisited to ensure its relevance and to document any new questions that arise during data collection and analysis.

Describe the rationale for using RQA

When designing a study, teams should describe and document the rationale for using RQA (see Fig. 1). Teams typically use rapid analysis when a project requires rapid turnaround of findings, e.g., in an implementation project where context and barriers are being assessed and information is shared with local sites or implementation teams in a timely way. Rapid methods are best suited to projects that have a narrow scope or focused research question (see #1). Additional examples of when it is appropriate to use RQA include when time is limited and there is an urgent need to deliver findings on schedule, as in a pilot or other brief study; in phased work where next steps are data-dependent; when operational partners or policymakers need mission-critical data; or when conducting longitudinal work with multiple data collection waves [5]. RQA is also appropriate when the qualitative component is not the focus (e.g., in some types of mixed methods designs), in generating qualitative findings to explain unexpected quantitative findings, and in developing high-level takeaways for dissemination via publications or other types of products for partners and other constituents.

Fig. 1 Benefits and common myths about rapid qualitative projects

In considering the rationale(s), it is important to dispel common myths about RQA (see Fig. 1)—first and foremost, that this approach is easier and can be done by those with little or no qualitative experience. This is neither an appropriate nor an accurate rationale. On the contrary, shorter timeframes often require greater focus, intention, and cognitive load during both data collection and analysis, and benefit from methods leadership with specific expertise/training in this approach; as with all methods, the resulting rigor (or lack thereof) derives from the expertise of the researchers and the care taken to align with research best practices throughout the process [5].

Define what is meant by “rapid qualitative analysis”

Teams should describe their approach to using RQA, including the methods (e.g., semi-structured interviews), data types (e.g., transcripts, audio files, notes), data collection instruments, analysis plans, and intended or priority products. Regarding analysis, researchers should provide detailed information about how methods aligned with the analytic approach, including how data collection instruments were prepared and used, how summary templates and matrices were designed, when, and by whom, and so forth. This offers a transparent and replicable methods account to reviewers and readers. Given the variety of qualitative methods and approaches available, citations should be provided and specific to the approach used.

Consider whether a theory, model, or framework will be used to inform the study and if so, why and how

In designing projects that will use RQA, it is important to consider whether theories, models, and/or frameworks will be used, and if so, in what ways. Models, theories, and frameworks can be used, but are not required, in rapid qualitative projects [17]. Deciding if one should be used, and which to apply, is dependent on the type of research (e.g., this would be expected in implementation research), the project goals, and the research question(s). If planning to use a theory, model, or framework, it is important to document the rationale behind the decision, which core constructs and elements will be included, how they will be integrated as part of data collection (e.g., reflected in the interview guide) and analysis, and if adaptations will be needed to better align with RQA. For example, frameworks such as the Consolidated Framework for Implementation Research (CFIR) can be adapted such that only necessary and focused portions are utilized for data collection and analysis, i.e., by including only constructs expected to be relevant to the focused research question [5].

Define the intended timeframe of data collection, analysis, and products/deliverables

To achieve rigor throughout the rapid qualitative project, the expected timeline for methods should be delineated from the beginning of the study, with regular check-ins to ensure the project remains on schedule. In addition, researchers should state when rapid analysis occurs in relation to data collection. Beyond facing delays, projects that do not adhere to planned timelines may not achieve expected sample sizes or allow adequate time for planned analyses, negatively impacting study rigor.

Plan for appropriate staffing

Ensuring the study design is feasible is important for projects that involve rapid turnaround of qualitative findings, given the abbreviated timeline. Feasibility and timeline are tied to staffing, which relates to budgeting. Qualitative data collection geared toward RQA should be conducted by a team of at least two for greater rigor and staff should have sufficient dedicated time; the number of team members is proportional to the scope of the study (e.g., number of sites, number of participants, volume of data collection). Ideally interviewers are involved in both data collection and analysis, but it is also possible that some staff will be more responsible for data collection, while others focus on analysis.

Projects involving RQA are not necessarily inexpensive. For example, a rapid project may require several experienced team members working intensively to generate the intended product. When constructing a project budget, the intended timeframe of data collection and analysis; feasibility of recruitment; number, type(s), and length of data collection approaches; data management and cleaning; and analysis and write-up should all be considered in determining staffing needs. Without sufficient staff, rigor will be difficult to achieve.

Explain the purpose and timeline of the study, communication plan, and roles to all team members

In projects with rapid turn-around timelines (as in all projects), it is critical for team members to understand the purpose of the study, as well as the timeline, expectations, and deliverables in relation to the timeline (e.g., a certain number of interviews completed by a certain date). Efficiency will be increased by ensuring team members understand their role(s) and responsibilities; clear roles will support efficient and rigorous collaboration. Teamwork is essential to rapid qualitative projects because of the intensive timelines and need for consistent data collection and analysis across the team. Effective and regular team communication can ensure consistency and rigor and contribute to a favorable work environment and timely completion of deliverables.

Data collection

Develop and refine semi-structured data collection instruments to address specified research questions

Projects involving RQA rely on semi-structured data collection instruments (e.g., interview guides, observation templates). This means specifying the focused, yet still open-ended questions and the sampling frame. While traditional qualitative data collection tools may cover a breadth of topics, domains, and experiences and may be more exploratory, we recommend designing rapid data collection with focused questions (hence, less exploratory) designed for specific samples/settings—think of this as asking THESE people to answer THESE questions [19]. The qualitative data collection instruments should be designed by team members with qualitative methods expertise, with input from other team members. Semi-structured interview guide questions should be inviting, accessible, and analyzable, and the number of questions should be geared toward the time available for the interview [4]. In rapid qualitative projects, work should be done prior to data collection to facilitate rapid analysis, e.g., the interview guide questions can be mapped to key topics (and framework constructs, if relevant) in advance, with the caveat that these topics will need to be revisited during and after data collection to ensure that they are still relevant and to explore unanticipated topics. Teams should map the interview questions to the rapid analysis matrix during early planning. This is in contrast to traditional qualitative projects where analytic work may be done after data collection. The guide or other data collection instrument should be reviewed to ensure its feasibility, relevance, and productivity. The qualitative methodologist and/or analyst should review the guide with the study’s team to ensure that unnecessary questions are not included, the number of questions is feasible for the time allotted for the interview, and that questions effectively address the key aims.

Pilot data collection instruments to ensure they are clear, feasible, and appropriately targeted

We recommend pilot testing data collection instruments with appropriate testers (e.g., patients, clinicians) since rapid qualitative projects require focused and sometimes brief opportunities for data collection. Pilot testing and subsequent revision contribute to more rigorous and valid findings, e.g., by ensuring that interview questions are accessible and relevant, clarifying terminology that is unclear to interviewees, understanding how interviewees may respond to questions, determining whether the interview is too detailed/lengthy, and assessing whether the guide is addressing the research question(s). Semi-structured observational templates also need to be pilot tested among the team-members who will be conducting observations to ensure feasibility, relevance, and consistency.

Develop and use a plan for review throughout the data collection process

Consider regular data review that is documented and adhered to throughout data collection. This includes weekly debriefs, interview/fieldwork reflections, and quality checks by methods experts. These steps to support rigor and validity can prevent drift and discrepancies in the data collected. This is important for rapid qualitative data collection because it occurs over a shortened timeline and therefore requires early adaptations and refinements to ensure the data collected are focused and address the research question(s).

RQA: summary template development

Develop and pilot test a user-friendly summary template

Develop a clear and systematic plan for summarizing data, whether distilled from notes, audio recordings, or transcripts. As opposed to traditional qualitative methods where codes may not be developed until the onset of analysis, key summary template domains are initially based on domains in the data collection instruments (see #8). Therefore, the summary template can be drafted prior to data collection. The summary template should be user friendly and not unwieldy, and it should be reviewed by the research team to ensure that it covers all the topics in the data collection instruments. The team will need to consider whether multiple summary templates are needed in one project, depending on the nature of the sampling frame and potential variations across data collection instruments (e.g., different interview guides for patients versus providers).

Ensure there are cross-references to raw data to support continuous comparison and validation necessary for rapid analysis

Condensing data occurs quickly in RQA. Therefore, it is important to maintain connections to the raw data to support continuous checking and emergent interpretations (e.g., identifying novel results or recommendations). Throughout the summarization process, we recommend including references to raw data. Given that the summaries are designed to be concise encapsulations of data collection episodes, this is a way to enable quick access to important quotes or areas of narrative for later reporting. For example, if the team is creating summaries from transcripts, transcript line numbers can be listed alongside bulleted summary points, allowing analysts to return to the transcript and re-examine the original data as needed for clarification and validation. If notes or audio files are being used instead of transcripts, providing time stamps during interview note-taking or while listening to the recordings can be used to link summaries to relevant segments of the audio files. Transcription may or may not be used. Sometimes it is necessary due to time/budget constraints to generate a summary from notes and supplement with the audio file when needed. Teams will need to practice cross-referencing to whatever form(s) of data are being used for analysis. We recommend creating a training dataset (e.g., 2–3 transcripts) and examining (and potentially standardizing) the ways in which team members cross-reference the raw data. If possible, we encourage researchers to try to analyze data with and without transcripts with the training dataset to determine what works best for a particular project and team members.
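As a minimal illustration of this kind of cross-referencing, the sketch below pairs each summary bullet with transcript line numbers or an audio timestamp so analysts can return to the raw data. This is a hypothetical structure, not part of the PARRQA framework; the field names (`domain`, `transcript_lines`, `timestamp`) and sample content are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SummaryPoint:
    domain: str                   # domain from the data collection instrument
    point: str                    # concise, minimally interpretive bullet
    transcript_lines: tuple = ()  # e.g., (112, 118) if transcripts are used
    timestamp: str = ""           # e.g., "00:14:32" if working from audio only

@dataclass
class InterviewSummary:
    participant_id: str
    points: list = field(default_factory=list)

    def points_for(self, domain):
        """Return all bullets for one domain (e.g., to fill a matrix cell)."""
        return [p.point for p in self.points if p.domain == domain]

summary = InterviewSummary("P01")
summary.points.append(SummaryPoint(
    domain="Barriers",
    point="Scheduling conflicts limit attendance",
    transcript_lines=(112, 118),   # link back to the transcript for validation
))
summary.points.append(SummaryPoint(
    domain="Context",
    point="Describes clinic workflow changes",
    timestamp="00:14:32",          # link back to the audio file instead
))
print(summary.points_for("Barriers"))
```

Keeping the cross-reference alongside each bullet, rather than in a separate log, makes it easy to pull the exact quote or audio segment during later synthesis and reporting.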

Develop summaries that are accurate and concise, but detailed enough to meet project aims

We recommend that researchers write a concise summary immediately after data collection to provide an accessible and accurate window into that episode [20]. Summaries should average 2–3 pages. Good summaries are: 1) based on and explicitly connected to the raw data; 2) brief yet thorough enough to give an overall sense of the data collection episode; 3) minimally interpretive; and 4) accurate reflections of the data collection process. The team should discuss whether reflective writing will be done while preparing summaries or at other junctures during the analytic process, and where that reflective writing will be placed (e.g., in a separate document/memo, in comment bubbles or brackets within the summary). These decisions will vary by team and by project and should be documented.

We strongly recommend calendar blocking for writing summaries at the conclusion of data collection episodes, unless transcripts will be used to prepare summaries in which case summaries can be completed as transcripts are received [5]. Plan for 1–2 h to write each summary. When the summaries are created as soon as possible after data collection, it adds rigor because it builds on researcher recall. This also ensures that summaries are created in a timely manner to meet project deadlines and goals rather than creating a backlog of analytic work. Summaries written immediately after data collection should be cross-checked against audio files or transcripts as those files become available, to ensure accuracy.

Identify training and calibration processes to ensure consistency and accuracy in summaries

Training and calibration are necessary to support rigor in the completion of summaries. Initial summaries can vary greatly within teams, with some being too brief and others too detailed, or some being too interpretive and distal from the data itself. This is why the “test-driving” step of RQA is essential and should not be skipped, no matter how experienced the team [3]. This step involves having all team members review the same 2–3 data sources and prepare summaries independently, and then review and compare the summaries.

The lead qualitative methodologist should also periodically audit a cross-section of summaries (e.g., 20%, with a set from each person completing summaries) to ensure: 1) appropriate length of the summary; 2) alignment between the summary and the data collection domains; 3) consistent formatting, use of quotations, and amount of detail; 4) descriptive rather than interpretive summaries; and 5) description that can be adequately understood by the team such that they have a solid sense of what was heard or observed in the data collection episode [21].

Teams should ensure that summaries of semi-structured interview data are trustworthy, accurate representations of participants’ experiences and perspectives. For example, in one of our projects, the summary stated that the “patient was complaining.” However, a secondary analyst noted that in the transcript, the patient expressed “concerns” about her care. Extrapolating to “complaining” is a form of interpretation that is not appropriate for RQA; instead, we stay close to the data and use the participant’s words, even in snippets of phrases [20].

Of note, there are projects where solo RQA takes place by necessity, due to a lack of staff or funding, and there are steps that can be taken to ensure rigor, such as verification of summaries with audio files or transcripts. We recommend pausing, perhaps for a standardized time-period, between developing the initial summary and verifying the summary, to allow for reflection and learning.

RQA: matrix analysis

Plan qualitative matrix structure to reflect project aims/questions

Plan your qualitative matrix structure to reflect project aims/research questions. Domain names from the summary template can be column headers in the matrix, which means that a preliminary matrix can be created prior to onset of data collection. To facilitate rigorous analysis of the matrix, the research team should continuously and systematically review each domain within and across subsamples (if relevant) and memo extensively on observations and potential themes to explore (see #18). Matrix analysis in rapid qualitative projects may not be as exploratory as in other types of qualitative projects given the focused nature of rapid turn-around work.

Describe use of software for matrix analysis

It is important for researchers to specify if and how software programs are used to support RQA [5]. Summaries are typically developed in MS Word or PowerPoint, and matrices in MS Word and/or Excel [5]. RQA was specifically designed not to necessitate specialized qualitative data analysis software (QDAS). However, if QDAS is used, its use needs to be thoroughly explained and justified. To this point, reviewers and editors should not assume that use of QDAS inherently imparts rigor, nor should it be assumed that the lack of QDAS signifies lack of rigor.
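As a minimal sketch of building a matrix without QDAS, the example below assembles a preliminary matrix (rows = participants, columns = summary template domains) as CSV text that can be opened in Excel. The domain names and condensed entries are invented for illustration only.

```python
import csv
import io

# Hypothetical summary template domains become the matrix column headers
DOMAINS = ["Context", "Barriers", "Facilitators"]

# Invented, condensed summary content keyed by participant ID
summaries = {
    "P01": {"Context": "Rural clinic", "Barriers": "Staff turnover",
            "Facilitators": "Leadership support"},
    "P02": {"Context": "Urban clinic", "Barriers": "Scheduling conflicts",
            "Facilitators": "On-site champion"},
}

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Participant"] + DOMAINS)  # preliminary matrix header row
for pid, entry in sorted(summaries.items()):
    # Blank cells flag domains not yet summarized for that participant
    writer.writerow([pid] + [entry.get(d, "") for d in DOMAINS])

matrix_csv = buf.getvalue()
print(matrix_csv)
```

Because the column headers come directly from the summary template domains, the matrix shell can be generated before data collection begins, consistent with drafting a preliminary matrix early in the project.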

Develop a plan for review throughout matrix analysis

Data included in the matrix should be concise. Lengthy segments from the raw data should not be included, as the matrix will become unwieldy. As with the summarizing process, teamwork is critical to the rigor of matrix analysis. Some teams copy and paste their summaries into matrices, while others engage in data transformation as matrices are developed [23]. The lead qualitative methodologist should oversee the development of matrices to ensure their alignment with project goals and should review matrices for accuracy and consistency, including monitoring for consistent level of detail and whether the text entered is answering the research questions, well-organized, and focused [5]. All observations about the analysis process should be shared with the team in order to educate and ensure consistency across the team. Throughout matrix analysis, team members should document (e.g., in memos) their efforts to ensure rigor and have this information ready for reporting in a manuscript or other forms of dissemination.

Rapid qualitative data synthesis

Conduct synthesis that is rigorous and responsive to research priorities

Researchers should outline their synthesis approach and how it addresses the research questions. The synthesis process is enabled by the design of the summaries and matrices. Analysts can review a summary or a column in a matrix to compare across cases, whether by type of interviewee (e.g., physicians compared to nurses) or by site, clinic, or unit. Synthesis details and steps will vary based on goals for dissemination. For example, if the data will be fed back to a site or operational partner, a synthesis output may make more sense as a PowerPoint presentation or brief report—something understood and digestible by the recipient. Whenever possible, we recommend visual displays and other ways of conveying details. If the goal of the synthesis is a manuscript and quotes will be utilized, the data linkages embedded in summaries (see #12) will help to find pertinent quotes to illustrate concepts. Engage in data review to ensure accuracy and completeness of key points through cross-checking and cross-comparison to enhance rigor and validity of synthesized findings. In synthesis and dissemination materials, we recommend specifying whether and how formative feedback (e.g., from participants, constituents) informed the analytic process.

Rigor and validity in reporting and publishing on rapid qualitative projects

Journal expectations are not always consistent, and authors may be unsure of what to mention and describe about their rapid qualitative projects. We provide a suggested planning and reporting framework to aid reviewers and editors in assessing the rigor and validity of rapid qualitative projects in Table 3. Editors and reviewers can use Tables 2 and 3 to help evaluate the rigor of rapid qualitative projects. We also suggest that it may be helpful for editors to select reviewers with appropriate qualitative expertise for the methods under review [11]. Restrictive word limits may be problematic for describing study design and conduct in detail and can lead authors to leave out information on methods that helps readers contextualize the results. If this is not possible within the manuscript, editors could consider publication of supplemental materials, such as inclusion of a completed Table 2 as part of the submission and publication process. A well-articulated research question and a rationale for why and how RQA was used should be included in reporting to the journal. It is also useful to communicate to prospective reviewers that qualitative checklists are flexible tools to guide authors and reviewers rather than universal requirements for all qualitative methods (see also Table 1). Reviewers should also check for a rationale for why the authors are using RQA (see Table 2, item 2), review detail in the description of semi-structured qualitative methods (see Table 2, item 3), assess for indications of rigor in the methods (see Table 2, items 8–14, 17–18), and check for indications of teamwork in data collection and analysis (Table 2, items 7, 10, 14, 17).

Table 3 Guidelines to support rigor and validity in reporting and publishing rapid qualitative approaches

It is important to note that in addition to grant and manuscript reviews, reviewers or operational partners receiving qualitative or visual display reports based on rapid qualitative analyses can also use Tables 2 and 3 to evaluate rigor.
