Improving the communication of multifactorial cancer risk assessment results for different audiences: a co-design process

In total, 34 people contributed to this co-design process (Table 2). We received 56 sets of feedback over a 13-month period, as some people participated in multiple stages (Table 3).

Table 3 Feedback iterations

Think-aloud testing of the original CanRisk report

During the think-aloud testing session, participants reported being overwhelmed by the amount of information presented in the report, the medical jargon, tables and graphs, and the lack of structure and explanatory text. Example notes from a range of patient/public partners participating in this activity include:

“What am I meant to do with all of this?”; “Am I meant to read all of this?”; “What are all these genes? What do they mean?”; “How do I read this?”; “What is ‘PRS’?”; “What is tubal ligation?”

Further issues identified in this activity included that participants expected the report to begin with their specific results; although the NICE Risk Category section served this purpose, it appeared later in the report. Participants found the graphs and mathematical symbols difficult to understand; the polygenic scores, genetic tests and tumour pathology information were particularly challenging. Participants expressed interest in this information but required support to interpret and understand it. They also stressed the importance of the report clearly addressing the individual undergoing the risk assessment, maintaining a consistent audience (i.e. avoiding sometimes addressing clinicians and sometimes addressing individuals undergoing the assessment) and avoiding indirect forms of speech (i.e. ‘the’ risk versus ‘your’ risk). Participants highlighted the need for more clarity when comparing ‘personal risk’ and ‘population risk’. Formatting issues identified included ensuring graphs worked in black and white, a preference for smaller-scale icon arrays (i.e., 100 instead of 1000) and adding headings and keys to all graphs and tables.

This activity allowed us to establish that the original CanRisk report was not accessible to patients and members of the public and confirmed the need for a co-design process to develop a new report.

Structured feedback on the original report

Seventeen stakeholders from a network of healthcare professionals working in clinical genetics and primary care, and our existing network of patient/public partners, provided feedback on the same report tested in the think-aloud session (Table 3). Participants scored sections 1–12 of the report by their relevance for both individuals undergoing the risk assessment and healthcare professionals (see Table 4). The average scores showed that half of the sections (sections 1–4 and 11–12) were more relevant (i.e. scored 2 or above on average) for healthcare professionals than for individuals undergoing the risk assessment. The remaining half (sections 5–10) scored highly for both individuals undergoing the risk assessment and healthcare professionals. The scores relating to the importance of each section for individuals undergoing the risk assessment did not differ between groups of respondents for 11 out of 12 sections; for section 3 (Summary of genetic tests and pathology), there was a significant difference (F(5,11) = 3.42, p < 0.05). Similarly, the scores relating to the importance of each section for healthcare professionals did not differ between groups of respondents for 11 out of 12 sections; for section 11 (Mutation carrier probability), there was a significant difference (F(5,11) = 7.42, p < 0.01).
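To illustrate how a between-group comparison of section relevance scores of this kind could be run, the following is a minimal sketch using a one-way ANOVA; the group labels and scores are hypothetical placeholders, not the study data, and the analysis shown is a generic approach rather than the exact procedure used here.

```python
# Illustrative one-way ANOVA comparing relevance scores for one report section
# across respondent groups. Group names and scores are hypothetical placeholders.
from scipy import stats

scores_by_group = {
    "clinical_genetics": [3, 3, 2],
    "primary_care": [2, 1, 2],
    "patient_public": [1, 1, 2],
}

# f_oneway takes one sample of scores per group and returns the F statistic and p-value.
f_stat, p_value = stats.f_oneway(*scores_by_group.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```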

Table 4 Relevance per report section for different audiences

Feedback was split on whether it would be better to have one or two reports, with eight participants preferring one report for all audiences and nine preferring separate reports for individuals undergoing the risk assessment and for healthcare professionals. Open text box answers highlighted the importance of including as much information as possible while keeping it as simple as possible. The main suggestions on how to achieve this included providing a summary with key information for individuals undergoing the risk assessment at the start of the report, using lay terms, merging sections that covered similar information, adding definitions where relevant and explanatory text to support interpretation, and signposting so different stakeholders can go straight to the information most relevant to them.

Literature review

The review of eight relevant papers (Brockman et al. 2021; Dorschner et al. 2014; Farmer et al. 2020; Haga et al. 2014; Lewis et al. 2022; Recchia et al. 2020, 2022; Stuckey et al. 2015) informed decision making in three key areas where participants had not reached consensus: the format of the report, the flow of information, and the content.

Report format:

Stuckey et al. (2015) suggested that having one document is better to ensure that everyone receives the same information. Their recommendation for managing large amounts of information is to split reports into sections and use signposting. Recchia et al. (2022) showed that output reports from high-risk gene tests for breast cancer should focus on what the results mean for the individuals undergoing the assessment while also supporting clinicians in appropriate management.

In light of this, we agreed to take a ‘one report’ approach, which addresses the individual undergoing the risk assessment and starts with a section covering the results that are most relevant and informative to them. The remaining sections of the report are consistently addressed to the individual undergoing the assessment but are introduced as potentially more relevant for healthcare professionals and provide the technical information clinicians need to discuss management options across different levels of care (i.e., referral to specialist care, further testing, etc.).

Report flow:

Research by Brockman et al. (2021) indicated that reports perform better if they can adapt to stakeholders’ needs and context, as people report different levels of interest in complex information as well as different preferences in risk presentation formats. To address this as much as possible, and in addition to presenting risk in different formats, we wanted the CanRisk report to be ‘dynamic’, covering the content that is relevant to the specific personal and risk factor information entered into the tool (e.g. for male probands and those with contralateral breast cancer).

Report content:

Some of the more straightforward recommendations from the literature regarding report content included a preference across stakeholders for absolute risk (the actual probability of an outcome) (Farmer et al. 2020; Lewis et al. 2022) and for using the term ‘pathogenic variant’ rather than ‘mutation’ when presenting or discussing genetic information (Stuckey et al. 2015). These recommendations aimed to increase clarity and confidence in individuals undergoing risk assessments and were either already reflected in the original report (absolute risk) or adopted for the new CanRisk report (pathogenic variant) to improve its communication quality.

Development of a new report prototype

Based on the results from steps 1, 2, and 3, the new report prototype included an introductory page to facilitate navigating the report. The information in the report was split into three sections: section one compiled the information most relevant for the individual undergoing the assessment, while sections two and three compiled information likely to be most relevant for healthcare professionals, with increasing levels of complexity. More details about the new report prototype are shown in Table 5.

Table 5 Structure of new report prototype

First round of structured feedback on the prototype

Thirteen people provided feedback on the initial prototype (Table 3). Overall, three (23%) people ranked this prototype as ‘excellent’ and ten (77%) as ‘good’. Most responses either ‘strongly agreed’ or ‘somewhat agreed’ with positive statements about the clarity, aim, relevance, and accessibility of each element in the report as well as the report’s overall signposting and cohesiveness. The structured feedback included a total of thirty-one statements (Table 6). Where elements received two or more (≥ 15%) neutral or negative evaluations (9/31, 29%), we reviewed these (bold text in Table 6) alongside the results from the open text boxes. This qualitative feedback provided recommendations on how to make the front page clearer and more concise by reordering certain content, simplifying text entries, and making the narrative voice consistent.
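As a rough illustration of the review threshold described above, the sketch below flags any statement for which at least 15% of responses (here, two or more of thirteen) were neutral or negative; the statement labels and counts are hypothetical and do not reproduce the actual questionnaire data.

```python
# Flag statements whose neutral/negative response count meets the review threshold.
# Statement labels and counts are hypothetical placeholders, not study data.
import math

N_RESPONDENTS = 13
THRESHOLD = math.ceil(0.15 * N_RESPONDENTS)  # 15% of 13 respondents -> 2 responses

neutral_or_negative_counts = {
    "front_page_clarity": 3,
    "risk_graph_accessibility": 1,
    "signposting": 2,
}

flagged = [s for s, n in neutral_or_negative_counts.items() if n >= THRESHOLD]
print(f"Review threshold: {THRESHOLD} of {N_RESPONDENTS} responses")
print("Statements flagged for review:", flagged)
```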

Table 6 Structured feedback questionnaire responses from round 1 (n = 13)

Updating the new report prototype

Following the initial feedback on the prototype, we added information about how data were used to calculate the risk score, simplified text entries and word choices (e.g., ‘image’ instead of ‘icon’) and replaced ‘lifetime risk’ with ‘risk between the ages of 20 and 80’. We also changed the order of the information on genetic pathogenic variant probabilities to present general and positive information first before introducing more detailed and complex information; explained what a ‘family tree’ is and added further details on the information used to build it; expanded the key (a list instead of a paragraph) for the summary of genetic tests and cancer diagnoses; explained that the risk factors included in the model can increase or decrease risk; and reduced the amount of information in, and changed the presentation format (a table instead of a paragraph) of, the ‘extra information’.

Second round of structured feedback on the prototype

Twenty people provided feedback on the revised prototype (Appendix 3). Eleven (55%) participants scored this prototype as ‘excellent’, eight (40%) as ‘good’ and one (5%) as ‘average’, indicating improved performance compared with the earlier prototype. Most participants either ‘strongly agreed’ or ‘somewhat agreed’ with positive statements about the clarity, aim, relevance, and accessibility of each element in the report as well as the report’s overall signposting and cohesiveness. Table 7 shows the results for the twenty-one elements (out of the thirty-one shown in Table 6) that received neutral or somewhat negative evaluations. Where elements received three or more (≥ 15%) neutral or negative evaluations (5/31, 16%), we reviewed these (bold text in Table 7) alongside the results from the open text boxes. This qualitative feedback helped us identify formatting errors, adjust the size and contrast of images and text, and improve signposting by making section headings clearer. Some of the recommendations helped us simplify text entries and improve explanations. We also received several pieces of spontaneous encouraging feedback, for example: “I would have liked the information presented in that way when I was given my risk factor” (Table 7).

Table 7 Statements with neutral and/or somewhat negative evaluations from round 2 of structured feedback (n = 20)

Finalising and publishing the new CanRisk report

The main changes incorporated into the final report included further formatting refinements (e.g., font and image size, bold text) and simplified text, particularly around the NICE risk categories. We also removed unnecessary details about the ‘average population’; added details about the NICE guideline referenced; fixed a formatting problem with the NICE categories table; and replaced the graph comparing the NICE risk category of the individual undergoing the risk assessment with that of the population.

Although patient/public partners and healthcare professionals (supported by previous research (Farmer et al. 2020; Recchia et al. 2022)) were keen for the report to include ‘next steps’ for clinicians and more information about what people can expect in the future, it was decided that adding this information to the new CanRisk report was not consistent with the tool’s intended purpose. Furthermore, the international nature of the CanRisk tool would make adding it difficult, since management recommendations can vary from country to country and, sometimes, between regions. After completing the last changes, the broader research team reviewed the prototype and we decided to draw the co-design process to a close and finalise the report. Using current web technologies, the new report PDF is produced within the web browser rather than on the server. The reports are dynamically customised based on the user’s input by tailoring the explanatory introduction and NICE guidelines text, excluding any optional sections, showing the relevant reference population risks, and suggesting how the risk predictions might be improved with more data. The CanRisk website and the new report are available in French, Dutch, German, Portuguese, Italian and Spanish. To ensure readability and an aesthetically pleasing layout, the text was formatted for all possible risk calculation scenarios and languages. The newly designed CanRisk report is available at www.canrisk.org; see Appendix 4 for a moderate-risk example of the report in English.
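The dynamic customisation described above can be pictured with the following minimal sketch of conditional section selection. The field names, section titles and rules are hypothetical illustrations of the general approach, not the CanRisk implementation (which runs in the browser and produces the PDF client-side).

```python
# Illustrative selection of report sections based on user input.
# Field names, section titles and rules are hypothetical, not the CanRisk implementation.
from dataclasses import dataclass

@dataclass
class ProbandInput:
    sex: str                    # "male" or "female"
    contralateral_cancer: bool  # previous cancer in the opposite breast
    prs_provided: bool          # polygenic risk score data entered
    country: str                # used to pick the reference population risks

def build_report_sections(p: ProbandInput) -> list:
    # Always-present sections, tailored to the answers given.
    sections = ["Introduction tailored to your answers", "Your estimated cancer risk"]
    if p.sex == "female":
        sections.append("NICE risk category compared with the general population")
    if p.contralateral_cancer:
        sections.append("Risk of cancer in the other breast")
    if p.prs_provided:
        sections.append("Polygenic risk score details")
    else:
        sections.append("How the estimate could be improved with more data")
    sections.append(f"Reference population risks ({p.country})")
    return sections

print(build_report_sections(ProbandInput("female", False, True, "UK")))
```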
