A new trial monitoring plan (TMP) template for clinical trials: output from a Delphi process

A total of 31 monitoring plans were received from the 52 UKCRC registered CTUs, 20 after the first email and 11 after a follow-up email. One further unit expressed interest in the study but had a monitoring plan known to be inadequate following a recent inspection; they were working on improvements and chose not to share it yet. Another CTU explained that they outsource monitoring to a Clinical Research Organisation (CRO) and do not have a designated plan. The remaining 19 CTUs did not reply. The full list of UKCRC registered CTUs at the time of this study, highlighting those that shared their monitoring plans, can be found in Supplementary File 2.

The spreadsheet of items extracted from the 31 monitoring plans contained over 800 items. Round 1 of the Delphi survey consisted of 66 items. Approximately 10% (7/66) of the items included in round 1 were ones on which the authors had not reached mutual agreement when reviewing and classifying each item’s category. Six additional items were suggested by participants in round 1 (see Supplementary File 3). These additional items were reviewed by the authors for suitability and duplication and added to round 2. The authors decided not to omit from round 2 any items that had been included in round 1, so that a full analysis of the data could be carried out [14]. Therefore, round 2 included all items from round 1 as well as the additional items suggested by participants in round 1, making a total of 72 items in round 2.

Delphi participants

Demographic data were collected when participants registered for the Delphi survey. The 47 Delphi participants across both rounds were from 25 different UKCRC registered CTUs and industry and held various roles within the clinical trials field. The distribution of roles is shown in Fig. 1.

Fig. 1

Role distribution of all 47 Delphi survey participants

The participant demographics in round 2 closely resembled those in round 1, except for the inclusion of three new participants: one clinical trial monitor, one quality assurance manager, and one trial manager.

Of the 47 participants in round 1, 43 (91%) fully completed the survey and 4 (9%) partially completed it. In round 2, 37 (79%) of these participants fully completed the survey, 3 (6%) partially completed it, and 7 (15%) did not take part. Round 2 was also open to anyone interested in taking part, regardless of whether they had participated in round 1. In addition to the 37 participants who fully completed round 2, there were 3 new participants, making a total of 40 (93%) participants who fully completed round 2 and 3 (7%) who partially completed it.

On completion of both Delphi rounds, the number of items for which more than 70% of the participants responded with a Likert score of 7–9 (critical) increased by 18 percentage points, from 22/66 (33%) in round 1 to 37/72 (51%) in round 2 (Fig. 2). Additionally, median scores were compared, as suggested by Trevelyan [14], to determine whether agreement had been reached amongst participants on individual items. Comparing the items’ median scores, 47/66 (71%) items had a median score of 7–9 in round 1, and 51/72 (71%) in round 2. There were no items in either round for which 70% or more of the participants responded with a Likert score of 1–3 (‘not important’).

Fig. 2

Distribution of the percentage of participants rating items with a Likert score of 7–9 (critical) in round 1 vs. round 2. The black line represents the threshold where 70% or more of participants rated items as critical

Furthermore, taking an interquartile range (IQR) ≤ 2, as suggested by Gracht et al. [15], as an indication of consensus, the number of items with an IQR ≤ 2 increased from 29 in round 1 to 56 in round 2, showing that the variance of responses reduced across the group of items (Fig. 3) [15]. Additionally, there were fewer comments in round 2 than in round 1, another indication of group opinion moving towards consensus [16].

Fig. 3

Comparing the interquartile range in items in round 1 vs. round 2. The black line indicates where the interquartile range is ≤ 2
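For illustration, the consensus indicators described above (the 70% criticality threshold, the median score band, and the IQR ≤ 2 rule) could be computed per item along the following lines. This is a minimal sketch using hypothetical ratings and item names, not the analysis code used in the study; note also that software packages differ in their quartile conventions, so IQR values can vary slightly.

```python
import statistics

# Hypothetical 1-9 Likert ratings for two illustrative items (names invented);
# the study itself collected ratings for 66 items in round 1 and 72 in round 2.
ratings = {
    "Item A": [8, 9, 7, 8, 6, 9, 7, 8, 7, 9],
    "Item B": [5, 7, 4, 8, 6, 3, 7, 5, 6, 4],
}

for item, scores in ratings.items():
    # Indicator 1: 70% or more of participants rate the item 7-9 ("critical").
    pct_critical = 100 * sum(7 <= s <= 9 for s in scores) / len(scores)
    # Indicator 2: the median score falls within the 7-9 band.
    median = statistics.median(scores)
    # Indicator 3: interquartile range (Q3 - Q1) of 2 or less.
    q1, _, q3 = statistics.quantiles(scores, n=4)
    iqr = q3 - q1
    print(f"{item}: {pct_critical:.0f}% critical, median {median}, IQR {iqr:.1f}")
```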

The results of the Delphi study showed that the items with the highest number of participants ‘unable to respond’ were ‘checks for serious adverse events for medical devices’, ‘use of metrics in monitoring clinical trials’, and ‘trial oversight of vendor’. These items were discussed in more detail at the consensus meeting. After round 2, the results and analysis were reviewed by the authors. It was concluded that, with 79% of the items (86% of the original list of items in round 1) likely to stay in their critical categories (i.e. scored 7–9 by more than 70% of the participants), it was justifiable not to run another survey round, as the group was clearly moving towards consensus. Additionally, to avoid participant attrition and fatigue [14, 15], it was decided to stop the Delphi survey after two rounds and move on to the consensus meeting, where the results would be discussed and participants would have a final opportunity to influence the development of the TMP template.

Consensus meeting

The consensus meeting took place in two half-day sessions: on 6 September 2023 with 9 participants and on 8 September 2023 with 8 participants. The participants were from 9 UK CTUs and industry. The distribution of participants across various clinical trial roles is shown in Fig. 4.

Fig. 4

Role distribution of consensus meeting participants, consisting of 10 different people across the two sessions

At the meeting, the 37 items that had reached consensus during the Delphi survey were presented, and participants were asked to comment, including on the wording of the items, or to voice any objections to their inclusion in the template. Additionally, the 6 items that had been suggested by participants in round 1 were presented separately to emphasise that they had been rated in only one round (round 2). Of these 6 additional items, 3 had reached consensus for inclusion during round 2 (and were part of the 37 items presented), and participants needed to vote on the other 3, which had not. After discussion, participants in the consensus meeting voted on the 32 further items that had not reached the definition of consensus within the Delphi, deciding on each item’s inclusion in or exclusion from the trial monitoring plan template. The voting resulted in 18 items being excluded, leaving 14 items to be included in the TMP template. In discussing whether items should be included in the TMP template, participants considered factors such as their significance to trial monitoring, the ease of locating the information within the protocol, and whether the items were better suited to the monitoring template or to other trial-related documents, such as trial SOPs, data monitoring plans, or other working instructions. The authors conveyed to meeting participants that this template could be used in its current form when presented to the CTUs or customised to align with the specific standards and requirements of each CTU. It was also clarified that while some CTUs have a comprehensive TMP template, others may not; participants were encouraged to bear this in mind when voting on items.

The template was finalised based on the discussions held during the consensus meeting. It contains a comprehensive list of items that should be included in a trial monitoring plan (Supplementary File 4).
