Development and validation of a measure to assess patients’ perceptions of their safety in an acute hospital setting

Introduction

Traditionally, safety improvements have focused on learning from error through incident reporting systems, with little involvement from patients (McEachan et al 2014). Patients’ unique circumstances mean they are well-positioned to give feedback and can identify safety issues that staff may not notice (McEachan et al 2014, Lawton et al 2015, 2017, O’Hara et al 2018, NHS England and NHS Improvement 2019, Taylor et al 2019).

Previous research on patient safety has concentrated on measuring safety culture and climate, thereby exploring safety from the perspectives of clinical staff and the organisation. It remains unusual for research to report on safety explored from the patient’s perspective (Lawton et al 2015, Ricci-Cabello et al 2016). The studies that have focused on patients’ views have been too small to support generalisable conclusions (Hassen et al 2017). Few researchers have examined the effect of patients’ characteristics on their experiences of safety when receiving acute care.

A search of the literature found no studies in which patients were central to designing a questionnaire intended to measure safety using their perceptions. This study aimed to address that gap by developing and validating a measure, the King’s Patient Safety Measure (KPSM), that assesses how patients perceive their safety when receiving acute care.

Background

We conducted this research in the UK in a large hospital providing acute and tertiary care to an ethnically and socioeconomically diverse community. We undertook the study in 2016, with fieldwork spread over 21 days in August. The NHS research ethics committee gave full approval for the research.

Participants were recruited from admissions to five acute and general medical wards at the hospital. Patients were eligible to participate in the study if they were more than 18 years old, had been admitted for acute elective or emergency medical care, and were due to be discharged within 48 hours.

Patients were ineligible if they could not communicate in English or had impaired cognitive function or acute mental health issues. The researcher (the lead author) did not access patients’ clinical notes, as this was considered ethically inappropriate because she was not giving direct care; instead, clinical administrators on participating wards identified and approached eligible patients. Informed consent was obtained from the participants.

Method

Rattray and Jones’s (2007) framework was used to provide a methodical way of developing a valid and reliable questionnaire. The framework consists of three steps.

Questionnaire development

This involves determining the rationale for the questionnaire and how it will answer the research question. It also involves consideration of the type of scale to be used and the response format. The questionnaire’s items are generated from a review of the literature.

The questionnaire was presented at a directorate research governance meeting where there were patient representatives. This helped frame the questionnaire layout and the questions asked. Patients were asked to rate their experiences of the care they had received. Items were drawn from empirical evidence concerning safety from the patients’ perspective – for example, questions focusing on patients’ experiences of and satisfaction with services. Account was taken of the National Inpatient Survey (Care Quality Commission 2014) when selecting items. This approach was informed by Rathert et al (2011), who used the validated items of the Picker Patient Experience Questionnaire (Jenkinson et al 2002) to strengthen and determine the items used in a new measurement tool.

Respondents would therefore answer questions that resonated with them and were highly relevant to this research. Rattray and Jones’s (2007) approach of using open-ended questions early in the design stage was followed to generate further topics for the tool.

Pilot

This stage tests the content and validity of the tool. Ten inpatients from a range of demographic backgrounds (age, gender, ethnicity), who were more than 18 years old and due to be discharged within 48 hours, were purposively selected.

Rattray and Jones’s (2007) framework recommends pilot testing questionnaires using cognitive interviews to test face and content validity. Content validity can be determined by asking either an expert panel or a relevant group of participants – in our case, patients who had experienced care – whether the questionnaire captured important items. The Question Appraisal System (Willis and Lessler 1999) was used to identify questions likely to produce response error.

Respondents reported that the questionnaire was easy to understand, quick to complete and covered issues of safety that were significant for them. They all completed the questionnaire reflecting on their entire admission and did not focus on one single event; however, some made comments about pre-admission experiences. We therefore modified the questionnaire to ask respondents to give answers relevant only to their current admission. The Likert scale was reduced from ten to six ordinal responses to decrease the burden on respondents, and neutral response options were removed to encourage respondents to commit to either a positive or a negative response.

Cross-sectional study

This uses exploratory factor analysis to test the reliability of the tool. There is no definitive guidance on sample sizes for factor analysis, with rules proposed including 200 subjects or more or a ratio of ten subjects per item (Comfrey and Lee 1992). Parameters that remain unknown until the data have been analysed – for example, items per factor, communalities and item loading magnitudes – will affect the sample size required (Costello and Osborne 2005).

We considered that a sample size of 150 to 200 would be achievable and large enough for the study’s results to reach statistical significance; with 13 items, the ten-subjects-per-item rule implies a minimum of 130 participants.

Statistical analysis

MPLUS v4.2 was used for exploratory factor analysis (EFA) and IBM SPSS v25 was used for all other statistical analyses. A polychoric correlation matrix for the ordinal variables was calculated and the pairwise correlations were assessed.

EFA with promax oblique rotation of the ordinal responses was used to reduce the 13 items in the questionnaire to a smaller number of more general factors (unobservable latent variables). Factor structure was established using a scree plot; factors with an eigenvalue greater than one were retained (Walker and Almond 2010), in combination with a theoretical assessment. Mean factor scores were calculated by averaging the responses to the items that loaded onto each factor.
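The study performed the EFA in MPLUS v4.2. As an illustration only, the same broad steps (promax oblique rotation, retention of factors with eigenvalue greater than one, mean factor scores) can be sketched in Python with the factor_analyzer package; the item responses below are hypothetical and none of the thresholds come from the study dataset.

```python
# Illustrative sketch only: the study used MPLUS v4.2, not Python. This mirrors
# the same broad steps (EFA, promax oblique rotation, retention of factors with
# eigenvalue > 1, mean factor scores) on hypothetical 1-6 Likert responses.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(42)
items = pd.DataFrame(
    rng.integers(1, 7, size=(158, 13)),            # 158 respondents x 13 items
    columns=[f"item_{i}" for i in range(1, 14)],
)

# Unrotated solution to inspect eigenvalues (scree plot / Kaiser criterion)
fa_scree = FactorAnalyzer(rotation=None)
fa_scree.fit(items)
eigenvalues, _ = fa_scree.get_eigenvalues()
n_factors = max(int((eigenvalues > 1).sum()), 2)   # guard: promax needs >= 2 factors

# Re-fit with promax (oblique) rotation for the retained factors
fa = FactorAnalyzer(n_factors=n_factors, rotation="promax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)

# Mean factor score: average of the items loading (>0.5) on each factor
for f in range(n_factors):
    loading_items = loadings.index[loadings[f].abs() > 0.5]
    items[f"factor_{f + 1}_mean"] = items[loading_items].mean(axis=1)
print(loadings.round(2))
```

Note that factor_analyzer works from the Pearson correlation matrix by default, whereas the study analysed a polychoric matrix in MPLUS, so this sketch is only an approximation of that procedure.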

A Cronbach’s alpha greater than or equal to 0.7 provides evidence of internal consistency (Tavakol and Dennick 2011). However, a high Cronbach’s alpha can indicate a level of item redundancy (Tavakol and Dennick 2011), with Streiner (2003) suggesting an upper limit of 0.90 for using Cronbach’s alpha. In such cases, the raw mean inter-item Pearson correlation should be used instead and should lie in the range 0.15 to 0.50 (Clark and Watson 1995).
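As a minimal sketch (not the authors’ SPSS procedure), Cronbach’s alpha and the raw mean inter-item Pearson correlation can be computed directly from an item-by-respondent matrix; the data here are hypothetical.

```python
# Hedged sketch, not the authors' SPSS output: Cronbach's alpha for a k-item
# scale and the raw mean inter-item Pearson correlation, on hypothetical
# 1-6 Likert responses.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 7, size=(158, 13)))  # 158 respondents x 13 items

def cronbach_alpha(df: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def mean_inter_item_r(df: pd.DataFrame) -> float:
    """Average of the off-diagonal pairwise Pearson correlations between items."""
    corr = df.corr(method="pearson").to_numpy()
    return corr[~np.eye(corr.shape[0], dtype=bool)].mean()

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
print(f"Mean inter-item r: {mean_inter_item_r(items):.2f}")
```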

Factor scores and a general linear model (analysis of covariance) were used to determine whether patients’ perceptions of their safety differed according to their characteristics. A ‘missing data’ category was added to gender and the Index of Multiple Deprivation (IMD) (Department for Communities and Local Government 2015) to minimise the number of cases lost from the analysis. Factor mean scores for the 13-item scale were calculated for all people with one or more non-missing values – 151 out of 158 responded to eight or more of the scale’s items, one person responded to five items, and six did not respond to any item.
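The study ran this model in SPSS. The sketch below shows the same idea in Python with statsmodels: an analysis of covariance in which missing gender and IMD values are recoded as an explicit ‘missing’ category so the cases are retained. The variable names and simulated data are assumptions for illustration only.

```python
# Illustrative sketch only (the study used SPSS): ANCOVA on a factor score with
# an explicit 'missing' category for gender and IMD. Variable names and data
# are hypothetical, not taken from the study dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 158
df = pd.DataFrame({
    "factor_score": rng.uniform(1, 6, n),          # mean score on the 13 items
    "age": rng.integers(18, 95, n),
    "gender": rng.choice(["female", "male"], n),
    "imd_quintile": rng.choice(["1", "2", "3", "4", "5"], n),
    "admission_type": rng.choice(["emergency", "elective"], n, p=[0.91, 0.09]),
})

# Mimic item non-response, then recode it as an explicit 'missing' category so
# those cases are retained in the model rather than dropped
df.loc[rng.choice(n, size=5, replace=False), "gender"] = np.nan
df.loc[rng.choice(n, size=8, replace=False), "imd_quintile"] = np.nan
for col in ["gender", "imd_quintile"]:
    df[col] = df[col].fillna("missing")

# General linear model (ANCOVA): continuous age plus categorical covariates
model = smf.ols(
    "factor_score ~ age + C(gender) + C(imd_quintile) + C(admission_type)",
    data=df,
).fit()
print(model.summary())
```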

We assessed our assumption that the model residuals – the difference between the observed and predicted values – were normally distributed, using a histogram of the standardised residuals and a quantile-quantile probability plot. A bootstrap analysis with 1,000 bootstrap samples was then conducted, as there was some evidence that the residuals departed from normality. However, the sample size of 158 was sufficiently large for this not to be a major concern (Kwak and Kim 2017).
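A hedged sketch of these diagnostics is shown below: a quick check of how closely the standardised residuals follow a normal distribution, and a non-parametric bootstrap of a model coefficient with 1,000 resamples. The data and the single ‘age’ covariate are assumptions, not the study’s model.

```python
# Hedged sketch, not the study's actual model: checking whether standardised
# residuals are roughly normal and bootstrapping a coefficient with 1,000
# resamples of the rows.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(2)
n = 158
df = pd.DataFrame({
    "factor_score": rng.uniform(1, 6, n),
    "age": rng.integers(18, 95, n),
})
fit = smf.ols("factor_score ~ age", data=df).fit()

# Quick normality check on the standardised residuals via a Q-Q fit
std_resid = fit.get_influence().resid_studentized_internal
(_, _), (slope, intercept, r) = stats.probplot(std_resid, dist="norm")
print(f"Q-Q plot correlation with normal quantiles: {r:.3f}")

# Non-parametric bootstrap: refit the model on 1,000 resamples of the rows
boot_coefs = [
    smf.ols("factor_score ~ age", data=df.sample(n=n, replace=True)).fit().params["age"]
    for _ in range(1000)
]
ci_low, ci_high = np.percentile(boot_coefs, [2.5, 97.5])
print(f"Bootstrap 95% CI for the age coefficient: ({ci_low:.4f}, {ci_high:.4f})")
```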

The Pearson correlation coefficient between the factor score and the question ‘Please can you say how safe you felt during your stay at the hospital?’ was used to determine whether the factor score was a valid measure of patients’ overall rating of how safe they felt.
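A minimal sketch of this validity check, assuming simulated values for the overall safety rating and the mean factor score rather than the study data, is shown below.

```python
# Minimal sketch of the validity check described above; the overall safety
# rating and the mean factor score are simulated, not taken from the study.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
overall_safety = rng.integers(1, 7, 158)                 # 1-6 overall rating item
factor_score = overall_safety + rng.normal(0, 1.5, 158)  # correlated mean score

r, p_value = pearsonr(factor_score, overall_safety)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```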

The null hypothesis of no difference or association was rejected if the P value was less than 0.05 (a type I error rate of 5%).

Qualitative analysis

ATLAS.ti was used to analyse the open-ended questions, guided by University of Surrey (2011). This involved coding the data using descriptive content analysis and a deductive approach. Categories in the coding frame were informed by existing research and theory about patients’ perceptions of safety. Responses to each question were analysed to develop an early coding scheme, which was refined into the final codes.

Results

A total of 183 patients were approached, of whom 158 (86%) completed the questionnaire. Of those respondents, a total of 98 (62%) provided answers to one or more open-ended questions.

Patient characteristics

Tables 1a to 1e show important characteristics of the participants: age, gender, ethnic group, length of stay, mode of admission and IMD scores. The average age of participants was 56, there were similar numbers of female and male respondents, their mean length of stay was ten days, most respondents (91%) were emergency admissions and about two-thirds were of white British ethnicity. More than half of all participants (58%) were from the two most deprived quintiles.

Likert scale responses

Table 2 shows participants’ responses to each of the 13 items of the questionnaire. Patients were much more likely to agree than disagree with the item statements. Patients most strongly agreed (70%) with item six – ‘I could have a member of my family or close friend for support when I wanted them’; they most strongly disagreed (13%) with item one – ‘I was allocated a bed straight away’.

Of the 78 pairwise correlations, 27 (35%) were strong, two (3%) were weak and the remaining 49 (63%) correlations were moderate to fairly strong. The two weakest polychoric correlations occurred between ‘I was allocated a bed straight away’ and ‘Staff were familiar with equipment’ (0.24) and ‘Staff were familiar with procedures’ (0.26). Correlation between ‘Staff were familiar with equipment’ and ‘Staff were familiar with procedures’ was 0.88. There was a high correlation between ‘Staff listened carefully to what I had to say’ and ‘Staff explained things in a way that I could understand’ (0.81), as well as ‘Staff explained things in a way that I could understand’ and ‘I had confidence in the staff treating me’ (0.80).

The scree plot did not suggest the presence of more than two factors, as only two factors had eigenvalues greater than one (Figure 1).

Factor one had an eigenvalue of 7.64 and consisted of items one, two, three, four, five, seven, eight and nine (Table 3). All these items apart from item one related to communication with healthcare professionals.

Factor two had an eigenvalue of 1.14 and consisted of items six, ten, 11, 12 and 13 (Table 3). These items related to the context in which care was delivered: having a family member or close friend for support, having enough staff on the ward, staff demonstrating they were familiar with procedures and equipment, and communication with staff regarding medication.

The correlation between the two factors was high (r=0.70). The two-factor solution did not convey any advantage over the simpler, single-factor solution and there were no strong theoretical grounds for opting for the former. All items for the single factor had loadings above 0.5. Cronbach’s alpha for the 13 items was high (0.91), providing further support for a single factor but suggesting an element of item redundancy. The mean inter-item Pearson correlation was 0.45, so within the 0.15 to 0.50 range. Items two to five were the most correlated of the 13 items.

Based on the general linear model, none of the covariate effects was statistically significant at the 5% level, indicating that patients’ characteristics did not appear to influence how they perceived their safety. The correlation between the mean safety score and patients’ overall rating of how safe they felt during their hospital stay was strong (r=0.65), suggesting that the 13-item scale tapped into aspects of care that patients felt were important in making them feel safe.

Responses to open-ended questions

Patients felt safe when staff explained care to them in a way they could understand. The friendliness and kindness of staff were important to patients’ experience of feeling safe. Patients recognised and described the effects of specialist expertise and multidisciplinary team working in making them feel safe. The importance of the hospital having enough staff for ‘constant visibility’ and ‘regular interaction’ was mentioned. Taking time to address patients’ medication queries was also cited as a feature of feeling safe.

Care that made patients feel unsafe was often the opposite of what made them feel safe. Patients recognised and described when communication between staff was unclear and how this made them feel unsafe. Staff being unaware of patients’ medication or past medical history were features of concern.

Patients had the opportunity to provide feedback about how staff responded and what staff could have done differently to make them feel safe. They said they wanted better communication, including staff listening more closely to them and providing reassurance.

Discussion

This study developed a patient-reported outcome measure solely with patients. Patients’ characteristics and type of admission did not influence their perceptions of safety, suggesting the KPSM has the potential to be applied widely in hospitals providing acute services. The study demonstrated that the questionnaire is a reliable and valid tool.

Further development is necessary to refine the KPSM. The questionnaire’s open-ended questions showed that patients described safety in ways that did not completely align with healthcare professionals’ perceptions. This experience of safety reflected their entire episode of care and, in some cases, experiences before admission. This had not been demonstrated in previous research.

It is important to assist patients in describing their experiences using a narrative approach, rather than breaking feedback down into the component parts of their care pathways. The relationship between patients and clinical staff was crucial; this was particularly evident in responses to the open-ended questions, which gave examples of how staff communicated with and listened to patients. Such communication was an important indicator for patients in making them feel safe.

Patients recorded they felt unsafe when they witnessed violence and aggression towards staff and other patients. This was mentioned in the open-ended question that asked patients to give an example of an incident that made them feel unsafe. Witnessing violence and aggression against other patients and staff had not been cited in previous studies. It therefore requires further research.

Limitations

The sample size of 158 met one possible criterion for factor analysis, having at least ten subjects per item. However, Comfrey and Lee (1992) advocated a sample size of at least 200, and preferably more. A much larger sample than 158, ideally recruited from multiple sites, would be required to validate the tool more thoroughly.

The psychometric testing undertaken provides evidence of the validity and reliability of the KPSM, but additional testing is required. The high Cronbach’s alpha suggests that some of the KPSM’s items might be redundant; however, the mean inter-item Pearson correlation was acceptable (Clark and Watson 1995).

It might be possible to remove some of the items relating to staff communication – listening, explaining, consistency and confidence – without invalidating the scale.

Content validity was determined during the pilot by asking patients whether the questionnaire captured what was important to them. Content validity is typically established by computing a content validity index, in which content experts rate the relevance of each item to the scale (Polit and Tatano Beck 2006). However, a content validity index was not determined for the KPSM.
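For context, a content validity index of the kind described by Polit and Tatano Beck (2006) is typically computed as in the generic sketch below; the expert ratings are hypothetical and no such analysis was performed on the KPSM.

```python
# Generic sketch of a content validity index (Polit and Tatano Beck 2006); it
# was not computed for the KPSM, and the expert ratings below are hypothetical.
# Each of five raters scores each item's relevance on a 1-4 scale; I-CVI is the
# proportion of raters giving 3 or 4, and S-CVI/Ave is the mean of the I-CVIs.
import numpy as np

rng = np.random.default_rng(4)
ratings = rng.integers(1, 5, size=(13, 5))      # 13 items x 5 expert raters

i_cvi = (ratings >= 3).mean(axis=1)             # item-level content validity
s_cvi_ave = i_cvi.mean()                        # scale-level (average) CVI

for item, cvi in enumerate(i_cvi, start=1):
    print(f"Item {item}: I-CVI = {cvi:.2f}")
print(f"S-CVI/Ave = {s_cvi_ave:.2f}")
```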

Inpatient administration of the KPSM created the potential for response bias. Patients may have been reluctant to give honest answers, as they may not have wanted to respond negatively about staff who were caring for them while still an inpatient.

Different disease conditions could influence patients’ perceptions of safety. However, the researcher was not given ethical approval to access patients’ notes and therefore was unable to obtain information about patients’ diseases.

Conclusion

The KPSM comprises three sections. The first is the 13-item scale, which targets areas for improvement. The second is a rating scale that enables respondents to express how safe they felt throughout their admission. The third consists of open-ended questions, which provide free-text information about how patients perceive safety and put their care into context. Together, the sections form a robust measure that can provide a picture of safety before and after improvements have been made.

As an early-warning system, the KPSM can identify possible harms. It gives patients real-time opportunities to express their perceptions, so can be used to improve care. The open-ended questions also enable patients to describe responses from staff when they have raised concerns and what staff should do differently to make them feel safe.

In addition, the KPSM allows for innovative, inter-professional learning and team-working through listening to feedback from patients, targeting safety improvement from the patients’ perspective. It is therefore a valuable educational tool for nurses in pre- and post-registration training.

This study demonstrated that the KPSM is a reliable and valid measure. Testing with a larger sample is needed to develop the instrument further. Additional studies could focus on patients who are younger, from more diverse backgrounds, whose first language is not English or who have a cognitive impairment/history of psychiatric illness.

Key points

• The King’s Patient Safety Measure (KPSM) is a validated tool appropriate for a diverse range of patients in acute settings

• The KPSM has the potential to act as an early warning tool, using feedback from patients

• The KPSM allows for innovative, inter-professional learning and team-working through listening to feedback from patients, targeting safety improvement from the patients’ perspective

Acknowledgements

Jane Sandall, a professor at King’s College London, is a National Institute for Health Research (NIHR) senior investigator and is supported by the NIHR Applied Research Collaboration South London (NIHR ARC South London) at King’s College Hospital NHS Foundation Trust. The views expressed are those of the authors and not necessarily those of the NIHR or the Department of Health and Social Care.
