The Association Between Patient Safety Culture and Accreditation at Primary Care Centers in Kuwait: A Country-Wide Multi-Method Study

Introduction

Patient safety remains a top priority within healthcare systems around the world.1,2 How well an organization performs in regard to patient safety, and the quality of the healthcare it delivers, is shaped by its culture concerning patient safety.3,4 This culture is defined by the staff’s shared basic assumptions, values, attitudes and behaviors that interact with the structure of the organization and influence the management of daily safety issues.5,6 Enhancing patient safety necessitates an evaluation of that culture to identify areas requiring improvement and to track changes over time.3,7 Researchers have taken different approaches to studying safety culture in healthcare settings, of which the questionnaire-based quantitative approach is the most common.7–9

Safety is an integral dimension of many different models of healthcare quality.10–12 Although the concept of quality varies in definition and assessment,13–15 the global consensus is that the performance of a healthcare organization is inextricably linked to the quality of its care.12,16,17 Other dimensions of quality that are frequently cited or evaluated in the literature include effectiveness, people-centricity, timeliness, efficiency, equitability and accessibility.13,15,16

In the healthcare sector, accreditation is a tool for assessing the quality of care and can serve as a leveraging strategy for its optimisation.16,18–20 The International Society for Quality in Health Care (ISQua) defines accreditation as:

A method used by healthcare organizations to accurately compare their level of performance to specified standards and to develop strategies for continually enhancing a healthcare system.21

The standards of many accreditation programs are designed to measure different quality dimensions whether implicitly22,23 or explicitly stated.24,25

In collaboration with Accreditation Canada, the Ministry of Health in Kuwait has developed accreditation standards for the country’s Primary Healthcare Centers (PHCs). A modified version of Accreditation Canada’s Qmentum program was used in the accreditation surveys. Primary care is provided at more than 100 PHCs across the country, whereas hospitals provide secondary, tertiary and more specialized care. The Ministry is the owner, funder, operator, and exclusive regulator of the public health system. Five health regions manage the different care levels, including primary care, for which they provide resources and logistical support to enable healthcare institutions to meet their objectives and adhere to Ministry regulations.7

The present study aims to examine the relationship between patient safety culture (PSC) in public primary care settings in Kuwait, as reported by healthcare providers and staff, and the accreditation scores of the same settings as rated by accreditation surveyors. Such studies are lacking in primary care and, indeed, in all healthcare settings in Kuwait. Moreover, there is a gap in what we know about the impact of PSC on PHC performance in accreditation. The main objective of this study is to assess the impact of PSC on how PHCs comply with accreditation standards in general and with specific safety and quality standards. First, we compare PHC demographic groups (eg, PHC size, accreditation year, presence of in-house clinical support services and Health Region) in terms of their PSC and accreditation scores. Then, we examine the associations between PSC ratings and accreditation compliance.

Methods

Setting

This was a multicenter study conducted in government primary care settings in Kuwait. To be representative of the nationwide public sector, the study was conducted in all PHCs which had undergone accreditation surveying. The 75 studied PHCs are grouped into the five health regions (coded 1, 2, 3, 4, and 5).

Study Design and Instrument

This was a multi-method study conducted with cross-sectional and retrospective quantitative approaches. For the cross-sectional approach, we circulated an adapted version of a validated and reliable tool developed by the Agency for Healthcare Research and Quality (AHRQ) to assess the PSC reported by providers and staff.26,27 In the retrospective approach, we reviewed and included scores on accreditation standards compliance.

The self-administered questionnaire—the Medical Office Survey on Patient Safety Culture (MO-SOPS)—was adapted to suit primary care settings in Kuwait. Then, the tool was translated into Arabic and later tested for reliability. A full description of the adaptation, the translation and the results was published previously.7

The survey includes 38 items, grouped into 10 composites, which assess various aspects of PSC. The composites are Teamwork, Patient Care Tracking/Follow-up, Organizational Learning, Overall Perceptions of Patient Safety and Quality, Staff Training, Owner/Managing Partner/Leadership Support for Patient Safety, Communication about Error, Communication Openness, Office Processes and Standardization, and Work Pressure and Pace. The survey also asked participants to provide an overall rating on patient safety, and to rate the degree to which their PHC is patient-centered, effective, timely, efficient and equitable. We refer to these last five PSC items as “quality measures”. All items were rated on five-point Likert scales for agreement.

To examine accreditation standards compliance, we reviewed the final accreditation reports that were issued to each PHC. Each accreditation standard typically contains more than one criterion. The total number of accreditation criteria is 450, some of which are designed to assess one of the following safety and quality dimensions: “Population Focus”, “Accessibility”, “Safety”, “Worklife”, “Client-Centered Services”, “Continuity of Services”, “Effectiveness” or “Efficiency”.

Each PHC was assessed against applicable accreditation criteria by a team of four to six external surveyors. The surveyors used a three-level scale to rate the degree to which they believe the PHC is implementing the required criterion (Not Implemented, Partially Implemented or Completely Implemented). A fourth option—Not Applicable—could be selected if a criterion did not apply to that particular PHC.

Sampling and Data Collection

Using a total population sampling technique (n = 6916), we included all clinical, allied, administrative and managerial staff with at least one year of experience, whom we expected to have sufficient knowledge about their PHC and its operations to provide informed answers to the survey. We excluded staff with less than one year of experience at the center, those who work in a PHC which has not undergone an accreditation survey, those who have been relocated from another health region/center, and those on extended leave.

We collected the data in April and May 2018. According to personal preference, the participant completed a printed copy of either the English (Appendix 1) or the Arabic language version of the modified MO-SOPS questionnaire. Survey forms were distributed at each PHC by points of contact who facilitated data collection and ensured an acceptable response rate. We held a full-day methodology workshop to ensure a common and comprehensive understanding of the study protocol and the methods of data collection and communication. The safety and risk management team in each of the five health regions supervised data collection activities.

In addition, we examined the reports of accreditation surveys conducted within 12 months of the MO-SOPS data collection. We collected all accreditation criteria ratings for the 75 PHCs in a spreadsheet and then identified the criteria explicitly stated to fall under safety or other relevant quality dimensions, namely, Client-Centered Services, Effectiveness, Efficiency and Accessibility. The first three of these dimensions match the PSC quality measures Patient Centered, Effective and Efficient, respectively. The accreditation standards glossary defines the fourth quality dimension, Accessibility, as the provision of timely and equitable services. Hence, we selected the accreditation criteria under the Accessibility dimension to be assessed against two PSC quality measures: Timely and Equitable. We also identified accreditation criteria that assess the timeliness of services. Thus, we created a separate quality dimension, Timeliness, and assessed the criteria under this dimension against the Timely quality measure arising from the PSC.

Data Management and Analysis

To ensure high data quality and integrity, we checked a random sample (10%) of the returned questionnaires for accuracy and completeness. At least one section had to be completed for a returned questionnaire to be included in the analysis. Our version of the MO-SOPS questionnaire showed internal consistency (Cronbach's α = 0.886). The reliability of the composites (mean Cronbach's α = 0.602, range: 0.463–0.699) was acceptable according to some literature.28 We coded the identities of the participants, their PHC and health region to maintain their anonymity.
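For readers unfamiliar with the reliability statistic reported here, the following sketch illustrates the standard Cronbach's α formula on hypothetical 5-point Likert ratings; it is an illustration of the statistic, not the code used in this study.

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
    `items` is a list of per-item rating lists (one inner list per questionnaire item)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per respondent across the k items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical ratings from four respondents on a three-item composite
ratings = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 4, 2, 4]]
print(round(cronbach_alpha(ratings), 3))  # 0.8
```

Because the ratio of summed item variances to total-score variance is unchanged whether population or sample variances are used, either convention yields the same α.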

In accordance with the AHRQ’s user’s guide,26 for each item, we combined the positive Likert scale ratings (Strongly Agree, Agree) to calculate the percentage of positive responses. Negatively worded items were reverse-coded in order to calculate a percentage of positive ratings for these also. We determined the score for a particular safety culture composite by calculating the average percentage of positive responses for all items included in that composite.
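A minimal sketch of this scoring scheme follows; the item ratings are hypothetical and the function names are ours, but the logic (combining the two favorable scale points, reverse-coding negatively worded items, then averaging item percentages into a composite score) mirrors the AHRQ approach described above.

```python
POSITIVE = {"Strongly Agree", "Agree"}
NEGATIVE = {"Strongly Disagree", "Disagree"}

def percent_positive(ratings, negatively_worded=False):
    """Percentage of positive responses for one item; reverse-coded if needed."""
    favorable = NEGATIVE if negatively_worded else POSITIVE
    return 100 * sum(r in favorable for r in ratings) / len(ratings)

def composite_score(item_percentages):
    """Composite score = mean percent-positive across the composite's items."""
    return sum(item_percentages) / len(item_percentages)

# Hypothetical two-item composite (second item negatively worded)
item1 = ["Agree", "Agree", "Neutral", "Strongly Agree"]        # 75% positive
item2 = ["Disagree", "Agree", "Strongly Disagree", "Neutral"]  # reverse-coded: 50%
score = composite_score([percent_positive(item1),
                         percent_positive(item2, negatively_worded=True)])
print(score)  # 62.5
```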

With regard to overall accreditation compliance, we calculated the percentage of criteria with a rating of Completely Implemented. When calculating the compliance for safety and other quality dimensions, we selected the criteria covered under the corresponding dimension. The compliance percentages used in this study were calculated as follows:

Percentage accreditation compliance = (Number of completely implemented applicable accreditation criteria / Number of all applicable accreditation criteria) × 100.
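The calculation above can be sketched as follows, with hypothetical surveyor ratings; criteria rated Not Applicable are excluded from both the numerator and the denominator, consistent with the formula's restriction to applicable criteria.

```python
def compliance_percentage(ratings):
    """Percent of applicable accreditation criteria rated Completely Implemented."""
    applicable = [r for r in ratings if r != "Not Applicable"]
    complete = sum(r == "Completely Implemented" for r in applicable)
    return 100 * complete / len(applicable)

# Hypothetical ratings for five criteria at one PHC
ratings = ["Completely Implemented", "Partially Implemented",
           "Completely Implemented", "Not Applicable", "Not Implemented"]
print(compliance_percentage(ratings))  # 50.0 (2 of 4 applicable criteria)
```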

Data were processed in Excel (Microsoft 365) and then analyzed using SPSS 23 (α level = 0.05). The Kolmogorov–Smirnov and Shapiro–Wilk tests were used to check the normality of the data. The analysis of the quantitative data included univariate descriptive (means, standard deviations, frequencies, percentages) and bivariate (Student's t-tests, ANOVA F-tests) analyses to examine the association between the PSC items (PSC composites and quality measures) and accreditation standards group compliance, and to examine how trends in the PSC items and accreditation compliance differ across the PHC characteristics. Non-parametric tests (Mann–Whitney U-test, Kruskal–Wallis H-test, Spearman correlation) were used when indicated. In these analyses, we used the percentages of positive ratings. The analysis also included multivariate analysis (multiple regression) to help predict the percentages of accreditation standards compliance. Such predictions can guide the development of actionable strategies. The independent variables included in the multiple regression analysis were those found to be statistically significant (p ≤ 0.05) in the correlational analysis, with a correlation coefficient ≥ 0.100. To obtain the best-fitting regression models, we included the statistically significantly correlated PHC demographics. We also tested them for moderating effects on the PSC items.
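The screening rule for regression predictors (p ≤ 0.05 and |r| ≥ 0.100) can be sketched as below. The predictor names and correlation results are hypothetical illustrations, not values from this study's analysis.

```python
def select_predictors(correlations, alpha=0.05, min_r=0.100):
    """Keep predictors that are statistically significant (p <= alpha)
    and whose correlation coefficient is at least min_r in magnitude."""
    return [name for name, (r, p) in correlations.items()
            if p <= alpha and abs(r) >= min_r]

# Hypothetical Spearman results: predictor -> (r, p-value)
results = {
    "Communication about Error": (0.32, 0.004),
    "Teamwork": (-0.05, 0.670),   # weak and not significant
    "Timely": (0.28, 0.012),
    "Work Pressure and Pace": (0.15, 0.210),  # not significant
}
print(select_predictors(results))  # ['Communication about Error', 'Timely']
```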

Results

We distributed 6916 questionnaires among the 75 eligible PHCs; 6032 (87.2%) were returned. We screened the returned questionnaires for their eligibility and excluded a further 744, yielding a final response rate of 76.5% (n = 5288).

The results of the analyses used are shown below. A descriptive analysis of the sample demographics is reported first, followed by a comparison of the demographic groups in terms of mean PSC and accreditation scores. Next, the correlations between PSC and accreditation scores are explored before the section concludes by reporting the predictors of accreditation scores resulting from a regression analysis.

Socio-Demographics

Table 1 shows the frequencies and percentages of the socio-demographic and demographic data collected from the participants and the 75 PHCs, respectively, in this survey. Because the current study examines the average PSC scores of PHCs and not individual responses, the participant socio-demographic data do not contribute to the current analysis; details are published elsewhere.7 However, we list these data simply to show the diversity and good representation of the sample.

Table 1 Participant (n = 5288) Socio-Demographic and PHCs (n = 75) Demographic Data

Regarding the PHC demographic data, more than a quarter of the participating PHCs (28%) are located in Health Region 1. Only one PHC in this survey is located rurally, whereas more than four-fifths (82.7%) are located in suburban areas. Settings with more than 100 members of staff comprised slightly less than three-fifths of the PHCs (58.7%). Regarding in-house support services, the majority of PHCs have a laboratory service either alone (77.3%) or with a radiology service (10.7%). Two-thirds of the PHCs (66.7%) were surveyed for accreditation in 2017.

Comparison of the PSC, Quality Measure and Compliance Scores of PHC Demographic Groups

Using Student’s t-tests and ANOVA F-tests, we compared the PHC groups based on their demographic category (Health Region, location, size, year of accreditation) in terms of the mean scores of PSC, quality measures (Table 2) and accreditation compliance (Table 3). Mann–Whitney U-tests and Kruskal–Wallis H-tests were used for non-normally distributed data.

Table 2 Comparing Means of Average PSC Positive Percentage Across All Composites, Overall Rating on Patient Safety and Overall Ratings on Quality Measures Between Demographic Groups of PHCs (n = 75)

Table 3 Comparing Means of Percentages of Overall Accreditation Standards Compliance, Safety Standards Compliance and Accreditation Standards Compliance per Relevant Quality Dimensions Between Demographic Groups of PHCs (n = 75)

Table 2 shows that Health Region 3 had the lowest mean scores in five of the seven PSC items, whereas the highest mean scores were in Health Region 4. The differences between Health Regions were not statistically significant. PHCs with 50 or fewer staff were assigned higher mean scores for the PSC items; the differences between PHC size groups were statistically significant (p ≤ 0.05) for the average PSC positive percentage across all composites only. The PHCs that underwent accreditation surveying in 2017 had the highest mean PSC scores on average; however, the differences were not statistically significant. PHCs in suburban locations were assigned higher mean scores on average; these differences were statistically significant (p ≤ 0.05) in four instances.

By contrast, the accreditation compliance scores (Table 3) show more statistically significant differences between the PHC demographic groups. Two demographic variables show statistically significant differences across all seven accreditation compliance percentages: PHC size and accreditation year. PHCs with more than 100 staff and those surveyed in 2018 had the highest mean accreditation compliance scores, except for the Efficiency dimension with regard to the accreditation year. Again, the PHCs of Health Region 3 presented the lowest mean compliance scores across all accreditation compliance categories. PHCs with in-house laboratory and radiology services showed higher compliance percentages. For the overall accreditation and overall safety compliance scores, there were statistically significant differences according to the category of Health Region (1, 2, 3, 4, 5) and the category of in-house service.

Association of Patient Safety Culture Items with Accreditation Standards Groups

As shown in Table 4, non-parametric correlation tests revealed correlations between the PSC items (16 variables) and the relevant accreditation standards compliance groups (seven variables). We adopted Ratner’s interpretation guidelines29 for the correlation coefficient r: |0.000–0.300| is a weak correlation, |>0.300–0.700| is moderate and |>0.700–1.000| is strong. Statistical significance is indicated using adjusted α levels to avoid type I errors.30
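The interpretation bands above, together with a Bonferroni-style α adjustment (one common way to control type I error across multiple tests; the specific adjustment used in the study is cited rather than spelled out here), can be sketched as:

```python
def correlation_strength(r):
    """Classify |r| using Ratner's guideline bands."""
    a = abs(r)
    if a <= 0.300:
        return "weak"
    if a <= 0.700:
        return "moderate"
    return "strong"

def bonferroni_alpha(alpha, n_tests):
    """Adjusted per-test alpha level when running n_tests comparisons."""
    return alpha / n_tests

print(correlation_strength(0.450))   # moderate
print(correlation_strength(-0.107))  # weak
print(round(bonferroni_alpha(0.05, 7), 4))  # 0.0071
```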

Table 4 Association of Patient Safety Culture Scores with Overall Accreditation Standards Compliance, Overall Safety Standards Compliance and Accreditation Standards Compliance per Relevant Quality Dimensions (Client-Centered Services, Effectiveness, Timeliness, Accessibility and Efficiency)

All correlations were positive except in 15 instances, six of which involve the Teamwork PSC composite, although all 15 negative correlations were very weak (r = −0.010 to −0.107) and statistically insignificant. The highest correlation calculated (r = 0.450) was between the Effective quality measure and compliance with the Overall Safety Standards. The Patient Care Tracking/Follow-up composite showed statistically significant weak-to-moderate positive correlations with three of the seven accreditation compliance groups, whereas six other composites showed no statistically significant correlations. The Effective measure is the only one that showed statistically significant correlations in all its tested relationships with the accreditation compliance groups. Interestingly, its three correlation coefficients (0.401, 0.409 and 0.450) are the highest of all those reported here.

Collectively, there are 14 significant correlations among the 93 relationships examined. Overall Accreditation Standards compliance is significantly associated with only one of the 10 PSC composites but with three of the six PSC measures. Overall Safety Standards compliance had the most statistically significant correlations, with half of the 16 safety culture items: four composites and four measures. Of the five quality dimensions, Effectiveness was the only one with statistically significant moderate-strength correlations with any PSC items, namely, the Patient Care Tracking/Follow-up composite and the Effective measure.

Multiple Regression Analysis

The results of the multiple regression analysis are shown in Table 5. We list only the predictor variables included in the regression models. The table shows two dependent variable scores: “Overall Accreditation Standards Compliance” and the subset “Overall Safety Standards Compliance”. This is because some accreditation standards are designed to assess safety, which is expected to be affected by PSC. The values reported represent how much the accreditation compliance percentage increases or decreases for a unit change in the predictor variable (unstandardized coefficients). The standardized coefficients allow the relative importance of the coefficients in a model to be compared. Empty cells indicate that a predictor was not included in the regression model for the corresponding dependent variable. Nominal variables, such as the PHC demographic data, were treated as dummy variables. Moderating effects of the PHC demographic data were tested but were excluded from the reported models because of poor fit.

Table 5 Primary Health Care Center Patient Safety Culture Composites and Demographic Data as Predictors of Overall Accreditation Standards Compliance and Overall Safety Standards Compliance Scores (Dependent Variables)

In this regression analysis, the PSC composites and quality measures accounted for 35% and 38% of the variability in the Overall Accreditation Standards compliance and Overall Safety Standards compliance, respectively. Of the 10 PSC composites, Communication about Error and Overall Perceptions of Patient Safety and Quality contribute to the prediction of the Overall Safety Standards compliance percentage. Of the five quality measures, only Timely contributes to the prediction of percentage compliance with Overall Accreditation Standards and Overall Safety Standards. Based on the standardized beta coefficients, the PSC items are 1.5-fold more important than PHC demographics for predicting compliance with Overall Safety Standards. The demographics have more importance regarding the percentage of Overall Accreditation Standards compliance.

With respect to the PHC demographics, the size of the PHC, the accreditation year and the Health Region contribute to the prediction of both accreditation standards compliance groups. Conducting the accreditation survey one year later can increase the compliance percentage of a PHC by a value between 8.2 (Overall Accreditation Standards) and 10.5 (Overall Safety Standards). The size of the PHC has a comparable effect as the number of staff increases. For a PHC in Health Region 3, the predicted accreditation standards compliance percentages are expected to be lower than those for a similar PHC in another Health Region by values of 7.5 (Overall Accreditation Standards) and 10.9 (Overall Safety Standards).

Discussion

The literature contains a plethora of evidence for the positive impact of healthcare accreditation programs on an organization’s performance,31–37 quality of care,31–38 PSC,33,34,38–40 and safety practices.31–37,40 Of particular importance is a study conducted in PHCs in Qatar—a comparable country to Kuwait in the region—which compared the results of two PSC assessments within a three-year interval.41 The study concluded that accreditation, among other interventions, resulted in a statistically significant increase in all PSC composites and almost all PSC items.

In contrast to all the literature reviewed, the current study was not intended to assess the impact of accreditation on PSC. Instead, we examined the impact of PSC on the accreditation performance of PHCs. We believe the relationship between accreditation and PSC is not unidirectional, although readers may infer that it is, based on the recurring approach of assessing the impact of accreditation on PSC. Accepting that the relationship is bi-directional, we believe that studying the impact of PSC on accreditation is more rational for several reasons. Firstly, culture is established by shared beliefs, basic assumptions and values.6 Changing these deeply rooted components and the product of their interaction is a significant and lengthy challenge,42,43 especially within enormously complex systems.44,45 Improvements in culture require systematic, multifaceted and targeted interventions.46 Hence, it is somewhat bold to conclude that implementation of an accreditation program alone can result in a culture change. Another convincing reason for our study approach is that accreditation assesses the organization’s level of performance,21 and performance is a manifestation of the softer, less immediately visible aspects of an organization, that is, its culture.47,48 It is more logical to predict performance, which is an outcome, from evaluations of attitudes and beliefs, which are inputs, rather than vice versa. In other words, culture improvement is harder than performance improvement and needs a longer time and a more systematic and deeper approach. Besides, culture is the soil, whereas performance is the fruit.

The PSC scores used in this study are based on a national assessment conducted in 2018 that included 94 PHCs.7 Ninety-one of the PHCs participating in that PSC assessment had undergone accreditation surveying since 2015. In the current study, we excluded the 16 PHCs surveyed in 2015 and 2016 because, although culture takes time to change, it can nonetheless change over such a long interval. However, analysis of the relationship between the PSC scores and accreditation compliance of all 91 PHCs, using the same statistical tests, produced findings comparable to those reported here.

We followed the recommended original AHRQ construct of the 10 composites. This did not result in the best model-fit of the constructs (ie, Cronbach’s α as low as 0.463). The decision was made for the purpose of publishing a study that could be benchmarked internationally. Nevertheless, some authors have identified comparable Cronbach’s α values as acceptable (0.45–0.98) or sufficient (0.45–0.96).28 Others defended their acceptance of a Cronbach’s α as low as 0.446 for a three-item composite by arguing that a small increase in the number of items would result in higher Cronbach’s α values.49 This is the case in the original AHRQ construct of the MO-SOPS, where all composites have either three or four items.27

PHC demographics (Health Region, specialist clinics, size and location) were previously found to have a significant effect on the means of the “average PSC score across all composites”.7 However, after the exclusion of 19 PHCs in this study, only PHC size remained statistically significant (Table 2). Hence, we examined the effect of the PHC demographics on the accreditation compliance percentages and their relationship with the PSC scores. Comparing accreditation compliance percentages across the PHC demographic groups revealed that, on average, statistically significantly higher compliance was reported for larger PHCs and for those with in-house laboratory and radiology services (Table 3). The size of a PHC is defined by its number of staff. Greater staff numbers reduce the workload on individuals, allowing more time to be allocated to preparing for accreditation and to contributing to overall performance.50,51 PHCs with in-house laboratory and radiology services benefit from those services’ routines of standardizing, documenting and monitoring work and from their more rigorous safety practices.33

With respect to the year of accreditation, statistically significantly higher compliance was reported for PHCs surveyed in 2018 and 2019 than for those surveyed in 2017. Several factors might explain this observation. Later-surveyed PHCs actively participated in a collaborative approach to learn from their earlier-surveyed peers.52 Moreover, the competency of accreditation surveyors can improve over time, enabling them to better recognize and evaluate evidence of compliance against the accreditation criteria. In addition, in 2017, the Ministry of Health issued surveyor and implementer guides, which might also have contributed to improved surveying skills and implementation outcomes. Also, the surveyors themselves are PHC workers, which gave them a double role in accreditation, as both surveyor and implementer. When they started implementation at their own PHCs, they executed important quality improvement projects. A final and more subtle factor is worth postulating: surveyors might have been subject to the halo effect, a type of rater drift that explains variation in ratings over time.53 In this scenario, surveyors become more subjectively appreciative of the efforts put in place rather than assessing the measurable elements of accreditation with complete objectivity. It is worth noting that the 2019 accreditation compliance percentage score was lower than the 2018 score. This is because only six PHCs were surveyed in 2019, none of which is located in Health Region 4, the region with the highest average compliance percentage score.

We observed stronger correlations of the PSC measures with the overall accreditation and safety compliance percentages than with the quality dimension compliance percentages (Table 4). We were surprised to find that each PSC measure has a weaker, mostly insignificant, correlation with its related quality dimension. We assume that a lack of measurable elements under these quality measures, which should resemble the items comprising the PSC composites, affected the participants’ perception of how these measures should be assessed. Ultimately, participants might have rated the quality measures in the PSC survey arbitrarily. By contrast, the surveyors had the criteria that form the accreditation standards to guide them while rating aspects and components of the various dimensions of quality.

Research has indicated that teamwork is positively related to performance.54 Hence, the very weak correlation between the Teamwork composite and accreditation compliance was unexpected. In the current study, Teamwork was the highest positively rated composite and also had the smallest variation,7 resulting in an almost steady level across all PHCs. By contrast, the PHCs had lower mean percentages of accreditation compliance with greater variation. This could be a reasonable explanation for the observed correlation.

Another interesting finding is that although there were four PSC composites having statistically significant correlations with accreditation compliance percentages (overall accreditation and/or overall safety standards), only two could predict the safety standards compliance. This is because the regression analysis might exclude a significant individual factor in favor of a multivariate interaction to arrive at the best-fitting model.

It is worth stating that the magnitude of the increase in the compliance percentage does not, by itself, indicate the importance or influence of a predictor. The outcome is affected by the range of the predictor. The PSC composites range from 0 to 100, whereas the demographics size and accreditation year range from 0 to 2. The other demographics were treated as dichotomous. Hence, a one-unit increase in a PSC composite can result in a 0.6 increase in the safety compliance percentage, whereas a one-year delay in accreditation can increase the compliance percentage by a value of 10.5.
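This point about predictor ranges is why standardized coefficients are needed for comparing importance. The conversion can be sketched as below; the coefficients echo the ones discussed above, but the standard deviations are purely hypothetical, so the resulting betas are illustrative rather than values from Table 5.

```python
def standardized_beta(b_unstd, sd_predictor, sd_outcome):
    """Standardized beta = b * SD(predictor) / SD(outcome):
    effect per standard deviation, comparable across predictors."""
    return b_unstd * sd_predictor / sd_outcome

sd_outcome = 12.0  # hypothetical SD of the safety compliance percentage

# A PSC composite (0-100 scale, hypothetical SD 15) vs the
# accreditation year (0-2 coding, hypothetical SD 0.6)
beta_psc = standardized_beta(0.6, sd_predictor=15.0, sd_outcome=sd_outcome)
beta_year = standardized_beta(10.5, sd_predictor=0.6, sd_outcome=sd_outcome)
print(round(beta_psc, 2))  # 0.75: the small unstandardized coefficient
print(beta_year)           # can still dominate after standardization
```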

In addition, the regression analysis shows that the PSC is more important than the demographics for predicting compliance with Overall Safety Standards. Literature indicated that culture has a substantial impact on different aspects of performance, whereas other organizational/structural characteristics have limited evidence of a significant effect on performance.55,56 Hence, one can claim that culture shapes performance more than any other organizational/structural characteristic.

Strengths and Limitations

This is the first study to examine the relationship between PSC and accreditation scores in any healthcare setting in Kuwait. Moreover, it is the first nationwide study of its kind in the region. Globally, the majority of studies on PSC and accreditation have looked at how accreditation affects PSC; this is the first study to look at how PSC affects accreditation. It is also the first study to quantify that effect through regression analysis, helping to predict the accreditation compliance score.

The cross-sectional aspect of this study allowed different factors to be measured at a single time point, resulting in more reliable data that is less subject to the possible biases of case series and case reports.57 The multi-method design allowed the multiple interactions between various components and subjects to be investigated from numerous angles.

However, the study has limitations. Its purpose was to determine associations between variables, not to infer causation. The use of paper-based questionnaires has a cost to the environment and requires a significant amount of time for data entry and cleaning. Furthermore, participants could choose to skip questions, and a large number of questions were left unanswered. Finally, more than half of the responses came from only two health regions, most likely because some health regions have more PHCs than others.

Practice and Research Implications

Of the four significantly correlated PSC composites, only Organizational Learning was found to be an area of strength.7 This indicates that there is considerable opportunity to improve these composites and, eventually, to improve accreditation compliance. Accreditation compliance could also be improved by increasing the number of staff to reduce the workload on individuals. Surveying skills and competencies should be sharpened by regular training and assessment. Moreover, the surveyors’ guide should be revised to reduce subjectivity in favor of objectivity, aiming at less biased survey processes.

Other healthcare settings, such as those providing secondary and tertiary care, as well as private hospitals, should be studied to examine the impact of PSC on accreditation in different contexts. The minimal contributions of the correlated PSC composites in the regression modelling highlight the need for qualitative and mixed-methods research to further elucidate the relationship between PSC and accreditation. We also recommend exploring the best construct for the MO-SOPS questionnaire through factor analysis to increase the tool's reliability, which might improve the explanatory power of the PSC score in predicting accreditation compliance.

For future investigations, we propose using electronic data collection, especially for large samples. We also recommend that researchers use measurable elements to evaluate the quality measures covered by the MO-SOPS questionnaire.

Conclusion

Although the organizational and structural characteristics of a PHC may affect its performance, culture is the principal player.55,56 A PHC striving to improve the quality and safety of its clinical services should assess and improve its culture concerning patient safety. That culture is reflected in the decisions taken on safety, which in turn affect the PHC's accreditation performance. Put simply, culture is the soil, whereas performance is the fruit. Our findings should help healthcare leaders better comprehend the link between patient safety culture and accreditation compliance and develop realistic plans to improve culture and, eventually, performance.

Abbreviations

AHRQ, Agency for Healthcare Research and Quality; ISQua, The International Society for Quality in Health Care; MO-SOPS, Medical Office Survey on Patient Safety Culture; PHC, Primary Healthcare Centre; PSC, patient safety culture; SD, standard deviation; SE, standard error.

Ethical Approval and Consent to Participate

Ethical approval was granted by the Standing Committee for Coordination of Health and Medical Research in Kuwait (Ethical Approval Number: 581/2017). We confirm that all methods were performed in accordance with the relevant guidelines and regulations of the Standing Committee for Coordination of Health and Medical Research in Kuwait. Participating PHCs provided permission for the study to take place and PHC and respondent identities were kept confidential and coded to ensure anonymity. Participants provided voluntary verbal informed consent after receiving an explanation of the study’s value, benefits and risks, and after their questions were satisfactorily answered. Verbalisation of informed consent was approved by the Standing Committee for Coordination of Health and Medical Research in Kuwait. All project team members signed a non-disclosure agreement.

Acknowledgments

We thank the Ministry of Health in Kuwait for supporting this research. We also thank the administrators of the participating PHCs for facilitating this work. Special thanks go to all the accreditation surveyors and the health professionals who participated in this study. The authors wish to acknowledge the data entry personnel for their efforts.

Disclosure

The authors report no conflicts of interest in this work. No funding was provided for conducting this study.

References

1. Elmontsri M, Almashrafi A, Banarsee R, Majeed A. Status of patient safety culture in Arab countries: a systematic review. BMJ Open. 2017;7(2):e013487. doi:10.1136/bmjopen-2016-013487

2. World Health Organization. Global patient safety action plan 2021–2030: towards eliminating avoidable harm in health care. World Health Organization; 2021. Available from: https://apps.who.int/iris/handle/10665/343477. Accessed September 19, 2021.

3. AL Lawati MH, Dennis S, Short SD, Abdulhadi NN. Patient safety and safety culture in primary health care: a systematic review. BMC Fam Pract. 2018;19(1):104. doi:10.1186/s12875-018-0793-7

4. Ghahramanian A, Rezaei T, Abdullahzadeh F, Sheikhalipour Z, Dianat I. Quality of healthcare services and its relationship with patient safety culture and nurse-physician professional communication. Health Promot Perspect. 2017;7(3):168–174. doi:10.15171/hpp.2017.30

5. Sammer CE, Lykens K, Singh KP, Mains DA, Lackan NA. What is patient safety culture? A review of the literature. J Nurs Scholarsh. 2010;42(2):156–165. doi:10.1111/j.1547-5069.2009.01330.x

6. Schein EH, Schein PA. Organizational Culture and Leadership. 5th ed. John Wiley & Sons Inc; 2017.

7. ALFadhalah T, Al Mudaf B, Alghanim HA, et al. Baseline assessment of patient safety culture in primary care centres in Kuwait: a national cross-sectional study. BMC Health Serv Res. 2021;21(1):1172. doi:10.1186/s12913-021-07199-1

8. Vaughn VM, Saint S, Krein SL, et al. Characteristics of healthcare organisations struggling to improve quality: results from a systematic review of qualitative studies. BMJ Qual Saf. 2019;28(1):74–84. doi:10.1136/bmjqs-2017-007573

9. Hogden A, Ellis LA, Churruca K, Bierbaum M. Safety culture assessment in health care: a review of the literature on safety culture assessment modes. Australian Commission on Safety and Quality in Health Care; 2017. Available from: https://www.safetyandquality.gov.au/sites/default/files/migrated/Safety-Culture-Assessment-in-Health-Care-A-review-of-The-literature-on-safety-culture-assessment-modes.pdf. Accessed September 30, 2021.

10. IOM. Crossing the Quality Chasm: A New Health System for the 21st Century. IOM; 2001.

11. Bengoa R, Kawar R, Key P, Leatherman S, Massoud R, Saturno P. Quality of care: a process for making strategic choices in health systems. World Health Organization; 2006. Available from: http://www.who.int/management/quality/assurance/QualityCare_B.Def.pdf. Accessed August 30, 2018.

12. World Health Organization. Handbook for national quality policy and strategy: a practical approach for developing policy and strategy to improve quality of care. World Health Organization; 2018. Available from: https://apps.who.int/iris/bitstream/handle/10665/272357/9789241565561-eng.pdf?ua=1. Accessed June 15, 2020.

13. Upadhyai R, Jain AK, Roy H, Pant V. A review of healthcare service quality dimensions and their measurement. J Health Manag. 2019;21(1):102–127. doi:10.1177/0972063418822583

14. Endeshaw B. Healthcare service quality-measurement models: a review. J Health Res. 2020;35(2):106–117. doi:10.1108/JHR-07-2019-0152

15. Hannawa AF, Wu AW, Kolyada A, Potemkina A, Donaldson LJ. The aspects of healthcare quality that are important to health professionals and patients: a qualitative study. Patient Educ Couns. 2022;105(6):1561–1570. doi:10.1016/j.pec.2021.10.016

16. Beaussier AL, Demeritt D, Griffiths A, Rothstein H. Steering by their own lights: why regulators across Europe use different indicators to measure healthcare quality. Health Policy (New York). 2020;124(5):501–510. doi:10.1016/j.healthpol.2020.02.012

17. Busse R, Klazinga N, Panteli D, Quentin W, editors. Improving Healthcare Quality in Europe: Characteristics, Effectiveness and Implementation of Different Strategies. WHO Regional Office for Europe; 2019. Available from: https://apps.who.int/iris/handle/10665/327356.

18. Bahadori M, Teymourzadeh E, Ravangard R, Saadati M. Accreditation effects on health service quality: nurse viewpoints. Int J Health Care Qual Assur. 2018;31(7):697–703. doi:10.1108/IJHCQA-07-2017-0126

19. Mosadeghrad AM. Hospital accreditation: the good, the bad, and the ugly. Int J Healthc Manage. 2021;14(4):1597–1601. doi:10.1080/20479700.2020.1762052

20. Ghadami L, Masoudi Asl I, Hessam S, Modiri M. Developing hospital accreditation standards: applying fuzzy DEMATEL. Int J Healthc Manage. 2021;14(3):847–855. doi:10.1080/20479700.2019.1702307

21. ISQua. Guidelines and Principles for the Development of Health and Social Care Standards. 5th ed. ISQua; 2018. Available from: https://ieea.ch/component/osdownloads/routedownload/accreditation/guidelines-and-principles-for-The-development-of-health-and-social-care-standards%2C-4th-edition-v1-2.html?Itemid=184.

22. Saudi Central Board for Accreditation of Healthcare Institutions. National Primary Healthcare Standards. 1st ed. Saudi Central Board for Accreditation of Healthcare Institutions; 2015. Available from: https://portal.cbahi.gov.sa/english/cbahi-standards.

23. Joint Commission International. JCI Accreditation Standards for Ambulatory Care. 4th ed. Joint Commission Resources; 2019. Available from: https://store.jointcommissioninternational.org/jci-accreditation-standards-for-ambulatory-care-4th-edition-english-version/?_ga=2.124872899.816960488.1641067901-1824539121.1641067901.

24. General Authority for Healthcare Accreditation and Regulation. GAHAR Handbook for Primary Healthcare Standards. 1st ed. General Authority for Healthcare Accreditation and Regulation; 2021. Available from: https://www.gahar.gov.eg/upload/gahar-handbook-for-primary-healthcare-standards-2021.pdf.

25. Accreditation Canada. Qmentum International: Primary Care Services Standards. 3rd ed. Accreditation Canada; 2018.

26. Sorra J, Gray L, Famolaro T, Yount N, Behm J. AHRQ medical office survey on patient safety culture: user's guide. (Prepared by Westat, under Contract No. HHSA290201300003C). Agency for Healthcare Research and Quality; 2018:52. Available from: https://www.ahrq.gov/sites/default/files/wysiwyg/sops/surveys/medical-office/medical-office-survey-userguide.pdf. Accessed November 17, 2020.

27. AHRQ. Medical office survey on patient safety culture: items and composite measures. Agency for Healthcare Research and Quality; 2013:6. Available from: https://www.ahrq.gov/sites/default/files/wysiwyg/sops/surveys/medical-office/MO_Items-Composite_Measures.pdf. Accessed September 30, 2021.

28. Taber KS. The use of Cronbach’s alpha when developing and reporting research instruments in science education. Res Sci Educ. 2018;48(6):1273–1296. doi:10.1007/s11165-016-9602-2

29. Ratner B. The correlation coefficient: its values range between +1/−1, or do they? J Target Meas Anal Mark. 2009;17(2):139–142. doi:10.1057/jt.2009.5

30. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B. 1995;57(1):289–300. doi:10.1111/j.2517-6161.1995.tb02031.x

31. Tabrizi JS, Gharibi F. Primary healthcare accreditation standards: a systematic review. Int J Health Care Qual Assur. 2019;32(2):310–320. doi:10.1108/IJHCQA-02-2018-0052

32. Alaradi LK. Assessing the impact of healthcare accreditation from the perspective of professionals' in primary healthcare centres: a mixed methods case study from Kuwait. PhD. University of Glasgow; 2017. Available from: https://eleanor.lib.gla.ac.uk/record=b3304082. Accessed January 3, 2022.

33. Alhaleel AS. Implementing a national accreditation programme in Kuwaiti hospitals: understanding the impact, facilitators and barriers using a multiple methods approach. PhD. University of Glasgow; 2018. Available from: https://eleanor.lib.gla.ac.uk/record=b3324075. Accessed January 3, 2022.

34. Saut AM, Berssaneti FT, Moreno MC. Evaluating the impact of accreditation on Brazilian healthcare organizations: a quantitative study. Int J Qual Health Care. 2017;29(5):713–721. doi:10.1093/intqhc/mzx094

35. Ghareeb A, Said H, El Zoghbi M. Examining the impact of accreditation on a primary healthcare organization in Qatar. BMC Med Educ. 2018;18(1):216. doi:10.1186/s12909-018-1321-0

36. Desveaux L, Mitchell JI, Shaw J, Ivers NM. Understanding the impact of accreditation on quality in healthcare: a grounded theory approach. Int J Qual Health Care. 2017;29(7):941–947. doi:10.1093/intqhc/mzx136

37. Araujo CAS, Siqueira MM, Malik AM. Hospital accreditation impact on healthcare quality dimensions: a systematic review. Int J Qual Health Care. 2020;32(8):531–544. doi:10.1093/intqhc/mzaa090

38. Katoue MG, Somerville SG, Barake R, Scott M. The perceptions of healthcare professionals about accreditation and its impact on quality of healthcare in Kuwait: a qualitative study. J Eval Clin Pract. 2021;27(6):1310–1320. doi:10.1111/jep.13557

39. El-Sherbiny NA, Ibrahim EH, Abdel-Wahed WY. Assessment of patient safety culture among paramedical personnel at general and district hospitals, Fayoum Governorate, Egypt. J Egypt Public Health Assoc. 2020;95(1):4. doi:10.1186/s42506-019-0031-8

40. Al Hamid A, Malik A, Alyatama S. An exploration of patient safety culture in Kuwait hospitals: a qualitative study of healthcare professionals' perspectives. Int J Pharm Pract. 2020;28(6):617–625. doi:10.1111/ijpp.12574

41. El Zoghbi M, Farooq S, Abulaban A, et al. Improvement of the patient safety culture in the primary health care corporation – Qatar. J Patient Saf. 2021;17(8):e1376–e1382. doi:10.1097/PTS.0000000000000489

42. Toff N. Culture change is not easy but not that difficult. BMJ. 2013;346:f1365. doi:10.1136/bmj.f1365

43. Verrier ED. Easier said than done. J Thorac Cardiovasc Surg. 2019;157(2):678–679. doi:10.1016/j.jtcvs.2018.08.014

44. Lipsitz LA. Understanding health care as a complex system: the foundation for unintended consequences. JAMA. 2012;308(3):243–244. doi:10.1001/jama.2012.7551

45. Bielecki A, Nieszporska S. Analysis of healthcare systems by using systemic approach. Complexity. 2019;2019:e6807140. doi:10.1155/2019/6807140

46. Davies HTO, Mannion R. Will prescriptions for cultural change improve the NHS? BMJ. 2013;346:f1305. doi:10.1136/bmj.f1305

47. Calciolari S, Prenestini A, Lega F. An organizational culture for all seasons? How cultural type dominance and strength influence different performance goals. Public Manage Rev. 2018;20(9):1400–1422. doi:10.1080/14719037.2017.1383784

48. Mannion R, Davies H. Understanding organisational culture for healthcare quality improvement. BMJ. 2018;363:k4907. doi:10.1136/bmj.k4907

49. van Griethuijsen RA, van Eijck MW, Haste H, et al. Global patterns in students’ views of science and interest in science. Res Sci Educ. 2015;45(4):581–603. doi:10.1007/s11165-014-9438-6

50. Shao SC, Chan YY, Lin SJ, et al. Workload of pharmacists and the performance of pharmacy services. PLoS One. 2020;15(4):e0231482. doi:10.1371/journal.pone.0231482

51. van den Hombergh P, Künzi B, Elwyn G, et al. High workload and job stress are associated with lower practice performance in general practice: an observational study in 239 general practices in the Netherlands. BMC Health Serv Res. 2009;9(1):118. doi:10.1186/1472-6963-9-118

52. Wells S, Tamir O, Gray J, Naidoo D, Bekhit M, Goldmann D. Are quality improvement collaboratives effective? A systematic review. BMJ Qual Saf. 2018;27(3):226–240. doi:10.1136/bmjqs-2017-006926

53. Wilson M, Engelhard G, editors. Objective Measurement: Theory into Practice. Vol. 5. Ablex Publishing Corporation; 2000.

54. Schmutz JB, Meier LL, Manser T. How effective is teamwork really? The relationship between teamwork and performance in healthcare teams: a systematic review and meta-analysis. BMJ Open. 2019;9(9):e028280. doi:10.1136/bmjopen-2018-028280

55. Hu SH, Wang T, Ramalho NC, Zhou D, Hu X, Zhao H. Relationship between patient safety culture and safety performance in nursing: the role of safety behaviour. Int J Nurs Pract. 2021;27(4):e12937. doi:10.1111/ijn.12937

56. Ridgely MS, Ahluwalia SC, Tom A, et al. What are the determinants of health system performance? Findings from the literature and a technical expert panel. Jt Comm J Qual Patient Saf. 2020;46(2):87–98. doi:10.1016/j.jcjq.2019.11.003

57. Johnson LL. Chapter 17 - design of observational studies. In: Gallin JI, Ognibene FP, Johnson LL, editors. Principles and Practice of Clinical Research. Academic Press; 2018:231–248. doi:10.1016/B978-0-12-849905-4.00017-4
