Generating good evidence in orthopedics

COMMENTARY | Year: 2020 | Volume: 22 | Issue: 2 | Page: 260-265

Vikas Kulshrestha1, Munish Sood2
1 Senior Advisor (Orthopaedics) and Head, Department of Orthopaedics, Command Hospital Air Force, Bengaluru, Karnataka, India
2 Classified Specialist (Orthopaedics), trained in Arthroscopy, Department of Orthopaedics, Command Hospital, Chandimandir, Haryana, India

Date of Submission: 27-Jun-2020 | Date of Acceptance: 10-Jul-2020 | Date of Web Publication: 02-Sep-2020

Correspondence Address:
Lt Col Munish Sood
Department of Orthopaedics, Command Hospital, Chandimandir - 134 107, Haryana
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jmms.jmms_83_20



Today, there is an increasing demand for quality medical care to be made available to a large population at a reasonable cost. In a society with limited health-care infrastructure and budget, it has become imperative to evolve scientifically proven clinical care pathways. No country is willing to accept infructuous expenditure on treatment modalities with ambiguous patient outcomes. Hence, evidence-based medicine has been introduced into most health-care systems. When it comes to orthopedics, a serious concern is that the existing literature has an extremely poor quality of evidence; very few best-practice guidelines are supported by high-quality clinical studies. In this review article, we attempt to bring out the recurring lacunae in orthopedic research papers and then offer guidance on how to plan, design, and conduct a high-quality clinical trial, including an explanation of the commonly required statistical tools. Finally, we briefly describe how to prepare a protocol, execute the study, analyze the results, and write the final article for publication.

Keywords: Clinical trial, evidence-based medicine, orthopedics


How to cite this article:
Kulshrestha V, Sood M. Generating good evidence in orthopedics. J Mar Med Soc 2020;22:260-5

Introduction

The quality of research in surgical practice has always been debated.[1] An assessment of orthopedic journals found that only 11.3% of published articles were graded as Level I, the best level of evidence according to the Oxford Centre for Evidence-based Medicine.[2] These alarming results have triggered much discussion about the difficulties in assessing surgical procedures. The large volume of orthopedic care, and the tremendous variation in its cost and quality, have invited considerable state interest in standardizing and optimizing surgical outcomes. As clinicians, we are most interested in knowing what may be best for our patients. Such questions confront us every day, and we seek good quality evidence to answer them. There needs to be a paradigm shift from a culture of cost containment to one of quality improvement: we need to move toward a patient-centered, evidence-informed practice model that reflects local population needs and contributes to optimal patient outcomes.[3] Unfortunately, like the rest of the surgical literature, orthopedic literature lacks good quality evidence.[4] We keep reiterating the need to implement “Best Practices” developed around “Evidence-based Clinical Care Pathways.” But who is responsible for the quality of evidence? We as clinicians are responsible for generating high quality evidence to fill the knowledge gap and guide patient care to optimize outcomes.[5] This paper reviews the main pitfalls in orthopedic research and then outlines the steps of conducting a high quality clinical trial.

Where Do We Go Wrong?

Orthopedic literature is fraught with common errors committed in planning, designing, executing, and analyzing studies.[6] Some of the important ones affecting the quality of evidence generated are:

Improper selection of research question

Frequently, the question is a repeat of earlier studies and does not address a gap in knowledge. Often, it is not clinically relevant, making the study a wasted effort.

Weak study designs

Only a few robust Level I clinical studies have been performed in the field of arthroplasty. Instead, there are many poorly designed cohort studies and observational data sets, which dilute the quality of evidence by introducing bias.[7]

Poorly defined primary objective with multiple secondary objectives

Many researchers have used the study sample to test multiple outcome objectives with inadequate power. Usually, the null hypothesis is not clearly stated. Testing many outcomes on the same sample also introduces the problems of multiple testing, such as inflation of the alpha error, as the sketch below illustrates.
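
A minimal Python sketch (purely arithmetic, not drawn from any study) of how the family-wise chance of at least one false positive grows with the number of outcomes tested at alpha = 0.05, and the Bonferroni-corrected per-test threshold that restores it:

```python
# Family-wise error rate (FWER) when testing k independent outcomes at
# alpha = 0.05, and the Bonferroni-corrected per-test alpha.
alpha = 0.05
for k in (1, 3, 5, 10):
    fwer = 1 - (1 - alpha) ** k   # P(at least one false positive)
    bonferroni = alpha / k        # corrected per-test threshold
    print(f"{k:>2} tests: FWER = {fwer:.2f}, per-test alpha = {bonferroni:.4f}")
```

With ten outcomes tested, the chance of at least one spurious “significant” result is about 40%, which is why unplanned multiple testing inflates the alpha error.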

Flawed inclusion/exclusion criteria

Flawed inclusion/exclusion criteria impair the generalizability of the outcomes. Overly restrictive selection criteria yield a cohort that may not be representative of the true patient population.

Inadequate sample size

One study of the British orthopaedic literature showed that more than 90% of published studies were underpowered, meaning that most did not have an adequate sample size to detect a difference in outcomes that may be statistically and clinically relevant.[8],[9]

Lack of validated outcome assessment tools and errors in measurement methods

Most studies have failed to report the error rates of their outcome assessment methods and assessors. Inter-assessor variability and test-retest validity have not been elaborated. This introduces imprecision in the data, diluting the difference in outcomes and thus favoring equivalence or nonsuperiority. Many studies have also used inadequate blinding measures.[10]

Lack of description of exact statistical assumptions made

Most studies fail to state the acceptable level of significance (the alpha level) and the exact statistical tests used. Rarer still are authors who state their beta value, which indicates their chance of a Type II error (usually beta is 0.2 or less). The complement of beta (1 minus beta), converted to a percentage, is termed the power of the study (usually 80%). Researchers also often do not state the directionality of the testing they perform, namely whether they are using a one-tailed or two-tailed analysis.[8] A small sketch of these quantities follows.
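
As a small illustration in Python, using the conventional values mentioned above rather than figures from any particular trial:

```python
from scipy.stats import norm

alpha = 0.05      # Type I error: the stated level of significance
beta = 0.20       # Type II error: chance of missing a true effect
power = 1 - beta  # complement of beta -> 0.80, reported as 80% power

# Directionality matters: a one-tailed test uses a smaller critical
# z-value than a two-tailed test at the same alpha.
z_one_tailed = norm.ppf(1 - alpha)      # ~1.645
z_two_tailed = norm.ppf(1 - alpha / 2)  # ~1.960
print(power, round(z_one_tailed, 3), round(z_two_tailed, 3))
```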

Inadequate bias control

Most of the studies in the arthroplasty literature are not randomized, and many have used inadequate blinding measures. A large number of cohort studies have been poorly designed in that they have not reported sufficient baseline parameters and demographics, which could otherwise be used for adjusted statistical analysis to minimize the effects of confounders.[11]

Nonadherence to CONSORT guidelines

Most studies have failed to follow the CONSORT method of reporting. Critical appraisal of the quality of such trials is therefore not possible, as the design, conduct, and analysis are not accurately described in the report. Far from being transparent, the reporting of trials is often incomplete, compounding problems arising from poor methodology.[12]

Failure to report missed follow-ups and dropouts

Most studies have not highlighted the effect of dropouts and missing data on the outcome analysis, nor mentioned the reasons for dropouts.[13] How the researchers addressed the missing data and dropouts has not been elaborated. Dropouts and missed follow-ups can exaggerate the treatment effect and produce spurious associations.[14]

Nonreporting of intention-to-treat/per-protocol analysis

Most studies have not elaborated how they dealt with treatment crossovers, where a patient randomized to one group crossed over to the other for any reason. Such crossovers can dilute the difference in effect. In intention-to-treat (ITT) analysis, we analyze each case as per the group it was assigned to, irrespective of the treatment received.[15] Although ITT analysis attenuates the estimated treatment effect, it is the preferred method of analysis when we are looking for superiority of treatment. In equivalence or noninferiority trials, by contrast, we prefer per-protocol analysis, where the case is analyzed as per the treatment actually instituted. The sketch below contrasts the two.
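
A minimal sketch on entirely hypothetical data of how the two analyses differ when crossovers occur; the column names are invented for the example:

```python
import pandas as pd

# 'assigned' is the randomized arm, 'received' the treatment actually
# given (rows where they differ are crossovers), 'improved' the outcome.
df = pd.DataFrame({
    "assigned": ["A", "A", "A", "B", "B", "B"],
    "received": ["A", "A", "B", "B", "B", "A"],
    "improved": [1, 1, 0, 0, 1, 1],
})

# ITT: analyze by the randomized assignment, crossovers included.
itt = df.groupby("assigned")["improved"].mean()

# Per-protocol: drop crossovers and analyze by treatment received.
per_protocol = (df[df["assigned"] == df["received"]]
                .groupby("received")["improved"].mean())
print(itt, per_protocol, sep="\n")
```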

Unplanned subgroup analysis

Many studies start off with a primary objective and then, during the data analysis phase, having noticed a possible treatment effect in a subgroup, go on to use the same data set to test for a significant treatment effect in that subgroup. Since sample size and power calculations were not done a priori for subgroup analysis, such findings should be reported only as exploratory, to be tested in later studies with a proper sample size.[16]

Nonreporting of limitations and weaknesses

Many studies do not devote enough space to discussing the possible limitations and weaknesses of their work. Doing so would place the study in the right perspective and indicate its external validity.

The use of unscientific language

Most researchers have not used correct scientific language when describing their results. Specifically, a single study never proves that a hypothesis is true; it can only reject the null hypothesis. While many people are not comfortable using such cautionary language, it is the correct scientific language. This understanding begins with studying a good statistics textbook that focuses on clinical research design.[17]

How to Conduct a High-Quality Clinical Trial?

It was only in the second half of the 20th century that, with the advent of the theory of hypothesis testing and other principles derived from probability mathematics, well-designed clinical trials became common. Planning, designing, and conducting a clinical trial is a team effort requiring the expertise of clinicians, statisticians, epidemiologists, data managers, and research assistants; a clinician alone cannot conduct a trial. There are clinical research organizations that offer these services at a cost, or you can develop your own team. Let us walk through the key elements required for a good clinical trial:

Defining the study question

You need to study the literature systematically to identify the gap in knowledge and narrow it down to a concrete, researchable issue. For this you need frank opinions from your colleagues and the team; you may also like to have a brainstorming session. Next, define the patient population, the intervention planned, the comparison group, and the outcome measures to be studied. A few very important issues to keep in mind: Is the study novel? Based on the inclusion criteria, can you enroll enough consenting patients? What are your funding needs, your timeline, and the ethical issues, and what is the rough sample size required?

Developing background of the study

This section should explain the potential research significance of the proposed study. State the novel ideas and potential contributions that may result from your study, and explain how its outcomes may contribute to health care by improving the quality of patient care. Here, one also needs to identify factors that may confound or complicate the study question, as these dictate the study design, study population, and study instruments such as questionnaires.[18]

Defining aims

Once the study question is formulated, we need to clearly define the primary aim, which should be restricted to one or two important questions around which the hypothesis is built and tested. The study may also have one or two secondary aims, which are explored without seeking statistical answers. The complete protocol, including sample size calculation and data analysis, is centered on the primary aims. It is necessary to have adequate power (i.e., a large enough sample size) to answer the primary aims; it is not necessary to power the study for secondary aims. Therefore, questions that are of secondary interest, or that would require a sample size you cannot obtain, should be made secondary aims.

Developing methodology

Study design

Choosing the right design is vital to addressing the aims of the study. There are two main categories of comparative study designs: experimental (i.e., randomized controlled trials) and observational (i.e., cohort and case-control studies). Descriptive designs, such as case series, are also informative in certain situations but have significant limitations when attempting to determine treatment superiority.[17] A randomized study provides the strongest evidence for the safety and effectiveness of a treatment. However, since regulatory controls are not as strict in surgical practice, in orthopedic surgery such as arthroplasty the surgeon and the patient often have very strong preferences for a particular modality of treatment, making randomization impossible. To overcome this, one can sometimes resort to a post-randomization consent design. Otherwise, the next best option, which is also considered Level I evidence, is a well-designed prospective cohort study.

Once randomized, ITT analysis is the best way to ensure that confounding does not play a role.[15] The price paid, however, is typically an attenuation of any observed association between treatment and outcome: any treatment effect found in an intention-to-treat analysis is likely to be a conservative estimate of efficacy. If the trial is an equivalence or noninferiority trial, it is advisable to analyze per protocol.

While the RCT is considered the “gold standard” of all study designs, the cohort study is often referred to as the “gold standard” of observational studies because of its ability to establish a temporal relationship between the treatment and the outcome of interest;[19] in other words, the treatment clearly precedes the outcome. Since factors other than the treatment alone (i.e., prognostic factors) can also influence the outcome, an imbalance between treatment and control groups with respect to these factors may bias the result. Furthermore, these factors often influence which treatment the patient receives. As a result, cohort studies can produce misleading results, either overestimating or underestimating the treatment effect, if these factors are not carefully identified and controlled for. However, a well-designed prospective cohort study that carefully documents the confounders and adjusts for them in the analysis can also provide robust Level I clinical evidence.

The remaining two designs are the case-control study and the case series. Case-control studies are used to investigate rare outcomes retrospectively, working backward from outcome to treatment and comparing cases with those who did not have the outcome. A case series is a descriptive study without any hypothesis or control group; it looks at the effect of a treatment in terms of safety and efficacy without deriving strong inferences that could be generalized to patients in other settings.

Blinding

All efforts should be made to blind the patients, physicians, and assessors to the treatment arm, in order to minimize bias in outcome assessment.[20] Blinding in surgical practice is very difficult: it is impossible to blind the surgeon or the patient to the intervention, and although some have suggested sham surgery, it raises serious ethical issues. However, when the main outcome is subjective, such as pain or range of movement, special attention should be paid to blinding the outcome assessor; for most outcomes, a blinded outcome assessment is feasible. With patient-reported outcomes (PROs), if the patients cannot be blinded there is a risk of ascertainment bias, but this can probably be limited if the individual interviewing the patients is independent and blinded.[19]

Subject selection

In defining the study population, due care should be taken to choose a subset of patients that is truly representative of the population likely to receive the treatment under investigation. All efforts should be made to include as many patients as possible and to minimize the exclusion criteria. Exclusion criteria may improve the feasibility of the study, but at the cost of generalizability, so they should be used sparingly.[17]

Intervention

Surgical interventions are complex and consist of several components, each of which influences the effect of the treatment. The assessment of treatment involves evaluating the effect of the surgery itself along with pharmacological treatment, anesthesia, rehabilitation, orthoses, and rest. Consequently, these interventions must be described and standardized so that they are administered consistently to all patients and can be reproduced in clinical practice. There may also be a discrepancy between the intended intervention as described in the protocol and what is actually administered. Surgeons must be aware that adherence to the planned procedure is important in order to allow an adequate application of the results in practice. A systematic review of surgical RCTs showed that while the intended surgical procedure was described in most articles (87.3%), other important components such as the management of anesthesia (35.4%), preoperative care (15.2%), and postoperative care (49.4%) were often lacking.[21] Furthermore, a description of the actual operation performed was given in less than half of the reports.

The expertise of the surgeon and the volume of work carried out in a particular center have considerable influence on the success of a surgical treatment.[22] Surgical expertise and work volume may introduce bias and considerably affect the external validity of a trial; a variation of expertise in one treatment arm compared with another implies a bias against the more difficult procedure. Some studies have reported the qualifications of the surgeons, their years in practice, specific training before participating in the trial, the number of procedures performed, and the learning curve or rate of complications. These factors need to be kept in mind when analyzing the study, to make appropriate adjustments, and when interpreting the results of the trial.

Data collection

As much baseline data as possible should be recorded. Prognostic variables are those that may be associated with the outcome but are not necessarily the treatment interventions being evaluated. These should be recorded up front so that their association with the outcome can be explored later; they are especially important for prognostic studies that seek to identify patients at greater risk of a poor prognosis. Predictor variables may also be potential confounding variables, which accentuates the importance of measuring them. Confounding variables are factors independently associated with both treatment and outcome that may be responsible for an observed association. Unlike in a randomized study, in prospective cohort studies they need to be diligently recorded and their effect carefully controlled using statistical tools, in what is called adjusted statistical analysis; a sketch of such an analysis follows.
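
A sketch of an adjusted analysis on simulated data; the variable names (treated, age, bmi) and coefficients are invented for illustration, not drawn from any study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
# Simulated cohort: age and BMI stand in for recorded prognostic
# variables that may confound the treatment-outcome association.
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "age": rng.normal(65, 8, n),
    "bmi": rng.normal(28, 4, n),
})
logit_p = -4 + 0.5 * df["treated"] + 0.04 * df["age"] + 0.03 * df["bmi"]
df["poor_outcome"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Adjusted analysis: the treatment effect is estimated while
# controlling for the measured confounders.
model = smf.logit("poor_outcome ~ treated + age + bmi", data=df).fit(disp=0)
print(np.exp(model.params))  # adjusted odds ratios
```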

Selecting outcome measures

It is very important to select validated outcome measures that directly measure the desired aims. They should be relevant to patients and as objective as possible. Emerging PRO measures are doing a better job of measuring the aspects of patients' lives that patients consider important.[23] Furthermore, they are generally more carefully developed and tested. It is increasingly recognized that traditional clinician-based outcome measures need to be complemented by measures that focus on the patient's concerns in order to evaluate interventions and identify whether one treatment is better than another.

Follow-up plan

A detailed follow-up plan, including the time points and the activities to be conducted at each, should be documented. All patient contact details should be maintained, and active follow-up should be done using phone calls and mailers to ensure minimal dropout. Not more than 10% missed follow-ups and dropouts should be allowed. Each missed follow-up and dropout should be recorded, with the reason detailed, to be able to exclude any association between the outcome of the study and the dropout.[14]

Statistical plan

Sample size calculation

It is possible for one treatment to be truly superior to another and yet for the study to yield statistically nonsignificant results. This occurs when the sample size is too small to detect the difference in treatment effectiveness; it is a problem of study power, which is why it is important to describe the sample size calculations that justify the recruitment goals.[24] Various statistical software packages can calculate the sample size needed to achieve the desired power at a given level of error; a sketch follows. You should add 10% for dropouts, and another 20% if it is a nonrandomized study, to improve group comparability.
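
A minimal sketch using the statsmodels power module, assuming a standardized effect size of 0.5 (Cohen's d); the effect size is illustrative, not a recommendation:

```python
import math
from statsmodels.stats.power import TTestIndPower

# Sample size per group to detect a standardized effect size of 0.5
# with two-sided alpha = 0.05 and 80% power.
n_raw = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                    power=0.80, alternative="two-sided")
n_per_group = math.ceil(n_raw)  # ~64 per group

# Inflate as suggested above: +10% for dropouts, and a further +20%
# if the study is nonrandomized.
n_with_dropouts = math.ceil(n_per_group * 1.10)
n_nonrandomized = math.ceil(n_per_group * 1.10 * 1.20)
print(n_per_group, n_with_dropouts, n_nonrandomized)
```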

Descriptive statistics

The descriptive baseline data of the study population are important to record and analyze to ensure comparability of the two groups; they also give the reader an idea of the study population and whether the results can be generalized to the population of interest, as the short example below shows.
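
A few lines of pandas turn hypothetical baseline data into a group-wise summary the reader can use to judge comparability:

```python
import pandas as pd

# Hypothetical baseline data for the two arms of a small study.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "age":   [64, 70, 58, 66, 61, 72],
    "bmi":   [27.1, 30.4, 25.9, 28.8, 26.5, 31.0],
})
# Mean and standard deviation of each baseline variable, per group.
print(df.groupby("group").agg(["mean", "std"]))
```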

Analytical statistics

These tests are chosen according to the type and distribution of the data, and they help to prove or disprove the hypothesis by ruling out chance associations. We generally use a threshold value for the chance association, the alpha error, of 5% (P < 0.05). In applying statistical tools, it is also important to categorize outcomes as categorical (nominal or ordinal) or continuous (e.g., outcome scores). A common method for comparing categorical outcomes is Pearson's Chi-square test.[25] The exact Chi-square test and Fisher's exact test are used for small samples (e.g., <10 subjects per group). Common methods for comparing continuous outcomes include Student's t-test and Pearson's product-moment correlation coefficient; if the data are not normally distributed, the Wilcoxon rank-sum test is used. In prospective cohort studies, more elegant multivariate methods are used, which allow adjustment for potential confounding factors and simultaneous assessment of other risk factors along with the treatment of interest. For dichotomous outcomes, these include logistic regression, Cox regression, and negative binomial regression; for continuous outcomes, we use linear regression and analysis of variance. The exact statistical plan needs to be specified in the study protocol. A sketch of the basic tests follows.
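
A short sketch of the basic tests named above, run on simulated data with scipy; all numbers are illustrative only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
score_a = rng.normal(80, 10, 40)  # continuous outcome, group A
score_b = rng.normal(75, 10, 40)  # continuous outcome, group B

# Continuous, approximately normal data: Student's t-test.
t_stat, p_t = stats.ttest_ind(score_a, score_b)

# Continuous, non-normal data: Wilcoxon rank-sum (Mann-Whitney U) test.
u_stat, p_u = stats.mannwhitneyu(score_a, score_b)

# Categorical outcome (e.g., revision yes/no) in a 2x2 table:
table = np.array([[30, 10],   # group A: success, failure
                  [22, 18]])  # group B: success, failure
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Small samples (<10 per group): Fisher's exact test.
odds, p_fisher = stats.fisher_exact([[4, 6], [7, 3]])
print(p_t, p_u, p_chi2, p_fisher)
```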

Finalizing study protocol and study manual

This should be done in three stages, beginning with a study outline, followed by a detailed protocol, and finally the complete manual, which includes detailed instructions for all study procedures: the step-by-step process for enrolling and following patients, entering and managing data, and monitoring the process. Copies of all study materials, including the study protocol, consent forms, and questionnaires, should also be included in the manual, along with procedures for maintaining confidentiality and for quality assurance and control. Proper informed consent forms and data handling procedures need to be formulated for approval by the institutional review board (IRB).

Prestudy preparation

While you wait for IRB approval, you should prepare subject folders containing all the documents required for enrolling a patient: the informed consent form, screening sheet, case record form, follow-up forms, etc. Creating the study database before collecting your data is very important. It lets you define each variable and give it a name that will be part of the database and will ultimately end up in the statistical analysis program. It also allows data entry to begin during the study, which surfaces deficiencies early rather than late and ultimately saves time. The most common approach to creating a simple database is to use a spreadsheet program and then transfer the data to statistical software for analysis, as in the sketch below.
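
One possible way to predefine such a database with pandas before enrollment begins; the variable names and types are hypothetical:

```python
import pandas as pd

# Define each variable (name and type) up front, so data entry can start
# with the study and problems surface early.
schema = {
    "subject_id": "string",
    "enroll_date": "datetime64[ns]",
    "group": "category",           # treatment arm
    "age": "Int64",
    "baseline_score": "float64",
    "followup_6m_score": "float64",
}
db = pd.DataFrame({col: pd.Series(dtype=t) for col, t in schema.items()})
db.to_csv("study_database.csv", index=False)  # later read into stats software
```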

Study conduct

Enrollment

It is imperative that the staff understand the importance of the study and the process by which patients are introduced to, and consent to, the study. The subject should be made fully aware of the time commitment involved and the need to follow and comply with the protocol. Study compliance is critical to success: with too many dropouts, your data are no longer valid. Besides a thorough explanation of the follow-up process, it is also important to point out the benefits of study participation. The subject should receive a copy of the informed consent, which typically carries the study contact information, as well as a copy of the study protocol synopsis and any other pertinent patient education material, including a checklist of required follow-up visits.

Data collection, entry, correction and cleaning

With the necessary patient folders ready, all these data activities should proceed in step with the progress of the study and should not be delayed. Data queries should be generated concurrently and resolved from time to time.

Lost to follow-up and withdrawal

If a subject consistently misses follow-up visits, the study coordinator should try to make contact, by phone first and certified letter second, to determine the root of the problem. It may be that the patient has had scheduling conflicts, a change of address, a significant change in condition, or, in some cases, has passed away. If all attempts fail, the subject should be withdrawn from the study, and this should be documented. A subject may also be withdrawn for failure to adhere to the protocol, for an adverse reaction that requires withdrawal, or on death, and may also withdraw voluntarily. If a subject is withdrawn, an attempt should be made to have a final follow-up visit.

Study closure and retention of records

At the termination of the study, when all follow-ups are complete, all records need to be stored safely for a period decided by the institution, and an intimation of study completion needs to be sent to the IRB. The research assistants need continued access to the data set to resolve data queries that arise before the final report is prepared. The study closure report should also be made available to the subjects.

Data analysis and report writing

Once you have executed your study successfully, the fun part begins! This is “the time to discover the truthiness of study questions. Is treatment A better than treatment B? Does the outcome depend upon which group of patients received the treatment? Are the patients really better off receiving this new technique? Is it possible that the old way is the best way? Is it possible that it does not matter what technique you use?”[26] Finding the answers to the study questions is the researcher's motivation, and once you have them, you can begin your manuscript for submission.

Publication

You have come to the end of a long journey: the study was conceived, planned, executed, and analyzed, and the report has been written. Now it is time to select a journal and submit your manuscript for publication. It is only through publication that you can help change or improve the way patients are treated today. Whether or not your results are what you expected, publishing is very important; avoid the temptation to withhold results that are negative or counter to your hypothesis, as this contributes to publication bias.[27] Before submitting, have your peers review your manuscript, and select the most appropriate journal for the audience you want to reach. If rejected, do not be discouraged: if you are not given the opportunity to resubmit, accept the decision and move on to another journal. Expect criticism and improve your paper accordingly. Persevere and be patient; consider this part of the process and continue with more research!

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

 

References
1. Hampton T. Experts debate need to improve quality and oversight of continuing education. JAMA 2008;299:1003-4.
2. Obremskey WT, Pappas N, Attallah-Wasif E, Tornetta P 3rd, Bhandari M. Level of evidence in orthopaedic journals. J Bone Joint Surg Am 2005;87:2632-8.
3. Greenhalgh T, Howick J, Maskrey N; Evidence Based Medicine Renaissance Group. Evidence based medicine: A movement in crisis? BMJ 2014;348:g3725.
4. Chaudhry H, Mundi R, Singh I, Einhorn TA, Bhandari M. How good is the orthopaedic literature? Indian J Orthop 2008;42:144-9.
5. Stevens KR. The impact of evidence-based practice in nursing and the next big ideas. Online J Issues Nurs 2013;18:4.
6. Calver MC, Beatty SJ, Bryant KA, Dickman CR, Ebner BC, Morgan DL. Users beware: Implications of database errors when assessing the individual research records of ecologists and conservation biologists. Pac Conserv Biol 2013;19:320-30.
7. Hartung DM, Touchette D. Overview of clinical research design. Am J Health Syst Pharm 2009;66:398-408.
8. Sexton SA, Ferguson N, Pearce C, Ricketts DM. The misuse of 'no significant difference' in British orthopaedic literature. Ann R Coll Surg Engl 2008;90:58-61.
9. Montané E, Vallano A, Vidal X, Aguilera C, Laporte JR. Reporting randomised clinical trials of analgesics after traumatic or orthopaedic surgery is inadequate: A systematic review. BMC Clin Pharmacol 2010;10:2.
10. Poolman RW, Struijs PA, Krips R, Sierevelt IN, Marti RK, Farrokhyar F, et al. Reporting of outcomes in orthopaedic randomized trials: Does blinding of outcome assessors matter? J Bone Joint Surg Am 2007;89:550-8.
11. Agabegi SS, Stern PJ. Bias in research. Am J Orthop (Belle Mead NJ) 2008;37:242-8.
12. CONSORT 2010 explanation and elaboration: Updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c869.
13. Suresh K, Thomas SV, Suresh G. Design, data analysis and sampling techniques for clinical research. Ann Indian Acad Neurol 2011;14:287-90.
14. Shih WJ. Problems in dealing with missing data and informative censoring in clinical trials. Curr Control Trials Cardiovasc Med 2002;3:4.
15. Gupta SK. Intention-to-treat concept: A review. Perspect Clin Res 2011;2:109-12.
16. Deeks JJ, Higgins JP, Altman DG; Cochrane Statistical Methods Group. Analysing data and undertaking meta-analyses. In: Cochrane Handbook for Systematic Reviews of Interventions. Wiley Online Library; 2019. p. 241-84. Available from: www.training.cochrane.org/handbook.
17. Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: A critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 2017;358:j4008.
18. Vavken P, Heinrich KM, Koppelhuber C, Rois S, Dorotka R. The use of confidence intervals in reporting orthopaedic research findings. Clin Orthop Relat Res 2009;467:3334-9.
19. Boutron I, Ravaud P, Nizard R. The design and assessment of prospective randomised, controlled trials in orthopaedic surgery. J Bone Joint Surg Br 2007;89:858-63.
20. Chess LE, Gagnier J. Risk of bias of randomized controlled trials published in orthopaedic journals. BMC Med Res Methodol 2013;13:76.
21. Jacquier I, Boutron I, Moher D, Roy C, Ravaud P. The reporting of randomized clinical trials using a surgical intervention is in need of immediate improvement: A systematic review. Ann Surg 2006;244:677-83.
22. Doro C, Dimick J, Wainess R, Upchurch G, Urquhart A. Hospital volume and inpatient mortality outcomes of total hip arthroplasty in the United States. J Arthroplasty 2006;21:10-6.
23. Poolman RW, Swiontkowski MF, Fairbank JC, Schemitsch EH, Sprague S, de Vet HC. Outcome instruments: Rationale for their use. J Bone Joint Surg Am 2009;91 Suppl 3:41-9.
24. Fugard AJ, Potts HW. Supporting thinking on sample sizes for thematic analyses: A quantitative tool. Int J Soc Res Methodol 2015;18:669-84.
25. Parsons NR, Price CL, Hiskens R, Achten J, Costa ML. An evaluation of the quality of statistical design and analysis of published medical research: Results from a systematic survey of general orthopaedic journals. BMC Med Res Methodol 2012;12:60.
26. Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, et al. CONSORT 2010 explanation and elaboration: Updated guidelines for reporting parallel group randomised trials. Int J Surg 2012;10:28-55.
27. Hulley SB, Cummings SR, Browner WS, Grady DG, Newman TB. Designing Clinical Research. 3rd ed. Philadelphia: Lippincott Williams & Wilkins; 2007. p. 51-63.