
Guideline groups should follow well-defined methodological rules to assess the studies in these situations. RCTs should be appraised for their internal and external validity using established tools [51]. The conflicting SR/MA should be appraised in the same fashion to determine the methodological quality of the review, the quality of the studies included, inconsistency within the studies, unexplained heterogeneity, and the likelihood of publication bias using tools such as AMSTAR [28,29] and DART [30]. In some cases, the discrepancy may be due to errors in the MA in applying study eligibility criteria or even data extraction [52], so there is a need for an SR/MA protocol and strict quality control.

When MAs include many small underpowered studies, especially combined with the likely presence of publication bias, there is immediate concern that the effect size estimate is inflated or even completely erroneous. In addition, when a great degree of heterogeneity exists in the MA that cannot be easily accounted for, the results may be highly unreliable. In this regard, IPD MAs provide a better platform for assessing and explaining heterogeneity than aggregate data MAs do.
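
As a point of reference only, between-study heterogeneity in an aggregate data MA is commonly quantified with Cochran's Q and the I² statistic. The minimal sketch below is not taken from any of the reviews discussed in this paper; the study effects and standard errors are hypothetical and serve only to show how the statistics are computed under a fixed-effect inverse-variance model.

import numpy as np

# Hypothetical log odds ratios and standard errors for five small studies
effects = np.array([0.42, 0.35, 0.80, 0.10, 0.55])
se = np.array([0.20, 0.25, 0.40, 0.15, 0.30])

weights = 1.0 / se**2                                  # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)   # fixed-effect pooled estimate

q = np.sum(weights * (effects - pooled)**2)            # Cochran's Q
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100.0                    # I^2: % of variation beyond chance

print(f"pooled = {pooled:.3f}, Q = {q:.2f} on {df} df, I^2 = {i2:.1f}%")

A large I² that cannot be explained by prespecified subgroup or sensitivity analyses is the kind of unexplained heterogeneity that undermines confidence in a pooled estimate.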

Two examples were discussed in this manuscript to illustrate the assessment process. In the case of MET for ureteric stones, a large, high-quality RCT [8] contradicted many well-established MAs that pointed to a benefit of this therapy. Analysis of a representative MA [36] revealed the inclusion of many small RCTs, poor internal validity, significant study heterogeneity, and likely publication bias. When such MA concerns are present, a single high-quality RCT may be considered as having the higher LE. For guideline organizations, this process can be used to justify a change in recommendations based on methodologically sound principles.

Radical versus partial nephrectomy provides a more complex example. The MA [9] included only a single RCT, which was the study in conflict with its own results. The other studies included were all retrospective, which in general provide a lower LE. Risk of bias was poorly assessed, and significant study heterogeneity was present. It is important to reiterate that combining observational studies in general, and even comparative nonrandomized studies with RCTs in an intervention MA, may produce unreliable results and is not considered valid. In light of all this, the single RCT [10] in this circumstance might provide more guidance than the MA if it were of sufficiently high quality. However, this RCT also had some methodological concerns, so the comparison is not so simple.

These examples have shown that, instead of automatically assigning a higher LE to SR/MAs that conflict with RCTs, the quality of the evidence and the RoB of the studies included in SRs/MAs should be assessed to determine which source provides the better evidence.

Although non-RCTs can be included in SRs, we have emphasized that only RCTs should be included in intervention MAs. RCTs are not required for MAs of prognostic factors and the accuracy of diagnostic tests; however, the studies included in these MAs should preferably be prospective in nature and based on a protocol to minimize RoB.

Despite the availability of MAs and RCTs, and in cases where high LE does not exist, we may still not know what the best treatment is. The GRADE system, which takes into account the quality of evidence (high, moderate, low, very low) for critical outcomes, provides strengths of recommendations (strong, weak) for or against a treatment to aid clinicians in their practice when consensus is not possible [42,53]. A decision curve approach, which takes into account a patient's values and preferences, may also be used to help choose between the different treatment options.
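
For illustration only, the net benefit underlying a decision curve is typically computed as TP/n - (FP/n) x pt/(1 - pt), where the threshold probability pt reflects how a patient weighs the harm of unnecessary treatment against its benefit. The sketch below uses hypothetical counts for a single treatment strategy; in a full decision curve analysis, TP and FP would be recalculated at each threshold from the predicted risks.

n = 1000            # hypothetical cohort size
tp, fp = 180, 120   # hypothetical true and false positives for one strategy

for pt in (0.05, 0.10, 0.20):
    # net benefit trades true positives against false positives,
    # weighted by the odds of the patient's threshold probability
    nb = tp / n - (fp / n) * (pt / (1.0 - pt))
    print(f"threshold {pt:.2f}: net benefit = {nb:.3f}")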

7. Conclusions

New or existing RCT data can lead to conflicts with MA data. In this paper, we present examples of and explore reasons for such conflicts. Guidance is provided to guideline developers on how to interpret conflicting data in such circumstances to help assess which source is more reliable. For guideline organizations both within and outside urology, having a well-defined and robust process to deal with such conflicts is essential to improve guideline quality.

Author contributions: Richard J. Sylvester had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Sylvester, N'Dow.

Acquisition of data: Sylvester, Lam, Marconi, S. MacLennan, Yuan, Van Poppel, N'Dow.

Analysis and interpretation of data: Sylvester, Canfield, Lam, Marconi, S. MacLennan, Yuan, G. MacLennan, Norrie, Omar, Bruins, Hernandez, Plass, Van Poppel, N'Dow.

Drafting of the manuscript: Sylvester, Canfield, Lam, Marconi, S. MacLennan, Yuan, G. MacLennan, Norrie, Omar, Bruins, Hernandez, Plass, Van Poppel, N'Dow.

Critical revision of the manuscript for important intellectual content: Sylvester, Canfield, Lam, Marconi, S. MacLennan, Yuan, G. MacLennan, Norrie, Omar, Bruins, Hernandez, Plass, Van Poppel, N'Dow.

Statistical analysis: None.

Obtaining funding: None.

Administrative, technical, or material support: None.

Supervision: Sylvester, N'Dow.

Other: None.

Financial disclosures: Richard J. Sylvester certifies that all conflicts of interest, including specific financial interests and relationships and affiliations relevant to the subject matter or materials discussed in the manuscript (eg, employment/affiliation, grants or funding, consultancies, honoraria, stock ownership or options, expert testimony, royalties, or patents filed, received, or pending), are the following: None.

Funding/Support and role of the sponsor: None.

References

[1] Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. Br Med J 1996;312:71-2.

[2] Oxford Centre for Evidence-based Medicine. Levels of evidence. Oxford, UK: CEBM; 2009. www.cebm.net/oxford-centre-evidence-based-medicine-levels-evidence-march-2009/.

[3] Kjaergard LL, Villumsen J, Gluud C. Reported methodologic quality and discrepancies between large and small randomized trials in meta-analyses. Ann Intern Med 2001;135:982-9.
