Meta-analysis


Meta-analysis is a set of statistical tools for synthesizing data from a collection of studies. A meta-analysis begins by collecting estimates of a given effect (expressed as an effect-size index, such as a standardized mean difference, hazard ratio, or correlation) from each study. Meta-analysis allows these effects to be assessed in context: if the effect size is consistent across studies, the treatment effect can be considered robust, and the effect size can be estimated with greater precision than in any single study. If the effect size varies, that variation can be described and potentially explained.

The term meta-analysis, as such, was initially applied in the social sciences and psychology. Starting in the 1980s, it began to be increasingly applied in medicine, and from the 1990s, articles describing meta-analysis results were very common in medical journals.

History

The term "meta-analysis" was coined by Gene V. Glass in 1976, who was the first modern statistician to point out that his main interest was "what we have called the meta-analysis of research". Although this earned him wide recognition as the founder of the modern method, it was not until the 1990s that meta-analyses began to figure regularly, though not always, as important components of the systematic review process. Statistical theory around meta-analysis was greatly advanced by the work of Nambury S. Raju, Larry V. Hedges, Harris Cooper, Ingram Olkin, John E. Hunter, Jacob Cohen, Thomas C. Chalmers, Robert Rosenthal, and Frank L. Schmidt.

Advantages

Conceptually, meta-analysis uses a statistical approach to combine the results of multiple studies. Its advantages include the following:

  • The results can be generalized to a larger population;
  • The precision and accuracy of the estimates can be improved through greater use of data. This, in turn, can increase the statistical power to detect an effect;
  • Inconsistency of results across studies can be quantified and analyzed, e.g., by assessing whether heterogeneity is due to sampling error alone or partly reflects genuine differences among the studies involved;
  • Hypotheses can be tested against the combined estimates;
  • Moderators can be included to explain variation between studies;
  • Publication bias can be examined and analyzed.

Potential difficulties

A meta-analysis of several small studies does not always predict the results of a single large study. Some have argued that a weakness of the method is that it does not control for sources of bias: a good meta-analysis of badly designed studies will still produce bad statistics. This would mean that only methodologically sound studies should be included in a meta-analysis, a practice called 'best evidence synthesis'. Other reviewers would include weaker studies and add a study-level predictor variable reflecting the methodological quality of the studies, in order to examine the effect of study quality on effect size. However, others have argued that the better approach is to preserve information about the variation in the study sample, casting as wide a net as possible, and that methodological selection criteria introduce unwanted subjectivity, defeating the purpose of the approach.

Publication bias: the file drawer problem

A funnel plot expected in the absence of the file drawer problem.
A funnel plot expected in the presence of the file drawer problem.

Another potential pitfall is the reliance on what is available from published studies, which can lead to exaggerated results due to publication bias, as studies showing negative or insignificant results are less likely to be published. For any given area of research, one cannot know how many studies have been hidden or discarded.

This problem results in distributions of effect sizes that are biased, skewed, or entirely cut off, creating a common logical fallacy in which the significance of the published studies is overestimated while the remaining studies go unpublished. This should be seriously considered when interpreting the results of a meta-analysis.

This can be visualized with a funnel plot, a scatterplot of sample size against effect size. For a given effect level, the smaller the study, the greater the chance of finding it by chance alone; at the same time, the greater the effect level, the less likely it is that a larger study could turn out equally positive. If many negative studies go unpublished, the remaining positive studies give rise to a funnel plot in which effect size is inversely proportional to sample size; in other words, a significant part of the apparent effect is due to chance that is not balanced out in the plot, because the negative, unpublished data are absent. In contrast, when most studies are published, the effect shown has no reason to be biased by study size, and a symmetric funnel plot results. Thus, if there is no publication bias, there should be no relationship between sample size and effect size. A negative relationship between sample size and effect size would imply that studies finding significant effects were more likely to be published and/or submitted for publication. Several procedures are available that attempt to identify and correct for the file drawer problem, such as guessing at the tail of the distribution of study effects.
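The mechanism behind funnel-plot asymmetry can be illustrated with a small simulation (a hypothetical sketch, not taken from the article: the true effect, the sample-size range, and the significance filter are all assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_published_effects(true_effect=0.3, n_studies=2000):
    """Simulate studies of varying size, then keep only the 'significant'
    ones, mimicking publication bias (the file drawer problem)."""
    n = rng.integers(10, 200, size=n_studies)   # per-study sample sizes
    se = np.sqrt(2.0 / n)                       # rough SE of a mean difference
    observed = rng.normal(true_effect, se)      # each study's observed effect
    published = observed / se > 1.96            # only significant results survive
    return observed[published], se[published]

effects, ses = simulate_published_effects()

# Funnel-plot asymmetry: among published studies, the less precise
# (high-SE) studies report systematically larger effects.
r = np.corrcoef(ses, effects)[0, 1]
mean_published = effects.mean()
```

Plotting `ses` against `effects` would show the asymmetric funnel described above; the positive correlation `r` and the inflated `mean_published` (above the true 0.3) are exactly the signatures a funnel plot makes visible.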

Methods for detecting publication bias have been controversial, as they tend to have low power for detecting it and may even produce false positives under certain circumstances. A combined method for analyzing publication bias has been proposed to reduce such false positives; it suggests that 25% of meta-analyses in psychology may be affected by publication bias. However, the low-power issues remain controversial, and estimates of publication bias may be lower than the actual amount.

Agenda-driven bias

The most serious mistake in meta-analysis (H. Sabhan) often occurs when the person or persons conducting the meta-analysis have an economic, social, or political agenda, such as the passage or defeat of legislation. People with these types of agendas may be more likely to misuse meta-analysis because of their biases.

Development

  1. Formulation of the problem
  2. Search for literature
  3. Selection of studies (inclusion criteria):
    • Based on quality criteria, e.g. the requirement of randomization and blinding in a clinical trial;
    • Selection of specific studies on a well-specified topic, such as a cancer treatment;
    • Deciding whether unpublished studies are included, to avoid publication bias (the file drawer problem).
  4. Decide which dependent variables or summary measures are available. For example:
    • Differences (discrete data);
    • Averages (continuous data);
    • Hedges' g is a popular summary measure for continuous data that is standardized in order to eliminate scale differences, but it incorporates an index of variation between groups:
    δ = (μ_t − μ_c) / σ, where μ_t is the treatment mean, μ_c is the control mean, and σ² is the pooled variance.
  5. Selection of model
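The δ formula above can be sketched in code; this is a minimal illustration (the sample data are invented), including the small-sample correction factor that distinguishes Hedges' g from the uncorrected standardized difference:

```python
import numpy as np

def standardized_mean_difference(treatment, control):
    """delta = (mu_t - mu_c) / sigma, with sigma the pooled standard
    deviation; also returns Hedges' g, which corrects small-sample bias."""
    t = np.asarray(treatment, dtype=float)
    c = np.asarray(control, dtype=float)
    nt, nc = len(t), len(c)
    pooled_var = ((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1)) / (nt + nc - 2)
    d = (t.mean() - c.mean()) / np.sqrt(pooled_var)
    correction = 1 - 3 / (4 * (nt + nc) - 9)   # Hedges' small-sample factor
    return d, correction * d

d, g = standardized_mean_difference([5.1, 5.9, 6.2, 5.5], [4.0, 4.4, 4.9, 4.1])
```

Because the difference of means is divided by the pooled standard deviation, the result is scale-free and can be combined across studies that used different measurement instruments.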

For reporting guidelines, see the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) articles.

Methodology and assumptions

The methods

In general, two types of evidence can be distinguished when conducting a meta-analysis: individual participant data (IPD) and aggregate data (AD). Whereas IPD represents the raw information from each study center, aggregate data are more commonly available (e.g., from the literature) and typically represent global estimates such as odds ratios or relative risks. This distinction has raised the need for different methods of evidence synthesis, leading to the development of one-stage and two-stage methods. In one-stage methods, the IPD from all studies is modeled simultaneously while accounting for the clustering of participants within studies; in contrast, two-stage methods synthesize the aggregate data from each study, taking study weights into account. By reducing IPD to AD, the two-stage methods can also be applied when IPD is available, which offers an alternative course of action when performing a meta-analysis.

Although one-stage and two-stage methods are believed to yield similar results, recent studies have shown that such methods can sometimes lead to different conclusions.

Assumptions

Fixed effect

The fixed-effect model provides a weighted average of a series of study estimates: the inverse of each estimate's variance is commonly used as the study weight, so that studies with larger samples tend to contribute more to the weighted mean than studies with smaller samples. Consequently, when the studies in a meta-analysis are dominated by one large study, the findings of the smaller studies are practically ignored. Most importantly, this model assumes that all included studies are identical: they study the same population, use the same variable and outcome definitions, and so on. This assumption is typically unrealistic, because all research is prone to various sources of heterogeneity; for instance, treatment effects may differ according to regional setting, dosage level, or study conditions.
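The inverse-variance weighting just described can be written in a few lines (a sketch with made-up effect sizes and variances):

```python
import numpy as np

def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted mean: larger (lower-variance) studies
    dominate the pooled estimate, as the fixed-effect model prescribes."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * effects) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))       # SE of the pooled estimate
    return pooled, pooled_se

# One large study (variance 0.01) and two small ones (variance 0.25):
pooled, se = fixed_effect_pool([0.10, 0.60, 0.50], [0.01, 0.25, 0.25])
```

With these numbers the large study gets weight 100 versus 4 for each small study, so the pooled estimate (about 0.13) sits very close to the large study's 0.10, illustrating how smaller studies are practically ignored.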

Random effects

A common model for synthesizing heterogeneous studies is the random-effects model; this is simply the weighted mean of the effect sizes of a group of studies. The weight applied in this weighting process in a random-effects meta-analysis is obtained in two steps:

  1. Step 1: Obtain sampling variances.
  2. Step 2: Calculation of the specific variance, due to the heterogeneity of the studies. The weight of each study is obtained by adding the sampling variance of that study (different for each study) and the specific variance (common to all studies); the weight is the inverse of the sum of those variances. Thus, for study i the weight would be: weight(i) = 1/[var(i) + var(specific)]. The specific variance of the random-effects model is obtained by assessing the extent to which the observed variability exceeds the variability expected from sampling alone (e.g., by the method of moments).

In the extreme case where the specific variance is very large, the weights can be dominated by that variance, making the studies' sample sizes negligible in practical terms; the weighted mean will then be very close to the simple, unweighted arithmetic mean. At the opposite extreme, the estimate of the specific variance may be zero; in that case the weights equal the inverses of the sampling variances, and the result is identical to that obtained under the fixed-effect model. The fixed-effect model is thus a special case of the random-effects model, arising when the specific variance equals zero.

The extent of this shift toward equal weighting depends on two factors: the heterogeneity of precision and the heterogeneity of effect size.

The most widely used method to estimate the specific variance and account for heterogeneity is the DerSimonian-Laird (DL) method, or method of moments, proposed in 1986. Other methods have since been proposed, such as Restricted Maximum Likelihood (REML), an iterative and computationally more intensive method. However, a comparison of these two models (and others) showed that there are few practical differences and that DL is quite adequate in most scenarios.
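The two-step recipe above can be sketched as a minimal method-of-moments (DerSimonian-Laird) implementation, with hypothetical data chosen so that the observed spread exceeds what sampling error alone would explain:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird (method-of-moments) estimate of the between-study
    (specific) variance tau^2, and the resulting random-effects pooled mean."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                # fixed-effect weights
    fe_mean = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fe_mean) ** 2)         # Cochran's Q statistic
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # excess variability, truncated at 0
    w_star = 1.0 / (v + tau2)                  # weight(i) = 1/[var(i) + var(specific)]
    re_mean = np.sum(w_star * y) / np.sum(w_star)
    return tau2, re_mean

tau2, re_mean = dersimonian_laird([0.10, 0.80, 0.70], [0.01, 0.04, 0.04])
```

Here the random-effects mean (about 0.51) lies much closer to the simple arithmetic mean (0.53) than the fixed-effect mean (about 0.32) does, because the positive tau² reduces the dominance of the large, precise study.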

Meta-regression

Meta-regression is a tool used in meta-analysis to examine the impact of moderator variables on effect size using regression-based techniques. Meta-regression is more effective at this task than standard regression techniques.
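A bare-bones sketch of the idea (the study data and the dose moderator are invented for illustration): each study's effect size is regressed on a moderator using inverse-variance weighted least squares:

```python
import numpy as np

def meta_regression(effects, variances, moderator):
    """Weighted least-squares meta-regression: regress study effect sizes
    on a moderator, weighting each study by its inverse variance."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    X = np.column_stack([np.ones_like(y), np.asarray(moderator, dtype=float)])
    sw = np.sqrt(w)
    # Solve the weighted normal equations via scaled ordinary least squares.
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta  # [intercept, moderator slope]

# Hypothetical studies whose effect size grows with dose (the moderator):
beta = meta_regression(
    effects=[0.20, 0.35, 0.55, 0.70],
    variances=[0.02, 0.03, 0.02, 0.04],
    moderator=[10, 20, 30, 40],
)
```

A positive slope would suggest that dose moderates the effect, i.e., part of the between-study heterogeneity is explained by the moderator rather than by sampling error.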

Application in modern science

In medicine, a meta-analysis is a study based on the structured and systematic integration of the information obtained in different clinical trials on a specific health problem. It consists of identifying and reviewing the controlled studies on a given problem, in order to give a quantitative synthetic estimate of all the available studies. Because it includes a larger number of observations, a meta-analysis has greater statistical power than the clinical trials it includes. The two main methodological problems of meta-analyses of clinical trials are:

  • The heterogeneity between the trials included, in terms of clinical and socio-demographic characteristics of the populations in each trial, the clinical evaluation methods applied, the dosage, pharmaceutical form or drug dosage pattern evaluated, etc.
  • The possible publication bias, derived from the fact that not all clinical trials actually performed have been published, owing to negative or inconclusive results.

A clinical meta-analysis is, in essence, an integration or reuse of information already obtained, from which further analysis can be derived.

The first clinical meta-analysis was carried out by Karl Pearson in 1904, in an attempt to overcome the problem of the low statistical power of studies with small sample sizes; analyzing the results of a group of similar studies together allows a more accurate assessment of the effects.

In statistics, a meta-analysis refers to the set of methods focused on contrasting and combining the results of different studies, in the hope of identifying patterns among study results, sources of disagreement among those results, or other interesting relationships that may emerge in the context of multiple studies. In addition, meta-analysis can also be applied to a single study in cases where there are many cohorts that have not gone through the same selection criteria, or where the same research methodologies have not been applied to all of them in the same way or under the same conditions. In these circumstances, each cohort is treated as an individual study, and meta-analysis is used to draw conclusions across the whole study.

In its simplest form, a meta-analysis is accomplished by identifying a common measure of effect size, a weighted average of which is the output of the meta-analysis. The weighting is typically related to the sample sizes of the individual studies.

More often, there are other differences between studies that need to be allowed for, but the overall goal of a meta-analysis is to estimate the true effect size more strongly than is possible with the less precise estimate derived from a single study under a single given set of assumptions and conditions.

Further Reading

  • Thompson, Simon G.; Pocock, Stuart J. (2 November 1991). "Can meta-analysis be trusted?". The Lancet 338 (8775): 1127-1130. PMID 1682553. doi:10.1016/0140-6736(91)91975-Z. Archived from the original on 22 November 2011. Retrieved 17 June 2011. Explores two contrasting views: does meta-analysis provide "objective, quantitative methods for combining evidence from separate but similar studies" or merely "statistical tricks which make unjustified assumptions in producing oversimplified generalisations out of a complex of disparate studies"?
  • Wilson, D.B., Lipsey, M. W. (2001). Practical meta-analysis. Thousand Oaks: Sage publications. ISBN 0-7619-2168-0
  • O'Rourke, K. (2007) Just the history from the combining of information: investigating and synthesizing what is possibly common in clinical observations or studies via likelihood. Oxford: University of Oxford, Department of Statistics. Gives technical background material and details on the "An historical perspective on meta-analysis" paper cited in the references.
  • Owen, A. B. (2009). "Karl Pearson's meta-analysis revisited." Annals of Statistics, 37 (6B), 3867–3892. Supplementary report.
  • Ellis, Paul D. (2010). The Essential Guide to Effect Sizes: An Introduction to Statistical Power, Meta-Analysis and the Interpretation of Research Results. United Kingdom: Cambridge University Press. ISBN 0-521-14246-6
  • Bonett, D.G. (2012). Replication-extension studies, Current Directions in Psychology, 21, 409-412.
  • Bonett, D.G. (2010). Varying coefficient meta-analysis methods for alpha reliability, Psychological Methods, 15, 368-385.
  • Bonett, D.G. (2009). Meta-analytic interval estimation for standardized and unstandardized mean differences, Psychological Methods, 14, 225–238.
  • Bonett, D.G. (2008). Meta-analytic interval estimation for bivariate correlations, Psychological Methods, 13, 173-189.
  • Stegenga, Jacob (2011). «Is meta-analysis the platinum standard of evidence?». Studies in History and Philosophy of Biological and Biomedical Sciences 42 (4): 497-507. doi:10.1016/j.shpsc.2011.07.003.
