Zachary Steel and colleagues in New South Wales (Australia) and the World Health Organization in Geneva (Switzerland) have recently published a review and meta-analysis of the relationships between “torture and other potentially traumatic events” and PTSD and depression among displaced, war-affected populations (that’s a mouthful, as is their 20-word title, and that’s not including a five-word subtitle) in the Journal of the American Medical Association (JAMA). For those who aren’t in academic medicine (and God bless you), publishing in JAMA is one of the premier acknowledgements of scholarship in the field.
A term you should know: “meta-analysis” sounds pretty fancy, and in some ways it is, in some ways it isn’t. It is fancy in that you can take a bunch of articles on a similar theme, consolidate the findings, and, through the application of a few relatively simple statistics, infer certain conclusions about the particular theme of interest. Consolidating articles means getting a larger number of subjects, referred to as your “n” (for “number”), and a bigger n is better for something statisticians call “power.” (Really.) So, for instance, Steel et al. (2009) identified 161 studies (from an initial pool of 5,904 abstracts published between 1980 and May 2009) reporting results from 181 surveys comprising 81,866 refugees.
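To make the n-and-power point concrete, here’s a toy sketch in Python. The numbers are invented for illustration (they are not Steel et al.’s data), and the pooling shown is the simplest possible kind (adding up cases and sample sizes); real meta-analyses typically use weighted fixed- or random-effects models. The point it demonstrates is just that the standard error of a prevalence estimate shrinks as the pooled n grows.

```python
import math

# Three hypothetical surveys, each as (cases, n) -- invented numbers
studies = [(30, 100), (120, 500), (45, 300)]

cases = sum(c for c, n in studies)
n_total = sum(n for _, n in studies)
pooled_p = cases / n_total  # naive pooled prevalence across all surveys

# Standard error of a proportion is sqrt(p * (1 - p) / n),
# so it shrinks as n grows -- that's the "power" payoff of pooling.
se_small = math.sqrt(pooled_p * (1 - pooled_p) / 100)      # one small survey
se_pooled = math.sqrt(pooled_p * (1 - pooled_p) / n_total)  # all combined

print(f"pooled prevalence: {pooled_p:.3f}")
print(f"SE with n=100:     {se_small:.3f}")
print(f"SE with n={n_total}:     {se_pooled:.3f}")
```

With these made-up figures the pooled prevalence is about 0.217, and the standard error drops roughly threefold when you go from n = 100 to n = 900, which is why combining 181 surveys into one 81,866-person pool buys you so much precision.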
The non-fancy part of meta-analysis is that you need to pick articles that are really similar to one another. And, given the pressures of distinguishing oneself in academia, no two studies are really that similar. Essentially the problem is that meta-analysis claims to condense studies that are often uncondensable. However, with this grain of salt, I think we can use meta-analysis to make some important inferences about a field.
The biggest finding (to me anyway) from Steel et al. is that studies carried out with higher levels of methodological rigor (here they mean sampling procedures that better approximated the population of interest) report, on average, lower rates of torture, PTSD, and depression than studies using less representative samples. So, in studies using probability sampling methods, the PTSD prevalence rate among refugees was 26.6%, whereas in studies without probability sampling techniques it was 37.2%. Given that the rates reported in the literature vary so widely (e.g., the 0-99% PTSD prevalence range), it’s nice to have some summary numbers split by whether or not the studies were solid epidemiological work.
The prevalence rate for torture across all studies was 21%, and, not surprisingly, torture was the single greatest predictor of PTSD and depression. What exactly was meant by torture was not really spelled out. In general, the authors referred to “torture and other potentially traumatic events,” which probably means other war-related trauma. There didn’t seem to be any attempt to describe which definitions of torture were applied in the studies (if you read the article and see something else, please let me know).
Of note was Steel et al.’s use of the Political Terror Scale (or “PTS”), an index of political repression (created in the 1980s) used by Amnesty International and the U.S. State Department (odd bedfellows), employed here as a covariate. The PTS accounted for a “modest” (their word) amount of the variability in mental health outcomes, but I think the fact that they used it deserves praise and further study. One of the major objections practitioners have to research is that researchers don’t do a good job of measuring the overwhelming breakdown of communities affected by war and political violence, and using the PTS is an attempt to do that. For more on the PTS, see the PTS website; you can check to see how Amnesty International and the U.S. State Department rate different countries.
In summary, Steel et al. is a great contribution to our field. The message that the differing rates of torture and subsequent mental health outcomes are due in part to the varying quality of sampling procedures is an important one. I think this might have to do with many authors’ assumption that they must report “prevalence rates” even when they aren’t setting out to do a prevalence study. I remember how the reviewers for a paper I published in the Journal of Traumatic Stress on west and central African refugees’ endorsement of PTSD symptoms (Rasmussen et al., 2007) thought I should report rates of PTSD in addition to the factor model that was the focus of the manuscript. Luckily I argued my way out of it, explaining that I was using a sample of patients at a clinic for refugees, not a representative sample of refugees in general, and therefore the idea of a prevalence rate was not really applicable. Funnily enough, a manuscript later submitted to the same journal, which I was asked to review, cited my paper as evidence of high rates of PTSD among refugees! The point: if you’re not doing epidemiology, don’t imply that you’re doing epidemiology.