Archive for the 'psychometrics' Category

Looking for graduate school applicants for research in forced migration, trauma and stress at Fordham University

Fall is graduate school application time, as many programs have application deadlines in October, November and December. I have recently moved to Fordham University’s Department of Psychology, and will be looking for graduate student applicants to the Clinical Psychology Division for the 2013 cohort. If you read this blog you know my experience and general research interests, so you know what kind of student researchers I am looking for. Current research projects include comparing the social networks of forced and voluntary immigrants and the health and mental health implications of network differences, measuring trauma and stress in different culturally-defined subgroups, and community-based participatory research with immigrant populations in general. If those are topics that interest you (and you want to get a PhD in Clinical Psychology), follow the links on the Clinical Psychology website and apply.

Deadline for 2013 applicants is Wednesday, December 5, 2012.

If you are not sure you want to commit to a PhD, but know that you are generally interested in psychology, program evaluation and related skills, please visit Fordham University’s MS in Applied Psychological Methods page. Fordham’s APM program is a relatively new course of study that draws heavily on its well-respected Psychometrics and Applied Developmental Psychology divisions within the Department of Psychology. Admissions are “rolling,” meaning that you can apply at any time and start the following semester. Students can be full- or part-time.


Two press pieces on the science (and anti-science) of PTSD in the military

The past weekend saw two articles in the popular press concerning PTSD among U.S. soldiers that are worth a read. First, the Seattle Times reported that the Army’s new PTSD screening guidelines fault the established screening tests designed to root out PTSD fakers. Why would anyone fake PTSD? The Army pays an average of $1.5 million in lifetime disability benefits per soldier with PTSD, and an estimated 22 percent of returning soldiers have the disorder. The difference between those who have it and those who may be faking it is no small chunk of change.

In part because of pressure by Senator Patty Murray (D-Washington) to investigate screening at the Madigan Army Medical Center (in Tacoma, Washington), the Army Surgeon General has issued new guidelines that criticize the use of the Minnesota Multiphasic Personality Inventory — or MMPI, as most psychologists know it. The MMPI is one of the most venerated psychological screening questionnaires, holding up pretty well over more than 60 years of research (it has been revised several times, most recently in 2008). At issue in this case is the MMPI’s “lie scale,” which has been shown in multiple studies to detect malingerers of various stripes.

According to the Seattle Times, the Army’s new policy “specifically discounts tests used to determine whether soldiers are faking symptoms of post-traumatic stress disorder. It says that poor test results do not constitute malingering.” Technically this is true: malingering scale scores on any normed test like the MMPI are associated with higher or lower probabilities of malingering, not absolute certainty. Still, by throwing out the best tool they have to detect whether soldiers are malingering, what the Army really seems to be doing is trying to avoid appearing callous by relying on scientific methods.

Ironically, the same guidelines include empirically-based treatment improvements regarding medication:

The document found “no benefit” from the use of Xanax, Librium, Valium and other drugs known as benzodiazepines in the treatment of PTSD among combat veterans. Moreover, use of those drugs can cause harm, the Surgeon General’s Office said. The drugs may increase fear and anxiety responses in these patients. And, once prescribed, they “can be very difficult, if not impossible, to discontinue,” due to significant withdrawal symptoms compounded by PTSD, the document states.

Score one for research on meds, zero for research on screening questionnaires (or maybe: psychiatrists one, psychologists zero).

The second article of note wasn’t a report, but an editorial. Writing in the New York Times’ Sunday Review, Weill Cornell Psychiatrist Richard Friedman builds the case that one possible reason for the increase in cases of PTSD among returning soldiers from Afghanistan and Iraq is an increase in stimulants prescribed to them on the battlefield. With the help of the Freedom of Information Act, Dr. Friedman found that military spending on stimulants increased 1,000 percent over five years.

Stimulants do much more than keep troops awake. They can also strengthen learning. By causing the direct release of norepinephrine — a close chemical relative of adrenaline — in the brain, stimulants facilitate memory formation. Not surprisingly, emotionally arousing experiences — both positive and negative — also cause a surge of norepinephrine, which helps to create vivid, long-lasting memories. That’s why we tend to remember events that stir our feelings and learn best when we are a little anxious.

Since PTSD is basically a pathological form of learning known as fear conditioning, stimulants could plausibly increase the risk of getting the disorder.

Dr. Friedman goes on to explain the neurochemistry behind the proposed interaction of stimulants and trauma, review new research showing ameliorative effects of beta-blockers, and (appropriately) call for more transparency and more research on the topic.

The HESPER: WHO’s measurement answer to the problem of identifying needs within displaced populations

The World Health Organization recently released the Humanitarian Emergency Settings Perceived Needs Scale (HESPER), a measure that they hope will operationalize the IASC Guidelines on Mental Health and Psychosocial Support in Emergency Settings and encourage rapid assessment of perceived needs in disaster settings. Longtime disaster mental health and psychosocial researcher Mark van Ommeren led the project, which means it was developed with as much rigor as these settings allow, including some necessary flexibility. A large advisory group that reads (with a few exceptions) like a who’s who of international disaster mental health and psychosocial intervention provided regular input, and the HESPER was tested in sites as varied as Sudan, the UK, Jordan, the Palestinian Territories, Haiti and Nepal. Overall the psychometrics reported look good, particularly given the diversity of locations. There are sections on individual needs and community-level needs across a surprising number of domains, a welcome relief from the usual unidimensional, individual-level measures.

What may be the best thing about the HESPER guide is the presentation. Van Ommeren and company have provided not only the measure and the methods used to develop it, but also sections on training local administrators, appropriate sampling, a mock interview transcript that reads true, and even a section on how to present HESPER findings to organizations. Too often I have seen a disaster relief NGO get a measure that may or may not be valid, administer it haphazardly, and then be unsure of how to meaningfully present findings. In addition, there’s an “Other things to consider” section which includes the things that you don’t usually think about but that are blatantly obvious on the ground — the dilemma of raised expectations that often comes about just by asking about problems, for instance.

And then there’s this:


The HESPER Scale may be used by anybody in its current form for non-commercial purposes. Should you wish to make any modifications to the scale, or translate the scale into another language, you will need to get permission from WHO Press (for contact details, see inside cover page). Currently the HESPER Scale (i.e. Appendix 1 only) is available in English, French, Spanish, Arabic, Nepali, and French / Haitian Creole. Word files of the different HESPER Scale language versions are available upon request.

The WHO provides their measures for free and welcomes further development of these types of rapid assessments.

Response scale hijinx

My friend and colleague John Wilkinson, who worked with me to develop a response scale for distress among Darfur refugees some years ago (see blog entry), directed me to a humorous blog entry on response scales.


The blog entry includes new interpretations for each level, including the following for the third face from the right (my favorite):

4: Huh.  I never knew that about giraffes.

I read through the entry, then remembered where I’d seen the scale: I’m a little embarrassed to say that I actually have the scale that the blog makes fun of posted on my refrigerator door in my apartment. For the record, it’s the Wong-Baker Faces Pain Rating Scale. I use it to tell myself what kind of day I’m having. Even psychologists need help with their emotions sometimes. (Say what you will.)

See the whole entry at Hyperbole and a Half. (Warning: The blogger uses some language that may be offensive to some.)

“Blue cloth” psychological symptom measure

Phnom Penh, Cambodia. An NGO here involved in mental health work around the country uses an innovative technique to measure severity of psychological symptoms, one that might make Western psychologists uneasy. “Blue cloth” is, as you might imagine, a blue cloth, a large one maybe 5 ft x 5 ft, with psychological symptoms (headaches, trouble sleeping, irritability) listed down the left side and five different colored pockets arranged in columns to the right, one column for each “level” of symptom severity (e.g., the pockets in the first column represent no experience of the symptom, whereas the pockets in the last column represent extreme severity–see photo below). Participants in support groups each take turns placing rocks into the pockets that represent how severe the symptoms have been in the past 30 days. NGO staff record each participant’s placement of rocks in order to keep a record of how distressed participants are. The process is repeated 6 months later, and comparing the two profiles is how the NGO measures symptom change over the course of treatment (hopefully the rocks are in the less severe pockets at 6 months).

Blue cloth psychological symptom severity measure

All this is done in front of other group members. The Director told me that some visiting Western psychologists were nervous about this given the obvious breach of confidentiality within the group. He brought the issue back to the groups. In the alcohol abuse groups, members responded that doing the exercise in front of the rest of the group was better because they could serve as checks on each other’s reports when they might be inclined to minimize or lie outright. To the visiting Westerners the group setting was a clear threat to the reliability of the data, but to the group members privacy was the threat to reliability. The Director reported that members of other groups were somewhat confused about the Westerners’ objections; weren’t the group members supposed to share these sorts of things in group therapy? Blue cloth became a tool for discussion of symptoms as much as a measure of them.

Undoubtedly public reporting of symptoms within a support group is not the same thing as using a symptom measure that is confidential. But is the information gained from it necessarily less reliable? Is it less clinically valid? Of course, these are empirical questions, to be answered in some clinically-relevant research utopia to come…
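For readers who like to see the bookkeeping spelled out: the NGO's record-keeping amounts to storing one severity profile per participant and differencing it against the 6-month profile. The sketch below is purely illustrative (this is not the NGO's actual system); the symptom names and the 0–4 scoring of the five pocket columns are my assumptions.

```python
# Illustrative sketch of blue-cloth record keeping: one severity rating
# per symptom (0 = no experience, 4 = extreme), compared across two
# administrations. Symptom list and scoring are assumed, not the NGO's.

SYMPTOMS = ["headaches", "trouble sleeping", "irritability"]

def record_profile(ratings):
    """Validate and store one participant's pocket placements."""
    profile = {}
    for symptom in SYMPTOMS:
        severity = ratings[symptom]
        if not 0 <= severity <= 4:
            raise ValueError(f"severity for {symptom!r} must be 0-4, got {severity}")
        profile[symptom] = severity
    return profile

def symptom_change(baseline, followup):
    """Per-symptom change at 6 months; negative values mean improvement."""
    return {s: followup[s] - baseline[s] for s in SYMPTOMS}

baseline = record_profile({"headaches": 3, "trouble sleeping": 4, "irritability": 2})
followup = record_profile({"headaches": 1, "trouble sleeping": 2, "irritability": 2})
print(symptom_change(baseline, followup))
# → {'headaches': -2, 'trouble sleeping': -2, 'irritability': 0}
```

The rocks-in-pockets format is, in effect, a public, tactile version of exactly this data structure.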

The “This person must be really sick” symptom severity scale

The last project I worked on in Chad (an evaluation of an innovative psychosocial program) involved a large-scale survey to identify the prevalence of distress among Darfur refugees living in the eastern part of the country. Doing survey research in the Sahel presented a number of challenges, including an employee pool with little experience doing surveys (remedy: lots of basic research training), a population suspicious of people asking any questions whatsoever (remedy: lots and lots of explanations, and staff drawn from the camps themselves), and sand getting into laptop computers and scanners (remedy: scotch tape, everywhere). My personal favorite, however, concerned the visual scale we used to help survey respondents express their level of distress for each symptom (e.g., “Have you had trouble sleeping? How much has this bothered you: not at all, a little bit, moderately, a lot, extremely”). Here is a picture of what we came up with:

Our visual scale for women participants

You can see it is arranged from right to left, because Darfuris come from Arabic-speaking cultures and Arabic is read from right to left. The woman on the far right is carrying only a few sticks on her head and represents no distress; the next carries a little more, and so on until the figure on the far left, who just can’t carry the sticks anymore: extreme distress. We engaged a great graphic artist (known in New York for her comic books) to make these images, using photos from Darfur and consultation with people who had worked with the population. We made one set for women (pictured) and one set for men. Others (e.g., Paul Bolton’s group) have used this type of visual scale with people carrying things in other projects in the region. In general, pilot testing with this scale seemed to work pretty well. Everyone got that each figure was worse than the one before, and could generally point to how badly they were feeling for a given item.

A couple weeks into the project (the whole survey took three months, over 1500 respondents for a two hour protocol), I was sitting in the room with an interviewer and a participant, and after a few items using the scale the participant started chuckling to himself. The interviewer smiled, repeated the question, and the man started shaking his head. The interviewer asked him a question, and the man answered by pointing to the figure on the ground, the most extreme case to the far left. I asked the interviewer what the man had said.

“He says that this person must be very sick, or else he would be able to carry the wood. That’s really not a lot of wood. And he says that this person [pointing to the figure furthest to the right] must be really lazy. He should be carrying more wood.”

We thought we were measuring participants’ symptom severity; really, we were asking how lazy or weak they were. Hm.
