Some Social Scientists Are Tired of Asking for Permission

If you took Psychology 101 in college, you probably had to enroll in an experiment to fulfill a course requirement or to get extra credit. Students are the usual subjects in social science research — made to play games, fill out questionnaires, look at pictures and otherwise provide data points for their professors’ investigations into human behavior, cognition and perception.

But who gets to decide whether the experimental protocol — what subjects are asked to do and disclose — is appropriate and ethical? That question has been roiling the academic community since the Department of Health and Human Services’ Office for Human Research Protections revised its rules in January.

The revision exempts from oversight studies involving “benign behavioral interventions.” This was welcome news to economists, psychologists and sociologists, who have long complained that their work should not be subject to the same scrutiny as, say, medical research.

The change received little notice until a March opinion article in The Chronicle of Higher Education went viral. The authors of the article, a professor of human development and a professor of psychology, interpreted the revision as a license to conduct research without submitting it for approval by an institutional review board.

That is, social science researchers ought to be able to decide on their own whether their studies are harmful to human subjects.

The Federal Policy for the Protection of Human Subjects (known as the Common Rule) was published in 1991 after a long history of exploitation of human subjects in federally funded research — notably, the Tuskegee syphilis study and a series of radiation experiments that took place over three decades after World War II.

The remedial policy mandated that all institutions, academic or otherwise, establish a review board to ensure that federally funded researchers conducted ethical studies.

“One of the problems with the regulations is not every case is a difficult case and needs to go to an I.R.B.,” said Zachary Schrag, a professor of history at George Mason University and the author of “Ethical Imperialism,” a history of institutional review boards.

“Like behavioral economics experiments — you’re talking about giving people Hershey’s Kisses to find out how hard it is for them to give up chocolate or how hard they will work to get the chocolate.”

Among like-minded academics, there was much 140-character fist-pumping on Twitter over an end to what they perceived as review board nit-picking and delays in getting studies approved.

The problem is that the Office for Human Research Protections, in its revised rules, did not specify exactly who gets to determine what is and is not a benign behavioral intervention. Although there is a suggestion that someone other than the researcher should make that call, the office does not mandate it.

“Researchers tend to underestimate the risk of activities that they are very comfortable with,” particularly when conducting experiments and publishing the results is critical to the advancement of their careers, said Tracy Arwood, assistant vice president for research compliance at Clemson University.

A previous version of the revised Common Rule, which prompted more than 2,100 comments, called for a web-based decision tool that researchers could use to determine whether their research was exempt. But such a tool, which many thought left too much to the individual researcher’s personal judgment, did not make it into the final rule.

A vocal proponent of diminishing the role of institutional review boards is Richard Nisbett, professor of psychology at the University of Michigan and co-author of the opinion piece in The Chronicle of Higher Education.

Social science researchers are perfectly capable of making their own determinations about the potential harm of their research protocols, he said. A behavioral intervention is benign, he said, if it’s the sort of thing that goes on in everyday life.

“I can ask you how much money you make or about your sex life, and you can tell me or not tell me. So, too, can a sociologist or psychologist ask you those questions,” Dr. Nisbett said.

“There’s no such thing as asking a question of a normal human being that should be reviewed by an I.R.B., because someone can just say, ‘To heck with you.’”

His own research, he said, involves “showing people a fish tank and asking them what they saw.” Hardly the stuff of emotional trauma, he thinks.

But research subjects, many of them students, may not feel they can simply walk away from a teacher’s experiment. Recall the Milgram study at Yale in the early 1960s, in which visibly distraught subjects obeyed orders to administer what they thought were electric shocks to yelping actors.

A decade later, in the 1970s, there was the Stanford prison experiment, in which arbitrarily labeling student subjects prisoners or guards quickly led to “Lord of the Flies”-type cruelty.

And then there was the research that involved humiliating and emotionally tormenting 22 undergraduates at Harvard University over three years starting in 1959. (One of those students was a young Ted Kaczynski, who later became the Unabomber.)

Dr. Nisbett countered that those examples were outliers. And in the case of the Milgram study, he said, “I think it should definitely have been approved even if people would have known that it was going to cause substantial psychic pain to some subjects, because the knowledge gain is precious.”

Administrators of institutional review boards said that it took only one bad study to ruin an institution’s reputation, its finances and its eligibility for government funding.

“There’s a lot at stake beyond assessing the potential risk to subjects,” said Rebecca Armstrong, director of research subject protection at the University of California, Berkeley. “We try to be as flexible as we can, but institutionally you sort of arrive at what fits into minimal risk and create review processes accordingly.”

Already at many universities, researchers who think their studies pose minimal risk to subjects need only get a signoff from a review board staff member. They do not have to submit their proposals for approval by the full review board — usually made up of colleagues, at least one member of the community and sometimes also students.

Ultimately, review board administrators and board members said the revised federal rules were a baseline for oversight, and that each institution must determine what is appropriate for itself. But they are feeling increased pressure from resident researchers who, like Dr. Nisbett, think the new rules allow self-regulation.

“There seems to be a major paradigm shift going on away from the original goal of the I.R.B. to protect human subjects and toward the convenience of researchers in the name of so-called efficiency,” said Tom George, a lawyer and bioethicist who serves on the institutional review board at the University of Texas at Austin. “I find that of deep concern.”

Not all researchers are pushing for diminished review board oversight, however. Many said they appreciated the scrutiny.

“It is a little more work and some could find it onerous, but I still find it a worthy process because you get questions and suggestions that make you feel more confident that subjects are protected,” said Nathaniel Herr, an assistant professor of psychology at American University, who also serves on the school’s institutional review board.

Besides, he added, “It just takes one scandal to make people doubt all research and not want to participate, which would harm the whole field.”

Correction: May 22, 2017

An earlier version of this article misspelled the surname of a professor of psychology at the University of Michigan in two instances. As the article correctly noted elsewhere, he is Richard Nisbett, not Nesbitt.

https://mobile.nytimes.com/2017/05/22/science/social-science-research-institutional-review-boards-common-rule.html