Evidence-Based?

Once I start reading a book, it has to be utterly abysmal before I decide to give up on it. The book I’m reading now is putting my resolve to the test.


The book is called The Optimistic Child. It’s written by an eminent psychologist named Martin Seligman. Those of you who took Intro. to Psychology in college may be familiar with him as the man who first described the phenomenon of learned helplessness, which is an insightful and immensely helpful model of depression. As psychological theories go, it’s pretty great.

So when I read that Dr. Seligman had been at the forefront of a school of psychology called Positive Psychology, I was intrigued and wanted to learn more. I looked over the list of books he’d written, read the reviews, and decided to buy The Optimistic Child. I work with kids. I’m the father of a kid. It would be great if they were all optimistic. I want to help them become more optimistic. What could be bad?

Well… the book. The book could be bad.

Because it’s Martin Seligman doing the writing, I have to assume that the studies he used as the basis of the book were impeccably designed and implemented. But some of the techniques he recommends for parents to use with their kids left me scratching my head. I found them way too mechanistic and contrived to be practical. Besides, if I tried getting any of the kids I work with to react to stories featuring characters named “Gloomy Greg” or “Hopeful Heather,” they’d never want to talk to me again.

There’s a distinct possibility that my negative reaction to the book has less to do with it being a bad book (though it is) than with the fact that it so perfectly embodies our current societal fixation on all things evidence-based. This fascination with quantitative research in the realm of human learning, emotions, interactions, and overall development is rooted in well-intended quests for “best practices” for our kids. But in striving to quantify everything, we’ve lost sight of the big picture.

I wasn’t always like this. When I was in graduate school, I was a believer. Why should anyone pay money or place their child’s mental health in the hands of someone who provides untested treatments? If there’s no valid, reliable research on an intervention, how can we know it’s effective? All we have to go on is the word of the practitioner, and their self-interest has to leave them irretrievably biased.

In many respects, I still stand by those concerns. Psychological and educational research has greatly expanded our knowledge and has led to the creation and implementation of all kinds of new tools we can use to help our kids learn and grow. What concerns me now is that quantitative research has become the be-all and end-all in determining what services we’re going to collectively support and which ones we’re not. This wouldn’t be such a bad thing if it weren’t for some of the shortcomings inherent in conducting research in these domains. Here’s an example:

As a requirement for earning my master’s degree, I had a choice between doing a thesis and taking comprehensive exams, which were widely considered to be a joke. When I was in grad school, most of my behavior reflected one of two possible conclusions about my psyche: 1) I wanted to push myself as hard as I could to show myself exactly what my capabilities were, or 2) I kept giving in to a relentless masochistic streak that pushed me in new directions, each one more labor-intensive and torturous than the last. It was probably a combination of the two. At any rate, I did a thesis. It involved recruiting and administering surveys to over 120 subjects, re-learning statistical analysis, teaching myself a computer language to analyze said statistics, and making countless revisions based partly on my own perfectionism and partly on an indecisive and forgetful thesis advisor.

Shortly after I turned it in, I asked that same professor, whom I greatly admired despite his quirks, about another research idea I had. At the time, I had just started working at The Academy of Physical and Social Development, a program that was the seed for what became my current practice, Academy MetroWest. I thought it would make for an interesting study if I could look at outcomes of kids’ participation in the program. My professor agreed that it was a good idea but cautioned that intervention research is really difficult to pull off successfully. You have to develop detailed protocols that dictate exactly how counselors should respond in any number of different circumstances. Then you have to take steps to ensure that the counselors you’re observing actually follow the protocols. If you don’t, there’s no way to prove that what you’re measuring is the intervention you want to measure, rather than just the effectiveness of particular counselors who are all responding to kids in different ways.

I never pursued the idea. The challenge was too daunting. However, that conversation I had with my professor stayed with me and led me to think more skeptically about research in my field. Even if I had written up that protocol and had the authority to get our staff to follow it, would I have been measuring what I claimed to be measuring? I don’t think so.

One of my counseling heroes is Carl Rogers. He was the psychologist behind Client-Centered Therapy, one of the most widely used and accepted models of counseling. Rogers’ ideas went a long way toward describing and mapping out the mechanisms at work within the counseling relationship. In his 1961 classic On Becoming a Person, Rogers writes:

“It has been found that personal change is facilitated when the psychotherapist is what he is, when in the relationship with his client he is genuine and without ‘front,’ openly being the feelings and attitudes which at that moment are flowing in him. We have coined the term ‘congruence’ to try to describe this condition.” 1

With Rogers’ model in mind, it’s difficult to imagine how a counselor who is following a research protocol is being congruent, which Rogers described as one of the most important aspects of counseling. So, my study would have measured something reliably, but I’m not really sure what.

But there’s another problem with this type of research, and it’s really the heart of the matter: some things just don’t lend themselves to being quantified or measured. If I had undertaken that study, I would have examined just how big an impact the services delivered had on the two stated goals of the program – enhancing self-image and social skills. Social skills are comparatively discrete and, to some extent, lend themselves to being quantified and measured. Skill-based programs like Social Thinking are frequently studied because, with some reliability and validity, you can measure the frequency with which program participants use the specific skills being taught.

But what about self-image? There are questionnaires out there that purport to measure self-image. They’ve been through reliability and validity research, and the people conducting those studies claim that they’re a credible way of measuring a person’s sense of self-worth. I just don’t buy it. For one thing, self-image is such an abstract concept that, on its surface, it seems crazy to try to attach numbers to it. Even if you accept the notion that a questionnaire could measure self-image in the moment, it doesn’t account for a phenomenon that I, and many other clinicians, have experienced on more than one occasion. I’ve worked with plenty of kids over the years who left the program without appearing to have internalized anything we were working on. Then, a few years later, I’ll get a visit, a phone call, or an email from that kid saying that he kept having a hard time through his teenage years until, at some point, he remembered something I used to tell him during groups and realized that I was right. He then used that realization to help him get his act together and find success. That type of outcome is no less important than any other, but it’s not likely to show up in any quantitative measure.

It’s this same dynamic that makes me cringe when I think of how obsessed we’ve become with standardized testing as THE way to measure learning and achievement, and when I see the zealotry our society attaches to pursuing anything that claims to be evidence-based. So when you read about some intervention that did not stand up to empirical examination, it’s important to ask whether that finding has more to do with the nature of the intervention or the nature of research itself. Or maybe I’m just being “Gloomy Greg.”

1) Rogers, Carl R. (1961). On Becoming a Person. Boston, MA: Houghton Mifflin Company.