
How you don’t know you

It feels right to start grad school with the suggestion that people know less than they think they do. More specifically, when tested on our own experiences or abilities, we often get things wrong.

That issue came up peripherally during my Wednesday morning lecture. Our main focus was working memory, the memory system that temporarily stores information for use in real time, as when someone tells you her number and you repeat it in your head until you can find your phone and save it.

Working memory is a meaty topic with countless applications. But I found myself fixating on a different point, one raised by an example my professor gave to illustrate how technology might strain working memory. Studying the staff in neonatal intensive care units, researchers found that the senior physicians, junior physicians and nurses varied in their comfort levels with computer technology used on the job. But one thing they all had in common was inaccurately reporting how much they looked to computers for patient information. They said one thing, while the people observing them found a different story. Thus their accounts of their own behavior, or self-reports, did not reflect reality.

There’s long been a debate in the social sciences over the value of self-report. Self-report is used any time researchers ask study participants to provide information about themselves for analysis. The prompts could be anything from “How many units of alcohol do you consume in an average week?” to “How generous a person do you think you are compared to others?” to “Agree or disagree: I experience more conflict within myself than with other people.” In thorough research, a lot of thought goes into developing questions and answer choices that will produce the most accurate results possible. Ideally, those results are also compared with other measures.

It seems neither feasible nor necessary to throw out the method of self-report just because it’s not always reliable. But critical looks at self-report are full of great examples of people being bad at it.

In a paper published this past May, researchers found that subjects used Facebook significantly less than they thought they did. (Though I keep wondering if that’s a fluke. Or else college students are so obsessed with Facebook that they think they’re on it even when they’re not. That’s my highly scientific theory.)

And an influential paper published 36 years ago probed subjects’ errors in figuring out their own motivations, thought processes or reactions in a range of tasks. They reported being moved by literary passages that didn’t actually influence their responses to the text. When presented with nightgowns and pantyhose, they preferred the option placed farthest to the right but insisted their choice had nothing to do with position. Their sleeping patterns changed after they took fake sleeping pills (which highlights how the placebo effect reflects a lack of self-awareness: people think the pill is working, but since the pill is medically inert, that can’t be true). The examples go on.

While these examples don’t come close to capturing all types of self-report — what about, say, explaining major life decisions like picking a career or spouse? — they challenge some of those mundane ways in which we think we know ourselves. I usually feel like I know why I loved a movie or bought a certain pair of pants. But when do I really? And when am I fooling myself?

Then, of course, there’s that big strike against the accuracy of self-report: illusory superiority, or the above-average effect. It’s well documented that people tend to rate themselves as above average in all sorts of domains. Those ratings can’t all match reality: however a group’s abilities are distributed, at most half of its members can outperform the median member. Yet by and large, we claim to trump most of our fellow man at, among other things, driving cars and taking tests, with other interesting patterns thrown in. One set of studies found that students who scored above average on a test were in fact more likely to underrate their performance, whereas the students who did poorly overestimated their scores.
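To make that arithmetic concrete, here’s a minimal sketch in Python (the score distribution and the 80 percent figure are invented for illustration, not drawn from any of the studies): however skill is spread out, at most half of a group can sit above its median, so a survey in which a large majority rates itself above average can’t be describing reality.

    import random
    import statistics

    random.seed(0)
    # Made-up "skill" scores; the distribution and sample size are arbitrary.
    skills = [random.gauss(100, 15) for _ in range(10_000)]

    median = statistics.median(skills)
    share_above = sum(score > median for score in skills) / len(skills)
    print(f"Share actually above the median: {share_above:.1%}")  # can never top 50%

    # The kind of self-report the studies describe (figure invented for illustration):
    share_claiming = 0.80
    print(f"Share claiming to be above average: {share_claiming:.0%}")

Skew the scores however you like; the first number never clears 50 percent, and the gap between that ceiling and what people report is the whole effect.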

The paper reporting those studies argues that these patterns occur because the skills required to evaluate performance on a test are tied to the skills needed to do well on it. For my part, I’m curious whether there might be a bookended range in which people prefer to view themselves: above average, but not so far above average that the claim seems unrealistic. I wonder if we take care to paint ourselves fictions that everyone can find reasonable to believe.

My speculations aside, these findings are concrete reminders of how easy it is to feel like we know things when we don’t — to believe our (maybe just reasonable enough!) fictions. I may never look at pantyhose the same way again.
