Data Collection as Social Exchange

The process of collecting data is often a process of social exchange. This means that conveying information from one person to another is done in exchange for desired goods or services (like money, future patient referrals, or authorship on articles) or some sentiment (like pride, loyalty, duty, or altruism). The social exchanges between a doctor and a patient in a hospital, or an interviewer and a person in her home, can be quite different from those involved in acquiring access to particular computer databases or hospital record systems. But they are still social exchanges: people with different backgrounds, interests, and motivations are coming together to communicate across these differences. It is easy but perilous for health scientists, particularly epidemiologists, to forget this.

The social exchanges between interviewers and their respondents resemble familiar everyday social interactions in some respects. Some people participate in surveys because they feel they are obliged to. This motivation is rapidly disappearing, given informed consent requirements and an overload of market surveys: residents of the United States appear increasingly unwilling to participate in person-to-person interviews of any sort (Atrostic et al. 1999). Others participate in interviews because they hope to get something from them: the pleasure of having someone really listen to their opinions, the sense that they are helping others learn or benefit from their knowledge or condition, a way to fill an otherwise dull day in a hospital or a waiting room. No matter what their original motivation, when they participate in a survey, respondents react to the words and sentiments of the interviewer. They may want to obscure or minimize characteristics or practices deemed sensitive, such as household income or personal hygiene; others, such as sexual practices or consumption of illicit substances, may be minimized in some contexts and exaggerated in others. Respondents may also overemphasize qualities or practices that are valued, like church attendance, time spent with family, or healthy behaviors (Ross and Mirowsky 1984). At least where interviewing is a familiar form of data collection, much of what takes place in an interview involves the respondent's attempts to figure out what the "right answer" would be and to calibrate how to make the interviewer see them in a particular (positive) light.

Table 4.1. Comparison: Knowledge/Attitude/Practice (KAP) survey versus 24-hour recall versus observation among 247 families in Bangladesh. Cells show the number of discordant assessments / the number of comparisons between methods, with the percent discordant in parentheses.

Behavior                                  Survey vs. Observation   24-hour Recall vs. Observation
Feces taken out of the home               21/58 (36%)              20/58 (34%)
Caretaker washes after defecating         37/95 (39%)              14/98 (14%)
Caretaker washes after touching feces     17/67 (25%)              14/60 (23%)

Source: Extracted from Tables 2 and 3 of Stanton et al. 1987:220.

A. Are Interviews the Best Way to Measure Sensitive Behaviors?

Asking about Hygiene Behaviors in Bangladesh

A study in rural Bangladesh was undertaken by an interdisciplinary group including a physician, epidemiologist, anthropologist, and statistician (Stanton et al. 1987). It was designed to compare the accuracy of observations, interviews, and 24-hour diaries as techniques to measure the presence of sensitive hygiene practices so that the appropriate (most effective) method could be chosen. Mothers' hygiene behaviors were observed and coded by researchers, and those same behaviors were assessed by personal interviews and recorded by mothers in diaries. Then specific behaviors were compared across methods. Table 4.1 summarizes some of the results.

The denominators in the extracted results show the total number of instances where two methods gave information about the same behavior, and the numerators show the number of pairs for which the two methods gave different (discordant) results. Thus the first row, first column shows that for the behavior "feces taken out of the home" there were 58 instances where data about this behavior could be compared between interview and observation of mothers, and in 21 of those comparisons the interview gave one assessment of the behavior while observation gave a different one. Thirty-six percent of the comparisons were discordant. A discordant result is one where a mother was observed to take feces out of the home but did not report this at interview, or was observed not to take feces out of the home but reported doing so at interview. These hygiene behaviors seem fairly difficult to record with any accuracy in an interview survey or a guided recall of the past 24 hours, which suggests that they can be measured accurately only by observation. But note that the relative accuracy of the methods varies across behaviors: for "feces taken out of the home" the survey and the 24-hour recall were each inaccurate about one-third of the time when compared with observation, and each was inaccurate about one-quarter of the time for "caretaker washes after touching feces." But for "caretaker washes after defecating" the survey was inaccurate in almost 40% of comparisons, whereas the 24-hour recall was inaccurate in only 14%. So under some circumstances 24-hour recall might be an efficient alternative to observation for this last category of behavior.
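To make the arithmetic concrete, the short Python sketch below (my illustration, not part of the original study; the counts are simply those in Table 4.1) recomputes the discordance percentages discussed above.

    # Counts from Table 4.1 (Stanton et al. 1987): for each behavior,
    # (number of discordant assessments, number of comparisons) between
    # an interview-based method and direct observation.
    table_4_1 = {
        "Feces taken out of the home": {
            "KAP survey": (21, 58),
            "24-hour recall": (20, 58),
        },
        "Caretaker washes after defecating": {
            "KAP survey": (37, 95),
            "24-hour recall": (14, 98),
        },
        "Caretaker washes after touching feces": {
            "KAP survey": (17, 67),
            "24-hour recall": (14, 60),
        },
    }

    for behavior, methods in table_4_1.items():
        for method, (discordant, total) in methods.items():
            percent = 100 * discordant / total  # percent of comparisons that disagree
            print(f"{behavior} ({method} vs. observation): "
                  f"{discordant}/{total} = {percent:.0f}% discordant")

Run as written, this reproduces the figures quoted above: roughly one-third discordance for removing feces from the home, one-quarter for washing after touching feces, and 39% (survey) versus 14% (24-hour recall) for washing after defecating.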

To try to reduce the measurement error caused by changes in the way a question is asked or the context in which it is asked, researchers use a standardized interview format. But this type of standardization can interfere with the quality of the data collected. Two anthropologists analyzed a series of survey interviews videotaped for research purposes by the General Social Survey and the National Health Interview Survey researchers. They wrote that the interview is "a standardized procedure that relies on, but also suppresses, crucial elements of ordinary conversation" (Suchman and Jordan 1990). The presence of an interviewer and a respondent in a survey interview implies that a conversation between two people will take place, but many of the components of ordinary conversation are not allowed to occur at all. For example, the survey format gives the interviewer full control over turn-taking and topic, it standardizes the presentation and content of questions by prohibiting the interviewer from rewording them, it places specific limits on the forms answers can take, and it limits the interviewer's ability either to detect or to repair respondent misunderstandings. These analysts argue that the strategy of standardization in interviews "mistakes sameness of words for stability of meaning" (1990:233). By this they mean that the many strategies used to standardize interviews can create bored and impatient respondents who censor their responses or fail to make themselves understood. They suggest that research designers rethink how interviewer and respondent work together during an interview, possibly by making the interview visually available to both parties, and certainly by encouraging interviewers to discuss meanings and to clarify their questions with respondents. One commentator adds that it might make sense for interviewers and respondents to complete a standardized interview or questionnaire following a lengthy conversation between them.

We face a paradox here: significant amounts of research time are spent trying to figure out how to increase response rates to interview requests. For example, the following questions have received research attention in the United States in just the past few years: Do response rates go up if interviewers are the same sex and ethnicity as respondents? [Yes.] Do mail response rates go up if people are sent a token amount of money along with a questionnaire? [Yes, by as much as 15%.] Do they go up if an informational pamphlet about the study is sent to them along with the questionnaire? [No, not at all.] Does quality of communication with study participants influence their willingness to stay in a long-term follow-up study? [Yes, a great deal.] Does requesting biological specimens reduce people's willingness to participate? [Only a little.] Do people generally feel positive about participating in epidemiologic studies? [Yes, they feel they are adding to human knowledge and helping to prevent disease, although some have qualms about providing personal information.] Yet despite these efforts to increase respondent participation and to understand the motivations and causes of nonparticipation, the actual process of collecting the data still poses significant impediments to fluid and easy communication. This is one argument for using more ethnographic techniques to accompany standard interview practices.

B. Sensitivity of Topic and of Interviewer Influence on Respondent Accuracy: The World Fertility Survey in Nepal

Two anthropologists wanted to understand whether women in rural Nepal responded differently to sensitive questions from an outside interviewer working for the World Fertility Survey than to sensitive questions from an ethnographer who had lived in their midst for a year (Stone and Campbell 1984). They found that women did respond differently. The ethnographers concluded that the World Fertility Survey was "fully or partly unintelligible" to 80% of the respondents. For example, women interpreted a question about whether they had heard of abortion as a question about whether they themselves had had an abortion; they interpreted a question about whether they knew where to go to get family planning services as asking whether they themselves went there to get those services. The ethnographers concluded that women knew far more about family planning services than the survey suggested.

But not all information on the World Fertility Survey was equally sensitive, and a portion of the sensitive questions still yielded accurate answers: although contraception was sensitive and private and yielded inaccurate data, information about deaths of children and fertility history was sensitive but not private and yielded accurate information. This is attributable partly to the context of the World Fertility Survey interviews themselves: rather than being undertaken in a private area, both interviewer and respondent were surrounded by curious onlookers, and seemingly private events were held up to public scrutiny. It seems that the World Fertility Survey designers did not imagine that this social context would surround data collection, or they did not think it would be a hindrance. They could have done the work required to understand what the social context of the interview would be like. They also could have done the fieldwork required to understand which topics were likely to be sensitive and to yield inaccurate information. Their failure to do so resulted in statistically reasonable but invalid results.

It isn't only survey respondents who try to influence how data will appear: researchers also participate in social exchanges that influence how they use their data, and they too have beliefs about what constitutes a "correct" answer. Scientists developed the "double blind" strategy (which masks study-group assignment from both participants and the research team) because they learned that researchers also make both subtle and crude attempts to influence study outcomes (see Day and Altman 2000). The range of researcher influence runs from subtle and unconscious changes in question content, or searching some hospital records more carefully than others, to outright faking of lab reports and painting false colors on mouse fur. And it extends to data analysis as well. One story I have heard in various versions concerns a statistician who wanted to show his collaborators the force of their prior expectations. He showed them graphs displaying the results they expected and asked them to explain why those results had occurred. They readily did so. Then he confessed that he had mislabeled the graphs: the data actually showed results opposite to their expectations. At this point a few of his collaborators refused to accept the results, even though they had endorsed the methods when those results appeared to agree with their expectations.

Data collectors in the field also influence data quality and accuracy. Here, too, the concept of data collection as social exchange can help explain systematic errors. Another bias identified by epidemiologists and other designers of surveys is called "interviewer bias," where specific types of interviewers differentially question and probe different types of respondents. A related bias in such exchanges is "reporting bias," where respondents are differentially willing to reveal sensitive information about themselves to different types of interviewers. For example, male interviewers produce less accurate information from respondents (both male and female) than do female interviewers, and "White" interviewers in the United States produce less accurate information about "non-Whites." The ideal interviewer for most household surveys in the United States seems to be a middle-aged woman matched to respondents by ethnicity and primary language.

Sometimes data collectors seek to influence outcomes; other times they seek to maximize income while minimizing work. Survey researchers have to build in various cross-check and validation procedures to make sure that interviewers are not sitting under a tree or in a coffee shop making up data themselves. Though the following quote concerns surveys in developing countries, it is also potentially relevant for interviews in the United States:

We have devised the following descriptive definition: A rural Third World Survey is the careful collection, tabulation, and analysis of wild guesses, half-truths, and outright lies meticulously recorded by gullible outsiders during interviews with suspicious, intimidated, but outwardly compliant villagers. (Chen and Murray 1976:241)

Examples of social exchange to this point have concerned primarily exchanges between individuals rather than organizations. But the theme of exchange is also relevant to organizations and teams. It takes another form when researchers have to change their research designs to gain access to particular types of patients or types of research environments. This is especially true for epidemiological and social science researchers who do not provide patient care, since they must somehow obtain access to records or human respondents. Almost any social science researcher who has had to obtain permission to work with a group of patients can tell stories about how the study design changed in the process of negotiating or maintaining that access. For example, DiGiacomo's efforts to participate in an epidemiological study of diagnostic delays in cancer were repeatedly frustrated by her colleagues (1999:438). Timmermans (1995) was forced to stop his observational study of resuscitation efforts in a hospital emergency room eight months early, and he was also compelled to invest considerable time in forming political alliances, exploring legal options, and negotiating with the hospital's Institutional Review Board (IRB). The hospital IRB promoted a particular vision of quality scientific research that did not respect qualitative data, and it protected the reputation of its parent institution, and of the medical profession more generally, in the face of what it interpreted as unfounded criticism. And Casper's study of fetal surgery (1997) was shut down early when surgeons uncomfortable with her politics refused her further access to their patients. These are just a few examples of the ways that organizations influence the methods and findings of both staff and guest researchers.
