
Chapter 02 - Studying Marriage and Families


Scientific Sociology

One of the most remarkable traits Auguste Comte demanded of sociology was a core of scientific rigor. He proposed the concept of Positivism, scientifically based sociological research that uses tools such as surveys, sampling, objective measurement, and cultural and historical analysis to study and understand society. Although the current definition of positivism extends far beyond Comte's original vision, sociological scientific methodology is used by government and industry researchers and across higher education and the private sector. Comte was originally interested in why societies remain the same (social statics) and why societies change (social dynamics). Most sociological research today still falls within these broad categories. Sociologists strive for Objectivity, the ability to study and observe without distortion or bias, especially personal bias. Bias-free research is an ideal; when it is absent, the door opens to serious misinterpretation of research findings.

Sociological science is both different from and similar to the other sciences. It differs from chemistry, biology, and physics in that it does not manipulate the physical environment using established natural science theories and principles. It is similar to them in that statistical principles guide the discovery and confirmation of findings. Yet sociology has no universal social laws that resemble gravity, E=mc², or the speed of light, because chemistry, biology, and physics have the luxury of studying phenomena that are governed by laws of nature. Sociologists study people, groups, communities, and societies, all of which are composed of agents: people who use their agency to make choices based on their varied motivations (Google "Anthony Giddens, human agency, 18 January 1938, British sociologist").

Sociologists Perform Survey Research

Sociologists study people, who choose, decide, succeed, fail, harm others, harm themselves, and behave in rational and irrational ways. I've often explained to my students that if I took an ounce of gasoline and placed a burning match upon it, the gas would have to burn. The gas has no choice, just as the flame has no choice. But if someone placed a burning match on your arm, or the arm of your classmate, you or they might respond in any number of ways. Most would find the experience painful. Some might enjoy it, others might retaliate with violence, and still others might feel an emotional bond to the one who burned them. Sociologists must therefore focus on the subjective definitions and perceptions that people place on their choices and motivations. In fact, sociologists account for human subjectivity very well in their research studies. The most common form of sociological research on the family is survey research.

Surveys are research instruments designed to obtain information from individuals who belong to a larger group, organization, or society. The information gathered is used to describe, explain, and at times predict attitudes, behaviors, aspirations, and intended behaviors. Types of surveys include political polls, opinion surveys, national censuses, paper surveys, verbal interviews, online surveys, and audience voting -- both call-in voting (like American Idol votes) and polls. Large-scale surveys that address family issues, such as the National Survey of Families and Households and the General Social Surveys, are common.

Polls are typically surveys that collect opinions (such as whom one might vote for in an election, how one feels about the outcome of a controversial issue, or how one evaluates a public official or organization). The U.S. Census Bureau (www.census.gov) by Constitutional mandate must count the entire population every 10 years (as was done in the 2010 U.S. Census). A Population is the entire membership of a country, organization, group, or category of people to be surveyed (e.g., U.S. population = 305,000,000). A Sample is some portion of the population, but not all of it (e.g., the U.S. Census Bureau's American Community Survey of 35,000 U.S. citizens; see www.census.gov/acs/www). Surveys can poll a certain category of people on a one-time basis: a Cross-Sectional Survey is a survey given once to a group of people. Surveys can also ask the same people to respond two or more times over an extended period: a Longitudinal Survey is a survey given to the same people more than once, typically over a span of years or decades.

Table 1. Hypothetical Student Body Population at ABC University (10,000 students)

Females = 5,000 / 50%                 Males = 5,000 / 50%
African American = 1,000 / 10%        African American = 1,000 / 10%
Hispanic = 1,000 / 10%                Hispanic = 1,000 / 10%
Asian = 1,000 / 10%                   Asian = 1,000 / 10%
Caucasian = 1,000 / 10%               Caucasian = 1,000 / 10%
Other Races = 1,000 / 10%             Other Races = 1,000 / 10%

 

Look at Table 1 above; we'll use this hypothetical ABC University student body to better understand sampling. One of the most important issues in survey research is ensuring a good scientific sample. A Random Sample is a portion of the population drawn in such a way that every member of the population has an equal chance of being selected for the survey (e.g., the ABC University registrar's office uses computer software to randomly select 1 out of every 10 students for a survey about student opinions in favor of or against getting a football team). A Representative Sample is a sample drawn from the population whose composition closely resembles that of the population. Typically this is obtained via a stratified random sample.

A Stratified Random Sample is a portion of the population drawn in such a way that every member of the population, and of its important sub-categories, has an equal chance of being selected for the survey, yielding a sample that is demographically similar to the population. Using the demographic table above, ABCU would sample 1 out of 10 students, for 1,000 total students. They would also want half of those students to be female and half male, and they would want the racial groups represented in the same proportions as in the student body. The easiest strategy would be for the registrar to program the computer to select only the female students' files, then select only the African American female students' files, choosing 1 out of 10 until 100 are selected. They would repeat this for all other racial groups and then do the same for the males. Ideally, every student would respond to the request to take the survey, and the result would be a 1,000-student sample that was half female and half male, with all five racial groups represented equally (see Table 2 below for an example; a code sketch of this sampling logic follows the table). This is both ideal and hypothetical, but it is typical of the goal sample takers have of a stratified, random, and representative sample, and the closer they get to this ideal, the better the sample.

Table 2. The Hypothetical ABCU Sample Composition: a 1,000-Student Sample Drawn from the 10,000-Student Body (this ideal never happens in the real world)

Group | Total Student Body Numbers/Proportions | Sample Student Body Numbers | Sample Student Body Proportions | Comparison of Population and Sample Proportions
Females | 5,000 / 50% | 500 | 50% | 100% representative
Males | 5,000 / 50% | 500 | 50% | 100% representative
African American | 2,000 / 20% | 100 Females / 100 Males | 10% Females / 10% Males | 100% representative
Hispanic | 2,000 / 20% | 100 Females / 100 Males | 10% Females / 10% Males | 100% representative
Asian | 2,000 / 20% | 100 Females / 100 Males | 10% Females / 10% Males | 100% representative
Caucasian | 2,000 / 20% | 100 Females / 100 Males | 10% Females / 10% Males | 100% representative
Other Races | 2,000 / 20% | 100 Females / 100 Males | 10% Females / 10% Males | 100% representative
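
To make the sampling logic concrete, here is a minimal sketch in Python of how a registrar's office might draw both a simple random sample and a stratified random sample from a student roster. Everything here is hypothetical: the roster, the field names, and the 1-in-10 sampling fraction are my own illustrative assumptions, not any real registrar's system.

```python
import random

# Hypothetical roster of 10,000 students; each record carries a sex and a race,
# mirroring the ABC University example in Table 1 (illustrative values only).
sexes = ["Female", "Male"]
races = ["African American", "Hispanic", "Asian", "Caucasian", "Other Races"]
roster = [{"id": i, "sex": sexes[i % 2], "race": races[(i // 2) % 5]}
          for i in range(10_000)]

# Simple random sample: every student has an equal chance of being selected.
simple_sample = random.sample(roster, k=1_000)

# Stratified random sample: select 1 in 10 *within* each sex-by-race stratum,
# so the sample mirrors the composition of the population.
stratified_sample = []
for sex in sexes:
    for race in races:
        stratum = [s for s in roster if s["sex"] == sex and s["race"] == race]
        stratified_sample.extend(random.sample(stratum, k=len(stratum) // 10))

print(len(simple_sample), len(stratified_sample))  # 1000 1000
```

With perfect response, the stratified draw reproduces Table 2's ideal composition: 100 students in each of the ten sex-by-race cells.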

 

A Convenience Sample is a portion of the population that is NOT scientifically drawn but is collected simply because its members are easy to access (e.g., a group of ABCU students waiting at a bus stop, a group of ABCU students who respond to a radio talk show web poll, or a group of ABCU students who have children and bring them to the campus daycare center). Convenience samples yield weak results. Or as one of my mentors, Dr. Tim Heaton (BYU), once said, "If you start the presentation of your research results with 'We didn't really do good science, but here's what we found . . .' then few will stick around or care about what you found."

It is also important to consider a few other scientific principles when conducting survey research. You need an adequate number of respondents. Sample Size is the number of respondents designated to take the survey (30 is the usual minimum for establishing statistical confidence in the findings). You also have to obtain a relatively high Response Rate, which is the percentage of the original sample who successfully completed the survey. For example, at ABC University, if we set out to survey 1,000 of the student body of 10,000 students but only 200 students took the survey, our response rate would be too low: 200 out of 1,000 students is a 20 percent response rate, whereas 750 out of 1,000 would be a 75 percent response rate. A sample of only 200 would likely not yield enough diversity in responses to give a broad understanding of the entire student body's reaction to the issue.
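
The response rate itself is simple arithmetic. Here is a tiny hypothetical sketch of the calculation using the ABCU numbers above (the function name is my own, not part of any survey package):

```python
def response_rate(completed: int, sample_size: int) -> float:
    """Percentage of the designated sample who completed the survey."""
    return 100 * completed / sample_size

print(response_rate(200, 1_000))  # 20.0 -- likely too low
print(response_rate(750, 1_000))  # 75.0 -- much stronger
```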

With a high enough response rate and a good scientific sample, one can feel comfortable comparing the sample's results to what the entire student body population might have said had they all been surveyed. Generalizability means that the results from the sample can be confidently assumed to apply to the population (as though the population itself had been studied). Also important is the quality of the survey itself as a scientific instrument. Valid Survey Questions are questions that are accurate and measure what they claim to measure (e.g., note the difference between these two questions for the hypothetical football team survey: "Every campus needs a football team" versus "This campus would benefit from a football team." The first lacks validity because it isn't really getting the answer needed for the study; it seeks an opinion about campuses and football teams in general). Reliable Survey Questions are questions that are relatively free from bias and error that might taint the findings. In other words, reliable survey questions are consistent.

Components of Good Surveys

There are two types of survey questions. Open Survey Questions are questions designed to get respondents to answer in their own words (e.g., "What might be the benefits of having a football team? ________________________________" or "What might be a negative consequence of having a football team? ________________________________").

Closed Survey Questions are questions designed to get respondents to choose from a list of responses you provide to them (e.g., "About how many college football games have you ever attended? __1 __2 __3 __4 __5 __6 __7 __8 __9 __10+"). Likert Scale Questions use the most common response scale found in surveys and questionnaires. These questions are statements that respondents are asked to agree or disagree with (e.g., "Our campus would be deeply hurt by a football team"). The respondents choose their answer from the scale below:

1. Strongly disagree 2. Disagree 3. Neither agree nor disagree 4. Agree 5. Strongly agree

Demographic Questions are questions that provide basic categorical information about the respondent, including age, sex, race, education level, marital status, birth date, birth place, income, etc. In order to run statistical analyses on survey results, one must enter the data into Excel, the Statistical Package for the Social Sciences (SPSS), or Statistical Analysis System (SAS) software. Most statistics are run on numbers, so by converting responses into numbers, most results can be analyzed. For example, on the Disagree/Agree scale above, one would use the number 1 to record a response of Strongly Disagree. Words can be analyzed using content analysis software. Content Analysis is the counting and tabulating of words, sentences, and themes from written, audio, video, and other forms of communication. The goal of content analysis is to find common themes among the words. For example, if an open-ended question such as "What might be a negative consequence of having a football team?" were asked, the results would be carefully read and common responses tabulated. When we asked this question of our university students in a random sample, worry about the high expenses required to fund the team and program was one of the most common negative consequences reported.
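
As a hypothetical sketch of what the data-entry and content-analysis steps might look like in code (the response wording, keyword lists, and theme labels below are my own illustrative assumptions), one could convert Likert responses into the 1-5 codes and tally simple themes in open-ended answers like this:

```python
from collections import Counter

# Convert Likert responses into the numeric codes on the Disagree/Agree scale.
LIKERT_CODES = {"Strongly disagree": 1, "Disagree": 2,
                "Neither agree nor disagree": 3, "Agree": 4, "Strongly agree": 5}
responses = ["Agree", "Strongly disagree", "Agree", "Neither agree nor disagree"]
coded = [LIKERT_CODES[r] for r in responses]  # [4, 1, 4, 3]

# Very simple content analysis: count how many open-ended answers touch a theme.
open_answers = [
    "Too expensive, student fees would go up",
    "The cost of a stadium worries me",
    "Parking would get even worse",
]
theme_keywords = {"cost": ["expensive", "cost", "fees", "funding"],
                  "parking": ["parking"]}
theme_counts = Counter()
for answer in open_answers:
    text = answer.lower()
    for theme, words in theme_keywords.items():
        if any(word in text for word in words):
            theme_counts[theme] += 1

print(coded, dict(theme_counts))  # [4, 1, 4, 3] {'cost': 2, 'parking': 1}
```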

A few specific types of data can be analyzed using statistical measures. Nominal Data are data that have no standard numerical values; these are often referred to as categorical data (e.g., "What is your favorite type of pet? __Reptile __Canine __Feline __Bird __Other"). There is no numerical value associated with Reptile that makes it more or less valuable than a Canine or Other type of pet. Other examples include favorite color, street address, the town you grew up in, or favorite ice-cream flavor. Ordinal Data are rank-ordered data that have standard numerical values; these are often referred to as numerical data (e.g., "How many movies have you seen in the last two weeks? __0 __1 __2 __3 __4 __5"). Ordinal data carry the assumption that seeing two movies took twice as much effort as seeing just one movie, and seeing four movies was twice the effort of seeing just two; the values are equally weighted. The same could be said about how many A's you earned last semester, how much you get paid per hour at work, or how many cars your family drives -- they are numerical values that can be compared and contrasted. Ratio Data are data that are shown in comparison to other data. For example, the Sex Ratio is the number of males per 100 females in a society. The sex ratio in the U.S. was reported as follows on 5 February 2009: Alaska 107/100; U.S. total 97.1/100; Rhode Island 93.6/100 (these were 2006 estimates from http://1.usa.gov/dRd8l4). Ratios provide comparative information: in 2006 Alaska had more males than females -- 7 extra per 100 females -- while Rhode Island had nearly 7 fewer males per 100 females.
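
Because the sex ratio is just the number of males per 100 females, the calculation is a one-liner. Here is a small sketch; the counts are made-up round numbers chosen only to illustrate the arithmetic, not real census figures:

```python
def sex_ratio(males: int, females: int) -> float:
    """Number of males per 100 females."""
    return 100 * males / females

# Made-up illustrative counts, not real census data.
print(sex_ratio(107_000, 100_000))  # 107.0 -> more males than females
print(sex_ratio(93_600, 100_000))   # 93.6  -> fewer males than females
```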

All of the football-team-related questions above are considered variables. Variables are survey questions that measure some characteristic of the population. For example, if married students were more financially strapped than single students, one might find them more or less supportive of a football team based on their perception of how adding one might hinder or support their personal needs; marital status, used when comparing the findings of the survey, becomes a variable in its own right. Two types of variables are measured: dependent and independent. Dependent Variables are survey variables that change in response to the influence of independent variables; here, the dependent variable would be support for or opposition to a football team. Independent Variables are survey variables that, when manipulated, will stimulate a change in the dependent variables (e.g., by considering the marital status of responding students, one might find differing support for or opposition to an ABCU football team).

The most basic statistics performed on data are the measures of central tendency: the Mean, Median, and Mode. Consider this list of numbers, which represents the number of movies 9 separate ABCU students had seen in the last two weeks:

0
1
1
1
3
4
4
5
8

Mean is the arithmetic average: the sum of all the numbers divided by the total number of students (e.g., 27 ÷ 9 = 3). Median is the exact midpoint value in the ranked list of scores (e.g., 0, 1, 1, and 1 fall below the number 3, and 4, 4, 5, and 8 fall above it, so 3 is the median). Mode is the number that occurs most often in a list of numbers (e.g., 1 occurs the most, so the mode is 1). An Extreme Value is an especially low or high number in the series (e.g., 8 movies in 2 weeks takes an inordinate amount of time for an average student). Notice that if you removed the 9th student's score and averaged only the remaining scores, the mean would be 2.375; extreme values can throw the mean way off. If you'd like to learn more about survey research, take a research methods class; chances are you will enjoy taking on the role of statistical detective. Below, after a short code sketch of these measures, is an overview of simple questions to ask when judging whether you are building a good survey.
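
Here is a minimal sketch, using Python's standard statistics module, of the calculations just described for the nine movie counts, including what happens to the mean when the extreme value is dropped:

```python
import statistics

movies = [0, 1, 1, 1, 3, 4, 4, 5, 8]   # movies seen by the 9 ABCU students

print(statistics.mean(movies))    # 3     (27 divided by 9)
print(statistics.median(movies))  # 3     (midpoint of the ranked list)
print(statistics.mode(movies))    # 1     (most frequent value)

# Drop the extreme value (8) and the mean falls noticeably.
print(statistics.mean(movies[:-1]))  # 2.375
```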

  1. What do you want to accomplish in this survey?
  2. Who will your survey serve?
  3. Who is the target audience for the survey?
  4. How will the survey be designed?
  5. How will you obtain a sample for the survey?
  6. How will the survey be administered?
  7. How big should your response rate be to give your results credibility?
  8. How will the data be analyzed?
  9. How will the results be presented?
  10. Are humans or animals going to be at risk of harm in the survey?

Components of a good survey include: a clear purpose for conducting the survey; a clear understanding of the desired outcomes; good research supporting the development and design of the survey; an appropriate sampling technique; reliability and validity in the survey, its questions, and its design; and a clear, accurate presentation of the findings that is appropriate for the type of survey used.

Can you figure out what might be wrong with these survey questions?

1.    Have you ever attended a college football or basketball game? __Yes __No

2.    Are you in favor of spending all ABCU's money on an expensive football program?  __Yes __No

3.    Are you not opposed to supporting a football program? __Yes __No

4.    I think ABCU's administration pays too much attention to community service.

1 Strongly Disagree 2 Disagree 3 Don't know 4 Agree 5 Strongly Agree

5.    It would be fiducially incompetent to initiate the cost-to-benefit ratio projections for a football team.

1. Strongly disagree  2. Disagree  3. Don't know  4. Agree  5. Strongly agree

Here are the answers, or the problems with the above questions:
  1. Double-barreled question: asks two questions in one and you can't clearly answer.
  2. Biased question: uses emotionally laden language which might change the response.
  3. Double negative: creates confusion.
  4. Irrelevant question for the survey about student interest in a football team.
  5. Too many technical words that the average person would not understand: creates confusion.

And here are better versions of the same questions:

1.    Have you ever attended a college football game? __Yes __No
2.    Have you ever attended a college basketball game? __Yes __No
3.    Are you in favor of ABCU spending student fees on a football program?  __Yes __No
4.    Are you in favor of a football program? __Yes __No
5.    I think ABCU's administration should hold forums with students about the issue of a future football program.
1. Strongly disagree  2. Disagree  3. Don't know  4. Agree  5. Strongly agree
6.    I am concerned about a new football program being too expensive.
1. Strongly disagree  2. Disagree  3. Don't know  4. Agree  5. Strongly agree

Which response categories are useful for which survey question? It depends on the question! (The numbers shown below are the codes you would assign to each response option when entering the data.)

1 = Yes  0 = No
4 = Excellent  3 = Good  2 = Fair  1 = Poor
5 = Very likely  4 = Somewhat likely  3 = No preference  2 = Unlikely  1 = Very unlikely
0 = Never  1 = Seldom  2 = Often  3 = Regularly
1 = Strongly disagree  2 = Disagree  3 = Don't know  4 = Agree  5 = Strongly agree
1 = Strongly disapprove  2 = Disapprove  3 = Don't know  4 = Approve  5 = Strongly approve
3 = Better  2 = About the same  1 = Worse

When doing sociological research, it helps if you understand the SMART Paradigm:

Samples

Methods

Attitude of skepticism

Researcher bias

Thorough understanding of literature

Samples have to be random and representative. If not, the results are fairly worthless. One of my graduate school professors explained that if you start a report with, “We didn't really do good scientific sampling, but here's what we found . . .” most people won't care about your findings because they know your science was weak. I compare it to this hypothetical incident. Your car is broken down late at night in a dangerous part of town. A passerby stops to help and says, “I don't know how to fix cars, but I'll go ask those people hanging out at the bus stop.”  He returns 10 minutes later and explains that three of the people there once had their cars break down and every time it was their spark plugs, so he recommends you change your spark plugs. Believe me, I know this is a cheesy example, but it conveys the point. Asking three people at a bus stop is a convenience sample of people (not even of mechanics). True, it does look and feel like a survey, but it is a terrible sample.

I watch this all the time on TV news stories where a few people on the street give their opinions, on internet polls where people who visit certain websites give their opinions, and on radio talk shows where votes from those who are selected to comment on the air are treated as though they somehow represent all people everywhere. Smart people always check the sample for representativeness and random selection.

Methods typically include experiments, participant observations, non-participant observations, surveys, and secondary analysis.

Experiments are studies in which researchers can observe phenomena while holding other variables constant or controlling them. In experiments, the experimental group gets the treatment and the control group does not get the treatment.

Even though sociologists rarely perform experimental studies, it is important to understand the rigor required to execute this type of research. In this example, let's assume that researchers are testing the effect of a drug called XYZ, which may help to prevent outbreaks of herpes among herpes sufferers. But how can you discern whether improvement was the result of the medicine or simply a result of the patient being in the study? We'd need some form of control. In the diagram below you can see how scientists might administer an experimental study if they took 300 patients and randomly assigned them to three groups: Group A, which received an inert gum-only control; Group B, which received the gum-and-sugar control (yes, sometimes two control groups are needed); and Group C, which received the experimental XYZ-laced gum.

Figure 1.  Experimental Research Design

© 2005 Ron J. Hammond, Ph.D.
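
Before looking at the hypothetical results, here is a minimal sketch of how the random assignment of 300 patients to the three groups might be done in code. This is my own illustration of the general idea, not the actual procedure of any real trial:

```python
import random

patients = list(range(1, 301))   # 300 hypothetical patient IDs
random.shuffle(patients)         # randomize the order so assignment is by chance

groups = {
    "Group A: gum-only control":      patients[0:100],
    "Group B: gum-and-sugar control": patients[100:200],
    "Group C: experimental XYZ gum":  patients[200:300],
}
for name, members in groups.items():
    print(name, len(members))    # each group receives 100 randomly assigned patients
```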

Let's assume that the patients chewed their respective chewing gums for 11 months, after which the medical results were gathered. Look at the diagram below to see a set of hypothetical results. Group A, the gum-only control group, showed a 5 percent improvement. Group B, the gum-and-sugar control group, showed a 7 percent improvement. Group C, the experimental/treatment group, showed a whopping 27 percent improvement. Now, one study like this does not an FDA-approved drug make, but the results are promising. Interestingly, this is a pharmaceutical, medical study -- not a sociology study. Almost all experiments are very tightly controlled, and many take place in laboratories or under professional clinical supervision, but sociologists rarely study in laboratories. Scientists who do perform experiments can make causal conclusions. In order to establish cause, three criteria must be met: a correlation, time ordering (one variable preceded the other), and no spurious correlations. Below I explain why these criteria are not fully met in the case of education and crime. Causation means that a change in one variable leads to, or causes, a change in another variable (e.g., XYZ chewing gum causes fewer herpes outbreaks).

Figure 2.  Example of a Drug-Related Experimental Research Design

© 2005 Ron J. Hammond, Ph.D.

Sociologists do perform studies that allow for correlational research conclusions. There are three types of correlations: a Direct Correlation means that the variables change in the same direction (e.g., the more education you have, the more money you make); an Inverse Correlation means that the variables change in opposite directions (e.g., the more education you have, the less criminal activity you get caught doing); and a Spurious Correlation is an apparent relationship between two variables that actually reflects their relationship to a third variable rather than to each other (e.g., the more education you have, the higher your family's standard of living, and the lower your likelihood of participating in criminal activities). In other words, other correlated factors are simultaneously at play that influence criminal behavior.
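
To see the three types side by side, here is a sketch using simulated, made-up numbers (nothing here is real data; the coefficients and noise levels are arbitrary choices) showing how two variables that each depend on a third can appear related to one another:

```python
import random
from statistics import correlation  # Pearson's r, available in Python 3.10+

random.seed(1)
n = 500
education = [random.gauss(14, 3) for _ in range(n)]                # years of schooling
income = [2_000 * e + random.gauss(0, 8_000) for e in education]   # rises with education
crime = [50 - 2 * e + random.gauss(0, 10) for e in education]      # falls with education
living_standard = [3 * e + random.gauss(0, 5) for e in education]  # rises with education

print(round(correlation(education, income), 2))   # positive: direct correlation
print(round(correlation(education, crime), 2))    # negative: inverse correlation
# Standard of living and crime were each generated only from education, yet they
# still correlate with each other -- an apparent (spurious) relationship.
print(round(correlation(living_standard, crime), 2))
```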

Sometimes sociologists perform Field Experiments, which are studies that use an experimental design but are carried out in everyday settings and non-laboratory environments. For example, a sociologist might manipulate the level of lighting to study how factory work performance is affected (Google "Hawthorne Effect"). Sociologists also use a few other methods. Participant Observation is a research method in which the researcher participates in activities and more or less assumes membership in the group he or she studies. Content Analysis occurs when the researcher systematically and quantitatively describes the contents of some form of media. Secondary Analysis is the analysis of data that have already been gathered by others. Family research studies tend to be survey studies, clinical observations, participant observations, secondary analyses of existing data, or qualitative interviews of family members.

One of the largest social surveys taken in the United States has been the General Social Surveys, collected almost every year since 1972. These surveys have provided 27 national samples with over 50,000 survey takers and thousands of variables as of 2008 (see http://en.wikipedia.org/wiki/General_Social_Survey; retrieved 5 February 2010). These large volumes of data and variables allow researchers to study the family on a scale that most could never attain if left to fund and collect the data for themselves. I recently published an article about the financial plight of elderly widowed women in the U.S. The married women had much greater financial resources than the unmarried women, and in general, women had fewer resources than men (see Hammond et al., 2008, "Resource Variations and Marital Status among Later-Life Elderly," JACS Vol. 2(1), pp. 47-60). By the way, my four co-authors on that article were senior students in our department here at UVU.

Researchers all over the world are using surveys to study the family. In Great Britain, the Family Resources Survey began in 1992 and has provided much-needed insight into the needs and functioning of British families (go to http://www.natcen.ac.uk for family research studies online). In China, a U.S. team of researchers carried out the China Health and Nutrition Survey (retrieved 5 February 2009 from http://en.wikipedia.org/wiki/China_Health_and_Nutrition_Survey), collecting numerous family and health data for study. In Iraq, a family health survey was conducted by the World Health Organization and Iraqi officials, in which over 9,000 households were surveyed (see http://en.wikipedia.org/wiki/Iraq_Family_Health_Survey). The focus there was on the toll that the ongoing war had taken on families and social networks.

Clinical observation studies typically take place in counseling, medical, or residential treatment settings, or in community centers. Perhaps two of the most prominent clinical researchers of the family have been Doctors Judith Wallerstein and John Gottman. Dr. Wallerstein studied children of divorce over the course of 25 years and made a thorough study of the impact divorce has had on these children and on their adult marriages and life experiences (see her research-based books: The Good Marriage (1995); Second Chances (1996); Surviving the Breakup (1996); and The Unexpected Legacy of Divorce (2000)).

Dr. John Gottman studied couples in depth by videotaping them in clinically controlled apartments, or "love labs," where he observed their daily interaction patterns and carefully analyzed the footage. His research led to the "Four Horsemen of Divorce," a classification of four markers of deeply troubled marriages: Defensiveness, Stonewalling, Criticism, and Contempt (see his research-based books: The Relationship Cure (2002); Why Marriages Succeed or Fail (1995); Seven Principles (2007); and Ten Lessons to Transform Your Marriage (2007)).

Participant observations are much less common than surveys and clinical studies. They basically are studies in which the researcher lives in, belongs to, or participates in the social familial experience being studied. I read of one researcher who sat on a chair in the home of parents of newly adopted children with disabilities and watched the parents make the adjustments of incorporating the new family member into the family system. This and similar studies tend to take many hours and yield lots of information about a very narrow and specific research question.

The National Survey of Families and Households was conducted in the early 1990s and interviewed 13,000+ families in depth for survey information (Google “Bumpass and Sweet NSFH”). This massive data set now exists in electronic form and can be analyzed by anyone seeking to look at specific research questions that pertain to many different aspects of the family experience in the U.S. at that time. When a researcher analyzes existing data, it is called Secondary Analysis. This would apply to a research project examining any of the above-mentioned surveys, the U.S. Census, or even the Population Reference Bureau's world data available free at www.prb.org.

Finally, family members can be interviewed through in-depth qualitative interviews designed to capture the nuances of their experiences. This is what Dr. Judith Wallerstein did when she wrote the book The Good Marriage (1995). She carefully interviewed 50 happily married couples who were considered by those around them to have a really good marriage. Her work was published in an era of family research that was flooded with studies about divorce and family dysfunction. The Good Marriage began, in my estimation, a turn of events that made it more acceptable to study the positive functioning and side of family experiences in the U.S.

Just for fun I've added an interesting survey my students and I developed to study dating patterns here at UVU in 2006. Some of my students were interested in why we are drawn to those we date and which factors lead us toward staying together or breaking up.