Social Research Methods/Indexes, Scales, Typologies
Indices, Scales and Typologies
Quantitative data analysis often requires the construction of two types of composite measures of variables: indices and scales. These measures are frequently used and important because social scientists often study variables that have no single clear and unambiguous indicator (unlike, say, age or gender). First, researchers often focus on the attitudes and orientations of a group of people, which require several items to provide an adequate indication of the variable. Second, researchers seek to establish ordinal categories from very low to very high (or vice versa), which single data items cannot ensure but an index or scale can.
Although they exhibit differences (which will be discussed later), indices and scales share several features.
Both:
- are ordinal measures of variables
- can order the units of analysis in terms of specific variables
- are composite measures of variables (measurements based on more than one data item)
Indices are the sum of a series of individual yes/no items, combined into a single numeric score.
They are usually a measure of the quantity of some social phenomenon and are constructed at a ratio level of measurement. More sophisticated indices weight individual items according to their importance in the concept being measured (e.g. a multiple-choice test where different questions are worth different numbers of points). Some interval-level indices are not weighted counts but instead contain other indices or scales within them (e.g. college admissions scoring that rates an applicant on GPA, SAT scores, and essays, awarding a different number of points from each source).
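To make the arithmetic concrete, here is a minimal sketch in Python of an unweighted and a weighted additive index; the item names and weights are hypothetical, not taken from any particular instrument.

```python
# Minimal sketch of additive index construction (hypothetical items/weights).
# An unweighted index simply counts "yes" answers; a weighted index
# multiplies each item by its assigned importance before summing.

responses = {"item_a": 1, "item_b": 0, "item_c": 1}  # 1 = yes, 0 = no

# Unweighted index: a simple count of "yes" answers.
unweighted_score = sum(responses.values())

# Weighted index: items judged more central to the concept count for more.
weights = {"item_a": 2.0, "item_b": 1.0, "item_c": 1.5}
weighted_score = sum(weights[k] * v for k, v in responses.items())

print(unweighted_score)  # 2
print(weighted_score)    # 3.5
```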
Index Construction
- Item Selection
- Face validity (or logical validity)
- Unidimensional--a composite measure should represent only one dimension of a concept.
- General or Specific--the nature of the items you include will determine how specifically or generally the variable is measured.
- Variance--to guarantee variance you can (a) select several items whose responses divide people about equally in terms of the variable, or (b) select items differing in variance
- Examination of Empirical Relationships
- Bivariate relationship: a relationship between two variables.
- Multivariate relationship: a relationship among more than two variables.
- Index Scoring
- Determine the desirable range of index scores
- Determine whether to give each item in the index equal or different weights
- Make sure to standardize the weights--items should be weighted equally unless there are reasons not to.
- Handling Missing Data
- You may decide to exclude them from the construction of the index and analyses
- Treat missing data as one of the available responses
- Analyze missing data and interpret their meaning
- Assign missing data a middle value or mean value
- Assign scores proportionately, based on the items that were answered (both this and the middle-value option are sketched below).
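As a small illustration, here is a sketch of two of the options above, middle-value assignment and proportional scoring, applied to hypothetical yes/no responses with one missing item.

```python
# Sketch of handling a missing item when scoring an index
# (hypothetical responses; None marks a missing answer).

responses = [1, 0, None, 1, 1]
answered = [r for r in responses if r is not None]

# Option 1: assign the missing item a middle value (0.5 on a 0-1 item).
middle = 0.5
middle_score = sum(r if r is not None else middle for r in responses)

# Option 2: score proportionately, rescaling the answered items
# to the full length of the index.
proportional_score = sum(answered) * len(responses) / len(answered)

print(middle_score)        # 3.5
print(proportional_score)  # 3.75
```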
- Index Validation
- Item analysis
- an assessment of whether each of the items included in the measure makes an independent contribution or merely duplicates the contribution of other items in the measure
- External validation
- tests the index's validity by examining its relationship to other presumed indicators of the same variable.
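One common form of item analysis is the item-rest correlation: each item is correlated with the sum of the remaining items, and an item that barely correlates may be contributing little to the composite. The sketch below uses hypothetical 0/1 data and only the standard library.

```python
# Sketch of item analysis via item-rest correlations (hypothetical data).

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Rows are respondents, columns are index items (0/1 answers).
data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]

for i in range(len(data[0])):
    item = [row[i] for row in data]
    rest = [sum(row) - row[i] for row in data]  # score on all other items
    print(f"item {i}: item-rest r = {pearson(item, rest):.2f}")
```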
Scales
- In order to discuss scales, we must define them: a scale is a measure of the intensity of an attitude or emotion. Scales are usually constructed at the ordinal level of measurement, which orders items to determine degrees of favor or disfavor but does not give any meaning to the distance between degrees.
- The Likert scale is one of the most commonly used scales in the research community. The scale assigns a numerical value to the intensity (or neutrality) of emotion about a specific topic, then attempts to standardize these response categories to provide an interpretation of the relative intensity of items on the scale. Responses such as “strongly agree,” “moderately agree,” “moderately disagree,” and “strongly disagree” would likely be found in a Likert scale, or in a survey based upon the scale.
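As a small illustration, here is a sketch of how Likert-type responses are commonly scored: each category is mapped to a number, and a respondent's item scores are summed. The particular 1-4 coding below is one common convention, not the only one.

```python
# Sketch of summated Likert scoring (one common coding convention).

likert = {
    "strongly disagree": 1,
    "moderately disagree": 2,
    "moderately agree": 3,
    "strongly agree": 4,
}

answers = ["strongly agree", "moderately agree", "moderately disagree"]
score = sum(likert[a] for a in answers)
print(score)  # 9, out of a possible 3-12 range for three items
```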
- The semantic differential scale is similar to Likert scaling; however, rather than allowing varying degrees of response, it asks the respondent to rate something in terms of two completely opposite adjectives.
- An example of a scale used in real-life situations is the Bogardus Social Distance Scale. This scale, developed by Emory Bogardus, is used to determine people’s willingness to associate and socialize with people who are unlike themselves, including those of other races, religions, and classes.
- Thurstone scaling is quite unlike Bogardus or Likert scaling. Developed by Louis Thurstone, this scale is a format that seeks to use respondents both to answer survey questions and to determine the importance of the questions. One group of respondents, a group of “judges,” assigns various weights to different variables, while another group actually answers the questions on the survey.
- Guttman scaling, developed by Louis Guttman, is the type of scaling used most today. Guttman scaling, like the Thurstone scale, recognizes that different items indicate a variable with different intensities. It is based upon the assumption that agreement with the strongest indicators also signifies agreement with the weaker ones. It uses a simple “agree” or “disagree” scale, without any variation in the intensities of preference.
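The cumulative assumption can be checked empirically. In the sketch below, items are ordered from weakest to strongest indicator, each response pattern is compared with the ideal cumulative pattern implied by its total score, and the share of deviations gives a coefficient of reproducibility. The data are hypothetical.

```python
# Sketch of checking Guttman cumulativeness (hypothetical response patterns).

# Columns ordered weakest -> strongest indicator; 1 = agree.
patterns = [
    [1, 1, 1],  # ideal: agrees with everything
    [1, 1, 0],  # ideal: agrees up through the second item
    [1, 0, 1],  # deviation: skips a weaker item, endorses a stronger one
    [0, 0, 0],  # ideal
]

errors = 0
for row in patterns:
    score = sum(row)
    ideal = [1] * score + [0] * (len(row) - score)  # cumulative pattern
    errors += sum(1 for got, want in zip(row, ideal) if got != want)

total = len(patterns) * len(patterns[0])
print(f"coefficient of reproducibility = {1 - errors / total:.2f}")  # 0.83
```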
- There are two common misconceptions about scaling. The first is that whether a set of items forms a scale is determined by the sample being observed: items that scale together in one sample may not do so in another, so a combination of items must be re-tested in each new study rather than assumed to scale because it did so earlier. The second misconception concerns specific scales: given items or data may help determine whether a scale exists in a particular sample, but they do not constitute a scale in and of themselves.
Scales versus Indices
- In general, scales are considered to function better than indices, because scales take into account the intensity of the questions they ask and of the feelings they measure, even though both are ordinal measures.
One example of a weighted index is the Bureau of Labor Statistics' Consumer Price Index (CPI), which represents the sum of the prices of goods that a typical consumer would purchase. When computing this index, the goods are weighted according to how many of them are purchased in the general population (relative to other goods), so that items purchased with greater frequency will have a greater impact on the value of the index.
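A minimal sketch of this kind of weighted price index, with an entirely hypothetical basket of goods, might look as follows.

```python
# Sketch of a CPI-style weighted index: each good's price is weighted by
# how much of it the typical consumer buys. The basket is hypothetical.

basket = {
    # good: (price, quantity bought by the typical consumer)
    "bread": (2.50, 4),
    "milk":  (1.20, 6),
    "fuel":  (3.80, 10),
}

index_value = sum(price * qty for price, qty in basket.values())
print(round(index_value, 2))  # 55.2; frequently bought goods dominate
```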
Sampling
Why sample?
- In most cases, studying the entire population may not be possible
- Sampling allows researchers to gather information from a smaller, more manageable subset of the population. That information can be used to represent the greater population.
How to sample:
- To sample, researchers must first designate a target population about which generalizations will be made.
- The target population is the pool of cases that a researcher wants to study.
- Target populations are turned into practical lists of potential subjects using a sampling frame
Nonprobability Sampling
- any technique in which samples are selected in some way not suggested by probability theory.
- Nonprobability sampling is usually the only method that is practical for field research and comparative historical research
- Types of nonprobability sampling include:
Convenience Sampling: a type of nonprobability sampling in which the sample population relies on available subjects
- Does not allow for control over representativeness
- Only justified if less risky methods are not available
- Researchers must be very cautious about generalizing when this method is used
Purposive or Judgmental Sampling: a type of nonprobability sampling in which the units to be observed are selected on the basis of the researcher's judgment about which ones will be the most useful or representative
- useful method when studying:
- small subsets of a population
- two-group comparison
- deviant cases
Snowball Sampling: a nonprobability sampling method in which each person interviewed may be asked to suggest additional people for interviewing
- often used in field research, as well for the study of special populations
- ex: linked websites, specific populations
- however, this may bias the sample
Quota Sampling: a nonprobability sampling method in which units are selected into a sample on the basis of pre-specified characteristics, so that the total sample will have the same distribution of characteristics assumed to exist in the population being studied
- similar to probability sampling, but has some problems:
- quota frame must be accurate
- selection of sample elements may be biased
Hidden Populations: groups in society that usually fall through the cracks of traditional probability sampling methods
- Include: drug addicts, hacker communities, the homeless, illegal immigrants, migrant workers, etc.
- can be stigmatized or otherwise difficult to find
- often reached using various forms of snowball sampling
- In targeted sampling, cases are gathered from a specific community through chain referrals, with pre-set quotas for known strata
- In respondent-driven sampling, monetary rewards are offered to respondents who bring in additional subjects from the population of interest
Probability Sampling
- in general, samples selected according to probability theory
- often used for large-scale surveys
- Probability sampling, when done properly, offers a truer representation of the population at hand.
- if all members of a population were identical in all respects, there would be no need for careful sampling procedures (however, this is rarely the case)
- a sample of individuals from a population must contain the same variations that exist in the population
- Probability samples are typically more representative than other types of samples because biases are less common
- This is very difficult to accomplish, and often is not 100% accurately done.
- Probability theory permits researchers to estimate the accuracy or representativeness of a sample
- EPSEM (Equal probability of selection method) samples are samples where every member of the population has an equal chance of selection for the sample.
Sampling Bias
- Sampling bias occurs when the sample is not representative of the larger population.
- This isn't always purposeful. Often, factors such as a researcher's location, ease of access to the population, and personal comfort level with approaching strangers have an influence on bias.
Sampling Designs
- Simple random sampling: a form of probability sampling where cases are assigned numbers and a set of random numbers is generated using a random number generator.
- Systematic sampling: a form of probability sampling where every nth case on a list is included in the sample.
- Stratified sampling: a form of probability sampling where cases are divided into meaningful groups of interest (genders, races, etc.) and a random sample is taken from each group.
- Multistage cluster sampling: 'natural' groups (ex: cities) are initially sampled with smaller subsets (city blocks) sampled thereafter.
Sampling is Important
- Poor sampling reduces the validity of using study results to make population inferences.
- Bigger sample size + more stratification = more representative results
- For small populations a large sampling ratio is necessary.
- Roughly 600-800 cases are usually sufficient regardless of population size.
Recording and Analyzing Samples
- Probability theory is the branch of mathematics that provides the tools needed to do accurate research: mathematical sampling methods, statistical analysis, and methods of estimating the parameters of populations. Probability theory uses sampling distributions to accomplish this.
- Results of repeated sampling are typically graphed as dots, with the mean of each sample represented as one dot on the x-axis. As sampling is repeated, sample means often recur, so their dots are stacked on top of one another; the number of samples with a given mean is represented on the y-axis. As more and more samples are drawn, the graph grows taller and clusters around a single value, the true mean, in the middle.
- Parameters are usually estimated from sample surveys.
- The sampling error--known in statistics as the standard error--can be calculated as √(P × Q / n), where P and Q are the population parameters (with Q = 1 − P) and n is the number of cases in each sample. This number is important because it tells the researcher how widely the sample estimates will be distributed around the population parameter.
•68% of sample estimates will fall within one standard error above or below the parameter.
•95% of sample estimates will fall within two standard errors above or below the parameter.
•99.7% of sample estimates will fall within three standard errors above or below the parameter.
•If the parameter is 1.0 or 0.0, the standard error will be 0.
•The standard error decreases as sample size increases.
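These claims can be verified numerically. The sketch below, using only the standard library and illustrative numbers, draws many samples from a population with a known parameter and checks how often the estimates fall within one, two, and three standard errors.

```python
# Sketch checking the standard error formula sqrt(P * Q / n) by simulation.

import random

P = 0.6          # population parameter (illustrative)
Q = 1 - P
n = 400          # size of each sample
se = (P * Q / n) ** 0.5

random.seed(1)
estimates = [sum(random.random() < P for _ in range(n)) / n
             for _ in range(2000)]

for k in (1, 2, 3):
    within = sum(abs(e - P) <= k * se for e in estimates) / len(estimates)
    print(f"within {k} standard error(s): {within:.1%}")
# Expect roughly 68%, 95%, and 99.7%.
```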
- A confidence interval is the range of values within which a population parameter is estimated to lie.
- A confidence level is the probability that a population parameter is within a certain confidence interval.
•These numbers are generally determined by making one's best guess, then allowing a reasonable margin on either side. (For instance, if you estimate that 20% of a population will possess a certain characteristic, you might initially set your confidence interval at 10% to 30%.)
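For a concrete case, here is a sketch of a 95% confidence interval around a sample proportion, using the observed proportion in the standard-error formula (a common approximation) and illustrative numbers.

```python
# Sketch of a 95% confidence interval for a sample proportion.

n = 400
p_hat = 0.20                               # observed sample proportion
se = (p_hat * (1 - p_hat) / n) ** 0.5      # estimated standard error
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"95% CI: {lower:.3f} to {upper:.3f}")  # about 0.161 to 0.239
```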
- Throughout all of this, the size of the population is barely relevant.
•When the sample is a sizeable fraction of the population, a finite population correction is applied: the square root of (N − n) / (N − 1), where N is the population size and n is the sample size.
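A sketch of applying the correction to the standard error, with illustrative numbers:

```python
# Sketch of the finite population correction: multiply the standard error
# by sqrt((N - n) / (N - 1)) when the sample is a large share of the
# population. Numbers are illustrative.

N = 1000   # population size
n = 300    # sample size
p_hat = 0.5

se = (p_hat * (1 - p_hat) / n) ** 0.5
fpc = ((N - n) / (N - 1)) ** 0.5
print(f"uncorrected SE = {se:.4f}, corrected SE = {se * fpc:.4f}")
```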
Populations and Sampling Frames
- A sampling frame is a list of the elements composing a population, from which a sample is selected (ex. a census block)
- A sampling frame must obviously coincide with the population being studied.
- Sampling frames aren't perfect--an element or two is inevitably left out.
- All elements should have equal representation in the sample frame.
- Findings based on a sample can be interpreted as a representation of the elements of the sampling frame.
Types of Sampling Design
Simple Random Sampling: "Basic sampling method" usually used in social research. The researcher usually acquires a sampling frame, then randomly selects numbers. This is done by computer if possible.
Systematic sampling: a list of potential subjects is acquired, and every kth element in the total list is chosen for inclusion. The first subject should be randomly chosen. The results turn out to be virtually identical to simple random sampling, and the procedure is simpler to carry out. Systematic sampling utilizes:
•a sampling interval, the distance between selected subjects (population size / sample size)
•a sampling ratio, the proportion of subjects in the sample relative to potential subjects in the population
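A minimal sketch of the procedure with a hypothetical frame:

```python
# Sketch of systematic sampling: compute the sampling interval k, pick a
# random start within the first interval, then take every kth element.

import random

frame = list(range(1000))          # hypothetical ordered frame
sample_size = 50
k = len(frame) // sample_size      # sampling interval = 20

random.seed(7)
start = random.randrange(k)        # random start, per the advice above
sample = frame[start::k]
print(len(sample), sample[:5])     # 50 elements
```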
Stratified Sampling: a modification that involves dividing the population into homogeneous strata before forming samples, to increase representativeness. It presents the advantage of reducing sampling error through the homogeneity of each stratum, but the disadvantage that each stratum's sample is smaller. It enhances the representation of whatever variable is used to divide the groups. Stratification works with simple random, systematic, or cluster sampling: one simply applies whichever method was planned, but within the stratified groups.
•Stratification variables are the characteristics used to stratify the sample.
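A sketch of proportionate stratified sampling with a hypothetical frame:

```python
# Sketch of stratified sampling: group the frame by a stratification
# variable, then draw a simple random sample within each stratum in
# proportion to its share of the population. Data are hypothetical.

import random
from collections import defaultdict

frame = [(f"person_{i}", "urban" if i % 3 else "rural") for i in range(300)]
sample_size = 30

strata = defaultdict(list)
for person, region in frame:
    strata[region].append(person)

random.seed(3)
sample = []
for region, members in strata.items():
    share = len(members) / len(frame)     # stratum's population share
    take = round(sample_size * share)     # proportionate allocation
    sample.extend(random.sample(members, take))

print(len(sample))  # 30
```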
Multistage Cluster Sampling: involves repeated steps of listing and sampling. Cluster sampling takes advantage of natural groupings in a population: for instance, subjects are pulled from one specific neighborhood to answer questions about city government. Elements are first selected by their location, then analyzed for characteristics that make them good subjects for the specific research, and appropriate subjects are selected accordingly.
•This can produce a more biased sample. Sometimes, such as in medical research, this is desired.
•A trade-off arises between the number of clusters and the number of elements selected within each cluster, since elements within a cluster tend to be homogeneous.
•Researchers are often limited to a maximum number of subjects.
•Ideally, researchers want a large number of clusters with the smallest acceptable number of elements per cluster.
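A sketch of the two-stage version with a hypothetical cluster structure:

```python
# Sketch of two-stage cluster sampling: first sample whole clusters
# (e.g. city blocks), then sample elements within each selected cluster.
# The clusters below are hypothetical.

import random

clusters = {f"block_{b}": [f"resident_{b}_{i}" for i in range(40)]
            for b in range(25)}

random.seed(11)
stage1 = random.sample(list(clusters), k=5)             # 5 blocks

sample = []
for block in stage1:
    sample.extend(random.sample(clusters[block], k=8))  # 8 per block

print(len(sample))  # 40
```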
Weighting Samples
- In many designs, not every member of a population has the same chance of being selected.
- Weighting is the act of assigning different "weights" to members of a sample who had different chances of being selected.
- Often, that can mean skewing how many people are selected from a certain area to ensure that the appropriate ratios of characteristics are maintained.
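A common approach is design weighting, where each respondent counts in proportion to the inverse of their selection probability. A minimal sketch with hypothetical probabilities:

```python
# Sketch of design weighting: weight each respondent by the inverse of
# their probability of selection, so over-sampled groups count for less.
# Selection probabilities are hypothetical.

respondents = [
    ("A", 0.10),   # (respondent, probability of selection)
    ("B", 0.10),
    ("C", 0.02),   # harder-to-reach case, under-sampled
]

weights = {person: 1 / prob for person, prob in respondents}
print(weights)  # {'A': 10.0, 'B': 10.0, 'C': 50.0}
```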
Terms
•Representativeness: the sample's distribution of characteristics remains true to the population it is sampling
•EPSEM: sampling where every member of a population has an equal chance of selection
•element: any piece of a population--can be a member, a location, or a measurable characteristic
•study population: contains all necessary characteristics for study; sample is selected from here
•random selection: sampling method, every member of a population has an equal chance of selection
•sampling unit: person or group of people considered for selection
•parameter: a summary description of a given variable in a population
•statistic: a description of a variable within a sample
•sampling error: degree of error expected by not studying an entire population
•confidence interval: range of values within which a population parameter is estimated to lie.
•confidence level: probability that a population parameter is within a certain confidence interval
•sampling frame: list of people that possess the qualities eligible for research
•weighting: assigning different weights to subjects displaying different probabilities of selection
A typology is the classification of observations in terms of their attributes on two or more variables.
- Often, researchers seek to put variables into an organized format. This is where typologies come into play: typologies consist of the sets of categories created by the intersection of multiple variables.
Other important terminology:
•Correlation: an empirical relationship between two variables such that (1) changes in one are associated with changes in the other, or (2) particular attributes of one variable are associated with particular attributes of the other. Correlation in and of itself does not constitute a causal relationship between the two variables, but it is one criterion of causality.
•Spurious relationship: a coincidental statistical correlation between two variables, shown to be caused by some third variable.
•Units of analysis: the what or whom being studied. In social science research, the most typical units of analysis are individual people.
•Social artifact: any product of social beings or their behavior. Can be a unit of analysis.
•Ecological fallacy: erroneously drawing conclusions about individuals solely from the observation of groups.
•Reductionism: a fault of some researchers: a strict limitation (reduction) of the kinds of concepts to be considered relevant to the phenomenon under study.
•Sociobiology: a paradigm based in the view that social behavior can be explained solely in terms of genetic characteristics and behavior.
•Cross-sectional study: a study based on observations representing a single point in time.
•Longitudinal study: a study design involving the collection of data at different points in time.
•Trend study: a type of longitudinal study in which a given characteristic of some population is monitored over time, e.g. the series of Gallup Polls showing the electorate's preferences for political candidates over the course of a campaign, even though different samples were interviewed at each point.
•Cohort study: a study in which some specific subpopulation, or cohort, is studied over time, although data may be collected from different members in each set of observations. For example, a study of a graduating class in which questionnaires are sent to its members every five years would be a cohort study.
•Panel study: a type of longitudinal study in which data are collected from the same set of people (the sample or panel) at several points in time.