It can also be expressed as a hypothesis. In that case the search for an explanation is framed as a statement to be proved or disproved, depending on the goals of your research. What sets quantitative research apart from other types is how the information is collected and analyzed: with statistics. When most people think about quantitative research, they think specifically about statistics, and understanding how to crunch the numbers will improve your quantitative research practice. Survey research uses interviews, questionnaires, and sampling polls to measure behavior with precision. It allows researchers to assess behavior and then present the findings in an accurate way.
This is usually expressed as a percentage. Survey research can be conducted on one group specifically or used to compare several groups. When conducting survey research, it is important that the people questioned are sampled at random.
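As a minimal sketch of what random sampling means in practice, the following draws a simple random sample from a pool of potential respondents (the pool and sample size are hypothetical, chosen only for illustration):

```python
import random

# Hypothetical pool of potential respondents
population = [f"respondent_{i}" for i in range(1000)]

random.seed(42)  # fixed seed only so the example is reproducible
# Simple random sample of 50, drawn without replacement:
# every member of the pool has an equal chance of selection
sample = random.sample(population, k=50)

print(len(sample))       # 50
print(len(set(sample)))  # 50 -- random.sample never repeats an element
```

In a real survey the "pool" would be a sampling frame such as a list of phone numbers or addresses, and the draw would be done once, before any interviewing begins.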
This allows for more accurate findings across a greater spectrum of respondents. It is also very important that you work with reputable statisticians and field-service agents: survey scenarios involve a high level of personal interaction and a greater chance of unexpected circumstances, both of which can affect the data and heavily influence the outcome of the survey.
There are several ways to conduct survey research. Surveys can be done in person, over the phone, or through mail or email; in the last case they can be self-administered. When conducted on a single group, survey research is its own category.
However, survey research can also be applied to the other types of research listed below. Have you ever been asked to give your thoughts after visiting a website? Questions like that are survey research. Correlational research tests for relationships between two variables; it is performed to establish what effect one variable has on the other and how that shapes the relationship.
Correlational research is conducted in order to explain an observed occurrence. The survey is conducted on a minimum of two groups, and the specific variables being researched are measured as they naturally occur, without manipulation.
Once the information is compiled, it is analyzed mathematically to draw conclusions about the effect that one variable has on the other. Remember: correlation does not imply causation, so you should not draw causal conclusions from correlational research alone.
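The mathematical analysis typically starts with a correlation coefficient. As a sketch, here is Pearson's r computed from scratch on two hypothetical paired measurements (the variable names and numbers are made up for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired measurements from a correlational survey
hours_trained = [2, 4, 6, 8, 10]
performance   = [55, 60, 64, 71, 75]

r = pearson_r(hours_trained, performance)
print(round(r, 3))  # 0.996 -- a strong positive association,
                    # but by itself this does not establish causation
```

An r near +1 or -1 indicates a strong linear relationship; an r near 0 indicates little or no linear relationship. Either way, the coefficient alone says nothing about which variable (if either) is the cause.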
Causal-comparative research looks to uncover a cause-and-effect relationship. Here the comparison is made between groups rather than between two variables within one group: rather than looking solely for a statistical relationship, it tries to identify, specifically, how the different groups are affected by the same circumstance.
First, I describe the types of study you can use. Next, I discuss how the nature of the sample affects your ability to make statements about the relationship in the population. I then deal with various ways to work out the size of the sample. Finally, I give advice about the kinds of variable you need to measure.
Studies aimed at quantifying relationships are of two types: descriptive and experimental. In a descriptive study, no attempt is made to change behavior or conditions--you measure things as they are.
In an experimental study, you take measurements, try some sort of intervention, then take measurements again to see what happened. Descriptive studies are also called observational, because you observe the subjects without otherwise intervening. The simplest descriptive study is a case, which reports data on only one subject; examples are a study of an outstanding athlete or of a dysfunctional institution.
Descriptive studies of a few cases are called case series. In cross-sectional studies, variables of interest in a sample of subjects are assayed once and the relationships between them are determined.
In prospective or cohort studies, some variables are assayed at the start of a study, then the subjects are followed up over time to determine the outcome. Another label for this kind of study is longitudinal, although this term also applies to experiments. Case-control studies compare cases (subjects with a particular attribute, such as an injury or ability) with controls (subjects without the attribute); comparison is made of the exposure to something suspected of causing the cases, for example volume of high-intensity training, or number of alcoholic drinks consumed per day.
Case-control studies are also called retrospective, because they focus on conditions in the past that might have caused subjects to become cases rather than controls. A common case-control design in the exercise science literature is a comparison of the behavioral, psychological, or anthropometric characteristics of elite and sub-elite athletes. Another type of study compares athletes with sedentary people on some outcome such as an injury, disease, or disease risk factor.
Here you know the difference in exposure (training vs no training), so these studies are really cohort or prospective, even though the exposure data are gathered retrospectively at only one time point.
The technical name for these studies is historical cohort. Experimental studies are also known as longitudinal or repeated-measures studies, for obvious reasons. They are also referred to as interventions, because you do more than just observe the subjects. In the simplest experiment, a time series, one or more measurements are taken on all subjects before and after a treatment. A special case of the time series is the so-called single-subject design, in which measurements are taken repeatedly on one subject before and after the treatment.
Time series suffer from a major problem: any change you observe might be due to something other than the treatment. For example, subjects might do better on the second test because of their experience of the first test, or they might change their diet between tests because of a change in weather, and diet could affect their performance of the test. The crossover design is one solution to this problem. Normally the subjects are given two treatments, one being the real treatment, the other a control or reference treatment.
Half the subjects receive the real treatment first, the other half the control first. After a period of time sufficient to allow any treatment effect to wash out, the treatments are crossed over. Any effect of retesting or of anything that happened between the tests can then be subtracted out by an appropriate analysis. Multiple crossover designs involving several treatments are also possible.
If the treatment effect is unlikely to wash out between measurements, a control group has to be used. In these designs, all subjects are measured, but only some of them--the experimental group--then receive the treatment. All subjects are then measured again, and the change in the experimental group is compared with the change in the control group. If the subjects are assigned randomly to experimental and control groups or treatments, the design is known as a randomized controlled trial.
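A minimal sketch of random assignment to the two groups (subject names are hypothetical; real trials would use a pre-registered randomization scheme):

```python
import random

def randomize(subjects, seed=None):
    """Randomly split subjects into equal-sized experimental and control groups."""
    rng = random.Random(seed)
    shuffled = subjects[:]      # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

subjects = [f"subject_{i}" for i in range(20)]
experimental, control = randomize(subjects, seed=1)
print(len(experimental), len(control))  # 10 10
```

Because each subject is equally likely to land in either group, any pre-existing difference between the groups is due to chance alone, which is exactly what the statistical analysis then accounts for.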
Random assignment minimizes the chance that either group is not typical of the population. If the subjects are blind (or masked) to the identity of the treatment, the design is a single-blind controlled trial. The control or reference treatment in such a study is called a placebo: an inactive treatment made to appear like the real one. Blinding of subjects eliminates the placebo effect, whereby people react differently to a treatment if they think it is in some way special.
In a double-blind study, the experimenter also does not know which treatment the subjects receive until all measurements are taken. Blinding of the experimenter is important to stop him or her treating subjects in one group differently from those in another. In the best studies even the data are analyzed blind, to prevent conscious or unconscious fudging or prejudiced interpretation.
Ethical considerations or lack of cooperation (compliance) by the subjects sometimes prevent experiments from being performed. For example, a randomized controlled trial of the effects of physical activity on heart disease may not have been performed yet, because it is unethical and unrealistic to randomize people to 10 years of exercise or sloth.
But there have been many short-term studies of the effects of physical activity on disease risk factors. The various designs differ in the quality of evidence they provide for a cause-and-effect relationship between variables. Cases and case series are the weakest. A well-designed cross-sectional or case-control study can provide good evidence for the absence of a relationship.
But if such a study does reveal a relationship, it generally represents only suggestive evidence of a causal connection. A cross-sectional or case-control study is therefore a good starting point to decide whether it is worth proceeding to better designs. Prospective studies are more difficult and time-consuming to perform, but they produce more convincing conclusions about cause and effect.
Experimental studies provide the best evidence about how something affects something else, and double-blind randomized controlled trials are the best experiments. Confounding is a potential problem in descriptive studies that try to establish cause and effect. Confounding occurs when part or all of a significant association between two variables arises through both being causally associated with a third variable. For example, in a population study you could easily show a negative association between habitual activity and most forms of degenerative disease.
But older people are less active, and older people are more diseased, so you're bound to find an association between activity and disease without one necessarily causing the other. To get over this problem you have to control for potential confounding factors. For example, you make sure all your subjects are the same age, or you include age in the analysis to try to remove its effect on the relationship between the other two variables.
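The age-and-activity example can be made concrete with a small sketch. In the hypothetical data below (numbers invented for illustration), activity and disease score are unrelated within each age group, yet pooling the groups produces a strong spurious correlation; stratifying by age is one simple way of controlling for the confounder:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: age drives both variables, so within each age group
# activity and disease are unrelated, but pooled they look associated.
young_activity = [8, 9, 10, 8, 9, 10]
young_disease  = [1, 0, 1, 0, 1, 0]
old_activity   = [2, 3, 4, 2, 3, 4]
old_disease    = [5, 4, 5, 4, 5, 4]

pooled_r = pearson_r(young_activity + old_activity,
                     young_disease + old_disease)
print(round(pooled_r, 2))                       # -0.94: strong, but spurious
print(pearson_r(young_activity, young_disease)) # 0.0 within the young group
print(pearson_r(old_activity, old_disease))     # 0.0 within the old group
```

Stratifying (analyzing each age group separately) is the simplest form of control; including age as a covariate in a regression model achieves the same end when age is continuous.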
You almost always have to work with a sample of subjects rather than the full population. But people are interested in the population, not your sample.
To generalize from the sample to the population, the sample has to be representative of the population. The safest way to ensure that it is representative is to use a random selection procedure.
You can also use a stratified random sampling procedure to make sure that you have proportional representation of population subgroups. When the sample is not representative of the population, selection bias is a possibility. A statistic is biased if the value of the statistic tends to be wrong (or, more precisely, if the expected value--the average value from many samples drawn using the same sampling method--is not the same as the population value).
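A sketch of stratified random sampling, with a hypothetical population split into urban and rural subgroups (the grouping and the 10% sampling fraction are invented for illustration):

```python
import random

def stratified_sample(population, strata_key, fraction, seed=None):
    """Draw the same fraction from each stratum so that subgroups
    keep their population proportions in the sample."""
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for members in strata.values():
        k = round(len(members) * fraction)
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical population: 700 urban and 300 rural residents
population = ([("urban", i) for i in range(700)]
              + [("rural", i) for i in range(300)])
sample = stratified_sample(population, strata_key=lambda p: p[0],
                           fraction=0.1, seed=0)
print(len(sample))  # 100: 70 urban + 30 rural, matching the 70/30 split
```

A plain random sample of 100 would only approximate the 70/30 split; stratifying guarantees it.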
A typical source of bias in population studies is age or socioeconomic status: people who agree to take part tend to differ systematically from those who do not. Thus a high compliance (the proportion of people contacted who end up as subjects) is important in avoiding bias. Failure to randomize subjects to control and treatment groups in experiments can also produce bias.
If you let people select themselves into the groups, or if you select the groups in any way that makes one group different from another, then any result you get might reflect the group difference rather than an effect of the treatment.
For this reason, it's important to randomly assign subjects in a way that ensures the groups are balanced in terms of important variables that could modify the effect of the treatment.
Human subjects may not be happy about being randomized, so you need to state clearly that it is a condition of taking part. Often the most important variable to balance is the pre-test value of the dependent variable itself. You can get close to perfectly balanced randomization for this or another numeric variable by ranking the subjects on that variable and then randomly assigning one member of each consecutive pair to each group. If you have male and female subjects, or any other grouping that you think might affect the treatment, perform this randomization process for each group ranked separately.
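The ranked pair-matching idea can be sketched as follows (subject names and pre-test scores are hypothetical, and the exact procedure the author intends may differ in detail):

```python
import random

def pair_matched_randomize(subjects, pretest, seed=None):
    """Rank subjects on their pre-test score, then randomly assign one
    member of each consecutive pair to each group, so the groups end up
    balanced on the pre-test variable."""
    rng = random.Random(seed)
    ranked = sorted(subjects, key=pretest)
    group_a, group_b = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        rng.shuffle(pair)  # coin toss within each matched pair
        group_a.append(pair[0])
        group_b.append(pair[1])
    return group_a, group_b

# Hypothetical subjects with pre-test scores
subjects = [("s1", 40), ("s2", 55), ("s3", 42), ("s4", 58),
            ("s5", 47), ("s6", 50), ("s7", 61), ("s8", 45)]
a, b = pair_matched_randomize(subjects, pretest=lambda s: s[1], seed=3)
mean = lambda g: sum(s[1] for s in g) / len(g)
print(len(a), len(b))          # 4 4
print(abs(mean(a) - mean(b)))  # small: group means nearly equal on pre-test
```

Because each pair contributes one subject to each group, the group means on the pre-test variable can differ by at most the average within-pair gap, which is far tighter than unrestricted randomization.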
Data from such pair-matched studies can be analyzed in ways that may increase the precision of the estimate of the treatment effect. When selecting subjects and designing protocols for experiments, researchers often strive to eliminate all variation in subject characteristics and behaviors. Their aim is to get greater precision in the estimate of the effect of the treatment. The problem with this approach is that the effect generalizes only to subjects with the same narrow range of characteristics and behaviors as in the sample.
Depending on the nature of the study, you may therefore have to strike a balance between precision and applicability. If you lean towards applicability, your subjects will vary substantially on some characteristic or behavior that you should measure and include in your analysis. How many subjects should you study? You can approach this crucial issue via statistical significance, confidence intervals, or "on the fly".
Statistical significance is the standard but somewhat complicated approach. Your sample size has to be big enough for you to be sure you will detect the smallest worthwhile effect or relationship between your variables. Smallest worthwhile effect means the smallest effect that would make a difference to the lives of your subjects or to your interpretation of whatever you are studying.
If you have too few subjects in your study and you still get a statistically significant effect, most people regard your finding as publishable. But if the effect is not significant with a small sample size, most people erroneously regard it as unpublishable. Using confidence intervals or confidence limits is a more accessible approach to sample-size estimation and interpretation of outcomes.
You simply want enough subjects to give acceptable precision for the effect you are studying. Acceptable means it won't matter to your subjects or to your interpretation of whatever you are studying if the true value of the effect is as large as the upper limit or as small as the lower limit. A bonus of using confidence intervals to justify your choice of sample size is that the sample size is about half what you need if you use statistical significance.
An acceptable width for the confidence interval depends on the magnitude of the observed effect. If the observed effect is close to zero, the confidence interval has to be narrow, to exclude the possibility that the true population value could be substantially positive or substantially negative.
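A minimal sketch of computing a confidence interval for an observed mean effect (the data and the use of a t-critical value of roughly 2.0 for ~95% coverage are illustrative assumptions, not the author's procedure):

```python
import math
import statistics

def mean_confidence_interval(data, t_crit=2.0):
    """Approximate confidence interval for a mean: mean +/- t * standard error.
    t_crit of about 2.0 gives roughly 95% coverage for moderate sample sizes."""
    n = len(data)
    m = statistics.mean(data)
    se = statistics.stdev(data) / math.sqrt(n)
    return m - t_crit * se, m + t_crit * se

# Hypothetical observed changes in a performance score for 25 subjects
changes = [1.2, 0.8, 1.5, -0.2, 0.9, 1.1, 0.4, 1.3, 0.7, 1.0,
           0.6, 1.4, 0.2, 0.9, 1.1, 0.8, 1.2, 0.5, 1.0, 0.7,
           0.9, 1.3, 0.6, 1.1, 0.8]
low, high = mean_confidence_interval(changes)
print(round(low, 2), round(high, 2))
```

If both limits would lead to the same practical conclusion for your subjects, the sample is big enough; if the interval spans values that matter and values that don't, you need more subjects to narrow it.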
There are four main types of quantitative research designs: descriptive, correlational, quasi-experimental, and experimental. The differences between the four types relate primarily to the degree to which the researcher designs for control of the variables in the experiment. It is easier to understand the different types if you view quantitative design as a continuum: one end of the range represents designs with minimal control of variables (descriptive), and the other end represents designs with full control (experimental).
Quantitative research design is an excellent way of finalizing results and proving or disproving a hypothesis. The structure has not changed for centuries, so it is standard across many scientific fields and disciplines.
Basic Research Designs. This module will introduce the basics of choosing an appropriate research design and the key factors that must be considered. Learning objectives: distinguish between quantitative and qualitative research methods, and identify whether a research project is qualitative or quantitative in nature. Comparison of qualitative and quantitative research: qualitative research is a systematic, subjective approach used to describe life experiences and give them meaning; quantitative research is a formal, objective, systematic process for obtaining information about the world--a method used to describe and test relationships.