Research Design: Qualitative, Quantitative and Mixed Methods Approaches
Quantitative research is one of the most widely used research designs. It is a means for testing objective theories by examining the relationship among variables (Creswell 2008). These variables, in turn, can typically be measured on instruments, so that numbered data can be analysed using statistical procedures. The final written report has a set structure consisting of introduction, literature and theory, methods, results and discussion.
Researchers suggest that any study should begin with a clear idea of what data or information is to be collected. Researchers may use or compile available data or records to start their research; afterwards, all the data are analysed and assessed to produce statistical results.
In this description, I will appraise a study by Bakalis & Watson (2005) on "Nurses' decision-making in clinical practice" that used a quantitative research approach. In detail, I will explore its objectivity, hard data, sample selection, data collection, data analysis and presentation, and conclusion.
Quantitative Research Features
Objectivity
Saddler (2006) described quantitative research as quantifying or calculating data using numbers and statistics. This research design distinguishes and associates variables, and uses objective, logical and deductive measures.
Quantitative research also holds the position that truth is absolute and that there is a single reality that can be defined by careful measurement. To find truth, one must be completely objective, meaning that values, feelings and personal perceptions cannot enter into the measurement of reality (Burns & Grove 2005).
Hart (2008) mentioned that objectivity refers to the degree of the researcher's involvement in the collection and analysis of the data. Quantitative researchers generally use data collection methods that are as objective as possible; structured, quantifiable instruments are objective. On the other hand, there are achievable aims that should be met at the end of the research process.
The Bakalis article used structured, quantifiable questionnaires. The researchers developed a questionnaire of 15 statements and distributed 60 copies to 60 nurses, 20 from each of three clinical areas. The article does not state clearly how many researchers were involved in the collection and analysis of the data.
Hard Data
All researchers, clinical or basic, must determine the best way to collect study data so that the outcomes can be generalisable (Winsett & Cashion 2007). Hard data should be collected to gather as much information as possible; sources may include research textbooks, medical or nursing journals, newspaper articles, letters or research studies.
LoBiondo-Wood & Haber and Polit & Beck, cited by Winsett & Cashion (2007), describe systematic clarification as the first concept: the data collection approach is methodical and organised in such a way as to answer the clinical question posed.
Although most nursing research involves the collection of new data through self-report, observation, or biophysiological instrumentation, some research involves the analysis of preexisting data, such as are available through written documents. Clinical records, such as hospital records, nursing charts, and so forth, constitute rich and relatively inexpensive data sources (Polit cited by Fitzpatrick and Wallace, 2005).
The Bakalis study presented a newly developed questionnaire consisting of 15 statements. The statements were formulated by the researchers using well-known nursing texts (Carpenito, cited by Bakalis, 2005). To test the internal consistency of this questionnaire, the researchers used Cronbach's alpha reliability coefficient (Cormack, cited by Bakalis, 2005).
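To illustrate the internal-consistency test named above, the following is a minimal sketch of Cronbach's alpha in Python. The respondent scores are hypothetical, not data from the Bakalis study, and the function is a textbook formulation rather than the authors' actual computation.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# Hypothetical Likert-scale responses: rows are respondents, columns are items.
from statistics import variance

def cronbach_alpha(scores):
    """scores: list of respondent rows, each a list of item scores."""
    k = len(scores[0])                      # number of questionnaire items
    items = list(zip(*scores))              # transpose: one tuple per item
    item_vars = sum(variance(item) for item in items)
    totals = [sum(row) for row in scores]   # each respondent's total score
    total_var = variance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical responses from five nurses to four items
responses = [
    [4, 4, 3, 4],
    [3, 3, 3, 2],
    [5, 4, 5, 5],
    [2, 2, 1, 2],
    [4, 5, 4, 4],
]
print(round(cronbach_alpha(responses), 2))  # → 0.95
```

Values of alpha above roughly 0.7 are conventionally read as acceptable internal consistency.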
Statistics
According to Polit & Beck (2009), the data collected in a study need to be analysed systematically in order to discover any trends and patterns. Statistical procedures allow researchers to organise, interpret and communicate numerical information.
Burns and Grove (2007) stated that statistical procedures are essentially used to scrutinise and analyse the data collected during the study. With statistical data, researchers can readily understand the results of research through either descriptive (numerical) or inferential (generalising) statistical methods.
Burns and Grove (2007) categorise the summarising of numerical data as descriptive statistics. To organise the distribution of data and arrange it from lowest to highest, frequency distributions are applied through tables, charts, diagrams or percentages. Central tendency is then measured through the mean, mode and median, and variability or dispersion through the range, variance, standard deviation, standardised scores and scatterplots (Burns & Grove 2007).
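The descriptive measures listed above can be sketched with Python's standard library. The ages below are hypothetical sample characteristics, not data from the Bakalis study.

```python
# Descriptive statistics: central tendency (mean, median, mode)
# and dispersion (range, standard deviation) for a hypothetical sample.
from statistics import mean, median, mode, stdev

ages = [24, 27, 27, 31, 35, 29, 27, 40, 33, 27]

print("mean:", mean(ages))                  # → 30
print("median:", median(ages))              # → 28.0
print("mode:", mode(ages))                  # → 27
print("range:", max(ages) - min(ages))      # → 16
print("stdev:", round(stdev(ages), 2))      # → 4.81
```

In a report these values would typically be presented in a frequency table or summary table rather than printed individually.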
The Bakalis study used descriptive statistical methods. The sample characteristics applied a measure of central tendency (the mean) to summarise the age, sex and clinical experience of the nurses. The researchers then reported the numerical data (mean and standard deviation) alongside the Kruskal-Wallis test statistic (P-values) to compare the groups.
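The Kruskal-Wallis test used by Bakalis can be sketched as follows. The three groups of scores are hypothetical stand-ins for the three clinical areas, chosen with no tied values so that the basic H formula applies without a tie correction; this is an illustrative implementation, not the authors' SPSS output.

```python
# Kruskal-Wallis H: rank all scores together, then
# H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1),
# where R_i is the rank sum of group i. No ties, so no tie correction.
def kruskal_wallis_h(*groups):
    pooled = sorted(x for g in groups for x in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # rank each score
    n = len(pooled)
    h = sum(sum(rank[x] for x in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * h - 3 * (n + 1)

# Hypothetical decision-making scores for the three clinical areas
medical  = [12, 15, 18]
surgical = [14, 19, 22]
critical = [25, 27, 30]
print(round(kruskal_wallis_h(medical, surgical, critical), 2))  # → 5.96
```

The H statistic is then compared against a chi-square distribution with (number of groups − 1) degrees of freedom to obtain the P-value reported in such tables.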
Process of Sample Selection, Data Collection, Data Analysis and Presentation
Sample Selection
Sampling is the process of selecting a portion of the population to represent the entire population (Polit & Beck 2009). This process collects the sample from certain people or other study objects. A sample itself is a subset of the population selected for a particular study, and the members of a sample are the subjects (Burns & Grove 2007).
Kerlinger & Lee, cited by Burns & Grove (2007), explained that a population is all elements, whether people or other elements, that meet the defined criteria for a research or study purpose. Polit and Beck (2009) introduced two sampling designs in quantitative research: non-probability sampling and probability sampling. Non-probability sampling collects data from the population non-randomly, whereas probability sampling selects elements from the population at random.
According to Burns & Grove (2007) there are four methods of probability sampling: simple random sampling, stratified random sampling, cluster sampling and systematic sampling. The five methods of non-probability sampling are convenience sampling, quota sampling, purposive sampling, network sampling and theoretical sampling.
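The contrast between the two sampling designs can be sketched in a few lines. The population of nurse identifiers below is entirely hypothetical.

```python
# Simple random sampling (probability) vs. convenience sampling
# (non-probability) over a hypothetical population of 200 nurses.
import random

random.seed(42)  # reproducible draw for illustration
population = [f"nurse_{i:03d}" for i in range(1, 201)]

# Probability sampling: every nurse has an equal chance of selection.
random_sample = random.sample(population, 20)

# Convenience sampling: take whoever is most readily available,
# e.g. the first 20 on the ward roster.
convenience_sample = population[:20]

print(len(random_sample), len(convenience_sample))  # → 20 20
```

The random sample supports statistical generalisation to the population; the convenience sample, as in the Bakalis study, does not.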
Bakalis compared nurses' decision-making between the medical, surgical and critical care areas, using a convenience sample of sixty nurses: twenty were taken from each of the three specialties. This is a non-probability sampling method because the researchers collected the questionnaires from the most conveniently available people as participants (Polit & Beck 2009).
Data Collection
Data collection is the process of acquiring the subjects and collecting the data for the study (Burns & Grove 2007). In this phase, the researchers gather information from the population or other subjects. Parahoo (2006) mentions several measurement tools for collecting data: questionnaires, observation schedules, scales and instruments.
However, Burns & Grove (2007) described that researchers should focus on obtaining subjects, collecting data consistently, maintaining research controls, protecting the integrity (or validity) of the study, and solving problems that threaten to disrupt the study. These tasks are interrelated and run simultaneously.
In the study, Bakalis used a questionnaire as the measurement tool. Study participants were drawn from three nursing areas. The researchers placed ten questionnaires near the nurses' station, with a reminder about what the study involved, in order to collect the data consistently.
To protect the integrity or validity of the data and maintain research controls, all completed questionnaires were then enclosed in a large envelope provided. There has since been considerable attention to research governance, under which questionnaires would have to be submitted for ethical scrutiny after the study; such steps are meant to solve problems that threaten to disrupt the study, but this requirement did not exist at the time of the study.
Data Analysis and Presentation
Burns & Grove (2007) set out several stages for analysing quantitative data: preparation for data analysis, description of the sample, reliability of measurement, exploratory data analysis, confirmatory analysis and post hoc analysis. Data analysis is a systematic method of examining data gathered for any research investigation to support conclusions or interpretations about the data (Fitzpatrick & Wallace 2006).
Burns & Grove (2007) note that statistics can be used to describe, examine relationships, predict and examine causality. Polit & Beck (2009) distinguish two kinds of statistical procedure, descriptive and inferential: descriptive statistics synthesise and describe data using frequency distributions, central tendency and variability or dispersion, while inferential statistics consist of the estimation of parameters and hypothesis testing.
Aaronson, cited by Fitzpatrick & Wallace (2006), gives examples of statistical computer programs (e.g. SPSS, SAS, LISREL, EQS) used to calculate statistics and describe sample distributions. In summary, data analysis is a methodical process of understanding study data through either descriptive or inferential statistics.
In the Bakalis study, the collected data were first prepared using SPSS. Secondly, the sample was characterised using central tendency and variability to convey the descriptive analysis. Thirdly, the researchers did not restate the reliability of the measurement method, although they had mentioned using Cronbach's alpha reliability coefficient to test the internal consistency of the questionnaire.
Fourthly, Bakalis presented the study data in a table to test and describe whether scores differed among groups, and used Pearson's correlations to examine the relationship between variables.
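Pearson's correlation, as used at this step, can be sketched directly from its definition. The paired data below (years of clinical experience against a decision-making score) are hypothetical, not values from the Bakalis study.

```python
# Pearson's r: covariance of the paired values divided by the
# product of their standard deviations (computed here as root
# sums of squared deviations, so the n terms cancel).
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

experience = [1, 3, 5, 8, 12]    # hypothetical years of experience
score      = [10, 14, 15, 19, 24]  # hypothetical decision-making score
print(round(pearson_r(experience, score), 2))  # → 0.99
```

Values of r near +1 or −1 indicate a strong linear relationship; values near 0 indicate little or none.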
Fifthly, for the confirmatory analysis Bakalis used a non-parametric test of the hypothesis (Polit & Beck 2009). Bakalis also concluded that inference (generalisation) was not possible due to the small sample size and the narrow scope of information.
To conduct a post hoc analysis, researchers may apply the t-test, Analysis of Variance (ANOVA) or Analysis of Covariance (ANCOVA). Bakalis did not clearly report this step.
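As a sketch of one of the tests named above, the one-way ANOVA F statistic can be computed from between-group and within-group sums of squares. The three groups of scores are hypothetical; this illustrates the technique only, since Bakalis did not report this step.

```python
# One-way ANOVA: F = (SS_between / (k-1)) / (SS_within / (N-k)),
# where k is the number of groups and N the total sample size.
def anova_f(*groups):
    all_x = [x for g in groups for x in g]
    n, k = len(all_x), len(groups)
    grand = sum(all_x) / n
    # Between-group sum of squares: group means vs. the grand mean
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: scores vs. their own group mean
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Hypothetical scores for three clinical areas
a = [12, 15, 18]
b = [14, 19, 22]
c = [25, 27, 30]
print(round(anova_f(a, b, c), 2))  # → 11.57
```

A large F relative to the F distribution with (k−1, N−k) degrees of freedom indicates that at least one group mean differs from the others.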
Conclusion
With reference to Creswell (2008), Burns & Grove (2007) and Polit & Beck (2009), the Bakalis study did not fully meet the requirements of quantitative research. The researchers acknowledged that their study had a major limitation due to the small sample size and narrow scope of information. Furthermore, the collected data were not submitted to research governance for ethical scrutiny. Bakalis followed four of the stages of analysing and presenting data but did not carry out the fifth, post hoc analysis.
However, they used a scientific questionnaire whose internal consistency was assessed with Cronbach's alpha reliability coefficient. The study applied descriptive statistics to examine and describe the relationships among variables: the researchers used a convenience sampling method, summarised the data as measures of central tendency and variability, and compared the groups with the Kruskal-Wallis test.
For a future research project, it is strongly recommended to conduct both quantitative and qualitative (mixed-methods) research in order to widen the scope of information and the range of study participants. The study also needs to be re-tested for further reliability and validity. These recommendations are in line with the researchers' own conclusions.