Social science research offers a systematic method of inquiry into empirical phenomena. Using the scientific method and related paradigms of acceptable objectivity, issues can be explored, examined, measured, and reported upon. The following paper addresses the philosophy of research, the core concepts of research design, and an examination of different methods of research, and through these concepts outlines a schema of the social science research process.
Social science research applies scientific methodology to the study of social phenomena. It is a way to help us understand the world around us, using both qualitative and quantitative methods to derive empirical data (McQueen & Knussen, 2002). The value of such work lies in devising new approaches to issues, or in understanding why things happen the way they do. The following paper describes the philosophy of research, the core concepts of research design, and other approaches to research. A conclusion synthesizes these elements.
Philosophy of Research

Empiricism and Quantitative Research
Empiricism is a theory of knowledge which holds that we can know our world through our senses: what can be seen can be studied and measured, and observations of reality can therefore be put forth based on empirically derived results. Yet there is a method for applying empiricism that takes it out of the realm of the abstract and roots it firmly in scientific research (Williams & May, 1996). Quantitative research allows for the systematic measurement of empirical phenomena using mathematical models and theories that relate to the element being studied. The concept of measurement in quantitative research is elemental to empirical observation; through this relationship between quantitative methods and empirical observation, mathematical expressions of the observations can be put forth.
Quantitative research methods allow hypotheses to be supported or refuted. The empirical observation of some phenomenon provides the basis for forming a hypothesis. Quantitative methodology then allows the hypothesis to be tested: the observation leads to a hypothesis, the hypothesis is tested using quantitative methods, the data is analyzed, and the hypothesis is either accepted or rejected.
Scientific Method
Essentially, this describes a process whereby the scientific method is applied to research methodology. The scientific method uses a system of techniques to study phenomena, report on research already conducted, and add to the overall knowledge base, whether through new information or information that corrects previous research. The process of applying the scientific method in social science is to choose an issue, define a research question, gather and measure information about the issue, form hypotheses, collect data, analyze and interpret the data, and report on the results, perhaps with recommendations for future research (McQueen & Knussen, 2002). Barton and Haslett further define the scientific method as being based upon systems logic, a cognitive concept that helps make sense of empirical observations (Barton & Haslett, 2007).
Positivism and Post-Positivism
The scientific method is an aspect of positivism, which holds that scientists can only describe and measure what they can see; anything beyond that progresses into the metaphysical realm, which is not a part of social science research methods (McQueen & Knussen, 2002). Auguste Comte developed the concept of positivism in the early 19th century; the basic idea is that knowledge derives from our senses and what we experience. It is a form of epistemology, which describes how we know what we know (Carlson, 2008). There are perspectives on positivism that dare to embrace metaphysical possibilities, yet at the other end of the spectrum lies a positivism that attempts to base knowledge strictly on logic (Morgan, 2007). The positivist approach is closely linked to empiricism.
Post-positivism is a philosophy that challenges the central tenets of positivism and questions the ways that scientists have approached the study of the world. Its most fundamental concept is that the notion of reality is affected by one's perception of reality; as such, it cannot capture everything that may actually exist. One type of post-positivism is called critical realism. The critical realist is critical of reality, meaning that there may be a reality that science cannot see using positivist approaches. In a general view, the field of theoretical physics would allow for the use of critical realism; such intellectual giants as Stephen Hawking, for example, can expand the knowledge base of physics through the application of critical realism among other methods (Patomaki, 2008).
The issue that arises from positivist and post-positivist approaches is how to conduct research objectively. The importance of objectivity is central to empirical observation and the scientific method. The post-positivist perspective maintains that it is only objective to allow for the possibility of phenomena outside the human ability to sense and observe. The challenge, therefore, is how to approach scientific research knowing the importance of objectivity. Methods, procedures, and definitions are all of significant importance in providing a common framework for researchers to follow, and one that can be replicated. Yet there is also the issue of what defines reality. This is known as the observer effect (Surhone, Timpledon, & Markesen, 2009): one changes the nature of what is being observed through the act of observation, so the state of what is being measured is altered by the act of observing. Truly achieving objectivity therefore means embracing a potentially human-limited conception of reality, and being aware of that limitation.
Scientific Realism
Scientific realism evolved out of the debate on observable and unobservable phenomena. This philosophy allows scientists to state that even though there may be unobservable phenomena, scientific theories can still attempt to describe them. This conception supports the notion that science is progressive: the more we know, the more we can know. Theoretical physics is one area that uses scientific realism; at the other end of the spectrum lies a metaphysical approach to answering questions of theology (Arntz, Chasse, & Vicente, 2005).
Social Constructivism
Social constructivism developed as an approach to social science based on what an individual can know and observe through their interactions within groups. This conception is often used in education research, behavioral research, psychology, and other research examining individual behavior as shaped by group influences (McQueen & Knussen, 2002).
Importance of Objectivity
Regarding objectivity and how we know what we know, based on how we conceptualize reality, some researchers adopt the advocacy-liberatory framework for research. This goes beyond social constructivism's claim that there are observer effects in research, and instead holds that there may be many possible realities depending on milieu and environment; therefore, research should acknowledge this claim and be conducted for the liberation of knowledge. An example in social science research would be examining the effects of literacy programs on the illiterate: the subjects are liberated by the knowledge, their world is broadened, and research has shown that knowledge liberation actually helps people (Lodico, Spaulding, & Voegtle, 2010).
Theoretical and Conceptual Research Paradigms
While there are different approaches to determining what objective research should be based upon in terms of research paradigm, there is also the realm of the conceptual research project and the theoretical research project. Conceptual research examines the literature and posits an idea of something new, a link from the literature to a hypothesis. A theoretical framework already offers a confirmation of a hypothesis, and allows for further study on the subject (McQueen & Knussen, 2002). These two frameworks allow the researcher either to study something new, or to add in some way to the knowledge base of something that has already been researched. Either method is acceptable in social science research.
Core Concepts for Research Design

Purpose
In social science research, we identify the research question, hypotheses, and problem so that we may have a systematic way to study the phenomenon at issue. This process allows the researcher to determine how to go about studying the problem by addressing the question through the hypotheses. Defining the purpose of the research study is important to establish what the contribution to the knowledge base will be, and hence whether or not the study is feasible and valid. The core issue of the research is addressed in the purpose: what is going to be added to the knowledge base of the discipline and the issue being studied (McQueen & Knussen, 2002).
Literature Review
The literature review is one of the most important steps in the research process, and it is conducted in the early stages of the project. In this process, the researcher examines what has already been studied on the issue at hand, what methods were used, how they were defined, what kinds of analysis have been performed, and what evidence was contributed to the knowledge base of the issue. The literature review is both specific and broad, and should be conducted in that way (Hall, 2008). The researcher will look for studies similar to their own proposal, and also cast a broader net to see what might be related yet different from their topic. Both theoretical and conceptual literature should be reviewed in the research process (Williams & May, 1996). A quality literature review follows a defined search strategy; this may include specifying the database(s) to be used, the date ranges, the sample populations, the type of research design, and other potential delimiters (Hall, 2008).
Populations and Samples
The population in a study refers to the individuals or objects about which the research is done: a large collection of objects or individuals from which a sample will be drawn. In many types of social science research it is not possible to test all members of a population unless the population is very small; limitations of time, expense, and accessibility preclude total population sampling. The research population is identified as those that share similar characteristics. The researcher, in attempting to gain an understanding of the total population, will use sampling techniques to draw as representative a number as possible from the total population, which will serve as the research sample and the object of the data collection methodology. The population must also be considered for its applicability to the purpose of the research; that is, the population utilized must fulfill the purpose of the study.
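A minimal sketch of the sampling step described above, assuming a hypothetical sampling frame of member IDs: simple random sampling gives every member of the population an equal chance of selection, which is one common way to pursue representativeness. The seed is fixed only so the example is reproducible.

```python
import random

def draw_sample(population, k, seed=42):
    """Simple random sample of k members without replacement;
    each member has an equal chance of selection."""
    rng = random.Random(seed)  # fixed seed only for reproducibility
    return rng.sample(population, k)

# Hypothetical sampling frame of 1,000 member IDs
population = list(range(1000))
sample = draw_sample(population, 50)
print(len(sample))  # 50 sampled IDs, no duplicates
```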
Variables
A variable is an element of the research study that can take on different values; in quantitative research, the variable is scored numerically. Research distinguishes dependent and independent variables, which describes a causal or inferential relationship: the dependent variable is affected by the manipulation of the independent variable, and the researcher (or an outside element) manipulates the independent variable (McQueen & Knussen, 2002). The attributes of the variables should, as much as possible, be exhaustive and mutually exclusive. As this may not always be feasible, the researcher may define the major attributes of the variables and include a catch-all category for 'other' responses or observations (Hall, 2008).
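The exhaustive-and-mutually-exclusive requirement, with a catch-all bucket, can be illustrated with a tiny coding function (the category set here is a hypothetical example, not from any cited study):

```python
def code_attribute(response, categories):
    """Map a raw response onto a fixed set of mutually exclusive
    attributes; 'other' makes the coding scheme exhaustive."""
    return response if response in categories else "other"

# Hypothetical attribute set for an employment-status variable
EMPLOYMENT = {"employed", "unemployed", "student", "retired"}
print(code_attribute("student", EMPLOYMENT))    # student
print(code_attribute("volunteer", EMPLOYMENT))  # other
```

Every possible response now lands in exactly one category, so the coded variable can be scored numerically without ambiguity.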
Reporting Findings
Data is captured for the research study, analyzed, and the findings reported. In quantitative research this is done through statistical methods, where the statistics indicate the degree of confidence with which the observation under study can be considered valid. Findings are then reported on these numerical variables and applied to the hypotheses, which are reported in the findings as either accepted or rejected. The findings are the empirical results of the study; further discussion typically follows the findings to explore the implications of the results more fully (Grinnell & Unrau, 2010). Reporting the findings of the study involves ethical issues, such as the duty to report and the duty to protect the confidentiality of subjects, among others.
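One common way to express that "degree of confidence" in reported findings is a confidence interval around a point estimate. The sketch below uses the normal approximation (z = 1.96 for roughly 95% confidence) on invented survey scores; small samples would normally use a t-based interval instead.

```python
import math
from statistics import mean, stdev

def mean_with_ci(values, z=1.96):
    """Point estimate and approximate 95% confidence interval for a mean,
    using the normal approximation."""
    m = mean(values)
    se = stdev(values) / math.sqrt(len(values))  # standard error of the mean
    return m, (m - z * se, m + z * se)

# Hypothetical survey scores
scores = [48, 50, 52, 49, 51, 50, 50, 50, 50, 50]
m, (low, high) = mean_with_ci(scores)
print(m)  # sample mean: 50
```

Reporting the interval alongside the mean tells readers how precisely the quantity was estimated, not just what its value was.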
Findings should be reported to the community at large, and to the research participants (if applicable), in language appropriate to the community the research is meant to serve. For example, for the findings of a high-level bio-molecular study manipulating hydrogen particles in encapsulated polymers, it may well be appropriate to use language that only those knowledgeable in the field would understand. However, if that same research related to the development of synthetic gasoline, which would impact the world economy at the macro level and individuals at the micro level, then the researcher should use language that is more universally understood (Staedter, 2011).
Research findings can be disseminated through summaries, posters, press releases, brochures and flyers, conferences, newsletter, and relevant seminars, among others. Depending upon the subject or issue, it may be necessary to adopt a delicate approach to disseminating research findings, while not distorting the data. For example, if the issue is regarding a culturally sensitive topic, then the researcher would be wise to adopt a culturally sensitive framework for disclosing the findings, regardless of what they are, in relation to the audience who will be reading the findings (McQueen & Knussen, 2002).
Assumptions, Limitations, and Delimitations
Assumptions are those elements related to the research that have no hard evidence to support them, no facts or other empirical bases. Anecdotal data would be a type of assumption that could be reported and may be relevant, yet should be noted as such. If the researcher is able, it behooves them to attempt to justify how they will validate their assumptions; if this is not possible, then it should be clearly stated in the research.
Limitations in research relate to those factors that may impact internal and external validity of the research design. If the limitations are major, the research project should probably be redesigned; it is not uncommon to have limitations, yet these should be clearly stated in the project and be of relatively minor impact to justify the research continuing as designed. Some examples of limitations include a lack of data, problems in sampling, or the actual limits related to the type of research design selected for the study. Delimitations are those limits set by the researcher, such as studying only blonde-haired people in a project, or restricting the study to a geographic area, for example (Pyrczak & Bruce, 2000).
Validity and Reliability
In social science research (and other research), validity refers to measuring what is intended to be measured. In survey research methodology, the measurement of attitudes through inference based on observations is somewhat indirect. Content validity means making sure the measurement tool includes all the attributes of the concept being measured (Frankfort-Nachmias & Nachmias, 2008). This would seem to be a difficult thing to do in most situations, though a firm conceptual analysis of the issue should help inform the process. Empirical validity concerns the relationship between the measurement tool and the measured outcome that informs the research question and hypotheses (Williamson & Ahmed, 2007): an observed measure can be fixed to an observed characteristic.
Construct validity refers to the degree to which relationships or inferences can be made based upon the operationalization of the variables in the study. That is, the study construct is considered to have high validity when the study scale correlates with the theoretical concept; it is essentially a relationship between the theoretical concept and the scale of measurement (Hall, 2008).
In social science research, achieving complete validity through the three tests of validity (content, empirical, construct) may be difficult, and therefore reliability can serve as another measure of research integrity (Given, 2008). Reliability concerns the extent to which measurement is free of what are called variable errors: errors that occur between observations measured by the same instrument, either once or more than once. For example, using the same survey and asking the same question may get the same answers every time, and therefore be low in variable errors; or the answers may differ, and be high in variable errors. If a measurement instrument is high in variable errors, it is low in reliability (Frankfort-Nachmias & Nachmias, 2008).
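One common way to quantify this kind of consistency is test-retest reliability: administer the same instrument twice and correlate the two sets of responses. The sketch below computes a Pearson correlation on invented responses; answers that barely change between administrations yield a coefficient near 1, indicating low variable error.

```python
import math

def pearson_r(x, y):
    """Pearson correlation, used here as a test-retest reliability
    coefficient between two administrations of the same instrument."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical responses to the same survey item, two weeks apart
first  = [3, 4, 5, 2, 4]
second = [3, 5, 5, 2, 4]
print(pearson_r(first, second))  # close answers -> high reliability
```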
Program Evaluation Methods
Program evaluation essentially refers to methods meant to assess different aspects of a program. This is often done in social research, and may be quantitative, qualitative, or a mix of methods. A major factor in performing program evaluation is to assess whether the program is doing what it is intended to do; establishing causation is therefore a primary concern in program evaluation.
Questions that may be addressed in program evaluation include determining efficiency, cost, impact, implementation, design, and need. A needs assessment examines the actual need in the population that a program is intended to address: asking whether there is a need, what the actual problem is, and whether and how the program can address it. The impact pathway of the program may also be assessed, to determine whether the actual design of the program matches the intent it was designed for; this is essentially a logic analysis. Assessing the implementation of the program relates to the methods through which the program elements are being applied to the population it is intended for (Posavac, 2010).
Determining causality of a program relates to the impact analysis of the program. Quasi-experimental designs allow for studying impacts in program evaluation. Randomized experiments also offer a strong advantage in assessing program impacts, due to the high reliability and credibility of the design. Essentially, this aspect of program evaluation looks at the issue of whether or not the program is doing or has done what it was intended to do. Statistical methods are often used to accurately measure impacts in program evaluation (Posavac, 2010).
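One widely used quasi-experimental impact estimator is difference-in-differences: the program group's before-after change minus the comparison group's change. The data below are hypothetical literacy scores invented for illustration; a real impact analysis would also report uncertainty around the estimate.

```python
from statistics import mean

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: the program group's mean change minus
    the comparison group's mean change, a common quasi-experimental
    estimate of program impact."""
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical literacy scores before and after a program
treat_pre, treat_post = [50, 52, 48], [60, 62, 58]
ctrl_pre,  ctrl_post  = [49, 51, 50], [52, 54, 53]
effect = did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post)
print(effect)  # estimated program impact: 7 points
```

Subtracting the comparison group's change helps separate the program's effect from trends that would have occurred anyway.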
The methods used to evaluate programs must in turn be reliable and valid; to establish causality, these elements must be sound and in keeping with established guidelines in research methodology. Without them, program evaluation outcomes would not be credible, and therefore would be useless (Posavac, 2010). The entire idea behind program evaluation is to assess usefulness based on the credibility of the evaluation results. The advantages and disadvantages of the various methods (logic analysis, experimental designs, and others) depend upon what will yield the best and most useful outcomes for the evaluation. Some will be better than others, and some may simply be dictated by outside constraints such as time and money.
Action Research and Other Methods
Action research, otherwise known as practitioner research, is the systematic undertaking of scientific inquiry by the participants in the inquiry themselves. The action on the part of the practitioner is to formulate the questions, gather the data, analyze the data, and report on the data. Typically this type of research involves some aspect of reflective investigation, with an element of personal interest on the part of the participants.
The advantages of action research are that there is a vested interest in the undertaking of the research project by the participants. A major disadvantage is that the research may end up being tainted by researcher bias. A typical application of action research may be within a cultural or ethnic group, a classroom, or even within an organization to study organizational processes (McNiff & Whitehead, 2002).
There are six key principles of action research: reflective critique, dialectical critique, collaborative resource, risk, plural structure, and theory/practice/transformation (as one element) (O'Brien, 2001). This type of research involves real people rather than experimental situations, since the application of the research is toward real people in real situations. The key principles therefore involve a person-centered feedback loop with a strong emphasis on using commonly understandable language. This type of research is used when flexibility is required, when a holistic approach may be necessary, and when events may change quickly (O'Brien, 2001).
Action research has its place in the research world, yet in situations which are not oriented around this particular paradigm, other research approaches may be appropriate. Descriptive research can help us describe what exists in the present moment. Experimental research can tell us what can potentially exist given certain conditions. Case study research helps us explore and analyze real-life situations (Hall, 2008). Where action research may offer solutions to a specific problem or context, other types of basic research can offer theories or generalizations that may further be explored (Frankfort-Nachmias & Nachmias, 2008).
Conclusion
In the social sciences, the process of scientific inquiry can take various forms and follow different methodologies. The area of inquiry may dictate the research paradigm, the research design, and the analysis of results. The basic approach to social science research is to identify an issue, define a problem, develop a research question and hypotheses, establish the variables, describe how the variables will be operationalized, develop the methodology including the data gathering instruments, apply appropriate analysis methods, and report on the findings. If the research is basic and offering to the evidence base of knowledge in an area, basic research methods and designs may be appropriate. When the problem is discrete and has a vested interest by the participants, action research methods may be used. The researcher must carefully plan the research process to ensure reliability and validity of the overall flow of the process; in this manner, the outcome has a higher potential of being accepted as credible evidence in support of the idea being explored.
REFERENCES
Arntz, W., Chasse, B., & Vicente, M. (2005). What the Bleep do we Know. Deerfield Beach, FL: Health Communications.
Barton, J., & Haslett, T. (2007). Analysis, synthesis, systems thinking and the scientific method: rediscovering the importance of open systems. Systems Research and Behavioral Science, 143-155.
Carlson, J. (2008). Model Theory & International Relations Theory: Positivism, Ontology and the Nature of Social Science. ISA 2008 National Convention. San Francisco: ISA.
Frankfort-Nachmias, C., & Nachmias, D. (2008). Research Methods in the Social Sciences 7th ed. New York: Worth Publishers.
Given, L. (2008). The Sage encyclopedia of qualitative research methods, Volume 1. Thousand Oaks: Sage.
Grinnell, R., & Unrau, Y. (2010). Social Work Research and Evaluation: Foundations of Evidence-Based Practice. New York: Oxford University Press.
Hall, R. (2008). Applied social research: planning, designing and conducting real-world research. South Yarra: Palgrave-MacMillan.
Lodico, M., Spaulding, D., & Voegtle, K. (2010). Methods in Educational Research: From Theory to Practice. Hoboken, NJ: John Wiley & Sons.
McNiff, J., & Whitehead, J. (2002). Action research: principles and practice. London: Routledge Farmer.
McQueen, R., & Knussen, C. (2002). Research methods for social science: a practical introduction. New Jersey: Prentice Hall.
Morgan, D. (2007). Paradigms Lost and Pragmatism Regained. Mixed Methods Research, 48-76.
O'Brien, R. (2001). An Overview of the Methodological Approach of Action Research. Theory and Practice of Action Research.
Patomaki, H. (2008). After Critical Realism: The Relevance of Contemporary Science. Helsinki: Globalism Research Centre.
Posavac, E. (2010). Program Evaluation: Methods and Case Studies. NJ: Prentice Hall.
Pyrczak, F., & Bruce, R. (2000). Writing empirical research reports: a basic guide for students of the social and behavioral sciences. Pyrczak Publishing.
Staedter, T. (2011, January 27). Synthetic Gasoline for $1.50/Gallon and No Emissions. Retrieved from Discovery News.
Surhone, L., Timpledon, M., & Markesen, S. (2009). Observer Effect. Mauritius: Betascript Publishers.
Williams, M., & May, T. (1996). Introduction to the Philosophy of Social Research. London: UCL Press.
Williamson, W., & Ahmed, A. (2007). Survey Research and Islamic Fundamentalism: A Question about Validity. Journal of Muslim Mental Health, 155-176.
Philosophy of Research
Empiricism and Quantitative Research
Quantitative research methods allow for hypotheses to be proven or disapproved. The empirical observation of some phenomena provides the basis for making a hypothesis. Quantitative methodology then allows the hypothesis to be tested; therefore the observation leads to a hypothesis, the hypothesis is tested using quantitative methods, the data is analyzed, and the hypothesis is either accepted or rejected.
Scientific Method
Essentially, this describes a process whereby the scientific method is applied to research methodology. The scientific method uses a system of techniques to study phenomena, report on research already conducted, and add to the overall knowledge base whether it be through new information or information that corrects previous research. The process of the scientific method is for application in social science is to choose an issue, define a research question, gather/measure information about the issue, form hypotheses, collect data, analyze and interpret the data, and report on the results, perhaps with recommendations for future research (McQueen & Knussen, 2002). Barton and Haslett further define the scientific method as being based upon systems logic, which is a cognitive concept to help make sense of empirical observations (Barton & Haslett, 2007).
Positivism and Post-Positivism
The scientific method is an aspect of positivism, which holds that scientists can only describe and measure what they can see; anything beyond that approach progresses into the metaphysical realm, which is not a part of social science research methods (McQueen & Knussen, 2002). Auguste Comte developed the concept of positivism in the early 19th century; the basic concept is that knowledge derives from our senses and what we experience. It is a form of epistemology, which is to describe how we know what we know (Carlson, 2008). There are perspectives on positivism that yet dare to embrace metaphysical possibilities, yet there is also a spectrum upon which the exact opposite of such thought is grounded in positivism that attempts to base knowledge on logic (Morgan, 2007). The positivist approached is closely linked to empiricism.
Post positivism is a philosophy that rejects the central tenets of positivism. This philosophy questions the ways that scientists have approached the study of the world. The most fundamental concept of post positivism is that the notion of reality is affected by one's perception of reality; as such, it cannot capture everything that may actually exist. One type of post positivism is called critical realism. The critical realist is critical of reality, meaning that there may be a reality for which science cannot see using positivist approaches. In a general view, the field of theoretical physics would allow for the use of critical realism; such intellectual giants as Stephen Hawking, for example, can expand the knowledge base of physics through the application of critical realism among other methods (Patomaki, 2008).
The issue that arises from positivist and post-positivist approaches is how to objectively conduct research. The importance of objectivity is central the empirical observation and the scientific method. Clearly too, the post-positivist perspective maintains that it is only objective to allow for the possibility of phenomena outside the human ability to sense and observe. The challenge therefore is how to approach scientific research knowing the importance of objectivity. The methods, procedures, and definitions are all clearly of significant importance in allowing for a common framework for researchers to follow and one that can be replicated. Yet, there is also the issue of what defines reality. This is known as the Observer Effect (Surhone, Timpledon, & Markesen, 2009). One changes the nature of what is being observed through the act of observation. The state of what is attempting to be measured is altered by the act of observing. Therefore, truly achieving objectivity means embracing a potentially human-limited conception of reality, and being aware of that.
Scientific Realism
Scientific realism evolved over the debate on observable and unobservable phenomena. The development of this philosophy allows scientists to state that even though there may be unobservable phenomena, scientific theories can yet attempt to describe them. This conception supports the notion that science is a progressive phenomenon; the more we know, the more we can know. Theoretical physics is one area that uses scientific realism; and at the other end of the spectrum, a metaphysical approach to answering questions of theology exists (Arntz, Chasse, & Vicente, 2005).
Social Constructivism
Social constructivism developed as an approach to social science based on what an individual can know and observe due to their interactions within groups. This conception is often used in education research, behavioral research, psychology, and other research examining individual behavior as influenced from inter-group influences (McQueen & Knussen, 2002).
Importance of Objectivity
Regarding objectivity and how we know what we know, based on how we conceptualize reality, there are researchers that adopt the advocacy-liberatory framework for research. This goes beyond social constructivism claims that there are observer effects in research, and instead advocates that there may be many possible realities based on the milieu and environment; therefore, the use of research should be to acknowledge this claim, and conduct research for the liberation of knowledge. An example in social science research would be through examining the effects of literacy programs on the illiterate; the subjects are liberated by the knowledge, their world is broadened, and research has shown that knowledge liberation actually helps people (Lodico, Spaulding, & Voegtle, 2010).
Theoretical and Conceptual Research Paradigms
While there are different approaches to determining what objective research should be based upon in terms of research paradigm, there is also the realm of the conceptual research project and the theoretical research project. The conceptual research examines the literature and posits and idea of something new, a link from the literature to a hypothesis. A theoretical framework already offers a confirmation of a hypothesis, and also allows for further study on the subject (McQueen & Knussen, 2002). The relationship of these two different frameworks to research study is that they allow the researcher to either study something new, or in some way add to the knowledge base of something that has already been researched. Either method is acceptable in social science research.
Core Concepts for Research Design
Purpose
In social science research, we identify the research question, hypotheses, and problem so that we may have a systematic way to study the phenomenon at issue. This process allows the researcher to determine how to go about studying the problem by addressing the question through the hypotheses. Defining the purpose of the research study is important to establish what the contribution to the knowledge base will be, and hence whether or not the study is feasible and valid. The core issue of the research is addressed in the purpose; that is, what is going to be added to the knowledge base of the discipline and issue being studied (McQueen & Knussen, 2002).
Literature Review
The literature review is one of the most important steps in the research process, and it is conducted in the early stages of the project. In this process, the researcher will examine what has already been studied on the issue at hand, what methods were used, how they were defined, what kinds of analysis have been performed, and what evidence was contributed to the knowledge base of the issue. The literature review is both specific and broad, and should be conducted in that way (Hall, 2008). The researcher will look for studies similar to their own proposal, and cast a broader net to see what might be related yet different from their topic. Both theoretical and conceptual literature should be reviewed in the research process (Williams & May, 1996). A quality literature review follows a defined approach; this may include specifying the database(s) that will be used, the date ranges, the sample populations, the type of research design, and other potential delimiters (Hall, 2008).
Populations and Samples
The population in a study refers to the individuals or objects about which the research is conducted. It is a large collection of those objects or individuals, from which a sample will be drawn. In many types of social science research it is not possible to test all members of a population, unless the population is very small; limitations of time, expense, and accessibility preclude total population sampling. The research population is identified as those that share similar characteristics. The researcher, in attempting to gain an understanding of the total population, will use sampling techniques to draw as representative a number as possible from the total population; this serves as the research sample and the object of the methodology used for data collection. The population must be considered for its applicability to the purpose of the research, meaning that the population utilized fulfills the purpose of the study.
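As a brief illustration of drawing a sample from a defined population, the following sketch uses simple random sampling, in which every member has an equal chance of selection. The participant IDs and sizes here are invented for the example.

```python
import random

# Fixed seed so the draw is reproducible for demonstration purposes.
random.seed(42)

# Hypothetical population frame: a list of participant IDs (N = 1000).
population = [f"participant_{i}" for i in range(1, 1001)]
sample_size = 50  # n = 50

# Simple random sampling without replacement: each member has an
# equal chance of selection, and no member appears twice.
sample = random.sample(population, sample_size)

print(len(sample))        # 50
print(len(set(sample)))   # 50 (no duplicates)
```

More elaborate designs (stratified or cluster sampling) follow the same logic but partition the population frame first.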
Variables
A variable is an element of the research study that can take on different values. In quantitative research, the variable will be scored numerically. In research, there are dependent and independent variables; this distinction describes a causal or inferential relationship. The dependent variable is affected by the manipulation of the independent variable; the researcher (or an outside element) manipulates the independent variable (McQueen & Knussen, 2002). The attributes of the variables should, as much as possible, be exhaustive and mutually exclusive. As this may not always be feasible, the researcher may define the major attributes of the variables and include a catch-all category for 'other' responses or observations (Hall, 2008).
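The idea of exhaustive, mutually exclusive attributes with an 'other' catch-all can be sketched as a simple coding scheme. The category set and function name below are invented for illustration.

```python
# Hypothetical coding scheme for a categorical variable. The defined
# attributes are mutually exclusive; the 'other' category makes the
# scheme exhaustive for any raw response.
EMPLOYMENT_CATEGORIES = {"full-time", "part-time", "unemployed", "retired", "student"}

def code_employment(response: str) -> str:
    """Map a raw survey response to a defined attribute, else 'other'."""
    normalized = response.strip().lower()
    return normalized if normalized in EMPLOYMENT_CATEGORIES else "other"

print(code_employment("Full-Time"))   # "full-time"
print(code_employment("freelancer"))  # "other"
```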
Reporting Findings
Data are captured for the research study, analyzed, and the findings reported. In quantitative research this is done through statistical methods, where the statistics indicate the degree of confidence that the observation under study is valid. Findings are then reported on these numerical variables and applied to the hypotheses, which are reported in the findings as either accepted or rejected. The findings are the empirical results of the study; further discussion typically follows the findings, to more fully address the implications of the results (Grinnell & Unrau, 2010). Reporting the findings of the study involves ethical issues, such as the duty to report and the duty to protect the confidentiality of subjects, among others.
Findings should be reported to the community at large, and to the research participants (if applicable). The findings should be disseminated in language appropriate to the community the research is meant to serve. For example, for the findings of a high-level bio-molecular study manipulating hydrogen particles in encapsulated polymers, it may well be appropriate to use language that only those knowledgeable in the field would understand. However, if that same research related to the development of synthetic gasoline, which would impact the world economy at the macro level and individuals at the micro level, then the researcher should use language that is more universally understood (Staedter, 2011).
Research findings can be disseminated through summaries, posters, press releases, brochures and flyers, conferences, newsletters, and relevant seminars, among others. Depending upon the subject or issue, it may be necessary to adopt a delicate approach to disseminating research findings, while not distorting the data. For example, if the issue is a culturally sensitive topic, then the researcher would be wise to adopt a culturally sensitive framework for disclosing the findings, regardless of what they are, in relation to the audience who will be reading them (McQueen & Knussen, 2002).
Assumptions, Limitations, and Delimitations
Assumptions are those elements related to the research that have no hard evidence, facts, or other empirical bases to support them. Anecdotal data would be one type of assumption that could be reported and may be relevant, yet should be noted as such. If possible, the researcher should attempt to justify how the assumptions will be validated; if this is not possible, then the assumptions should be clearly stated in the research.
Limitations in research relate to those factors that may impact internal and external validity of the research design. If the limitations are major, the research project should probably be redesigned; it is not uncommon to have limitations, yet these should be clearly stated in the project and be of relatively minor impact to justify the research continuing as designed. Some examples of limitations include a lack of data, problems in sampling, or the actual limits related to the type of research design selected for the study. Delimitations are those limits set by the researcher, such as studying only blonde-haired people in a project, or restricting the study to a geographic area, for example (Pyrczak & Bruce, 2000).
Validity and Reliability
In social science research (and other research) validity refers to measuring what is intended to be measured. In survey research methodology, the measurement of attitudes through inference based on observations is somewhat indirect. Content validity means making sure the measurement tool includes all the attributes of the concept being measured (Frankfort-Nachmias & Nachmias, 2008). This would seem to be a difficult thing to do in most situations, though a firm conceptual analysis of the issue should help inform the process. Empirical validity derives from the relationship between the measurement tool and the measured outcome that informs the research question and hypotheses (Williamson & Ahmed, 2007). An observed measure can be fixed to an observed characteristic.
Construct validity refers to the degree to which relationships or inferences can be made based upon the operationalization of the variables in the study. That is, the study construct is considered to have high validity when the study scale correlates with the theoretical concept. It is essentially a relationship between the theoretical concept and the scale of measurement (Hall, 2008).
In social science research, achieving complete validity across the three tests of validity (content, empirical, construct) may be difficult, and therefore reliability can serve as another measure of research integrity (Given, 2008). Reliability concerns the extent of what are called variable errors: discrepancies that occur between observations measured by the same instrument, either on one occasion or across repeated occasions. For example, administering the same survey and asking the same question may yield the same answers every time, indicating low variable error, or the answers may differ, indicating high variable error; if a measurement instrument is high in variable errors, it is low in reliability (Frankfort-Nachmias & Nachmias, 2008).
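One common way to quantify this kind of reliability is test-retest correlation: the same respondents answer the same instrument twice, and a correlation near 1.0 indicates low variable error. The scores below are invented for illustration, and Pearson's correlation is used as a simple reliability index.

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation, used here as a simple test-retest reliability index."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores: five respondents answering the same survey item
# on two occasions (test and retest).
time1 = [4, 5, 3, 2, 4]
time2 = [4, 5, 3, 1, 4]  # small discrepancies = low variable error

r = pearson_r(time1, time2)
print(round(r, 2))  # a value near 1.0 indicates high reliability
```

In practice, multi-item scales would typically use an internal-consistency index such as Cronbach's alpha instead, but the underlying logic of consistency across repeated measurements is the same.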
Other Approaches to Research
Program Evaluation Methods
Program evaluation essentially refers to methods meant to assess different aspects of a program. This is often done in social research, and may be quantitative, qualitative, or a mix of methods. A major factor in performing program evaluation is assessing whether the program is doing what it is intended to do; therefore establishing causation is a primary concern in program evaluation.
Questions that may be addressed in program evaluation include determining efficiency, cost, impact, implementation, design, and need. A needs assessment examines the actual need in the population that a program is intended to address. This includes asking whether there is a need, what the actual problem is, and whether and how the program can address it. The impact pathway of the program may also be assessed, to determine if the actual design of the program matches the intent for which it was designed; this is essentially a logic analysis. Assessing the implementation of the program relates to the methods through which the program elements are being applied to the population it is intended for (Posavac, 2010).
Determining causality of a program relates to the impact analysis of the program. Quasi-experimental designs allow for studying impacts in program evaluation. Randomized experiments also offer a strong advantage in assessing program impacts, due to the high reliability and credibility of the design. Essentially, this aspect of program evaluation looks at the issue of whether or not the program is doing or has done what it was intended to do. Statistical methods are often used to accurately measure impacts in program evaluation (Posavac, 2010).
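A minimal sketch of this kind of impact analysis is a comparison of outcomes between program participants and a control group, with a t statistic indicating whether the difference is likely to be real. All the scores below are invented for illustration; a real evaluation would use a proper experimental or quasi-experimental design and a full significance test.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical outcome scores for program participants (treatment)
# and non-participants (control).
treatment = [72, 78, 75, 80, 74, 77]
control = [68, 70, 66, 71, 69, 67]

# Raw impact estimate: the difference in group means.
effect = mean(treatment) - mean(control)

# Welch's t statistic: the effect scaled by its standard error.
n1, n2 = len(treatment), len(control)
se = sqrt(stdev(treatment) ** 2 / n1 + stdev(control) ** 2 / n2)
t = effect / se

print(round(effect, 1))  # estimated program impact in outcome points
print(round(t, 2))       # larger |t| = stronger evidence of a real impact
```

The t statistic would then be compared against a critical value (or converted to a p-value) to decide whether the program's apparent impact is statistically credible.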
The methods used to evaluate programs must in turn be reliable and valid; to establish causality, these elements must be sound, in keeping with established guidelines in research methodology. Without them, program evaluation outcomes would not be credible, and therefore would be useless (Posavac, 2010). The entire idea behind program evaluation is to assess usefulness, based on the credibility of the evaluation results. The advantages and disadvantages of the various methods (logic analysis, experimental designs, and others) depend upon what will yield the best and most useful outcomes for the evaluation. Some will be better than others, and some choices may simply be dictated by outside constraints, such as time and money.
Action Research and Other Methods
Action research is otherwise known as practitioner research: the systematic undertaking of scientific inquiry by the participants of the inquiry themselves. The practitioner formulates the questions, gathers the data, analyzes the data, and reports on the data. Typically this type of research involves some aspect of reflective investigation, with an element of personal interest on the part of the participants.
The advantages of action research are that there is a vested interest in the undertaking of the research project by the participants. A major disadvantage is that the research may end up being tainted by researcher bias. A typical application of action research may be within a cultural or ethnic group, a classroom, or even within an organization to study organizational processes (McNiff & Whitehead, 2002).
There are six key principles of action research: reflective critique, dialectical critique, collaborative resource, risk, plural structure, and theory/practice/transformation (as one element) (O'Brien, 2001). This type of research involves real people rather than experimental situations, since the application of the research is toward real people in real situations. Therefore the key principles involve a person-centered feedback loop with a strong emphasis on using commonly understandable language. This type of research is used when flexibility is required, when a holistic approach may be necessary, and when events may change quickly (O'Brien, 2001).
Action research has its place in the research world, yet in situations which are not oriented around this particular paradigm, other research approaches may be appropriate. Descriptive research can help us describe what exists in the present moment. Experimental research can tell us what can potentially exist given certain conditions. Case study research helps us explore and analyze real-life situations (Hall, 2008). Where action research may offer solutions to a specific problem or context, other types of basic research can offer theories or generalizations that may further be explored (Frankfort-Nachmias & Nachmias, 2008).
Conclusion
In the social sciences, the process of scientific inquiry can take various forms and follow different methodologies. The area of inquiry may dictate the research paradigm, the research design, and the analysis of results. The basic approach to social science research is to identify an issue, define a problem, develop a research question and hypotheses, establish the variables, describe how the variables will be operationalized, develop the methodology including the data gathering instruments, apply appropriate analysis methods, and report on the findings. If the research is basic and offering to the evidence base of knowledge in an area, basic research methods and designs may be appropriate. When the problem is discrete and has a vested interest by the participants, action research methods may be used. The researcher must carefully plan the research process to ensure reliability and validity of the overall flow of the process; in this manner, the outcome has a higher potential of being accepted as credible evidence in support of the idea being explored.
REFERENCES
Arntz, W., Chasse, B., & Vicente, M. (2005). What the Bleep do we Know. Deerfield Beach, FL: Health Communications.
Barton, J., & Haslett, T. (2007). Analysis, synthesis, systems thinking and the scientific method: rediscovering the importance of open systems. Systems Research and Behavioral Science, 143-155.
Carlson, J. (2008). Model Theory & International Relations Theory: Positivism, Ontology and the Nature of Social Science. ISA 2008 National Convention. San Francisco: ISA.
Frankfort-Nachmias, C., & Nachmias, D. (2008). Research Methods in the Social Sciences 7th ed. New York: Worth Publishers.
Given, L. (2008). The Sage encyclopedia of qualitative research methods, Volume 1. Thousand Oaks: Sage.
Grinnell, R., & Unrau, Y. (2010). Social Work Research and Evaluation: Foundations of Evidence-Based Practice. New York: Oxford University Press.
Hall, R. (2008). Applied social research: planning, designing and conducting real-world research. South Yarra: Palgrave-MacMillan.
Lodico, M., Spaulding, D., & Voegtle, K. (2010). Methods in Educational Research: From Theory to Practice. Hoboken, NJ: John Wiley & Sons.
McNiff, J., & Whitehead, J. (2002). Action research: principles and practice. London: Routledge Farmer.
McQueen, R., & Knussen, C. (2002). Research methods for social science: a practical introduction. New Jersey: Prentice Hall.
Morgan, D. (2007). Paradigms Lost and Pragmatism Regained. Mixed Methods Research, 48-76.
O'Brien, R. (2001). An Overview of the Methodological Approach of Action Research. Theory and Practice of Action Research.
Patomaki, H. (2008). After Critical Realism: The Relevance of Contemporary Science. Helsinki: Globalism Research Centre.
Posavac, E. (2010). Program Evaluation: Methods and Case Studies. NJ: Prentice Hall.
Pyrczak, F., & Bruce, R. (2000). Writing empirical research reports: a basic guide for students of the social and behavioral sciences. Pyrczak Publishing.
Staedter, T. (2011, January 27). Synthetic gasoline for $1.50/gallon and no emissions. Retrieved from Discovery News.
Surhone, L., Timpledon, M., & Markesen, S. (2009). Observer Effect. Mauritius: Betascript Publishers.
Translate-Polish. Does College Education Pay? Web: translate-polish.com/college-educated.html
Williams, M., & May, T. (1996). Introduction to the Philosophy of Social Research. London: UCL Press.
Williamson, W., & Ahmed, A. (2007). Survey Research and Islamic Fundamentalism: A Question about Validity. Journal of Muslim Mental Health, 155-176.
Empirical WriterUK
A science writer and editor currently working for GraduateWriter.com