Wednesday, September 18, 2013

Example of knowledge mobilization



So, along with submitting and posting his dissertation here, McGill grad student Timothy Blaise also made a musical version of his research: an amazing a cappella cover of a famous Queen tune, which Blaise called "Bohemian Gravity". The lyrics can be found here. This is a fine example of knowledge mobilization.

Friday, July 26, 2013

Types of Field Notes


Groenewald (2004) describes the 4 types of field notes he used in his phenomenological research:

• Observational notes (ON) — 'what happened' notes deemed important enough by the researcher to record. Bailey (1996) emphasises the use of all the senses in making observations.
• Theoretical notes (TN) — 'attempts to derive meaning' as the researcher thinks or reflects on experiences.
• Methodological notes (MN) — 'reminders, instructions or critique' to oneself on the process.
• Analytical memos (AM) — end-of-field-day summaries or progress reviews.

Sources cited:
Groenewald, T. (2004). A phenomenological research design illustrated. International Journal of Qualitative Methods, 3(1). Article 4.

Sunday, July 14, 2013

Types of Validity in Research

Validity is important to any type of research. Types include (but are NOT limited to!):

  • External validity - the degree to which the research can be generalized to other settings. There are 2 types: population validity and ecological validity
  • Internal validity - the degree to which the research accurately establishes cause and effect
  • Criterion validity - the degree to which the research results agree with other measures of the same construct. An example of this might be whether GPA scores correlate with SAT scores.
  • Content validity - the degree to which the research accurately measures the "content" associated with the construct studied. So, for instance, if a literacy test only looks at grammar, it is missing many different parts of the construct called "literacy," since literacy requires much more than just grammar!
  • Construct validity - the degree to which the "construct" measures what it's supposed to measure. For instance, does IQ actually measure intelligence, as it is supposed to - or something else, such as test-taking proficiency or cultural knowledge? An example of a construct validity problem in IQ testing is that tests are specific to a country/jurisdiction: IQ tests in the US use imperial measurements (inches, feet, gallons), so a person in Canada might score poorly on such questions not because they do not know how to convert measurements, but because they are familiar with metric (not imperial) measurements.
  • Face validity - similar to content validity, face validity has to do with whether the research appears to measure what it is supposed to measure, based on the judgment of observers. A famous example of a failure here would be the "tests" used during the Salem Witch Trials: they may have seemed valid to some at the time, but we now know they were seriously flawed.
  • Predictive validity - the degree to which the research will predict certain types of results if similar research is done in the future. For example, are SAT scores predictive of academic performance in college?

Sample size determination for quantitative research

Continuous data are data for which the responses fall on a continuum (e.g., Likert-type scales), unlike categorical data (e.g., gender, occupation, etc.).

This article by James E. Bartlett, Joe W. Kotrlik and Chadwick C. Higgins (2001) describes the process of sample size determination. They also offer a useful table that compares continuous and categorical data calculations.

In the article the authors invite use of this table *if* the margin of error is appropriate for a researcher's study. If the researcher selects a different margin of error, the sample sizes need to be re-calculated (a sketch of that calculation appears below).
A final note to researchers: remember that the degree to which you can generalize depends on the sampling METHOD (not just the sample size!), so be sure to familiarize yourself with the method you use!
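For readers who want to re-calculate rather than read values off the table: the figures come from Cochran's (1977) sample size formulas, which Bartlett, Kotrlik and Higgins walk through. Below is a minimal Python sketch of those formulas. The function names, defaults (a 7-point scale, a 3% margin of error for continuous data, 5% for categorical, alpha = .05), and example population are my own illustration, not code from the article:

```python
import math

def cochran_continuous(N, scale_points, t=1.96, sd_fraction=6, rel_margin=0.03):
    """Sample size for continuous data (e.g., a Likert-type scale).
    Assumes the standard deviation is estimated as scale_points / sd_fraction
    and the acceptable margin of error is rel_margin * scale_points."""
    s = scale_points / sd_fraction        # estimated standard deviation
    d = rel_margin * scale_points         # acceptable margin of error
    n0 = (t ** 2) * (s ** 2) / (d ** 2)   # uncorrected sample size
    if n0 / N > 0.05:                     # finite population correction
        n0 = n0 / (1 + n0 / N)
    return math.ceil(n0)

def cochran_categorical(N, t=1.96, p=0.5, d=0.05):
    """Sample size for categorical data; p = 0.5 maximizes variance p*(1-p)."""
    n0 = (t ** 2) * p * (1 - p) / (d ** 2)
    if n0 / N > 0.05:
        n0 = n0 / (1 + n0 / N)
    return math.ceil(n0)

# Example: population of 1,500, responses on a 7-point scale
print(cochran_continuous(1500, scale_points=7))  # about 110
print(cochran_categorical(1500))                 # about 306
```

Changing `rel_margin` or `d` and re-running is exactly the re-calculation the authors describe.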

There are also a few rules of thumb you might consider, especially if the population size is not known (Hill, R. (1998). What sample size is "enough" in internet survey research? Interpersonal Computing and Technology: An Electronic Journal for the 21st Century, 6(3/4), 1-10). These include:

  • Generally speaking, it's difficult to justify fewer than 10 cases or more than 500
  • In simple matched-pairs experimental designs, 10 cases can suffice, but more complicated experimental designs OR correlational research should have at least 30 cases. When these are broken down into categories (e.g., male/female), multiply the minimum number of cases by the number of categories (so a correlational study comparing males and females would need at least 2 × 30 = 60 cases)
  • For multiple regression, the sample should be at least 10 times the number of variables (so 5 variables means you should have at least 50 cases)
  • For purely descriptive research, the sample should be 10% of the population

To check the sufficiency of data when the population is not known, you can perform a "split-half analysis," in which you divide the data in two and see whether both halves generate the same conclusions.
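As a hypothetical illustration of the idea (the helper, data, and categories below are mine, not from a published source), a split-half check on already-coded data might look like this:

```python
import random
from collections import Counter

def split_half_check(coded_items, top_n=3, seed=42):
    """Randomly split coded data in two and compare the dominant categories
    each half produces. 'coded_items' is a list of category labels, one per
    coded excerpt. Strong overlap suggests the data may be sufficient."""
    items = list(coded_items)
    random.Random(seed).shuffle(items)
    mid = len(items) // 2
    top_a = {c for c, _ in Counter(items[:mid]).most_common(top_n)}
    top_b = {c for c, _ in Counter(items[mid:]).most_common(top_n)}
    return top_a & top_b, top_a ^ top_b

shared, divergent = split_half_check(
    ["access", "cost", "access", "trust", "cost", "access", "trust", "time"])
print("Both halves surface:", shared)
print("Only one half surfaces:", divergent)
```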

Saturday, June 22, 2013

Critical History

Critical histories can be applied to policies, practices, or institutions (among other phenomena). They can be constructed from archival and textual data, and/or from microhistories (a focus on a single person or place). Critical histories require the researcher to apply a critical theoretical framework (such as CRT, feminism, critical democracy, etc.) to analyze the data collected.



Examples:
Gilroy, P. & McNamara, O. (2009). A critical history of research assessment in the UK and its post-1992 impact on education. Journal of Education for Teaching: International Research and Pedagogy, 35(4), 321-335.

Reid, D.K. & Knight, M.G. (2006). Disability justifies exclusion of minority students: A critical history grounded in disability studies. Educational Researcher, 35(6), 18-23.

Friday, June 14, 2013

Levels of Evidence

It's useful to consider the varying levels of evidence when thinking about the strength of research. The levels in the figure below come from the medical literature - and tend to privilege a positivist research perspective. Notice that the methodological design of the research plays a role in its level - and more generalizable, quantitative results are privileged. This is one perspective, and it is contested, especially in the social sciences!
[Figure: levels of evidence hierarchy.] From Pinto, L., Spares, S. & Driscoll, L. (2012). 95 Strategies for Remodeling Instruction. Thousand Oaks, CA: Sage.

Saturation

Conclusions in qualitative research are drawn from patterns a researcher identifies in the data, or from conceptual (NOT statistical) relationships the data uncover. As such, we look for points of saturation to know when we have collected and analyzed enough data – and this can determine the sample size and the point at which analysis should end.

Data saturation is the point where new data and their sorting only confirm the categories (often numbering between three and six or so), themes, and conclusions already reached. Onwuegbuzie and Leech (2007) also discuss theoretical saturation (Strauss & Corbin, 1990) and informational redundancy (Lincoln & Guba, 1985) as specific forms of saturation. There are various strategies for determining when saturation is reached, but researchers should consider keeping a codebook to track themes and findings (one hypothetical sketch appears below).
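One hypothetical way to operationalize a codebook-style saturation check (a sketch of the general idea only, not a procedure from the sources cited):

```python
def saturation_point(coded_batches, patience=2):
    """Return the 1-indexed batch after which 'patience' consecutive batches
    of newly coded data added no new categories to the codebook."""
    seen, quiet = set(), 0
    for i, batch in enumerate(coded_batches, start=1):
        new_categories = set(batch) - seen
        seen |= set(batch)
        quiet = 0 if new_categories else quiet + 1
        if quiet >= patience:
            return i
    return None  # saturation not yet reached; keep collecting

batches = [["cost", "access"], ["trust", "cost"], ["access"], ["cost", "trust"]]
print(saturation_point(batches))  # -> 4 (batches 3 and 4 added nothing new)
```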

For more information, see the links above, and
Onwuegbuzie, A., & Leech, N. L. (2007). A call for qualitative power analyses. Quality & Quantity, 41, 105-121. DOI 10.1007/s11135-005-1098-1

Some Sampling Methods Summarized

This is a general overview - please refer to other sources for details. The summary here is based on:

Onwuegbuzie, A., & Leech, N. L. (2007). A call for qualitative power analyses. Quality & Quantity, 41, 105-121. DOI 10.1007/s11135-005-1098-1
 
Random Sampling

• Simple Random Sampling - participants are selected so that every person in the population has the same chance of selection, and each selection is independent (the selection of one person doesn't affect the selection of others). "Random" means that each person in the population is assigned a number, and selection is based on a random table of numbers; this is different from the "conversational" use of the term. Note: requires an accurate list of the entire population - so if the population were "science teachers in the GTA," the researcher would have to have a list of ALL science teachers, including contact information, and draw from that.
• Stratified Random Sampling - similar to above, but the population is divided into homogeneous sub-populations (e.g., males and females) and the sub-populations are randomized and selected. Note: as above.
• Cluster Random Sampling - as above, but groups or clusters are randomly selected. Note: as above.
• Systematic Random Sampling - as above, but the researcher selects every kth member of the sampling frame, where k is the population size divided by the desired sample size. Note: as above.
• Multi-stage Random Sampling - the researcher samples in two or more stages, because either the population is relatively large or its members cannot easily be identified.
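To make the formal sense of "random" concrete, here is a minimal Python sketch of simple, systematic, and stratified selection. The population list and strata are invented for illustration; this is not code from Onwuegbuzie and Leech:

```python
import random

# Requires an accurate list of the ENTIRE population (the sampling frame)
population = [f"teacher_{i}" for i in range(1, 501)]
n = 20

# Simple random sampling: equal, independent chance for every member
simple = random.sample(population, n)

# Systematic random sampling: every kth member, k = population size / sample size
k = len(population) // n
start = random.randrange(k)          # random starting point within the first k
systematic = population[start::k]

# Stratified random sampling: randomize within homogeneous sub-populations
strata = {"elementary": population[:300], "secondary": population[300:]}
stratified = {name: random.sample(group, n // 2) for name, group in strata.items()}
```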

Non-Random Sampling

• Purposive Maximum Variation Sampling - a wide range of individuals, groups, or settings is purposively selected so that different and divergent perspectives are represented. Note: the researcher has to have expert knowledge of, and access to, the population.
• Purposive Homogeneous Sampling - individuals, groups, or settings are sampled because they all possess similar characteristics or attributes.
• Purposive Critical Case Sampling - individuals, groups, or settings are selected that bring the phenomenon of interest to the fore, such that the researcher can learn more about the phenomenon than would have been possible without these critical cases. Note: as above.
• Theory-based Sampling - individuals, groups, or settings are selected because they help the researcher to develop or expand a theory.
• Confirming and Disconfirming Case Sampling - often applied at the end of data collection, based on what the individual cases said. Note: occurs at the end of the research process, in combination with another sampling method.
• Snowball Sampling - participants who have already been selected for the study are asked to recruit other participants. Note: occurs during data collection.
• Extreme Case Sampling - an outlying case, or one with more extreme characteristics, is studied.
• Intensity Sampling - the researcher studies individuals, groups, or settings that experience the phenomenon intensely, but not extremely.
• Typical Case Sampling - the researcher consults several experts in the field of study to obtain a consensus on which example(s) are typical of the phenomenon. Note: requires access to "experts" for consensus.
• Politically Important Sampling - the researcher includes or excludes informants because they connect with a politically sensitive issue.
• Random Purposeful Sampling - the researcher chooses cases at random (see above for the formal definition of "random") from a sampling frame consisting of a purposefully selected sample: the researcher first obtains a list of individuals of interest using one of the 15 other methods of purposeful sampling, and then randomly selects cases from it.
• Stratified Purposeful Sampling - as above, but the selection is stratified (see above for the definition of stratified).
• Criterion Sampling - individuals, groups, or settings are selected that meet criteria central to the research.
• Opportunistic Sampling - the researcher capitalizes on opportunities during the data collection stage to select cases; cases could represent typical, negative, critical, or extreme cases.
• Mixed Purposeful Sampling - more than one sampling strategy is mixed (e.g., an extreme case sample and a critical case sample); results can be compared to triangulate data.
• Convenience Sampling - individuals or groups are selected because they happen to be available and willing to participate at the time.
• Quota Sampling - cases are selected based on specific characteristics and quotas. Note: a main limitation is that only those accessible at the time of selection have a chance of being selected.


How many? Sampling in qualitative research


Onwuegbuzie and Leech (2007) stress that though qualitative research typically relies on small samples, the sample size is important because it determines the extent to which the researcher can make generalizations. Sample sizes in qualitative research should be small enough that the researcher can extract thick, rich data, but also large enough that saturation (data saturation, theoretical saturation, and informational redundancy) is achieved (Lincoln & Guba, 1985; Onwuegbuzie & Leech, 2007).

Mason (2010) cites Guest, Bunce and Johnson's (2006) finding that only 7 sources provided guidelines for qualitative sample sizes. They are:

• Morse, J.M. (1994). Designing funded qualitative research. In N.K. Denzin & Y.S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 220-235). Thousand Oaks, CA: Sage. Methodology: ethnography and ethnoscience. Sample size: 30-50 interviews.
• Bernard, H.R. (2000). Social research methods. Thousand Oaks, CA: Sage. Methodology: ethnography and ethnoscience. Sample size: 30-60 interviews.
• Creswell, J. (1998). Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, CA: Sage. Methodology: grounded theory. Sample size: 20-30 interviews.
• Morse (1994). Methodology: grounded theory. Sample size: 30-50 interviews.
• Creswell (1998). Methodology: phenomenology. Sample size: 5-25 interviews.
• Morse (1994). Methodology: phenomenology. Sample size: at least 6 interviews.
• Bertaux, D. (1981). From the life-history approach to the transformation of sociological practice. In D. Bertaux (Ed.), Biography and society: The life history approach in the social sciences (pp. 29-45). London: Sage. Methodology: all qualitative research. Sample size: at least 15 interviews.

Sources cited:
Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18(1), 59-82.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.

Mason, M. (2010). Sample size and saturation in PhD studies using qualitative interviews. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 11(3), Art. 8. http://nbn-resolving.de/urn:nbn:de:0114-fqs100387

Onwuegbuzie, A., & Leech, N. L. (2007). Sampling designs in qualitative research: Making the sampling process more public. The Qualitative Report, 12(2), 238-254. Retrieved from http://www.nova.edu/ssss/QR/QR12-2/onwuegbuzie1.pdf

Monday, May 27, 2013

Critical Discourse Analysis

Discourse is “a practice not just of representing the world, but signifying the world, constituting and constructing the world in meaning” (Fairclough, 1992, p. 64). As such, critical discourse analysis (CDA) allows for the development of an account of the role of language, language use, and discourse in the (re)production of dominance and inequity (van Dijk, 1993).

Data to which CDA is applied can vary - any textual source (e.g., books, policy, curriculum, etc.) or discursive form (e.g., speeches, debates, transcripts, narratives). CDA dictates what you look for, and how you look for it - and it must focus on issues of power (that's what the "critical" part of CDA addresses).

Readings to understand the method:

Fairclough, N. (1992). Discourse and Social Change. Cambridge: Polity Press.

van Dijk, T.A. (1993). Principles of critical discourse analysis. Discourse & Society, 4(2), 249-283.

Full text of Gee, J.P. (2011). How to do Discourse Analysis: A Toolkit. New York: Routledge.

A very handy, 20-step summary for doing critical discourse analysis based on Hill (2012).

Examples of its application in research:

Pinto, L.E. & Coulson, E. (2012). Social justice and the gender politics of financial literacy education. Canadian Journal of the Association for Curriculum Studies, 9(2), 54-85.

Bührmann, A.D. (2005). The emerging of the entrepreneurial self and its current hegemony. Some basic reflections on how to analyze the formation and transformation of modern forms of subjectivity. Forum: Qualitative Social Research, 6(1). Retrieved from: http://www.qualitative-research.net/index.php/fqs/article/view/518/1122

Additional notes:

Bührmann's (2005, see above for citation) study is an interesting CDA using a corpus of policy documents. She discusses "variegated and interdependent levels of investigation," namely:
  • the object or area of knowledge 
  • the enunciative modalities
  • the construction of concepts
  • the strategic choice. 
She also distinguishes between

  • power relations
  • authority of authorization
  • technologies of power
  • strategies of power. 
These can be useful systems of organization for other researchers.

Narrative Policy Analysis

Analysis rooted in instrumental reason cannot accurately capture the subjective nature of political reality (Stone 2002; van Eeten 2007). Rather, application of narrative policy analysis (NPA) allows for substantive qualitative analysis of the dynamics at play on the issue studied. The efficacy of NPA has been empirically confirmed as a tool to make sense of and analyze public policy production (Boswell, Geddes and Scholten 2011; Bridgman and Barry 2002; McBeth et al. 2007; Petković 2008; Shanahan et al. 2008).

Policy narratives capture the stories behind the "wicked problems" addressed by public policy, and the proposed solutions to those problems (McBeth et al. 2007; Rittel and Webber 1973; Schon and Rein 1994). Distinct from discourses (which refer to a wider set of values), policy narratives depict certain, often idealized interpretations of problems and solutions as stories, rather than realities (Fischer 2003). When successful, they become influential in legitimizing policy action. At the micro level of analysis, policy narratives affect individual attitudes, and have been shown to affect aggregate public opinion (Jones and McBeth 2010), especially when the media act as a conduit to communicate them (Shanahan et al. 2008). Policy narratives can thus be strategic when constructed in conjunction with political manoeuvring, pointing to power dynamics as they operate in the public sphere (Hampton 2009; Stone 2002) and legitimizing policy decisions.

Researchers have applied NPA qualitatively and quantitatively depending on the researcher's epistemological orientation.

Examples of its application include:
Boswell, C., A. Geddes and P. Scholten. 2011. The role of narratives in migration policy-making: A research framework. The British Journal of Politics and International Relations 13 no. 1: 1-11.

Bridgman, T. and D. Barry. 2002. Regulation is evil: An application of narrative policy analysis to regulatory debate in New Zealand. Policy Sciences 35: 141-161.

Hampton, G. 2009. Narrative policy analysis and the integration of public involvement in decision making. Policy Sciences 42: 227–242.

McBeth, M.K., E.A. Shanahan, R.J. Arnell and P.L. Hathaway. 2007. The intersection of narrative policy analysis and policy change theory. Policy Studies Journal, 35, no. 1: 87-106.

Pinto, L.E. (2013). When politics trump evidence: Financial literacy education narratives following the global financial crisis. Journal of Education Policy, 28(1), 95-120.

Additional reading to understand NPA, as well as analytic techniques to support it, includes the following (though it's a good idea to look at how NPA was applied in the examples above first!):

Fischer, F. 2003. Reframing public policy: Discursive politics and deliberative practices. Oxford: Oxford University Press.

Mello, R.A. 2002. Collocation analysis: A method for conceptualizing and understanding narrative data. Qualitative Research 2, no. 2: 231-243.

Stone, D. 2002. Policy Paradox: The Art of Political Decision Making, 3rd ed. New York: W. W. Norton.