Reading Between the Lines: Understanding the role of latent content in the analysis of online asynchronous discussions
This article was my trigger for taking a content-analysis approach to the learning discussions surrounding goals in 43 Things, but as of right now I'm not planning to study latent content. As interesting as it would be to survey participants about their motivations, impressions and experiences in using 43 Things, that may be outside the scope of the project. I'm thinking that studying the manifest content (the text, photos and links posted to the site) will be more than enough to chew on. That said, the design of this case-study research seems really solid. A few chunks I'd like to save for later:
"This distinction between manifest and latent content was highlighted in a general context of content analysis prior to the existence of online discussions. Berelson (1952) argued that content analysis should be limited to analysis of manifest content. Consistent with this perspective, he described content analysis as "a research technique for the objective, systematic, and quantitative description of the manifest content of communication" (p. 18). Content analysis proceeds in terms of "what-is-said", and not in terms of "why-the-content-is-like-that (e.g., 'motives') or how-people-react (e.g. 'appeals' or 'responses')" (p. 16). Hair, Anderson, Tatham, Ronald and Black (1995) argue, like Berelson, that content analysis should focus only on manifest content."These folks seem to think that content analysis on the manifest content alone is the way to go, which is encouraging. What I found a little disheartening was the rigour required to do the actual analysis, with four people each reading all of the content in the study and coding each unit using a classification instrument developed by Dr. Murphy:
"The transcripts of the discussion were grouped by a participant and coded by two independent coders against the nineteen indicators of behavior associated with PFR in the instrument using the paragraph as the unit of analysis. The transcripts were also coded a third time jointly by the two coders and the creator of the instrument and principal investigator. This third coding is used in this study to report aggregate results of engagement in PFR in the online discussion. Cohen's Kappa was used to calculate interrater reliability."And here I thought I was going to be doing some nice artsty-fartsy qualitative work...this looks more like hardcore statistics work, and who else (besides me) is going to go through thousands of posts on 43 Things to do this kind of coding? Ugh.
Apparently there are lots of issues surrounding the reliability of the coding done in content analysis, many of which are covered in two articles I only skimmed tonight: Identifying Sources of Difference in Reliability in Content Analysis of Online Asynchronous Discussions and A comparison of consensus, consistency, and measurement approaches to estimating interrater reliability.