Reading Between the Lines: Understanding the role of latent content in the analysis of online asynchronous discussions
This was my trigger for taking a content-analysis approach to the learning discussions surrounding goals in 43 Things, but as of right now, I'm not planning to study latent content. As interesting as it would be to survey participants about their motivations, impressions and experiences in using 43 Things, this may be outside of the scope of the project. I'm thinking that studying the manifest content (text, photos and links posted to the site) will be more than enough to chew on. That said, the design of this case-study research seems really solid. A few chunks I'd like to save for later:
"This distinction between manifest and latent content was highlighted in a general context of content analysis prior to the existence of online discussions. Berelson (1952) argued that content analysis should be limited to analysis of manifest content. Consistent with this perspective, he described content analysis as "a research technique for the objective, systematic, and quantitative description of the manifest content of communication" (p. 18). Content analysis proceeds in terms of "what-is-said", and not in terms of "why-the-content-is-like-that (e.g., 'motives') or how-people-react (e.g. 'appeals' or 'responses')" (p. 16). Hair, Anderson, Tatham and Black (1995) argue, like Berelson, that content analysis should focus only on manifest content."

These folks seem to think that content analysis on the manifest content alone is the way to go, which is encouraging. What I found a little disheartening was the rigour required to do the actual analysis, with three people reading all of the content in the study and coding each unit using a classification instrument developed by Dr. Murphy:
"The transcripts of the discussion were grouped by a participant and coded by two independent coders against the nineteen indicators of behavior associated with PFR in the instrument using the paragraph as the unit of analysis. The transcripts were also coded a third time jointly by the two coders and the creator of the instrument and principal investigator. This third coding is used in this study to report aggregate results of engagement in PFR in the online discussion. Cohen's Kappa was used to calculate interrater reliability."

And here I thought I was going to be doing some nice artsy-fartsy qualitative work...this looks more like hardcore statistics work, and who else (besides me) is going to go through thousands of posts on 43 Things to do this kind of coding? Ugh.
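For the curious, the Cohen's Kappa statistic mentioned in the quote is straightforward to compute: it's the observed agreement between two coders, corrected for the agreement you'd expect by chance from each coder's label frequencies. Here's a minimal sketch; the category labels and codings below are entirely made up for illustration, not data from the study.

```python
# A minimal sketch of Cohen's Kappa for two coders.
# The labels and example codings are hypothetical, not from Murphy's study.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Interrater agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of units both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies.
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of eight paragraphs against two indicator labels.
a = ["social", "cognitive", "social", "social",
     "cognitive", "social", "cognitive", "social"]
b = ["social", "cognitive", "cognitive", "social",
     "cognitive", "social", "cognitive", "cognitive"]
print(round(cohens_kappa(a, b), 3))  # prints 0.529
```

A kappa of 1.0 means perfect agreement; values above roughly 0.6 are usually taken as substantial. The point of the chance correction is that two coders who label everything with the most common category would get a high raw agreement but a kappa near zero.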
Apparently there are lots of issues surrounding the reliability of the coding done in content analysis, many of which are covered in these articles that I only skimmed tonight: Identifying Sources of Difference in Reliability in Content Analysis of Online Asynchronous Discussions and A comparison of consensus, consistency, and measurement approaches to estimating interrater reliability.
4 comments:
Your blog has a metacognitive appeal. You're using social software to articulate and advance your thinking about your study of using social software for learning.
Content analysis is one approach you could take to the inquiry. The collaboration model that you referenced today could be mapped onto the interactions in 43 things.
Beware, however, of the epistemological assumptions behind use of the model in this context. Rather than looking at 43 things and asking: What do I see there? What's happening? What are people doing? you end up asking: What evidence do I find in the transcript of 43 things of collaboration as defined in Murphy's (2002) model?
A different starting point (more inductive) would be to focus on 43 things and ask questions like: What does this tell me about learning? What is at play here? For example, are people actually responding to each other, or are they more so articulating individual perspectives? Is there some evidence of construction of knowledge or of deepened understandings?
Something is happening at 43 things. But I'm not sure what it is. The research literature on "informal learning" may provide you with more direction.
Keep on bloggin...
Thanks so much for this direction, Dr. Murphy. I didn't really set out to make any point about using my blog to develop the thesis...it's just how I store and summarize my thoughts. But I'm really glad that you've discovered this space and offered your insight -- my space online is your space!
I understand what you're saying about the limitations of using the model -- it's probably too early to lock myself in on that level, and even the content analysis approach itself isn't set in stone. I'm much more interested in the types of questions you're introducing here.
A colleague responded to my thesis post on 43Things with this comment: "Never mind passively analysing content. You’re posting here in 43things and setting your own goals so that makes you a participant researcher." I suppose I could take a more active approach to this investigation, but it somehow seems to muddle things up. I've certainly found direction and learned things from my participation in 43 Things, but I don't really want my participation to be the focus. Or should it be? Hmmm...
You mentioned:
"I didn't really set out to make any point about using my blog to develop the thesis...it's just how I store and summarize my thoughts."
Do you think maybe that what you are doing here is similar in effect to what's happening at 43 things.com?
Is your implicit purpose for using social software to make your reflections about learning more explicit? Is that what people are doing at 43 things?
What does blogging about your thesis do?
What do you get out of it? (You would not be doing it if it did not serve some purpose.)
Does it help clarify your thinking by making it public?
I think that if you can begin to answer those questions maybe we can begin to understand the role that social software (and social networks) play in learning.
"Do you think maybe that what you are doing here is similar in effect to what's happening at 43 things?"
The motivation may be different, but there are some similarities. I think the main difference is that 43 Things is an explicit social network -- you post your goals there with the specific expectation that it will connect you to a wide range of people you don't know who are either pursuing the same goal or have already completed it. It's very structured in its purpose from that perspective.
I post here primarily as a way to organize my thoughts, without the expectation of outside participation. Any outside comment or feedback is a bonus, and it usually comes from someone already in my network of shared interests. Occasionally colleagues have left me very thought-provoking comments that changed my view on something, or sent me into a new avenue of research that I hadn't considered, or even offered a simple book recommendation that turns out to be very helpful...these same types of experiences with feedback make 43 Things effective too.
One common motivation may hinge on a kind of accountability or discipline that emerges from the shared/public space. There's something about putting your goals in a public forum that makes it more likely that you'll actually follow them up, particularly if others share a goal/interest or offer their support. I'm thinking of the analogy of the workout partner -- having someone else knowing what your intentions are helps get you over those humps when you don't feel like pursuing a difficult goal (whether it's losing weight, learning software, or writing a thesis).
One more similarity focuses on the under-reported and essential part of blogging -- reading. You see the artifact of my blogging -- the words on the page -- but can't see the hours I spend skimming, reading, printing and commenting elsewhere. Many of the online writers I read regularly only post occasionally on my specific areas of interest, but their other topics and stories usually interest me on some level as well. Over time, this forms a connection to the people I'm reading. In 43 Things, that connection may not go as deep because it doesn't look like many people write there regularly (or on an ongoing basis), but you do get the benefit of seeing what other goals they're pursuing (or have completed) aside from the ones you share with them. Maybe it's a bit like the benefit of browsing in a library, when you find something good that you weren't really looking for...but this is more personal.
"Does it help clarify your thinking by making it public?"
Yes, and the idea of an imagined audience helps raise my own expectation of quality. It can act as a sort of bullshit meter for me as well -- when I know it's going to be out there (and archived) for all to see, I'd better believe what I'm saying. That's not to say that I want to self-censor or only publish polished thoughts...in fact I love seeing progression in my learning over time...but there is likely a filtering effect that enforces rigour rather than stifling creativity.
There also seems to be great learning value in articulating things in writing that you think you understand. Sometimes halfway through writing, I realize I have no idea what I'm talking about, which forces me back into the reading -- I'm not sure how to conceptualize that process, but I think it's significant.
Post a Comment