Attached is my final report on my qualitative case study of the Twitter sub-network composed of reciprocal following relationships with user edtechtalk. Feel free to add your comments ... I prefer nice comments.
Stumbled on some interesting resources and articles regarding qualitative data analysis (QDA):
I purchased Transana and downloaded the trial version of the commercial ATLAS.ti QDA package. Beyond the Twitter project, I have a back-channel project where I am trying to sync up audio / video / text chat during analysis, and I think Transana will be helpful for that purpose. Otherwise, good old Excel seems to work quite nicely :)
My data collection is done and I have a ton to read and re-read. I am clearly at the "so what?" stage and still struggling with the impact of where (and on whom) I focused during the data collection stage. That said, I do think I captured data to help me answer my primary research questions. Therefore, it may be more a case of gaining knowledge during the process that would re-focus my original research questions. For example, what about those who DON'T post anymore or very often? I focused on the sample of posters during my 5 hours of observation, and I feel that by picking ten half-hour slots at various times over a 10-day period I did a fair job of getting "off work" and "on work" commentary, but I made no effort (purposely) to reach out to those who don't post. Which makes me wonder if there is something "different" in the motivations of those who post versus those who don't (self-promotion, familiarity with online connections, privacy concerns, etc.). Clearly questions for another study for another day, but it plays into my "so what?" question. Is my focus sufficiently important?
I spent the week at AECT in Louisville, KY. The funny thing is the conference is attended by those who are paid to be edu-geeks, but I had about the most powered down week in the past 3 years. I had the awesome opportunity to meet f2f with 12 or so of my fellow ODU PhD students, as well as all of the professors in the instructional design and technology program. However, we spent very little time talking tech and just ... talked (oh, and ate and drank ... and laughed ... a lot).
By now, I am very aware of the various levels of friends one makes and keeps (and loses) in the online world ... and used to the "first meeting" of "long time online friends". However, I am always amazed at the incredibly short time it takes for "online friends" to transition to "f2f friends". Well before we got to KY, I knew how many kids everyone has, where they work, and what they looked like (enough to grab most in bear hugs as I first saw them in the conference hall). As we talk about a lot on ETW, there is nearly complete transparency in long time online relationships. While one can TRY to create a different online persona, the "real" person inevitably shines through to the point that there are usually no surprises during the first f2f meetings. However, my school friends for some reason seem to perceive me as a bit of a teacher's pet (suck up?). I have NO idea where they get such a perception :)
Just waiting now to hear if I have HSR approval to send out the interview. I tossed up the interview questions on SurveyMonkey last night, and it should take participants around 15 minutes to answer them. I made each question a "required" response, but all they have to do is put a character in each box to flip through. Past experience has shown me the need to make questions required, as otherwise I could end up with nothing for the effort. I am also allowing participants to return to the survey, but I will eventually cut it off when it is time to analyze. If they click on the link, they go right back to where they left off.
I used an online random number generator and linked each number to the numbered list of user names (sorted alphabetically). Probably overkill, but I was tempted to throw out the surveys to those names I recognized ... bad researcher! I'm sending out 10 interview requests at first in hopes of getting 3 back, and I'll continue sending out invites until 3 respond. Who knows, maybe I will end up sending out to all 499 :)
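For what it's worth, the random draw could be sketched in a few lines of Python. This is just an illustration of the approach, not what I actually ran (I used an online generator against my real alphabetized list); the user names below are invented placeholders.

```python
import random

# Hypothetical alphabetized list standing in for the 499 user names.
usernames = sorted("user%03d" % i for i in range(1, 500))

random.seed(42)  # fixed seed so the draw can be reproduced

# Draw the first batch of 10 invitees without replacement, mirroring the
# numbered-list + random-number-generator approach from the post.
batch = random.sample(usernames, 10)
print(batch)
```

Sampling without replacement guarantees no one gets a duplicate invite; later batches would simply exclude anyone already drawn.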
I concluded my Twitter observations last night and compiled all of my notes into one file. I am too tempted to quantitatively review my data ... so I am going to get it out of my system here ...
I have 1,093 individual tweets to review, which I dumped into Excel to help me ID posts with @, which I am using as a guide to when the tweets are directed at a person versus broadcast messages. Also, by sorting out those with # tags or http:// I can ID topics. Over the ten half-hour observations, the tweets came from 499 unique users. This is about 15% of the users with reciprocal following relationships with edtechtalk ... seems kind of high given it is only over a 5-hour time period, right? Also, one user tweeted 16 times (while the mean was 2 ... sorry, I couldn't help myself).
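The Excel-style triage could be sketched in Python too. A minimal sketch, assuming simple substring rules for the three markers I'm sorting on (the sample tweets are invented placeholders, and real tweet classification would be messier than this):

```python
# Flag tweets that are directed at someone (@), topical (#hashtag),
# or link-sharing (http://); everything else counts as a broadcast.
tweets = [
    "@edtechtalk great show tonight!",
    "Reading about qualitative coding #QDA",
    "Catch the live stream at http://edtechtalk.com",
    "Just grading papers all evening",
]

def classify(tweet):
    """Return a rough category for a single tweet."""
    if tweet.startswith("@") or " @" in tweet:
        return "directed"
    if "#" in tweet:
        return "topic"
    if "http://" in tweet or "https://" in tweet:
        return "link"
    return "broadcast"

counts = {}
for t in tweets:
    category = classify(t)
    counts[category] = counts.get(category, 0) + 1
print(counts)  # one tweet lands in each bucket here
```

Note the check order matters: a tweet that is both directed and contains a link gets counted as "directed", which matches how I'm treating @ as the primary sort key.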
Interestingly, I didn't see tweets from some of my closest online buds, which makes me wonder about the impact of the snapshots in time on my perceptions / observations. In other words, is my view / perspective from these observations different than it would be if I had made them at a different time?
Here is an interesting take on the "macro" Twitter world (taken from the Twitter API for all of Twitter) that may help me compare / contrast what I see in my "micro" world (for example, where folks post from, average number of followers, tweets, etc.). http://www.techcrunch.com/
Going through the Twitter data collection I see that "webinar" means "commercial" ... click the URL and you are greeted with cheesy graphics of earnest looking people in suits on the phone, looking at a computer, smiling around a conference table with words like "strategic" / "newest" / "monitor progress" / "agenda" / "goal alignment". In contrast, teachers and other edtechers just say something like "Catch the live stream of blah, blah" or "So and So is presenting over at ..." Just a wee example of the culture and language embedded in the network.
I thought I "knew" the edtech network before I started this Twitter project ... or at least I felt confident explaining it to others at conferences and on EdTechWeekly. However, 7 days into the data mining of the 3,500 or so reciprocal following / followed by relationships with Twitter user edtechtalk, I realize that I have purposefully (but largely subconsciously) filtered my understanding based on what I thought I knew about the network. In other words, I followed the activities of those I knew about, framed my understanding based on those I knew, and rarely went outside a small subnetwork within the larger edtech network.
My professor (and principal research investigator) routed my Twitter proposal to Human Subjects in the College of Education today. I also have two of my three research protocols stitched together, so things are moving forward despite (or maybe because of) comps. Kind of like cleaning during final exams ... other work becomes a happy distraction at times :)
I began poking around My Sample® to try to think about ways to slice and dice what I am seeing. I plan to start my formal observations on Saturday, but I did a sneak peek. Some students in class suggested that maybe I am biting off too much with the thousands of potential tweeters, but from what I see that really won't be the case, especially if I stick to my plan of observing from different spots during the day. Time will tell ...
I just received my first of 3 essay questions for my comps. I hate to wish away weeks of my life, but I can't wait for Thanksgiving :)
So, I'm getting everything set up to begin an observation of the activity on the ETT Twitter account. However, when I checked in on the account earlier this month, I saw that ETT was following about 1,500 out of the 4,300 or so followers. Assuming that no one had taken the time to troll through the followers to follow those with a like interest in edtech, I went seeking a tool to more quickly breeze through profiles and click "follow" for those where there is an edtech "match". This process is still quite painful on twitter.com, so I was happy to stumble upon refollow.com, a site that allows you to filter your follow and followed-by lists based on key words and other criteria, such as those who have posted in the last 1/15/30/90 days or those who lock / unlock their tweets, etc. I have now brought the following list to 3,100, based primarily on reciprocated following relationships with those who chose to follow ETT ... which is the pool of folks I wanted to target. I filtered out those who lock their tweets, as I can't use them in the study without getting informed consent from each one (an interesting group to study, but beyond the scope of what I can do in a short time frame). Also, I targeted those who have posted in the past 90 days, as I am not going to do anything with those who don't post (such as try to understand why they don't post ... another question for another day).
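The two filters amount to a simple predicate over each account: public tweets AND posted within the window. A minimal sketch under those assumptions (the follower records and dates below are invented; refollow.com did the real work):

```python
from datetime import date, timedelta

# Hypothetical follower records; the fields mirror the refollow.com
# filters: whether tweets are locked/protected and the last post date.
followers = [
    {"name": "alice", "protected": False, "last_post": date(2009, 10, 1)},
    {"name": "bob",   "protected": True,  "last_post": date(2009, 10, 5)},
    {"name": "carol", "protected": False, "last_post": date(2008, 1, 1)},
]

def eligible(user, today=date(2009, 10, 15), window_days=90):
    """Keep only public accounts that have posted within the window."""
    recent = (today - user["last_post"]) <= timedelta(days=window_days)
    return (not user["protected"]) and recent

pool = [u["name"] for u in followers if eligible(u)]
print(pool)  # only 'alice' is both public and recently active
```

"bob" is dropped for locked tweets (the informed-consent problem) and "carol" for inactivity, which is exactly the trimming that got the list down to the target pool.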
A recent free report published in Faculty Focus summarizes a survey of Twitter usage and trends among higher ed faculty. As noted in the summary of the report, about 20% are familiar or very familiar with Twitter, and of those who use it, 7% use it in the classroom. It is this group of teachers that scares me a bit and begs a familiar question that has been nagging at me for some time. Should Twitter (or any social-networking tool) be forced onto learners to facilitate their social interactions? Just because the teacher finds value for their own personal and professional development, what evidence do we have that a similar benefit will accrue to their students forced into the social network? I also wonder what we can generalize about the habits and conversations of a VOLUNTARY network of Twitter users. Can their behavior shed light on the behavior of those who have the network forced upon them? My gut hunch is "nope" ...
Nardi, Schiano, and Gumbrecht (2004) summarize an ethnographic study of blogging considering motivations, social interactivity, and relationships between blogger and audience. From prior studies on blogging, blogs can be roughly categorized into three "types": personal journals / online diaries (the majority), "filters" that provide commentary and information from other websites, and knowledge logs. However, from this study the authors suggest that the blogs were less like personal diaries and more like radio broadcasts with limited interactivity. The bloggers were looking for readers, but with interaction that the bloggers controlled.
Nardi et al. followed the blogs of 23 bloggers and interviewed the "informants" with a fixed set of questions. All were in either California or New York, well-educated, and either employed or in school. As in prior studies, they trolled the Stanford University portal looking for the word "blog" and snowballed the sample by asking for friend-of-a-friend referrals.
Reciprocity ... certainly a concept that comes up frequently in discussions of networks. It came up again in a recent article by Huberman, Romero, and Wu (2008) regarding Twitter, in which the authors found that 90 percent of a user's friends reciprocate attention by being friends of the user, which they suggest plays a role in defining the hidden networks within Twitter. Interestingly, they also found that the pattern of reciprocity is consistent regardless of the number of friends. So, what does that degree of reciprocity mean with regard to the hidden network? Does it signal a hidden network of strong-tie relationships or point to the existence of a slew of weak-tie bridges? Probably a good bit of both. Maybe the answer to that is in the number of direct messages and @replies between those with reciprocal relationships. Looking at my own Twitter account, I don't know a good number of the 600 or so followers, but I generally follow them back if their profiles suggest a common interest. Maybe these would make good sub-questions to ask during an interview ... What prompts you to follow those who follow you? Describe the relationship you perceive with those who follow you and whom you follow back.
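Mechanically, the reciprocity figure Huberman et al. report is just the overlap between who a user follows and who follows them back, taken as a fraction of the follow list. A toy sketch (the names and numbers are invented for illustration):

```python
# Reciprocity as a set intersection between the accounts a user
# follows ("friends") and the accounts following that user back.
friends   = {"amy", "ben", "cai", "dee", "eli"}
followers = {"amy", "ben", "cai", "dee", "zoe"}

reciprocal = friends & followers          # follow each other both ways
reciprocity = len(reciprocal) / float(len(friends))
print(reciprocity)  # 4 of 5 friends follow back -> 0.8
```

In their data that ratio sits around 0.9 across users regardless of friend count; in mine, the interesting follow-up would be whether the reciprocal set also carries most of the @replies and direct messages.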
TechCrunch had an interesting article from a few weeks back about why teens don't (or do) tweet. As usual, it is important to try to find the story behind the numbers. The "story" (as summarized by TechCrunch) tells us that some studies suggest only 11% of Twitter users are teens, which seems like a tiny number given how much we hear about the Internet usage of "digital natives" vs "digital immigrants" (barf). However, 11% is higher than the 9% of Facebook users who are teens, and as everyone knows ... teens love Facebook :) Also, as a percentage of their age group, teens do tweet more than other age groups.
I'm trying to get my head around "case study research". While I have read countless case studies, I tend not to put them in the category of research. Too often they read like "what I did last summer" reports where (again) I question the value to anyone but the souls associated directly with the situation at hand. However, I'll plow forward knowing this is all about giving it the old college try this semester :)
So, the readings describe cases as bounded systems, and apparently some view case study research less as a methodology than as the subject of the study ... however, for the purpose of class we are considering a case study the subject, method, and means to report ... clear as mud. Right away the issue of coming up with representative case(s) comes up. I can get my head around a "unique" situation case better than I can a representative case. Again, the generalizability bug keeps biting, but I must learn not to scratch that itch (for now).
I've had just enough exposure to "research" to be dangerous. I have read a ba-zillion journal articles and have taken all of the required research courses in the program ... with the exception of this last one involving qualitative research. At the outset of this class, I have read several introductory chapters in the required textbook and reviewed my proposed qualitative research topic with my professor. However, at this very early stage in the semester, I am struggling with what is gained by qualitative research. For the setting being examined, probably (potentially) a lot. However, what about for everyone else? In my study (a case study, I guess), I am planning to observe, interview, and analyze the profiles of a sub-set of Twitter users (those 1,000 or so folks followed by user "edtechtalk"). While I think it will be very interesting to see what is going on within that loosely clustered network of Twitteristas who care a lot about education and technology, I wonder about the boundaries of what will be gained by the analysis. I keep thinking, "... but what if I turn my head slightly and analyze a different sub-set of Twitter users?" Clearly, I'm hitting on the "generalizability" issue, which for me right now is a pragmatic issue. What is the relevance if what you are looking at is only relevant to what you are looking at? Interesting to see how my perceptions will (or won't) change over the course of this process ...
As an early adopter of Twitter, I was also early to depart ... at least as a frequent user. While I check in a few times a week to lurk within my "hidden network" (more on that in future posts), I now only make the occasional tweet ... usually when I am out and about at some event of interest (to me). However, I have always had the itch to look into Twitter, specifically the nature of the communication within a sub-network, such as the loosely joined cluster of folks interested in edtech matters. I now have the opportunity this semester in a Qualitative Research course to do just that. Therefore, I will be dusting off my trusty blog to post my reflections on the process here ... largely due to a journaling requirement in the course, but also to openly (not a requirement) share the research processes I will be undertaking.