Is There a Difference Between xMOOCs and cMOOCs?

Recently I have been reading a few different takes on the difference between cMOOCs and xMOOCs – or, more specifically, arguments that there is no real difference between the two and that the classifications do more harm to the conversation than good. I would respectfully disagree: the differences are real, and they do matter. In my opinion, ignoring them would cause more damage than the labels do.

Of course, this has been explained in much better terms by others before – but this is just my attempt to try a different framing mechanism.

A lot of the discussion centers on the fact that xMOOCs have social activities and cMOOCs have guided content. To me, that’s a non-issue. Social elements do not define a cMOOC, and a lack of social elements does not make an xMOOC. Instructor-led content does not define an xMOOC, and a lack of content does not define a cMOOC. That is like saying pizzas and burgers are the same because they both contain salt and can be ordered at a fast food restaurant. Sharing some characteristics does not mean that the characteristics they don’t share are unimportant.

I’m working on a content analysis research project that looks at what themes emerge when you analyze the syllabi of 30 MOOCs. The differences between cMOOCs and xMOOCs are quite noticeable. Everyone has slightly different terms for the concept of power, but whether you call it “who holds the power” or “who has autonomy,” or treat autonomy as one form of power, the seat of power is the real difference between xMOOCs and cMOOCs. Whether you frame it as active versus passive learning, instructivism versus connectivism, constructivism versus behaviourism, or student-centered versus instructor-centered, the basic question is: who is in the driver’s seat for each individual learner’s learning?

If the content is laid out for the learner (or “curated” by the instructor) and the learner must work through a certain set of modules, take certain tests, discuss certain topics, and so on, then the instructor (via the course design) is in control of the steering wheel for each learner. Learners may discuss and form groups and do all kinds of social things. They may form PLNs and use Twitter. That does not make the course connectivist. I have been in courses that had no instructor content, yet the social groups were so tightly controlled that we had no real input into the class. If a course is designed in a passive, instructivist, behaviourist, instructor-centered manner, it is still an xMOOC no matter how much social activity is tacked on.

On the flip side, if each learner is in the driver’s seat for their own learning, and you are creating a course that is active, connectivist, constructivist, student-centered, and so on – that is the heart of a cMOOC. You can create weeks’ worth of content and put it in there, but as long as it is optional, there for learners to use as they see fit, it is still a cMOOC.

So what that means is that courses like EDCMOOC, which claim to be neither xMOOC nor cMOOC, are actually xMOOCs that just don’t know it. There is nothing wrong with being an xMOOC. But why is it an xMOOC? Because the content is a “teacher-curated and -annotated selection of resources on weekly themes, including short films, open-access academic papers, media reports, and video resources” that “were the foundation for weekly activities, including discussion in the Coursera forums, blogging, tweeting, an image competition, commenting on digital artifacts created by EDCMOOC teaching assistants, and two Google Hangouts,” according to the paper on the course.

The instructors were still in the driver’s seat. Sure, they let students form their own groups. They let the students form networks. But they were still in the seat of power.

And to be honest, I don’t have a problem with that happening. Many learners (for better or worse) still want the instructor to be in the driver’s seat. But what about the students who wanted one thing and got another? Confusion about the power structure of a course can lead to frustration among learners. They may still end up happy with the course but be confused about what happened along the way. The EDCMOOC authors pointed out that “For every person who hated the peer assessment, someone else loved it.” Why is that? Were they expecting one thing and got another? Were they confused as to why they read all this curated content and then had another student assess their work? Learners who have to find their own content tend to feel more comfortable with peers assessing their work, but those who are handed curated content (technically, all content added to any course ever was curated) as the foundation for the activities will usually want the instructor to assess their work, since it was the instructor who told them to consume that content in the first place.

Of course, classifications in education are not about black & white, either/or boxes. Classifications like “xMOOC/cMOOC” are really generalized categories that coalesce around certain characteristics, and most people know that they are not hard and fast lines. One problem emerging in education is a misunderstanding of what educational classifications are and what they aren’t. MOOC designs that mix elements of xMOOCs and cMOOCs are not a sign that the classifications are wrong. They are a sign that we need to understand the underlying differences even better, or we risk confusing and polarizing the issue even further. More and more learners are discovering the difference between instructivism and connectivism (even if they don’t know those words), and they want to learn in their preferred paradigm.

Bridging Learners From Instructivism to Connectivism

One of the more interesting challenges of the Dual Layer MOOC project (at least from a design standpoint) is the learner autonomy goal. The instructors don’t want to force learners to be open (or closed, for that matter). If learners want to be completely guided by the instructor (instructivism), that option will be there. If learners want completely networked learning (connectivism), that option will be there too. Designing two layers based on those two ideas is fairly straightforward (as long as you do it well). If learners on the networked path want to dip into the guided path, that usually is not a problem, because that has always been part of being an autonomous, self-directed, networked learner: find some content, consume it as needed, and then go back to your network. For those on the guided path who want to transition into networked learning, however, the path is not as easy. Many may not even try it because they are used to being guided. You can blame the system, or learners not wanting to take risks, or many other factors and be correct, but the reality is that transitioning from guided objectives to self-directed competencies is a barrier for many learners.

One possible solution is to scaffold the learner from instructivism to connectivism. This goes back to the deconstructing-objectives idea I touched on earlier, but in this case you would guide learners through it. Remember, this is for learners who are used to being guided, so you would also have to guide them through the process of learning how to learn (or heutagogy, as some call it). Starting with a basic instructivist guided objective with conditions, behaviors, and criteria, you might have something like:

Given the EdX module resources (CN), the learner will analyze ethics in data analytics (B) by scoring at least 90% on the module quiz (CR).

But since that is what they are used to, you could stretch them a little by removing the criteria, getting them to start thinking for themselves:

Given the EdX module resources (CN), the learner will analyze ethics in data analytics (B) by __________________ (CR).

Learners would have to fill in the blank for themselves. What you have here is the beginnings of the idea behind the ds106 assignment bank, although not quite there yet. Once the learner has gotten this down a bit, you could then take it a step further by removing the condition:

Given _______________________ (CN), the learner will analyze ethics in data analytics (B) by __________________ (CR).

This is a lot closer to the ds106 assignment bank. And then you could even strip everything away from the behavior except the topic and move that to the condition:

Given the topic of ethics in data analytics (CN), the learner will ______________________ (B) by __________________ (CR).
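Pulling the four templates together, here is a minimal sketch of the whole progression as plain data (the field names and the little render helper are hypothetical, purely for illustration); each level simply marks which parts of the objective the learner now supplies for themselves:

```python
# Hypothetical sketch: the four scaffold levels as plain data.
# "condition" (CN), "behavior" (B), and "criteria" (CR) mirror the
# templates above; None means "the learner fills this in themselves."

SCAFFOLD_LEVELS = [
    {   # Level 1: fully guided (pure instructivism)
        "condition": "the EdX module resources",
        "behavior": "analyze ethics in data analytics",
        "criteria": "scoring at least 90% on the module quiz",
    },
    {   # Level 2: the learner supplies their own criteria
        "condition": "the EdX module resources",
        "behavior": "analyze ethics in data analytics",
        "criteria": None,
    },
    {   # Level 3: the learner also finds their own resources
        "condition": None,
        "behavior": "analyze ethics in data analytics",
        "criteria": None,
    },
    {   # Level 4: only the topic is given, close to writing a competency
        "condition": "the topic of ethics in data analytics",
        "behavior": None,
        "criteria": None,
    },
]

def render(objective):
    """Turn one level into the fill-in-the-blank sentence used above."""
    blank = "__________________"
    return (f"Given {objective['condition'] or blank} (CN), "
            f"the learner will {objective['behavior'] or blank} (B) "
            f"by {objective['criteria'] or blank} (CR).")

for level, objective in enumerate(SCAFFOLD_LEVELS, start=1):
    print(f"Level {level}: {render(objective)}")
```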

At this point, learners are practically writing their own competencies; they just need to make sure they create something applicable to their own situation and they are there. Along with this, you might also want to scaffold them into group work. For example, at the first level of scaffolding you might tell them to go to the group discussion board and get feedback on the criteria they are creating. At the next level, they could get into groups and swap their personal objectives with others to see if others can accomplish them. Finally, they are placed in groups with others that have similar objectives to find a common goal to work on. Hopefully they can then start working as autonomous learners within a connectivist environment for the final step.

However, there is the big issue of not forcing learners to take this path if they are not ready. There would be great value in creating a course that specifically teaches learners to move from instructivism to connectivism, but that would still be basically one path through the content. Even adding that path to the dual layer MOOC would essentially make it a single-pathway course if it were forced on everyone at a certain point. But learners that are used to instructivism need that path – that guidance – to start the process of stepping out. So the tricky part of the course design is to create a system that allows learners to stick with the layer they like, but also switch over as they like (and, by default, gives guided instructivist learners a pathway to switch over at any point they are ready). One possible solution is to lay out all possible steps each week in the weekly blast or announcement or blog post or whatever it may be. It could look something like this:

Welcome to Week 3 of Data Analytics! The topic for this week will be ethics in data analytics. For those of you on the networked path, you know what to do. Or maybe you don’t yet, but go to your groups and get working. Write your own competencies and work with others on one of the weekly problems in the Problem Depot. Or create your own problem. Those of you that need a new group to join, go to the Random Group-o-Mizer and select “new networked group”. For those of you on the guided path, your content is in the EdX course. For this week:

  • Given the EdX week 3 resources, you will analyze ethics in data analytics by scoring at least 90% on the module quiz.

For those of you on the guided path that are ready to dip your toe into the networked path, this is your challenge:

  • Given the EdX week 3 resources, you will analyze ethics in data analytics by ____________________ (?)
  • Create your own criteria for determining that you know the content (i.e. fill in the blank above).
  • Go to the Random Group-o-Mizer and select “dipping my toes in”.
  • Share your personalized objective with the group you are assigned to, and give feedback on the other group members’ criteria.

For those of you that have dipped your toe in and are ready to go deeper down the rabbit hole, this is your new challenge:

  • Given ________________ (?), you will analyze ethics in data analytics by ____________________ (?)
  • After creating your own criteria (second blank), find some kind of resources to help you learn what you need to learn (first blank).
  • Go to the Random Group-o-Mizer and select “Going deeper down the rabbit hole”.
  • In your assigned group, swap your personalized objectives with the others and see if you can accomplish each other’s objectives.

For those that have taken more control and are almost ready to dive fully into the networked path, this is your final challenge:

  • Given the topic of ethics in data analytics, you will ______________________ (?) by __________________ (?).
  • Figure out what you are going to do with the topic, how you are going to do that (first blank), and how you are going to prove you did it (second blank).
  • If you apply this objective to some situation in your life, you will pretty much be writing your own competencies like a pro.
  • Go to the Random Group-o-Mizer and select “My path to being a Jedi is almost complete”.
  • This should match you up with a small group of people with similar competencies. Your goal as a group is to work together to solve one of the problems in the Problem Depot based on shared competencies.

If you think you are good with the final challenge and want to go through with the full transformation to networked learning, go back to the first part of this weekly blast and jump into the networked learning path.

Of course, there would need to be more guidance in there for some of these steps, but hopefully this gives you an idea. The “Random Group-o-Mizer” would basically be a profile system that allows learners to enter some basic interests, select a level of participation, input their objectives or competencies, and then be grouped by an algorithm that matches them according to shared objectives/competencies. The “Problem Depot” is basically an assignment bank repurposed for problem-based learning. Learners could even create their own problems and submit them to the depot. The basic idea is that every week we give learners the steps to scaffold toward connectivism and let them move through the transformation at their own pace. Of course, it won’t be this straightforward or easy in real life, but the struggle is part of connectivism, right?
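To make that grouping idea a bit more concrete, here is a rough sketch of what a “Random Group-o-Mizer” could do under the hood. Everything here is hypothetical – the profile fields, the participation levels, the similarity measure – and a real system would need something smarter, but the core move is simple: group learners at the same participation level by the overlap in the keywords of their stated objectives:

```python
# Hypothetical sketch of a "Random Group-o-Mizer": group learners by
# participation level, then by overlap in the keywords of their objectives.
from itertools import combinations

learners = [
    {"name": "Ana",   "level": "dipping my toes in",
     "objective": "analyze ethics in data analytics with case studies"},
    {"name": "Ben",   "level": "dipping my toes in",
     "objective": "analyze ethics and privacy in data analytics"},
    {"name": "Carla", "level": "going deeper down the rabbit hole",
     "objective": "visualize bias in data analytics pipelines"},
    {"name": "Dev",   "level": "dipping my toes in",
     "objective": "write a blog post on privacy and ethics"},
]

def keywords(text):
    """Crude keyword extraction: lowercase words minus filler words."""
    stop = {"the", "a", "an", "in", "on", "with", "and", "of", "to"}
    return {word for word in text.lower().split() if word not in stop}

def similarity(a, b):
    """Jaccard overlap between two learners' objective keywords."""
    ka, kb = keywords(a["objective"]), keywords(b["objective"])
    return len(ka & kb) / len(ka | kb)

def group(learners, size=2):
    """Greedy grouping within each participation level by highest overlap."""
    by_level, groups = {}, []
    for learner in learners:
        by_level.setdefault(learner["level"], []).append(learner)
    for level, pool in by_level.items():
        while len(pool) >= size:
            # pick the most similar combination still left in the pool
            best = max(combinations(pool, size),
                       key=lambda combo: sum(similarity(a, b)
                                             for a, b in combinations(combo, 2)))
            groups.append((level, [member["name"] for member in best]))
            for member in best:
                pool.remove(member)
        if pool:  # leftovers form a smaller group rather than being dropped
            groups.append((level, [member["name"] for member in pool]))
    return groups

for level, members in group(learners):
    print(level, "->", members)
```

Again, just a sketch: the point is that group membership is driven by what learners say they want to work on, not by an instructor-built roster.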

Scaffolding an Entire University to Open Learning

A lot of what I have been blogging lately is just me struggling through various ideas surrounding this whole “Dual Layer MOOC” design. The term “Dual Layer” is probably a misleading descriptor anyways. “Multiple pathways” is better, but since that term already has specific designs attached to it, it’s hard to fight against that. “Multiple pathway” courses still tend to be “multiple siloed pathways,” in which five or ten or however many specific, predefined pathways are offered. That’s not really the goal the instructors have for this course.

The underlying goal is to create a course that emphasizes diversity, experience, and autonomy in learning, to borrow a description from Stephen Downes. The problem we are dealing with is that the entire university system is set up in an instructivist manner that expects all students to move through the same path in each course and to pass the course by doing exactly what they are told. Students are so used to this system that they are comfortable with it and start freaking out if they are forced to take an open course. To borrow a statement from George Siemens: “We can’t force students to be open.”

So the dual layer MOOC is not about blending cMOOCs and xMOOCs so much as creating a scaffold for students who are used to instructivist learning to dip their toes in and try out networked learning – if they want to. But there are also those who want connected, deconstructed learning from day one, so that option has to be viable from the beginning as well. If at any time we create narrow pathways that force students to scaffold from instructivism to connectivism, we leave diversity, experience, and autonomy behind. The door has to remain open, but the learner has to choose when to pass through it.

So this is not a case of the xMOOC wagging the cMOOC tail, or vice versa. If it looks that way, it’s just because I am failing to create adequate metaphors to explain what has been coming out of the design meetings. I still like the play dough metaphor best (we’re just throwing a bunch of play dough cans on the table, and learners can pick them up and use them as they like, in groups or individually, or even just leave the room and go get their own play dough) – but that makes for a lousy blog diagram :)

So, in a lot of ways, I just see this dual layer thing as a step in the process of scaffolding the university system from instructivism and teaching toward “sharing the process of thought and inference and discovery with those around you” (to quote Stephen Downes again). That sharing process is the main reason I started blogging so much about the dual layer MOOC – the design will change and may even go away entirely. I’m just sharing my process openly. And the feedback I have received has been awesome, so it has been a worthwhile process and will continue, time permitting.

Research Says: Is Online or Face-to-Face Better?

You know what they say about getting into an argument with an instructional designer over learning design? Oh… they don’t? Well, they should. Anyway… if they did say anything about it, they would say not to do it because instructional designers pretty much shoot holes in everything.

People argue all the time over whether online learning is better or worse than face-to-face. But you ask an instructional designer which is better? Well, neither, both, and… it kinda depends.

Confusing? Yeah, well, blame the research. Research is important. Research tells us a lot. Research raises a lot of good questions. But it seems like we, as an educational community, are misusing and oversimplifying the results of that research.

A lot of research is based on numbers. And those numbers might tell us that, say, there is a statistically significant difference between the number of learners that passed the test in the face-to-face version of a course and the number of those that passed in the online version. Or substitute “test” with whatever metric you are using to determine which is better. And so face-to-face is declared the winner and online is the loser that has to slink off and die because it *lost*!
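Just to show what that kind of claim actually computes, here is a minimal sketch with made-up pass counts (not data from any real study); it runs the simple two-proportion test that comparisons like this usually boil down to:

```python
# Hypothetical numbers, just to illustrate what a "statistically significant
# difference in pass rates" calculation looks like; not real study data.
from math import sqrt
from statistics import NormalDist

f2f_passed, f2f_total = 172, 200        # face-to-face section (made up)
online_passed, online_total = 150, 200  # online section (made up)

p1 = f2f_passed / f2f_total
p2 = online_passed / online_total
pooled = (f2f_passed + online_passed) / (f2f_total + online_total)

# Standard two-proportion z-test
se = sqrt(pooled * (1 - pooled) * (1 / f2f_total + 1 / online_total))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"F2F pass rate {p1:.0%}, online pass rate {p2:.0%}")
print(f"z = {z:.2f}, p = {p_value:.3f}")
# A small p-value lets the study declare a "winner", but it says nothing
# about the 150 online learners for whom the course clearly worked.
```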

The problem is – online learning obviously worked great for those students that passed – even if there were statistically significantly fewer of them (did I just butcher the English there?). Research is not a contest to show which option is the one right one. We are not in a giant game of Highlander: Education. There can be more than one right way. It can be online and blended and face-to-face. We are not waiting to see which one beheads all the others to become the clear champion of the universe.

So when the Department of Education came out and declared blended learning the best, that did not mean that online and face-to-face were horrible or ineffective. They just found a higher number for blended. That’s all. That doesn’t invalidate the other two. They are a national entity that has to look at what works for millions of students.

One way that we know that online learning is working is by learner testimonials. There are thousands and thousands of learners all over the nets saying how online learning worked for them. And guess what – some of them actually failed their courses! Wait – am I telling you that scores don’t matter? Well, of course they matter if you want to earn a piece of paper. But many learners don’t look at a passed test or course as a sign of “working.” Earning an “F” in a course could mean they don’t take tests well, or they had a death in the family during the semester, or they went off on a tangent and forgot to take the final because they were too busy learning informally.

Then there is the other end of the spectrum, where students get annoyed at classes and give them bad satisfaction ratings because they were required to do actual work and they thought they should get an A just for paying for the class.

So ultimately, if a student says an online course worked for them because it challenged them to think and learn, that is good evidence that it worked. Test scores and completion rates and satisfaction surveys might also tell us something, but typically those are ranking systems, not “winner takes all” cage matches.

But another huge problem – one that instructional designers would point out to you – is that even the best research studies cannot really tell you whether online or face-to-face is better. They can compare how the learners in one type of online learning design over a specific time period performed against another group of learners in one type of face-to-face learning design over that same period. There are so many different ways to design for learning online, so many different ways to design for face-to-face, so many ways that different instructors can affect their classes, so many ways the learner population can affect the mood of a class, and so on. Research gives us a snapshot of what is going on in a specific set of classes at a specific time. The goal should be to ponder what this means for our own situation and then adapt and experiment ourselves, not to declare “this works, this doesn’t” and move on.

So the instructional designer will tell you that, yes, we know a whole lot about what “works” in the macro sense of education, but in a lot of ways we also know very little about what “works.” We can tell you what generally works in online or face-to-face and what doesn’t… but it ends up being a long, vague list that you still have to take a stab at to see what does and doesn’t work for you specifically.

And the kicker is – despite all the research and facts I knew when I started as an instructional designer… I didn’t really get all of this until I started teaching online. Once you start teaching yourself, and trying to actually do what the research says… you begin to realize that it’s not so black and white. There are no champions of the universe, no best practices, no learning styles, no easy categories for everything to fit in. Oh, sure – you “know” that before you start teaching, but it’s kind of like how you “know” parenting is tough until you have a kid and see how tough (and wonderful) it can be for yourself. First-hand knowledge changes your perspective radically, and simplistic answers from research go out the window. The research itself (or at least the good research) doesn’t really ever give easy answers – people just misread it and think it does. Once you start teaching, you realize that you will use research to inform your practice rather than dictate it.

Some day soon I hope we move beyond this pointless rhetoric about whether online or face-to-face or blended learning is better or a good way to learn or whatever. All education is distributed over a distance anyways. Learners have declared that all of these can work for them. It’s better to start looking at what worked or didn’t work for the learners and go from there. That might call for some – gasp – qualitative research!

“So okay, Matt, stop with the whole ‘there is no spoon’ BS and tell me straight – does online learning work or not?” you might say. Online learning works – for certain students. What all of the research is really telling us is that what doesn’t work is forcing all students into one-size-fits-all learning designs. That leads me back to why I like working with the dual layer MOOC group: how can we offer students options to determine for themselves what works best? How can we create multiple paths that are truly multiple paths and not just “five different versions of the same silo”? How can we create learning designs that emphasize diversity, experience, and autonomy in learning, especially when so many students are used to instructivist learning?

The Value of True Openness

People sometimes ask me why I make a big deal about the difference between open and free – or even between “easy access” and free, for that matter. Well, I thought I would open up a bit about why it is such a big deal to me. It has to do with my bitterness toward Google over the whole Jaiku debacle.

You see, I remember when this really cool podcasting company called Odeo started discussing an idea they had for a new service that eventually became known as Twitter. Most people can look that up online. What you can’t quite find written anywhere is that a few days before Twitter went public on July 15, 2006, another microblogging company jumped the gun and launched first.

Jaiku was the cooler, easier-to-understand version of Twitter. In addition to your avatar, you could add these cool symbols (icons) to each post that were basically Wingdings, adding a dash of emotional cue to your short bursts. Comments on a jaiku were threaded. You could also use it as an RSS feed aggregator (your feeds showed up as jaikus). Jaiku had several other features that many of us liked more than Twitter’s, too. Time has erased those memories. But at one point, Jaiku gave Twitter a serious run for its money (although that article seriously gets the launch dates for both services wrong – Twitter was used internally until July 15, 2006).

Another cool feature was that Jaiku had channels – you could create a few of your own, and if you posted your jaiku to a channel it would only appear there. Man, I miss that feature when conference season comes around. And guess what Jaiku used to visually separate these channels from the main flow? A pound sign (#). Look familiar? Yep – Twitter users wanted that feature and didn’t get it, so they created the hashtag idea (and technically, this happened well before Chris Messina rallied the Twitter community around the idea in August 2007). Back in 2007, the competition between Jaiku and Twitter was intense – a common question at Ed Tech conferences was “do you jaiku or tweet?” The wrong answer – depending on the service the person asking was using – could earn you a disappointed “Ohhhhhh…..” (although tech-savvy people knew how to use both).

So, the hashtag phenomenon we have now? Started at Jaiku. Of course, the pound sign had been used for a long time before that – but the current hashtag as we know it started at Jaiku.

Then Google came along, bought Jaiku, and neglected it. Try as I might, I could not get my jaikus to export with any of the tools that claimed they could do it. I loved those early messages on Jaiku because they were unhindered by all the “rules” we are supposed to follow on social media today. All of them are just gone forever now.

That is the difference between free or easily accessible and open. Jaiku was free and easy to use, but it was not very open in that I couldn’t take my stuff and save it easily where I wanted it, or use it again anywhere. To be open means that I control my stuff, my words, my identity – including to the point that I can take it off the original site with ease. Without that feature, I have a hard time calling something open.
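Just to make that free-versus-open distinction concrete, here is a tiny, entirely hypothetical sketch (Jaiku never gave me anything like it, and the posts below are invented) of what being able to walk away with your own stuff looks like in practice:

```python
# Hypothetical sketch: what "open" means in practice, i.e. being able to pull
# your own posts out into a plain, portable format that you control.
import json

# Imagine these came from the service's export feature (Jaiku never had one
# that worked for me; these posts are invented for illustration).
my_posts = [
    {"posted": "2007-03-02", "channel": "#edtech",
     "text": "do you jaiku or tweet?"},
    {"posted": "2007-08-14", "channel": None,
     "text": "threaded comments are so much better than reply chaos"},
]

# Save them somewhere the service can never take away from you.
with open("my_posts_export.json", "w", encoding="utf-8") as f:
    json.dump(my_posts, f, indent=2, ensure_ascii=False)

print(f"Exported {len(my_posts)} posts, mine to keep, re-post, or delete.")
```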

That is why the new wave of open is so important. If your service is not open in this way, I would suggest using another (more accurate) term. Open should refer to power – not cost, not access, not certification. Because, you see, the thing is: if you get the power thing right, the cost, access, certification, and other issues will probably also follow suit.

Well, unless The Suits get in the way…. which is a whole other issue….