Who MOOCed My Cheese?

Behind-the-scenes conversations about the DALMOOC have turned to the kind of feedback we have been getting on the course. George Siemens shared some of the things he learned in the first week. His post also deals with some of the feedback we have received.

The hard part of dealing with feedback is that most of us end up with a skewed view of it after a while. Positive feedback is usually pretty general, and therefore easy to forget. Everything from “this looks great!” to “I am loving this!” to “I really like what you are doing here” serves as great feedback, but because it is so general and lacks specifics, it tends to be easily forgotten. Negative feedback tends to be specific and direct, which makes it a lot easier to remember. People will tell you exactly what they don’t like and a dozen reasons why they don’t like it.

Because of this skew, the negative feedback tends to stick in our minds more easily, and we also tend to get the impression that there is more negative than positive. This becomes a problem when we begin to make design decisions based on gut feelings rather than hard numbers. If you count up the positive and negative feedback, which one is higher? If you take a qualitative look at what was said, is there anything helpful either way? Saying “I love this” really just indicates a personal preference more than an actual analysis that a designer should take into consideration. In the same way, “I don’t like this” is also just a personal preference that doesn’t tell us much about design. Learning is not all puppy dogs and fairy tales – sometimes you have to create situations where learners have to choose to stretch and grow in order to learn. There is nothing wrong with some struggle in learning. Often, complaints about learners not liking something are actually good indicators that your design is successful.

If you disagree, that is fine. But don’t design anything that involves group work. A lot of people hate group work, and if you create a lesson that requires it, you have just acknowledged that sometimes you have to struggle through group dynamics in order to learn, whether you like it or not :)

But sometimes when someone says “I don’t know what to do with this tool!” what they are really saying is “I am not sure what to do, and I don’t want to try, because in the past there have been huge penalties for trying and getting it wrong on the first try!” This is a sad indication of our educational systems in general. We don’t make it okay to experiment, fail, and learn from failure. The reason so many people demand more tutorials, more hand-holding, more guidance is not that they are afraid of chaos so much as they are afraid they will get their hand slapped for not getting it right the first time – most likely because that is what has almost always happened to them in the past.

So even in something like DALMOOC, where you are free to get in, experiment, and fail as much as you want, most of us have been conditioned to panic in that kind of scenario. That’s what our excessive focus on instructivism does to us as a society. People are afraid to play around and possibly fail for a while. They want to know the one right way to do something, with 10 easy steps for doing it right the first time.

So, in a lot of ways, much of the feedback we are getting is along the lines of “who moved my cheese?” And that was expected. We are trying to jump in as much as possible and explain things to those who are confused. We are hoping that those who are bringing up personal preferences as negatives will see that we had to design for the widest range of learners. Or maybe they will see that, if they still figured something out, the design actually worked as intended (because it’s not always about personal preferences as much as it is about learning).

But, to be quite honest, an objective examination of all the feedback would seem to indicate that most of it is positive. Many of you like the challenges and the struggles. That is great – you get it. Most of the positive and negative feedback is along the lines of personal preferences – you don’t like rollover effects, you love Bazaar, this optional assignment is too hard, this required one is too easy. I’ll continue blogging about design decisions – not to justify them as right (instructional design is rarely about black and white, right and wrong decisions anyway), but to explain why they were made. And there are some genuine complaints about confusion that we are addressing.

Just as we instructors and designers can develop a negative skew, so can learners. They can see a few specific negative tweets in a sea of general positive tweets and start to think “wow – maybe I should be having more problems?” Don’t worry – most people are doing just fine. Problems, praises, issues, suggestions, and complaints are welcome, but just remember they don’t necessarily apply to you as a learner. You are free to love or hate any part of the course you wish. You are also free to pick and choose the parts of the course you participate in, so don’t waste time with something that isn’t working for you. But also be careful not to label something as “not working for you” just because you don’t like it or are struggling with it. Sometimes the struggle is the exact thing that you need in order to learn.

MOOCs and Codes of Conduct

Even before the whole GamerGate thing blew up, I had been considering adding a Code of Conduct to the DALMOOC. UTA has always required an Honor Code in all course syllabuses, so to me this issue was a no-brainer (even though we aren’t treating DALMOOC as a specific UTA-only course). But I know others don’t see the need for Codes in general, so I wanted to dig more into the reasoning behind a Code of Conduct for online courses – especially MOOCs.

I know some feel that you can’t really stop bad people with just a statement, and that usually the community will rise up to get rid of the abusers and trolls anyways. Sometimes both of those are true. But not always.

I have been part of Facebook groups that did not have a code and ended up leaving. You would think the group would have risen up to stop people from being abusive, but that was not the case. And when I spoke up? Well, it quickly became time to leave. I have also been in some groups that did have a code, and I witnessed firsthand someone being asked to comply with it – and, believe it or not, they stopped, apologized, and changed. It does work sometimes.

But other times it doesn’t. So you can’t just say “be cool to everyone” and leave it at that. There has to be a threat of consequences from the people in charge for the Code to have teeth. The problem with using the UTA Honor Code in a MOOC was that it was designed for a small group of people in a closed system, where you can ultimately boot out people who don’t comply with one click. And then send the police after them if they don’t get the message. Open online courses, though? A lot trickier to enforce.

So, I turned to the work of Ashe Dryden and her recommendations for conference codes of conduct. Since conferences are a bit more open than closed online courses, I thought that would be a good place to start. I also decided to add links to the privacy statements of all the services we recommend, as well as links for reporting abuse on those services. I felt people needed to be aware of these issues, as well as have one place to go to access them all. If I should add anything else, please let me know.

So you might wonder why the language in the Code is so specific. Just tell people to be cool or else you’re out, right? The problem is that this is too vague. Some people can be very abusive in a way that flies under the radar of most gatekeepers, because gatekeepers are looking for obvious hateful words and actions. True abusers have found ways to stay under that radar. So we need to be as specific as possible in these codes as a way to empower our learning communities to know what to look for in the first place. You can’t just expect the community to rise up and fight abusers – you have to give them the tools and words to use in order to fight. And one of those tools needs to be an appeal to authority. You see, it’s one thing to say “I think you are being abusive, stop” and another to say “the rules say this: _____.” Trust me from experience: abusers rarely care when you come in and say “stop treating this person that way because I think you are wrong.” If we want our communities to rise up and stop abuse, we have to empower them with the tools and words they need from us as the leaders. Yes, they are able to come up with their own words; however, it is much more powerful when their words match ours instead of filling in our blanks.

And I know what many will say: “this will never happen – I have never seen abuse happening in classes.” I hope that is true. But I would encourage you to look into recent cyberbullying research. Many people who experience abuse do not speak up because they feel no one will listen. So is the fact that you have never heard of abuse online a sign that there is none, or a sign that no one thinks you are a safe person to discuss these issues with? An important difference there.

Think of it this way. The DALMOOC had over 18,000 people signed up, last I heard. That is more people than live in thousands of small towns in America. Thousands of towns that also have a crime rate and an abuse rate. If even small towns can’t escape attracting criminals and abusers, how sure are we that our MOOCs will?

And oh yeah: #stopgamergate. Call me a SJW or whatever you want. I wear it proudly.

Social Learning, Blending xMOOCs & cMOOCs, and Dual Layer MOOCs

For those who missed it, the Data, Analytics, and Learning MOOC (DALMOOC) kicked off orientation this week with two Hangouts – one as a course introduction and one as a discussion of the course design. Also, the visual syllabus, the precursor of which you saw here in various blog posts, is now live. The main course kicks off on Monday – so brace yourselves for impact!

The orientation sessions generated some great discussion and raised a few questions that I want to dive into here. The first question came from my initial blog posts (but continued into the orientation discussion), the second is related to the visual syllabus, and the third relates to the Hangout orientation sessions themselves:

  • Don’t most MOOCs blend elements of xMOOCs and cMOOCs together? The xMOOC/cMOOC distinction is too simple and DALMOOC is not really doing anything different.
  • Are the colors on the Tool flow chart mixed up? Blue is supposed to represent traditional instructivist instruction, but there are social tools in blue.
  • Isn’t it ironic to have a Google Hangout to discuss an interactive social learning course but not allow questions or interaction?

All great points, and I hope to explain a bit more behind the course design mindset that influenced these issues.

The first question goes back to the current debate over whether there are really any differences between xMOOCs and cMOOCs, or whether this is a false binary. I have blogged about that before, and continued by pointing out that the xMOOC/cMOOC distinction is not really about a “binary” at all as much as about where certain factors cluster (more specifically, power). I submitted a paper to AERA this year (that I hope gets accepted) with my major professor Dr. Lin that was basically a content analysis of the syllabuses from 30 MOOCs. I noticed that there were clusters of factors around xMOOCs and cMOOCs that didn’t really cluster in other ways. I am now working on some other studies that look at power issues and student factors like motivation and satisfaction. It seems like no matter what factor I look at, there still appear to be clusters around two basic concepts – xMOOCs and cMOOCs. But we will see if the research ends up supporting that.
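(A side note for the data-minded, since this is a learning analytics course: below is a minimal sketch of what that kind of cluster check could look like. The CSV file, the column names, and the choice of k-means are all illustrative assumptions on my part, not the actual analysis from the paper.)

```python
# Hypothetical sketch: do coded MOOC syllabus factors fall into two clusters?
# The file name, feature columns, and use of k-means are illustrative assumptions,
# not the method from the actual study.
import pandas as pd
from sklearn.cluster import KMeans

# Each row is one MOOC; every other column is a coded design factor
# (e.g., 1 = the instructor holds that form of power, 0 = the learner does).
df = pd.read_csv("mooc_syllabus_codes.csv")
features = df.drop(columns=["course_name"])

# Look for two clusters, on the hunch that xMOOC-like and cMOOC-like
# designs group together.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42)
df["cluster"] = kmeans.fit_predict(features)

# Eyeball which courses landed in which cluster.
print(df[["course_name", "cluster"]].sort_values("cluster"))
```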

So from my viewpoint (and I have no problem if you disagree – we still need research here), there are no hard and fast lines between xMOOCs and cMOOCs. The real distinction between the two is where various forms of power (expert, institutional, oneself, etc.) reside. For example, was a particular course designed around the students as the source of expert power, or the instructor? You can have content in a course that puts the student at the center. You can also have social tools in a course that sets the instructor as the center.

Our guiding principle with the DALMOOC was that there is nothing wrong with either instructivism / instructor-centered or connectivism / student-centered as long as the learner has the ability to choose which one they desire at any given moment.

That is also the key difference between our goal with course design and how most other blended xMOOC/cMOOCs are designed. Most blended MOOCs (bMOOCs? Sounds like something from the 80s) usually still have one option / one strand for learning. The content and the social aspects are part of the same strand that all learners are required to go through. Remember, just adding social elements to a course does not make it a social learning, student-centered, connectivist course (especially if you add 20 rules for the forum, 10 rules for blog entries, and then don’t allow other avenues beyond that). In the same manner, just adding some content or videos or one-way Hangout sessions does not make a cMOOC an instructor-centered, instructivist course.

Our design goal was to provide two distinct, separate layers that allow the learner to choose either one or the other, or both for the whole class, or mix the two in any manner they want. But the choice is up to the learner.

And to be clear, I don’t think there is anything wrong with blended MOOCs. Some are brilliantly designed. Our goal with DALMOOC was just different from the blended approach.

So this goal led to the creation of a visual syllabus to help me and others visualize how the course works. One comment that arose is that the colors on the tool flow page (explained here) are mixed up: the Quick Helper and Bazaar tools (explained here by George Siemens) are in blue and should be in red. I get that concern, but I think it goes back to my view of the distinction between xMOOCs and cMOOCs. The red color is not “social only” and the blue color is not “content only,” as some would classify the difference between cMOOCs and xMOOCs. The colors are about where the expert power lies. Quick Helper might have social aspects to it, but its main goal is basically to crowd-source course help when learners are trying to understand content or activities. And it is a really cool tool – I love both Quick Helper and Bazaar (and ProSolo, but the orientation Hangout for that one is coming up). But the focus of Quick Helper is to help learners understand the content and instructor-focused activities (again, nothing wrong with that, since the choice is up to the learner to give that expert power to the instructor). In the same way, the Bazaar tool is social, but it has a set of prompts written by the instructor for learners to follow.

I hope that clears things up a bit – the colors indicate where the expert power resides, and neither placement is a bad thing in our design. Of course, you as the learner might use these tools differently than that, and we are okay with that, too.

The third question is about the irony of using a Google Hangout to explain a student-centered course and then not allowing any interaction. I kind of snickered at that one, because I usually say the same thing about conference keynotes that talk about interactive learning but don’t allow for interaction. So it sounds exactly like something I would say. Of course, at a keynote, the entire examination of that topic usually happens in that one session, and then the speaker is gone. A course is different, obviously. But in explaining our reasoning on this issue, I would point back to the differences between cMOOCs and xMOOCs and again bring up the point that being student-centered and connectivist does not mean there are never any times of broadcast from the instructor. A 30-minute Hangout with no interaction fits into a student-centered mindset just fine, as long as you don’t see hard and fast lines between paradigms.

But I would also point out that the Google Hangout format is too limited for interaction at scale. You are only allowed 10 people in the actual Hangout. In addition, going over 30 minutes gets a bit tedious, and you can’t really do much interaction with learners in 30 minutes even when using the Q&A feature. Not to mention that the 30-minute window is set in stone – if a learner misses it because of work or a different time zone or whatever: “no interaction for you!” Relying on a Google Hangout as the main avenue for interaction in a global course would be like being the ultimate “Interaction Nazi.” We also noticed a 30-60 second lag between live and broadcast, which also hampers interaction. However, the biggest reason was that we were really looking at ProSolo, Twitter, our Facebook Page, and our Google+ Page as the true avenues for interaction with these Hangouts. Those avenues were active before, during, and after the Hangouts for people in any time zone. So the interactivity was there during the orientation sessions, and you actually did see us responding to things from the social channels in both Hangouts. This may change in future Hangouts. The instructors may open up the Q&A function of Hangouts. We’ll see.

So, if you have questions about DALMOOC content or design, be sure to post them to social avenues. Or comment here about this post. I am behind on comments (and blogging) due to the looming course launch, but I will get caught up :)