People are Not Generalizable Cogs in a Wheel

One of the issues that we are trying to get at with dual-layer/customizable pathways design is that human beings are individuals with different needs and ever-changing preferences.

That seems like an obvious statement to many, but it becomes a problematic one when we look at educational research – or, more accurately, at how we use and discuss that research in practical scenarios.

For example, whenever I mention how instructivism and connectivism can also be looked at as personal choices that individual learners prefer at different times, the response from educators is usually to quote research generalizations as if they are facts for all learners at all times:

  • More advanced learners prefer connectivism.
  • People that lack technical skills are afraid to try social learning.
  • Learners with higher levels of self-regulation hate instructivism.
  • Students that are new to a topic need instructor guidance.
  • Student-centered learning makes learners think more in depth.

While many of these statements are true for many people, the thing we often skip over in education is that they are generalizations drawn from research. It is not the case that these concepts are true for all learners; they have been generalized from statistically significant correlations. That distinction is important (and often ignored) – because studies rarely find that these concepts are 100% true for 100% of the learners 100% of the time.

But practitioners typically read these generalizations and then standardize them for all learners. We lose sight of the individual outliers that are not included in those numbers (and even of the fact that the data contains variations that get smoothed over in the quest for “generalization”).

Then, of course, we repeat those experiments with different groups and rarely check to see if those outliers in the new experiment are different types of people or the same.

We also rarely research courses where learners have true choice in the modality in which they engage the course content. So do we ever truly know if we are finding the best options for learning in general, or if we are just finding out what learners will do to make the best of being forced to do something they would rather not?

Are we losing sight of the individual, the unique person at the center of educational efforts?

My research is finding that, when given the freedom to choose their learning modality (instructivism or connectivism), learners stop falling into the neat categories that often come out of research. For example, advanced learners with high self-regulation and well-developed tech skills will sometimes prefer to follow an instructivist path for a variety of reasons. Or, for another example, sometimes learners have already thought through an issue pretty well, and forcing them to go through student-centered learning on that topic becomes a boring chore because they don’t need to be pushed to think about it again. Or, for yet another example, some learners with low self-regulation and low tech skills will jump head first into connectivism because they want to interact with others (even though the research says they should have been too afraid to jump in).

When you actually dig into the pathways that individuals would choose to take if one is not forced on them, those individuals tend to defy generalization more often than expected. But when you point this out, the educational establishment tends to argue against those findings in all kinds of ways. We like the comfort of large sample sizes, generalizable statistics, and cut-and-dried boxes to put everyone in. I’m not saying to abandon this kind of research – just put it in a more realistic context to make sure we aren’t losing the individual human behind those generalizations.

DALMOOC 2.0 Re-Design

At some point, there probably will be a DALMOOC 2.0. I can say that with confidence because we are already having discussions about it. But the timing, format, etc. are still a bit fluid right now. However, we (the DALMOOC team, not a royal “we”) have put a lot of time and thought into improvements and redesign. From my end as instructional designer, there are several issues I would like to address. This post really serves as a list for me personally, but it might be of interest to others.

Instructivist Layer

  • The content layer of DALMOOC had a good amount of focus on the facts of Learning Analytics (pedagogy) as well as a good focus on how to learn about Learning Analytics (heutagogy). There is more discussion about ramping up the heutagogical side of the equation some more, which I think is great.
  • Where the design mostly fell flat on this layer was the assessments. They were tied to multiple competencies per week and left a bit open as to how participants would complete them. We really need to focus on one assessment activity per week (or even every two weeks). This may mean reducing the number of competencies to one per week, or utilizing sub-competencies.
  • Additionally, since this is the more guided layer, the one assignment we create will probably need to have more guidance for how to complete it. That is the point of instructivism, after all.

Connectivist Layer

  • This layer probably suffered the most from not having a good, solid “glue” between the layers, so a lot of the re-design will be addressed in “The Glue” section next.
  • The assessment/artifact part of this layer also suffered from having so many competencies to complete, so focusing those into one artifact will help immensely.
  • The assignment bank was unevenly utilized, and that needs to be fixed. Having more focus on the competencies would help with this, also. The idea would be that the assignment bank gives various scenarios, artifact ideas, or data sources to use when working on artifacts to complete each competency. But what the bank contains could actually be different each week. One week that looks at, say, the history of data analytics could have a bank of ideas for how to explain the history (video, paper, interactive timeline, blog posts, etc.). The next week that looks at how to perform SNA could have a bank of sample data sets to use. Or a bank of scenarios of where to get data from. Finally, each bank would have a “do your own thing” option to point out to learners that in this layer they can come up with their own ideas (see the rough sketch after this list).
  • Group connections and formation need a lot of work, but the “Behind the Scenes” section will look at some of that.
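
To make that weekly-bank idea a bit more concrete, here is a minimal sketch of how the data behind the bank might be modeled. To be clear, all of the type and field names are hypothetical (DALMOOC never defined a data model for the bank); the point is just that each week can carry a different kind of option while the “do your own thing” choice is always present:

```typescript
// Hypothetical model of one week's assignment bank. Field names are
// invented for illustration, not taken from DALMOOC itself.
type BankOption =
  | { kind: "artifact-idea"; description: string } // e.g., video, paper, timeline
  | { kind: "data-set"; name: string; url: string } // e.g., sample SNA data
  | { kind: "scenario"; description: string } // e.g., where to get data from
  | { kind: "do-your-own-thing" }; // always offered in this layer

interface WeeklyBank {
  week: number;
  competency: string;
  options: BankOption[];
}

// A history-focused week offers formats; an SNA week would offer data sets.
const historyWeek: WeeklyBank = {
  week: 2,
  competency: "Explain the history of data analytics",
  options: [
    { kind: "artifact-idea", description: "Interactive timeline" },
    { kind: "artifact-idea", description: "Blog post series" },
    { kind: "do-your-own-thing" },
  ],
};
```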

The Glue

  • The original intention was to have a weekly/daily email that provided a connection point for all participants, as well as stepping-out points and scaffolding for people that wanted to try out social learning for the first time. These emails never happened. And we found out that not everyone reads email (shocking, I know). So a new idea is being floated around.
  • This idea is to have a centralized website for the class. This website only displays what is being worked on that week (but with a menu to get to older content and the syllabus, of course). Basically, think of a blog with a simplified theme that only shows one post at a time (see the sketch after this list). This site intros the competency for the week, with options to choose which layer the participant is interested in. Selecting a layer would display links and instructions for what to do next for that layer (view videos in EdX, create goals in ProSolo, etc.). There would also be a link to a scaffolding area for people that want to try the connectivist layer but need guidance.
  • The information on this site would be blasted out to email, Facebook, Twitter, etc. Whatever avenue people want to use to be informed that a new week of the course is posted.
  • Finally, this glue site would have a list of people that it recommends for you to connect with, as well as a list of tweets and blog posts that might interest you. Hopefully this could be personalized for each user – based on your activity, interests, skill levels, etc. More on that in the “Behind the Scenes” section.
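
For what it’s worth, the “one post at a time” behavior is technically simple. Here is a minimal sketch assuming the glue site runs on WordPress with a reasonably current version of its REST API (the site URL and element ID are placeholders):

```typescript
// Fetch and render only the most recent weekly post; older weeks stay
// reachable through a menu instead of cluttering the front page.
async function showCurrentWeek(): Promise<void> {
  const res = await fetch(
    "https://glue.example.edu/wp-json/wp/v2/posts?per_page=1&orderby=date&order=desc"
  );
  const [post] = await res.json();
  const target = document.querySelector("#current-week");
  if (target && post) {
    target.innerHTML = `<h1>${post.title.rendered}</h1>${post.content.rendered}`;
  }
}

showCurrentWeek();
```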

Behind the Scenes

  • The biggest changes to make all of this run smoothly need to be programmed behind the scenes. For example, the glue site would need to support single sign-on between EdX, ProSolo, WordPress, Google, etc. Once you sign in, any link you click on should take you to something that you are already signed into.
  • Ultimately, it would be best to create the possibility for this sign-on to be handled by individual websites, so people can own their work and data for this course.
  • A more detailed profile would be helpful. Using profile data along with course activity/posts/tweets/etc., various programs could recommend specific people for you to connect with, or even specific tweets or blog posts you might like to read. These algorithms/programs/etc. would be working behind the scenes to help find people and content for participants to connect with – at least, for those that choose to opt in (a rough sketch of one such algorithm follows this list).
  • We are also pondering if we need to add better group tools into the glue site to help people with group activities. Or maybe add that to ProSolo. Plug-ins like BuddyPress for WordPress could create all kinds of tools for groups to use, at least for those that don’t want to find their own.
  • The teams working on QuickHelper and ProSolo also have some great ideas for improving their tools – but I won’t spill any beans on those because they can explain those ideas better than I can.
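
As one hypothetical example of those behind-the-scenes programs (none of this is actual DALMOOC code, and the tag scheme is invented), a first pass at “people you might want to connect with” could be as simple as cosine similarity over weighted interest/activity tags pulled from the profiles, posts, and tweets of participants who opted in:

```typescript
// Score each pair of opted-in participants by cosine similarity over a
// simple bag of weighted interest/activity tags, then suggest the top k.
type Profile = { name: string; tags: Map<string, number> };

function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, normA = 0, normB = 0;
  for (const [tag, weightA] of a) {
    dot += weightA * (b.get(tag) ?? 0);
    normA += weightA * weightA;
  }
  for (const weightB of b.values()) normB += weightB * weightB;
  return normA && normB ? dot / Math.sqrt(normA * normB) : 0;
}

function recommendPeople(me: Profile, others: Profile[], k = 3): Profile[] {
  return [...others]
    .sort((x, y) => cosine(me.tags, y.tags) - cosine(me.tags, x.tags))
    .slice(0, k);
}
```

The same scoring would work for recommending tweets and blog posts – treat each piece of content as a tag vector instead of a person.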

The Matrix

  • We had an initial course narrative based on The Matrix, but time prevented fuller development of that.
  • The red pill/blue pill metaphor seemed to help many understand course structure. We could possibly integrate that into the Glue website. For example, click on the red pill for one layer and the blue pill for the other. Maybe even create a purple pill metaphor for the scaffolding steps between the two.
  • Other things could be added – use quotes from the movie to explain things here and there. Add Matrix-like graphics to the visual syllabus and videos. Have a distracting moving Matrix background. Someone could dress up as DALMorpheus and talk in riddles. And so on. I did make a mock-up of all of these ideas for a “Glue” website. As a warning, this takes the course narrative to Jim Groom extreme levels – which I love. But others don’t, so don’t expect DALMOOC 2.0 to look anything like this. But if we went full tilt on all of these ideas with the course narrative and glue website, it might look something like this.

So, any thoughts, ideas, suggestions, etc. would be greatly appreciated.

(image credit: Flavio Takemoto, obtained from freeimages.com)

Ed Tech Skeuomorphs and Dual Layer MOOCs

Have you ever wondered where those tiny handles on some maple syrup bottles come from? The ones that are too little for any fingers strong enough to lift the bottle? Those handles are a specific example of a skeuomorph – a design element retained from an earlier structure where it was actually necessary. Recently I read a random article about maple syrup bottles that reminded me of this word, and it made me think about how many of these we have in ed tech.

Many LMSs are full of skeuomorphs. The LMS itself might be in danger of becoming a skeuomorph. We have tools online that host content much better. We have better tools for facilitating interaction. We certainly have tools that are more ADA compliant. We have better tools for creating social presence. We have better methods for protecting privacy online. And so on.

Of course, what might be a skeuomorph to one may not be a skeuomorph to another. Those that have some experience in a topic might find a linear instructivist path through course content to be a skeuomorphic design paradigm that hinders their ability to determine their own learning. A person that is new to the topic might find it a perfect fit.

Maybe I am stretching the analogy of skeuomorphs in Ed Tech a bit too far. But the reason I love the dual-layer design that we have been working on with various LINK Lab MOOCs is that it allows those that need certain design elements to still utilize them, while others that find them to be left-over design structures from when they needed more instructivism (and now don’t) can skip them and dig into a more relevant learning design.

The problem with our first stab at the dual-layer model was that the interface was too complex, making it difficult for many participants to find what they needed in the moment. So the next set of MOOCs to utilize a dual-layer design will focus on simplifying that user interface. What is needed is a system that will direct learners who need guidance and instruction to that instruction, while those that already know the content to some degree – who might find an instructivist path to be skeuomorphic – are led to the more connectivist, sense-making, chaotic side.

On one level – this is pretty easy. Find a clean, minimalist WordPress theme, remove a few elements that still might be distracting, and set it to only display the most recent post. That post will contain the basic outline of what needs to be accomplished that week, links to the place where the two layers will occur, and links to previous posts for those that are behind. The other level that is more difficult is working behind the scenes to make sure that all of the tools that support the various layers integrate in a seamless way for a smooth end-user experience. Well, not necessarily difficult as much as time consuming to map out logically and then program. Anyone that knows programming knows that it can happen as long as the tools that are used are willing to co-operate (and many are).
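
To illustrate the “map it out logically” part: the weekly post behind that simplified interface is really just routing data – an outline plus entry points for each layer. A hypothetical sketch (the type names and URLs are placeholders, not actual course links):

```typescript
// One week of the simplified interface: a short outline plus links that
// route learners to whichever layer (or both) they want that week.
interface LayerLink {
  label: string;
  url: string;
}

interface WeekEntry {
  week: number;
  outline: string;
  instructivist: LayerLink[]; // guided path, e.g., videos and assignments
  connectivist: LayerLink[]; // chaotic, sense-making path
}

const weekOne: WeekEntry = {
  week: 1,
  outline: "Intro to learning analytics – pick a layer below, or mix both.",
  instructivist: [{ label: "Watch the videos in EdX", url: "https://example.org/edx" }],
  connectivist: [{ label: "Set your goals in ProSolo", url: "https://example.org/prosolo" }],
};
```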

Looking at the graphic used in the previous dual-layer DALMOOC, this simplified interface would be in place of the “Main Weekly Email” at the top. There would also need to be some work to make sure that all of the tools in the rest of the diagram work seamlessly. This structure would probably also mean running a lot of the main registration and log-ins through this simplified interface (which may or may not be okay with some entities involved). Instructional design would also need some tweaking to make sure that the design also flows smoothly.

The basic idea is to simplify the end user experience as much as possible to let each participant decide how much complexity to add, what path to take, what is skeuomorphic to them in their context, and how to connect with others in the course.

(image credit: Sara Karges, obtained from freeimages.com)

Structuring the Initial Week of a MOOC (or An Online Course)

One of the topics that we get asked about frequently with MOOCs is how to structure the initial / first / orientation week (including everything from the design of the syllabus to the “on-boarding” / scaffolding process to introductory content). And for good reason – that first series of interactions can seriously influence large numbers of learners to keep going or give up. Obviously, since you created the course, you want people to continue going and not give up. I can go through what we did with dalmooc and then offer some ideas that arose from that process.

Our general strategy was to have an orientation week (or “week 0”) before the first week of class officially started that was dedicated to on-boarding people with the course design and structure. This is generally when people would be looking at the syllabus anyways. But the reason that we didn’t make that Week 1 was that many people who take MOOCs are so self-directed that they are ready to jump in and figure it out as they go along (or are advanced enough with MOOCs that there is nothing to figure out). We knew that we had to balance the needs of the new learners with the desires of experienced learners, without alienating either group or those in between. Having to sit through a bunch of orientation videos the first week would just chase off several advanced learners. They might intend to come back later, but that break in momentum would mean that many won’t. So the orientation was shifted to pre-course.

This orientation was basically a series of videos and Google Hangouts that covered course structure, introductions, design, and assessments. I know that many courses do not cover the design process, but I am glad we did. In my opinion, part of creating an open course means that you also cover the design process. This way people can easily replicate your work at a later date. The videos and Hangout archives are all posted for those that are curious.


The week 1 video was moved up to week 0 because we felt that it fit better in that spot, and the assessment video was created to answer some FAQs about that topic. These videos mainly served as ways to introduce the instructors and basic topics to the participants as well as to expand on the visual syllabus.

I have discussed the visual syllabus in several places on this blog.

Other posts in that series might also be of interest for other reasons, depending on where you are in the design process. George Siemens’ reflection on the first week of DALMOOC also covers many issues that affected our design decisions, and I highly recommend reading it.

A couple of quick notes about the technical side of creating the visual syllabus (which you can skip if you are not interested). It was built on a self-hosted WordPress installation using the incredible services of Reclaim Hosting. We used the Pictorico theme, but I customized / hacked the front page that you see. I basically copied the source code from the original theme home page to a new file (index.html), cut out the distracting elements, and added the parts you see. Those are technically square images on the front page, but I used Fireworks to add a circle overlay to make them easier to distinguish. I also added the numbers in Fireworks, and found the background/header images on freeimages.com. The diagrams throughout the syllabus were also built in Fireworks (Photoshop, GIMP, or other graphics programs would also work – probably even better) and sliced into squares to add the pop-up links. Those pop-ups come from a WordPress plug-in called Fancybox that works well.
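
For those curious, wiring up that kind of pop-up takes only a few lines once the plug-in is loaded. This is an illustrative sketch rather than the actual syllabus code – the selector, ids, and option values are placeholders, and option names vary a bit between Fancybox versions:

```typescript
// jQuery and the Fancybox plug-in are loaded via <script> tags in the
// theme, so $ is declared here rather than imported.
declare const $: any;

$(function () {
  // Each sliced square of a diagram is a link whose href points at a
  // hidden div holding the expanded description for that slice, e.g.:
  // <a class="fancybox" href="#week-1-details"><img src="slice-1.png"></a>
  $("a.fancybox").fancybox({
    padding: 12, // a little breathing room around the pop-up content
    openEffect: "fade", // keep the transition subtle
  });
});
```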

So, enough of the technical side. The actual instructional design of the syllabus is the important factor. George Siemens really wanted the visual syllabus aspect for dalmooc. He wanted to avoid the wall of text. I also went for scaffolding the syllabus in a way that should make it easy for completely new learners to tell where to start, but with enough there to guide more experienced MOOC participants to the parts they wanted. Too much text on the first page would still mean “wall of text.” So the roll-over effect for those first-page images (it came as part of the theme we picked) was perfect – you get the info when you need it, but it’s not there to stress you out when you don’t need it.

The visual diagram might seem to be a necessity only for a complex course design like the dual-layer design, but in reality I would say that every course really needs one. Don’t assume that your learners can visualize your course flow just because it seems simple to you. What you see as simple might be due to a cultural norm that doesn’t translate to other cultures (not only your social/geographical culture but also your institution’s culture).

I’m also a fan of rollovers and pop-ups that expand the content (as you can tell). That really helps not only the new learners that need scaffolding, but also the non-linear thinkers that want to remix your intended order on the spot. With Federated Wikis gaining momentum, I think this ability to reorganize content as the learner sees fit will be a big deal. While pop-ups aren’t truly “remixable” content, they do allow a pick-and-choose method that comes close.

The general idea for the design of the syllabus was to scaffold participants into the overall structure of the course: start with “basic” and go deeper and deeper into various levels. Additionally, I took a cue from Jim Groom and DS106 and utilized a visual metaphor for the course. DS106 uses these metaphors (ds106zone, thewire106, etc) to add personality and presence to courses. Yes, even a course itself can have presence. Time constraints kept us from carrying this metaphor through the rest of the course, but I would suggest any course take this idea and fully implement it.

So to wrap up with some final thoughts on design and the first week of a MOOC (which could also apply to any course, really):

  • Never assume that something makes sense to everyone just because they signed up for your course. Keep the complete newbies in mind as much as the self-directed learners. Too many college courses are designed more on the self-directed learner end. Which means you probably end up explaining a lot of the same basic stuff over and over and over again, right? Think about it.
  • At the same time, don’t force the self-directed learners to go through everything that the newbies need to. Consider non-linear learning paths, dual-layer approaches, connectivism, etc. If you don’t have access to something like Ning or ProSolo to accomplish the connectivist layer, you can always use something like the BuddyPress plug-in for WordPress or a self-hosted installation of Known to accomplish the same structure.
  • While visual and video elements are important for breaking up content, don’t forget that not everyone has perfect sight. Keep accessibility mandatory.

I think I am leaving out a few ideas that I had on this issue, but this post is already long enough. If you have any questions about specific design decisions or technical details, I would be glad to answer them. With access to a good instructional designer, knowledge of basic HTML, and basic experience with a graphics program, you should be able to accomplish much of what we did in your MOOC orientation/syllabus.

(image credit: Flavio Takemoto, obtained from freeimages.com)

The Mirage of Measurable Success

The last post that I wrote on measuring success in MOOCs created some good, interesting conversation around the idea of measurable success. The most important questions that were asked dealt with “why even offer dalmooc if you don’t know what measurable success would look like?”

That’s a good question, and one that I think can be answered in many ways. Honestly, the best answer to that question is “because four world-renowned experts wanted to teach it and a lot of people signed up to take it.” To me, especially in the informal realm of education where dalmooc existed, that is one of the biggest measurable signs of success. We live in a world that is so full of compulsory education and set degree plans that we forget that choosing to sign up for an informal, voluntary learning experience is measurable success in itself. Over 19,000 people initially said “that sounds interesting, sign me up,” with over 10,000 signing in at one point or another to view the materials. Hundreds of participants were active on Twitter, Facebook, EdX forums, ProSolo, Google Hangouts, and other parts of the course. All voluntarily. To me, that is measurable success.

Another area of measurable success, although definitely more on the qualitative side, is what I covered in the last post:

So maybe when the participants use words like “blended” or “straddled” or “combined,” those statements in themselves are actually signs of success. In the beginning, some claimed that DALMOOC was more cMOOC than xMOOC, but by the end others were claiming it was more xMOOC than cMOOC. Maybe all of these various, different views of the course are proof that the original design intention of truly self-determined learning was realized.

To clarify this a bit more, there are those that thought that dalmooc was more instructivist / xMOOC:

https://twitter.com/cpjobling/status/555074861320921088

And then there were those that thought it was much more connectivist / cMOOC (myself included).

So to me, that is another realm of measurable success – learners came out of the experience with vastly different views on what happened. That was a goal we had.

However, I know that when people talk about “measurable success,” they are usually referring more to standardized test results, student satisfaction, completion rates, and – the holy grail of education – grades! The elephant in the room that many people won’t deal with, but we all know is true, is that these measures of success are often a mirage.

Standardized tests are probably the biggest mirage of all. The problem is that a score of 90% on a test really only means that a learner was able to mark 90% of the questions correctly, not necessarily that they actually understood 90% of the material. They may have only understood 60% of it and guessed the other 30% correctly. The fact that the right answer is sitting somewhere in a list of multiple choice options should negate their usefulness as a way to measure success, but our society still chooses to ignore this problem. Then you can add into this mix that most multiple choice questions are poorly written in ways that give away the answers to people that are taught how to game them (like I was).
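
To put rough numbers on that intuition (a back-of-the-napkin sketch, not data from any actual test): on a four-option multiple choice exam, blind guessing alone is expected to recover about a quarter of whatever the learner does not know, and test-wise elimination of obviously wrong options pushes that floor even higher.

```typescript
// Expected score when a learner truly understands only part of the material
// and guesses the rest uniformly at random among the remaining choices.
function expectedScore(knownFraction: number, choices = 4): number {
  return knownFraction + (1 - knownFraction) / choices;
}

console.log(expectedScore(0.6));    // 0.70 – knows 60%, expects ~70%
console.log(expectedScore(0.6, 2)); // 0.80 – after eliminating two obvious wrong answers
```

Add a lucky day on top of that expected floor and the gap between “marked correctly” and “actually understood” gets as wide as the 90%-vs-60% example above.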

Then there is the problem of coming up with questions for tests. Some tests contain, say, two questions about the core knowledge that learners should have gained and then a whole lot of related trivia that they could just Google if needed. A learner could get the two essential questions wrong and all the rest correct and still be labeled as “mastering” the concept. Rubrics for papers or projects often do the same thing – giving most of the points to grammar and following instructions and few to actual content mastery. Someone could write a great paper that shows no knowledge of the topic at hand but still pass because they got all the other areas perfect.

Add to this that we then compare two children to each other based on this false sense of “success.” One child could have tanked a test based on the trivia but got all of the core content correct, and still be labeled as less successful than the one that got the trivia right and the core knowledge wrong… just because it’s all on the same test. Oh, and let’s not forget the practice of giving similar or equal weight to all questions on a test when not all questions are really equal. Again, two learners could get the same score, but one only answered the easy questions correctly while the other answered all of the challenging ones correctly.

And speaking of different learners, there are always the oft-ignored problems of cultural bias in testing and learning. Are learners not testing well because they didn’t learn, or were there cultural references on the test that they didn’t get? Did a learner really learn the content, or were they just able to quickly memorize some factoids because of some weird thing Aunt Ida said about planets that helped them connect the new information to that family quirk? Are they being labeled as smarter because they are, or because their weird Aunt Ida gave them a memory that helped them memorize?

Most of what we call “measurable success” in education is really just a mirage of numbers games. For those like me that fell on the privileged side of those games, it was a great system that we probably want to fight to keep. And we are most likely the ones now in control, so….

Now, of course, this is not to say that learning isn’t happening. This is more about how most institutions measure learning and success. I believe people are always learning formally and informally, even if it’s not always what they had intended to learn. It just takes a lot of time, effort, and money (yes – money!) to truly assess learning, and the educational field in general is being tasked with the opposite: “Do better assessment with less money, less time, and less effort (i.e., people power)!” There are no easy answers, but there is a problem with the system and the culture that drives it that needs to be addressed before “measurable success” becomes a trustworthy idea.

Measuring Success in MOOCs (or More Specifically, DALMOOC)

I was asked last week how we knew whether or not DALMOOC was successful. That seems a fair question, since “success” in MOOCs seems to be measured by everything from completion rates to enrollment numbers to certificate numbers to the alignment of the stars on the Wednesday of the first week the MOOC is offered. I had to be honest about that question and say that since everyone that worked on DALMOOC survived and is still on speaking terms (at least as far as I know), we were successful. Running a MOOC can be far more intensive and stressful than many people realize. We almost didn’t make it out the other side with everyone still happy.

In some ways, when I see people saying that we were blending xMOOCs and cMOOCs, or combining the two, I think we might have failed in communicating what we were doing. Maybe I can blame that somewhat on our current educational system that only thinks in linear ways; therefore any course with more than one layer is not seen as complex, but as “blended” or “combined.” Words like “straddled” seem closer, but to be honest we didn’t feel the need to straddle. We just had two layers that were not walled in, allowing learners to choose one or the other or both, or to move back and forth as they felt. Infinite possibilities. A Multiverse Open Online Course, maybe even. But not really a linear mix of the two from the design side.

Of course, the learner experience was linear even if they skipped back and forth between both layers. So maybe when the participants use words like “blended” or “straddled” or “combined,” those statements in themselves are actually signs of success. In the beginning, some claimed that DALMOOC was more cMOOC than xMOOC, but by the end others were claiming it was more xMOOC than cMOOC. Maybe all of these various, different views of the course are proof that the original design intention of truly self-determined learning was realized. At the very least, DALMOOC feedback was an interesting study in how bias and ontologies and epistemologies and all those -ologies affect participant perception of design. Maybe it doesn’t matter that participants can’t articulate the basics of dual-layer architecture, because the point all along should have been to hide the architecture and let learners learn how to learn.

So, at the end of the day, I will be able to come up with some philosophical jargon to “prove” that DALMOOC was a success to the powers-that-be who ask – all of which will be true. But to be honest, the only thing I really want to do is shrug my shoulders and say “beats me – go look at the participant output itself and see if that looks like success to you.”

Looks like success to me.

DALMOOC Design and Scaffolding

Returning again to the design of DALMOOC and more specifically the visual syllabus, I wanted to take a look at the scaffolding decisions that were made. In some ways, this course was a unique challenge because we had to do some true scaffolding that could possibly span several different levels of experience, from complete newbie to seasoned expert. This is different from most instances of scaffolding, because typically college courses are really more along the lines of linear constructivism than scaffolding. What I mean is this: for most courses at the college level, you assume that your learners have a prerequisite level of knowledge from either high school or earlier pre-req courses. You aren’t introducing, say, physics for the first time ever – instead you are building on the Intro to Physics course in order to help students learn how to build rockets and predict launch patterns. So you don’t scaffold as much as take chunks of knowledge and skills and plug them into existing knowledge (linear constructivism). This is scaffolding at the basic level, but you may or may not go beyond one level of scaffolding.

With DALMOOC, we knew that learning analytics is still new for many, but old news for some that may just want to learn some different tools. Additionally, we were adding in new technology that we knew for a fact no one had ever used. Throw into that mix an international crowd that won’t all be speaking English, and then add the idea of creating a visual syllabus (which few are familiar with). This is a tall order covering a huge range that most courses don’t have to consider.

So where to start? Well, with the first page of the syllabus. It needed to be visual, with minimal text, but clear about where to start. A wall of text that basically says “start here” kind of violates a lot of what it needed to be. But if you look at anything from OLC to Quality Matters, most online course evaluation tools recommend having a clear and simple indication of where to start. What is more simple and easy to understand than a basic “1, 2, 3”? I have traveled to Asia and Europe and Africa, and even people who don’t know English still understand enough about our number system to know that a web page with those numbers on it indicates you start with number 1.

Of course, a small number of people felt that this was still too confusing. I’m not sure what to say to that. You are presented with a page that says “Welcome to the Class” and then some big buttons that are labeled 1, 2, 3, etc. I’m not sure what is simpler than that.

Of course, I realize that there are those that really, really need the text because they have been programmed to look for it their whole lives. So the buttons were given a rollover effect that gives a more detailed description of what they are leading to. This serves two purposes. One, it gives detailed descriptions that are there when you need them, but aren’t filling the screen and overwhelming people that are completely new. Two, it makes you actually do something with the syllabus instead of just passively reading a wall of text. You have to mouse over or click on various items to get more details. This moves you from passive to (slightly) active. This was on purpose, to get learners engaging more with the content on the very first page. Additionally, this idea was carried through the other various visuals.
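
The rollover effect itself shipped with the theme, but for anyone wanting to replicate the behavior by hand, it only takes a few lines. A generic illustration – the class names here are made up, not the actual syllabus markup:

```typescript
// Show each button's detailed description only while the pointer is over
// it, nudging the reader from passive scanning to (slightly) active use.
document.querySelectorAll<HTMLElement>(".syllabus-button").forEach((button) => {
  const detail = button.querySelector<HTMLElement>(".detail");
  if (!detail) return;
  detail.hidden = true; // start collapsed so the page is not a wall of text
  button.addEventListener("mouseenter", () => (detail.hidden = false));
  button.addEventListener("mouseleave", () => (detail.hidden = true));
});
```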

For those that are not new to all of this, links were provided in the upper right-hand corner – where they usually are on most websites. We don’t expect people to follow the path laid out for them. In fact, we encouraged learners to skip around and make their own path as needed. And that was also possible in the design.

As expected, there was some push back from a few learners (about 5-10 out of the 6,000 that were active at some point) on the design. The basic feedback was that they didn’t like the rollover effects. They wanted the text to be there without rolling over. This probably tells me that the right decision was made, because that was exactly what the rollover effects were designed to do: make the learner do something instead of passively absorbing text. Of course, there are other ways to accomplish the same goal, so other ideas might be used in the future.

The biggest challenge in describing the structure was how to explain the nature of the dual-layer course. Course themes are always helpful, as the many versions of ds106 have proven. Of course, it would have been nice to have enough time to carry out the theme for the whole class. Many of the problems with understanding the structure can probably be traced to the fact that we were not able to inject this theme throughout the entire course. (Those problems could also come from the fact that we initially designed the daily email to be the central anchor for the course – and therefore the place where scaffolding and structure sense-making would happen – and it seems like that aspect fell short.) However, I think that a consistent theme carried throughout the course as a true sensemaking/guidance tool would alleviate many of these issues. Of course, scaffolding in general is a problematic concept in this dual-layer approach, but that will have to be a topic for another blog post.

The theme itself was chosen early as an initial idea that ended up sticking. I think the Matrix “blue pill/red pill” theme was a good place to start, but gets a little murky once people start mixing the two and bringing in their own ideas (which is, of course, what we want). My first idea was actually a table full of play-dough – all kinds of colors just waiting for learners to pick and choose and make into whatever they like. Ultimately, this leaned too much towards my connectivism bias and was probably too unstructured for new learners who wanted instructivism. I think that a mixture of the two ideas might work as a better theme in the future: learners are initially given the choice of blue or red play-dough, but they can choose one or the other or mix together to make their own shade of purple – or bring in their own colors to create what they want.

Of course, some of the more complex ideas that were thrown around earlier, like creating scaffolding out of objectives or dynamic group formation and changing, never made it into the course. Interestingly enough, some learners (around 10-15) asked for various versions of these ideas, so they may bear exploration in the future.

Underlying these design decisions were some different theoretical perspectives that go beyond Connectivism and Instructivism (LTCA Theory, Heutagogy, Metamodernism, etc) that will need to be explored in a future blog post.

Who MOOCed My Cheese?

Conversations behind the scenes with the DALMOOC have turned to looking at the kind of feedback we have been getting on the course. George Siemens shared some of the things that he learned the first week. His post also deals with some of the feedback we have received.

The hard part with dealing with feedback is that most of us end up with a skewed view of it after a while. Positive feedback is usually pretty general, and therefore easy to forget. Everything from “this looks great!” to “I am loving this!” to “I really like what you are doing here” serves as great feedback, but because of its general nature and lack of specifics it tends to be easily forgotten. Negative feedback tends to be specific and direct. This makes it a lot easier to remember. People will tell you exactly what they don’t like and a dozen reasons why they don’t like it.

Because of this skew, the negative feedback tends to stick in our minds more easily, and we also tend to get the impression that there is more negative than positive. This becomes a problem when we begin to make design decisions based on gut feelings rather than hard numbers. If you count up the positive and negative feedback, which one is higher? If you take a qualitative look at what was said, is there anything helpful either way? Saying “I love this” really just indicates a personal preference more than an actual analysis that a designer should take into consideration. In the same way, “I don’t like this” is also really just a personal preference that doesn’t tell us much about design. Learning is not all puppy dogs and fairy tales – sometimes you have to create situations where learners have to choose to stretch and grow in order to learn. There is nothing wrong with some struggle in learning. Often, complaints about learners not liking something are actually good indicators that your design is successful.

If you disagree, that is fine. But don’t design anything that involves group work. A lot of people hate group work and if you create a lesson that requires group work, you have just acknowledged that sometimes you have to struggle through group dynamics in order to learn whether you like it or not :)

But sometimes when someone says “I don’t know what to do with this tool!” what they are really saying is “I am not sure what to do, and I don’t want to try because in the past there have been huge penalties for trying and getting it wrong on the first try!” This is a sad indication of our educational systems in general. We don’t make it okay to experiment, fail, and learn from failure. The reason so many people demand more tutorials, more hand-holding, more guidance is not because they are afraid of chaos as much as they are afraid that they will get their hand slapped for not getting it right the first time – most likely because that is exactly what has happened to them in the past.

So in something like DALMOOC, where you are free to get in and experiment and fail as much as you want to, most of us have been conditioned to panic in that kind of scenario. That’s what our excessive focus on instructivism does to us as a society. People are afraid to play around and possibly fail for a while. They want to know the one right way to do something, with 10 easy steps for doing it right the first time.

So, in a lot of ways, much of the feedback we are getting is along the lines of “who moved my cheese?” And that was expected. We are trying to jump in and explain things to those who are confused as much as possible. We are hoping that those who are bringing up personal preferences as negatives will see that we had to design for the widest range of learners – or maybe see that, if they still figured something out, the design actually worked (because it’s not always about personal preference as much as learning).

But, to be quite honest, an objective examination of all feedback would seem to indicate that most of it is positive. Many of you like the challenges and the struggles. That is great – you get it. Most of the positive and negative feedback is along the lines of personal preferences – you don’t like rollover effects, you love Bazaar, this optional assignment is too hard, this required one is too easy. I’ll continue blogging on design decisions to clarify why they were made – not to justify them as right (instructional design is rarely about black and white, right and wrong decisions anyways), but to explain why they were made. And there are some genuine complaints about confusion that we are addressing.

Just as we instructors and designers can develop a negative skew, so can the learners. They can see a few specific negative tweets in a sea of general positive tweets and start to think “wow – maybe I should be having more problems?” Don’t worry – most people are doing just fine. Problems, praises, issues, suggestions, and complaints are welcome, but just remember they don’t necessarily apply to you as a learner. You are free to love or hate any part of the course you wish. You are also free to pick and choose the parts of the course you participate in, so don’t waste time with something that isn’t working for you. But also be careful not to label something as “not working for you” just because you don’t like it or are struggling with it. Sometimes the struggle is the exact thing that you need in order to learn.

MOOCs and Codes of Conduct

Even before the whole GamerGate thing blew up, I had been considering adding a Code of Conduct to the DALMOOC. UTA has always required an Honor Code in all course syllabuses, so to me this issue was a no-brainer (even though we aren’t treating DALMOOC as a specific UTA-only course). But I know others don’t see the need for Codes in general, so I wanted to dig more into the reasoning behind a Code of Conduct for online courses – especially MOOCs.

I know some feel that you can’t really stop bad people with just a statement, and that usually the community will rise up to get rid of the abusers and trolls anyways. Sometimes both of those are true. But not always.

I have been a part of Facebook groups that did not have a code and ended up leaving. You would think the group would have risen up to stop people from being abusive, but that was not the case. And when I spoke up? Well, it quickly became time to leave. I have also been in some groups that did have a code, and witnessed firsthand someone being asked to comply with the code and – believe it or not – they stopped, apologized, and changed. It does work sometimes.

But other times it doesn’t. So you can’t just say “be cool to everyone” and leave it at that. There has to be a threat of consequences from the people in charge for the Code to have teeth. The problem with using the UTA Honor Code in a MOOC was that it was designed for a small group of people in a closed system, where you can ultimately boot out people that don’t comply with one click – and then send the police after them if they don’t get the message. Open online courses, though? A lot trickier to enforce.

So, I turned to the work of Ashe Dryden and her recommendations for conference codes of conduct. Since conferences are a bit more open than closed online courses, I thought that would be a good place to start. I also decided to add links to the privacy statements of all services we recommend, as well as links for reporting abuse on those services. I felt people needed to be aware of these issues, as well as have one place to go to access them all. If I should add anything else, please let me know.

So you might wonder why the language is so specific in the Code. Just tell people to be cool or else you’re out, right? The problem is that this is too vague. Some people can be very abusive in a way that flies under the radar of most gatekeepers, because gatekeepers are looking for obvious hateful words and actions. True abusers have found ways to stay under the radar. So we need to be as specific as possible in these codes as a way to empower our learning communities to know what to look for in the first place. You can’t just expect the community to rise up and fight abusers – you have to give them the tools and words to use in order to fight. And one of those tools needs to be an appeal to authority. You see, it’s one thing to say “I think you are being abusive, stop” and another to say “the rules say this: _____.” Trust me from experience: abusers rarely care when you come in and say “stop treating this person that way because I think you are wrong.” If we want our communities to rise up and stop abuse, we have to empower them with the tools and words they need from us as the leaders. Yes, they are able to come up with their own words; however, it is much more powerful when their words match ours instead of filling in our blanks.

And I know what many say: “this will never happen – I have never seen abuse happening in classes.” I hope that is true. But I would encourage you to look into recent cyber bullying research. Many people that experience abuse do not speak up because they feel no one will listen. So is the fact that you have never heard of abuse online a sign that there is none, or that no one thinks you are a safe person to discuss the issues with? An important difference there.

Think of it this way. The DALMOOC had over 18,000 people signed up last I heard. That is more people than live in thousands of small towns in America – thousands of towns that also have a crime rate and an abuse rate. If even small towns can’t escape attracting criminals and abusers, how sure are we that our MOOCs will?

And oh yeah: #stopgamergate. Call me a SJW or whatever you want. I wear it proudly.

Social Learning, Blending xMOOCs & cMOOCs, and Dual Layer MOOCs

For those who missed it, the Data, Analytics, and Learning MOOC (DALMOOC) kicked off orientation this week with two Hangouts – one as a course introduction and one as a discussion of course design. Also, the visual syllabus, the precursor of which you saw here in various blog posts, is now live. The main course kicks off on Monday – so brace yourselves for impact!

The orientation sessions generated some great discussion, as well as raised a few questions that I want to dive into here. The first question is one that came about from my initial blog posts (but continued into the orientation discussion), the second is related to the visual syllabus, and the third is in relation to the Hangout orientation sessions themselves:

  • Don’t most MOOCs blend elements of xMOOCs and cMOOCs together? The xMOOC/cMOOC distinction is too simple and DALMOOC is not really doing anything different.
  • Are the colors on the Tool flow chart mixed up? Blue is supposed to represent traditional instructivist instruction, but there are social tools in blue.
  • Isn’t it ironic to have a Google Hangout to discuss an interactive social learning course but not allow questions or interaction?

All great points, and I hope to explain a bit more behind the course design mindset that influenced these issues.

The first question goes back to the current debate over whether there are really any differences between xMOOCs and cMOOCs, or whether this is a false binary (or not). I have blogged about that before, and continued by pointing out that the xMOOC/cMOOC distinction is not really about a “binary” at all as much as where certain factors cluster (more specifically, power). I submitted a paper to AERA this year (that I hope gets accepted) with my major professor Dr. Lin that was basically a content analysis of the syllabuses from 30 MOOCs. I noticed that there were clusters of factors around xMOOCs and cMOOCs that didn’t really cluster in other ways. I am now working on some other studies that look at power issues and student factors like motivation and satisfaction. It seems like no matter what factor I look at, there still appear to be clusters around two basic concepts – xMOOCs and cMOOCs. But we will see if the research ends up supporting that.

So from my viewpoint (and I have no problem if you disagree – we still need research here), there are no hard and fast lines between xMOOCs and cMOOCs. The real distinction between the two is where various forms of power (expert, institutional, oneself, etc.) reside. For example, was any particular course designed around the students as the source of expert power, or the instructor? You can have content in a course that has the student at the center. You can also have social tools in a course that sets the instructor as the center.

Our guiding principle with the DALMOOC was that there is nothing wrong with either instructivism / instructor-centered or connectivism / student-centered as long as the learner has the ability to choose which one they desire at any given moment.

That is also the key difference between our goal with course design and how most other blended xMOOC/cMOOCs are designed. Most blended MOOCs (bMOOCs? Sounds like something from the 80s) usually still have one option / one strand for learning. The content and the social aspects are part of the same strand that all learners are required to go through. Remember, just adding social elements to a course does not make it a social learning, student-centered, connectivist course (especially if you add 20 rules for the forum, 10 rules for blog entries, and then don’t allow other avenues beyond that). In the same manner, just adding some content or videos or one-way Hangout sessions does not make a cMOOC an instructor-centered, instructivist course.

Our design goal was to provide two distinct, separate layers that allow the learner to choose either one or the other, or both for the whole class, or mix the two in any manner they want. But the choice is up to the learner.

And to be clear, I don’t think there is anything wrong with blended MOOCs. Some are brilliantly designed. Our goal with DALMOOC was just different from the blended approach.

So this goal led to the creation of a visual syllabus to help myself and others visualize how the course works. One comment that arose is that the colors on the tool flow page (explained here) are mixed up: the Quick Helper and Bazaar tools (explained here by George Siemens) are in blue and should be in red. I get that concern, but I think it goes back to my view of the distinction between xMOOCs and cMOOCs. The red color is not “social only” and the blue color is not “content only,” as some would classify the difference between cMOOCs and xMOOCs. The colors are about where the expert power lies. Quick Helper might have social aspects to it, but the main goal is to basically crowd-source course help when learners are trying to understand content or activities. And it is a really cool tool – I love both Quick Helper and Bazaar (and ProSolo, but the orientation Hangout for that one is coming up). But the focus of Quick Helper is to help learners understand the content and instructor-focused activities (again, nothing wrong with that since the choice is up to the learner to give that expert power to the instructor). In the same way, the Bazaar tool is social, but has a set of prompts that are written by the instructor for learners to follow.

I hope that clears that up a bit – the colors indicate where the expert power resides in the design – neither of which are bad in our design. Of course, you as the learner might use these tools differently than that and we are okay with that, too.

The third question is about the irony of using a Google Hangout to explain a student-centered course and then not allowing any interaction. I kind of snickered at that one, because I usually say the same thing at conference keynotes that talk about interactive learning but then don’t allow for interaction. So it sounds exactly like something I would say. Of course, at a keynote, the entire examination of the topic usually happens in that one session, and then the speaker is gone. A course is different, obviously. But in explaining our reasoning on this issue, I would point back to the differences between cMOOCs and xMOOCs and again bring up the point that being student-centered and connectivist does not mean that there are never any times of broadcast from the instructor. A 30-minute Hangout with no interaction fits into a student-centered mindset just fine as long as you don’t see hard and fast lines between paradigms.

But I would also point out that the Google Hangout format is too limited for interaction at scale. You are only allowed 10 people in the actual Hangout. In addition to that, going over 30 minutes gets a bit tedious, and you can’t really do much interaction with learners in 30 minutes even when using the Q&A feature. Not to mention that the 30-minute window is set in stone – if a learner misses it because of work or a different time zone or whatever: “no interaction for you!” Using a Google Hangout for a global course would be like being the ultimate “Interaction Nazi.” We also noticed a 30-60 second lag between live and broadcast, so that also hampers interaction. However, the biggest reason was that we were really looking at ProSolo, Twitter, our Facebook Page, and our Google+ Page as the true avenues for interaction with these Hangouts. Those avenues were active before, during, and after the Hangout for people in any time zone. So the interactivity was there during the orientation sessions, and you actually did see us responding to things from the social channels in both Hangouts. This may change in future Hangouts. The instructors may open up the Q&A function of Hangouts. We’ll see.

So, if you have questions about DALMOOC content or design, be sure to post them to social avenues. Or comment here about this post. I am behind on comments (and blogging) due to the looming course launch, but I will get caught up :)