Using Learning Analytics to Predict Cheating Has Been Going on for Longer Than You Think

Hopefully by now you have heard about the Dartmouth Medical School cheating scandal, where Dartmouth College officials used questionable methods to “detect” cheating in remote exams. At the heart of the matter is how College officials used clickstream data to “catch” so-called “cheaters.” Invasive surveillance was used to track students’ activity during the exams, officials used data they did not really understand to make accusations, and students were then pressured to respond quickly to the accusations without much access to the “proof.” Almost half of those accused (7 of 17, or 41%) have already had their cases dismissed (aka – they were falsely accused; why is this not a criminal act?). Of the remaining 10, 9 pleaded guilty, but 6 of those have since tried to appeal that decision because they feel they were forced to plead guilty. FYI – that is 76%(!) claiming they were falsely accused. Only one of those six wanted to be named – the other 5 are afraid of reprisals from the College if they speak up.

That is intense. Something is deeply wrong with all of that.

The frustrating thing about all of this is that plenty of people have been warning that this is the likely (if not inevitable) outcome of Learning Analytics research that seeks to detect cheating from the data. Of course, this particular research focus is not a major aim of Learning Analytics in general, but several studies have been published through the years. I wanted to take a look at a few that represent the common themes.

The first study is a kind of pre-Learning Analytics paper from 2006 called “Detecting cheats in online student assessments using Data Mining.” Learning Analytics as a field is usually traced back to about 2011, but various aspects of it existed before that. You can even go back to the 1990s – Richard A. Schwier describes the concept of “tracking navigation in multimedia” (in the 1995 2nd edition of his textbook Instructional Technology: Past, Present, and Future – p. 124, Gary J. Anglin editor). Schwier really goes beyond tracking navigation into foreseeing what we now call Learning Analytics. So all of that to say: tracking students’ digital activity has a loooong history.

But I start with this paper because it contains some of the earliest ways of looking at modern data. The concerning thing with this study is that the overall goal is to predict which students are most likely to be cheating based on demographics and student perceptions. Yes – not only do they look at age, gender, and employment, but also a learner’s personality, social activities, and perceptions (did they think the professor was involved or indifferent? Did they find the test “fair” or not? etc).

You can see from the chart on p. 207 that males with lower GPAs are mostly marked as cheating, while females with higher GPAs are mostly marked as not cheating. And since race is not considered in the analysis, the method could easily absorb and amplify systemic discrimination – with incredibly racist results.

Even more problematic is the “next five steps to data mining databases,” with one step recommending the collection of “responses of online assessments, surveys and historical information to detect cheats in online exams.” This includes the clarification that:

  • “information from students must be collected from the historical data files and surveys” (hope you didn’t have a bad day in the past)
  • “at the end of each exam the student will be is asked for feedback about exam, and also about the professor and examination conditions” (hope you have a wonderful attitude about the test and professor)
  • “professor will fill respective online form” (hope the professor likes you and isn’t racist, sexist, transphobic, etc if any of that would hurt you).

Of course, one might say this is pre-Learning Analytics and the current field is only interested in predicting failure, retention, and other aspects like that. Not quite. Let’s look at the 2019 article “Detecting Academic Misconduct Using Learning Analytics.” The focus in this study is a bit more specific: they seek to use keystroke logging and clickstream data to tell if a student is writing an authentic response or transcribing a pre-written one (which is assumed to only come from contract cheating).

The lit review also shows that this is not the only study digging into the idea – it goes back several years through multiple studies.

While this study does not get to the same Minority Report-level concerns that the last one did, there are still some problematic issues here. First of all is this:

“Keystroke logging allows analysis of the fluency and flow of writing, the length and frequency of pauses, and patterns of revision behaviour. Using these data, it is possible to draw conclusions about students’ underlying cognitive processes.”

I really need to carve out some time to write about how you can’t use clickstream data of any kind to detect cognitive processes in any way, shape or form. Most people that read this blog know why this is true, so I won’t take the time now. But the Learning Analytics literature is full of people that think they can detect cognitive activities, processes, or presence through clickstream data… and that is just not possible.

The paper does address the difficulties in using keystroke data to analyze writing, but proposes analysis of clickstream data as a much better alternative. I’m not really convinced by the arguments they present – but the gist is that they are looking to detect revision behaviors, because authentic writing involves pauses and deletions.

Except that is not really true for everyone. People that write a lot (like, say, by blogging) can get to a place where they can write a lot without taking many pauses. Or, if they really do know the material, they might not need to pause as much. On the other hand, the paper assumes that transcription of an existing document is a mostly smooth process. I know it is for some, but it is something that takes me a while.

In other words, this study relies on averages and clusters of writing activities (words added/deleted, bursts of writing activity, etc.) to classify your writing as original or copied. Which may work for the average, but what about students with disabilities that affect how they write? What about people that just work differently than the average? What about people from various cultures that approach writing differently, or those that have to translate what they want to write into English first and then write it down?

Not everyone fits so neatly into the clusters.
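To make the concern concrete, here is a minimal sketch of how this kind of “transcription detector” tends to work. To be clear, this is entirely my own invention for illustration – the feature names and thresholds are assumptions, not the study’s actual model:

```python
# Hypothetical sketch of a keystroke-based "transcription detector":
# summarize a student's keystroke log into a few averaged features,
# then flag writing that looks "too smooth" as copied. All thresholds
# and features here are invented for illustration.
from dataclasses import dataclass

@dataclass
class KeystrokeSummary:
    words_added: int
    words_deleted: int
    long_pauses: int  # pauses over some cutoff (say, 2 seconds)

def looks_transcribed(s: KeystrokeSummary,
                      min_deletion_ratio: float = 0.05,
                      min_pauses: int = 3) -> bool:
    """Flag writing with few revisions and few pauses as 'copied'."""
    deletion_ratio = s.words_deleted / max(s.words_added, 1)
    return deletion_ratio < min_deletion_ratio and s.long_pauses < min_pauses

# A fast, confident writer who simply knows the material cold:
fluent_expert = KeystrokeSummary(words_added=400, words_deleted=10, long_pauses=1)
print(looks_transcribed(fluent_expert))  # True -- falsely flagged
```

The whole method rests on the average: anyone whose honest writing process falls outside the assumed revision pattern gets flagged, which is exactly the problem for the fluent, the practiced, and the atypical.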

Of course, this study had a small sample size. Additionally, while they did collect demographic data and had students take self-regulated learning surveys, they didn’t use any of that in the study. The SRL data would seem to be a significant aspect to analyze here – not to mention some details on the students who didn’t speak English as a primary language.

Now, of course, writing out essay exam answers is not common in all disciplines, and even when it is, many instructors will encourage learners to write out answers first and then copy them into the test. So these results may not concern many people. What about more common test types?

The last article to look at is “Identifying and characterizing students suspected of academic dishonesty in SPOCs for credit through learning analytics” from 2020. There are plenty of other studies to look at, but this post is already getting long. SPOC here means “Small Private Online Course”… a.k.a. “a regular online course.” The basic gist is that they are clustering students by how close their answers are to each other and how close their submission times are. If they get the exact same answers (including choosing the same wrong choice) and turn in their test at about the same time, they are considered “suspect of academic dishonesty.” It should also be pointed out that the lit review here shows they are among the first (or only) people looking into this in the Learning Analytics realm.
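As a rough sketch – my reconstruction of the logic as described, not the authors’ actual code, and with invented threshold values – the flagging method amounts to something like this:

```python
# Hypothetical reconstruction of the SPOC flagging logic: two students
# are "suspect" if they share enough identical wrong answers AND
# submitted within a few minutes of each other. Thresholds are invented.
from itertools import combinations

def suspect_pairs(students, max_time_gap=300, min_shared_wrong=3):
    """students: dict of name -> (set of wrong answers, submit time in seconds)."""
    flagged = []
    for (a, (wrong_a, t_a)), (b, (wrong_b, t_b)) in combinations(students.items(), 2):
        shared_wrong = wrong_a & wrong_b
        if len(shared_wrong) >= min_shared_wrong and abs(t_a - t_b) <= max_time_gap:
            flagged.append((a, b))
    return flagged

students = {
    "ana":  ({"q3:B", "q7:A", "q9:D"}, 1000),
    "ben":  ({"q3:B", "q7:A", "q9:D"}, 1120),  # same wrong answers, 2 min apart
    "cara": ({"q2:C"}, 1100),                  # different mistakes, same time
}
print(suspect_pairs(students))  # [('ana', 'ben')]
```

Notice what the logic cannot see: *why* the answers and times line up – whether collusion, a shared misconception from class, or plain coincidence.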

The researchers are basically looking for students that meet together and give each other answers to the test. Which, yes – it is suspicious if you see students turn in all the same answers at about the same time and get the same grade. Which is why most students make sure to change up a few answers, as well as space out submissions. I don’t know if the authors of this study realized they probably missed most cheaters and just caught the ones not trying that hard.

Or… let me propose something else here. All students are trying to get the right answers. So there are going to be similarities. Sometimes a lot of students getting the same wrong answer on a question is seen as a problem to fix on the teaching side (it could have been taught wrong). Plus, students can have similar schedules – working the same jobs, taking the same other classes that meet in the morning, etc. It is possible that out of the 15 or so they flagged as “suspect,” 1 or 2 or even 3 just happened to get the same questions wrong and submit at about the same time as the others. They just had bad luck.

I’m not saying that happened to all, but look: you do have this funnel effect with tests like these. All of your students are trying to get the same correct answer and finish before the same deadline. So it’s quite possible there will be overlap that is purely coincidental. Not for all, but isn’t it at least worth a critical examination if even a small number of students could get hurt by coincidentally turning in their test at the same time others are?
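A quick back-of-the-envelope simulation (all parameters invented, and simplified so that a “wrong answer” is just a random question/choice pair) shows the funnel effect: fully honest students can still collide on wrong answers and submission times by chance.

```python
# Simulate honest students answering the same test before the same
# deadline, then count how many pairs a naive similarity-plus-timing
# detector would flag. Every flagged pair here is a false positive,
# because no one in this simulation colluded with anyone.
import random

random.seed(0)

def simulate(n_students=100, n_questions=20, n_choices=4,
             p_wrong=0.3, window=3600, gap=300, min_shared=3):
    # Each honest student independently gets some answers wrong and
    # submits at a random time inside the final hour before the deadline.
    students = []
    for _ in range(n_students):
        wrong = {(q, random.randrange(n_choices))
                 for q in range(n_questions) if random.random() < p_wrong}
        students.append((wrong, random.uniform(0, window)))
    flagged = 0
    for i in range(n_students):
        for j in range(i + 1, n_students):
            (w1, t1), (w2, t2) = students[i], students[j]
            if len(w1 & w2) >= min_shared and abs(t1 - t2) <= gap:
                flagged += 1
    return flagged

print(simulate())  # typically a handful of purely coincidental "suspect" pairs
```

With 100 students, roughly 5,000 pairwise comparisons get made – so even a tiny per-pair coincidence probability reliably produces flagged pairs who just had bad luck.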

(This also makes a good case for ungrading, authentic assessment, etc.)

Of course, the “suspected” part gets dropped by the end of the paper: “We have applied this method in a for credit course taught in Selene Unicauca platform and found that 17% of the students have performed academic dishonest actions, based on current conservative thresholds.” How did they get from “suspected” to “have performed?” Did they talk to the students? Not really. They looked at five students and felt that there was no way their numbers could be anything but academic dishonesty. Then they talked to the instructor and found that three students had complained about low grades. The instructor looked at their tests, found they had the exact same wrong answers, and… case closed.

This is why I keep saying that Learning Analytics research projects should be required to have an instructional designer or learning research expert on the team. I can say after reviewing course results for decades that it is actually common for students to get the same wrong answers and be upset about it because they were taught wrong. Instructors and Instructional Designers do make mistakes, so always find out what is going on. It’s also possible that there was a conversation weeks ago where one student with the wrong information spread that information to several students when discussing the class. It happens.

But this is what happens when you don’t investigate fully and assume the data is all you need. Throwing in a side of assuming that cheaters act a certain way certainly goes a long way as well. So you can see a direct line from assumptions made about personality and demographics of who cheaters are, to using clickstream data to know what is going on in the brain, to assuming the data is all you need…. all the way to the Dartmouth Medical School scandal. Where there is at least a 41%-76% false accusation rate currently.

Crash Course in Online Teaching 1: Starting With Theory (Wait, Wait – Give It a Chance! Really!)

With several universities now coming to grips with the fact that they will still be online in the Summer (and most likely the Fall), many are turning to the question of how to train their entire faculty in online teaching in a hurry. There really isn’t one ideal way to do this, but I want to offer up the way I would do it if given the chance to design a Crash Course in Online Learning (insert your Budgie/Metallica song parody here).

I would personally start off with a very basic intro to… stay with me here… learning theory. Wait – don’t click away just yet. Many would balk at the idea of starting with theory and not practical tool/building skills, while learning theorists would cry foul at thinking that a basic intro to learning theory is even possible. But I have found that a few basic concepts taught at a very intro level can help faculty not only understand how they have been taught in the past, but also how they can try new ideas and designs they may have never thought of.

So give it a chance if you are tired of academia and theory, or give me some wide wiggle room for scaffolding if you are deep into learning theory. I’m going to combine and give examples that will make theory easy to digest, but also blur the lines of complexity that exist. Please keep that in mind as we go forward.

First of all, I will point out that I have published and taught this before. I will give the basics below, but if you have 30 minutes to an hour to dig in, there are slightly more explanatory resources out there. The first is the paper that I published called “From Instructivism to Connectivism: Theoretical Underpinnings of MOOCs.” Yes, it is about MOOCs, but the ideas can be applied to any course. Plus, it comes with a worksheet that you can use to plan your course. The second is a video archive of a training session I led last year on the paper. If you prefer video to blog posts or papers, this might work for you:

So, for those that are wanting the summary version, here we go….

Overall Power Dynamics

Out of all the different ways to approach learning theory, I like to focus on power dynamics first when it comes to designing a course. So think about the overall power dynamic you want to see happening in your course. This can change from week to week, but in general most courses stick to one for the most part. The question is: who determines what learners will learn in your course, and who directs how it is learned? There are many ways to look at this, but I like to focus on three different -isms:

  • Instructivism: The course is controlled by an expert instructor, determined and directed by that expert.
  • Constructivism: Constructed self-discovery, sometimes determined by the expert, but usually directed by the learner.
  • Connectivism: Learner-determined and directed by the learner, enhanced by networking (connecting) with others (including other learners, the instructor, online resources, etc).

Yes, these can mix, and you can move between different ones. The main thing to think about is who is determining the overall knowledge and/or skills to be learned, and then who is directing how the learners will learn those knowledge and skills and then how they will prove they learned them.

Again, there are many different ways of looking at these terms, many other -isms you can use, and a lot of ambiguity that I am glossing over here. These three -isms (and how I described them) are just a good place to start for those that are new. Generally you pick one, but also think how elements of others might also be utilized in your course.

Methodology of Course Design

Course design methodology often overlaps with power structures. However, within various power structures, there still is room for different design methodologies. For example, even in connectivism it is still possible to design a course that focuses on transmission of knowledge from experts, even if those experts are not always the instructor.

In this stage, you are thinking about where knowledge and/or skills training comes from, not just who controls the overall power dynamics. Again, there are many, many different ways to look at this. I want to start with three popular -agogies:

  • Pedagogy: many people use this as a catch-all term for all teaching design, but in the traditional sense it has for several centuries meant focusing on knowledge transfer from an expert (Update: please note there are many different ways of looking at the term that have gained traction over the past 80-100 years that I hope to cover in upcoming parts – see viewing pedagogy as a philosophy rather than a theory in fields like critical pedagogy).
  • Andragogy: learners draw upon their experience to connect what they already know with new content / knowledge / skills / etc (some have advocated to use the term “Anthropagogy” in place of Andragogy to be more inclusive, but I use the more common term here).
  • Heutagogy: learners focus on learning how to learn about a particular topic rather than just what to learn. Heutagogy is often seen as a critical response to the limitations of other -agogies.

Typically, you see pedagogy matching with instructivism, andragogy matching with constructivism, and heutagogy matching with connectivism. But other combinations are possible, such as constructivist heutagogy or connectivist pedagogy. There is a chart on page 94 of the article above that explains the nine different combinations and gives examples.

The main idea is that you choose which combination of the two you want for your class most often (even if it changes from time to time). I tend to advocate for a connectivist heutagogical approach most often, as that is what more and more people need in the world today. Rather than memorizing expert facts as determined by the instructor, we need more learners that know how to grow and learn about a topic by connecting with the people and resources that can teach them what they need to know.

At this point, you will start considering what activities and assignments you will be using in your course. It is also good to have some well-written and aligned goals, objectives, competencies, or other standards. I will cover that as a separate post next, but more than likely, you are transitioning a course that already exists on campus into an online course. So I will continue with the theory first, but keep in mind that even if you already have goals and objectives, it would be a good idea to review them after you work through learning theory.

Types of Interaction

Of course, your class will have all types of interaction. However, I have found that once people jump into creating activities and assignments and content first, they leave out interaction until after the bulk of the course activities have been created. At that point, interaction becomes an afterthought or add-on to what has already been created. Which is not ideal.

Thinking through the types of communication that can happen in a course is a good way to proactively plan out different ways to foster interaction as you create content and activities. Most of us think of different types of communication like student-to-student, teacher-to-student, student-to-content, etc. There are already 12 types that have been identified in the literature, but there could be up to 20 emerging. I gave my run-down of communication types that currently exist and how they might change in the future here:

Updating Types of Interactions in Online Education to Reflect AI and Systemic Influence

You will probably want to pick several types of interaction for different parts or times in your course. Again, if you don’t plan for it from the beginning, it may never make it into your goals, objectives, or lesson plans. However, please keep in mind that you can come back and change/add/remove types throughout the design process.

Also, make sure to match the different types of interaction with the methodology and power structure you selected earlier, at least as you initially see them working out in your course. If you don’t have a good match for your previous choices, then you probably need to consider adding some appropriate interaction types.

Communicative Actions

The final theoretical part to think about will probably be something that you consider now, but come back to once you have an idea of what activities you want in your course. But I will cover it here since it is also in the article above, and it helps to think about it from the beginning as well. Once you know the power structure, methodology, and types of interaction you want, you will need to think through the form that various communication acts will take in your course.

There are many different theories of communication – one that I have found works well for instructors is Learning and Teaching as Communicative Actions (LTCA) theory (based on the work of Jurgen Habermas, but created mainly by Dr. Scott Warren – who, for full disclosure, was one of my doctoral committee members). Current LTCA theory proposes four types of communicative actions:

  • Normative communicative actions: communication of knowledge that
    is based on past experiences (for example, class instructions that
    explain student learning expectations).
  • Strategic communicative actions: communication through textbooks,
    lectures, and other methods via transmission to the learner (probably
    the most utilized educational communicative actions).
  • Constative communicative actions: communication through
    discourses, debates, and arguments intended to allow learners to make
    claims and counterclaims (utilizing social constructivism and /or
    connectivism).
  • Dramaturgical communicative actions: communication for purposes
    of expression (reflecting or creating artifacts individually or as a group
    to demonstrate knowledge or skills gained).

As you can see, you will most likely need to mix these during the class – even within each lesson. The goal in this part is not to pick one or two, but to think through how you communicate what is happening in your course. Think through the activities you will have in your course, and then match each with at least one communicative action and power dynamic/methodology combination.

Pulling It All Together

So that is really it for a quick run through some of the basics of theory that can help you begin to design an online course. Like I have said, there are many other theories than those covered here, and deeper/more complex ways of looking at the ones that were covered. This is meant to be a quick guide to just get started, whether you are designing a new online course from scratch or converting an existing on-campus course to an online version. If you looked at the article, you saw that there is a one-page worksheet at the end to help you work through all of these theories in a fairly quick manner. I have also created a Word Doc version, a Google Doc version, and a PDF form version that you can use to fill out and use as you like.

In parts 2 and 3, I want to go back to some topics I have covered before – but for now, here are links to past posts that cover those basics if you can’t wait:

Goals, Objectives, and Competencies

An Emergency Guide (of sorts) to Getting This Week’s Class Online in About an Hour (or so)

Update: I wasn’t clear enough that this is a basic beginner’s way to look at the terms and ideas that are used in learning theory. This will continue to go deeper as I look at other areas of theory in future posts. Some people are not happy that I avoided using the term “Critical Pedagogy” anywhere in the article. I apologize for that. My main thought was that Critical Pedagogy is often classified as an educational philosophy because it puts theory into action, and therefore it would be better to cover it in practical areas like formative evaluation, writing objectives, creating content, etc. Examining power dynamics, who controls communication, and what forms communication takes is one of the foundations of being critical about education, so it is still the foundation of everything in this post.

Building a Self-Mapped Learning Pathways Micro-Lesson: H5P vs Twine

One of the issues that I often bemoan in relation to creating Self-Mapped Learning Pathways lessons is how there really isn’t simple technology that will let you quickly build non-linear, interactive, open-ended content. I have been keeping my eye on H5P, and building a few things with Twine or SAP Chatbots, so I decided to take them all out for a spin in trying to build something that allows for learners to build their own learning pathway.

So how did it turn out? In general, there were some interesting affordances of the tools, but they still don’t get me to where I would like to be with the lesson design. And none of them really did much for the open-ended part. But I did create some OERs that you can use if you like (details at the end). First, some of the process.

SAP Chatbots have some pretty robust tools for creating interactive chats. In theory, I think I could have built everything within a bot, but didn’t get around to it this time because it would have taken some deep dives. I’m also not convinced that a chatbot interface is the way to go, but more about that later. I decided to use a chatbot at a specific point in the learning pathways lesson to help learners think through modality options.

With H5P, I mainly used the Branching Scenario and Course Presentation tools. H5P gives you a more intuitive interface that looks nice (and, we are told, is completely accessible), but very few options for customizing anything. I couldn’t change the look, program variables, or embed things like the SAP Chatbot anywhere in the lesson. So I came up with a way to get around that. It seems to be a good basic option for those that don’t want to get into the weeds of programming variables, but it still is mainly a way to create a Choose Your Own Adventure book. Which is what some call “personalized” these days, even though it’s really not.

With Twine, there were many options to customize, add variables, manipulate code, and embed what you want. I am not sure how accessible everything in Twine is, but it does give you a lot more flexibility for customization. Also, the option to set variables means you can let learners choose some options that would reformat what they see based on their selections. I did a little bit of that, but I need to dig into this some more. Since I could embed more things in Twine, I was able to build the entire lesson from beginning to end in Twine (with a chatbot embedded near the beginning, and an H5P assessment embedded at the end of one modality).

So I ended up with two different versions of the same lesson that will allow you to compare the two options. Before I share those, a few thoughts on building the lesson.

It took a long time to think through the options and build the simple choices that I did below. A lot of this could be attributed to the fact that I was building an entire lesson from scratch. I decided to dig some into Goals, Objectives, and Competencies because so many of my students struggle with these concepts. Someone that already has a complete lesson built would probably save a lot of time on that front.

Also, I will say that I ran out of time to re-record the videos. There are some mistakes and poorly chosen words here and there (like me saying “behaviorist” when I mean “behavioral”). Maybe I will fix that in the future.

Ultimately, it took a lot of time to build the options and think through how to navigate them, while also trying to find ways to get people who choose their own path to the tools they need. This is the open-ended part I still struggle with. It really comes down to this: learners will step out on their own into the garden, or not. I can’t do much to pre-program those options into a system. I could be there in person to discuss their pathways if they needed it, but that is hard to pre-design for. You would spend hours creating the ability for each option, and then maybe have one or two people choose it.

I should point out that this lesson uses a modification of the course metaphor idea that asks learners to choose between the “sidewalk” or the “garden” (or to mix both if they like). The metaphor is based on the botanical garden concept, where sidewalks guide those on pre-determined paths to show the highlights of the garden, while the gardens themselves can be explored as you like by leaving the sidewalk. The sidewalk represents the instructor-centered pathway, while the garden represents the student-centered, heutagogical option.

What I don’t like is the modular way all of these parts feel. I wish there was a way to combine all of the elements so that learners only see one page that re-loads new content based on their input. In other words, instead of a chatbot that tries to mimic human conversation (which some like, but others don’t), why not have a conversational interface that would ask questions and then supply new content, videos, activities, etc based on the learner input?

Plus, chatbots tend to be cloud-based, meaning everything you put in them is stored on someone else’s computer. Why can’t that be a local tool that protects your privacy better?

Anyways, these lessons are some basic ideas of what a self-mapped learning pathways micro-lesson could look like. I still feel there is more that could be done with the garden pathway in using the coding/variables option in Twine. I also utilized some tools like Hypothes.is and Wakelet in the garden modality (just because I like them), but I need to ponder more about how those tools can be utilized as a mapping space themselves.

So here is what I have:

Goals, Lesson, and Competencies Self-Mapped Learning Pathways micro-lesson in Twine

or

Goals, Lesson, and Competencies Self-Mapped Learning Pathways micro-lesson in H5P

The H5P tool does use plain html pages for the first three pages – you will see when the switch happens. Also, the Twine tool still uses some H5P activities for the sidewalk modality assessments at the end of that modality. Since this is a stand-alone lesson, I needed some kind of assessment option and decided to re-use what I had created already.

A few design notes: The Sidewalk modality is designed so that there is always a main option to choose from for those that need the most guidance, but also links to other options for those that want to skip around. My goal is to always encourage non-linear thinking and learner choice in small or large ways whenever possible. In the Twine version of the lesson, if you choose the Sidewalk option, that is what you see. If you go to the Sidewalk + Garden option, then there is code that inserts links back to the Garden section into the Sidewalk. This is some of the customization I would like to explore more in the future. Also, the Garden and Sidewalk + Garden options have some examples and ideas for learners to choose from (basically, custom links to Twitter, Wikipedia, etc. to show specific evolving searches there). This obviously isn’t much, but it is a self-determined option and therefore I didn’t want to offer too much. But maybe it’s not enough?

But, this is a full micro-lesson, and I am designating it as an OER with a CC Attribution-NonCommercial-ShareAlike 4.0 International license for those that want to use it:

  • The videos are on YouTube if you just want to use those.
  • I have created a zip file with all of the html files that you can download and edit. The Twine file in that zip archive (“goals-objectives-competencies.html”) can be loaded into Twine itself and edited as you need.
  • You can also download and update the two H5P files by going either to the full lesson or the assessment portion and clicking on the “Reuse” link in the bottom left corner.
  • The chatbot itself can even be forked and customized by creating an account with SAP and using the fork function on the main page for the bot.

I may even create a badge for those that complete the lesson – who knows? If you want to send a few people through the lesson, feel free to do so with the links above. If you want to send a lot of people through it, maybe consider hosting it on your server. :)

The Great OPM Controversy

So if you have been following OPMs for a while, you are probably asking yourself “which particular controversy are you referring to?” Good point. Over the past week, there has been some controversy over an article by Kevin Carey that takes a harsh look at the pricing and income from online courses, especially related to OPMs. I took issue with the way the article throws all OPMs into the same bucket – Carey mentions 2U and iDesign in the same sentence, but doesn’t cover the massive differences between the two companies. Personally, I have concerns over even labeling companies like iDesign as OPMs, because they don’t offer to take over the entire online program creation process. They serve more as a specialty contract service, a type of company that has existed for a long time in HigherEd and that adds great value when priced right.

(also, full disclosure: I have worked for iDesign in the past as a part-time side gig, and still would if their current employment model allowed for work on nights and weekends).

Carey also falls for the assumption that online courses should be cheaper, something that Matt Reed effectively discusses in his own response (just ignore where he briefly falls into the “MOOC attrition rate” misunderstanding). Despite these two points of disagreement, Carey does raise some legitimate hard questions about OPMs that we as a field should discuss.

Of course, with all of this attention, 2U was bound to respond. Today their CEO Chip Paucek wrote an article for Inside HigherEd. While I am glad that Paucek wants to have a constructive dialogue, there were problems with his response as well. Paucek starts off (after selling his company some) by stating that any real conversation about cost or value in online education has to be “grounded” in four specific principles: quality, access, outcomes, and sustainability (personally, I would add ethics and privacy concerns as well). But those are four good ones, and Paucek states that Carey’s article did not focus on those.

Okay, the quality aspect – as related to costs – he did miss. But access, outcomes, and sustainability are all important aspects of the cost of online education – and by addressing cost, Carey is also focusing on those three aspects. I think it would be more accurate to say that Paucek felt that Carey did not focus on those aspects the way he wanted him to. They were still there, just not in a format that Paucek recognized maybe? Hard to say. But I felt that point was too forced in Paucek’s response. You can’t separate any discussion of cost from access, outcomes, and sustainability.

Paucek goes on to point out that face-to-face returning students typically have to quit their job and lose income to get a degree while still paying for living expenses. Which is still the case in some places, but not as much as it used to be. For example, I earned my Ph.D. while still working full time because the traditional on-campus program I was a part of adjusted their courses to be on nights and weekends. But the point by Paucek is:

Most master’s and doctorate-level students are working adults who historically had to quit a job, and often move, in order to attend a top-tier university for graduate school…. the average actual cost and debt burden of attending a 2U-powered program is significantly less once you factor in ongoing income and the room and board savings, which in some cities can be as high as 25 to 40 percent of tuition.

Which is true – for all online programs. If the 2U partner schools had built their own online programs, this statement would still be true. It’s a bit disingenuous for an OPM to claim a historical benefit of all online / distance education as their own like this. It would be like a website designer claiming they personally are saving clients money by using WordPress, even though WordPress was free long before they started a web design company. Paucek also does this again by claiming that, on average, their partner programs’ students “are more diverse from both a race and gender perspective than students in comparable on-campus programs.” Again, that was typically true of many online programs long before OPMs came along.

Paucek also goes on the attack against schools that want to build in-house capability for online programs, because he sees this as being wasteful of institutional funds. This is partially true and partially not true. Paucek’s point is that

“…it’s also critical to discuss whether it’s reasonable, rational and appropriate for that investment and risk capital to be shouldered exclusively by schools or in collaboration with a strategic partner like 2U…. each one of our program partners would need to invest their own scarce capital and hire in-house talent to expertly deliver what we deliver.”

Yes, it is true that it takes a lot of money to build online programs in-house. But it also takes a lot of money to hire an OPM like 2U. However, here is the counterpoint: you can hire people that are already experts in online course design, online program management, accessibility, privacy, cybersecurity, etc. You don’t have to start from scratch even if you go the in-house route. I know this, because I am one of the many, many experts out there who have the ability to do so. And we are not as expensive as one would think :)

And while Paucek tried to make it seem like it takes 10 years and nearly a billion dollars to develop a quality online program, the truth is that a lot of that went towards building a company – which is different from building an online program. Yes, it does take a lot of time and money to build a quality online program, but it takes a whole lot more to build a national / international company – and those are mostly costs that HigherEd programs will not have to shoulder. To be cliche, it is comparing apples to oranges to make this point. There is some financial overlap between building an OPM and building an online program at an existing institution, but there is a lot that is extra to build a company from scratch.

There are also many other important benefits to building programs in-house that few are talking about. Usually, these programs are built in-house by hiring local talent, which helps local economies. Then there are all of the schools that hire GRAs, GTAs, student assistants of all kinds to help build and administrate and even teach the courses. This helps to empower students by giving them valuable life and employment skills of all kinds. Then there are all of the research articles, blog posts, think pieces, etc that various instructors, staff, and students produce while participating in the process. When these are published through OER models, the additions to the global knowledge space of online learning are immense. Some OPMs participate in some of these benefits, but many keep the whole process behind closed doors to protect proprietary processes and products.

Of course, Paucek’s overarching points that creating quality online courses is expensive, and that we need to have open conversations about the process, are both important. However, I am of the opinion that OPMs should not be the ones hosting this conversation (as Paucek suggests), as the points outlined in this article make apparent. We as the education community have been hosting it, and all disagreements aside, we have been doing a pretty good job of doing so.

Are MOOCs Fatally Flawed Concepts That Need Saving by Bots?

MOOCs are a problematic concept, as are many things in education. Using bots to replicate various functions in MOOCs is also a problematic concept. Both MOOCs and bots seem to go in the opposite direction of what we know works in education (smaller class sizes and more interaction with humans). So, teaching with either or both concepts will open the doors for many different sets of problems.

However… there are also even bigger problems that our society is imposing on education (at least in some parts of the world): defunding of schools, lack of resources, and eroding public trust being just a few. I don’t like any of those, and I will continue to speak out against them. But I also can’t change them overnight.

So what do we do with the problems of fewer resources, fewer teachers, more students, and more information to teach as the world gets more complex? Some people like to just focus on fixing the systemic issues causing these problems. And we need those people. But even once they do start making headway…. it will still be years before education improves from where it is. And how long until we even start making headway?

The current state of research into MOOCs and/or bots is really about dealing with the reality of where education is right now. Despite there being some larger, well-funded research projects into both, the reality is that most research is very low (or no) budget attempts to learn something about how to create some “thing” that can help a shrinking pool of teachers educate a growing mass of students. Imperfect solutions for an imperfect society. I don’t fully like it, but I can’t ignore it.

Unfortunately, many people are causing an unnecessary either/or conflict between “dealing with scale as it is now” and “fixing the system that caused the scale in the first place.” We can work at both – help education scale now, while pushing for policy and culture change to better support and fund education as a whole.

On top of all of that, MOOCs tend to be “massively” misunderstood (sorry, couldn’t resist that one). Despite what the hype claims, they weren’t created as a means to scale or democratize education. The first MOOCs were really about connectivism, autonomy, learner choices, and self-directed learning. The fact that they had thousands of learners in them was just a thing that happened due to the openness, not an intended feature.

Then the “second wave” of MOOCs came along, and that all changed. A lot of this was due to some unfortunate hype around MOOCs published in national publications that proclaimed some kind of “educational utopia” of the future, where MOOCs would “democratize” education and bring quality online learning to all people.

Most MOOC researchers just scoffed at that idea – and they still do. However, they also couldn’t ignore the fact that MOOCs do bring about scaled education in various ways, even if that was not the intention. So that is where we are at now: if you are going to research MOOCs, you have to realize that the context of that research will be about scale and autonomy in some way.

But it seems that the misunderstandings of MOOCs are hard-coded into the discourse now. Take the recent article “Not Even Teacher-Bots Will Save Massive Open Online Courses” by Derek Newton. Of course, open education and large courses existed long before the term “MOOC” was coined… so it is unclear what needs “saving” here, or what it needs to be saved from. But the article is a critique of a study out of the University of Edinburgh (I believe this is the study, even though Newton never links to it for you to read it for yourself) that sought “to increase engagement” by designing and deploying “a teacher-bot (botteacher) in at least one MOOC.” Newton then turns around and says “the idea that a pre-programmed bot could take over some teaching duties is troubling in Blade Runner kind of way.” Right there you have your first problematic switch-a-roo. “Increasing engagement” is not the same as “taking over some teaching duties.” That is like saying that lane departure warning lights on cars are the same as taking over some driving duties. You can’t conflate something that assists with something that takes over. Your car will crash if you think “lane departure warnings” are “self-driving cars.”

But the crux of Newton’s article is that because the “bot-assisted platform pulls in just 423 of 10,145, it’s fair to say there may be an engagement problem…. Botty probably deserves some credit for teaching us, once again, that MOOCs are fatally flawed and that questions about them are no longer serious or open.”  Of course, there are fatal flaws in all of our current systems – political, religious, educational, etc. – yet questions about all of those can still be serious or open. So you kind of have to toss out that last part as opinion and not logic.

The bigger issue is that calling 423 people an “engagement problem” is an unfortunate way to look at education. That is still a lot of people, considering most courses at any level can’t engage 30 students. But this misunderstanding comes from the fact that many people still misunderstand what MOOC enrollment means. 10,000 people signing up for a MOOC is not the same as 10,000 people signing up for a typical college course. Colleges advertise to millions of prospective students, who then have to go through a huge process of applications and trials to even get to register for a course. ALL of that is bypassed for a MOOC. You see a course and click to register. Done. If colleges did the same, they would also get 10,000+ signing up for a course. But they would probably only get 50-100 showing up for the first class – a lot less than any first week in most MOOCs.

Make no mistake: college courses would have just as bad of engagement rates if they removed the filters of application and enrollment to who could sign up. Additionally, the requirement of “physical re-location” for most would make those engagement rates even worse than MOOCs if the entire process were considered.

Look at it this way: 30 years ago, if someone said “I want to learn History beyond what a book at the library can tell me,” they would have to go through a long and expensive process of applying to various universities, finally (possibly) getting accepted at one, and then moving to where that University was physically located. Then, they would have to pay hundreds or thousands of dollars for that first course. How many tens of thousands of possible students get filtered out of the process because of all of that? With MOOCs, all of that is bypassed. Find a course on History, click to enroll, and you are done.

When we talk about “engagement” in courses, it is typically situated in a traditional context that filters out tens of thousands of people before the course even starts. To then transfer the same terminology to MOOCs is to utilize an inaccurate critique based on concepts rooted in a completely different filtering mechanism.

Unfortunately, these fundamentally flawed misunderstandings of MOOC research are not just one-off rarities. This same author also took a problematic look at a study that I helped Aras Bozkurt and Whitney Kilgore with. Just look at the title of Newton’s previous article: Are Teachers About To Be Replaced By Bots? Yeah, we didn’t even go that far, and intentionally made sure to stay as far away from saying that as possible.

Some of the critique of our work by Newton is just very weird, like where he says: “Setting aside existential questions such as whether lines of code can search, find, utter, reply or engage discussions.” Well, yes – they can do that. It’s not really an existential question at all. It’s a question of “come sit at a computer with me and I will show you that a bot is doing all of that.” Google has had bots doing this for a long, long time. We have pretty much proven that Russian bots are doing this all over the world.

Then Newton gets into pull quotes, where I think he misunderstands what we meant by the word “fulfill.” For example, it seems Newton misunderstood this quote from our article: “it appears that Botty mainly fulfils the facilitating discourse category of teaching presence.” If you read our quote in context, it is part of the Findings and Discussion section, where we are discussing what the bot actually did. But it is clear from the discussion that we don’t mean that Botty “fully fills” the discourse category, but that what it does “fully qualifies” as being in that category. Our point was in light of “self-directed and self-regulated learners in connectivist learning environments” – a context where learners probably would not engage with the instructor in the first place. In this context, yes it did seem that Botty was filling in for an important instructor role in a way that satisfies the parameters of that category. Not perfectly, and not in a way that replaces the teacher. It was in a context where the teacher wasn’t able to be present due to the realities of where education is currently in society – scaled and less supported.

Newton goes on to say: “What that really means is that these researchers believe that a bot can replace at least one of the three essential functions of teaching in a way that’s better than having a human teacher.”

Sorry, we didn’t say “replace” in an overall context, only “replace” in a specific context that is outside of the teacher’s reach. We also never said “better than having a human teacher.” That part is just a shameful attempt at putting words in our mouths that we never said. In fact, you can search the entire article and find we never said the word “better” about anything.

Then Newton goes on to mis-use another quote of ours (“new technological advances would not replace teachers just because teachers are problematic or lacking in ability, but would be used to augment and assist teachers”). His response to this is to say that we think “new technology would not replace teachers just because they are bad but, presumably, for other reasons entirely.”

Sorry, Newton, but did you not read the sentence directly after the one you quoted? We said “The ultimate goal would not be to replace teachers with technology, but to create ways for non-human teachers to work in conjunction with human teachers in ways that remove all ontological hierarchies.”  Not replacing teachers…. working in conjunction. Huge difference.

Newton continues with injecting inaccurate ideas into the discussion, such as “Bots are also almost certain to be less expensive than actual teachers too.” Well, actually, they currently aren’t always less expensive in the long run. Then he tries to connect another quote from us about how lines between bots and teachers might get blurred as proof that we… think they will cost less? That part just doesn’t make sense.

Newton also did not take time to understand what we meant by “post-humanist,” as evidenced by this statement of his: “the analysis of Botty was done, by design, through a “post-humanist” lens through which human and computer are viewed as equal, simply an engagement from one thing to another without value assessment.” Contrast his statement with our actual statement on post-humanism: “Bayne posits that educators can essentially explore how to retain the value of teacher presence in ways that are not in opposition to some forms of automation.” Right there we clearly state that humans still maintain value in our study context.

Then Newton pulls his most shameful bait and switch of the whole article at the end: pulling one of our “problematic questions” (where we intentionally highlighted problematic questions for sake of critique) and attributing it as our conclusion: “the role of the human becomes more and more diminished.” Newton then goes on to state: “By human, they mean teacher. And by diminished, they mean irrelevant.”

Sorry Newton, that is simply not true. Look at our question following soon after that one, where we start the question with “or” to negate what our list of problematic questions ask: “Or should AI developers maintain a strong post-humanist angle and create bot-teachers that enhance education while not becoming indistinguishable from humans?” Then, maybe read our conclusion after all of that and the points it makes, like “bot-teachers can possibly be viewed as a learning assistant on the side.”

The whole point of our article was to say: “Don’t replace human teachers with bot teachers. Research how people mistake bots for real people and fix that problem with the bots. Use bots to help in places where teachers can’t reach. But above all, keep the humans at the center of education.”

Anyways, after a long side-tangent about our article, back to the point of misunderstanding MOOCs, and how researchers of MOOCs view MOOCs. You can’t evaluate research about a topic – whether MOOCs or bots or post-humanism or any topic – through a lens that fundamentally misunderstands what the researchers were examining in the first place. All of these topics have flaws and concerns, and we need to critically think about them. But we have to do so through the correct lens and contextual understanding, or else we will cause more problems than we solve in the long run.

Can You Automate OER Evaluation With The RISE Framework?

The RISE Framework is a learning analytics methodology for identifying OER resources in a course that may need improvement. On one level, this is an interesting development, since so few learning analytics projects actually get into how to improve the education of learners. But on the other hand, I am not sure if this framework has a detailed enough understanding of instructional design, either. A few key points seem to be missing. It’s still early, so we will see.

The basic idea of the RISE Framework is that analytics will create a graph that plots page clicks in OER resources on the x-axis, and grades on assessments on the y-axis. This creates a grid that shows where there were higher than average grades with higher than average clicks, higher than average grades with lower than average clicks, lower than average grades with higher than average clicks, and lower than average grades with lower than average clicks. This is meant to identify the resources that teachers should consider examining for improvement (especially focusing on the ones that got a high number of clicks but lower grade scores). Note that this is not meant to definitively say “this is where there is a problem, so fix it now” but more “there may or may not be a problem here, so check it out.” Keep that in mind while I explore some of my doubts here, because I would be a lot harsher on this if it were presented as a tool to definitively point out exact problems rather than what it is: a way to start the search for problems.
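To make the quadrant idea concrete, here is a minimal sketch in Python. The resource names, click counts, and scores are all invented example data, and splitting at the mean is just one possible way to draw the quadrant boundaries:

```python
# Hypothetical example data: per-resource average page clicks and
# average assessment scores (none of this comes from a real course).
resources = {
    # name: (avg_clicks, avg_score)
    "ch1-intro":      (120, 88),
    "ch2-theory":     (340, 91),
    "ch3-methods":    (410, 62),
    "ch4-case-study": (90,  59),
}

def rise_quadrant(clicks, score, mean_clicks, mean_score):
    """Place one resource into a RISE-style quadrant by mean splits."""
    use = "high use" if clicks >= mean_clicks else "low use"
    grade = "high grades" if score >= mean_score else "low grades"
    return f"{use} / {grade}"

mean_clicks = sum(c for c, _ in resources.values()) / len(resources)
mean_score = sum(s for _, s in resources.values()) / len(resources)

for name, (clicks, score) in resources.items():
    print(name, "->", rise_quadrant(clicks, score, mean_clicks, mean_score))
```

In this toy data, “ch3-methods” lands in the high use / low grades quadrant, the one the framework flags for closest review.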

Of course, any system of comparing grades with clicks itself is problematic on many fronts, and the creators of the RISE Framework do take this into consideration when spelling out what each of the four quadrants could mean. For example, in the quadrant that specifies high grades with low content usage, they not only identify “high content quality” as the cause of this, but also “high prior knowledge,” “poorly written assessment,” and so on. So this is good – many factors outside of grades and usage are taken into account. This is because, on the grade front, we know that scores are a reflection of a massive number of factors – the quality of the content being only one of those (and not always the biggest one). As noted, prior knowledge can affect grades (sometimes negatively – not always positively like the RISE framework appears to assume). Exhaustion or boredom or anxiety can impact grades. Again, I am glad that these are in the framework, but the effect these have on grades is assumed in one direction – rather than the complex directions they take in real life. For example, students that game the test or rubric can inflate scores without using the content much – even on well-designed assessments (I did that all of the time in college).

However, the bigger concern with the way grades are addressed in the RISE framework is that they are plotting assessment scores instead of individual item scores. Anyone that has analyzed assessment data can tell you that the final score on a test is actually an aggregate of many smaller items (test questions). That aggregate grade can mask many deficiencies at the micro level. That is why instructors prefer to analyze individual test questions or rubric lines rather than the aggregate scores of the entire test. Assessments could cover, say, 45 questions of content that are well covered in the resources, and then 5 questions that are poorly covered. But the high scores on the 45 questions, combined with the fact that many will get some questions right by random guessing on the other 5, could result in test scores that mask a massive problem with those 5 questions. But teachers can most likely figure that out quickly without the RISE framework, and I will get to that later.
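A toy illustration of that masking effect, using invented per-question data (45 well-covered items and 5 poorly covered ones), shows how a respectable aggregate score can hide questions that most students missed:

```python
# Invented data: fraction of students answering each question correctly.
import statistics

results = {f"q{i}": 0.9 for i in range(1, 46)}          # well-covered items
results.update({f"q{i}": 0.25 for i in range(46, 51)})  # poorly covered items

# The aggregate test score looks acceptable...
avg_test_score = statistics.mean(results.values())
print(f"average test score: {avg_test_score:.1%}")

# ...but an item-level view flags the questions well below the mean.
flagged = [q for q, p in results.items() if p < avg_test_score - 0.3]
print("questions to review:", flagged)
```

The 0.3 cutoff here is arbitrary; the point is simply that the per-item breakdown surfaces the five weak questions that the overall average hides.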

The other concern is with clicks on the OER. Well, they say that you can measure “pageviews, time spent, or content page ratings”… but those first two are still clicks, and the last one is a bit too dependent on the happiness of the raters (students) at any given moment to really be that quantitative. I wouldn’t outright discount it as a factor, but I will state that you are always going to find a close alignment with the test scores on that one for many reasons. In other words, it is a pre-biased factor – students that get a high score will probably rate the content as effective even if it wasn’t, and students that get a low score will probably blame the content quality whether it was really a factor or not.

Also, now that students know their clicks are being recorded, they are more and more often clicking around to make sure they get good numbers on those data points. I even do that when taking MOOCs, just in case: click through the content at a realistic pace even if I am really doing something else other than reading. People have learned to skim resources while checking their phone, clicking through at a pace that makes it seem like they are reading closely. Most researchers are very wary of using click data like pageviews or time spent to tell anything other than where students clicked, how long between clicks, and what was clicked on. Guessing what those mean beyond that? More and more, that is being discouraged in research (and for good reason).

Of course, I don’t have time to go into how relying on only content and assessment is a poor way to teach a course, but I think we all know that. A robust and helpful learning community in a class can answer learning questions and help learners overcome bad resources to get good grades. And I am not referring to cheating here – Q&A forums in courses can often really help some learners understand bad readings – while also possibly making them feel like they are the problem, not the content.

Still, all of that is somewhat or directly addressed in the framework, and because it is a guide rather than definitive answer, variations like those discussed above are to be expected. I covered them just to make sure I was covering all critical bases.

The biggest concern I have with the RISE framework really comes here: “The framework assumes that both OER content and assessment items have been explicitly aligned with learning outcomes, allowing designers or evaluators to connect OER to the specific assessments whose success they are designed to facilitate.”

Well, since that doesn’t happen in many courses due to time constraints, that eliminates large chunks of courses. I can also tell you as an instructional designer, many people think they have well-aligned outcomes…. but don’t.

But, let’s assume that you do have a course where “content and assessment items have been explicitly aligned with learning outcomes.” If you have explicitly aligned assessments, you don’t need the RISE framework. To explicitly align an assessment with content is not just a matter of making sure the question tests exactly what is in the content, but also of pointing to exactly where the aligned content is for each question. Not just the OER itself, but the chapter and page number. Most testing systems today will give you an item-by-item breakdown of each assessment (because teachers have been asking for it). Any low course score on any specific question indicates some problem. At that point, it is best (and quickest) to just ask your learners:

  1. Did the question make sense? Was it well written?
  2. Did it connect to the content?
  3. Did the content itself make sense?

Plus, most content hosting systems have ways to track page clicks, so you can easily make your own matrix using clicks if you need to. The matrix in the framework might give you a good way to organize the data to see where your problem lies…. but to be honest, I think it would be quicker and more accurate to focus on the assessment questions instead of the whole test, and ask the learners about specific questions.

Also, explicit alignment can itself hide problems with the content. An explicit alignment would require that you test what is in the content, even if the content is bad. This is one of the many things you learn as an ID: don’t test what students don’t learn; write your test questions to match the content no matter what. A decently-aligned assessment can still produce grades from a very bad content source. One of my ID professors once told me something along the lines of “a good instructional designer can help students pass even with bad textbooks; a bad instructional designer can help them fail with the best textbook.”

Look – instructional designers have been dealing with good and bad textbooks for decades now. Same goes for instructors that serve as their own IDs. We have many ways to work around those.

I may be getting the RISE framework wrong, but comparing overall scores on assessments to certain click-stream activity in OER (sometimes an entire book) comes across like shooting fish in a barrel with a shotgun approach. Especially when well-aligned test questions can pinpoint specific sources of problems at a fairly micro-fine level.

Now then, if you could actually compare the grades on individual assessment items with the amount of time spent on the page or area that that specific item came from, you might be on to something. Then, if you could group students into the four quadrants on each item, and then compare quadrant results on all items in the same assessment together, you could probably identify the questions that are most likely to have some kind of issue. Then, have the system send out a questionnaire about the test to each student – but have the questionnaire be custom-built depending on which quadrant the student was placed in. In other words, each learner gets questions about the same, say, 5 test questions that were identified as problematic, but the specific question they get about each question will be changed to match which quadrant they were placed in for that question:

We see that you missed Question 4, but you did spend a good amount of time on page 25 of the book, where this question was taken from. Would you say that:

  • The text on page 25 was not well-written
  • Question 4 was not well-written
  • The text on page 25 doesn’t really match Question 4
  • I visited page 25, but did not spend the full time there reading the text

Of course, writing it out this way sounds creepy. You would have to make sure that learners opt-in for this after fully understanding that this is what would happen, and then you would probably need to make sure that the responses go to someone that is not directly responsible for their grade to be analyzed anonymously. Then report those results in a generic way: “survey results identified that there is probably not a good alignment between page 25 and question 4, so please review both to see if that is the case.”
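To sketch what that per-item, per-quadrant questionnaire might look like in code: each student gets a prompt about the same flagged question, but the wording is chosen by the quadrant they landed in for that item. The template text, function name, and quadrant labels below are all hypothetical:

```python
# Hypothetical survey templates keyed by (usage, grades) quadrant.
SURVEY_TEMPLATES = {
    ("high use", "low grades"):
        "You spent time on {page} but missed {question}. "
        "Did the text and the question align?",
    ("low use", "low grades"):
        "You missed {question} and spent little time on {page}. "
        "Would more guidance toward the reading have helped?",
    ("low use", "high grades"):
        "You got {question} right with little time on {page}. "
        "Did you already know this material?",
    ("high use", "high grades"):
        "You got {question} right after reading {page}. Was the text clear?",
}

def survey_prompt(quadrant, question, page):
    """Pick the quadrant-specific template and fill in the item details."""
    return SURVEY_TEMPLATES[quadrant].format(question=question, page=page)

print(survey_prompt(("high use", "low grades"), "Question 4", "page 25"))
```

Responses could then be tallied per template to produce the kind of generic, anonymized report described above.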

In the end, though, I am not sure if you can get detailed enough to make this framework effective without diving deep into surveillance monitoring. Maybe put the learner in control of these tools, and give them the option of sharing the results with their instructor if they feel comfortable?

But, to be honest, I am probably not in the target audience for this tool. My idea of a well-designed course involves self-determined learning, learner autonomy, and space for social interaction (for those that choose to do so). I would focus on competencies rather than outcomes, with learners being able to tailor the competencies to their own needs. All of that makes assessment alignment very difficult.

“Creating Online Learning Experiences” Book is Now Available as an OER

Well, big news in the EduGeek Journal world. I have been heading up a team of people working on a new book that was released as an OER through PressBooks today:

Creating Online Learning Experiences: A Brief Guide to Online Courses, from Small and Private to Massive and Open

Book Description: The goal of this book is to provide an updated look at many of the issues that comprise the online learning experience creation process. As online learning evolves, the lines and distinctions between the various classifications of courses have blurred and often vanished. Classic elements of instructional design remain relevant at the same time that newer concepts of learning experience are growing in importance. However, problematic issues new and old still have to be addressed. This book aims to be a handbook that explores as many of these issues and concepts as possible for new and experienced designers alike, whether creating traditional online courses, open learning experiences, or anything in between.

We have been working on this book on and off for three or more years now, so I am glad to finally get it out to the world. In addition to me, there were several great contributing writers: Brett Benham, Justin Dellinger, Amber Patterson, Peggy Semingson, Catherine Spann, Brittany Usman, and Harriet Watkins.

Also, on top of that, we recruited a great group of reviewers that dug through various parts and gave all kinds of helpful suggestions and edits: Maha Al-Freih, Maha Bali, Autumm Caines, Justin Dellinger, Chris Gilliard, Rebecca Heiser, Rebecca Hogue, Whitney Kilgore, Michelle Reed, Katerina Riviou, Sarah Saraj, George Siemens, Brittany Usman, and Harriet Watkins.

Still skeptical? How about an outline of topics, most of which we did try to filter through a critical lens to some degree:

  1. Overview of Online Courses
  2. Basic Philosophies
  3. Institutional Courses
  4. Production Timelines and Processes
  5. Effective Practices
  6. Creating Effective Course Activities
  7. Creating Effective Course Content
  8. Open Educational Resources
  9. Assessment and Grading Issues
  10. Creating Quality Videos
  11. Utilizing Social Learning in Online Courses
  12. Mindfulness in Online Courses
  13. Advanced Course Design
  14. Marketing of an Online Course

So, please download and read the book here if you like: Creating Online Learning Experiences

There is also a blog post from UTA libraries about the release: Libraries Launch Authoring Platform, Publish First OER

And if you don’t like something you read, or find something that is wrong, or think of something that should have been added – let me know! I would love to see an expanded second edition with more reviewers and contributing authors. There were so many more people I wanted to ask to contribute, but I just ran out of time. I intentionally avoided the “one author/one chapter” structure so that you can add as much or as little as you like.

Hybrid MOOCs and Dual-Layer/Self-Mapped Learning Pathways MOOCs: My Perspective on the Differences

A recent tweet from Aras Bozkurt highlights a question we often get about the work we do with dual-layer/self-mapped learning pathways courses (most often in MOOCs, but also starting to bleed over into traditional courses as well):

As soon as we started using the term “dual-layer MOOC” in 2014, people pointed out the similarities between that idea and “Hybrid MOOCs.” These are important points, because the two ideas do share many concepts. However, there are some key differences as well – in my mind at least, differences that exist along various continuums rather than as hard divisions into two distinct ideas.

The original distinction between layers into “instructivist layer” and “connectivist layer” proved to be problematic, as many courses have aspects of both, and learners tend to mix both at different times (if given the choice) instead of choosing one or the other. So I think it is better to look at the distinction as one that focuses on who makes most of the decisions about what to mix together in the course. If most of the decisions to mix together/hybridize the course content and activities lie with the instructor, I tend to look at those as “Hybrid MOOCs,” because it is the MOOC itself that is the hybrid. Even if there are choices (“write a paper or create a blog post or Tweet a thread”) and some of those are connectivist in nature, if those choices are more restricted and designed into the course, I see it more as a Hybrid MOOC. If the learner is more in control of those choices and of how they mix the hybrid layers together, I see it more as the dual-layer concept we tried with DALMOOC. Of course, the layer idea focuses too much on the design, so that is why I now like to refer to those courses as “self-mapped learning pathways” – the focus should be on the pathway that the learner maps instead of the layers.

This is a continuum, of course – with a completely instructor-controlled course on one side (all possible activities, even social/connectivist ones, chosen by the instructor) and a completely learner-driven course (like RhizoMOOC) on the other end. The DALMOOC and HumanMOOC courses I worked with/co-taught lean heavily towards the learner-driven side, for example, while YogaMOOC leaned slightly more towards the instructor-driven side. All of those mix elements of xMOOCs and cMOOCs together in different ways (with RhizoMOOC most likely existing off the spectrum technically, because it was all community driven – but it makes a good frame of reference; in contrast, typical xMOOCs exist off the other side of the spectrum because they are all instructor controlled and usually not that complex).

Additionally, I think an important dimension to look at with these courses is one that would exist on a perpendicular axis: the complexity with which the course organizes or scaffolds the choices for learners. For example, courses like DALMOOC were highly organized and complex – with maps of course structure, activity banks, course metaphors describing what that structure looks like, etc. Other courses like EngageMOOC were less complex in that aspect of the structure, keeping the linear content in place – but learners were told they could do various other activities as they liked. There was some structure there as well, so it was not as far down that continuum as RhizoMOOC would be.

So you would probably end up with a grid like this for explaining where courses fell on these continuums – some courses would probably shift from place to place as the course progresses:

Note: there was no scientific method for where I placed the example courses above – I just took a guess where they seemed to fall by my estimation. Feel free to disagree. The basic idea is that courses that mix various epistemologies tend to exist more on a continuum than at defined poles. Hybrid MOOCs are what I see as courses that lean towards the instructor deciding what this mixture is, and/or what the specific choices for the mixing are. Dual-layer/learning pathways courses are those that lean towards the learner deciding what this mixture is, and/or what the specific choices for mixing are. Either type can do so in more complex or less complex ways depending on the needs of the course.
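To show how the two continuums combine into a grid, here is a tiny sketch in Python. The coordinates are purely my own rough guesses echoing the placements above (as noted, there is no scientific method behind them):

```python
# A rough sketch of the two-axis grid described above. Coordinates are
# guesses, not measurements: x = how learner-driven the course is,
# y = how complex the scaffolding of choices is, both on a 0-1 scale.
courses = {
    "DALMOOC":    {"learner_control": 0.8,  "complexity": 0.9},
    "HumanMOOC":  {"learner_control": 0.75, "complexity": 0.6},
    "YogaMOOC":   {"learner_control": 0.4,  "complexity": 0.5},
    "EngageMOOC": {"learner_control": 0.6,  "complexity": 0.3},
    "RhizoMOOC":  {"learner_control": 1.0,  "complexity": 0.1},
}

def describe(course):
    """Name the half of each continuum a course leans towards."""
    pos = courses[course]
    side = "learner-driven" if pos["learner_control"] >= 0.5 else "instructor-driven"
    scaffolding = "highly scaffolded" if pos["complexity"] >= 0.5 else "lightly scaffolded"
    return f"{course}: {side}, {scaffolding}"
```

A course could also move through this grid over time, which is just a matter of updating its coordinates as it progresses.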

Engaging Your Local Community Online: The Overlooked Hard Work of #EngageMOOC

“What does polarization currently look like in YOUR workplace, or campus, or community…online and off? What resources are you turning to in order to try to deal with it? Is there anything you are currently engaged with that you can share with us?” These questions from the last week of #EngageMOOC are a bit difficult for me to answer. When most people read these, they probably think of things like block walking, or soup kitchens, or community groups, or things that are in our physical communities around us.

I certainly find those things important. My whole family climbed into a car to travel in the pouring rain to a meeting of the local chapter of a political party in the new town we just moved to temporarily… only to find it canceled due to rain. What a bunch of snowflakes!

(it was actually pretty heavy and we should have known better ourselves)

Our attempts to get connected with people in our area have been a bit of a bust, as we keep finding out about activities a day too late, or they get rained out. However, even once we find those activities, they will still be events for a specific political party. Polarization in our area currently looks like everyone doing political stuff with those they agree with, and then not talking about political issues the rest of the time to avoid arguments.

Oh, sure – you ask any Republican if they know any Democrats and they will respond with “I have plenty of liberal friends!” or vice versa for Democrats. This will usually be followed by some statement that indicates they really don’t understand the other side.

A few weeks ago I saw our local HOA representative raving all over Facebook about “silly liberals.” I decided to message him about his activities, how public they were, and how they may make the few liberals in our community feel. Nothing accusatory, just asking him to consider their viewpoint. It was not a hostile conversation through DM, but he was pretty assured there was no harm in his words. Mostly just “liberals do it too” and “I have lots of liberal friends that are okay with it” and so on. I don’t really think I got anywhere with him.

He is now leading a grassroots “community task force” to take a look at security at our community schools – and he has been clear he wants to push for armed teachers like neighboring school districts already have.

You see, the “arming teachers debate” is not theoretical to us in Texas. We have had schools with armed teachers for years now (many of the armed “staff” there are teachers). This is the school district next to ours. People in my child’s school district are now asking “why can’t we have armed teachers like Argyle ISD?” People in Argyle ISD are also not content to just keep it there:

“I see a future where schools will be lumped into two categories. Gun free zones and ones that are not.”

“Argyle ISD and the Chief have done exactly what is needed to protect against the evils and evil people of this world!”

“Where Argyle is now, and where they started, and where they are headed is the future of safety in our world. They are not following, they and leading by example and showing everyone what must be done to protect our children at school.”

“Arming teachers is safety – they will not shoot without reason! Grow up people!!! Welcome to the millennial generation!!!”

To be honest, there really isn’t much I can do to change these people’s minds. But I have gotten through to some through debates on Facebook.

Yes, I said debates on Facebook.

Look, I know I am not going to change the world by debating on Facebook. I know that it is not for everyone. But so many people are so rarely exposed to ideas outside of their comfort zone – that silently reading a debate on Facebook might be the only time they are exposed to opposing viewpoints. You see, I bring up different points not to win the argument, but to expose the larger number of those reading the posts to different viewpoints.

Of course, I am not talking about arguing with “that uncle” on my private Facebook wall. I go to local newspaper and community groups and pages to bring up different views for consideration – from pro-vaccination to stricter gun regulation to transgender bathroom access to Black Lives Matter. Yeah, it’s not exactly what anyone would call “fun.” Usually it goes nowhere. But then there is that random DM from someone telling me I have changed their mind on something. So I know it is getting through in some ways to some people, even though they might not let me know every time.

Look, if my strongly pro-Trump cousin can suddenly come out and post a rant on Facebook about how he is tired of Trump and will no longer vote Republican until they clean up their act… and he is quoting some ideas that I know I posted earlier… you know that I or someone else he follows on Facebook is getting through to him. We can’t just write these people off as extreme viewpoints that will never change. I get that it is hard work to get through to people, especially in online environments. It is not for everyone. But if that is something you feel you can do (and I wouldn’t recommend doing it constantly – I frequently get off social media for days at a time to recover from debates)… don’t feel bad for doing it. Don’t feel like your part is “less than” or “not as hard.” We need people to engage with different viewpoints, especially those where we are standing on an issue of equality or safety that should be the baseline middle point (but has been labeled as “polarized” by others).

Getting Lost in the Four Moves of #EngageMOOC

This week we are looking at what to do about polarization and fake news in EngageMOOC. Our assignment this week was to look at Mike Caulfield’s Four Moves and use it to evaluate a web source. The Four Moves idea is a response to what Mike sees as the inadequacies of other information literacy checklists like CRAAP. Admittedly, these checklists do get long and cumbersome. For many people, this is not a problem. For others, it is. But in the end, my concern is that neither one will help with polarization.

So I am going through the Four Moves idea with common arguments that I often see getting polarized online. To be honest, I really like the Four Moves idea… under certain conditions. I have not read through the longer book that is linked in the post above, so maybe all of this is addressed in there. For now, I will just focus on the blog post. The first step of the Four Moves process (which is not a checklist… even though it technically is :) ) starts off with this:

Check for previous work. Most stories you see on the web have been either covered, verified, or debunked by more reputable sources. Find a reputable source that has done your work for you. If you can find that, maybe your work is done.

So this is great when dealing with a really simple new piece of news, like the example given of “Jennifer Lawrence died.” But the problem quickly becomes: what counts as a “reputable” source? Things like the CRAAP method are supposed to be about helping people determine what is reputable, so I am a bit confused as to how the Four Moves would replace CRAAP when it technically starts after CRAAP is finished (yeah, I am giggling at that too). In today’s polarized climate, people look to very bad websites like Breitbart, The Blaze, and dozens of other extreme left and right organizations as “reputable.” Millions see these websites as “a reputable source that has done your work for you”… even though they aren’t. Then there is the idea of being “debunked.” Of course someone that is anti-vaccination could look at Mercola as “reputable”… but that has been debunked, right? Yes, it has. But then the anti-vaxxers debunked that debunkation (is that a word?). Then the pro-vaccination side debunked that debunkination… and it has been going back and forth for a long time. Years. Decades. There are so many competing debunkinations that it is impossible to keep up with at times. The problem is, everything from the flat earth theory to the alt right to the anti-vaccination movement to the anti-gun control crowd has created an extensive network of websites that cite their own network of research, debunkinators, and reliable/credible sources. The question is no longer “is this a reputable source” but “who do you say the reputable sites are out of all the competing ecosystems of so-called reputable sources”?

Go upstream to the source. If you can’t find a rock-solid source that has done your verification and context-building for you, follow the story or claim you are looking at to its origin. Most stories shared with you on the web are re-coverage of some other reporting or research. Follow the links and get to the source. If you recognize the source as credible, your work may be done.

This flows from the same problem as the one above – going back to the source on most of the issues that polarize us will just end up at competing websites that all claim credibility and research. Even if you pull out Snopes or Politifact or Wikipedia, the response will often be “oh, those are leftist sites and I want something unbiased like Fox News.”

Read laterally. If you have traced the claim or story or research to the source and you don’t recognize it, you will need to check the credibility of the source by looking at available information on its reliability, expertise, and agenda.

Looking at available information on reliability, expertise, and agenda is technically part of CRAAP… but again, some people see all of this through different lenses. When I look at Mercola’s website, I see an obvious agenda from people without expertise and lacking in reliability. But the anti-vaxxers see a website that is full of reliability and expertise, with “no agenda but the truth.” The thing is, if you see a news article questioning the safety of the flu vaccine, you can go through each of these steps and end up on Mercola and deem the flu vaccine as deadly.

Circle back. A reminder that even when we follow this process sometimes we find ourselves going down dead ends. If a certain route of inquiry is not panning out, try going back to the beginning with what you know now. Choose different search terms and try again.

Selecting different search terms on Google will pretty much give you similar results, because Google looks past those terms and gives you what it thinks you want based on past searches. Of course, using CRAAP you wouldn’t make that mistake… but that doesn’t automatically make CRAAP better.

(hopefully you are giggling as much as I am every time I use CRAAP. Oh wait…)

So the thing is, I really like Four Moves in place of CRAAP and other methods… when dealing with someone that would have the same version of “reliable” and “credible” that I do. And I am sure that someone with a very extreme conservative outlook on life would say the same thing… and would not trust me because of my views on what sites are “reliable” (that is actually not hypothetical – my name was released on the “list of worst pro-vaccination trolls” years ago because I have butted heads with so many anti-vaxxers online through the years). Polarization will continue as long as we can’t deal with the core issue that the different sides have a fundamentally different understanding of what counts as “credible, reliable sources.”