So Why Do So Many Learners Drop Out of Open Courses?

One of the big questions surrounding open learning (or more specifically, the massive popular version of open learning) is “why do so many people drop out?” In a lot of ways, I’m not sure we need to find the answer to that question. If admin types at the university level find a formula for keeping students in massive open courses, they will pretty much have the perfect excuse for getting rid of a huge chunk of those pesky overpaid teacher types (sarcasm fully intended on the “pesky overpaid” part).

But at the same time, you have to wonder: if students can look at an entire course (since it is open) and see what they are getting into from the beginning, why are so many dropping out?

Well, to be honest, we all know they probably aren’t looking at the whole class. They are reading the Syllabus or About sections, which probably read about the same as syllabi have for centuries: an abstract of the content along with a summary of expectations for assignments and a few campus policies.

How many times have you read through a list of assignments and still gotten into the class only to find yourself surprised at what you actually ended up doing? For example, you read which assignments were going to be group-based, but you still fell behind when it turned out that those group assignments took up 50% of your work.

Most course designers don’t do a very good job of rating the overall workload for their courses. How much of the course is passive? How much is active? How much is discussion? What percentage is individual? What about group work? I think academia is in dire need of a better system for communicating to students exactly what they are getting into with any course, so that students can predict workload before signing up. Of course, students won’t be able to avoid things they don’t like (such as group work), but they might be able to balance classes so they don’t end up with, say, too much group work across all of their courses combined. This is a system I am currently working on, and I hope to have a solid model for it within the next few years.
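To make the idea concrete, here is a minimal sketch (in Python) of what such a workload classification might look like. Everything here is hypothetical: the category names, the percentages, and the 25% “heavy group work” threshold are placeholders I made up for illustration, not part of any existing system:

```python
from dataclasses import dataclass

# Hypothetical workload categories; a real classification scheme
# would need validated categories, not these made-up ones.
@dataclass
class WorkloadProfile:
    passive: int     # % lectures, readings, videos
    active: int      # % hands-on individual work
    discussion: int  # % forums, seminars, peer interaction
    group: int       # % collaborative projects

    def __post_init__(self):
        total = self.passive + self.active + self.discussion + self.group
        if total != 100:
            raise ValueError(f"percentages must sum to 100, got {total}")

def combined_group_load(schedule: dict) -> float:
    """Average percentage of group work across a planned schedule."""
    return sum(p.group for p in schedule.values()) / len(schedule)

# A student weighing a hypothetical semester schedule:
schedule = {
    "HIST 101": WorkloadProfile(passive=60, active=20, discussion=15, group=5),
    "EDTC 310": WorkloadProfile(passive=15, active=25, discussion=20, group=40),
    "BIOL 205": WorkloadProfile(passive=30, active=25, discussion=10, group=35),
}

load = combined_group_load(schedule)
if load > 25:  # arbitrary "heavy group work" threshold for illustration
    print(f"Heads up: ~{load:.0f}% group work overall is on the heavy side.")
else:
    print(f"Group work across this schedule averages ~{load:.0f}%.")
```

The point of the sketch is the balancing act in that last paragraph: no single course has to change, but a student (or advising tool) could see at a glance when a whole schedule tips too far toward one kind of work.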

To me, this rating system is an aspect of openness that is often missing in open learning. Additionally, a true system for classifying courses would in some way collect feedback to ensure the accuracy of the classification. Current course feedback surveys are usually administered by people who are interested in numbers, grades, and data, but we all know that most students fill them out based on personal feelings about who the “coolest” or “easiest” profs are. So there is probably a huge disconnect between what most schools want from feedback surveys and what they are actually getting. What if we could instead collect data that helps improve the course description itself? Maybe retention and completion numbers would improve almost by default once students had a more accurate picture of what they were getting into before they registered.

So, the basic idea would be to have students rate how accurate the course classification is. Was the amount of group work described in the syllabus accurate? Did the course description reflect what was actually taught? Did the instructor accomplish what they promised? Ultimately, ratings like this could give instructors solid feedback and a clear direction for future course improvements. For example, if you thought your course was mostly based on active learning but the students told you there actually wasn’t much of it, maybe it’s time for a major redesign.
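Again as a purely hypothetical sketch (reusing the same made-up categories as above), the comparison might boil down to something like this: average the student-reported percentages, compare them against what the syllabus declared, and flag any category where the gap is large. The 15-point threshold is another arbitrary placeholder:

```python
from statistics import mean

MISMATCH_THRESHOLD = 15  # arbitrary cutoff, in percentage points

def flag_mismatches(declared, student_reports):
    """Compare the workload breakdown a syllabus declared against what
    students report actually experiencing; return categories to revisit."""
    flags = []
    for category, promised in declared.items():
        experienced = mean(report[category] for report in student_reports)
        if abs(experienced - promised) > MISMATCH_THRESHOLD:
            flags.append(f"{category}: syllabus said {promised}%, "
                         f"students reported ~{experienced:.0f}%")
    return flags

# An instructor who believed the course was mostly active learning,
# plus two (made-up) end-of-course student reports saying otherwise:
declared = {"passive": 20, "active": 50, "discussion": 20, "group": 10}
reports = [
    {"passive": 60, "active": 15, "discussion": 15, "group": 10},
    {"passive": 55, "active": 20, "discussion": 15, "group": 10},
]

for flag in flag_mismatches(declared, reports):
    print("Consider redesigning:", flag)
```

Unlike the usual “rate your prof” survey, this kind of feedback is anchored to specific, checkable claims from the syllabus, which is exactly the disconnect described above.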

How this looks exactly is hard to say, but I will be making this idea a major focus of my grad work, since my faculty adviser thought it was a great idea. We’ll see where this ends up going.