A Template, a Course, and an OER for an Emergency Switch to Online

So the last few weeks have been… something. Many of us found ourselves in the rush to get entire institutions online, often with incredibly limited resources to do so. I’ve been in the thick of this as well. Recently I shared some thoughts about institutions going online, as well as an emergency guide to taking a week of a class online quickly. I would like to add some more resources to the list that we have been developing since those posts.

First of all, I would like to repeat what many have said (and what I tried to emphasize in that first post): take care of yourself, your family, and those around you first. Don’t expect perfection from yourself. Practice self-care as much as possible (I know that’s easier said than done). Then make sure to take care of your students as well. Communicate with them as much as possible, be flexible, remember that many aspects of their lives have been suddenly upended, and above all, make sure to be a voice of care in these times.

I also know that at some point, you will be expected to put your course online and teach something, whether you think it is a good idea or not. So for those that are at that stage, here are some more resources to help.

First of all, I am working with some other educators to put together a free course called Pivoting to Online Teaching: Research and Practitioner Perspectives (I didn’t really like the word “pivot,” but I was overruled). It is a course that you can take for free from edX, but for those that don’t want to register, we have been placing all of the content on an alternative website that requires no sign-up. Lessons are being created in H5P (remixable) and traditional HTML format. Archives of past events are also being stored there. We are halfway through Week 1, so there is plenty of time to join us.

As part of that course, I created a module template for an emergency switch to online. This is basically a series of pages that work together as a module that you can copy and modify to quickly create course content. It tends to follow many of the concepts we are promoting in the class (Community of Inquiry, ungrading, etc), but it can also easily be modified to fit other concepts as well. I basically went through my earlier post “An Emergency Guide (of sorts) to Getting This Week’s Class Online in About an Hour (or so)” and followed it in making a Geology module. Then I added some notes in red to talk about options and things you should think about if you are new to this. You can find the Canvas / IMS Common Cartridge version in the Canvas Commons, where it can be imported into Canvas or downloaded and imported into systems that support IMS Common Cartridge. However, since there are other systems that don’t use either of these formats, I also made a Google Docs version as well as a Microsoft Word version for download.

And finally, the OER – our book Creating Online Learning Experiences: A Brief Guide to Online Courses, from Small and Private to Massive and Open is still available through Mavs OpenPress in Pressbooks (with Hypothes.is enabled for comments as well). I want to highlight a few of the chapters:

Of course, I like the whole book, so it was hard to pick just a few chapters, but those are the ones that would probably help those getting online quickly. When you get more caught up, I would also suggest the Basic Philosophies chapter as one to help guide you as you think through many underlying aspects of teaching online.

An Emergency Guide (of sorts) to Getting This Week’s Class Online in About an Hour (or so)

With all of the concern the past few weeks about getting courses online, many people are collecting or creating resources for how to get courses online in case of a last-minute emergency switch to online teaching. Some are debating whether to call it “emergency remote teaching” (or some variation of that) instead of actual “online teaching.” I agree with the difference, but don’t think that the academic definitions of either one really bring about much change in the practical work of getting online.

There are many problematic issues to address that many are not talking about. Accessibility, student support, and social support structures that schools provide don’t always switch online so well. Some students are even being kicked off campus, with little mention of where they will go if they don’t have a place to go this early, whether they can afford to get where they need to go, and whether the environment they end up in even allows them to learn online. On top of all of that, few are talking about the difficulty and chaos that going online will create.

Of course, a lot can be said about whether closing schools or going online instead of canceling is a good idea. All are good questions to ask. But a lot of people out there have already been forced to go online whether they agree or not, and many more will be forced to do so in the coming weeks. So we have to talk about practical steps for those that are in this predicament.

There are many okay-to-good guides out there for switching online. Most of them tell faculty to examine their syllabus to see what can and can’t go online. This is a good first step, but it often ends up being the last step mentioned in this process. There need to be some quick and blunt guides for what it actually means to examine your syllabus. So I am going to dive into that here.

Most Instructional Designers will be able to put a week’s worth of a class online in a very short amount of time IF given free rein to apply effective practices focused on the bare minimum needed and a complete set of content based on those principles. Once IDs start getting away from that – adding in time-consuming online options that faculty love but that are not absolutely necessary, or waiting for faculty to get them content – the time to create a class increases quickly. However, if you are willing to focus on the bare bones of good online course design, there are many things you can learn from IDs in a pinch.

As I go through this, I will be addressing accessibility issues along the way. The main thing to keep in mind is that media (mainly video, but also images) is easy to make accessible (due to built-in alt tags and captioning features), but also the most time-consuming. Auto-captioning usually doesn’t cut it. You will still need to manually read through and correct any mistakes by hand. The more you can get away from relying on video or video services, the less time it will take to prepare course work (in general).

The first step in going online is to talk with your students about what that would mean before you are forced to make the switch. Talk with them about what it takes to learn online. Have them go through your syllabus and brainstorm ideas for how to transition your objectives to online. Give them the freedom to suggest changes to objectives, or to even think of different activities to meet objectives. Ask them to talk to you privately if they don’t have Internet access at home, or if they need other support services. Make sure they all have a way to check in with you (just so you know they are okay), and a back-up method or two in case the main communication method is not working well (or goes down).

If you have already been forced online, or there will be no class meetings between now and when the switch happens, you will need to think through this yourself. Of course, thinking through this yourself might help you guide the discussion with your students – so do it either way.

Content Creation

The first thing to ask yourself is how new information/content/etc will be communicated to students:

  1. One-Way Communication: Typical lecture method, where you share the new information that students learn. This is the easiest communication to make accessible, but captions could still be time-consuming if you are relying on video (especially longer videos). If your goal is one-way communication, you don’t need synchronous video tools, even for questions (students can contact you for that, email them, comment if you use a blog, etc). Also, note that this type of communication can be made to work on mobile devices more easily.
  2. Discussion Between Instructor and Students: If you really want the ability to interact, and not just answer questions, you do need to look at tools for interaction. Discussion forums are the most accessible (but a little less human), while video conferencing tools are more problematic in regards to accessibility. For example, people with various hearing issues report that Zoom’s accessibility tools start to fall way short of ideal once four people get on a session. So if you really need this, you might want to consider using small group structures that can use a variety of tools (even a phone). That bleeds over into another communication modality:
  3. Students Communicating With Each Other: Yes, this would include small group discussion. However, also consider how you can encourage students to use your class as a support network. Don’t just lock-down class tools to only be used for class activities. Help students get that human connection they will start to miss once social distancing sets in. This communication modality can be very effective for mobile devices and accessibility needs if you can be flexible about tools and structures.

Here is the thing that will save you the most time: If it were up to me for a class I was teaching, I would not try to schedule meetings for online lectures or even record videos of my lectures to get those online. It is possible to do that well when going online, but time consuming and problematic in regards to accessibility. Even typing out my lecture for the week can take a while. I would go straight to the Internet:

  • With so much out there, you can probably find articles, blogs, websites, etc that contain the content you need in a 15-30 minute search (or less if you already know of some sources).
  • Then use this link to see how to set up a really fast accessibility check tool in your browser. Use this tool on each source you find.
  • Be careful of video sources – make sure they have accurate captions.
  • Then take another 15-30 minutes to create a content page or blog post that lists each source and adds any core concepts you couldn’t find. This will be the most accessible form of content to make, as well as mobile friendly, as long as the services you use are accessible and mobile friendly.
  • This is a great way to get a wider array of perspectives on course topics than a textbook usually provides. But check to make sure you have diverse perspectives – if your list relies heavily on white Western heterosexual cis males, then you will need to change the parameters of your search to be more inclusive.
  • If needed, you can print articles out on paper for those without Internet access (and hope there is still a functioning way to get the pages to them).

Obviously, if you have a textbook that you base your lectures on, then you already have a source of content that you can write up some notes on. This won’t necessarily be the most diverse perspective, but it will be quick. Just be sure to think through issues like students that couldn’t afford the textbook (ahem – OER?), whether they can access the eBook texts at home, and so on.

Even more advanced quick method: Turn to some student-centered design methodologies to make the course more engaging:

  • Spend the 15-30 minutes creating an activity for your students to go find the content for the week (online, at a library, etc).
  • Towards the end of the week, create a page or blog post collecting those sources with your commentary.
  • Put some time into creating something more than just “go find content!”
  • Think through how to address accessibility issues, as well as how students that don’t have Internet will find content. Be flexible on that last one – not every student can just go to the local library when they want (and what if libraries close?).

Be ready to have to use the mail service for some students if you have to, and don’t worry too much about deadlines if so.

If you really need to use video, you will need to have well-edited captions. This can take a while. There are really three main options:

  • Option 1: Record your video, upload for auto-captioning, and then edit the captions for errors. Not all video upload services allow all of this, so check ahead of time. This will probably sound the most natural, but you will also probably be surprised at how much time you waste going “ummm…” or “let me start over….”
  • Option 2: Type out your content and read from the page, without worrying about the way it makes you look “stiff.” You don’t usually write the way you talk, so it will just have to come out that way in a crunch. But you will probably stay focused without too many side tracks. If you keep the video length to about 2-3 minutes, you can probably write and record it in 30 minutes, depending on how fast you write.
  • Option 3: Record your video, upload to YouTube or something else that has auto captioning, download the auto captions, edit for mistakes (not style), and re-record the video reading this script. The fact that it was based on your natural speech will make it sound more natural. Plus you had a built in practice run. A better end result (not perfect), but also more time consuming.
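For Option 3, if the video is already up on YouTube, you don’t have to copy the auto-captions out by hand. Here is a minimal sketch, assuming the yt-dlp Python package is installed and the URL points to your own upload – the option names come from yt-dlp’s documented settings, but double-check them against the current docs before relying on this:

```python
# pip install yt-dlp
from yt_dlp import YoutubeDL

VIDEO_URL = "https://www.youtube.com/watch?v=YOUR_VIDEO_ID"  # placeholder for your own upload

options = {
    "skip_download": True,        # we only want the caption file, not the video itself
    "writeautomaticsub": True,    # grab YouTube's auto-generated captions
    "subtitleslangs": ["en"],     # language of the auto-caption track
    "outtmpl": "week1-lecture",   # saved as something like week1-lecture.en.vtt
}

with YoutubeDL(options) as ydl:
    ydl.download([VIDEO_URL])
```

Open the resulting .vtt file, fix the mistakes (not the style), and use that as the script for the cleaner re-recording.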

Activity Creation

This part of the course could possibly be the easiest part to create, or the hardest part, depending on your topic. If you have extensive lab requirements that can’t or aren’t simulated online, you will need to get together as a department and figure out how to translate that into the online space. Unfortunately, there is no quick way around that.

Also, keep in mind that you don’t have to come up with a project or test every week. Sometimes projects take more than one week. Sometimes it is good to just relax and learn. Plus, tests are a problematic concept to begin with, and proctoring solutions will be hard to implement when a lot of people start staying home in the same house. So the more you can move away from big high-stakes tests, the better the online experience will be for you and your learners.

Here is the thing that will save you the most time: One thing that is very cliche but effective is to use your LMS test tool to create a low-stakes understanding check:

  • 5-10 questions that cover the core things students should have learned for the week.
  • Give students unlimited attempts so that learners can retake it as needed to get all the questions right.
  • This is not the coolest online design method, but it does give students some relief to know they are on the right track.
  • It should only take 20-30 minutes to create 5-10 questions… if your focus is on making sure students have had some contact with core concepts, and not on trick questions or “gotcha!” fake answers.
  • The goal with this kind of activity is not to catch cheaters, but to help students know what you think is important.
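If your campus runs Canvas and you (or an instructional designer helping you) would rather script this kind of understanding check than click through the quiz tool, the public Canvas REST API can do it. This is only a rough sketch, assuming a hypothetical API token and course ID – confirm the exact endpoint and parameter names against your institution’s Canvas API documentation:

```python
import requests

CANVAS_URL = "https://your-institution.instructure.com"  # placeholder Canvas instance
COURSE_ID = 12345                                        # placeholder course ID
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}     # token generated in your Canvas profile

# Create a practice quiz (no grade book column) with unlimited attempts.
quiz = requests.post(
    f"{CANVAS_URL}/api/v1/courses/{COURSE_ID}/quizzes",
    headers=headers,
    data={
        "quiz[title]": "Week 1 Understanding Check",
        "quiz[quiz_type]": "practice_quiz",  # keeps it low stakes
        "quiz[allowed_attempts]": -1,        # -1 means unlimited attempts
        "quiz[published]": "true",
    },
).json()

# Add one question; repeat for each of the 5-10 core concepts of the week.
# (Answer options are omitted here; add them via the question[answers] parameter or in the Canvas UI.)
requests.post(
    f"{CANVAS_URL}/api/v1/courses/{COURSE_ID}/quizzes/{quiz['id']}/questions",
    headers=headers,
    data={
        "question[question_name]": "Core concept 1",
        "question[question_text]": "Which of these best describes ...?",
        "question[question_type]": "multiple_choice_question",
        "question[points_possible]": 1,
    },
)
```

Either way – UI or API – the point is the same: a quick, low-stakes check on core concepts, not a gotcha tool.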

Even more advanced quick method: Really what I would focus on is creating authentic / experiential / etc projects that allow learners to engage more deeply with what they are learning:

  • Think of something that would allow learners to apply the course content to their real lives.
  • Think of something that would also let them apply course knowledge to a real world situation.
  • Let students think of how they will communicate what they have learned. Don’t limit them to just what you think they should produce (like a paper).
  • If you are spending more than 15-20 minutes writing out the instructions to this, you are over-thinking it.
  • If you spend less than 5 minutes writing instructions, you are probably not giving enough guidance on how to do the project to students who may be new to this level of agency. Remember the students that are new to all of this.
  • Provide 3-4 example ideas of how students could complete the assignment. Don’t worry too much if several students use your idea. Think of some outside-the-box ideas, like skits, graphics, etc.
  • Remember flexibility and accessibility. If you have to accept a hand-written project sent through the mail – or maybe even one transcribed by a sibling or a spouse – then no problem. Just be glad they are learning.
  • It’s best to grade these in more of a general way. Don’t get bogged down in exact point totals for every mistake. Consider ungrading if possible in your institutional structure.
  • For larger classes, have students self-organize into groups based on interests and/or desired artifact to create. Don’t forget that students might need help self-organizing online or at a distance. Again, remember those with accessibility issues, or internet access issues.

At this point, I would have spent about an hour creating my class for the week. This would have been 30 minutes researching and writing a blog post containing the week’s content (FYI – this blog post is waaaay too long for that, so don’t follow my example :) ), and 30 minutes creating a student-centered, authentic, open-ended project. And this project would take students 3-4 weeks to complete. If you are new to creating classes this way, your first time or two at doing this might take longer (especially if you are new to the tools you will use). Also, if you need other things like video or labs, that will take more time than this. But there is something else to plan for as well.

Course Communications

I have been touching on communication issues throughout this post, so I will try not to repeat what I have already stated here. You will need to communicate other things like class norms and online etiquette outside of content and activities. While it may seem like I am against synchronous communication methods here, that is not the case. You can use both. You will just need to consider how to use synchronous tools in ways that address accessibility and internet access issues, as well as make sure the tools work well on mobile devices (which is hard to do for everyone, since a lot of that is personal preference).

Like I mentioned in a previous post, scheduling synchronous sessions can be tricky at best in shutdown / quarantine situations. Yes, you can do the “we will record it and you can watch later if you miss” thing, but that is also problematic. Some of the reasons that cause people to miss – like overwhelmed internet service – will prevent the same people from watching the recorded video. They will also feel a bit left out.

But here is what I would do for communication in a sudden switch to online:

  • Talk to students beforehand if possible. Create a plan.
  • Use synchronous tools for open communal office hours.
  • Set up alternate options for communication – phone, email, even mail if needed.
  • Send out an email, text, mass phone call, or something weekly. This is a good way to humanize your online learning – stick with those principles as much as possible.
  • You don’t have to totally avoid video if you don’t want to. Sometimes a quick 1-2 minute “welcome to the remote version of class” video can help. Just get the captions right first!
  • Students might be new to learning online. If your school has a Code of Conduct for online interactions, make sure to follow that and make your students aware of it. If your school doesn’t have one, or has one that you feel is not adequate enough, consider creating your own Code of Conduct for your class.

Every time I edit this post, I add a bunch of new sentences all over the place. There is a lot more that could be said, but I will stop here. I hope that this post gives faculty the idea that they can focus their classes on what works in online learning, not just re-creating the face to face class. Also, I hope this empowers you to save some time in the process, without sacrificing effective practices in the name of an emergency. If you have access to an Instructional Designer to help, please talk to that person even if you have read this post. They can give you even more specific advice related to your unique course needs than this post can.

Let’s Talk About “Shifting an Entire University Online as Disaster Preparedness”…

With the rapid spread of Covid-19 (aka “the Coronavirus” in shorthand for now), there has been an explosion of discussions about preparing for quarantines and societal closures of various kinds. Among these discussions are moving conferences, courses, and even entire institutions online. I have been tweeting about this for a few days, so I wanted to collect and expand some of my thoughts on this topic.

Since I have a Masters in Instructional Design and a Ph.D. in Learning Technologies, we spent many assignments in courses for both degrees discussing various benefits and pitfalls of online learning, and yes, switching to online in the case of an emergency was frequently covered. It’s a complicated and problematic idea, so this will be a bit of a dark, complex, and rambling post.

First of all, let me start off by stating that no matter how well you plan, switching to online will be more chaotic and hard than you can imagine, and it will cause greater damage to disadvantaged students than you will probably notice. Your first and foremost duty is to consider your disadvantaged learners first, and to work on navigating chaos rather than trying to stop it. Because you won’t avoid chaos. Remember – it is called DISASTER preparedness for a reason.

Many disaster preparedness plans I have seen, as well as many conference and institutional reactions to Covid-19, seem to focus only on able-bodied younger people – those who are not older, immuno-compromised, living with those that are older or immuno-compromised, already affected by food insecurity, homeless or on the edge of homelessness, affected by digital redlining, dealing with disability, held back by systemic discrimination and intersectionality, and so on. The reactions are only taking into account young able-bodied people living with other young able-bodied people, with maybe a link to an external resource that mentions everyone else (occasionally in passing). This is not going to cut it. Please keep your entire population in mind, not just those that will probably be okay no matter what you do.

Next, you need to realize that even if a conference or university can pivot to online, there are still institutional / organizational barriers to the overall idea of online. Some just won’t pivot because they are against online in general. Institutions that do not allow remote work options are usually holding on to objections that remote-friendly institutions have already worked through; at this point, being against remote work is usually an institutional preference, and that will likely carry over to online courses as well. Therefore, don’t assume some big switch will happen in those situations. Besides, how will institutions switch courses over to online if they don’t already have the procedures in place for their employees to go online?

Where I work for my day job, we already allow remote work, and already have a robust array of online courses. We have also been providing LMS shells for every course section, to be utilized even in on-campus courses. The structure is already there for the switch. Yes, I know LMSs are not hugely popular right now – I’m with you on that. But the important issue is that there is a space online there already for every class. It could be in WordPress blogs or many other tools for all that it matters.

But I guess we should talk first about how there are different types of disasters at different levels. There aren’t really any hard lines between these categories (I kind of made them up on the spot)… but in general, you see a few different kinds (and probably more than these):

  • Individualized/localized disasters: This is anything from one or more people getting sick in an atypical way (for them, at least), to something that affects only certain people in one or some locations. Tornadoes are an example of that – living in Texas, we frequently have to consider plans for how to adjust courses based on the fact that a portion of our class will be affected and the other portion won’t. These plans often have to create more flexibility for students. But these disasters can also happen out of the blue, and knock portions of your courses out of sync.
  • Displacement disasters: These can affect entire cities, regions, states, or even countries. There might be some warning, but the situation does come about very suddenly. Things like hurricanes, floods, and other mass displacement events. In general, your students’ first priority will not be education. They will need to get out, find shelter, food, water, etc. Usually, this will call for cancellation, postponement, etc. I teach online courses for UT Rio Grande Valley. When hurricanes were heading for the valley, we had to postpone even the online courses. People are usually fleeing something, so don’t plan to switch to online right away – or maybe even ever. Do look for ways to find out where your students are and how they are doing. Don’t just lock down the campus and say “good luck!”
  • Quarantine/lockdown/closed borders disasters: To be honest, this is the one that most of us did not think much about in the U.S. until Covid-19. On a global scale, it is probably more common than we realize. Neighborhoods, cities, states, etc could be quarantined, closed down, blocked off, etc due to disease, civil unrest, climate change displacement, even economic issues. Some of these might happen suddenly, while others might happen on a slower basis with time to prepare. I think some institutions believe you can just switch your face-to-face courses to video conference tools and be done with it, but it is really much more complicated than that.

Something to remember: not all disasters bring about changes for all learners. Some are already living in disaster conditions. We tend to make disaster plans with the stereotypical “traditional” student in mind – young, flexible, and financially stable people that are so focused on education that they will skip meals or sleep to study more. These imaginary students are also probably perfectly flexible in our minds, so our first plan is that education has to be switched online right away to help them. The truth is, many of our students are already not sure where they will get their next meal or place to sleep. While you are thinking about how to adjust your class for disaster preparedness, why not consider how those changes could go ahead and happen at your institution – as a way to help those that are already in a disaster situation personally?

So… how to make the switch to online – if you are at one of the places that will do that? First of all, I’m focusing mostly on higher ed here, which might also work for High School classes as well as maybe some Junior High contexts. I’m not sure I would recommend K-6 going online – but if someone has found a way to do that without leaving behind disadvantaged students, I am all ears.

First I want to touch a little bit on what will likely happen IF some try to make the switch. I realize that more schools have some kind of “sudden switch to online” plan than many of us think, so it is possible that some schools might go ahead and make that switch. These kinds of plans were a thing for a while, so I know they are out there. Some of those plans are inadequate (probably mainly from becoming outdated), but also probably based on some popular faux-futurist scaremongering and not true trend analysis. But that depends on a lot of things, so that may not be the case for your plan. If the plan focuses on one big, easy solution – it’s not going to work that well. Look for one that is realistic with the idea that “it’s gonna be rough, here are several possible options and ideas.”

Even those well-thought-out plans cannot account for everything, so a sudden or relatively slow-moving switch to online will be mass chaos whether a school has no plan, has an adequate plan, or has a detailed plan. It’s just that the detailed, flexible plans will make people realize there will be chaos.

The ability to navigate through the chaos will probably depend on the size of your Distance Education / Instructional Design group in relation to the number of classes. Those that have small DE/ID groups will find that even a great contingency plan falls apart without enough people in their DE/ID department. Those with a large DE/ID group will find the chaos much more manageable (even if they lack a solid contingency plan) thanks to having a group of people that know how the tools work, as well as the theory, research, and history of how to use them (and how not to).

Places that have found themselves in need of a mass switch to online have also found that humans can manage chaos to some degree even without a plan, and that the switch can happen… it just won’t be something to brag about. What will likely happen – assuming the typical modest contingency plan & small DE/ID group – is that most faculty will suddenly ditch a lot of what they are doing in their on-campus courses. They will just stick with “the basics” for their switch online (whatever that means to them – more than likely email with attachments and synchronous video conference sessions). Which, of course, raises all kinds of questions about what they really do in their class if so much can be ditched last minute. Many will find out that email will work just the same as the LMS for a lot of what they want to do, which will lead many people to live the reality that the technology should not be the focus of course design, even if they don’t realize it.

(One weird thing down the road that I would predict is that instructors with the overall attitude of “I just do whatever” will probably come out looking like stars, and will probably get several keynotes/TED talks/etc out of it. The lack of structure and planning in their on-campus courses will, unfortunately, work out quite well… for once. Meanwhile, the Instructional Designers and Tech Support People that saved their butts by fixing their mistakes behind the scenes will sit in the crowd, ignored…. but showing up still because that is what we always do…. yes, there is a reason I sound like I am speaking from experience… :) )

But let’s also discuss some ways to at least try and steer things towards some better outcomes, even amidst the chaos. Keep in mind I am talking about possible options that may not work out perfectly every time. Disaster response happens in situations where things like proper design of online courses (which takes a long time to do properly) have to take place in a very short time frame. Nothing is wrong with going with what just works in the moment. These are just some ideas of what to think about and possibly try to incorporate in your plans.

So, if you really are going to go down the path of planning for a mass switch to online, the biggest issue you need to deal with is convincing professors to switch from synchronous methods to asynchronous as much as possible (I always assume everyone knows what is meant by that, but that is not always the case – here is a good summary if you are not totally sure what I mean here). I can’t stress enough how disaster could cause synchronous to break down.

When a disaster strikes, especially something like a quarantine, people lose a lot of control over their schedule. Issues such as “when food arrives,” “when family are available to check in,” or even possibly “when they are able to use the Internet” (uh-oh!) will possibly be out of their control. All parts of life suddenly have to become coordinated, scheduled, and controlled by others, who themselves are most likely doing the best they can to get supplies out with limited staff and mobility. Students will need the ability to work learning around a society that is not fully functioning – not the other way around.

The problem with quarantines is that you suddenly have a massive strain on resources in a matter of hours, one that lasts longer than the usual cyclical strains. Whether the quarantine is short or long term, there will possibly be rationing and rolling stoppages of all kinds of services to offset loads on systems (don’t forget how this could also affect cell phone service when local towers become overloaded). So what are you going to do when a quarter of your students don’t have Internet service when you schedule synchronous sessions, and then when those students have Internet, another quarter is hit by rolling blackouts, and so on? Or what about other issues, like those that suddenly have to change work hours to keep income flowing to themselves or their employer due to societal upheaval?

I know that oftentimes, the solution to this is “schedule the teaching session through Zoom or some other tool, but record it for those that miss.” From experience, I can tell you that more and more will start missing because they can, even if they are available. Then they start to not get the same learning experience as others since they can’t ask questions live, or feel as connected to other learners. I’m not saying “don’t do synchronous meetings ever,” but just consider those are not the best way to do online learning by default. There are additional consequences for those that miss that can harm them in the long run, so why not consider ways to make it equitable for all?

Even in the first disaster scenario I described – the individualized disaster – some people say that they will just get a camera and live stream their class. That might work great, but what if that student is in the hospital, and not in control of their schedule enough to be online when your class meeting happens? Are you expecting the hospital (or family caregivers if they are at home) to re-arrange their schedules around your class? Probably not a priority when someone is getting medical help. Just a thought.

Additionally, don’t forget the technical problems that can arise during a mass migration to synchronous sessions (which many will be familiar with because they also happen in non-disaster situations). For example:

[tweet 1235011409630593024 hide_thread=’true’]

or

[tweet 1234941983791239169 hide_thread=’true’]

…just for a few examples. There are hundreds to consider. The same goes for asynchronous – there are pros and cons to both.

While many know that there are major differences between asynchronous and synchronous course design considerations that have to be accounted for in the switch between those two modalities, sometimes we also forget that even switching from on-campus classrooms to synchronous online sessions is also not that simple.

[tweet 1234944981443514370 hide_thread=’true’]

If you have used Zoom or some other video conferencing tool in an online setting, you know that there are big differences between on-campus and online meetings. Just remember that not all of your students and faculty have used those tools in online learning contexts, so they will need guidance on how to do that. Where I work for my day job, they are already on top of that aspect.

There are also many other issues to consider, and this post is too long as it is. Hopefully you can start thinking through all of the unique complexities that a switch to online would run into. For example, are you prepared to let students that are locked in dorm rooms and bored work ahead in courses/programs to fill their time? Are you going to let people use campus tools to organize supply distribution, news updates, just chat if bored, etc if needed? Are you considering the mental effects of long-term quarantine, and how to address that while still in quarantine? There are so many to think about.

And to those that say “how likely is that to happen?” Well, not very, to be honest. But that is kind of the point of emergency disaster preparedness. You never know until it’s too late, so think about it now.

A final note – I am not trying to draw hard lines between asynchronous and synchronous. Asynchronous does not mean you can’t ever have the option of synchronous class sessions. It’s not always an either/or. Like I have said, there are also things you have to do because that is all you can do in a pinch. These are just some ideas to consider. Keep in mind that there are many great ideas that mix both, like putting students in groups and having those groups meet synchronously. These options can work out very well (especially since smaller groups find it easier to schedule common times). Or another idea – you can (and should) meet with individual students synchronously as well.

Don’t draw hard lines. It will be chaos. Plan to be flexible. Good luck, and Godspeed…

(Top photo by Kelly Sikkema on Unsplash)

Instructure Wars, Private Equity Concerns, and The Anatomy of Monetization of Data

The Annals of the Dark and Dreadful Instructure Wars of 2019

as told by

Matt, the Great FUD Warrior, Breaker of Keyboards, Smacker of Thine Own Head, Asker of Questions That Should Never Be Asked

If you were lucky, you were spared the heartache that came out of nowhere over the announcement that Instructure will possibly be sold to Thoma Bravo, a private equity firm.

Well, it should be stated that the concern from those that always have concerns over these sales announcements was expected. The quick “shush shush – nothing to worry about here” that came in response from people that usually understand the concerns over past Private Equity sales in Ed-Tech (Blackboard being the typical example) was the surprising part.

For the record, the first skirmish actually started when Jon Becker asked what possible outcomes there could be of the sale, Audrey Watters responded with her thoughts on that, and someone made a sexist attack on Audrey’s knowledge of private equity. Also for the record, they initially did not disagree with her point that prices would go up, but they did dispute that Instructure would be broken up and sold off, stating this was impossible because there is nothing about Instructure that could be broken into parts. Many of us pointed out the sexist problems with the way he expressed his opinion (not his underlying opinion about PE), but he dug his heels in. We also pointed out that there were actually many things within Instructure that could be broken apart, but that was apparently grounds for fighting (even though it is true there are many parts of Instructure that could be broken off and sold, and just because Thoma Bravo has a history of a buy-and-build strategy, there is nothing stopping them from still selling off parts that don’t fit their strategy. “Buy-and-build strategy” and “selling off parts” are not mutually exclusive). Within that argument, the idea came up that all of the data Instructure has been bragging about for a few years could either be sold or monetized (more on the important difference there later).

Sometime within the next day, Jesse Stommel made the tweet that really kicked off the main war (I don’t know if he was replying to comments about the value of data in the earlier arguments, or if it was a coincidence). This is going to be a long post, so I am trying not to embed Tweets here like I usually do. In what was an obvious reference to Instructure bragging that their data was core to their value as a company, Jesse made the comment that we can now know that this value they were bragging about has a price tag of $2 billion.

Now, can I just say here – I don’t think in any way that Jesse thought that “Instructure data actually cost $2 billion.” I’m pretty certain he knows that personnel, assets, code, customer payments, etc all are part of that value. It’s just that there was a lot of bragging about data being core to the company value, and a huge gap between the market value (at the time) and $2 billion, and his professional analysis was that data contributed to that in a big way.

Then there was some debate over the value of data in an Ed-Tech company. This was followed by some shooshing and tone policing towards anyone that thought there should be concern over the lack of transparency about data that Instructure has become known for recently, as well as concerns over what could change with new owners. This led to people retreating to their own corners to express their side without having to be interrupted with constant tangential arguments (and there is nothing wrong with this retreating).

Audrey Watters has written her account of the ordeal, which I recommend reading in its entirety first. I am tempted to quote the whole thing here, so really go read it. I’ll wait.

Okay, first I want to clarify something. In my mind, there is a difference between “selling data” and “monetizing data” even though there are obvious overlaps:

  • “Selling Data” is taking a specific set of data (like from a SQL data dump) and selling that to companies that will turn around and sell it to others (which does happen with educational data – more on that later). When someone says “what good is someone knowing that I submitted Quiz 2 back in 2016?”, they are referring to data as a set archive of database rows from a set date. It is kind of looking at data as a crop of apples that were harvested at a specific time. There was concern over this as a possibility, and we will look at that later.
  • “Monetizing Data” is any form of making money directly from creating, manipulating, transporting, etc data. This happens a lot in everyday life, and not all instances of it are bad. The core business of most for-profit LMS companies is the monetization of data – nothing in an LMS works without data. Grade books need data to work. Discussion forums are empty without data. Analytics dashboards show nothing without data. This is kind of looking at data like the fruit of a field of apple trees that keeps growing back after every picking. You could wipe an LMS database of all past data (well, assuming you could find a way to do so without shutting down functionality), but as soon as you turn it back on, the code starts generating a massive set of new data. For many, the main concern with the monetization of data is who controls the data and what will they use it for in the LMS? Will I get manipulated by my own data being compared to past learners’ data without either of us knowing about it?

Now, to be fair, most responses were fairly nuanced between the two “sides” of the war. For the record, my “side” in the great Instructure War of 2019 is that “data has the potential to be used in ways that users may not want, which could include monetization, and both Instructure and their potential buyer are not saying enough about what their plans are.” I think that is close to what many others thought as well, but our position was mainly reduced to “all data bad!” While the other side was reduced to “data has no value so stop worrying!”, I do want to examine the idea of whether educational data can have value (to be sold or monetized) outside of either side.

Instructure’s View of Their Own Data

First, I think it is prudent to start with Instructure’s own view on their data. While it would be hard to reference the amount of bragging they have been doing about the value of data at conferences and sales calls, we only have to look at their own words on their own website to see how they view data.

First there is Project Dig. They start off by proclaiming that they have become “passionate about leveraging the growing Canvas data to further improve teaching and learning.” That passion “became our priority, and over the years we’ve provided greater access to more data and designed new, easy-to-understand (and act on) course analytics.” How can a priority of the company not be a huge factor in what they are worth? This is all under the banner of “we’ve been focused on delivering technology that makes teaching and learning easier for everyone.” Obviously, as an LMS, that focus is their main revenue maker as well. And now data is the priority for that focus.

FYI – the target launch date for their tools that will “identify and engage at-risk students, improve online instruction, and measure the impact of teaching with technology” is…. 2020. Conveniently after the proposed sale date, it seems. Again, how could this priority focus of the company that is improving what they offer to customers not be a huge factor in the current sales price?

But they do recognize that there are some problems with digging into data. What word gets mentioned A LOT in the FAQs about potential issues? (Hint: it involves a word combined with “data” that starts with “s” and rhymes with “felling.”)

Some key highlights from the FAQs:

  • “Will your practices be consistent with your data privacy or security policies?” They say they “are not selling or sharing institutions’ data” – but only because they choose not to. It is important to note that the question about selling is there because they feel they can do just that if they want. But they assure us they won’t. Of course, new owners can change that.
  • “Is this really just my data, monetized?” Basically, they say it is not an example of monetizing your data just because… they choose not to, not because they can’t. The implication still remains that the possibility is real and it is there. They then give examples of how they could. Again, new owners are not limited by this choice of the old owners.
  • “What can I say to people at my institution who are asking for an “opt-out” for use of their data?” This is the core problem many have with monetization of data: feel free to do it, just give me the option to opt out at least. They say a lot, but don’t really answer that question (which is very concerning). It is important to note that they say “Institutions who have access to data about individuals are responsible to not misuse, sell, or lose the data.” Then they say they “hold themselves to that same standard.” Nothing says this couldn’t change with new owners. But how do you “sell” data that is worthless? They seem to think selling data is as possible as misusing it or losing it. It certainly would be a lot easier to just say “no one is out there buying or selling educational data.”
  • (While they do say a lot about openness and transparency, many customers have expressed frustration at some lack in those areas.)

(An important side note in support of Watters’ point earlier: in addition to the main products of the company that could each be broken off and sold, or things like assets, employees, etc, these data projects and services represent even more parts that could easily be broken off and sold if a PE firm so chooses at any point.)

Then I give you – Canvas Data. The doc for this service is really a whole page of ideas for how to monetize Canvas data, along with the existing tools to do it. Which is really the goal of the project: “customers can combine their Canvas Data with data from other trusted institutions.” What it doesn’t quite clarify here is that these institutions include companies that sometimes charge money to create, manipulate, and transport student data inside and outside of Canvas. Many people trust Canvas to vet these companies, but sometimes these arrangements are obscure.

I will give one example from an organization that I think is pretty trustworthy – H5P. H5P does integrate with Canvas for free. However, some activities designed in H5P generate grades (which is student data). If you want to transport that data back into the Canvas grade book, you need a paid account with H5P. This is just one example of how a company can monetize Canvas student data, even down to one small data point.

Now, while I see no reason to distrust H5P, I can’t force students to trust an organization they don’t know. What if they didn’t want grades generated on all these websites (because H5P is not the only company to do this)? What if they were not comfortable with a company profiting on moving around their grades? Or what if they were concerned about what the change in LMS ownership meant for all of this?

Anyways, all of this information is up on their website because Instructure believes that their data is very valuable, and that it can be sold. Why would they not point out that data is worthless by itself? Why would they talk about all of this if people weren’t asking these questions, if entities weren’t asking to buy data?

(And Instructure is not the only LMS doing this. Even Blackboard’s Ultra is already ready to do more with data, to monetize it today: “We’re not just handing you data. We’re surfacing data that matters when it matters most— to foster more personalized interactions and drive student success.” In other words, they are not just doing what other LMSs have always done – managing (handling) your data to monetize it – they are adding value by surfacing it and doing more. They “drive success,” because success sells. If you have ever been to an LMS sales pitch, you know analytics, personalization, success – all of these terms used on the pages I shared are key sales terms to convince organizations to sign the contract.)

The Marketplace for Student Data

One of the contentions of the “LMS data has little to no value” side is that no one is buying or selling student data, as in dumps of past records. It seems that there are existing marketplaces for student data, to the point that someone wrote a journal article on the whole thing: Transparency and the Marketplace for Student Data. “The study uncovered and documents an overall lack of transparency in the student information commercial marketplace and an absence of law to protect student information.” Sounds like a pretty good justification for concern over any student data out there, whether currently under consideration for sale or not. Why is that?

Taking the list of student data brokers Fordham CLIP was able to identify, Fordham CLIP sought to determine what data about students these brokers offer for sale and how they package student data in the commercial marketplace. There are numerous student lists and selects for sale for purposes wholly unrelated to education or military service. Also, in addition to basic student information like name, birth date, and zip code, data brokers advertise questionable lists of students, and debatable selects within student lists, profiling students on the basis of ethnicity, religion, economic factors, and even gawkiness.

That is not all.

Under the Radar Data Brokers

Get ready for this one: there is no evidence that educational data is even staying specifically within a dedicated student data marketplace. This article on under the radar data brokers compiled a list of “121 data brokers operating in the U.S. It’s a rare, rough glimpse into a bustling economy that operates largely in the shadows, and often with few rules.” Most of the entries on the list don’t get into the specific data they collect, so the fact that “education” appears three times on the list for the few that do is concerning:

  • BLACKBAUD INC.
    A “supplier of software and services specifically designed for nonprofit organizations. Its products focus on fundraising, website management, CRM, analytics, financial management, ticketing, and education administration.” (Wikipedia)
  • MCH INC. DBA MCH STRATEGIC DATA
    MCH “provides the highest quality education, healthcare, government, and church data.”
  • RUF STRATEGIC SOLUTIONS
    A marketing firm owned by consumer identity management company Infutor with a focus on travel, tourism, insurance, e-commerce, and education.

These are places already in business, already buying and selling data. If you look at the chart of the attributes (types of data) that Acxiom collects, “education” is one. It’s not hard to believe that – if they don’t already have it – they would be very interested to add “I completed quiz 2 on such-and-such date” to that massive collection on each person.

So Why $2 Billion for Instructure?

The only concrete answer we have now is “nobody outside those privy to the details knows.” There are many speculations out there – some of which started the Instructure Wars. One of the main ones I haven’t touched on, that probably summarizes one side of the Instructure Wars, is that the data adds little to nothing to the value of Instructure (despite their own claims to the contrary), but it is “simple math” getting from the current market value to $2 Billion.

I think there is a point to be made in the “simple math” argument (although I would be careful calling it “simple” or claiming “people just don’t understand” if they don’t agree). I would say that even basic math has to account for the value of data (both the price it can be sold for and the value it can add through monetization). Autumm Caines made the comparison that data is the engine to the LMS car, and you don’t really buy one without the other. In fact, those claiming that the data has no value are accusing Instructure of being the shadiest used-car salespeople in the world: “If you will buy this new fancy car, I will throw in the engine for free!”

However, it seems that different Instructure investors are now disagreeing with the $2 Billion price tag, some thinking it is too low. In fact, they think it “significantly” undervalues the company. I would assume these investors have access to details about the price of the sale, and if the $2 Billion was simple math, I don’t know if there would be much room to disagree. The cost of the code would be tied to the revenue it generates, and therefore would be static and easy to calculate. Various aspects like assets, personnel, and the value of the income from investors are all relatively fixed. Even the future revenue is based on various predictive factors that would be hard to argue.

Seeing that the data is the newest priority of the company, and its value is difficult to calculate, might that be the best candidate for the source of this disagreement? Maybe, but only if it is really, really complex data that leads to complex calculations that could easily be off. And LMS data is pretty straightforward… right?

Well, not so much. Kin Lane took a look at what the public APIs of Canvas reveal about the underlying data, and it’s a doozy. That is another article I could quote in its entirety, so please take time to read it. I know the page looks long, but that is because he lists 1666 data points (!!!) in just the public APIs alone (while pointing out there are many private ones that probably have many more). He also points out how this structure and the value that it brings easily accounts for the $2 Billion price tag and more, especially when combined with the costs of code and people and customers and so on.
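If you want a concrete feel for how much of that data surface sits behind ordinary API calls, you can walk a small slice of it yourself. A minimal sketch, assuming a hypothetical token and a Canvas instance you are authorized to query (pagination is ignored for brevity; the endpoints are from the public Canvas REST API):

```python
import requests

CANVAS_URL = "https://your-institution.instructure.com"  # placeholder instance
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}     # placeholder token

# Every course you can see, then every enrollment (a person, a role, grades, activity) in each one.
courses = requests.get(f"{CANVAS_URL}/api/v1/courses", headers=headers).json()
for course in courses:
    enrollments = requests.get(
        f"{CANVAS_URL}/api/v1/courses/{course['id']}/enrollments",
        headers=headers,
    ).json()
    # Each enrollment record alone carries dozens of fields; multiply that across
    # submissions, page views, discussions, and the rest of the 1666 public data
    # points Lane catalogued, and the scale of the data surface becomes obvious.
    print(course.get("name"), len(enrollments))
```

None of this is secret or nefarious by itself – it is just a reminder of how much structured, person-level data is sitting there for whoever owns the platform.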

Now, of course, I am willing to bet that there are multiple factors that are causing the investors to fight over price. It could be that they think Canvas’ movement into the corporate training space is about to take off. It is probably a combination of many factors, some I have not even touched on here.

But it is just not far-fetched to think that there is a real possibility that the data is driving the price, either as a commodity to be sold and/or as a service to be monetized.

As I finished this post, Jim Luke published a blog post based on the economics side of the issue. While it does expose that many of us (especially me) are using the wrong terms, the basic idea remains that the PE firm will not have education’s best interest at heart and that the data is driving the economics here. He touches on some of the same reasons about data that I do here, goes into a lot more depth, and shares several plausible scenarios for the future goals of Thoma Bravo, including one that makes the monetization of data very central to the future sales value of the company. A detailed but necessary read as well. I don’t have time or energy to go back and correct what I got wrong based on this post, so feel free to blast me in the comments if you so desire.

The Ferocity of the Battle

Right now, my direct messages on Twitter are booming with multiple people that are all flabbergasted as to why this is so controversial. We get that there would be disagreement, but the level of ferocity that one side has had in this battle is surprising. Especially since we all thought these were people that agreed that data does bring significant monetary value to a company.

“Data is the new oil!” we were told. But now we are told it was only… black paint all along? Something that will be there in every painting, but doesn’t cost much to be there?

Many people have offered thoughts on why some are so determined to fight this fight. If they get shared publicly I will probably come back and add them here. For my part, I just don’t know. People that want to protect students from misuse of data, I get – but they have really gone to extra levels of fight over this (beyond what they usually do, that is). The real surprise is the sheer irritation from the Learning Analytics community. I know – how did that happen? None of this Instructure kerfuffle says anything bad or good about Learning Analytics, yet they are in the thick of the battle at times.

Still, why is the basic message of “we should be vigilant to make sure that a company that has been a bit opaque with data issues recently gets sold to another company that may or may not be more open, because their data has the possibility to be exploited” so controversial right now? Why must so many people be proven right on the exact price of data? I don’t know.

For me, there could be a news report tomorrow that has Instructure stating “yep, the price was all about the data,” and I would just respond with “okay, thought so.” I get the feeling I am going to be buried in a barrage of snide Tweets if the opposite narrative goes in the news.

Which, let's be honest, will be the narrative from Canvas. They have to say there isn't much value to the data no matter what the truth is. If they let on that it has actual, real value, every single school, teacher, and student will immediately want to sue for their share. Even if there is no lawsuit, the public relations nightmare would cause untold damage as people get mad that their data had direct value in the massive sale.

Of course, the reality is that it does not matter what comes out. Canvas already bragged about the value of the data they are monetizing. They already are using it in ways that people don’t want. People have a good reason to be upset about the monetization of their data because it is already happening.

Thus ends the accounting of the never-ending Instructure Wars, as best can be summarized near the end of the dread year 2019.

As the wars drag on and alliances are strained, many began to wonder….

Will this ever end….

Is Learning Analytics Synonymous with Learning Surveillance, or Something Completely Different?

It all started off simply enough. Someone saw a usage of analytics that they didn’t like, and thought they should speak up and make sure that this didn’t cross over into Learning Analytics:

The responses of “Learning Analytics is not surveillance” came pretty quickly after that:

[tweet 1187857679206637568 hide_thread=’true’]

But some disagreed with the idea, feeling they are very, very similar:

[tweet https://twitter.com/Autumm/status/1188110779616288775 hide_thread=’true’]

(a couple of protected accounts that I can’t really embed here did come out and directly say they see Learning Analytics and Learning Surveillance as the same thing)

I decided to jump into the conversation and ask some questions about the difference between the two, and see if anyone could give definitions of the two that explained their difference, or perhaps prove they are the same.

My main point was that there is a lot of overlap between the two ideas. Both Learning Analytics and Learner Surveillance collect a lot of student data (grades, attendance, click stream, demographics, etc). If you look at the dictionary definition of surveillance (“close watch kept over someone or something (as by a detective)”), the overlap between the two only grows. Both rely on the collection of data to detect, keep watch, and predict future outcomes, all under the banner of being about the learning itself. Both Learning Analytics researchers and Learning Surveillance companies claim they do their work for the greater good of helping us to understand and optimize learning itself and/or the environments we learn in. The reality is that all surveillance (learning or otherwise) is now based on data that has been analyzed. If we don’t define the difference between Learning Analytics and Learner Surveillance, then the surveillance companies will continue to do what they want with Learning Analytics. Just saying “they are not the same” or “they are the same” without providing quantitative definitions of how they are or are not the same is not enough.

It seems that the questions that were raised in replies to my thread showcase how there is not a clear consensus on many aspects of this discussion. Some of the questions raised that need to be acknowledged and hashed out include:

  1. What counts as data, especially throughout the history of education?
  2. What exactly counts as surveillance and what doesn’t?
  3. Is surveillance an inherently evil, oppressive thing; a neutral force that completely depends on the answer to other questions here; or a benign overall positive force in society? Who defines this?
  4. Does the purpose of data collection (which is driven by who has access to it and who owns it) determine its category (analytics or surveillance)?
  5. Does the intent of those collecting data determine its category?
  6. Does consent change the nature of what is happening?
  7. Is Learning Analytics the same, similar in some ways but not others, or totally different than Learning Surveillance?
  8. What do we mean by the word “learning” in Learning Analytics?
  9. Are the benefits of Learning Analytics clear? Who gets to determine what is a “benefit” or not, or what counts as “clear”?

I am sure there are many other questions (feel free to add in the comments). But let's dig into each of these in turn.

The Historical Usage of Data in Education

There have been many books and papers written on the topic of what data is, but I got the sense that most people recognize that data has been used in education for a long time. Many took issue with equating Learning Analytics with collecting one data point:

This is a good point. Examining one data factor falls well short of any Learning Analytics goal I have ever read. Seeing that certain data points such as grades, feedback, attendance, etc have always been used in education, at what point or level does the historically typical analysis of information about learners become big data or Learning Analytics? If someone is just looking at one point of data, or they are looking at a factor related to the educational experience but not at learning itself, do we count it as “Learning Analytics”? If not, at what point does statistical information cross the line into becoming data that can be analyzed? How many different streams of data does one have to analyze before it becomes learning analytics? How close does the data have to be to the actual educational process to be considered Learning Analytics (or something else)? Does Learning Analytics even really ever look at actual learning? (more on that last one later)

What is Surveillance Anyways?

It seems there is a range of opinions on this, from surveillance meaning only specific methods of governmental oppression, to the very broad general definition in various dictionaries. Some would say that if you make your data collection research (collected in aggregate, de-identified, and protected by researchers), then it is not surveillance. Others say that analytics requires surveillance. Others take those ideas in a different direction:

https://twitter.com/gsiemens/status/1188112736934420487

I don't know if I would ever go that far (and if you know George, this is not his definitive statement on the issue. I think.), or if I even feel the dictionary definition is the most helpful in this case. But you also can't disagree with Merriam-Webster, right? Still, there are some bigger questions about what exactly is the line between surveillance and other concepts:

[tweet 1188147893246410752 hide_thread=’true’]

Oversight, supervision, corporate interest, institutional control, etc… don’t they all affect where we draw the line between analytics and surveillance (if we even do)? Or even deeper still….

Is All Surveillance Evil?

It seems there is an assumption in some corners that all surveillance is evil. Some even equate it with oppression and governmental control. However, if that is what everyone thinks of the idea, then why do grocery stores and hotels and other businesses blatantly post signs that say "Surveillance in Progress"? My guess is that this shows there are a lot of people that don't see it as automatically bad, and even more that don't care that it is happening. Or do they really not care, or do they just think there is nothing they can do about it? Either way, these signs would be a PR disaster for the companies if there was consensus that all surveillance is evil. Then again, I'm not so sure many would be so accepting of surveillance if we really knew all of the risks.

However, many do see surveillance as evil. Or at least, something that has gone too far and needs to stop:

But taking attendance and tracking bathroom breaks for points are two different things, right? So does that mean that…

Does the Purpose of Data Collection Change Anything?

Many people pointed out that the purpose for why data was collected would change whether we label the actions “Learning Analytics” or “Learning Surveillance.” Of course, the purpose of data collection is also driven by who has access to the data, who owns it, and what they need the data for (control? make money? help students? All of the above?). There is sometimes this assumption that research always falls into the “good” category, but that would ignore the history of why we have IRBs in the first place. Research can still cause harm even with the best of intentions (and not everyone has the best of intentions). This is the foundation of why we do the whole IRB thing, and that is not a perfect system. But the bigger view is that research is all about detective work, watching others closely to see what is going on, etc. Bringing the whole “purpose” angle into the debate will just cause the definition of Learning Analytics to move closer to the dictionary definition of surveillance.

On the other hand, a properly executed research project does keep the data in the hands of the researchers – and not in the hands of a company that wants to monetize the data analysis. Does the presence of a money-making purpose cross the line from analytics to surveillance? Maybe in the minds of some, but this too causes confusion in that some analytics researchers are making sellable products from their research. They may not be monetizing the product itself, but they may sell their services to help people use the tools. And it's not wrong to sell your expertise on something you created. But many see a blurry line there. Purpose does have an effect, but not always a clear-cut or easy-to-define one. Plus, some would point out that purpose is not as important as your intentions…

The Road to Surveillance is Paved With Good Intentions

Closely related to purpose is intent – both of which probably influence each other in most cases. While some may look at this as a clear-cut issue of “good” intentions versus “bad” intentions, I don’t personally see that as the reality of how people view themselves. Most companies view themselves as doing a good thing (even if they have to justify some questionable decisions). Most researchers see themselves as doing a good thing with their research. But we have IRBs and government regulation for a reason. We still have to check the intentions of researchers and businesses all the time.

But even beyond that – who gets to determine which intentions are good and which aren't? Who gets to say which intentions still cause harm and which ones don't? The people with the intentions, or the people affected by the intentions? What if there are different views among those that are affected? Do analytics researchers or surveillance companies get to choose who they listen to? Or if they listen at all? And are the lines between "harmful good intention" and "positive results of intention" even that clear? Where do we draw the line between harm and okay?

Some would say that the best way to deal with possibly harmful good intentions is to get consent….

Does the Line Between Analytics and Surveillance Change Due to Consent?

Some say one of the lines between Learning Surveillance and Learning Analytics is created by consent. Learning Analytics is research, and ethical research cannot happen without consent.

[tweet 1188124784942551040 hide_thread=’true’]

Of course, the surveillance companies would come back and point to User Agreements and Terms of Service. So they are okay with consent, right?

Well, no. Who really reads the Terms of Service, anyways? Besides, they typically don't clearly spell out what they do with your data, right?

While this is often true, we see the same problem in research. We often don’t spell out the full picture for research participants, and then don’t bother to check to see if they really read the Informed Consent document or not. To be honest, consent in research as well as agreement with Terms of Service is more of a rote activity than a true consent process. We are really fooling ourselves if we think these processes count as consent. They really count more as a legal “covering the buttocks” than anything else.

Of course, many would point out that Learning Surveillance is often decided at the admin level and forced on all students as a condition of participating in the institution. And sadly, this is often the case. Since research is always (supposed to be) voluntary, there is some benefit to Informed Consent over Terms of Service, even if both are imperfect. But after all of this…

So, For Real, What is the Difference Between Analytics and Surveillance?

I think some people see the difference as:

Learning Analytics: informed consent, not monetized, intending to help education/learners, based on multiple data points that have been de-identified and aggregated.

Learning Surveillance: minimal consent sought from end users (forced by admin even), monetized, intending to control learners, typically focused on fewer data points that can identify individuals in different ways.

…or, something like that. But as I have explored above, this is not always so clear-cut. Learning Analytics is sometimes monetized. Learning Surveillance often sells itself as helping learners more than controlling them. De-identified data can be re-identified more and more easily as technology advances. Learning Surveillance can utilize a lot of data points, while some Learning Analytics studies focus in on a very small number. Both Learning Analytics and Learning Surveillance have consent systems that are full of problems. Learning Analytics can be used to control rather than help. And so on.
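To make the re-identification point concrete, here is a toy sketch (entirely made-up data, not anyone's actual study) of the classic linkage idea: a "de-identified" analytics export joined with a separate dataset on a few quasi-identifiers puts the names right back.

import pandas as pd

# A "de-identified" analytics export: names removed, but quasi-identifiers kept.
deidentified = pd.DataFrame({
    "zip": ["75019", "76010", "75019"],
    "birth_year": [1999, 2001, 1998],
    "gender": ["F", "M", "F"],
    "course_grade": ["A", "C", "B"],
})

# A separate public or purchasable dataset that still has names attached.
public_directory = pd.DataFrame({
    "name": ["Alice", "Bob", "Carol"],
    "zip": ["75019", "76010", "75019"],
    "birth_year": [1999, 2001, 1998],
    "gender": ["F", "M", "F"],
})

# The join re-attaches identities to the "anonymous" grades.
reidentified = deidentified.merge(
    public_directory, on=["zip", "birth_year", "gender"], how="inner"
)
print(reidentified[["name", "course_grade"]])

Real attacks are more sophisticated than a three-row merge, but the principle is the same: stripping names is not the same as protecting identities.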

And we haven’t even touched on the problem of Learning Analytics not really even analyzing actual “learning” itself…

Learning Analytics or Click Stream Analytics?

Much of the criticism of Learning Surveillance focuses on how these tools and companies seek to monitor and control learning environments (usually online), while having very little effect on the actual learning process. A fair point, one that most Surveillance companies try to downplay with research of their own. That's not really an admission of guilt as much as it is just the way the Ed-Tech game goes: any company that wants to sell a product to any school is going to have to convince the people with the money that there is a positive effect on learning. Somehow.

But does Learning Analytics actually look at learning itself?

[tweet 1188243487071834112 hide_thread=’true’]

So while Learning Analytics does often get much closer to examining actual learning than Learning Surveillance usually does, it is generally still pretty far away. But so is most of educational research, to be honest. It is not possible yet to tap into brains and observe actual learning in the brain. And a growing number of Learning Analytics papers are taking into account the fact that they are looking at artifacts or traces of learning activities, not the learning activities themselves or the actual learning process.

However, the distinction that "Analytics is looking at learning itself" and "Surveillance is looking at factors outside of learning" still breaks down to some degree when you look at what is really happening. Both of them are examining external traces or evidence of internal processes. This leaves us with the idea that there has to be a clear benefit to one or the other if there is a true difference between the two….

What is Clear and What is a Benefit Anyways?

Through the years, I have noticed that many say that the benefits of analytics and/or surveillance are clear. The problem is, who gets to say they are clear, or that they are beneficial? All kinds of biases have been found in data and algorithms. If you are a white male, there are fewer risks of bias against you… so you may see the benefits as clear. To those that see a long history of bias being programmed into the systems… not so much. Is it really a “benefit” if it leaves out large parts of society because a bias was hard-coded into the system?

Where some people see benefits of analytics, others see reports tailored for upper-level admin that tell them what we already know from research. Having participated in a few Learning Analytics research projects myself, I know that it takes a lot of digging to find results, and then an even longer time to explain to others what is there. And then, if you create some usable tool out of it, how long does it take to train people to use those results in "user-friendly" dashboards? Obviously, in academia we don't have a problem with complex processes in and of themselves. But we should also be reluctant to call them "clear" if they are time-consuming to discover, understand, communicate, and make useful for others.

Then, on top of all of this, what we have had so far is a bunch of instructors, admins, and researchers arguing over whether analytics is surveillance, and whether either one of them is okay or not. Do the students get a say? When are we going to take the time to see if students clearly understand what all this is about (and then clearly explain it to them if they don't), and then see what they have to say? Some already understand the situation very well, but we need to get to a place where most understand it fairly well, and then include their voice in the discussion.

So Back to the Question: How Do You Define These Two?

Like many have stated, analytics and surveillance have existed for a long time, especially in formal educational settings:

If you really think about it, Instructivism has technically been based on surveillance and analysis all along. This has kind of been baked into educational systems from the beginning. We can’t directly measure learning in the brain, so education has traditionally chosen to keep close watch over students while searching for evidence that they learned something (usually through tests, papers, etc). Our online tools have just replicated instructor-centered structures for the most part, bringing along the data analysis and user surveillance that those structures were known for before the digital era. Referring to teachers as “learning detectives” is an obscure trope, but one that I have heard from time to time.

(Of course, there are those that choose other ways of looking at education, utilizing various methods to support learner agency. This is outside the focus of this rambling article. But it is also the main focus of the concepts I research, even when digging into data analytics.)

So if you are digging through large data sets of past student work and activity like a detective, in order to find ways to improve educational environments or the learning process…. am I describing Learning Analytics, or Learning Surveillance?

Yes, I intentionally chose a sentence that could easily describe both.

To be honest, I think if we pull back too far and compare any type of data analysis in learning with any form of student surveillance in learning, there won’t be much difference between the two terms. And some people that only work occasionally with either one will probably be okay with that.

I think we need to start looking at Learning Analytics (with capital L-A) vs. analytics (little a), and Learning Surveillance (capital L-S) vs. surveillance (little s). This way, you can look at the more formal work of both fields, as well as the general practices behind the broader ideas. For example, you can look at the problems with surveillance in both Learning Analytics as well as in Learning Surveillance.

However, if I was really pressed, I would say that Learning Analytics (with capital L-A) seeks to understand what is happening in the learning process, in a way that utilizes surveillance (little s) of interface processes, regardless of monetary goals of those analyzing the data. Learning Surveillance (capital L-S) seeks to create systems that control aspects of the learning environment in a way that monetizes the surveillance process itself, utilizing analytics (little a) from learning activities as a primary source of information.

You may look at my poor attempt at definitions and feel that I am describing them as the exact same thing. You may look at my definitions and see them as describing two totally different ideas. Maybe the main true difference between the two is in the eye of the beholder.

Dealing With Educators That Are Not Interested in Legal Issues

If you have been in Instructional Design / Learning Technology / etc for any length of time, you have no doubt run into the various laws and rules that govern all areas of the work we do in this space. Hopefully, you have even been well-trained on these legal issues as well (but I know that is not always the case). And unfortunately, you have probably run into educators that dismiss certain legal issues out of hand.

I say “educators” even though most people probably think primarily of professors when they think of people that ignore the law. This is because there really are people from all sectors of education that are known to dismiss any legal questions or concerns that may apply to what they are doing.

For me, it seems that Accessibility and Fair Use end up being the most oddly contentious issues to address in education (again, not just with professors, but people from all levels of education and non-teaching staff as well). It's not that people think Accessibility or Fair Use are bad per se. It's usually that dealing with Accessibility can "wait till later," and that concerns over what counts as Fair Use (or more often, what doesn't fall under Fair Use) are all just "semantics."

With accessibility, what typically happens is that the instructional designer will point out the need to take the time to make sure all course content is accessible from the beginning. The common counter argument that is made is that accessibility is important, but it can wait until an actual accessibility request is made… “if” it is ever made. Usually with an extreme amount of emphasis and eye-winking placed on the word “if,” because many seem to believe there just aren’t that many students with accessibility needs out there.

Of course, when those accessibility requests do come in, usually much quicker than expected, the ensuing rush to meet requirements becomes a panicked realization that you can't just adjust an entire course to be accessible in a few days or even a couple of weeks (the typical time frame given for compliance).

But at least they try. The one that seems to get the least effort to even respond to is our concern over what counts as "Fair Use." I still find it weird that people use "just semantics" as a dismissive trump card, like it is a reason for the other person to give up making their point. Many of our laws are built on semantics. Semantics are usually an important issue to discuss, not a "go away with your petty concerns" pushback. It never comes across as helpful or respectful disagreement, even if meant that way. People use "it's just semantics" as a way to shut someone else down and "win" a disagreement.

I get that many people don’t want to discuss Fair Use because it can be confusing and hard to define. But if you have had someone warn you about going over the line, chances are that if you end up in court over it, you stand little chance of winning. If an Instructional Designer thinks you went over the line, well, we often go over the line ourselves as well. So don’t expect a court to be more liberal with the line than IDs are. And don’t expect me to put my neck on the line when you ignored my concerns in the first place.

Of course, these two aren’t the only areas that end up seeing contentious arguments over legal definitions. Artificial Intelligence, Learning Engineering, Virtual Reality, OPMs, and many other current hot topics all have critics that raise legal concerns, and defenders that dismiss those concerns out of hand. But these two are the biggest ones I have dealt with in day-to-day course design work.

If I could count the number of times a faculty member came back to me and told me they wished they had listened to me about legal issues….

But what to do about it ahead of time? Just keep bringing it up. Show them this blog post if you need to. Bookmark stories that you read about the issues – from legal or educational experts weighing in on the idea, to actual news accounts of legal action. Show them that story about the school in Florida that got caught showing Disney films illegally on a school field trip because a Disney lawyer happened to be driving by the school bus.

You are going to have to make a case and defend it as if you are already in court in many cases. And this is just if you actually have the organizational power to push back. Many IDs do not. In that case, make sure you have an electronic trail suggesting to people that either specific classes or entire departments or schools should take these laws seriously.

So What Do You Want From Learning Analytics?

If you haven't noticed lately, there is a growing area of concern surrounding the field of learning analytics (also sometimes combined with artificial intelligence). Of course, there has always been some backlash against analytics in general, but I definitely noticed at the recent Learning Analytics and Knowledge (LAK) conference that it was more than just the random concern raised here and there that you usually get at any conference. There were several voices loudly pointing out problems both online and in the back channel, as well as during in-person conversations at the conference. Many of those questioning what they saw were people with deep backgrounds in learning theory, psychology, and the history of learning research. But it's not just people pointing out how these aspects are missing from so much of the Learning Analytics field – it is also people like Dr. Maha Bali questioning the logic of how the whole idea is supposed to work in blog posts like Tell Me, Learning Analytics…

I have been known to level many of the current concerns at the Learning Analytics (LA) field myself, so I probably should spell out what exactly it is that I want from this field as far as improvement goes. There are many areas to touch on, so I will cover them in no particular order. This is just what comes to mind off the top of my head (probably formed by my own particular bias, of course):

  • Mandatory training for all LA researchers in the history of educational research, learning theory, educational psychology, learning science, and curriculum & instruction. Most of the concerns I heard voiced at any LAK I have attended were that these areas are sorely missing in several papers and posters. Some papers were even noted for "discovering" basic educational ideas, like students that spend more time in a class perform better. We have known this from research for decades, so… why was this researched in the first place? And why was none of this earlier research cited? But you see this more than you should in papers and posters in the LA field – little to no theoretical backing, very little practical application, no connection to psychology, and so on. This is a huge concern, because the LAK Conference Proceedings is in the Top 10 Educational Technology journals as ranked by Google. But so many of the articles published there would not even make it past peer review in many of the other journals in the Top 10 because of their lack of connection to theory, history, and practice. This is not to say these papers lack rigor for what they include – it is just that most journals in Ed-Tech require deep connections to past research and existing theory to even be considered. Other fields do not require that, so it is important to note this. Also, as many have pointed out, this is probably because of the Computer Science connection in LA. But we can't forego a core part of what makes human education, well… human… just because people came from a background where those aspects aren't as important. They are important to what makes education work, so just like a computer engineer that wants to get into psychology would have to learn the core facets of psychology to publish in that area, we should require LA researchers to study the core educational topics that the rest of us had to study as well. This is, of course, a requirement that could change many areas in Education itself as well – just having an education background doesn't mean one knows a whole lot about theory and/or educational research. But I have discussed that aspect of the Educational world in many places in the past, so now I am just focusing on the LA field.
  • Mandatory training for all LA researchers in structural inequalities and the role of tech and algorithms in creating and enforcing those inequalities. We have heard the stories about facial recognition software not recognizing black faces. We know that algorithms often contain the biases of their creators. We know that even perfect algorithms have to ingest imperfect data that will contain the biases of those that generated it. But it's time to stop treating equality problems as an afterthought, to be fixed only when they get public attention. LA researchers need to be trained in recognizing bias by the people that have been working to fight the biases themselves. Having a white male instructor mention the possibility of bias here and there in LA courses is not enough.
  • Require all LA research projects to include instructional designers, learning theorists, educational psychologists, actual instructors, real students, people trained in dealing with structural inequalities, etc as part of the research team from the very beginning. Getting trained in all of the fields I mentioned above does not make one an expert. I have had several courses on educational psychology as part of my instructional design training, but that does not make me an expert in educational psychology. We need a working knowledge of other fields to inform our work, but we also need to collaborate with experts as well. People with experience in these fields should be a required part of all LA projects. These don't all have to be separate people, though. A person that teaches instructional design would possibly have experience in several areas (practical instruction, learning theory, structural inequality, etc). But you know whose voice is incredibly rare in the LA research? Students. Their data traces DO NOT count as their voice. Don't make me come to a conference with a marker and strike that off your poster for you.
  • Be honest about the limitations and bias of LA. I read all kinds of ideas for what data we need in analytics – from the idea that we need more data to capture complex ways learning manifests itself after a course ends, to the idea that analytics can make sense of the world around us. The only way to get more (or better) data is to increase surveillance in some way or form. The only way to make more sense is to get more data, which means… more surveillance. We should be careful not to turn our entire lives into one mass of endless data points. Because even if we did, we wouldn't be capturing enough to really make sense of the world. For example, we know that click stream data is a very limited way to determine activity in a course. A click in an online course could mean hundreds of different things (see the sketch after this list). We can't say that this data tells us what learners are doing or watching or learning – only what they are clicking on. Every data point is just that – a click or contact or location or activity with very little context and very little real meaning by itself. Each data point is limited, and each data point has some type of bias attached to it. Getting more data points will not overcome limitations or bias – it will collect and amplify them. So be realistic and honest with those limitations, and expose the bias that exists.
  • Commit to creating realistic practical applications for instructors and students. So many LA projects are really just ways to create better reports for upper-level admin. Either that, or ways to try and decrease drop-outs (or increase persistence across courses, as the new terminology goes). The admin needs their reports and charts, so you can keep doing that. But educators need more than drop-out/persistence stuff. Look, we already have a decent to good idea of what causes those issues and what we can do to improve them. Those solutions take money, and throwing more data at them is not going to decrease the need for funding once a more data-driven problem (which usually looks just like the old problems) is identified. Please: don't let "data-driven" become a synonym for "ignore past research and re-invent the wheel" in educators' eyes. Look for practical ways to address practical issues (within the limitations of data and under the guiding principle of privacy). Talk to students, teachers, learning theorists, psychologists, etc while you are just starting to dig into the data. See what they say would be a good, practical way to do something with the data. Listen to their concerns. Stop pushing for more data when they say stop pushing.
  • Make protecting privacy your guiding principle. Period. So much could be said here. Explain clearly what you are doing with the data. Opt-in instead of opt-out. Stop looking for ways to squeeze every bit of data out of everything humans do and say (it's getting kind of gross). Remember that while the data is incomplete and biased, it is still a part of someone else's self-identity. Treat it that way. If the data you want to collect were actual physical parts of a person in real life – would you walk around grabbing them off of people the way you are collecting data digitally now? Treat it that way, then. Or think of it this way: if data was the hair on our heads, are you trying to rip or cut it off of people's heads without permission? Are you getting permission to collect the parts that fall to the floor during a haircut, or are you sneaking into hair-cutting places to try and steal the stuff on the floor when no one is looking? Or even worse – are you digging through the trash behind the hair salon to find your hair clippings? Also – even when you have permission – are you assuming that just because the person who got the haircut is gone, the identity of each hair clipping is protected… or do you realize that there are machines that can still identify DNA from those hair clippings?
  • Openness. All of what I have covered here will require openness – with the people you collect data from, with the people you report the analytical results to, with the general public about the goals and results, etc. If you can't easily explain the way the algorithms are working because they are so complex, then don't just leave it there. Spend the time to make the algorithms make sense, or change the algorithm.
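Since I mentioned click stream data above, here is a minimal sketch (a hypothetical event shape, not any particular LMS's actual schema) of how little context a single clickstream record actually carries.

# Hypothetical event shape for illustration only – not any real LMS schema.
click_event = {
    "user_id": "12345",
    "timestamp": "2019-11-04T14:02:31Z",
    "url": "/courses/101/pages/week-3-reading",
    "action": "page_view",
}

# All of these readings are consistent with the exact same record:
possible_meanings = [
    "read the page carefully",
    "opened the page and walked away",
    "skimmed it while watching a video on another screen",
    "clicked it by accident and hit the back button",
]
print(f"One event, at least {len(possible_meanings)} plausible interpretations")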

There are probably more that I am missing, or ways that I failed to explain the ones I covered correctly. If you are reading this and can think of additions or corrections, please let me know in the comments. Note: the first bullet point was updated due to misunderstandings about the educational journal publishing system. Also see the comments below for good feedback from Dr. Bali.

Ed-Tech Retro-Futurism and Learning Engineering

I don't know what I am allowed to say about this yet, but recently I was recorded on an awesome podcast by someone whose work I am a big fan of. One of the questions he asked was what I meant on my website when I say "Ed-Tech Retro-Futurist." It is basically a term I made up a few years ago (and then never checked to see if someone else already said it) in response to the work of people like Harriet Watkins and Audrey Watters, who try to point out how too many people are ignoring the decades of work and research in the educational world. My thought was that I should just skip Ed-Tech Futurism and go straight to Retro-Futurism, pointing out all of the ideas and research from the past that everyone is ignoring in the rush to look current and cool in education.

(which is actually more of what real futurists do, but that is another long post…)

One of the “new” terms (or older terms getting new attention) that I struggle with is “learning engineering.” On one hand, when people want to carve out an expert niche inside of instructional design for a needed subset of specific skills, I am all for that. This is what many in the field of learning engineering are doing (even though having two words ending in “-ing” just sounds off :) ). But if you go back several decades to the coining of the term, this was the original goal: to label something that was a specific subset of the Ed-Tech world in a way that can help easily identify the work in that area. Instructional Technology, Learning Experience Design and other terms like that also fall under that category.

(And for those that just don’t like the idea of the term “engineering” attached to the word “learning” – I get it. I just don’t think that is a battle we can win.)

However, there seems to be a very prominent strain of learning engineers who are trying to make the case for "learning engineering" replacing "instructional design" / "learning experience design" / etc, or becoming the next evolution of those existing fields. This is where I have a problem – why put a label that already had a specific meaning onto something else that also already had a specific meaning, just in the pursuit of creating something new? You end up with charts like this:

Which are great – but there have also been hundreds of blog posts, articles, and other writings over several decades with charts almost exactly like this that have attributed these same keywords and competencies to instructional design and instructional technologist and other terms like that. I have a really dated Master’s Degree portfolio online that covers most of these except for Data Scientist. Data Science was a few years from really catching on in education, but when it did – I went and got a lot of training in it as an instructional designer.

There are also quotes like this that are frequently used for instructional designers as well:

https://twitter.com/jaymesmyers2/status/1130836367230029824

And also tongue-in-cheek lists exactly like this for IDs:

(except for #4 – no instructional designer would say that even jokingly because we know what the data can and can’t do, and therefore how impossible that would be :) )

One of the signs that your field/area might be rushing too fast to make something happen is when people fail to think critically about what they share before they share it. An example of this would be something like this:

Did the person that created this think about the significance of comparing a fully-skilled Learning Engineer to “white” and a totally unskilled Learning Engineer to “black”? We really need a Clippy for PowerPoint slides that asks “You put the words ‘Black’ and ‘White’ on a slide. Have you checked to make sure you aren’t making any problematic comparisons from a racial standpoint?”

But there are those that are asking harder questions as well, so I don’t want to misrepresent the conversation:

There are also learning engineers that get the instructional design connection as well (see the Ellen Wagner quote on the right):

Although as an instructional designer, I would point out we aren't just enacting these – we were trained and given degrees in these areas. The systems we work for currently might not formally recognize this, but we do in our field and degree programs. Of course, instructional designers also have to add classroom management skills, training others how to design, convincing reluctant faculty, mindfulness, educational psychology, critical pedagogy, social justice, felt needs, effects of sociocultural issues such as food insecurity, and many other fields not listed in the blue above to all of those listed as well. Some might say "but those are part of human development theory and systems thinking." Not really. They overlap, but they are also separate areas that also have to be taken into account.

(Of course, there is also the even larger field of Learning Science that encompasses all of this and more. You could also write a post like this about how instructional designers mistakenly think they are the same as learning scientists. Or about how Learning Science tried to claim it started in the 1990s when it really has a longer history. And so on.)

I guess the main problem I have is that instructional design came along first, and went into all of these areas first, and still few seem to recognize this. To imply that instructional design is a field that may also enact what learning engineers already have could possibly be taken as reversing what actually happened historically. I am still not clear if some learning engineers are claiming to have preceded ID, to be currently superseding ID, or to have been the first to do what they do in the Ed-Tech world before ID. If any of those three, then there are problems – and thus the need for Ed-Tech Retro-Futurism.

So You Want to Go Online: OPMs vs In-House Development

As the Great OPM Controversy continues to rage, a lot is being said about developing online courses "in-house" (by hiring people to do the work rather than paying a company to do so). This is actually an area that I have a lot of experience in at various levels, so I wanted to address the pros and cons of developing in-house capacity for offering online programs. I have been out of the direct instructional design business for a few years, so I will be a bit rusty here and there. Please feel free to comment if I miss anything or leave out something important. However, I still want to take a rough stab at a ballpark list of what needs consideration. First, I want to start with three given points:

  1. Everything I say here is assuming high-quality online courses, not just PowerPoints and Lecture Capture plopped online. But on the other hand, this is also assuming there won’t be any extra expenses like learning games or chat-bots or other expensive toys… errr… tools.
  2. In most OPM models, universities and colleges still have to supply the teachers, so that cost won’t be dealt with here, either. But make sure you are accounting for teacher pay (hopefully full time teachers more than adjuncts, and not just adding extra courses to faculty with already over-full loads).
  3. All of these issues I discuss are within the mindset of “scaling” the programs eventually to some degree or another, but I will get to the problems with scale later.

So the first thing to address is infrastructure, and I know there are a wide range of capacities here. Most universities and colleges have IT staff and support staff for things like email and campus computers. If you have that, you can hopefully build off of that. If you don’t…. well, the OPM model might be the better route for you as you are so far behind that you have to catch up with society, not just online learning. But I know most places are not in this boat. Some even already have technology and support in place for online courses – so you can just skip this part and talk directly with those people about their ability to support another program.

You also have to think about the support of technology, usually the LMS and possibly other software. If you have this in place, check to make sure the existing tools have capacity to take on more (they usually have some). If you have an IT department – talk with them about what it would take to add an LMS and any other tools (like data analysis tools) you would like to add. If you are talking one online program, you probably don’t need even one full time position to support what you need initially. That means you can make this a win/win for IT by helping them get that extra position for the ____ they have been wanting for a while if they can also share that position with online learning technology support part-time.

This is, of course, for a self-hosted LMS. All of the LMS providers out there will offer to host for you, and even provide support. It does cost, but shop around and realize there are vendors that will give you good service for a good price. But there are also some that won't deal with you at all if you are not bringing a large number of courses online initially, so be careful there.

Then there is support for students and teachers. Again, this is something you can bundle from most LMS providers, or contract individually from various companies. If you already have student and faculty tech support of some kind on campus, talk with them to see what it would take to support __ number of new students in __ number of new online courses. They will have to increase staff, but since they often train and employ student workers to answer the calls/emails, this is also a win/win for your campus to get more money to more students. Assuming your campus fairly treats and pays its student workers, of course. If not, make sure to fix that ASAP. But keep in mind that this can be done for the cost of hiring a few more workers to handle increased capacity and then paying to train everyone in support to take online learning calls.

Then there will be the cost of the technology itself. Typically, this is the LMS cost plus other tools and plug-ins you might want to add in (data analytics, plagiarism detection, etc). Personally, I would say to avoid most of those bells and whistles at the beginning. Some of them – like plagiarism detection – are surveillance-minded and send the wrong message to learners. Hire some quality instructional designers (I'll get to that in a minute) and you won't even need to use these tools. Others like data analytics might be of use down the line, but you might also find some of the things they do underwhelming for the price. With the LMS itself, note that there are other options like Domain of One's Own that can replace the LMS with a wider range of options for different teachers and students (and they work with single sign-on as well). There are also free, open-source LMS options if you want to self-host. Then there are less expensive and more expensive LMS providers. Some will allow you to have a small contract for a small program with the option to scale; others want a huge commitment up front. Look around and remember: if it sounds like you are being asked to pay too much, you probably are.

So a lot of what I have discussed is going to vary in cost dramatically, depending on your needs and current capacity. However, if you remain focused on just what you need, and maybe sharing part of certain salaries with other departments to get part of those people’s time, and are also smart about scaling (more on that later), you are still looking at a cost that is in the tens of thousands range for what I have touched on so far. If you hit the $100k point, you are either a) over-paying for something, b) way behind the curve on some aspect, or c) deciding to go for some bells and whistles (which is fine if you need them or have people at your institution that want them – they usually cost extra with OPMs as well).

The next cost that almost anyone that wants to go online will need to pay for no matter what you do is course development. Many people think they can just get the instructors to do this – but just remember that the course will only be as good as their ability/experience in delivering online courses. You may find a few instructors that are great at it, but most at your school probably won’t fall into that category. I don’t say that as a bad thing in this context per se – most instructors don’t get trained in online course design, and even if they do, it is often specific to their field and not the general instructional design field. You will need people to make the course, which is where OPMs usually come in – but also in-house instructional designers as well.

With an average of 6-8 months lead time with a productive instructor, a quality instructional designer can complete two to three quality 15-week online courses per semester. I know this for a fact, because as an instructional designer I typically completed 9 or so courses per year. And some IDs would consider that "slow." More intense courses that are less ready to transition to online could take longer. But you can also break out of the 15-week course mindset when going online as well – just food for thought. If you are starting up a 10-course online program, you would probably want three instructional designers, with varying specialties. Why three IDs if just one could handle all ten courses in two years easily? Because there is a lot more to consider.

Once you start one online program, other programs will most likely follow suit fairly quickly. It almost always happens that way. So go ahead and get a couple more programs in the pipeline to get going once the IDs are ready. But you also need to build up and maintain infrastructure once you get those classes going. How do you fix design problems in the course? When do you revise general non-emergency issues? What about when you change instructors? And who trains all of these instructors on their specific course design? What about random one-off courses that want to go online outside of a program? Who handles course quality and accreditation? And so on. Quality, experienced instructional designers can handle all of these and more, even while designing courses. Especially if you get one that is a learning engineer or that at least specializes in learning engineering, because these infrastructural questions are part of their specialty.

The salary and benefits range of an instructional designer is roughly $50K-$100K a year depending on experience and the cost of living where you are located. These are also positions that can work remotely if you are open to that – but you will want at least one on campus so they can talk to your students for feedback on the courses they are designing. But remote work is something to keep in mind because you also have to consider the cost of finding an office and getting computers and equipment for each new person you want to hire (either as IDs or the other positions described). Also don't forget about the cost of benefits like health care, which is pretty standard for full-time IDs.

Another aspect to keep in mind is accreditation – that will take time and people, but that will be the case even if you go with an OPM as well. You will need to pull in people from across the campus that have experience with this, of course – but you will also have to find people that can handle this aspect regardless of what model you choose. And it can be a doozy, just FYI.

Another aspect to consider is advertising. This is a factor that will always cost, unless you are focused solely on transitioning an existing on campus program into an online one (and not planning on adding the online option to the on-campus one). But even then, if you want it to scale – you will need to advertise. Universities aren’t always the best at this. If yours is, then skip ahead. If not, you will need to find someone that can advertise your new program. Typically, this is where OPMs tend to shine. But it is also getting harder and harder to find those that will just let you pay for advertising separate from the entire OPM package.

I can’t really say what you need to spend here – but I will say to be realistic. Cap your initial courses at manageable amounts – not just for your instructors, but also for your support staff. I can’t emphasize enough that it is better to start off small and then scale up rather than open the floodgates from the beginning. Every course that I have seen that opens up the first offerings to massive numbers of students from the beginning has also experienced massive learner trauma. Don’t let companies or colleges gloss over those as “bumps in the road.” Those were actual people that were actually hurt by being that bump that got rolled over. Hurt that could have been avoided if you started small and scaled up at a manageable pace.

So while we are here, let's talk scale. Scale is messy, no matter how you do it. Even going from one on-campus course to two on-campus courses has traditionally led to problems. All colleges have wanted to increase enrollments as much as possible since the beginning of academia, so it's not like OPMs were the first to talk about or try scale. However, we need to be real with ourselves about scale and the issues it can cause.

First of all, not all programs can scale. Nursing programs scale immensely because the demand for nurses is still massive. Also, nurses work their tails off, so Nursing instructors often personally take care of many problems of scale that some business models cause. I'm still not sure if the OPMs involved in those programs have even realized that is true yet. But not all programs can scale like a Nursing program can. Not all fields have the demand that Nursing does. Not all fields have people with the mindset that Nurses have (no offense hopefully, but many of you know it's true and it's okay – I'm not sure if Nurses ever sleep).

All that to say – if you are not in Nursing, don't expect to scale like Nursing can. It's okay. Just be realistic about it. Also, be honest about any problems that are happening. Glossing over problems will only cause more problems in no time. Always have your foot on the brake, ready to stop the scaling before issues spiral out of hand.

Remember: education is a human endeavor, and people don’t react well to being herded like cattle. I feel like I have only touched the surface and left out so much, but I am as tired of typing as you probably are of reading. Hopefully this is giving some food for thought for the people that have been wondering about in-house program development.

So why go with in-house development rather than an OPM? Well, I have been making the case for the cost-saving benefits plus the capacity-building benefits as well. Recently I read about an OPM that wanted to charge $600,000 to build one 10-course program. All that I have outlined here, plus the stuff I left out, would easily come in at half of that for a high-quality program. And I am one of those people that usually advocates for how expensive online courses can be to do right. But even I am thinking "Whoa!" at $600K.

Look, if you are wanting to build a program in a field like Nursing that can realistically scale, and you want to deal with thousands of students being pushed through a program (along with all the massive problems that will bring), then you are probably one of five schools in the nation that fit that description and OPMs are probably the best bet for you. For the other 3000-4000+ institutions in the nation, here are some other factors to consider:

  • Hiring people usually means some or all of those people will live in your community, thus supporting local economies better.
  • Local people means people on your campus that can interact with your students and get their input and perspective.
  • Having your people do things also typically means more opportunities to hire students as GTAs, GRAs, assistants, etc – giving them real world skills and money for college.
  • When your academics and your GRAs are part of something, they usually research it and publish on it. The impact on the global knowledge arena could be massive, especially if you publish with OER models.
  • Despite what some say, HigherEd is constantly evolving. Not as fast as some would like, but it is happening. When the next shift happens, you will have the people on staff already to pivot to that change. If not, that will be another expensive contract to sign with the next OPM.

The last point I can’t emphasize enough. When the MOOC craze took off, my current campus turned to some of its experienced IDs – myself and my colleague Justin – to put a large number of MOOCs online. Now that analytics and AI are becoming more of a thing in education (again), they are turning to us and other IDs and people with Ed-Tech backgrounds on campus as well. For people that went the OPM route, these would all be more (usually expensive) contracts to buy. For our campus, it means turning to the people they are already paying. I don’t know what else to say to you if that doesn’t speak for itself.

Also, keep in mind that those who are not in academia don’t always understand the unique things that happen there. Recently I saw a group of people on Twitter upset about a college senior who couldn’t graduate because the one course they needed wasn’t offered that semester. The responses to this scenario are ones that many in academia are used to hearing: “bet there is a simple software fix for this!” “what a money-making scam!” “if only they cared to treat the student like a customer, they wouldn’t make this happen!” The implication is that the problem was on the University’s side for not caring about course scheduling enough to make graduation possible. Most people in academia are rolling their eyes at this – it is literally impossible for schools to get programs accredited if they don’t prove that they have created a pathway for learners to graduate on time. It makes good business sense that not all courses can be offered every semester, just like many businesses do not sell all products year round (especially restaurants). Plus, most teachers will tell you it is better to have 10 students in a course once a year than 2-3 students every semester – more interaction, more energy, etc. But schools literally have to map out a pathway for these variable offerings to work in order to just get the okay for the courses in the first place.

Those of us in academia know this, but it seems that, based on what I saw on Twitter recently, many in the OPM space do not know this. We also know that there is always that handful of students that ignore the course offering schedules posted online, the advice of their advisers, and the warnings of their instructors because they think they can get the world to bend to their desires. I remember in the 90s telling two classmates they wouldn’t graduate on time if they weren’t in a certain class with me. They scoffed, but it turns out they in fact did not graduate on time. So something to keep in mind – outside perspectives and criticism can be helpful, but they can also completely misunderstand where the problems actually lie.

And look, I get it – there will always be institutions that prefer to get a “program in a box” for one fee, no matter how large that fee is. If that is you, then more power to you. There are a few things I would ask if you go the OPM route: first of all, please find a way to be honest and open about the pros and cons of working with your OPM. They may not like it, but a lot of the backlash that OPMs are currently facing comes from people just not buying the “everything is awesome” line so many are pushing. The education world needs to know your successes as well as your failures. Failure is not a bad thing if you grow from it. Second, please keep in mind that while the “in-house” option looks expensive and complicated, going the OPM route will also be expensive and complicated. They can’t choose your options for you, so all the meetings I discuss here will also happen within an OPM model, just with different people at the table. So don’t get an inflated ego thinking you are saving time or money going that route. Building a company is much different from building a degree program, so don’t buy into the logic that they are saving you start-up funds. They had to pay for a lot of things as a for-profit company that HigherEd institutions never have to pay for.

Finally, though, I will point out that you can still sign contracts with various vendors for various parts of your process while developing in-house, like many institutions have for decades. This is not always an all-or-nothing, either/or situation (see the response from Matthew Rascoff here for a good perspective on that, as well as Jonathan D. Becker’s response at the same link as a good case for in-house development). There are many companies in the OPM space that offer quality a la carte-type services for a good price, like iDesign and Instructional Connections. Like I have said on Twitter, I would call those OPS (Online Program Support) more than OPM. It’s just that this term won’t catch on. I have also heard the term OPE for Online Program Enablers, which probably works better.

Are MOOCs Fatally Flawed Concepts That Need Saving by Bots?

MOOCs are a problematic concept, as are many things in education. Using bots to replicate various functions in MOOCs is also a problematic concept. Both MOOCs and bots seem to go in the opposite direction of what we know works in education (smaller class sizes and more interaction with humans). So, teaching with either or both concepts will open the doors to many different sets of problems.

However… there are also even bigger problems that our society is imposing on education (at least in some parts of the world): defunding of schools, lack of resources, and eroding public trust, to name just a few. I don’t like any of those, and I will continue to speak out against them. But I also can’t change them overnight.

So what do we do with the problems of fewer resources, fewer teachers, more students, and more information to teach as the world gets more complex? Some people like to just focus on fixing the systemic issues causing these problems. And we need those people. But even once they do start making headway… it will still be years before education improves from where it is. And how long until we even start making headway?

The current state of research into MOOCs and/or bots is really about dealing with the reality of where education is right now. Despite there being some larger, well-funded research projects into both, the reality is that most research consists of very low-budget (or no-budget) attempts to learn something about how to create some “thing” that can help a shrinking pool of teachers educate a growing mass of students. Imperfect solutions for an imperfect society. I don’t fully like it, but I can’t ignore it.

Unfortunately, many people are causing an unnecessary either/or conflict between “dealing with scale as it is now” and “fixing the system that caused the scale in the first place.” We can work at both – help education scale now, while pushing for policy and culture change to better support and fund education as a whole.

On top of all of that, MOOCs tend to be “massively” misunderstood (sorry, couldn’t resist that one). Despite what the hype claims, they weren’t created as a means to scale or democratize education. The first MOOCs were really about connectivism, autonomy, learner choices, and self-directed learning. The fact that they had thousands of learners in them was just a thing that happened due to the openness, not an intended feature.

Then the “second wave” of MOOCs came along, and that all changed. A lot of this was due to some unfortunate hype around MOOCs published in national publications that proclaimed some kind of “educational utopia” of the future, where MOOCs would “democratize” education and bring quality online learning to all people.

Most MOOC researchers just scoffed at that idea – and they still do. However, they also couldn’t ignore the fact that MOOCs do bring about scaled education in various ways, even if that was not the intention. So that is where we are at now: if you are going to research MOOCs, you have to realize that the context of that research will be about scale and autonomy in some way.

But it seems that the misunderstandings of MOOCs are hard-coded into the discourse now. Take the recent article “Not Even Teacher-Bots Will Save Massive Open Online Courses” by Derek Newton. Of course, open education and large courses existed long before they were coined “MOOCs”… so it is unclear what needs “saving” here, or what it needs to be saved from. But the article is a critique of a study out of the University of Edinburgh (I believe this is the study, even though Newton never links to it for you to read it for yourself) that sought “to increase engagement” by designing and deploying “a teacher-bot (botteacher) in at least one MOOC.” Newton then turns around and says “the idea that a pre-programmed bot could take over some teaching duties is troubling in Blade Runner kind of way.” Right there you have your first problematic switcheroo. “Increasing engagement” is not the same as “taking over some teaching duties.” That is like saying that lane departure warning lights on cars are the same as taking over some driving duties. You can’t conflate something that assists with something that takes over. Your car will crash if you think “lane departure warnings” are “self-driving cars.”

But the crux of Newton’s article is that because the “bot-assisted platform pulls in just 423 of 10,145, it’s fair to say there may be an engagement problem…. Botty probably deserves some credit for teaching us, once again, that MOOCs are fatally flawed and that questions about them are no longer serious or open.”  Of course, there are fatal flaws in all of our current systems – political, religious, educational, etc. – yet questions about all of those can still be serious or open. So you kind of have to toss out that last part as opinion and not logic.

The bigger issue is that calling 423 people an “engagement problem” is an unfortunate way to look at education. That is still a lot of people, considering most courses at any level can’t engage 30 students. But this misunderstanding comes from the fact that many people still misunderstand what MOOC enrollment means. 10,000 people signing up for a MOOC is not the same as 10,000 people signing up for a typical college course. Colleges advertise to millions of prospective students, who then have to go through a huge process of applications and trials to even get to register for a course. ALL of that is bypassed for a MOOC. You see a course and click to register. Done. If colleges did the same, they would also get 10,000+ signing up for a course. But they would probably only get 50-100 showing up for the first class – a lot less than any first week in most MOOCs.

Make no mistake: college courses would have engagement rates just as bad if they removed the filters of application and enrollment on who could sign up. Additionally, the requirement of “physical re-location” for most would make those engagement rates even worse than MOOCs’ if the entire process were considered.
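If it helps to see the arithmetic, here is a rough back-of-the-envelope sketch (in Python, purely for illustration) using the numbers above – the 423 out of 10,145 figure from the study Newton critiques, and the hypothetical unfiltered college course I just described:

    # A quick "engagement rate" comparison using the numbers discussed above.
    # The MOOC figures come from the Edinburgh study; the unfiltered college
    # course figures are the hypothetical ones from the previous paragraphs.

    def engagement_rate(engaged, signed_up):
        """Engaged learners as a percentage of everyone who signed up."""
        return 100 * engaged / signed_up

    mooc = engagement_rate(423, 10_145)            # roughly 4.2%
    college_low = engagement_rate(50, 10_000)      # roughly 0.5%
    college_high = engagement_rate(100, 10_000)    # roughly 1.0%

    print(f"MOOC: {mooc:.1f}%")
    print(f"Unfiltered college course: {college_low:.1f}% to {college_high:.1f}%")

Same basic math, very different filters – which is the whole point about why the raw percentages mislead.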

Look at it this way: 30 years ago, if someone said “I want to learn History beyond what a book at the library can tell me,” they would have to go through a long and expensive process of applying to various universities, finally (possibly) getting accepted at one, and then moving to where that University was physically located. Then, they would have to pay hundreds or thousands of dollars for that first course. How many tens of thousands of possible students get filtered out of the process because of all of that? With MOOCs, all of that is bypassed. Find a course on History, click to enroll, and you are done.

When we talk about “engagement” in courses, it is typically situated in a traditional context that filters out tens of thousands of people before the course even starts. To then transfer the same terminology to MOOCs is to utilize an inaccurate critique based on concepts rooted in a completely different filtering mechanism.

Unfortunately, these fundamentally flawed misunderstandings of MOOC research are not just one-off rarities. This same author also took a problematic look at a study I worked on with Aras Bozkurt and Whitney Kilgore. Just look at the title of Newton’s previous article: Are Teachers About To Be Replaced By Bots? Yeah, we didn’t even go that far, and intentionally made sure to stay as far away from saying that as possible.

Some of the critique of our work by Newton is just very weird, like where he says: “Setting aside existential questions such as whether lines of code can search, find, utter, reply or engage discussions.” Well, yes – they can do that. It’s not really an existential question at all. It’s a question of “come sit at a computer with me and I will show you that a bot is doing all of that.” Google has had bots doing this for a long, long time. We have pretty much proven that Russian bots are doing this all over the world.

Then Newton gets into pull quotes, where I think he misunderstands what we meant by the word “fulfill.” For example, it seems Newton misunderstood this quote from our article: “it appears that Botty mainly fulfils the facilitating discourse category of teaching presence.” If you read our quote in context, it is part of the Findings and Discussion section, where we are discussing what the bot actually did. But it is clear from the discussion that we don’t mean that Botty “fully fills” the discourse category, but that what it does “fully qualifies” as being in that category. Our point was in light of “self-directed and self-regulated learners in connectivist learning environments” – a context where learners probably would not engage with the instructor in the first place. In this context, yes, it did seem that Botty was filling in for an important instructor role in a way that satisfies the parameters of that category. Not perfectly, and not in a way that replaces the teacher. It was in a context where the teacher wasn’t able to be present due to the realities of where education is currently in society – scaled and less supported.

Newton goes on to say: “What that really means is that these researchers believe that a bot can replace at least one of the three essential functions of teaching in a way that’s better than having a human teacher.”

Sorry, we didn’t say “replace” in an overall context, only “replace” in a specific context that is outside of the teacher’s reach. We also never said “better than having a human teacher.” That part is just a shameful attempt at putting words in our mouths that we never said. In fact, you can search the entire article and find we never said the word “better” about anything.

Then Newton goes on to misuse another quote of ours (“new technological advances would not replace teachers just because teachers are problematic or lacking in ability, but would be used to augment and assist teachers”). His response to this is to say that we think “new technology would not replace teachers just because they are bad but, presumably, for other reasons entirely.”

Sorry, Newton, but did you not read the sentence directly after the one you quoted? We said “The ultimate goal would not be to replace teachers with technology, but to create ways for non-human teachers to work in conjunction with human teachers in ways that remove all ontological hierarchies.” Not replacing teachers… working in conjunction. Huge difference.

Newton continues with injecting inaccurate ideas into the discussion, such as “Bots are also almost certain to be less expensive than actual teachers too.” Well, actually, they currently aren’t always less expensive in the long run. Then he tries to connect another quote from us about how lines between bots and teachers might get blurred as proof that we… think they will cost less? That part just doesn’t make sense.

Newton also did not take time to understand what we meant by “post-humanist,” as evidenced by this statement of his: “the analysis of Botty was done, by design, through a “post-humanist” lens through which human and computer are viewed as equal, simply an engagement from one thing to another without value assessment.” Contrast his statement with our actual statement on post-humanism: “Bayne posits that educators can essentially explore how to retain the value of teacher presence in ways that are not in opposition to some forms of automation.” Right there we clearly state that humans still maintain value in our study context.

Then Newton pulls his most shameful bait and switch of the whole article at the end: pulling one of our “problematic questions” (where we intentionally highlighted problematic questions for the sake of critique) and presenting it as our conclusion: “the role of the human becomes more and more diminished.” Newton then goes on to state: “By human, they mean teacher. And by diminished, they mean irrelevant.”

Sorry Newton, that is simply not true. Look at the question following soon after that one, where we start the question with “or” to negate what our list of problematic questions asks: “Or should AI developers maintain a strong post-humanist angle and create bot-teachers that enhance education while not becoming indistinguishable from humans?” Then, maybe read our conclusion after all of that and the points it makes, like “bot-teachers can possibly be viewed as a learning assistant on the side.”

The whole point of our article was to say: “Don’t replace human teachers with bot teachers. Research how people mistake bots for real people and fix that problem with the bots. Use bots to help in places where teachers can’t reach. But above all, keep the humans at the center of education.”

Anyway, after a long side-tangent about our article, back to the point of misunderstanding MOOCs, and how researchers of MOOCs view MOOCs. You can’t evaluate research about a topic – whether MOOCs or bots or post-humanism or any other topic – through a lens that fundamentally misunderstands what the researchers were examining in the first place. All of these topics have flaws and concerns, and we need to think critically about them. But we have to do so through the correct lens and contextual understanding, or else we will cause more problems than we solve in the long run.