Using Learning Analytics to Predict Cheating Has Been Going on for Longer Than You Think

Hopefully by now you have heard about the Dartmouth Medical School cheating scandal, where Dartmouth College officials used questionable methods to “detect” cheating in remote exams. At the heart of the matter is how College officials used clickstream data to “catch” so-called “cheaters.” Invasive surveillance was used to track students’ activity during the exams, officials used the data to make accusations without really understanding it, and then students were pressured to react quickly to the accusations without much access to the “proof.” Almost half of those accused (7 of 17, or 41%) have already had their cases dismissed (i.e., they were falsely accused. Why is this not a criminal act?). Of the remaining 10, 9 pled guilty, but 6 of those have since tried to appeal that decision because they feel they were forced to plead guilty. FYI – that is 76%(!) who are claiming they were falsely accused. Only one of those six wanted to be named – the other 5 are afraid of reprisals from the College if they speak up.

That is intense. Something is deeply wrong with all of that.

The frustrating thing about all of this is that plenty of people have been trying to warn that this is a very likely, if not inevitable, outcome of Learning Analytics research studies that look to detect cheating from the data. Of course, this particular area of research focus is not a major aim of Learning Analytics in general, but several studies have been published through the years. I wanted to take a look at a few that represent the common themes.

The first study is a kind of pre-Learning Analytics paper from 2006 called “Detecting cheats in online student assessments using Data Mining.” Learning Analytics as a field is usually traced back to about 2011, but various aspects of it existed before that. You can even go back to the 1990s – Richard A. Schwier describes the concept of “tracking navigation in multimedia” (in the 1995 2nd edition of his textbook Instructional Technology: Past, Present, and Future – p. 124, Gary J. Anglin editor). Schwier really goes beyond tracking navigation into foreseeing what we now call Learning Analytics. So all of that to say: tracking students’ digital activity has a loooong history.

But I start with this paper because it contains some of the earliest ways of looking at modern data. The concerning thing with this study is that the overall goal is to predict which students are most likely to be cheating based on demographics and student perceptions. Yes – not only do they look at age, gender, and employment, but also a learner’s personality, social activities, and perceptions (did they think the professor was involved or indifferent? Did they find the test “fair” or not? etc).

You can see from the chart on p. 207 that males with lower GPAs are mostly marked as cheating, while females with higher GPAs are mostly marked as not cheating. And since race is not considered in the analysis, systemic discrimination could create incredibly racist oppression through this method.

Even more problematic is the “next five steps to data mining databases,” with one step recommending the collection of “responses of online assessments, surveys and historical information to detect cheats in online exams.” This includes the clarification that:

  • “information from students must be collected from the historical data files and surveys” (hope you didn’t have a bad day in the past)
  • “at the end of each exam the student will be is asked for feedback about exam, and also about the professor and examination conditions” (hope you have a wonderful attitude about the test and professor)
  • “professor will fill respective online form” (hope the professor likes you and isn’t racist, sexist, transphobic, etc if any of that would hurt you).

Of course, one might say this is pre-Learning Analytics and the current field is only interested in predicting failure, retention, and other aspects like that. Not quite. Let’s look at the 2019 article “Detecting Academic Misconduct Using Learning Analytics.” The focus in this study is a bit more specific: they seek to use keystroke logging and clickstream data to tell whether a student is writing an authentic response or transcribing a pre-written one (which is assumed to only come from contract cheating).

The lit review of this study also shows that this study is not the only one digging into this idea. The idea goes back several years through multiple studies.

While this study does not get to the same Minority Report-level concerns that the last one did, there are still some problematic issues here. First of all is this:

“Keystroke logging allows analysis of the fluency and flow of writing, the length and frequency of pauses, and patterns of revision behaviour. Using these data, it is possible to draw conclusions about students’ underlying cognitive processes.”

I really need to carve out some time to write about how you can’t use clickstream data of any kind to detect cognitive processes in any way, shape or form. Most people that read this blog know why this is true, so I won’t take the time now. But the Learning Analytics literature is full of people that think they can detect cognitive activities, processes, or presence through clickstream data… and that is just not possible.

The paper does address the difficulties in using keystroke data to analyze writing, but proposes analysis of clickstream data as a much better alternative. I’m not really convinced by the arguments they present – but the gist is that they are looking to detect revision behaviors, because authentic writing involves pauses and deletions.
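To make concrete what this kind of study is working with, here is a rough sketch of the pause-and-burst statistics a keystroke log can yield. Everything here is an illustrative assumption on my part (the log format, the 2-second pause threshold) and not the paper’s actual method – but notice that these are surface statistics about typing rhythm, nothing more:

```python
# Illustrative keystroke-log feature extraction. Input is a list of keypress
# timestamps in seconds; the pause threshold is a made-up assumption.

def writing_features(timestamps, pause_threshold=2.0):
    """Summarize pauses and typing bursts from a list of keypress times."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    pauses = [g for g in gaps if g >= pause_threshold]
    # A "burst" is a run of keypresses with no long pause in between.
    bursts, current = [], 1
    for g in gaps:
        if g < pause_threshold:
            current += 1
        else:
            bursts.append(current)
            current = 1
    bursts.append(current)
    return {
        "pause_count": len(pauses),
        "mean_pause": sum(pauses) / len(pauses) if pauses else 0.0,
        "mean_burst_length": sum(bursts) / len(bursts),
    }

# A writer who pauses mid-sentence, typed out as raw timestamps:
print(writing_features([0.0, 0.3, 0.6, 4.0, 4.2, 4.5, 9.0]))
```

Whether those numbers came from authentic composition, deep familiarity with the material, or slow transcription is exactly what the statistics cannot tell you.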

Except that is not really true for everyone. People that write a lot (like, say, by blogging) can get to a place where they can write a lot without taking many pauses. Or, if they really do know the material, they might not need to pause as much. On the other hand, the paper assumes that transcription of an existing document is a mostly smooth process. I know it is for some, but it is something that takes me a while.

In other words, this study relies on averages and clusters of writing activities (words added/deleted, bursts of writing activity, etc) to classify your writing as original or copied. Which may work for the average, but what about students with disabilities that affect how they write? What about people that just work differently than the average? What about people from various cultures that approach writing in a different method, or even those that have to translate what they want to write into English first and then write it down?

Not everyone fits so neatly into the clusters.

Of course, this study had a small sample size. Additionally, while they did collect demographic data and had students take self-regulated learning surveys, they didn’t use any of that in the study. The SRL data would seem to be a significant aspect to analyze here, as would at least some details on the students who didn’t speak English as a primary language.

Now, of course, writing out essay exam answers is not common in all disciplines, and even when it is, many instructors will encourage learners to write out answers first and then copy them into the test. So these results may not concern many people. What about more common test types?

The last article to look at is “Identifying and characterizing students suspected of academic dishonesty in SPOCs for credit through learning analytics” from 2020. There are plenty of other studies to look at, but this post is already getting long. SPOC here means “Small Private Online Course”… a.k.a. “a regular online course.” The basic gist is that they are clustering students by how close their answers are to each other and how close their submission times are. If students get the exact same answers (including choosing the same wrong choice) and turn in their tests at about the same time, they are considered “suspect of academic dishonesty.” It should also be pointed out that the lit review here shows they are not the first or only people to be looking into this in the Learning Analytics realm.
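As a rough illustration of that gist (not the paper’s actual algorithm – the names, thresholds, and data below are all invented), a pair-flagging heuristic of this sort could look something like this:

```python
from itertools import combinations
from datetime import datetime, timedelta

# Hypothetical exam records: each student's chosen answers and submission time.
submissions = {
    "student_a": {"answers": ["B", "C", "A", "D", "B"], "time": datetime(2020, 5, 1, 14, 2)},
    "student_b": {"answers": ["B", "C", "A", "D", "B"], "time": datetime(2020, 5, 1, 14, 5)},
    "student_c": {"answers": ["A", "C", "B", "D", "C"], "time": datetime(2020, 5, 1, 9, 30)},
}

ANSWER_MATCH_THRESHOLD = 1.0          # fraction of identical answers required
TIME_WINDOW = timedelta(minutes=10)   # what counts as "about the same time"

def flag_pairs(subs):
    """Flag pairs whose answers match and whose submissions are close in time."""
    flagged = []
    for (name1, s1), (name2, s2) in combinations(subs.items(), 2):
        matches = sum(a == b for a, b in zip(s1["answers"], s2["answers"]))
        similarity = matches / len(s1["answers"])
        close_in_time = abs(s1["time"] - s2["time"]) <= TIME_WINDOW
        if similarity >= ANSWER_MATCH_THRESHOLD and close_in_time:
            flagged.append((name1, name2))
    return flagged

print(flag_pairs(submissions))  # [('student_a', 'student_b')]
```

Note that everything hinges on those two thresholds – and nothing in the data itself can tell you whether a flagged pair actually talked to each other.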

The researchers are basically looking for students that meet together and share answers to the test. Which, yes – it is suspicious if you see students turn in all the same answers at about the same time and get the same grade. Which is also why most students who cheat make sure to change up a few answers, as well as space out their submissions. I don’t know if the authors of this study realized it, but they probably missed most cheaters and only caught the ones not trying that hard.

Or… let me propose something else here. All students are trying to get the right answers. So there are going to be similarities. Sometimes a lot of students getting the same wrong answer on a question is seen as a problem to fix on the teaching side (it could have been taught wrong). Plus, students can have similar schedules – working the same jobs, taking the same other classes that meet in the morning, etc. It is possible that out of the 15 or so they flagged as “suspect,” 1 or 2 or even 3 just happened to get the same questions wrong and submit at about the same time as the others. They just had bad luck.

I’m not saying that happened to all of them, but look: you do have this funnel effect with tests like these. All of your students are trying to get the same correct answers and finish before the same deadline. So it’s quite possible there will be overlap that is purely coincidental. Not for all, but isn’t it at least worth a critical examination if even a small number of students could get hurt by coincidentally turning in their test at the same time as others?
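That funnel effect can even be sketched with a quick simulation. This is purely illustrative – every parameter below (class size, number of questions, answer accuracy, submission window) is a made-up assumption – but it shows how identical answer sheets plus close submission times can occur with no cheating at all:

```python
import random

# Rough Monte Carlo of the "funnel effect": how often do two students who
# never spoke end up with identical answers AND submit within the same
# 10-minute window, purely by chance? All parameters are assumptions.
random.seed(42)

def simulate(trials=2_000, students=30, questions=10,
             p_correct=0.8, window_minutes=10, exam_minutes=120):
    coincidences = 0
    for _ in range(trials):
        answers, times = [], []
        for _ in range(students):
            ans = tuple(
                "correct" if random.random() < p_correct
                else random.choice(["wrongA", "wrongB", "wrongC"])
                for _ in range(questions)
            )
            answers.append(ans)
            times.append(random.uniform(0, exam_minutes))
        # Did any pair match exactly and submit close together?
        found = any(
            answers[i] == answers[j] and abs(times[i] - times[j]) <= window_minutes
            for i in range(students) for j in range(i + 1, students)
        )
        coincidences += found
    return coincidences / trials

print(f"{simulate():.1%} of simulated exams contain a coincidental 'suspect' pair")
```

Under these (invented) numbers, well over half of the simulated exams contain at least one pair that would look “suspect” despite nobody cheating. Change the assumptions and the rate changes – which is rather the point.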

(This also makes a good case for ungrading, authentic assessment, etc.)

Of course, the “suspected” part gets dropped by the end of the paper: “We have applied this method in a for credit course taught in Selene Unicauca platform and found that 17% of the students have performed academic dishonest actions, based on current conservative thresholds.” How did they get from “suspected” to “have performed?” Did they talk to the students? Not really. They looked at five students and felt that there was no way their numbers could be anything but academic dishonesty. Then they talked to the instructor and found that three students had complained about low grades. The instructor looked at their tests, found they had the exact same wrong answers, and… case closed.

This is why I keep saying that Learning Analytics research projects should be required to have an instructional designer or learning research expert on the team. I can say, after reviewing course results for decades, that it is actually common for students to get the same wrong answers and be upset about it because they were taught wrong. Instructors and instructional designers do make mistakes, so always find out what is going on. It’s also possible that there was a conversation weeks earlier where one student with the wrong information spread it to several other students while discussing the class. It happens.

But this is what happens when you don’t investigate fully and assume the data is all you need. Throwing in a side of assuming that cheaters act a certain way certainly goes a long way as well. So you can see a direct line from assumptions made about personality and demographics of who cheaters are, to using clickstream data to know what is going on in the brain, to assuming the data is all you need…. all the way to the Dartmouth Medical School scandal. Where there is at least a 41%-76% false accusation rate currently.

Video Content or Audio-Only Content For Online Courses: Which is Better?

Like many of you, I saw this Tweet about audio-only lectures making the rounds on Twitter:

https://twitter.com/sivavaid/status/1389592396820795397

Now, of course, many questioned “why lectures?” (which is a good question to ask), but the main discussion seemed to focus on the content of courses more than lectures specifically. Video content (often micro-content) is common in online courses. There were many points raised about accessibility (both of videos and audio-only lectures). Many seem to feel strongly that you should do either video content or audio-only content. My main thought was: instead of asking “either/or”… why not think “both/and”?

From certain points of view, audio-only content addresses some accessibility issues many rarely consider. When creating video content, the speaker will sometimes rely on visual-only cues and images without much narration, leaving those that are listening with gaps in their understanding. So while it is easy to say “if you don’t want video, then just play the video in the background and don’t watch,” sometimes the audio portion of a video leaves out key pieces of information. This is usually because when the content gets to a visual part, the speaker often assumes everyone playing the video can see it.

“Look at what the red line does here…”

“When you see this, what do you think of?…”

And so on. People that record podcasts often know they have to describe any visuals they reference so that people listening know what they are talking about. For accessibility purposes, we really should be doing this in videos as well. Not to mention that it helps the information make more sense for everyone, regardless of disability.

There are other advantages to audio-only content as well, such as being able to download the audio file to various devices and take it with you wherever you go. Some devices do this with video files – but how often do we offer videos for download? And what if someone has limited access or storage capacity for massive video files? Audio-only mp3 files work for a wider variety of people on the technical level.

On the other hand, there are times when video is preferred. The deaf or hard of hearing often come to mind. Additionally, some people find that the focus video requires helps them understand better. Video can also help increase teacher presence. Plus, video content is not the same as a Zoom call (or even a video lecture broadcast live), so it’s not really fair to throw both in the same bucket.

I would also point out that just because learners like audio-only one semester, that doesn’t mean the next semester of learners will. And I would guarantee that there are those in Vaidhyanathan’s course that didn’t really like the audio-only, but didn’t want to speak up and be the outlier.

Remember: Outliers ALWAYS exist in your courses. Never underestimate the silencing power of consensus.

But again, I don’t think it takes much extra time to give learners the option to choose for themselves what they want.

First of all, every video you post in a course should be transcribed and closed-captioned as a ground rule – not only for accessibility, but also for Universal Design for Learning. But I also know that this is an ideal that is often not supported financially at many institutions. For the sake of this article, I am not going to repeat the need to be proactive in making courses accessible.

So with that in mind, the main step that you will need to add into your course design process is to think through your video content (which is hopefully focused micro-content) and add in descriptions of any visual-only content. Don’t forget intro, transition, and ending graphics – speak out everything that will be on screen.

Then, while you are editing or finalizing the video, export to mp3 in addition to your preferred video format. Or use a tool that can extract the audio from the video (this is also helpful if you already have existing videos with no visual-only aspects). Offer that mp3 as a download on the page with the video (or even create a podcast with it). Now your students have the option to choose video or audio-only (or to switch as they like).
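If you aren’t sure how to do the extraction, here is one possible approach using Python to drive the ffmpeg command-line tool (this assumes ffmpeg is installed on your machine; the folder and file names are just examples):

```python
# Batch-extract mp3 audio from lecture videos using the ffmpeg CLI.
# Paths below are illustrative placeholders.
import subprocess
from pathlib import Path

def extract_mp3(video_path: str) -> Path:
    """Write an mp3 next to the video file using ffmpeg."""
    video = Path(video_path)
    mp3 = video.with_suffix(".mp3")
    subprocess.run(
        ["ffmpeg", "-i", str(video),
         "-vn",                                  # -vn: drop the video stream
         "-codec:a", "libmp3lame", "-q:a", "2",  # decent VBR mp3 quality
         str(mp3)],
        check=True,
    )
    return mp3

# Example usage over a folder of recordings:
# for video in Path("lectures").glob("*.mp4"):
#     extract_mp3(video)
```

Most video editors can also export audio directly, so use whichever fits your workflow – the point is just that the mp3 step can be automated rather than done by hand each week.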

Also, once you get the video closed-captioned, take the transcript and spend a few minutes collecting it into paragraphs to make it more readable. Maybe even add the images from the video to the document (you would already have full alt descriptions in the text). Then put this file on the page with the video as another downloadable file. You could even consider collecting your transcripts into PressBooks and making your own OER. However you want to do it, just make it another option for learners to get the content.

Anyways… the idea here is that students can choose for themselves to watch the video, listen to the audio file, or read the transcript – all in the manner they want to on the device they want.

One of the questions that always comes up here is how to make the video content sound natural. Spontaneous/off-the-cuff recordings can miss material or go down a rabbit hole. Plus, you might forget to describe some visual content. But reading pre-written scripts sounds wooden and boring. One of my co-authors for Creating Online Learning Experiences (Brett Benham) wrote about how to approach this issue in Chapter 10: Creating Quality Videos. You can read more at the link, but the basic idea is to quickly record a spontaneous take on your content and have that transcribed (maybe even by an automatic service to save some money). Then take that transcript, edit out the side-trails, mistakes, and missteps, and use your edited document to record the final video. It will then be your spontaneous voice, but cleaned up where needed and ready for closed-captioning.

To recap the basic points:

  1. Think about which parts of your video content will have visual aspects, and come up with a description for those parts in words.
  2. Record your video content with the visual aspects, but make sure to cover those descriptions you came up with.
  3. Create mp3 files from your videos and add that to the course page with the video embed/link and transcription file.

If you want to go to the next level with this:

  1. Enable downloading of your videos (or store them in a service that allows downloads if that option is not possible in your LMS).
  2. Turn your mp3 files into a podcast so that learners can subscribe and automatically download to devices when you post new files.
  3. Take your transcriptions and re-format them (don’t change any words or add/delete anything) into readable text, along with the visuals from the video. Save this as an accessible PDF and let learners download if they like.
  4. Collect your PDF transcripts into a PressBook, where you can add the audio and video files/links/embeds as well.
  5. Maybe even add some H5P activities to your PressBooks chapters to make them interactive lessons.

Op-Ed: Online Proctoring is Not Essential

After one of my usual Twitter rants about proctoring software, I was asked to turn the rant into an Op-Ed. Elearning Inside liked it enough to publish it:

In a recent op-ed about online proctoring, ProctorU CEO Scott McFarland made some concerning claims about how he feels proctoring online exams is “essential” and “indispensable.” Many were quick to point out their skepticism of the owner of a proctoring company making such a claim.

One important detail that McFarland left out was that the exams or tests themselves are not essential. Not only that, he skipped over some of the largest concerns with proctoring, while also not accurately addressing the research that is happening in this area…

You can read the rest of the article, where I make wild references to assessment gods, 5000% of students cheating, and general debunking of the current “cheating is everywhere” FUD. But the main point is that there is a better way based on solid course design.

The Problem of Learning Analytics and AI: Empowering or Resistance in the Age of “AI”

So where to begin with this series I started on Learning Analytics and AI? The first post started with an over-simplified view of the very basics. I guess the most logical place to jump to next is… the leading edge of the AI hype? Well, not really… but there is an event in that area happening this week, so I need to go there anyways.

I was a bit surprised that the first post got some attention – thank you to those that read it. Since getting booted out of academia, I have been unsure of my place in the world of education. I haven’t really said much publicly or privately, but it has been a real struggle to break free from the toxic elements of academia and figure out who I am outside of that context. I was obviously surrounded by people that weren’t toxic, and I still adjunct at a university that I feel supports its faculty… but there were still other systemic elements that affect all of us that are hard to process once you are gone.

So, anyway, I just wasn’t sure if I could still write anything that made a decent point, and I wasn’t too confident I did that great of a job writing about such a complex topic in a (relatively) short blog post last time. Maybe I didn’t, but even a (potentially weak) post on the subject seems to resonate with some. Like I said in the last post, I am not the first to bring any of this up. In fact, if you know of any article or post that makes a better point than I do, please feel free to add it in the comments.

So, to the topic at hand: this week’s Empowering Learners in the Age of AI conference in Australia. My concern with this conference is not with who is there – it seems to be a great group of very knowledgeable people. I don’t know some of them, but many are big names in the field that know their stuff. What sticks out to me is who is not there, as well as how AI is being framed in the brief descriptions we get. But neither of those points is specific to this conference. In fact, I am not really looking at the conference as much as some parts of the field of AI, with the conference just serving as proof that the things I am looking at are out there.

So first of all, to address the name of the conference. I know that “empowering learners” is a common thing to say, not just in AI but in education in general. But it is also a very controversial and problematic concept. This is a concern I would hang on all of education – and even on myself, since I like the term “empower” as well. No matter what my intentions (or anyone else’s), the term still places the institution and the faculty at the center of the power in the learning process – there to decide whether the learners get to be empowered or not. One of the best posts on this topic is by Maha Bali: The Other Side of Student Empowerment in a Digital World. At the end of the post, she gets to some questions that I want to ask of the AI field, including these key ones:

“In what ways might it reproduce inequality? How participatory has the process been? How much have actual teachers and learners, especially minorities, on the ground been involved in or consulted on the design, implementation, and assessment of these tools and pedagogies?”

I’ll circle back to those throughout the post.

Additionally, I think we should all question the “Age of AI” and “AI Society” part. It is kind of complicated to get into what AI is and isn’t, but the most likely form of AI we will see emerge first is what is commonly called “Artificial General Intelligence” (AGI), which is a deceptive way of saying “pretending to act like humans without really being intelligent like we are.” AGI is really a focus on creating something that “does” the same tasks humans can, which is not what most people would attribute to an “Age of AI” or “AI Society.” This article in Forbes looks at what this means, and how experts are predicting that we are 10-40 years away from AGI.

Just as an FYI, I remember reading in the 1990s that we were 20-40 years away from AGI then as well.

So we aren’t near an Age of AI, probably not in many of our lifetimes, and even the expert opinions may not end up being true. The Forbes article fails to mention that there were many problems with the work that claimed to be able to determine sexuality from images. In fact, there is a lot to be said about differentiating AI from BS that rarely gets brought up by the AI researchers themselves. Tristan Greene sums it up best in his article about “How to tell the difference between AI and BS”:

“Where we find AI that isn’t BS, almost always, is when it’s performing a task that is so boring that, despite there being value in that task, it would be a waste of time for a human to do it.”

I think it would have been more accurate to say you are “bracing learners for the age of algorithms” than empowering them for an age of AI (one that is at least decades off, and may never actually happen according to some). But that is me, and I know there are those that disagree. So I can’t blame people for being hopeful that something will happen in their own field sooner than it might in reality.

Still, the most concerning thing about the field of AI is who is not there in the conversations, and the Empowering Learners conference follows the field – at least from what I can see on their website. First of all, where are the learners? Is it really empowering for learners when you can’t really find them on the schedule or in the list of speakers and panelists? Why is their voice not up front and center?

Even bigger than that is the problem that has been highlighted this week – but one that has been there all along:

The specific groups she is referring to are BIPOC, LGBTQA, and people with disabilities. We know that AI has discrimination coded into it. Any conference that wants to examine “empowerment” will have to make justice front and center because of long-existing inequalities in the larger field. Of course, we know that different people have different views of justice, but “empowerment” would also mean that each person who faces discrimination gets to determine what that means. It’s really not fair to hold a single conference accountable for issues that existed long before the conference did, but by using the term “empowerment” you are setting yourself up against a pretty high standard.

And yes, “empowerment” is in quotes because it is a problematic concept here, but it is the term the field of AI – and really a lot of the world of education – uses. The conference web page does ask “who needs empowering, why, and to do what?” But do they mean inequality? And if so, why not say it? There are hardly any mentions of this question after it is brought up, much less anything connecting the question to inequality, in most of the rest of the program. Maybe it will be covered in the conference – it is just not very prominent as the schedule stands. I will give them the benefit of the doubt until after the conference happens, but if they do ask the harder questions, then they should have highlighted that more on the website.

So in light of the lack of direct reference to equity and justice, the concept of “empowerment” feels like it is taking on the role of “equality” in those diagrams that compare “equality” with “equity” and “justice”:

Equality vs equity vs justice diagram
(This adaptation of the original Interaction Institute for Social Change image by Angus Maguire was found on the Agents of Good website. Thank you Alan Levine for helping me find the attribution.)

If you aren’t going to ask who is facing inequalities (and I say this looking at the fields of AI, Learning Analytics, Instructional Design, Education, all of us), then you are just handing out empowerment the same to all. Just asking “who needs empowering, why, and to do what?” doesn’t get to critically examining inequality.

In fact, the assumption is being made by so many people in education that you have no choice but to utilize AI. One of the best responses to the “Equality vs Equity vs Justice” diagrams has come from Bali and others: what if the kids don’t want to play soccer (or eat an apple or catch a fish or whatever else is on the other side of the fence in various versions)?

Resistance is a necessary aspect of equity and justice. To me, you are not “empowering learners” unless you are teaching them how to resist AI itself first and foremost. But resistance should be taught to all learners – even those that “feel they are safe” from AI. This is because 1) they need to stand in solidarity with those that are the most vulnerable, to make sure the message is received, and 2) they aren’t as safe as they think.

There are many risks in AI, but are we really taking the discrimination seriously? In the linked article, Princeton computer science professor Olga Russakovsky said

“A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities. We’re a fairly homogeneous population, so it’s a challenge to think broadly about world issues.”

Additionally, (now former) Google researcher Timnit Gebru said that scientists like herself are

“some of the most dangerous people in the world, because we have this illusion of objectivity.”

Looking through the Empowering Learners event, I don’t see many Black and Aboriginal voices represented. There are some People of Color, but not nearly enough considering they would be the ones most affected by the discrimination that would impede any true “empowerment.” And where are the experts on the harm caused by these tools, like Safiya Noble, Chris Gilliard, and many others? The event seems weighted towards voices that would mostly praise AI, and it is a very heavily white set of voices as well. This is the way many conferences are, including those looking at education in general.

Also, considering that this is in Australia, where are the Aboriginal voices? It’s hard to tell from the schedule itself. I did see on Twitter that the conference will start with an Aboriginal perspective. But when is that? In the 15-minute introductory session? That is nowhere near enough time. Maybe they are elsewhere on the schedule and just not noted well enough to tell. But why not make that a prominent part of the event rather than part of a 15-minute intro (if that is what it is)?

There are some other things I want to comment on about the future of AI in general:

  • The field of AI constantly makes references to how AI is affecting and improving areas such as medicine. I would refer you back to the “How to tell the difference between AI and BS” article for much of that. But something that worries me about the entire AI field talking this way is that they are attributing “artificial intelligence” to things that boil down to advanced pattern recognition built mainly on human intelligence. Let’s take, for example, recognizing tumors in scans. Humans program the AI to recognize patterns in images that look like tumors. Everything that the AI knows to look for comes directly from human intelligence. Just because you can then get the algorithm to repeat what the humans programmed it to do thousands of times per hour, that doesn’t make it intelligent. It is human pattern recognition that has been digitized, automated, and repeated rapidly. This is generally what is happening with AI in education, defense, healthcare, etc.
  • Many leaders in education like to say that “institutions are ill-prepared for AI” – but how about how ill-prepared AI is for equity and reality?
  • There is also often talk in the AI community about building trust between humans and machines, and we see examples of it at the conference as well: “can AI truly become a teammate in group learning or a co-author of a ground-breaking scientific discovery?” I don’t know what the speaker plans to say, but the answer is no. No, we shouldn’t build trust, and no, we shouldn’t anthropomorphize AI. We should always be questioning it. But we also need to be clear, again, that AI is not the one that is writing (or creating music or paintings). This is the weirdest area of AI – they feed a bunch of artistic or musical or literary patterns into AI, tell it how to assemble the patterns, and when something comes out it is attributed to AI rather than the human intelligence that put it all together. Again, the machine being able to repeat and even refine what the human put there in the first place is not the machine creating it. Take, for example, these various AI-generated music websites. People always send these to me and say “look how well the machine put together ambient or grindcore music or whatever.” Then I listen… and it is a mess. They take grindcore music, chop it up into bits, run those bits through pattern recognition, and spit out a random mix – one that generally doesn’t sound like very good grindcore. Ambient music works the best to uninitiated ears, but to fans of the genre it still doesn’t work that great.
  • I should also point out about the conference that there is a session on the second day that asks “Who are these built for? Who benefits? Who has the control?” and then mentions “data responsibility, privacy, duty of care for learners” – which is a good starting point. Hopefully the session will address equity, justice, and resistance specifically. The session, like much of the field of AI, rests on the assumption that AI is coming and there is nothing you can do to resist it. Yes, the algorithms are here, and it is hard to resist – but you still can. Besides, experts are still saying 10-40 years for the really boring stuff to emerge, as I examined above.
  • I also hope the conference will discuss the meltdown that is happening in AI-driven proctoring surveillance software.
  • I haven’t gotten much into surveillance yet, but yes all of this relies on surveillance to work. See the first post. Watch the Against Surveillance Teach-In Recording.
  • I was about to hit publish on this when I saw an article about a Deepfake AI Santa that you can make say whatever you want. The article says “It’s not nearly as disturbing as you might think”… but yes, it is. Again, people saying something made by AI is good and realistic when it is not. The Santa moves and talks like a robot with zero emotion. Here again, they used footage of a human actor and human voice samples, and the “AI” is an algorithm that chops them up into the parts that make your custom message. How could this possibly be misused?
  • One of the areas of AI that many in the field like to hype is “conversational agents,” aka chatbots. I want to address that as well, since that is an area that I have (tried) to research. The problem with researching agents/bots is that learners just don’t seem to be impressed with them – it’s just another thing to them. But I really want to question how these count as AI after having created some myself. The process for making a chatbot is that you first organize a body of information into chunks of answers or statements that you want to send as responses. You then start “training” the AI to connect what users type into the agent (aka “bot”) with specific statements or chunks of information. The AI makes a connection and sends the statement or information or next question or video or whatever it may be back to the user. But the problem is, the “training” is you guessing dozens of ways that a person might ask a question or make a statement (including typos or misunderstandings) that matches the chunk of information you want to send back. You literally do a lot of the work for the AI by telling it all the ways someone might type something into the agent that matches each chunk of content. Most platforms want at least 20 or more. What this means is that most of the time, when you are using a chatbot, it gives you the right answer because you typed in one of the most likely questions that a human guessed and added to the “training” session. In the rare cases where someone types something a human didn’t guess, the Natural Language Processing kicks in to try to guess the best match. But even then, it could be a percentage of similar words more than “intelligence.” So, again, it is human intelligence that is automated and re-used thousands of times a minute – not something artificial that has a form of intelligence.
Now, this might be useful in a scenario where you have a large body of information (like an FAQ bot for the course syllabus) that could use something better than a search function. Or maybe a branching-scenario lesson. But it takes time to create a good chatbot. There is still a lot of work and skill in creating the questions and responses well. But to use chatbots for a class of 30, 50, 100? You will probably spend so much time making it that it would be easier to just talk to your students.
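To make the “human does the intelligence” point concrete, here is a minimal Python sketch of the intent-matching core of a chatbot like the ones described above. All intent names, utterances, and responses are hypothetical, and real platforms use more sophisticated NLP than plain word overlap – but the basic shape is the same: humans hand-write the utterance variants, and the “AI” picks the closest match.

```python
def tokenize(text):
    """Lowercase the input and split it into a set of word tokens."""
    return set(text.lower().replace("?", "").split())

# The human "training": each intent pairs hand-written example utterances
# with the canned chunk of content to send back. (Hypothetical examples.)
INTENTS = {
    "due_dates": {
        "utterances": [
            "when is the assignment due",
            "what is the due date",
            "when do I turn in my homework",
        ],
        "response": "All assignments are due Friday at 5pm.",
    },
    "office_hours": {
        "utterances": [
            "when are office hours",
            "how do I meet the instructor",
        ],
        "response": "Office hours are Tuesdays 2-4pm on Zoom.",
    },
}

def reply(user_input):
    """Pick the intent whose training utterances share the most words
    with the user's message -- word overlap, not 'intelligence'."""
    words = tokenize(user_input)
    best_intent, best_score = None, 0
    for intent in INTENTS.values():
        for utterance in intent["utterances"]:
            score = len(words & tokenize(utterance))
            if score > best_score:
                best_intent, best_score = intent, score
    if best_intent is None:
        # Nothing matched: fall back to the usual "please rephrase" dodge.
        return "Sorry, I don't understand. Try rephrasing?"
    return best_intent["response"]
```

Every “right answer” this bot gives exists because a human guessed that phrasing ahead of time; the fallback branch is all that is left when the guessing runs out.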
  • Finally, please know that I realize that what I am talking about still requires a lot of work and intelligence to create. I’m not doubting the abilities of the engineers and researchers and others that put their time into developing AI. I’m trying to get at the pervasive idea that we are in an Age of AI that can’t be avoided. It’s an idea that was even made into a documentary web series a year ago. I also question whether “artificial intelligence” is the right term for all of this, rather than something more accurate like “automation algorithms.”

Again, everything I touch on here is not as much about this conference as it is about the field of AI, since this conference is really just a lot of what is in the AI field concentrated into two days and one website. The speakers and organizers might have already planned to address everything I brought up here a long time ago, and they just didn’t get it all on the website. We will see – there are some sessions with no description and just a bio. But still, at the core of my point, I think that educators need to take a different approach to AI than we have so far (maybe by not calling it that when it rarely is anything near intelligent) by taking justice issues seriously. If the machine is harming some learners more than others, the first step is to teach resistance, and to be successful in that, all learners and educators need to join in the resistance.

The Problem of Learning Analytics and AI

For some time now, I have been wanting to write about some of the problems I observed during my time in the Learning Analytics world (which also crosses over into Artificial Intelligence, Personalization, Sentiment Analysis, and many other areas as well). I’m hesitant to do so because I know the pitchforks will come out, so I guess I should point out that all fields have problems. Even my main field of instructional design is far from perfect. Examining issues within a field is (or should be) a healthy part of the growth of that field. So this will probably be a series of blog posts as I look at publications, conferences, videos, and other aspects of the LA/PA/ML/AI etc. world that are in need of a critical examination. I am not the first or only person to do this, but I have noticed a resistance by some in the field to consider these viewpoints, so hopefully adding more voices to the critical side will bring more attention to these issues.

But first I want to step back and start with the basics. At the core of all analytics, machine learning, AI, etc are two things: surveillance and algorithms. Most people wouldn’t put it this way, but let’s face it: that is how it works. Programs collect artifacts of human behavior by looking for them, and then process those through algorithms. Therefore, the core of all of this is surveillance and algorithms.

At the most basic level, the surveillance part is a process of downloading a copy of data from a database that was intentionally recording it. That data is often a combination of click-stream data, assignment and test submissions, discussion forum comments, and demographic data. All of this is surveillance, and in many cases this is as far as it goes. A LOT of the learning analytics world is based on click-stream data, especially with an extreme focus on predictive analytics. But in a growing number of examples, there are also more invasive forms of surveillance added that rely on video recordings, eye and motion detection, biometric scans, and health monitoring devices. The surveillance is getting more invasive.

I would also point out that none of this is accidental. People in the LA and AI fields like to say that digital things “generate” data, as if it is some unintentional by-product of being digital: “We turned on this computer, and to our surprise, all this data magically appeared!”

Data has to be intentionally created, extracted, and stored to exist in the first place. In fact, there usually is no data in any program until programmers decide they need it. They will then create a variable to store that data for use within the program. And this moment is where bias is introduced. The reason why certain data – like names, for example – are collected and others aren’t has to do with a bias towards controlling who has access and who doesn’t. Then that variable is given a name – it could be “XD4503” for all the program cares. But to make it easier for programmers to work together, they create variable names that can be understood by everyone on the team: “firstName,” “lastName,” etc.

Of course, this designation process introduces more bias. What about cultures that have one name, or four names? What about those that have two-part names, like the “al” that is common in Arabic names, but isn’t really used for alphabetizing purposes? What about cultures that use their surname as their first name? What about random outliers? When I taught eighth grade, I had two students that were twins, and their parents gave them both nearly identical sets of five names. The only difference between the two was that the third name was “Jevon” for one and “Devon” for the other. So much of the data that is created – as well as how it is named, categorized, stored, and sorted – is biased towards certain cultures over others.
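To make the schema bias concrete, here is a minimal Python sketch of a hypothetical two-field name schema of the “firstName”/“lastName” kind, and the kinds of names it quietly mishandles. The splitting rule and the example names are my own illustration, not from any particular system.

```python
def split_name(full_name):
    """Naive schema: everything before the last space is 'firstName',
    the rest is 'lastName' -- a design choice, not a neutral fact."""
    parts = full_name.split()
    if len(parts) == 1:
        # Mononym: the schema forces an empty field on this person.
        return {"firstName": parts[0], "lastName": ""}
    return {"firstName": " ".join(parts[:-1]), "lastName": parts[-1]}

# Works as intended only for names that fit the designers' assumptions:
print(split_name("Ada Lovelace"))      # two names: fits the schema
print(split_name("Sukarno"))           # one name: lastName comes back blank
print(split_name("Haifa al-Mansour"))  # "al-Mansour" now alphabetizes under "a"
```

Every branch in that function is a cultural assumption that someone chose, which is the point: the bias lives in the data design, long before any analytics run on top of it.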

Also note here that there is usually nothing that causes this data to leave the program utilizing it. In order for some outside process or person to see this data, programmers have to create a method for displaying and/or storing that data in a database. Additionally, any click-stream, video, or biometric data that is stored has to be specifically and intentionally captured in ways that can be stored. For example, a click in itself is really just an action that makes a website execute some function. It disappears after that function happens – unless someone creates a mechanism for recording what was clicked on, when it was clicked, what user was logged in to do the click, and so on.
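A rough sketch of that distinction, in Python with hypothetical field names: the click only becomes data because someone deliberately wrote the recording step. Remove that step and the same click executes its function and leaves no trace.

```python
import time

click_log = []  # this "table" only exists because someone created it

def handle_click(user_id, element_id, action, record=True):
    """A click just executes a function and disappears --
    unless someone deliberately writes the recording step."""
    if record:
        # Surveillance is a choice: each field below was intentionally
        # selected, named, and stored by a programmer.
        click_log.append({
            "user": user_id,
            "element": element_id,
            "timestamp": time.time(),
        })
    return action()  # the thing the user actually wanted to happen

# With record=False, the same click leaves no trace at all:
handle_click("student42", "submit-quiz", lambda: "quiz submitted", record=False)
```

The `user`, `element`, and `timestamp` fields are illustrative; the point is that each one reflects a decision about what to surveil, not a by-product of being digital.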

All of this to say that none of this is coincidental, accidental, or unplanned. There is a specific plan and purpose for every piece of data that is created and collected outside of the program utilizing it. None of the data had to be collected just because it was magically “there” when the digital systems were turned on. The choice was made to create the data through surveillance, and then store it in a way that it could be used – perpetually if needed.

Therefore, different choices could be made to not create and collect data if the people in control wanted it that way. It is not inevitable that data has to be generated and collected.

Of course, most of the few people that will read this blog already know all of this. The reason I state it all here is for anybody that might still be thinking that the problems with analytics and AI are created during the design of the end-user products. For example, some believe that the problems that AI proctoring has with prejudice and discrimination started when the proctoring software was created… but really this is only the continuation of problems that started when the data these AI systems utilize was intentionally created and stored.

I think that the basic fundamental lens or mindset or whatever you want to call it for publishing research or presenting at conferences about anything from Learning Analytics to AI has to be a critical one rooted in justice. We know that surveillance and algorithms can be racist, sexist, ableist, transphobic, and the list of prejudices goes on. Where people are asking the hard questions about these issues, that is great. Where the hard questions seem to be missing, or people are not digging deep enough to see the underlying biases as well, I want to blog about it. I have also noted that the implementation of LA/ML/AI tools in education too often lacks input from the instructional design / learning sciences / etc. fields – so that will probably be in the posts as well.

While this series of posts is not connected to the Teach-In Against Surveillance, I was inspired to get started on this project based on reflecting on why I am against surveillance. Hopefully you will join the Teach-In tomorrow, and hopefully I will get the next post on the Empowering Learners for the Age of AI conference written in this lifetime. :)

People Don’t Like Online Proctoring. Are Institutional Admins Getting Why?

You might have noticed a recent increase in the complaints and issues being leveled against online proctoring companies. From making students feel uncomfortable and/or violated, to data breaches and CEOs possibly sharing private conversations online, to a growing number of student and faculty/staff petitions against the tools, to lawsuits being leveled against dissenters for no good reason, the news has not been kind to the world of Big Surveillance. I hear the world’s tiniest violin playing somewhere.

It seems that the leadership at Washington State University decided to listen to concerns… uhhh… double down and defend their position to use proctoring technology during the pandemic. While there are great threads detailing different problems with the letter, I do want to focus in on a few statements specifically. Not to specifically pick on this one school, but because WSU’s response is typical of what you hear from too many Higher Ed administrations. For example, when they say…

violations of academic integrity call into question the meaningfulness of course grades

That is actually a true statement… but not in the way it was intended. The intention was to say that cheating hurts academic integrity because it messes up the grade structures, but it could also be taken to say that cheating highlights the problem with the meaningfulness of grades themselves, because cheating really doesn’t affect anyone else.

Think about it: someone else cheats, and it casts doubt on the meaning of my grade if I don’t cheat? How does that work exactly? Of course, this is a nonsense statement that really highlights how cheating doesn’t change the meaning of grades for anyone else. It’s like the leaders at this institution are right there, but don’t see the forest for the trees: what exactly does a grade mean if the cheaters that get away with it don’t end up hurting anyone but themselves? Or does cheating only cause problems for non-cheaters when the cheaters get caught? How does that one work?

But let’s focus here: grades are the core problem. Yes, many people feel they are arbitrary and even meaningless. Still others say they are unfair, while some look at them as abusive. At the very least, you really should realize grades are problematic. Students can guess and get a higher grade than what they actually know. Tests can be gamed. Questions have bias and discrimination built in too many times. And so on. Online proctoring is just an attempted fix for a problem that existed long before “online” was even an option.

But let’s see if the writers of the letter explain exactly how one person cheating harms someone else… because maybe I am missing something:

when some students violate academic integrity, it’s unfair for the rest. Not only will honest students’ hard work not be properly reflected…. Proctoring levels the playing field so that students who follow the rules are not penalized in the long run by those who don’t.

As someone that didn’t cheat in school, I am confused as to how this exactly works. I really never spent a single minute caring about other students’ cheating. You knew it happened, but it didn’t affect you, so it was their loss and not yours. In fact, you never lost anything in the short or long run from other students’ cheating. I have no clue how my hard work was not “properly reflected” because of other students’ cheating.

(I would also note that this “level the playing field” claim means they assume proctoring services catch all “cheaters” online, just like having instructors in the on-campus classroom supposedly meant that all of the “cheaters” in those classes were caught. But we all know that is not the case.)

I have never heard a good answer for how this supposed “penalization” works. Most of the penalization I know of from classes comes from systemic issues against BIPoC students that happen in ways that proctoring never deals with. You sometimes wish institutions would put as much money into fighting that as they do into spying through student cameras….

But what about the specific concerns with how these services operate?

Per WSU’s contract, the recorded session is managed by an artificial intelligence “bot” and no human is on the other end at ProctorU watching the student. Only the WSU instructor can review the recorded session.

A huge portion of the concern about proctoring has been about the AI bots – which are here presented as an “it’s all okay because” solution…? Much of the real concern many have expressed is with the algorithms themselves and how they are usually found to be based on racist, sexist, and ableist norms. Additionally, the other main concern is what the instructor might see when they do review a recording of a student’s private room. No part of the letter in question addresses any of the real concerns with the bigger picture.

(It is probably also confusing to people whether or not someone is watching on the other side of the camera when there are so many complaints online from students that have had issues with human proctors, especially ones that were “insulting me by calling my skin too dark” as one complaint states.)

The response then goes on to talk about getting computers that will work with proctoring service to students that need them, or having students come in to campus for in-person proctoring if they just refuse to use the online tool. None of this addresses the concerns of AI bias, home privacy, or safety during a pandemic.

The moral here is this: if you are going to respond to concerns that your faculty and staff have, make sure you are responding to the actual concerns and not some imaginary set of concerns that few have expressed. There is a bigger picture as to why people are objecting to these services, which – yes – may start with feeling like they are being spied on by people and/or machines. But just saying “look – no people! (kind of)” is not really addressing the core concerns.

QM and the Politics of the “Unbiased” Bias

So it started off innocently enough, with a Tweet about concerns regarding the popular QM rubric for course review:

Different people have voiced concerns with the rubric through the years… usually not saying that it is all bad or anything, but just noting that it presents itself as a “one rubric for all classes” that actually seems to show a bias for instructor-centered courses with pre-determined content and activities. Now, this describes many good classes – don’t get me wrong. But there are other design methodologies and learning theories that produce excellent courses as well.

The responses from the QM defenders to the tweet above (and those like me that agreed with it) were about many things that no one was questioning: how it is driven by research (we know, we have done the research as well), lots of people have worked for decades on it (obviously, but we have worked in the field for decades as well; plus many really bad surveillance tools can say the same, so be careful there), QM is not meant to be used this way (even though we are going by what the rubric says), it is the institution’s fault (we know, but I will address this), people who criticize QM don’t know that much about it (I earned my Applying the QM Rubric (APPQMR) certificate on October 28, 2019 – so if I don’t understand, then whose fault is that? :) ), and so on.

Now, technically most of us weren’t “criticizing” QM as much as discussing its limitations. Rubrics are technology tools, and in educational technology we are told not to look at tools as the one savior of education. We are supposed to acknowledge their uses, limitations, and problems. But every time someone wants to discuss the limitations of QM, we get met with a wall of responses bent on proving there are no limitations to QM.

The most common response is that QM does not suggest how instructors teach, what they teach, or what materials they use to teach. It is only about online course design and not teaching. True enough, but in online learning, there isn’t such a clear line between design and teaching. What you design has an effect on what is taught. In fact, many online instructors don’t even call what they do “teaching” in a traditional sense, but prefer to use words like “delivery” or “facilitate” in place of “teaching.” Others will say things like “your instructional design is your teaching.” All of those statements are problematic to some degree, but the point is that your design and teaching are intricately linked in online education.

But isn’t the whole selling point of QM the fact that it improves your course design? How do you improve the design without determining what materials work well or not so well? How do you improve a course without improving assignments, discussions, and other aspects of “what” you teach? How do you improve a course without changing structural options like alignment and objectives – the things that make up “how” you teach?

The truth is, General Standards 3 (Assessment and Measurement), 4 (Instructional Materials), and 5 (Learning Activities and Learner Interaction) of the QM rubric do affect what you teach and what materials you use. They might not tell you to “choose this specific textbook,” but they do grade any textbook, content, activity, or assessment based on certain criteria (which is often a good thing when bad materials are being used). But those three General Standards  – along with General Standard 2 (Learning Objectives (Competencies)) – also affect how you teach. Which, again, can be a good thing when bad ideas are utilized (although the lack of critical pedagogy and abolitionist education in QM still falls short of what I would consider quality for all learners). So we should recognize that QM does affect the “what” and “how” of online course design, which is the guide for the “what” and “how” of online teaching. That is the whole selling point, and it would be useless as a rubric if it didn’t help improve the course to do this.

So, yes, specific QM review standards require certain specific course structures that do dictate how the course is taught. The QM rubric is biased towards certain structures and design methodologies. If you are wanting to teach a course that works within that structure (and there are many, many courses that do), QM will be a good tool to help you with that structure. However, if you start getting into other structures of ungrading, heutagogy / self-determined learning, aboriginal pedagogy, etc, you start losing points fast.

This has kind of been my point all along. Much of the push back against that point dives into other important (but not related to the point) issues such as accreditation, burnout, and alignment. Sometimes people got so insulting that I had to leave the conversation and temporarily block and shut it out of my timeline.

QM evangelists are incredibly enthusiastic about their rubric. As an instructional designer, I am taught to question everything – even the things I like. Definitely not a good combination for conversation it seems.

But I want to go back and elaborate on the two points that I tried to stick to all along.

The first point was a response to how some implied that QM is without bias… that it is designed for all courses, and because of this, if some institutions end up using it as a template to force compliance, that is their bias and not QM’s fault. And I get it – when you create something and people misuse it (which no one is denying happens), it can be frustrating to feel like you are being blamed for others’ misuse. But I think if we take a close look at how QM is not unbiased, and how there are politics and bias built into every choice they made, we can see how that has the effect of influencing how it is misused at institutions.

QM is a system based on standardization that was created by specific choices that were chosen through bias, in a method that biases it towards instructor-centered standardized implementation by institutional systems that are known to prefer standardization.

I know that sounds like a criticism, but there are a couple of things to first point out:

  • Bias is not automatically good or bad. Some of the bias in QM I agree with on different levels. Bias is a personal opinion or organizational position, therefore choosing one option over another always brings in bias. There is no such thing as bias-free tech design.
  • The rubric in QM is Ed-Tech. All rubrics are Ed-Tech. That makes QM an organization that sells Ed-Tech, or an Ed-Tech organization.  This is not saying that Ed-Tech is their “focus” or anything like that.

Most people understand that QM was designed to be flexible. But even those QM design choices had bias in them. All design choices have bias, politics, and context. And when the choices of an entity such as QM are packaged up and sold to an institution, they are not being sold to a blank slate with no context. The interaction of the two causes very predictable results, regardless of what the intent was.

For example, the QM rubric adds up to 100 points. That was a biased choice right there. Why 100? Well, we are all used to it, so it makes it easy to understand. But it also connects to a standardized system that we were mostly all a part of growing up, one that didn’t have a lot of flexibility. If we wanted to score higher, we had to conform. When people see a rubric that adds up to 100, that is what many think of first. Choosing a point total that connects with pre-existing systems that also utilize that highest score is a choice that brings in all of the bias, politics, and assumptions that are typically associated with that number elsewhere.

Also, the ideal minimum score is 85. Again, that is a biased choice. Why not 82, or 91, or 77? Because 85 is the low end of the usual “just above average” score (a “B”) that many are used to. Again, this connects to a standardized system we are used to, and reminds people of the scores they got in grade school and college.

In fact, even using points in general, instead of check marks or something else, was another biased choice that QM made. People see points and they think of how those need to add up to the highest number. This mindset affects people even when they get a good number: think of how many students get an 88 and try to find ways to bump it up to a 90. This is another systemic issue that many people equate to “follow the rules, get the most points.”

Then, when you look at how long some of the explanations of the QM standards are, again that was a choice and it had bias. But when combined with an institutional system that keeps its faculty and staff very busy, it creates a desire to move through the long, complicated system as fast as possible to just get it done. This creates people that game the system, and one of the best ways to hack a complex process that repeats itself each course is to create a template and fill it out.

While templates can be a helpful starting place for many (but not everyone), institutional tendency is to do what institutions do: turn templates into standards for all.

This is all predictable human behavior that QM really should consider when creating its rubric. I see it in my students all the time – even though I tell them that there is flexibility to be creative and do things their way, most of them still come back to mimicking the example and giving me a standardized project.

You can see it all up and down the QM rubric – each point on the rubric is a biased choice. Which is not to say that they are all bad, it’s just that they are not neutral (or even free from political choices). Just some specific examples:

  • General Standard 3.1 is based on “measure the achievement” – which is great in many classes, but there are many forms of ungrading and heutagogy and other concepts that don’t measure achievement. Some forms of grading don’t measure achievement, either.
  • General Standard 3.2 refers to a grading policy that doesn’t work in all design methodologies. In theory, you could make your grading policy about ungrading, but in reality I have heard that this approach rarely passes this standard.
  • General Standard 3.3 is based on “criteria,” which is a popular paradigm for grading, but not compatible with all of them.
  • General Standard 3.4 is hard to grade at all in self-determined learning when the students themselves have to come up with their own assessments (and yes, I did have a class like that when I was a sophomore in college – at a community college, actually). Well, I say hard – you really can’t, depending on how far you dive into self-determined learning.
  • General Standard 3.5 seems like it would fit a self-determined heutagogical framework nicely… in theory. In reality, it’s hard to get any points here because of the reasons covered in 3.4.

Again, the point being that it is harder for some approaches like heutagogy, ungrading, and connectivism to pass. If I had time and space, I would probably need to go into what all of those concepts really mean. But please keep in mind that these methods are not “design as you go” or “failing to plan.” These are all well-researched concepts that don’t always have content, assessment, activities, and objectives in a traditional sense.

Many of the problems still come back to the combination of a graded rubric being utilized by a large institutional system. A heutagogical course might pass the QM rubric with, say, an 87 – but the institution is going to look at it as a worse course than a traditional instructivist course that scores a 98. And we all know that institutional interest in the form of support, budgets, praise, awards, etc will go towards the courses that they deem as “better” – this is all a predictable outcome from choosing to use a 100 point scale.

There are many other aspects to consider, and this post is getting too long. A couple more rabbit trails will have to wait for another time.

So, yes, I do realize that QM has helped in many places where resources for training and instructional designers are low. But QM is a rubric, and rubrics always fall apart the more you try to make them become a one-size-fits-all solution. Instead of trying to expand the one Higher Ed QM rubric to fit all kinds of design methodologies, I think it might be better to look at what kind of classes it doesn’t work with and move to create a part of the QM process that identifies those courses and moves them on to another system.

Emergency Educational Measures in a Time of Pandemics, Anti-Racism Protests, and Political Chaos: Is This All Going… Anywhere?

A few months ago, a former co-worker of mine, Erika Beljaars-Harris, asked me to come talk to the university she now works at – RMIT Australia. They were interested in hearing about the current problems U.S. education is facing and where it is going. While I am not a futurist by training, well, neither are half the people that put the term in their bio. But I think there are at least three obvious trends that are having – and will continue to have – immediate impacts on education. I also think it is pretty obvious where they are going if you put aside what you would like to see happening and be honest about how difficult it is going to continue to get.

So I came up with the title for the presentation (that I re-used for this post) and a quick 10-minute intro to frame the discussion. I thought I would write this post to capture the basic thoughts I shared in that intro. I covered the three trends I mentioned in the title and where I think we (in the U.S., but many other countries are facing similar issues as well) are going with those, both good and bad.

Pandemic Responses

When I first wrote up the description, it was a month ago and few were really talking about the Fall seriously. Now a lot of places have started talking, and even releasing some plans. These plans, for the most part, say nothing concrete other than “back to campus! (we hope, but won’t commit to fully)” I made a joke about how solid those plans feel.

Campuses need students, faculty, and staff back on campus to generate revenue. So they are going to wait as long as they can to make any decisions other than going back. But coronavirus numbers are growing. Politicians and leaders are not doing enough to reverse that. A small handful of schools have put out some good plans for the Fall (thank you to those that have), but most have released nothing, or even worse, confusing, complicated plans that try too hard to get as many students on campus as possible.

So where is this going? Classes are going to be online at some point in the Fall – or even before the Fall. It will probably depend on how much death and sickness we can accept before making the decision. But make no mistake: we will be back online sooner rather than later. And I realize that is not necessarily a good thing for all – but it will happen regardless. As much as you will be told to get ready for on campus or hybrid options, I would put more effort towards getting your classes and yourself ready for online. Make plans to keep yourself and your students as safe as possible. Don’t wait for guidance from leaders, and if it does come, realize you will probably have to go above and beyond anything they plan for.

Anti-Racism and Education

I am not totally sure I used correct English in the title – but the main idea is that we have witnessed sustained protests against systemic racism in several ways following the murder of George Floyd, and many are noting that they seem to be having some effects. I suddenly got a bunch of emails on June 18th telling me about Juneteenth celebrations or events happening the next day – as if a bunch of companies suddenly realized they had better do something. Sure, many people will probably lose interest in social justice and equality once the news cycle moves on, but I think we have finally crossed a line to where you need to decide to be anti-racist or not. The lines have been drawn.

So where is this going? Don’t look at “diversity” as a thing that you do for one special day in class, or leave to the People of Color in your life. Don’t look at it as something your Black colleague can fill you in on whenever you remember to ask. Don’t ask your Indigenous student to speak for all Indigenous Peoples during a class discussion. You will need to start doing the work to bring marginalized voices into your content from the beginning of class to the end. It will no longer be acceptable to lean on “traditional experts” (aka white males) to make up the main focus of your content. You will need to do the work of searching and learning for yourself who needs to be added to your lessons. You will also need to do the work yourself to examine your own biases and prejudices and how they affect how you teach and interact with students, colleagues, and co-workers. Critical educational practices / pedagogy will need to be practiced by all, not just those that study it as an academic field.

Update: an important aspect of this work is also listening:

Political Chaos

So even beyond academic leaders, it seems that political leaders at all levels and in all parties are also falling apart in their response to COVID. Some have done well, but most have not. From reacting too late, to re-opening too soon, to realizing they opened too soon but taking too long to reverse course, it has been a failure of partisan politics across the board. I am not getting into both-sides-ism here – one party clearly wanted to follow Science and the other did not (even though those that wanted to follow Science could still have done much better in most cases). Wearing masks, social distancing, supporting the economy after you ordered businesses to shut down, and so on should NOT ever have been political issues to disagree over. And I know that this problem comes from the top and affects the ability of all levels underneath to make decisions. But more leaders at state and local levels should have decided to do the right things regardless of what was happening at the national level. Many did, and that helped the people living in those areas. But many did not. And instead of getting over the worst of it by now as we should be, we are gearing up for the worst of it still to come.

So where is this going? I think we have to face the hard truth that we are on our own for now. You will have to find whatever group you can that will do the right thing and band together to do what you can. Whether a small group in the city, at the department level, a subsection of your department, an ongoing Twitter DM group, a Slack channel, a group Facebook Message, whatever it might be – find some people to connect with and band together to help. Whether it is emotional help online or in-person help in your community (socially distanced, of course), find those that are willing to do the right thing and form networks. Try to reach out at as big a level as you can, but don’t count on leaders to get things done. Some leaders may get some things done and that is great – connect with them. If not, find the people that will and band together with them to make it through.


Crash Course in Online Teaching 1: Starting With Theory (Wait, Wait – Give It a Chance! Really!)

With several universities now coming to grips with the fact that they will still be online in the Summer (and most likely the Fall), many are turning to the question of how to quickly train their entire faculty in online teaching. There really isn’t one ideal way to do this, but I want to offer up the way I would do it if given the chance to design a Crash Course in Online Learning (insert your Budgie/Metallica song parody here).

I would personally start off with a very basic intro to… stay with me here… learning theory. Wait – don’t click away just yet. Many would balk at the idea of starting with theory and not practical tool/building skills, while learning theorists would cry foul at thinking that a basic intro to learning theory is even possible. But I have found that a few basic concepts taught at a very intro level can help faculty not only understand how they have been taught in the past, but also how they can try new ideas and designs they may have never thought of.

So give it a chance if you are tired of academia and theory, or give me some wide wiggle room for scaffolding if you are deep into learning theory. I’m going to combine and give examples that will make theory easy to digest, but also blur the lines of complexity that exist. Please keep that in mind as we go forward.

First of all, I will point out that I have published and taught this before. I will give the basics below, but if you have 30 minutes to an hour to dig in, there are slightly more explanatory resources out there. The first is the paper that I published called “From Instructivism to Connectivism: Theoretical Underpinnings of MOOCs.” Yes, it is about MOOCs, but the ideas can be applied to any course. Plus, it comes with a worksheet that you can use to plan your course. The second is a video archive of a training session I led last year on the paper. If you prefer video to blog posts or papers, this might work for you:

So, for those that are wanting the summary version, here we go….

Overall Power Dynamics

Out of all the different ways to approach learning theory, I like to focus on power dynamics first when it comes to designing a course. So think about the overall power dynamic you want to see happening in your course. This can change from week to week, but in general most courses stick to one for the most part. The question is: who determines what learners will learn in your course, and who directs how it is learned? There are many ways to look at this, but I like to focus on three different -isms:

  • Instructivism: The course is controlled by an expert instructor, determined and directed by that expert.
  • Constructivism: Constructed self-discovery, sometimes determined by the expert, but usually directed by the learner.
  • Connectivism: Learner-determined and directed by the learner, enhanced by networking (connecting) with others (including other learners, the instructor, online resources, etc.).

Yes, these can mix, and you can move between different ones. The main thing to think about is who is determining the overall knowledge and/or skills to be learned, who is directing how the learners will learn that knowledge and those skills, and then how they will prove they learned them.

Again, there are many different ways of looking at these terms, many other -isms you can use, and a lot of ambiguity that I am glossing over here. These three -isms (and how I described them) are just a good place to start for those that are new. Generally you pick one, but also think how elements of others might also be utilized in your course.

Methodology of Course Design

Course design methodology often overlaps with power structures. However, within various power structures, there is still room for different design methodologies. For example, even in connectivism it is still possible to design a course that focuses on transmission of knowledge from experts, even if those experts are not always the instructor.

In this stage, you are thinking about where knowledge and/or skills training comes from, not just who controls the overall power dynamics. Again, there are many, many different ways to look at this. I want to start with three popular -agogies:

  • Pedagogy: many people use this as a catch-all term for all teaching design, but in a traditional sense it has meant, for several centuries, focusing on knowledge transfer from an expert (Update: please note there are many different ways of looking at the term that have gained traction over the past 80-100 years that I hope to cover in upcoming parts – see viewing pedagogy as a philosophy rather than a theory in fields like critical pedagogy).
  • Andragogy: learners draw upon their experience to connect what they already know with new content / knowledge / skills / etc (some have advocated to use the term “Anthropagogy” in place of Andragogy to be more inclusive, but I use the more common term here).
  • Heutagogy: learners focus on learning how to learn about a particular topic rather than just what to learn. Heutagogy is often seen as a critical response to the limitations of other -agogies.

Typically, you see pedagogy matching with instructivism, andragogy matching with constructivism, and heutagogy matching with connectivism. But other combinations are possible, such as constructivist heutagogy or connectivist pedagogy. There is a chart on page 94 of the article above that explains the nine different combinations and gives examples.

The main idea is that you choose which combination of the two you want for your class most often (even if it changes from time to time). I tend to advocate for a connectivist heutagogical approach most often, as that is what more and more people need in the world today. Rather than memorizing expert facts as determined by the instructor, we need more learners that know how to grow and learn about a topic by connecting with the people and resources that can teach them what they need to know.

At this point, you will start considering what activities and assignments you will be using in your course. It is also good to have some well-written and aligned goals, objectives, competencies, or other standards. I will cover that as a separate post next, but more than likely, you are transitioning a course that already exists on campus into an online course. So I will continue with the theory first, but keep in mind that even if you already have goals and objectives, it would be a good idea to review them after you work through learning theory.

Types of Interaction

Of course, your class will have all types of interaction. However, I have found that once people jump into creating activities, assignments, and content first, they leave out interaction until after the bulk of the course activities have been created. At that point, interaction becomes an afterthought or an add-on to what has already been created. Which is not ideal.

Thinking through the types of communication that can happen in a course is a good way to proactively plan out different ways to foster interaction as you create content and activities. Most of us think of different types of communication like student-to-student, teacher-to-student, student-to-content, etc. There are already 12 types that have been identified in the literature, but there could be up to 20 emerging. I gave my run-down of communication types that currently exist and how they might change in the future here:

Updating Types of Interactions in Online Education to Reflect AI and Systemic Influence

You will probably want to pick several types of interaction for different parts or times in your course. Again, if you don’t plan for it from the beginning, it may never make it into your goals, objectives, or lesson plans. However, please keep in mind that you can come back and change/add/remove types throughout the design process.

Also, make sure to match the different types of interaction with the methodology and power structure you selected earlier, at least as you initially see them working out in your course. If you don’t have a good match for your previous choices, then you probably need to consider adding some appropriate interaction types.

Communicative Actions

The final theoretical part to think about will probably be something that you consider now, but come back to once you have an idea of what activities you want in your course. But I will cover it here since it is also in the article above, and it helps to think about it from the beginning as well. Once you know the power structure, methodology, and types of interaction you want, you will need to think through the form that various communication acts will take in your course.

There are many different theories of communication – one that I have found works well for instructors is Learning and Teaching as Communicative Actions (LTCA) theory (based on the work of Jurgen Habermas, but created mainly by one of my doctoral committee members, Dr. Scott Warren – full disclosure). Current LTCA theory proposes four types of communicative actions:

  • Normative communicative actions: communication of knowledge that
    is based on past experiences (for example, class instructions that
    explain student learning expectations).
  • Strategic communicative actions: communication through textbooks,
    lectures, and other methods via transmission to the learner (probably
    the most utilized educational communicative actions).
  • Constative communicative actions: communication through
    discourses, debates, and arguments intended to allow learners to make
    claims and counterclaims (utilizing social constructivism and /or
    connectivism).
  • Dramaturgical communicative actions: communication for purposes
    of expression (reflecting or creating artifacts individually or as a group
    to demonstrate knowledge or skills gained).

As you can see, you will most likely need to mix these during the class – even within each lesson. The goal in this part is not to pick one or two, but to think through how you communicate what is happening in your course. Think through the activities you will have in your course, and then match those with at least one communicative action and power dynamic/methodological combination.

Pulling It All Together

So that is really it for a quick run through some of the basics of theory that can help you begin to design an online course. Like I have said, there are many other theories than those covered here, and deeper/more complex ways of looking at the ones that were covered. This is meant to be a quick guide to just get started, whether you are designing a new online course from scratch or converting an existing on-campus course to an online version. If you looked at the article, you saw that there is a one-page worksheet at the end to help you work through all of these theories in a fairly quick manner. I have also created a Word Doc version, a Google Doc version, and a PDF form version that you can use to fill out and use as you like.

In parts 2 and 3, I want to go back to some topics I have covered before – but for now, here are links to past posts that cover those basics if you can’t wait:

Goals, Objectives, and Competencies

An Emergency Guide (of sorts) to Getting This Week’s Class Online in About an Hour (or so)

Update: I wasn’t clear enough that this is a basic beginner’s way to look at the terms and ideas that are used in learning theory. This will continue to go deeper as I look at other areas of theory in future posts. Some people are not happy that I avoided using the term “Critical Pedagogy” anywhere in the article. I apologize for that. My main thought was that Critical Pedagogy is often classified as an educational philosophy because it puts theory into action, and therefore it would be better to cover it in practical areas like formative evaluation, writing objectives, creating content, etc. Examining power dynamics, who controls communication, and what forms communication takes is one of the foundations of being critical about education, so it is still the foundation of everything in this post.

A Template, a Course, and an OER for an Emergency Switch to Online

So the last few weeks have been… something. Many of us found ourselves in the rush to get entire institutions online, often with incredibly limited resources to do so. I’ve been in the thick of this as well. Recently I shared some thoughts about institutions going online, as well as an emergency guide to taking a week of a class online quickly. I would like to add some more resources to the list that we have been developing since those posts.

First of all, I would like to repeat what many have said (and what I tried to emphasize in that first post): take care of yourself, your family, and those around you first. Don’t expect perfection from yourself. Practice self-care as much as possible (I know that is easier said than done). Then make sure to take care of your students as well. Communicate with them as much as possible, be flexible, remember that many aspects of their lives have been suddenly upended, and above all, make sure to be a voice of care in these times.

I also know that at some point, you will be expected to put your course online and teach something, whether you think it is a good idea or not. So for those that are at that stage, here are some more resources to help.

First of all, I am working with some other educators to put together a free course called Pivoting to Online Teaching: Research and Practitioner Perspectives (I didn’t really like the word “pivot,” but I was overruled). It is a course that you can take for free from edX, but for those that don’t want to register, we have been placing all of the content on an alternative website that requires no sign-up. Lessons are being created in H5P (remixable) and traditional html format. Archives of past events are also being stored here as well. We are halfway through Week 1, so plenty of time to join us.

As part of that course, I created a module template for an emergency switch to online. This is basically a series of pages that work together as a module that you can copy and modify to quickly create course content. It tends to follow many of the concepts we are promoting in the class (Community of Inquiry, ungrading, etc.), but it can also easily be modified to fit other concepts as well. I basically went through my earlier post “An Emergency Guide (of sorts) to Getting This Week’s Class Online in About an Hour (or so)” and followed it in making a Geology module. Then I added some notes in red to talk about options and things you should think about if you are new to this. You can find the Canvas or IMS Common Cartridge version in the Canvas Commons that can be imported into Canvas, or downloaded and imported into systems that support IMS. However, since there are also other systems that don’t use either of these formats, I also made a Google Docs version as well as a Microsoft Word version for download.

And finally, the OER – our book Creating Online Learning Experiences: A Brief Guide to Online Courses, from Small and Private to Massive and Open is still available through Mavs OpenPress in Pressbooks (with Hypothes.is enabled for comments as well). I want to highlight a few of the chapters:

Of course, I like the whole book, so it was hard to pick just a few chapters, but those are the ones that would probably help those getting online quickly. When you get more caught up, I would also suggest the Basic Philosophies chapter as one to help guide you as you think through many underlying aspects of teaching online.