Using Learning Analytics to Predict Cheating Has Been Going on for Longer Than You Think

Hopefully by now you have heard about the Dartmouth Medical School Cheating Scandal, where Dartmouth College officials used questionable methods to “detect” cheating in remote exams. At the heart of the matter is how College officials used click-stream data to “catch” so-called “cheaters.” Invasive surveillance was used to track students’ activity during the exams, officials used data they did not really understand to make accusations, and then students were pressured to respond quickly to the accusations without much access to the “proof.” Almost half of those accused (7 of 17, or 41%) have already had their cases dismissed (a.k.a. they were falsely accused – why is this not a criminal act?). Out of the remaining 10, 9 pleaded guilty, but 6 of those are now trying to appeal that decision because they feel they were forced to plead guilty. FYI – that is 76% (!) who are claiming they were falsely accused. Only one of those six wanted to be named – the other five are afraid of reprisals from the College if they speak up.

That is intense. Something is deeply wrong with all of that.

The frustrating thing about all of this is that plenty of people have been warning that this is the likely – maybe even inevitable – outcome of Learning Analytics research that looks to detect cheating from the data. Of course, this particular area of focus is not a major aim of Learning Analytics in general, but several such studies have been published through the years. I wanted to take a look at a few that represent the common themes.

The first study is a kind of pre-Learning Analytics paper from 2006 called “Detecting cheats in online student assessments using Data Mining.” Learning Analytics as a field is usually traced back to about 2011, but various aspects of it existed before that. You can even go back to the 1990s – Richard A. Schwier describes the concept of “tracking navigation in multimedia” in the 1995 second edition of Instructional Technology: Past, Present, and Future (p. 124, Gary J. Anglin, editor). Schwier really goes beyond tracking navigation into foreseeing what we now call Learning Analytics. So all of that to say: tracking students’ digital activity has a loooong history.

But I start with this paper because it contains some of the earliest ways of looking at modern data. The concerning thing with this study is that the overall goal is to predict which students are most likely to be cheating based on demographics and student perceptions. Yes – not only do they look at age, gender, and employment, but also a learner’s personality, social activities, and perceptions (did they think the professor was involved or indifferent? Did they find the test “fair” or not? etc).

You can see from the chart on p. 207 that males with lower GPAs are mostly marked as cheating, while females with higher GPAs are mostly marked as not cheating. Since race is not even considered in the analysis, this method could easily compound existing systemic discrimination into incredibly racist oppression.
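
To make concrete what this kind of “data mining” amounts to, here is a deliberately oversimplified sketch – my own illustration, not the paper’s actual model – of training a classifier on demographics and survey answers. Every feature, label, and number below is invented.

```python
# A toy "predict the cheaters" pipeline of the kind the 2006 paper gestures at.
# NOT the paper's model: the features, labels, and data here are all made up.
from sklearn.tree import DecisionTreeClassifier

# Rows: [age, gender (0 = female, 1 = male), GPA, "thought the exam was unfair" (0/1)]
features = [
    [19, 1, 2.4, 1],
    [22, 0, 3.6, 0],
    [20, 1, 2.9, 1],
    [21, 0, 3.8, 0],
]
labels = [1, 0, 1, 0]  # 1 = "cheated," according to whoever labeled the training data

model = DecisionTreeClassifier().fit(features, labels)

# The model now flags anyone who "looks like" the students labeled as cheaters.
print(model.predict([[20, 1, 2.5, 1]]))  # -> [1]
```

Whatever bias went into those training labels and features comes right back out as “predictions” – which is exactly the problem.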

Even more problematic is the “next five steps to data mining databases,” with one step recommending the collection of “responses of online assessments, surveys and historical information to detect cheats in online exams.” This includes the clarification that:

  • “information from students must be collected from the historical data files and surveys” (hope you didn’t have a bad day in the past)
  • “at the end of each exam the student will be is asked for feedback about exam, and also about the professor and examination conditions” (hope you have a wonderful attitude about the test and professor)
  • “professor will fill respective online form” (hope the professor likes you and isn’t racist, sexist, transphobic, etc if any of that would hurt you).

Of course, one might say this is pre-Learning Analytics and the current field is only interested in predicting failure, retention, and other aspects like that. Not quite. Let’s look at the 2019 article “Detecting Academic Misconduct Using Learning Analytics.” The focus in this study is a bit more specific: they seek to use keystroke logging and clickstream data to tell if a student is writing an authentic response or transcribing a pre-written one (which is assumed to only come from contract cheating).

The lit review also shows that this is not the only study digging into this idea – it goes back several years through multiple studies.

While this study does not get to the same Minority Report-level concerns that the last one did, there are still some problematic issues here. First of all is this:

“Keystroke logging allows analysis of the fluency and flow of writing, the length and frequency of pauses, and patterns of revision behaviour. Using these data, it is possible to draw conclusions about students’ underlying cognitive processes.”

I really need to carve out some time to write about how you can’t use clickstream data of any kind to detect cognitive processes in any way, shape or form. Most people that read this blog know why this is true, so I won’t take the time now. But the Learning Analytics literature is full of people that think they can detect cognitive activities, processes, or presence through clickstream data… and that is just not possible.

The paper does address the difficulties in using keystroke data to analyze writing, but proposes analysis of clickstream data as a much better alternative. I’m not really convinced by the arguments they present – but the gist is that they are looking to detect revision behaviors, because authentic writing involves pauses and deletions.
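
To make that approach concrete, here is a toy sketch of the kind of features this line of research pulls out of a keystroke log – long pauses, deletions, and bursts of uninterrupted typing. This is my own illustration of the general idea, not the paper’s method, and the log, thresholds, and feature names are all invented.

```python
# Toy keystroke log: (seconds since start, key pressed). Entirely made up.
keystrokes = [
    (0.0, "T"), (0.2, "h"), (0.4, "e"), (0.6, " "),
    (4.8, "c"), (5.0, "a"), (5.2, "t"), (5.4, "BACKSPACE"),
    (5.6, "BACKSPACE"), (9.9, "d"), (10.1, "o"), (10.3, "g"),
]

PAUSE_THRESHOLD = 2.0  # seconds of inactivity counted as a "pause" (arbitrary)

def writing_features(log, pause_threshold=PAUSE_THRESHOLD):
    """Summarize pauses, deletions, and typing bursts from a keystroke log."""
    gaps = [b[0] - a[0] for a, b in zip(log, log[1:])]
    pauses = [g for g in gaps if g >= pause_threshold]
    deletions = sum(1 for _, key in log if key == "BACKSPACE")

    # A "burst" is a run of keystrokes with no pause in between.
    bursts, current = [], 1
    for gap in gaps:
        if gap >= pause_threshold:
            bursts.append(current)
            current = 1
        else:
            current += 1
    bursts.append(current)

    return {
        "pause_count": len(pauses),
        "longest_pause": round(max(pauses, default=0.0), 1),
        "deletions": deletions,
        "longest_burst": max(bursts),
    }

print(writing_features(keystrokes))
# {'pause_count': 2, 'longest_pause': 4.3, 'deletions': 2, 'longest_burst': 5}
```

The classification step then amounts to deciding that one profile of these numbers is “authentic” writing (full of pauses and deletions) while another profile is transcription.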

Except that is not really true for everyone. People that write a lot (like, say, by blogging) can get to a place where they can write a lot without taking many pauses. Or, if they really do know the material, they might not need to pause as much. On the other hand, the paper assumes that transcription of an existing document is a mostly smooth process. I know it is for some, but it is something that takes me a while.

In other words, this study relies on averages and clusters of writing activities (words added/deleted, bursts of writing activity, etc.) to classify your writing as original or copied. Which may work for the average, but what about students with disabilities that affect how they write? What about people that just work differently than the average? What about people from various cultures that approach writing in a different way, or even those that have to translate what they want to write into English first and then write it down?

Not everyone fits so neatly into the clusters.

Of course, this study had a small sample size. Additionally, while they did collect demographic data and had students take self-regulated learning surveys, they didn’t use any of that in the study. The SRL data would seem to be a significant aspect to analyze here. They also could have at least given some details on the students who didn’t speak English as a primary language.

Now, of course, writing out essay exam answers is not common in all disciplines, and even when it is, many instructors will encourage learners to write out answers first and then copy them into the test. So these results may not concern many people. What about more common test types?

The last article to look at is “Identifying and characterizing students suspected of academic dishonesty in SPOCs for credit through learning analytics” from 2020. There are plenty of other studies to look at, but this post is already getting long. SPOC here means “Small Private Online Course”… a.k.a. “a regular online course.” The basic gist is that they are clustering students by how close their answers are to each other and how close their submission times are. If they get the exact same answers (including choosing the same wrong choice) and turn in their test at about the same time, they are considered “suspect of academic dishonesty.” It should also be pointed out that the Lit Review here shows they appear to be the first (or only) people looking into this in the Learning Analytics realm.
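
To make that gist concrete, here is a toy version of that kind of flagging logic – not the authors’ actual clustering method, just an illustration with made-up students and made-up thresholds.

```python
# Flag pairs of students whose answers and submission times are "too close."
# The students, answers, times, and thresholds below are all invented.
from itertools import combinations

submissions = {
    "student_a": {"answers": ["B", "C", "A", "D", "B"], "time": 47},   # minutes after exam opened
    "student_b": {"answers": ["B", "C", "A", "D", "B"], "time": 49},
    "student_c": {"answers": ["B", "D", "A", "C", "B"], "time": 112},
}

ANSWER_MATCH_THRESHOLD = 0.9   # fraction of identical answers (right AND wrong)
TIME_GAP_THRESHOLD = 10        # minutes between submissions

def flag_similar_pairs(subs):
    """Return pairs whose answer overlap and submission times clear both thresholds."""
    flagged = []
    for (name1, s1), (name2, s2) in combinations(subs.items(), 2):
        matches = sum(a == b for a, b in zip(s1["answers"], s2["answers"]))
        similarity = matches / len(s1["answers"])
        time_gap = abs(s1["time"] - s2["time"])
        if similarity >= ANSWER_MATCH_THRESHOLD and time_gap <= TIME_GAP_THRESHOLD:
            flagged.append((name1, name2, similarity, time_gap))
    return flagged

print(flag_similar_pairs(submissions))
# [('student_a', 'student_b', 1.0, 2)]
```

Notice that everything hangs on two thresholds that somebody simply picked – keep that in mind for the rest of this section.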

The researchers are basically looking for students that meet together and give each other answers to the test. Which, yes – it is suspicious if you see students turn in all the same answers at about the same time and get the same grade. Which is why most students make sure to change up a few answers, as well as space out submissions. I don’t know if the authors of this study realized they probably missed most cheaters and just caught the ones not trying that hard.

Or… let me propose something else here. All students are trying to get the right answers. So there are going to be similarities. Sometimes a lot of students getting the same wrong answer on a question is seen as a problem to fix on the teaching side (it could have been taught wrong). Plus, students can have similar schedules – working the same jobs, taking the same other classes that meet in the morning, etc. It is possible that out of the 15 or so they flagged as “suspect,” 1 or 2 or even 3 just happened to get the same questions wrong and submit at about the same time as the others. They just had bad luck.

I’m not saying that happened to all, but look: you do have this funnel effect with tests like these. All of your students are trying to get the same correct answers and finish before the same deadline. So it’s quite possible there will be overlap that is purely coincidental. Not for all, but isn’t it at least worth a critical examination if even a small number of students could get hurt by coincidentally turning in their test at the same time as others?

(This also makes a good case for ungrading, authentic assessment, etc.)

Of course, the “suspected” part gets dropped by the end of the paper: “We have applied this method in a for credit course taught in Selene Unicauca platform and found that 17% of the students have performed academic dishonest actions, based on current conservative thresholds.” How did they get from “suspected” to “have performed?” Did they talk to the students? Not really. They looked at five students and felt that there was no way their numbers could be anything but academic dishonesty. Then they talked to the instructor and found that three students had complained about low grades. The instructor looked at their tests, found they had the exact same wrong answers, and… case closed.

This is why I keep saying that Learning Analytics research projects should be required to have an instructional designer or learning research expert on the team. I can say after reviewing course results for decades that it is actually common for students to get the same wrong answers and be upset about it because they were taught wrong. Instructors and Instructional Designers do make mistakes, so always find out what is going on. It’s also possible that there was a conversation weeks ago where one student with the wrong information spread that information to several other students when discussing the class. It happens.

But this is what happens when you don’t investigate fully and assume the data is all you need. Throwing in a side of assuming that cheaters act a certain way certainly goes a long way as well. So you can see a direct line from assumptions made about the personality and demographics of who cheaters are, to using clickstream data to claim to know what is going on in the brain, to assuming the data is all you need… all the way to the Dartmouth Medical School scandal, where the false accusation rate currently sits somewhere between 41% and 76%.

Video Content or Audio-Only Content For Online Courses: Which is Better?

Like many of you, I saw this Tweet about audio-only lectures making the rounds on Twitter:

https://twitter.com/sivavaid/status/1389592396820795397

Now, of course, many questioned “why lectures?” (which is a good question to ask), but the main discussion seemed to focus on the content of courses more than lectures specifically. Video content (often micro-content) is common in online courses. There were many points raised about accessibility (both of videos and audio-only lectures). Many seem to feel strongly that you should do either video content or audio-only content. My main thought was: instead of asking “either/or”… why not think “both/and”?

From certain points of view, audio-only content addresses some accessibility issues many rarely consider. When creating video content, the speaker will sometimes rely on visual-only cues and images without much narration, leaving those that are listening with gaps in their understanding. So while it is easy to say “if you don’t want video, then just play the video in the background and don’t watch,” sometimes the audio portion of a video leaves out key pieces of information. This is usually because when the content gets to a visual part, the speaker often assumes everyone playing the video can see it.

“Look at what the red line does here…”

“When you see this, what do you think of?…”

And so on. People that record podcasts often know they have to describe any visuals they want to use so people listening know what they are talking about. For accessibility purposes, we really should be doing this in videos as well. Not to mention that it helps the information make more sense for everyone, regardless of disability.

There are other advantages to audio-only content as well, such as being able to download the audio file to various devices and take it with you wherever you go. Some devices do this with video files – but how often do we offer videos for download? And what if someone has limited access or storage capacity for massive video files? Audio-only mp3 files work for a wider variety of people on the technical level.

On the other hand, there are times when video is preferred. The deaf or hard of hearing often come to mind. Additionally, some people think that the focus that video requires helps them understand better. Video can also help increase teacher presence. Plus, video content is not the same as a Zoom call (or even a video lecture broadcast live), so it’s not really fair to throw both in the same bucket.

I would also point out that just because learners like audio-only one semester, that doesn’t mean the next semester of learners will. And I would guarantee that there are those in Vaidhyanathan’s course that didn’t really like the audio-only, but didn’t want to speak up and be the outlier.

Remember: Outliers ALWAYS exist in your courses. Never underestimate the silencing power of consensus.

But again, I don’t think it takes much extra time to give learners the option to choose for themselves what they want.

First of all, every video you post in a course should be transcribed and closed-captioned as a ground rule – not only for accessibility, but also for Universal Design for Learning. But I also know that this is an ideal that is often not supported financially at many institutions. For the sake of this article, I am not going to repeat the need to be proactive in making courses accessible.

So with that in mind, the main step that you will need to add into your course design process is to think through your video content (which is hopefully focused micro-content) and add in descriptions of any visual-only content. Don’t forget intro, transition, and ending graphics – speak out everything that will be on screen.

Then, while you are editing or finalizing the video, export to mp3 in addition to your preferred video format. Or use a tool that can extract the audio from the video (this is also helpful if you already have existing videos with no visual-only aspects). Offer that mp3 as a download on the page with the video (or even create a podcast with it). Now your students have the option to choose video or audio-only (or to switch as they like).
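
If you would rather script that extraction step than dig through an editor’s export settings, here is a minimal sketch of batch-converting videos to mp3 – assuming ffmpeg is installed and on your PATH, and with folder names that are just placeholders.

```python
# Extract an mp3 from every mp4 in a folder by calling ffmpeg.
# Assumes ffmpeg is installed; the folder names are placeholders.
import subprocess
from pathlib import Path

video_dir = Path("course_videos")   # your finished lecture videos
audio_dir = Path("course_audio")
audio_dir.mkdir(exist_ok=True)

for video in video_dir.glob("*.mp4"):
    mp3_path = audio_dir / (video.stem + ".mp3")
    # -y overwrites existing files, -vn drops the video stream,
    # -q:a 2 is a reasonable VBR quality for recorded speech.
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(video), "-vn", "-q:a", "2", str(mp3_path)],
        check=True,
    )
    print(f"Extracted {mp3_path}")
```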

Also, once you get the video closed-captioned, take the transcript and spend a few minutes collecting it into paragraphs to make it more readable. Maybe even add the images from the video to the document (you would already have full alt descriptions in the text). Then also put this file on the page with the video as a downloadable file. You could even consider collecting your transcripts into PressBooks and making your own OER. However you want to do it, just make it another option for learners to get the content.

Anyways… the idea here is that students can choose for themselves to watch the video, listen to the audio file, or read the transcript – all in the manner they want to on the device they want.

One of the questions that always comes up here is how to make the video content sound natural. Spontaneous/off-the-cuff recordings can miss material or go down a rabbit-hole. Plus you might forget to describe some visual content. But reading pre-written scripts sounds wooden and boring. One of my co-authors for Creating Online Learning Experiences (Brett Benham) wrote about how to approach this issue in Chapter 10: Creating Quality Videos. You can read more at the link, but the basic idea is to quickly record a spontaneous take on your content and have that transcribed (maybe even by an automatic service to save some money). Then take that transcript, edit out the side-trails, mistakes, and missteps, and use your edited document to record the final video. It will then be your spontaneous voice, but cleaned up where needed and ready for closed-captioning.

To recap the basic points:

  1. Think about which parts of your video content will have visual aspects, and come up with a description for those parts in words.
  2. Record your video content with the visual aspects, but make sure to cover those descriptions you came up with.
  3. Create mp3 files from your videos and add them to the course page with the video embed/link and transcription file.

If you want to go to the next level with this:

  1. Enable downloading of your videos (or store them in a service that allows downloads if that option is not possible in your LMS).
  2. Turn your mp3 files into a podcast so that learners can subscribe and automatically download to devices when you post new files.
  3. Take your transcriptions and re-format them (don’t change any words or add/delete anything) into readable text, along with the visuals from the video. Save this as an accessible PDF and let learners download if they like.
  4. Collect your PDF transcripts into a PressBook, where you can add the audio and video files/links/embeds as well.
  5. Maybe even add some H5P activities to your PressBooks chapters to make them interactive lessons.

Op-Ed: Online Proctoring is Not Essential

After one of my usual Twitter rants about proctoring software, I was asked to turn the rant into an Op-Ed. Elearning Inside liked it enough to publish it:

In a recent op-ed about online proctoring, ProctorU CEO Scott McFarland made some concerning claims about how he feels proctoring online exams is “essential” and “indispensable.” Many were quick to point out their skepticism of the owner of a proctoring company making such a claim.

One important detail that McFarland left out was that the exams or tests themselves are not essential. Not only that, he skipped over some of the largest concerns with proctoring, while also not accurately addressing the research that is happening in this area…

You can read the rest of the article, where I make wild references to assessment gods, 5000% of students cheating, and general debunking of the current “cheating is everywhere” FUD. But the main point is that there is a better way based on solid course design.

QM and the Politics of the “Unbiased” Bias

So it started off innocent enough, with a Tweet about concerns regarding the popular QM rubric for course review:

Different people have voiced concerns with the rubric through the years… usually not saying that it is all bad or anything, but just noting that it presents itself as a “one rubric for all classes” that actually seems to show a bias for instructor-centered courses with pre-determined content and activities. Now, this describes many good classes – don’t get me wrong. But there are other design methodologies and learning theories that produce excellent courses as well.

The responses from the QM defenders to the tweet above (and those like me that agreed with it) were about many things that no one was questioning: how it is driven by research (we know, we have done the research as well), lots of people have worked for decades on it (obviously, but we have worked in the field for decades as well; plus many really bad surveillance tools can say the same, so be careful there), QM is not meant to be used this way (even though we are going by what the rubric says), it is the institution’s fault (we know, but I will address this), people who criticize QM don’t know that much about it (I earned my Applying the QM Rubric (APPQMR) certificate on October 28, 2019 – so if I don’t understand, then whose fault is that? :) ), and so on.

Now technically most of us weren’t “criticizing” QM as much as discussing its limitations. Rubrics are technology tools, and in educational technology we are told not to look at tools as the one savior of education. We are supposed to acknowledge their uses, limitations, and problems. But every time someone wants to discuss the limitations of QM, we get met with a wall of responses bent on proving there are no limitations to QM.

The most common response is that QM does not suggest how instructors teach, what they teach, or what materials they use to teach. It is only about online course design and not teaching. True enough, but in online learning, there isn’t such a clear line between design and teaching. What you design has an effect on what is taught. In fact, many online instructors don’t even call what they do “teaching” in a traditional sense, but prefer to use words like “delivery” or “facilitate” in place of “teaching.” Others will say things like “your instructional design is your teaching.” All of those statements are problematic to some degree, but the point is that your design and teaching are intricately linked in online education.

But isn’t the whole selling point of QM the fact that it improves your course design? How do you improve the design without determining what materials work well or not so well? How do you improve a course without improving assignments, discussions, and other aspects of “what” you teach? How do you improve a course without changing structural options like alignment and objectives – the things that make up “how” you teach?

The truth is, General Standards 3 (Assessment and Measurement), 4 (Instructional Materials), and 5 (Learning Activities and Learner Interaction) of the QM rubric do affect what you teach and what materials you use. They might not tell you to “choose this specific textbook,” but they do grade any textbook, content, activity, or assessment based on certain criteria (which is often a good thing when bad materials are being used). But those three General Standards  – along with General Standard 2 (Learning Objectives (Competencies)) – also affect how you teach. Which, again, can be a good thing when bad ideas are utilized (although the lack of critical pedagogy and abolitionist education in QM still falls short of what I would consider quality for all learners). So we should recognize that QM does affect the “what” and “how” of online course design, which is the guide for the “what” and “how” of online teaching. That is the whole selling point, and it would be useless as a rubric if it didn’t help improve the course to do this.

So, yes, specific QM review standards require certain specific course structures that do dictate how the course is taught. The QM rubric is biased towards certain structures and design methodologies. If you want to teach a course that works within that structure (and there are many, many courses that do), QM will be a good tool to help you with that structure. However, if you start getting into other structures like ungrading, heutagogy / self-determined learning, aboriginal pedagogy, etc., you start losing points fast.

This has kind of been my point all along. Much of the push back against that point dives into other important (but not related to the point) issues such as accreditation, burnout, and alignment. Sometimes people got so insulting that I had to leave the conversation and temporarily block and shut it out of my timeline.

QM evangelists are incredibly enthusiastic about their rubric. As an instructional designer, I am taught to question everything – even the things I like. Definitely not a good combination for conversation it seems.

But I want to go back and elaborate on the two points that I tried to stick to all along.

The first point was a response to how some implied that QM is without bias… that it is designed for all courses, and because of this, if some institutions end up using it as a template to force compliance, that is their bias and not QM’s fault. And I get it – when you create something and people misuse it (which no one is denying happens), it can be frustrating to feel like you are being blamed for other’s misuse. But I think if we take a close look at how QM is not unbiased, and how there are politics and bias built into every choice they made, we can see how that has the effect of influencing how it is misused at institutions.

QM is a system based on standardization, created through specific choices that each carried bias, in a way that biases it towards instructor-centered, standardized implementation by institutional systems that are known to prefer standardization.

I know that sounds like a criticism, but there are a couple of things to first point out:

  • Bias is not automatically good or bad. Some of the bias in QM I agree with on different levels. Bias is a personal opinion or organizational position; therefore, choosing one option over another always brings in bias. There is no such thing as bias-free tech design.
  • The rubric in QM is Ed-Tech. All rubrics are Ed-Tech. That makes QM an organization that sells Ed-Tech, or an Ed-Tech organization.  This is not saying that Ed-Tech is their “focus” or anything like that.

Most people understand that QM was designed to be flexible. But even those QM design choices had bias in them. All design choices have bias, politics, and context. And when the choices of an entity such as QM are packaged up and sold to an institution, they are not being sold to a blank slate with no context. The interaction of the two aspects causes very predictable results, regardless of what the intent was.

For example, the QM rubric adds up to 100 points. That is a biased choice right there. Why 100? Well, we are all used to it, so it makes it easy to understand. But it also connects to a standardized system that most of us grew up in, one that didn’t have a lot of flexibility. If we wanted to score higher, we had to conform. When people see a rubric that adds up to 100, that is what many think of first. Choosing a point total that connects with pre-existing systems that also use 100 as the highest score is a choice that brings in all of the bias, politics, and assumptions typically associated with that number elsewhere.

Also, the ideal minimum score is 85. Again, that is a biased choice. Why not 82, or 91, or 77? Because 85 is the beginning of the usual “just above average” score (a “B”) that many are used to. Again, this connects to a standardized system we are used to, and reminds people of the scores they got in grade school and college.

In fact, even using points in general, instead of check marks or something else, was another biased choice QM made. People see points and they think of how those need to add up to the highest number. This mindset affects people even when they get a good number: think of how many students get an 88 and try to find ways to bump it up to a 90. This is another systemic issue that many people equate to “follow the rules, get the most points.”

Then, when you look at how long some of the explanations of the QM standards are, again that was a choice and it had bias. But when combined with an institutional system that keeps its faculty and staff very busy, it creates a desire to move through the long, complicated system as fast as possible to just get it done. This creates people that game the system, and one of the best ways to hack a complex process that repeats itself each course is to create a template and fill it out.

While templates can be a helpful starting place for many (but not everyone), institutional tendency is to do what institutions do: turn templates into standards for all.

This is all predictable human behavior that QM really should consider when creating its rubric. I see it in my students all the time – even though I tell them that there is flexibility to be creative and do things their way, most of them still come back to mimicking the example and giving me a standardized project.

You can see it all up and down the QM rubric – each point on the rubric is a biased choice. Which is not to say that they are all bad, it’s just that they are not neutral (or even free from political choices). Just some specific examples:

  • General Standard 3.1 is based on “measure the achievement” – which is great in many classes, but there are many forms of ungrading and heutagogy and other concepts that don’t measure achievement. Some forms of grading don’t measure achievement, either.
  • General Standard 3.2 refers to a grading policy that doesn’t work in all design methodologies. In theory, you could make your grading policy about ungrading, but in reality I have heard that this approach rarely passes this standard.
  • General Standard 3.3 is based on “criteria,” which is a popular paradigm for grading, but not compatible with all of them.
  • General Standard 3.4 is hard to grade at all in self-determined learning when the students themselves have to come up with their own assessments (and yes, I did have a class like that when I was a sophomore in college – at a community college, actually). Well, I say hard – you really can’t depending on how far you dive into self-determined learning.
  • General Standard 3.5 seems like it would fit a self-determined heutagogical framework nicely… in theory. In reality, it’s hard to get any points here because of the reasons covered in 3.4.

Again, the point being that it is harder for some approaches like heutagogy, ungrading, and connectivism to pass. If I had time and space, I would probably need to go into what all of those concepts really mean. But please keep in mind that these methods are not “design as you go” or “failing to plan.” These are all well-researched concepts that don’t always have content, assessment, activities, and objectives in a traditional sense.

Many of the problems still come back to the combination of a graded rubric being utilized by a large institutional system. A heutagogical course might pass the QM rubric with, say, an 87 – but the institution is going to look at it as a worse course than a traditional instructivist course that scores a 98. And we all know that institutional interest in the form of support, budgets, praise, awards, etc will go towards the courses that they deem as “better” – this is all a predictable outcome from choosing to use a 100 point scale.

There are many other aspects to consider, and this post is getting too long. A couple more rabbit trails to consider:

[tweet 1299002302137802753 hide_thread=’true’]

So, yes, I do realize that QM has helped in many places where resources for training and instructional designers are low. But QM is a rubric, and rubrics always fall apart the more you try to make them become a one-size-fits-all solution. Instead of trying to expand the one Higher Ed QM rubric to fit all kinds of design methodologies, I think it might be better to look at what kind of classes it doesn’t work with and move to create a part of the QM process that identifies those courses and moves them on to another system.

Why Trust Google’s Algorithms When You Can Teach?

You have heard it said “If you can Google it, why teach it?”, but I want to ask “why trust Google’s algorithms when you can teach?” I Google things all the time, so I am not saying to stop using Google (or your preferred search engine). But is it really safe to let our learners of any age just Google it and let that be it? I want to push back against that idea with some issues to consider.

When we say “Google it,” we need to be clear that we are not really searching a database and getting back unfiltered results from complete data curated by experts (like you would get in, say, a University library), but allowing specific Google algorithms to filter all the web content they can find everywhere for us and present us with content based on their standards. There is often little to nothing guaranteeing those results give us accurate information, or even trying to, say, correct a typo we don’t notice that gets us the wrong information (like adding the word “not” when you don’t realize it). But how often do people think through the real differences between Google and a library when they refer to Google as the modern-day global library?

We have all heard the news stories that found everything from promotion of neo-Nazi ideals to climate change denial within Google search and autocomplete results. Things like that are huge problems in themselves, but the issues I am getting at here are how Google search results are designed to drive clicks by giving people more of what they want to hear, regardless of whether it is factual or not. Even worse, most internet search engines are searching through incomplete data that is already biased and flawed, adding to existing inequalities when they use that data to produce search results. People with more money and power can add more content from their viewpoint to the data pool, and then pay to multiply and promote their content with search engines while diminishing other viewpoints. Incomplete, biased, flawed… all are terms that really don’t do the problem they describe justice here.

When you are an educator of learners at any level – why leave them to navigate through a massive echo-chamber of biased and incomplete search results for any information about your field? Why not work with them to think through the information they find? And when they do need to memorize things (because not every job will let you Google the basics on the spot), why not look into research on how memorization before application helps things like critical thinking and application? To be honest, as many, many others have pointed out, Google has only increased the need to teach rather than “just Google it.” But can we change the societal narrative on this one before it is too late?

Updating Types of Interactions in Online Education to Reflect AI and Systemic Influence

One of the foundational concepts in instructional design and other parts of the field of education is the set of interaction types that occur in the educational process online. In 1989, Michael G. Moore first categorized three types of interaction in education: student-teacher, student-student, and student-content. Then, in 1994, Hillman, Willis, and Gunawardena expanded on this model, adding student-interface interactions. Four years later, Anderson & Garrison (1998) added three more interaction types to account for advances in technology: teacher-teacher, teacher-content, and content-content. Since social constructivist theory did not quite fit into these seven types of interaction, Dron proposed four more types of interaction in 2007: group-content, group-group, learner-group, and teacher-group. Some would argue that “student-student” and “student-content” still cover these newer additions, and to some degree that is true. But it also helps to look at the differences between these various terms as technology has advanced and changed interactions online – so I think the new terms are helpful. More recently, proponents of connectivism have proposed acknowledging patterns of “interactions with and learning from sets of people or objects [which] form yet another mode of interaction” (Wang, Chen, & Anderson, 2014, p. 125). I would call that networked with sets of people and/or objects.

The instructional designer within me likes to replace “student” with “learner” and “content” with “design” to more accurately describe the complexity of learners that are not students and learning designs that are not content. However, as we rely more and more on machine learning and algorithms, especially at the systemic level, we are creating new things that learners will increasingly be interacting with for the foreseeable future. I am wondering if it is time to expand this list of interactions to reflect that? Or is it long enough as it is?

So the existing ones I would keep, with “learner” exchanged for “student” and “design” exchanged for “content”:

  • learner-teacher (ex: instructivist lecture, learner teaching the teacher, or learner networking with teacher)
  • learner-learner (ex: learner mentorship, one-on-one study groups, or learner teaching another learner)
  • learner-design (ex: reading a textbook, watching a video, listening to audio, completing a project, or reading a website)
  • learner-interface (ex: web-browsing, connectivist online interactions, gaming, or computerized learning tools)
  • teacher-teacher (ex: collaborative teaching, cross-course alignment, or professional development)
  • teacher-design (ex: teacher-authored textbooks or websites, teacher blogs, or professional study)
  • group-design (ex: constructivist group work, connectivist resource sharing, or group readings)
  • group-group (ex: debate teams, group presentations, or academic group competitions)
  • learner-group (ex: individual work presented to group for debate, learner as the teacher exercises)
  • teacher-group (ex: teacher contribution to group work, group presentation to teacher)
  • networked with sets of people or objects (ex: Connectivism, Wikipedia, crowdsourced learning, or online collaborative note-taking)

The new ones I would consider adding include:

  • algorithm-learner (ex: learner data being sent to algorithms; algorithms sending communication back to learners as emails, chatbot messages, etc)
  • algorithm-teacher (ex: algorithms communicating aggregate or individual learner data on retention, plagiarism, etc)
  • algorithm-design (ex: algorithms that determine new or remedial content; machine learning/artificial intelligence)
  • algorithm-interface (ex: algorithms that reformat interfaces based on input from learners, responses sent to chatbots, etc)
  • algorithm-group (ex: algorithms that determine how learners are grouped in courses, programs, etc)
  • algorithm-system (ex: algorithms that report aggregate or individual learner data to upper level admin)
  • system-learner (ex: system-wide initiatives that attempt to “solve” retention, plagiarism, etc)
  • system-teacher (ex: cross-curricular implementation, standardized teaching approaches)
  • system-design (ex: degree programs, required standardized testing, and other systemic requirements)

Well… that gets too long. But I suspect that a lot of the new additions listed would fall under the job category of what many call “learning engineering” maybe? You might have noticed that it appears as if I removed “content-content” – but that was renamed “algorithm-design,” as that is mainly what I think of for “content-content.” But I could be wrong. I also left out “algorithm-algorithm,” as algorithms already interface with themselves and other algorithms by design. That is implied in “algorithm-design,” kind of in the same way I didn’t include learners interacting with themselves in self-reflection as that is implied in “learner-learner.” But I could be swayed by arguments for including those as well. I am also not sure how much “system-interface” interaction we have, as most systems interact with interfaces through other actors like learners, teachers, groups, etc. So I left that off. I also couldn’t think of anything for “system-group” that was different from anything else already listed as examples elsewhere. And I am not sure we have much real “system-system” interaction outside of a few random conversations at upper administrative levels that rarely trickle down into education without being vastly filtered through systemic norms first. Does it count as “system-system” interaction in a way that affects learning if the receiving system is going to mix it with their existing standards before approving and disseminating it first? I’m not sure.

While many people may not even see the need for the new ones covered here, please understand that these interactions are heavily utilized in surveillance-focused Ed-Tech. Of course, all education utilizes some form of surveillance, but to those Ed-Tech sectors that make it their business to promote and sell surveillance as a feature, these are interactions that we need to be aware of. I would even contend that these types of interaction are more important behind the scenes of all kinds of tech than many of us realize. So even if you disagree with this list, please understand that these interactions are a reality.

So – that is 20 types of interaction, with some more that maybe should have been included or not depending on your viewpoint (and I am still not sure we have advanced enough with “algorithm-interface” yet to give it its own category, but I think we will pretty soon). Someone may have done this already and I just couldn’t find it in a search – so I apologize if I missed others’ work. None of this is to say that any of these types of interactions are automatically good for learners just because I list them here – they just are the ones that are happening more and more as we automate more and more and/or take a systems approach to education. In fact, these new levels could be helpful in informing critical dialogue about our growing reliance on automation and surveillance in education as well.

What Does It Take to Make an -agogy? Dronagogy, Botagogy, and Education in a Future Where Humans Are Not the Only Form of “Intelligence”

Several years ago I wrote a post that looked at every form of learning “-agogy” I could find. Every once in a while I think that I probably need to do a search to see if others have been added so I can do an updated post. I did find a new one today, but I will get to that in a second.

The basic concept of educational -agogy is that, because “agogy” means “lead” (often seen in the sense of education, but not always), you combine who is being led or the context for the leading with the suffix. Ped comes from the Greek word for “children,” andr from “men,” heut from “self,” and so on. It doesn’t always have to be Greek (peeragogy, for example) – but the focus is on who is being taught and not what topic or tool they are being taught with.

I noticed a recent paper that looks to make dronagogy a term: A Framework of Drone-based Learning (Dronagogy) for Higher Education in the Fourth Industrial Revolution. The article most often mentions pedagogy as a component of dronagogy, so I am not completely sure of the structure they envision. But it does seem clear that drones are the topic and/or tool, and only in certain subjects. Therefore, dronology would have probably been a more appropriate term. They are essentially talking about the assembly and programming of drones, not teaching the actual drones.

But someday, something like dronagogy may actually be a thing (and “someday” as in pretty soon someday, not “a hundred years from now” someday). If someone hasn’t already, soon someone will argue that Artificial Intelligence has transcended “mere” programming and needs to be “led” or “taught” more than “programmed.” At what point will we see the rise of “botagogy” (you heard it here first!)? Or maybe “technitagogy” (from the Greek word for “artificial” – technitós)?

Currently, you only hear a few people like George Siemens talking about how humans are no longer the only form of “intelligence” on this planet. While there is some resistance to that idea (because AI is not as “intelligent” as many think it is), it probably won’t be much longer before there is wider acceptance that we actually are living in a future where humans are not the only form of “intelligence” around. Will we expand our view of leading/teaching to include forms of intelligence that may not be like humans… but that can learn in various ways?

Hard to say, but we will probably be finding out sooner than a lot of us think we will. So maybe I shouldn’t be so quick to question dronagogy? Will drone technology evolve into a form of intelligence someday? To be honest, that just sounds like a Black Mirror episode that we may not want to get into.

(Feature image by Franck V. on Unsplash)

What if We Could Connect Interactive Content Like H5P to Artificial Intelligence?

You might have noticed some chatter recently about H5P, which can create interactive content (videos, questions, games, content, etc) that works in a browser through html5. The concept seems to be fairly similar to the E-Learning Framework (ELF) from APUS and other projects started a few years ago based on html5 and/or jquery – but those seem to mostly be gone or kept a secret. The fact that H5P is easily shareable and open is a good start.

Some of our newer work on self-mapped learning pathways is starting to focus on how to build tools that can help learners map their own learning pathway through multiple options. Something like H5P would be a great tool for that. I am hoping that the future of H5P will include ways to harness AI to mix and match content beyond what most groups currently do with html5.

To explain this, let me take a step back a bit and look at where our current work with AI and Chatbots sits, and point to where this could all go. Our goal right now is to build branching tree interfaces and AI-driven chatbots to help students get answers to FAQs about various courses. This is not incredibly ground-breaking at this point, but we hope to take it in some interesting directions.

So, the basic idea with our current chatbots is that you create answers first and then come up with a set of questions that serve as different ways to get to each answer. The AI uses Natural Language Processing and other algorithms to take what is entered into a chatbot interface and match the entered text with a set of questions:

Diagram 1: Basic AI structure of connecting various question options to one answer. I guess the resemblance to a snowflake shows I am starting to get into the Christmas spirit?

You put a lot of answers together into a chatbot, and the oversimplified way of explaining it is that the bot tries to match each question from the chatbot interface with the most likely answer/response:

Diagram 2: Basic chatbot structure of determining which question the text entered into the bot interface most closely matches, and then sending that response back to the interface.

This is our current work – putting together a chatbot fueled FAQ for the upcoming Learning Analytics MOOCs.
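
For those that like to see the idea in code, here is a grossly oversimplified sketch of that matching step. Our actual bots lean on an NLP service rather than anything this basic, and the FAQ text below is invented – but it shows the loop of matching the entered text to the closest question and sending back that question’s answer.

```python
# Toy FAQ matcher: score the user's text against every question variation
# and return the answer attached to the best match. All FAQ text is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "The final project is due the last Friday of the course.": [
        "When is the final project due?",
        "What is the deadline for the project?",
        "Is the project due at the end of the course?",
    ],
    "Weekly discussions close on Sunday at midnight.": [
        "When do discussions close?",
        "What time are discussion posts due?",
    ],
}

# Flatten to one row per question variation, remembering which answer it maps to.
questions, answers = [], []
for answer, variations in faq.items():
    for question in variations:
        questions.append(question)
        answers.append(answer)

vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def respond(user_text):
    """Return the answer whose question variations best match the user's text."""
    scores = cosine_similarity(vectorizer.transform([user_text]), question_vectors)[0]
    return answers[scores.argmax()]

print(respond("hey, when does the project need to be turned in?"))
# -> "The final project is due the last Friday of the course."
```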

Now, we tend to think of these things in terms of “chatting” and/or answering questions, but what if we could turn that on its head? What if we started with some questions or activities, and the responses from those built content/activities/options in a more dynamic fashion using something like H5P or conversational user interfaces (except without the part that tries to fool you that a real person is chatting with you)? In other words, what if we replaced the answers with content and the questions with responses from learners in Diagram 1 above:

Diagram 3: Basic AI structure of connecting learners responses to content/activities/learning pathway options.

And then we replaced the chatbot with a more dynamic interactive interface in Diagram 2 above:

Diagram 4: Basic example of connecting learners with content/activity groupings based on learner responses to prompts embedded in content, activities, or videos.

The goal here would be to not just send back a response to a chat question, but to build content based on learner responses – using tools like H5P to make interactive videos, branching text options, etc. on the fly. Of course, most people see this and think of how it could be used to create different ways of looking at content in a specific topic. Creating greater diversity within a topic is a great place to start, but there could also be bigger ways of looking at this idea.

For example, you could take a cross-disciplinary approach to a course and use a system like this to come up with ways to bring in different areas of study. Instead of using the example in Diagram 4 above to add greater depth to a History course, what if it could be used to tap into a specific learner’s curiosities to, say, bring in some related cross-disciplinary topics:

Diagram 5: Content/activity groupings based on matching learner responses with content and activities that connect with cross disciplinary resources.

Of course, there could be many different ways to approach this. What if you could also utilize a sociocultural lens with this concept? What if you have learners from several different countries in a course and want to bring in content from their contexts? Or you teach in a field that is very U.S.-centric that needs to look at a more global perspective?

Diagram 6: Content/activity groupings based on matching learner responses with content and activities that focus on different countries.

Or you could also look at dynamic content creation from an epistemological angle. What if you had a way to rate how instructivist or connectivist a learner is (something I started working on a bit in my dissertation work)? Or maybe even use something like a Self-Regulated Learning Index? What if you could connect learners with lessons and activities closer to what they prefer or need based on how self-regulated, connectivist, experienced, etc they are?

Diagram 7: The content/activity groupings above are based on a scale I created in my dissertation that puts “mostly instructivist” at 1.0 and “mostly connectivist” at 2.0.

You could even look at connecting an assignment bank to something like this to help learners get out-of-the-box ideas for how to prove what they have been learning:

Diagram 8: Content/activity groupings based on matching learner responses with specific activities they might want to try from an assignment bank.

Even beyond all of this, it would be great to build a system that allows for mixes of responses to each prompt rather than just one (or even systems that allow you to build on one response with the next one in specific ways). The red lines in the diagrams above represent what the AI sees as the “best match,” but what if they instead indicated the percentage of content that should come from each content pool? The cross-disciplinary image above (Diagram 5) could move from just picking “Art” as the best match to making a lesson that is 10% Health, 20% History, 50% Art, and so on. Or the first response could be some related content on “Art,” then another prompt would pull in a bit from “Health.”
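
As a rough sketch of that percentage idea (my own illustration with invented pools and scores – not something we have built): instead of keeping only the highest similarity score, you could normalize all of the scores into weights that decide how much of the next piece of content comes from each pool.

```python
# Turn raw similarity scores per content pool into a percentage "mix."
# The pools and scores are invented; in practice they would come from
# matching a learner's response against each pool, as in the diagrams above.
def content_mix(scores):
    """Normalize similarity scores into weights that sum to (roughly) 1.0."""
    total = sum(scores.values())
    return {pool: round(score / total, 2) for pool, score in scores.items()}

similarity_scores = {"History": 0.62, "Art": 0.81, "Health": 0.33, "Economics": 0.12}
print(content_mix(similarity_scores))
# {'History': 0.33, 'Art': 0.43, 'Health': 0.18, 'Economics': 0.06}
```

Those weights would then drive how much of the next activity gets pulled from each pool, rather than treating the top-scoring pool as the only match.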

Then the even bigger question is: can these various methods be stacked on top of each other, so that you are not forced to choose sociocultural or epistemological separately, but the AI could look at both at once? Probably so, but would a tool to create such a lesson be too complex for most people to practically utilize?

Of course, something like this is ripe for bias, so that would have to be kept in mind at all stages. I am not sure exactly how to counteract that, but hopefully if we try to build more options into the system for the learner to choose from, this will start dealing with and exposing that. We would also have to be careful to not turn this into some kind of surveillance system to watch learners’ every move. Many AI tools are very unclear about what they do with your data. If students have to worry about data being misused by the very tool that is supposed to help them, that will cause stress that is detrimental to learning. So in order for something like this to work, openness and transparency about what is happening – as well as giving learners control over their data – will be key to building a system that is actually usable (trusted) by learners.

Social Presence, Immediacy, Virtual Reality, and Artificial Intelligence

While doing some research for my current work on AI and Chatbots, I was struck by how much some people are trying to use bots to fool people into thinking they are really humans. This seems to be a problematic road to go down, as we know that people are not necessarily against interacting with non-human agents (like those of us that prefer to get basic information like bank account balances over the phone from a machine rather than bother a human). At the core, I think these efforts are really aimed at humanizing those tools, which is not a bad aim. I just don’t think we should ever get away from openness about who or what we are having learners interact with.

I was reminded about Second Life (remember that?) and how we used to question how some people would build traditional structures like rooms and stairs in spaces where your avatars could fly. At the time it was the “cool, hip” way to mock the people that you didn’t think “understood” Second Life. However, I am wondering if maybe there was something to this approach that we missed?

Concepts like social presence and immediacy have fallen out of the limelight in education, but they still have immense value (and many people still promote them thankfully). We need something in our educational efforts, whether in classrooms or at a distance online, that connects us to other learners in ways that we can feel, sense, connect with, etc. What if one way of doing that is by creating human-based structures in our virtual/digital interactions?

I’m not saying to ditch anything experimental and just recreate traditional classroom simulations in virtual reality, or re-enact standard educational interactions with chat bots. But what if incorporating some of those elements could help bring about more of a human element?

To be honest, I am not sure where the right “balance” of these two concepts would be. If I enter a virtual reality space that is just like a building in real life, I will probably miss out on the affordances of exploration that virtual reality could bring to the table. But if I walk into some wild trippy learning space that looks like a foreign planet to me, I will have to spend more time figuring out the way things work than actually learning about the topic I am interested in. I would also feel a bit out of contact with humanity if there is little to tie me back to what I am used to in real life.

The same could be said about the interactions we are designing for AI and chatbots. On one hand, we don’t need to mimic the status quo in the physical world just because it is what we have always done. But we also don’t need to do things that are way out there just because we can, either. Somewhere there is probably a happy medium of humanizing these technologies enough for us to connect with them (without trying to trick people into thinking they are humans) while still not replicating everything we already know just because that is what we know. I know some Social Presence Theory people would balk at the idea of those ideas being applied to technology, but I am thinking more of how we can use those concepts to inform our designs – just in a more meta fashion. Something to mull over for now.

How Would You Use Innovation to Save Education?

Too often it seems like educators define innovation as “change for the sake of changing something.” Innovation becomes the default context that they start with: if you have a problem, then fix it by innovating. For a while now, various outlets have been asking various questions that all boil down to: How would you use innovation to save education?

This is part of what Audrey Watters refers to as the “Innovation Gospel,” which became overwhelming in education and business a long time ago. One goal of the Innovation Gospel, of course, is to “fix” education… but always by starting with innovation rather than solutions. Watters’ response to what she would do to fix education is not “innovative” according to many, but it is something that would be a huge change:

This is also a question I have often pondered – what would I do if I had massive money to fix education? “Reparations” being one of the best answers, I will have to go for some runner-up answers. To be honest, nothing really innovative comes to mind at first. What I first think of are things that we all have heard from research as far back as the 80s or 90s (probably earlier) – stuff that we are pretty sure would help education, but that we never really hear mentioned in the Innovation Gospel:

  • Care for students: make sure they are fed, clothed, cared for – and not just with the small (but impactful nonetheless) efforts we currently have.
  • Train teachers to be more empathetic and caring for their students.
  • Pay to make facilities and tools safe and inclusive.
  • For that matter, make our schools and curriculum inclusive and empathetic for all learners. Even the newer ones.
  • Re-vamp curriculum to move away from pedagogy to heutagogy (teaching learners how to learn rather than what to learn).
  • Properly fund schools and pay teachers and staff.
  • Remove grades and standardized testing.

The list could probably go on, but the important thing to emphasize here is that this is all old research. None of it is “innovative” in the way many use the term today. You will find these ideas mentioned or even explored in depth in older Instructional Design textbooks as “established ideas” (even though I would still use “established” cautiously at best) or some other term that implies they are not new.

So why do we hear more about learning analytics and virtual reality and innovation “fixing” education these days than these “established” ideas?

Maybe it is our worship of the Innovation Gospel. Maybe it is difficult to quantify care, inclusion, heutagogy, and grade-less classrooms. Maybe it exposes education’s long fascination with increasing surveillance of learners in various ways. Maybe it means we lose the ability to “weed out” less desirable students in the name of standardization and averages. Maybe we are afraid that these are never-ending rabbit holes of problems that we don’t want to know how deep they go. Maybe these are just too hard and complex and overwhelming to know where to start. Maybe, maybe, maybe.

Whatever the reason, the people that have the money and means to work on these issues are usually not interested in the fixes that have already been discovered (but poorly implemented or never implemented). They are interested in data policies and future trends and fancy shiny virtual things – all things that might in some way impact education (or they might not). Our challenge is to pull that interest away from the shiny new toy of innovation and focus it on the nitty gritty work of making the hard changes at the classroom level of education. To be honest… that is a pretty daunting dragon to slay.