Updating Types of Interactions in Online Education to Reflect AI and Systemic Influence

One of the foundational concepts in instructional design and other parts of the field of education is the set of interaction types that occur in the online educational process. In 1989, Michael G. Moore first categorized three types of interaction in education: student-teacher, student-student, and student-content. Then, in 1994, Hillman, Willis, and Gunawardena expanded on this model, adding student-interface interactions. Four years later, Anderson & Garrison (1998) added three more interaction types to account for advances in technology: teacher-teacher, teacher-content, and content-content. Since social constructivist theory did not quite fit into these seven types of interaction, Dron proposed four more types in 2007: group-content, group-group, learner-group, and teacher-group. Some would argue that “student-student” and “student-content” still cover these newer additions, and to some degree that is true. But it also helps to look at the differences between these terms as technology has advanced and changed interactions online – so I think the new terms are also helpful. More recently, proponents of connectivism have proposed acknowledging patterns of “interactions with and learning from sets of people or objects [which] form yet another mode of interaction” (Wang, Chen, & Anderson, 2014, p. 125). I would call that networked with sets of people or objects.

The instructional designer within me likes to replace “student” with “learner” and “content” with “design” to more accurately describe the complexity of learners that are not students and learning designs that are not content. However, as we rely more and more on machine learning and algorithms, especially at the systemic level, we are creating new things that learners will increasingly be interacting with for the foreseeable future. I am wondering if it is time to expand this list of interactions to reflect that? Or is it long enough as it is?

So the existing ones I would keep, with “learner” exchanged for “student” and “design” exchanged for “content”:

  • learner-teacher (ex: instructivist lecture, learner teaching the teacher, or learner networking with teacher)
  • learner-learner (ex: learner mentorship, one-on-one study groups, or learner teaching another learner)
  • learner-design (ex: reading a textbook, watching a video, listening to audio, completing a project, or reading a website)
  • learner-interface (ex: web-browsing, connectivist online interactions, gaming, or computerized learning tools)
  • teacher-teacher (ex: collaborative teaching, cross-course alignment, or professional development)
  • teacher-design (ex: teacher-authored textbooks or websites, teacher blogs, or professional study)
  • group-design (ex: constructivist group work, connectivist resource sharing, or group readings)
  • group-group (ex: debate teams, group presentations, or academic group competitions)
  • learner-group (ex: individual work presented to group for debate, learner as the teacher exercises)
  • teacher-group (ex: teacher contribution to group work, group presentation to teacher)
  • networked with sets of people or objects (ex: Wikipedia, crowdsourced learning, or online collaborative note-taking)

The new ones I would consider adding include:

  • algorithm-learner (ex: learner data being sent to algorithms; algorithms sending communication back to learners as emails, chatbot messages, etc)
  • algorithm-teacher (ex: algorithms communicating aggregate or individual learner data on retention, plagiarism, etc)
  • algorithm-design (ex: algorithms that determine new or remedial content; machine learning/artificial intelligence)
  • algorithm-interface (ex: algorithms that reformat interfaces based on input from learners, responses sent to chatbots, etc)
  • algorithm-group (ex: algorithms that determine how learners are grouped in courses, programs, etc)
  • algorithm-system (ex: algorithms that report aggregate or individual learner data to upper level admin)
  • system-learner (ex: system-wide initiatives that attempt to “solve” retention, plagiarism, etc)
  • system-teacher (ex: cross-curricular implementation, standardized teaching approaches)
  • system-design (ex: degree programs, required standardized testing, and other systemic requirements)

Well… that gets too long. But I suspect that a lot of the items on the new additions list would fall under the job category of what many call “learning engineer” – so maybe there is a use for this? You might have noticed that it appears as if I removed “content-content” – but that was renamed “algorithm-design,” as that is mainly what I think of for “content-content.” But I could be wrong. I also left out “algorithm-algorithm,” as algorithms already interface with themselves and other algorithms by design. That is implied in “algorithm-design,” kind of in the same way I didn’t include learners interacting with themselves in self-reflection, as that is implied in “learner-learner.” But I could be swayed by arguments for including those as well. I am also not sure how much “system-interface” interaction we have, as most systems interact with interfaces through other actors like learners, teachers, groups, etc. So I left that off. I also couldn’t think of anything for “system-group” that was different from anything already listed as examples elsewhere. And I am not sure we have much real “system-system” interaction outside of a few random conversations at upper administrative levels that rarely trickle down into education without being vastly filtered through systemic norms first. Does it count as “system-system” interaction in a way that affects learning if the receiving system is going to mix it with its existing standards before approving and disseminating it first? I’m not sure.

So – that is 20 types of interaction, with some more that maybe should have been included or not, depending on your viewpoint (and I am still not sure we have advanced enough with “algorithm-interface” yet to give it its own category, but I think we will pretty soon). Someone may have done this already and I just couldn’t find it in a search – so I apologize if I missed others’ work. None of this is to say that any of these types of interactions are good or bad for learners – they just are the ones that are happening more and more as we automate more and more and/or take a systems approach to education. In fact, these new levels could be helpful in informing critical dialogue about our growing reliance on automation in education as well.

Artificial Intelligence and Knowing What Learners Know Once They Have “Learned”

One of the side effects – good or bad – of our increasing utilization of Artificial Intelligence in education is that it brings to light all of the problems we have with knowing how a learner has “learned” something. This specific problem has been discussed and debated in Instructional Design courses for decades – some of my favorite class meetings in grad school revolved around digging into these problems. So it is good to see these issues being brought to a larger conversation about education, even if it is in the context of our inevitable extinction at the hands of our future robot overlords.

Dave Cormier wrote a very good post about the questions to ask about AI in learning. I will use that post to direct some responses mostly back to the AI community as well as those utilizing AI in education. Dave ends up questioning a scenario that is basically the popular “Netflix for Education” approach to educational AI: the AI perceives what the learners choose as their favorite learning resources by likes, view counts, etc, and then proposes new resources to specific learners to help them learn more, in the way Netflix recommends new shows to watch based on the popularity of other shows (which were connected to each other by popularity metrics as well).

This, of course, leads to the problem that Dave points out: “If they value knowledge that is popular, then knowledge slowly drifts towards knowledge that is popular.” Popular, as we all learn at some point, does not always equal good, helpful, correct, etc. However, people in the AI field will point out that they can build a system that relies on the judgment of teachers and subject-matter experts rather than likes, and I get that. Some have done that. But there is a bigger problem here.

Let’s back up to the part from Dave’s post about how AI accomplishes recommendations by simplifying the learners down to a few choices, much in the same way Netflix simplifies viewing choices down to a small list of genres. This is often true. However, this is true not because programmers wanted it that way – this is the model they inherited from education itself. Sure, it is true that in an ideal learning environment, the teacher talks to all learners and gets to make personal teaching choices for each one because of that. But in reality, most classes design one pathway for all learners to take: read this book, listen to these lectures, take this test, answer this discussion question while responding to two peers, wash, rinse, repeat.

AI developers know this, and to their credit, they are offering personalized learning solutions that at least expand on this. Many examinations of the problems with AI skip over this part and just look at ideal classrooms where learners and instructors have time to dig into individual learner complexities. But in the real world? Everyone follows the one path. So adding 7 or 10 or more options to the one that now exists (for most)? It’s at least a step in the right direction, right?

Depends on who you ask. But that is another topic for another day.

This is kind of where a lot of what is now called “personalized education” is at. I compare this state to all of those personalized gift websites, where you can go buy a gift like a mouse pad and get a custom message or name printed on it. Sure, the mouse pad is “personalized” with my name… but what if I didn’t need a mouse pad in the first place? You might say “well, there were only a certain set of gifts available and that was the best one out of the choices that were there.”

Sure, it might be a better gift than some plain mouse pad from Walmart to the person that needed a mouse pad. But for everyone else – not so much.

Like Dave and many have pointed out – someone is choosing those options and limiting the number of them. But to someone going from the linear choice of local TV stations to Netflix, at first that choice seems awesome. However, soon you start noticing the limitations of only watching something on Netflix. Then it starts getting weird. If I liked Stranger Things, I would probably like Tidying Up with Marie Kondo? Really?

The reality is, while people in the AI field will tell you that AI “perceives the learner and knowledge in a field,” it is more accurate to say that the AI “records choices that the learner makes about knowledge objects and then analyzes those choices to find patterns between the learner and knowledge object choices in ways that are designed to be predictive in some way for future learners.” If you just look at all that as “perceiving,” then you probably will end up with the Netflix model and all the problems that brings. But if you take a more nuanced look at what happens (it’s not “perceiving” as much as “recording choices” for example), and connect it with a better way of looking at the learner process, you will end up with better models and ideas.
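To make that distinction concrete, here is a minimal sketch (every name and structure here is hypothetical, not drawn from any real system) of what such an AI actually stores: a log of choice events that later gets mined for patterns, with nothing about the learning itself anywhere in it.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class ChoiceLog:
    """Records choices learners make about knowledge objects.

    Note what is NOT here: nothing about how or whether each
    learner actually learned -- only which objects they picked.
    """
    events: list = field(default_factory=list)  # (learner_id, object_id) pairs

    def record(self, learner_id, object_id):
        self.events.append((learner_id, object_id))

    def patterns(self):
        # "Analyzing choices to find patterns": here, simply how often
        # each knowledge object was chosen across all learners.
        return Counter(obj for _, obj in self.events)

log = ChoiceLog()
log.record("learner-1", "video-a")
log.record("learner-2", "video-a")
log.record("learner-2", "article-b")
print(log.patterns().most_common(1))  # the "predictive" signal is just counts
```

Calling this “perceiving the learner” oversells it; the system only ever sees the choice events it was built to record.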

So back to how we really don’t have that great of an idea of how learning actually happens in the brain. There are many good theories, and Stephen Downes usually highlights the best in emerging research in how we really understand the actual process of learning in the brain. But since there is still so much we either a) don’t know, or b) don’t know how to quantify and measure externally from the brain – then we can’t actually measure “learning” itself.

As a side note: this is, quite frankly, where most of the conversation on grading goes wrong. Grades are not a way to measure learning. We can’t stick a probe on people’s heads and measure a “learning” level in human brains. So we have to have some kind of external way to figure out if learning happens. As Dr. Scott Warren puts it: it’s like we are looking at a brick wall with a few random windows that really aren’t in the right spot and trying to figure out what is happening on the other side of the wall.

Some people are clinging to the outmoded idea that brains are like computers: input knowledge/skills, output learning. Our brains don’t work like that. But unfortunately, that is often the way many look at the educational process. Instructors design some type of input – lectures, books, training, videos, etc – and then we measure the output with grades as a way to say if “learning happened” or not.

The reality is, we technically just point learners towards something that they can use in their learning process (lectures, books, videos, games, discussions, etc), they “do” the learning, and then we have to figure out what they learned. Grades are a way to see how learners can apply what they learned to a novel artifact – a test, a paper, a project, a skill demonstration, etc. Grades in no way measure what students have learned, but rather how students can apply what they learned to some situation or context determined by someone else. That way – if they apply it incorrectly by, say, getting the question wrong – we assume they haven’t learned it well enough. Of course, an “F” on a test could mean the test was a poor way to apply the knowledge as much as it could say that the learner didn’t learn. Or that the learner got sidetracked while taking the test. Or, so on….

The learning that happens in between the choosing of the content/context/etc and the application of the knowledge gained on a test or paper or other external measurement is totally up to the learner.

So that is what AI is really analyzing in many designs – it is looking at what choices were made before the learning and what the learner was able to do with their learning on the other side of it, based on some external application of knowledge/skills/etc. We have to look at AI as something that affects and/or measures the bookends to the actual learning.

Rather than the Netflix approach to recommendations, I would say a better model to look to is the Amazon model of “people also bought this.” Amazon looks at each thing they sell as an individual object that people will connect in various ways to other individual objects – some connections that make sense, others that don’t. Sometimes people look at one item and buy other similar items instead, sometimes people buy items that work together, and sometimes people “in the know” buy random things that seem disconnected to newbies. The Amazon system is not perfect, but it does allow for greater individuality in purchasing decisions, and doesn’t assume that “because you bought this phone, you might also want to buy this phone as well because it is a phone, too.”

In other words, the Amazon model can see the common connections as well as the uncommon connections (even across their predefined categories), and let you the consumer decide which connections work for you or not. The Netflix model looks for the popular/common connections within their predefined categories.
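A toy sketch of the difference, using hypothetical learning-resource names (nothing here comes from Netflix’s or Amazon’s actual systems), might look like this:

```python
from collections import Counter, defaultdict

# Hypothetical learner histories: each inner list is one learner's
# chosen resources (all names are made up for illustration).
histories = [
    ["intro-video", "quiz-1", "case-study"],
    ["intro-video", "quiz-1", "podcast"],
    ["intro-video", "case-study"],
    ["deep-dive-paper", "case-study"],
]

# Netflix-style: recommend whatever is most popular overall,
# regardless of what this particular learner chose.
popularity = Counter(item for h in histories for item in h)

def recommend_popular(seen, n=2):
    return [item for item, _ in popularity.most_common() if item not in seen][:n]

# Amazon-style "people also chose": item-item co-occurrence, so even an
# obscure resource can surface if it co-occurs with what you chose.
co_occurrence = defaultdict(Counter)
for h in histories:
    for a in h:
        for b in h:
            if a != b:
                co_occurrence[a][b] += 1

def recommend_also_chosen(seen, n=2):
    scores = Counter()
    for item in seen:
        scores.update(co_occurrence[item])
    for item in seen:
        scores.pop(item, None)  # never re-recommend what was already chosen
    return [item for item, _ in scores.most_common(n)]

print(recommend_popular({"deep-dive-paper"}))      # popularity ignores the learner's context
print(recommend_also_chosen({"deep-dive-paper"}))  # co-occurrence uses it
```

The co-occurrence version can surface an uncommon connection – a resource chosen alongside an obscure one – that a pure popularity ranking would bury.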

I would submit that learners need ways to learn that can look at common learning pathways as well as uncommon pathways – especially across any categories we would define for them.

Of course, Amazon can collect data in ways that would be illegal (for good reason) in education, and the fact that they have millions of transactions each day means that they get detailed data about even obscure products in ways that would be impossible at a smaller scale in education. In no way should this come across as me proposing something inappropriate like “Amazon for Education!” The point I am getting at here is that we need a better way to look at AI in education:

  • Individuals are complex, and all systems need to account for complexity instead of simplifying for the most popular groups based on analytics.
  • AI should not be seen as something that perceives the learner or their knowledge or learning, but as something that collects incomplete data on learners’ choices.
  • The goal of this collection should not just be to perceive learners and content, but to understand complex patterns made by complex people.
  • The categories and patterns selected by the creators of AI applications should not become limitations on the learners within that application.
  • While we have good models for how we learn, the actual act of “learning” should still be treated as a mysterious process (until that changes – if ever).
  • AI, like all education, does not measure learning, but how learning that occurred mysteriously in the learner was applied to an external context or artifact. This will be a flawed process, so the results of any AI application should be viewed within the bias and flaws created by the process.
  • The learner’s perception of what they learned and how well they were able to apply it to an external context/artifact is mostly ignored or discarded as irrelevant self-reported data, and that should stop.