Updating Types of Interactions in Online Education to Reflect AI and Systemic Influence

One of the foundational concepts in instructional design and other parts of the field of education is the set of interaction types that occur in the online educational process. In 1989, Michael G. Moore first categorized three types of interaction in education: student-teacher, student-student, and student-content. Then, in 1994, Hillman, Willis, and Gunawardena expanded on this model, adding student-interface interactions. Four years later, Anderson & Garrison (1998) added three more interaction types to account for advances in technology: teacher-teacher, teacher-content, and content-content. Since social constructivist theory did not quite fit into these seven types of interaction, Dron proposed four more types of interaction in 2007: group-content, group-group, learner-group, and teacher-group. Some would argue that “student-student” and “student-content” still cover these newer additions, and to some degree that is true. But it also helps to look at the differences between these terms as technology has advanced and changed interactions online – so I think the new terms are helpful as well. More recently, proponents of connectivism have proposed acknowledging patterns of “interactions with and learning from sets of people or objects [which] form yet another mode of interaction” (Wang, Chen, & Anderson, 2014, p. 125). I would call that networked with sets of people or objects.

The instructional designer within me likes to replace “student” with “learner” and “content” with “design” to more accurately describe the complexity of learners that are not students and learning designs that are not content. However, as we rely more and more on machine learning and algorithms, especially at the systemic level, we are creating new things that learners will increasingly be interacting with for the foreseeable future. I am wondering if it is time to expand this list of interactions to reflect that? Or is it long enough as it is?

So here are the existing ones I would keep, with “learner” substituted for “student” and “design” substituted for “content”:

  • learner-teacher (ex: instructivist lecture, learner teaching the teacher, or learner networking with teacher)
  • learner-learner (ex: learner mentorship, one-on-one study groups, or learner teaching another learner)
  • learner-design (ex: reading a textbook, watching a video, listening to audio, completing a project, or reading a website)
  • learner-interface (ex: web-browsing, connectivist online interactions, gaming, or computerized learning tools)
  • teacher-teacher (ex: collaborative teaching, cross-course alignment, or professional development)
  • teacher-design (ex: teacher-authored textbooks or websites, teacher blogs, or professional study)
  • group-design (ex: constructivist group work, connectivist resource sharing, or group readings)
  • group-group (ex: debate teams, group presentations, or academic group competitions)
  • learner-group (ex: individual work presented to group for debate, learner as the teacher exercises)
  • teacher-group (ex: teacher contribution to group work, group presentation to teacher)
  • networked with sets of people or objects (ex: Wikipedia, crowdsourced learning, or online collaborative note-taking)

The new ones I would consider adding include:

  • algorithm-learner (ex: learner data being sent to algorithms; algorithms sending communication back to learners as emails, chatbot messages, etc)
  • algorithm-teacher (ex: algorithms communicating aggregate or individual learner data on retention, plagiarism, etc)
  • algorithm-design (ex: algorithms that determine new or remedial content; machine learning/artificial intelligence)
  • algorithm-interface (ex: algorithms that reformat interfaces based on input from learners, responses sent to chatbots, etc)
  • algorithm-group (ex: algorithms that determine how learners are grouped in courses, programs, etc)
  • algorithm-system (ex: algorithms that report aggregate or individual learner data to upper level admin)
  • system-learner (ex: system-wide initiatives that attempt to “solve” retention, plagiarism, etc)
  • system-teacher (ex: cross-curricular implementation, standardized teaching approaches)
  • system-design (ex: degree programs, required standardized testing, and other systemic requirements)

Well… that gets too long. But I suspect that a lot of the new additions would fall under the job category of what many call a “learning engineer” – so maybe there is a use for this? You might have noticed that it appears as if I removed “content-content” – but that was renamed “algorithm-design,” as that is mainly what I think of for “content-content.” But I could be wrong. I also left out “algorithm-algorithm,” as algorithms already interface with themselves and other algorithms by design. That is implied in “algorithm-design,” kind of in the same way I didn’t include learners interacting with themselves in self-reflection as that is implied in “learner-learner.” But I could be swayed by arguments for including those as well. I am also not sure how much “system-interface” interaction we have, as most systems interact with interfaces through other actors like learners, teachers, groups, etc. So I left that off. I also couldn’t think of anything for “system-group” that was different from anything else already listed as examples elsewhere. And I am not sure we have much real “system-system” interaction outside of a few random conversations at upper administrative levels that rarely trickle down into education without being vastly filtered through systemic norms first. Does it count as “system-system” interaction in a way that affects learning if the receiving system is going to mix it with their existing standards before approving and disseminating it first? I’m not sure.
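For what it is worth, one way to see how this expanded list hangs together is to treat it as data: actors on one side, recognized pairings on the other. Here is a minimal Python sketch of that idea (the actor names and structure are mine, not anything formalized by Moore, Anderson & Garrison, or Dron):

```python
# A rough sketch of the expanded taxonomy as data: a set of actor types and the
# pairings discussed above. Names and structure are mine, for illustration only.
ACTORS = {"learner", "teacher", "design", "interface", "group",
          "network", "algorithm", "system"}

INTERACTIONS = {
    # existing types ("learner" for "student", "design" for "content",
    # "network" standing in for "networked with sets of people or objects")
    ("learner", "teacher"), ("learner", "learner"), ("learner", "design"),
    ("learner", "interface"), ("teacher", "teacher"), ("teacher", "design"),
    ("group", "design"), ("group", "group"), ("learner", "group"),
    ("teacher", "group"), ("learner", "network"),
    # proposed additions
    ("algorithm", "learner"), ("algorithm", "teacher"), ("algorithm", "design"),
    ("algorithm", "interface"), ("algorithm", "group"), ("algorithm", "system"),
    ("system", "learner"), ("system", "teacher"), ("system", "design"),
}

def is_recognized(a: str, b: str) -> bool:
    """Check whether a pairing is one of the 20 interaction types listed above."""
    return (a, b) in INTERACTIONS or (b, a) in INTERACTIONS

print(len(INTERACTIONS))                      # 20
print(is_recognized("algorithm", "learner"))  # True
print(is_recognized("system", "system"))      # False (intentionally left off)
```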

So – that is 20 types of interaction, with some more that maybe should have been included or not depending on your viewpoint (and I am still not sure we have advanced enough with “algorithm-interface” yet to give it its own category, but I think we will pretty soon). Someone may have done this already and I just couldn’t find it in a search – so I apologize if I missed others’ work. None of this is to say that any of these types of interactions are good or bad for learners – they just are the ones that are happening more and more as we automate more and more and/or take a systems approach to education. In fact, these new levels could be helpful in informing critical dialogue about our growing reliance on automation in education as well.

Artificial Intelligence and Knowing What Learners Know Once They Have “Learned”

One of the side effects – good or bad – of our increasing utilization of Artificial Intelligence in education is that it brings to light all of the problems we have with knowing how a learner has “learned” something. This specific problem has been discussed and debated in Instructional Design courses for decades – some of my favorite class meetings in grad school revolved around digging into these problems. So it is good to see these issues being brought to a larger conversation about education, even if it is in the context of our inevitable extinction at the hands of our future robot overlords.

Dave Cormier wrote a very good post about the questions to ask about AI in learning. I will use that post to direct some responses mostly back to the AI community as well as those utilizing AI in education. Dave ends up questioning a scenario that is basically the popular “Netflix for Education” approach to Educational AI: the AI perceives what the learners choose as their favorite learning resource by likes, view counts, etc., and then proposes new resources to specific learners to help them learn more, in the way Netflix recommends new shows to watch based on the popularity of other shows (which were connected to each other by popularity metrics as well).

This, of course, leads to the problem that Dave points out: “If they value knowledge that is popular, then knowledge slowly drifts towards knowledge that is popular.” Popular, as we all learn at some point, does not always equal good, helpful, correct, etc. However, people in the AI field will point out that they can build a system that relies on the expertise of experts and teachers in the field rather than likes, and I get that. Some have done that. But there is a bigger problem here.

Let’s back up to the part from Dave’s post about how AI accomplishes recommendations by simplifying the learners down to a few choices, much in the same way Netflix simplifies viewing choices down to a small list of genres. This is often true. However, this is true not because programmers wanted it that way – this is the model they inherited from education itself. Sure, it is true that in an ideal learning environment, the teacher talks to all learners and gets to make personal teaching choices for each one because of that. But in reality, most classes design one pathway for all learners to take: read this book, listen to these lectures, take this test, answer this discussion question while responding to two peers, wash, rinse, repeat.

AI developers know this, and to their credit, they are offering personalized learning solutions that at least expand on this. Many examinations of the problems with AI skip over this part and just look at ideal classrooms where learners and instructors have time to dig into individual learner complexities. But in the real world? Everyone follows the one path. So adding 7 or 10 or more options to the one that exists now (for most)? It’s at least a step in the right direction, right?

Depends on who you ask. But that is another topic for another day.

This is kind of where a lot of what is now called “personalized education” is at. I compare this state to all of those personalized gift websites, where you can go buy a gift like a mouse pad and get a custom message or name printed on it. Sure, the mouse pad is “personalized” with my name… but what if I didn’t need a mouse pad in the first place? You might say “well, there were only a certain set of gifts available and that was the best one out of the choices that were there.”

Sure, it might be a better gift than some plain mouse pad from Walmart to the person that needed a mouse pad. But for everyone else – not so much.

As Dave and many others have pointed out – someone is choosing those options and limiting the number of them. But to someone going from the linear choice of local TV stations to Netflix, at first that choice seems awesome. However, soon you start noticing the limitations of only watching something on Netflix. Then it starts getting weird. If I liked Stranger Things, I would probably like Tidying Up with Marie Kondo? Really?

The reality is, while people in the AI field will tell you that AI “perceives the learner and knowledge in a field,” it is more accurate to say that the AI “records choices that the learner makes about knowledge objects and then analyzes those choices to find patterns between the learner and knowledge object choices in ways that are designed to be predictive in some way for future learners.” If you just look at all that as “perceiving,” then you probably will end up with the Netflix model and all the problems that brings. But if you take a more nuanced look at what happens (it’s not “perceiving” as much as “recording choices” for example), and connect it with a better way of looking at the learner process, you will end up with better models and ideas.

So back to how we really don’t have that great of an idea of how learning actually happens in the brain. There are many good theories, and Stephen Downes usually highlights the best in emerging research in how we really understand the actual process of learning in the brain. But since there is still so much we either a) don’t know, or b) don’t know how to quantify and measure externally from the brain – then we can’t actually measure “learning” itself.

As a side note: this is, quite frankly, where most of the conversation on grading goes wrong. Grades are not a way to measure learning. We can’t stick a probe on people’s heads and measure a “learning” level in human brains. So we have to have some kind of external way to figure out if learning happens. As Dr. Scott Warren puts it: it’s like we are looking at this brick wall with a few random windows that really aren’t in the right spot and are trying to figure out what is happening on the other side of the wall.

Some people are clinging to the outmoded idea that brains are like computers: input knowledge/skills, output learning. Our brains don’t work like that. But unfortunately, that is often the way many look at the educational process. Instructors design some type of input – lectures, books, training, videos, etc – and then we measure the output with grades as a way to say if “learning happened” or not.

The reality is, we technically just point learners towards something that they can use in their learning process (lectures, books, videos, games, discussions, etc), they “do” the learning, and then we have to figure out what they learned. Grades are a way to see how learners can apply what they learned to a novel artifact – a test, a paper, a project, a skill demonstration, etc. Grades in no way measure what students have learned, but rather how students can apply what they learned to some situation or context determined by someone else. That way – if they apply it incorrectly by, say, getting the question wrong – we assume they haven’t learned it well enough. Of course, an “F” on a test could mean the test was a poor way to apply the knowledge as much as it could say that the learner didn’t learn. Or that the learner got sidetracked while taking the test. Or, so on….

The learning that happens in between the choosing of the content/context/etc and the application of the knowledge gained on a test or paper or other external measurement is totally up to the learner.

So that is what AI is really analyzing in many designs – it is looking at what choices were made before the learning and what the learner was able to do with their learning on the other side, based on some external application of knowledge/skills/etc. We have to look at AI as something that affects and/or measures the bookends to the actual learning.

Rather than the Netflix approach to recommendations, I would say a better model to look to is the Amazon model of “people also bought this.” Amazon looks at each thing they sell as an individual object that people will connect in various ways to other individual objects – some connections that make sense, others that don’t. Sometimes people look at one item and buy other similar items instead, sometimes people buy items that work together, and sometimes people “in the know” buy random things that seem disconnected to newbies. The Amazon system is not perfect, but it does allow for greater individuality in purchasing decisions, and doesn’t assume that “because you bought this phone, you might also want to buy this phone as well because it is a phone, too.”

In other words, the Amazon model can see the common connections as well as the uncommon connections (even across their predefined categories), and let you the consumer decide which connections work for you or not. The Netflix model looks for the popular/common connections within their predefined categories.
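To make that contrast a bit more concrete, here is a toy sketch of the two approaches (my own illustration, not how Amazon or Netflix actually build their systems): one counts which resources get used together across everything, the other just ranks overall popularity, which is what produces the same list for everyone.

```python
from collections import Counter
from itertools import combinations

# Toy data: each "basket" is the set of resources one learner actually used.
baskets = [
    {"intro_video", "history_text", "art_podcast"},
    {"intro_video", "history_text"},
    {"intro_video", "stats_sim"},
    {"history_text", "art_podcast"},
]

# "Also used with" approach: count co-occurrences across all categories.
co_occurrence = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_occurrence[(a, b)] += 1

def also_used(item, top_n=3):
    """Resources most often used alongside `item`, whatever category they sit in."""
    scores = Counter()
    for (a, b), count in co_occurrence.items():
        if item == a:
            scores[b] += count
        elif item == b:
            scores[a] += count
    return scores.most_common(top_n)

# Popularity approach: rank by overall usage, so everyone gets the same picks.
popularity = Counter(item for basket in baskets for item in basket)

print(also_used("history_text"))   # connection-based, can surface uncommon pairings
print(popularity.most_common(3))   # popularity-based, identical for every learner
```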

I would submit that learners need ways to learn that can look at common learning pathways as well as uncommon pathways – especially across any categories we would define for them.

Of course, Amazon can collect data in ways that would be illegal (for good reason) in education, and the fact that they have millions of transactions each day means that they get detailed data about even obscure products in ways that would be impossible at a smaller scale in education. In no way should this come across as me proposing something inappropriate like “Amazon for Education!” The point I am getting at here is that we need a better way to look at AI in education:

  • Individuals are complex, and all systems need to account for complexity instead of simplifying for the most popular groups based on analytics.
  • AI should not be seen as something that perceives the learner or their knowledge or learning, but one that collects incomplete data on learners’ choices.
  • The goal of this collection should not just be to perceive learners and content, but to understand complex patterns made by complex people.
  • The categories and patterns selected by the creators of AI applications should not become limitations on the learners within that application.
  • While we have good models for how we learn, the actual act of “learning” should still be treated as a mysterious process (until that changes – if ever).
  • AI, like all education, does not measure learning, but how learning that occurred mysteriously in the learner was applied to an external context or artifact. This will be a flawed process, so the results of any AI application should be viewed within the bias and flaws created by the process.
  • The learner’s perception of what they learned and how well they were able to apply it to an external context/artifact is mostly ignored or discarded as irrelevant self-reported data, and that should stop.

The Artificial Component of Artificial Intelligence and the C-3PO Rule

There have been many think pieces and twitter threads examining how the “intelligence” component of “Artificial Intelligence” is not real intelligence, or at least not anything like human intelligence. I don’t really want to jump into the debate about what counts as real intelligence, but I think the point about AI not being like human intelligence should be obvious in the “artificial” component of the term. To most people, it probably is – when discussing the concept of AI in an overall sense at least.

Nobody thinks you have to mow artificial grass. No one would take artificial sweetener and use it in all of the same cooking/baking applications that they would with sugar. By calling something “artificial,” we acknowledge that there are significant differences between the real thing and the artificial thing.

But like I said, most people would probably recognize that as true for AI. The problem usually comes when companies or researchers try to make it hard to tell if their AI tool/process/etc is human or artificial. Of course, some are researching if people can tell the difference between a human and their specific AI application (that they created without any attempt to specifically make it deceptively human), and that is a different process.

Which, of course, brings up some of the blurry lines in human/machine interface. Any time you have a process or tool or application that is designed for people to interface with, you want to make sure it is as user-friendly as possible. But where is the line between “user-friendly” and “tricking people into thinking they are working with a human”? Of course there is a facet of intent in that question, but beyond intent there are also unintended consequences of not thinking through these issues fully.

Take C-3PO from Star Wars. I am sure that the technology exists in the Star Wars universe to create robots that look like real humans (just look at Luke’s new hand in The Empire Strikes Back). But the makers of protocol droids like C-3PO still made them look like robots, even though protocol droids need to have near-perfect human traits for their interface. They made a choice to make their AI tool still look artificial. Yes, I know that ultimately these are movies and the filmmakers made C-3PO look the way it did just because they thought it was cool and futuristic looking. But they also unintentionally created something I would call a “C-3PO Rule” that everyone working with AI should consider: make sure that your AI, no matter how smoothly it needs to interface with humans, has something about it that quickly and easily communicates to those that utilize it that it is artificial.

What Does It Take to Make an -agogy? Dronagogy, Botagogy, and Education in a Future Where Humans Are Not the Only Form of “Intelligence”

Several years ago I wrote a post that looked at every form of learning “-agogy” I could find. Every once in a while I think that I probably need to do a search to see if others have been added so I can do an updated post. I did find a new one today, but I will get to that in a second.

The basic concept of educational -agogy is that, because “agogy” means “lead” (often seen in the sense of education, but not always), you combine who is being led or the context for the leading with the suffix. Ped comes from the Greek word for “children,” andr from “men,” heut from “self,” and so on. It doesn’t always have to be Greek (peeragogy, for example) – but the focus is on who is being taught and not what topic or tool they are being taught.

I noticed a recent paper that looks to make dronagogy a term: A Framework of Drone-based Learning (Dronagogy) for Higher Education in the Fourth Industrial Revolution. The article most often mentions pedagogy as a component of dronagogy, so I am not completely sure of the structure they envision. But it does seem clear that drones are the topic and/or tool, and only in certain subjects. Therefore, dronology would have probably been a more appropriate term. They are essentially talking about the assembly and programming of drones, not teaching the actual drones.

But someday, something like dronagogy may actually be a thing (and “someday” as in pretty soon someday, not “a hundred years from now” someday). If someone hasn’t already, soon someone will argue that Artificial Intelligence has transcended “mere” programming and needs to be “led” or “taught” more than “programmed.” At what point will we see the rise of “botagogy” (you heard it here first!)? Or maybe “technitagogy” (from the Greek word for “artificial” – technitós)?

Currently, you only hear a few people like George Siemens talking about how humans are no longer the only form of “intelligence” on this planet. While there is some resistance to that idea (because AI is not as “intelligent” as many think it is), it probably won’t be much longer before there is wider acceptance that we actually are living in a future where humans are not the only form of “intelligence” around. Will we expand our view of leading/teaching to include forms of intelligence that may not be like humans… but that can learn in various ways?

Hard to say, but we will probably be finding out sooner than a lot of us think we will. So maybe I shouldn’t be so quick to question dronagogy? Will drone technology evolve into a form of intelligence someday? To be honest, that just sounds like a Black Mirror episode that we may not want to get into.


What if We Could Connect Interactive Content Like H5P to Artificial Intelligence?

You might have noticed some chatter recently about H5P, which can create interactive content (videos, questions, games, content, etc) that works in a browser through html5. The concept seems to be fairly similar to the E-Learning Framework (ELF) from APUS and other projects started a few years ago based on html5 and/or jquery – but those seem to mostly be gone or kept a secret. The fact that H5P is easily shareable and open is a good start.

Some of our newer work on self-mapped learning pathways is starting to focus on how to build tools that can help learners map their own learning pathway through multiple options. Something like H5P will be a great tool for that. I am hoping that the future of H5P will include ways to harness AI in ways that can mix and match content in ways beyond what most groups currently do with html5.

To explain this, let me take a step back and look at where our current work with AI and chatbots sits, and point to where this could all go. Our goal right now is to build branching tree interfaces and AI-driven chatbots to help students get answers to FAQs about various courses. This is not incredibly ground-breaking at this point, but we hope to take this in some interesting directions.

So, the basic idea with our current chatbots is that you create answers first and then come up with a set of questions that serve as different ways to get to that answer. The AI uses Natural Language Processing and other algorithms to take what is entered into a chatbot interface and match the entered text with a set of questions:

Diagram 1: Basic AI structure of connecting various question options to one answer. I guess the resemblance to a snowflake shows I am starting to get into the Christmas spirit?

You put a lot of answers together into a chatbot, and the oversimplified way of explaining it is that the bot tries to match each question from the chatbot interface with the most likely answer/response:

Diagram 2: Basic chatbot structure of determining which question the text entered into the bot interface most closely matches, and then sending that response back to the interface.
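For anyone curious what that matching step can look like under the hood, here is a minimal sketch, assuming scikit-learn for the text-similarity piece. This is an illustration of the idea, not the actual stack behind our bots, and the FAQ content is made up:

```python
# Minimal sketch of matching learner-entered text to pre-written answers.
# Each answer is paired with several question variants; the bot returns the
# answer whose variants best match the incoming text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "The course starts on January 14.": [
        "When does the course start?",
        "What is the start date?",
        "When do classes begin?",
    ],
    "All materials are free and openly licensed.": [
        "How much does the course cost?",
        "Do I need to buy a textbook?",
        "Is there a fee?",
    ],
}

# Flatten to one row per question variant, remembering which answer it maps to.
questions, answers = [], []
for answer, variants in faq.items():
    for question in variants:
        questions.append(question)
        answers.append(answer)

vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def respond(user_text: str) -> str:
    """Return the answer whose question variants are most similar to the input."""
    similarity = cosine_similarity(vectorizer.transform([user_text]),
                                   question_vectors)[0]
    return answers[similarity.argmax()]

print(respond("when does it start"))  # -> "The course starts on January 14."
```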

This is our current work – putting together a chatbot fueled FAQ for the upcoming Learning Analytics MOOCs.

Now, we tend to think of these things in terms of “chatting” and/or answering questions, but what if we could turn that on its head? What if we started with some questions or activities, and the responses from those built content/activities/options in a more dynamic fashion using something like H5P or conversational user interfaces (except without the part that tries to fool you that a real person is chatting with you)? In other words, what if we replaced the answers with content and the questions with responses from learners in Diagram 1 above:

Diagram 3: Basic AI structure of connecting learner responses to content/activities/learning pathway options.

And then we replaced the chatbot with a more dynamic interactive interface in Diagram 2 above:

Diagram 4: Basic example of connecting learners with content/activity groupings based on learner responses to prompts embedded in content, activities, or videos.

The goal here would be to not just send back a response to a chat question, but to build content based on learner responses – using tools like H5P to make interactive videos, branching text options, etc. on the fly. Of course, most people see this and think how it could be used to create different ways of looking at content in a specific topic. Creating greater diversity within a topic is a great place to start, but there could also be bigger ways of looking at this idea.

For example, you could take a cross-disciplinary approach to a course and use a system like this to come up with ways to bring in different areas of study. Instead of using the example in Diagram 4 above to add greater depth to a History course, what if it could be used to tap into a specific learner’s curiosities to, say, bring in some other related cross-disciplinary topics:

Diagram 5: Content/activity groupings based on matching learner responses with content and activities that connect with cross disciplinary resources.

Of course, there could be many different ways to approach this. What if you could also utilize a sociocultural lens with this concept? What if you have learners from several different countries in a course and want to bring in content from their contexts? Or you teach in a field that is very U.S.-centric that needs to look at a more global perspective?

Diagram 6: Content/activity groupings based on matching learner responses with content and activities that focus on different countries.

Or you could also look at dynamic content creation from an epistemological angle. What if you had a way to rate how instructivist or connectivist a learner is (something I started working on a bit in my dissertation work)? Or maybe even use something like a Self-Regulated Learning Index? What if you could connect learners with lessons and activities closer to what they prefer or need based on how self-regulated, connectivist, experienced, etc they are?

Diagram 7: The content/activity groupings above are based on a scale I created in my dissertation that puts “mostly instructivist” at 1.0 and “mostly connectivist” at 2.0.
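As a rough illustration of that last idea, matching a learner to a content/activity grouping on that kind of scale could be as simple as picking the nearest anchor point. The anchor values and grouping labels below are placeholders I made up for this sketch, not the actual bands from my dissertation:

```python
# Toy mapping from a 1.0 ("mostly instructivist") to 2.0 ("mostly connectivist")
# scale score to a content/activity grouping. Anchors and labels are invented.
GROUPINGS = {
    1.00: "structured lectures with guided practice",
    1.25: "lectures plus optional exploration activities",
    1.50: "mix of direct instruction and networked activities",
    1.75: "mostly self-directed projects with light scaffolding",
    2.00: "open, networked, learner-defined pathway",
}

def grouping_for(scale_score: float) -> str:
    """Return the grouping whose anchor value is closest to the learner's score."""
    nearest = min(GROUPINGS, key=lambda anchor: abs(anchor - scale_score))
    return GROUPINGS[nearest]

print(grouping_for(1.6))  # -> "mix of direct instruction and networked activities"
```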

You could also even look at connecting an assignment bank to something like this to help learners get out-of-the-box ideas for how to prove what they have been learning:

Diagram 8: Content/activity groupings based on matching learner responses with specific activities they might want to try from an assignment bank.

Even beyond all of this, it would be great to build a system that allows for mixes of responses to each prompt rather than just one (or even systems that allow you to build on one response with the next one in specific ways). The red lines in the diagrams above represent what the AI sees as the “best match,” but what if they instead indicated the percentage of content that should come from each content pool? The cross-disciplinary image above (Diagram 5) could move from just picking “Art” as the best match to making a lesson that is 10% Health, 20% History, 50% Art, and so on. Or the first response would be some related content on “Art,” then another prompt would pull in a bit from “Health.”
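A minimal sketch of that blending step, with made-up pools and match scores: instead of collapsing the scores to a single winner, normalize them into proportions (dropping anything below a small floor):

```python
# Turn per-pool match scores into a blend rather than a single "best match."
# The pools and scores are invented; in practice the scores would come from the
# same kind of text-matching step sketched earlier.
def blend(scores: dict[str, float], floor: float = 0.05) -> dict[str, float]:
    """Normalize match scores into proportions, dropping pools below `floor`."""
    total = sum(scores.values())
    proportions = {pool: score / total for pool, score in scores.items()}
    kept = {pool: p for pool, p in proportions.items() if p >= floor}
    kept_total = sum(kept.values())
    return {pool: round(p / kept_total, 2) for pool, p in kept.items()}

match_scores = {"Art": 0.52, "History": 0.21, "Health": 0.11, "Math": 0.03}
print(blend(match_scores))  # -> {'Art': 0.62, 'History': 0.25, 'Health': 0.13}
```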

Then the even bigger question is: can these various methods be stacked on top of each other, so that you are not forced to choose sociocultural or epistemological separately, but the AI could look at both at once? Probably so, but would a tool to create such a lesson be too complex for most people to practically utilize?

Of course, something like this is ripe for bias, so that would have to be kept in mind at all stages. I am not sure exactly how to counteract that, but hopefully if we try to build more options into the system for the learner to choose from, this will start dealing with and exposing that. We would also have to be careful to not turn this into some kind of surveillance system to watch learners’ every move. Many AI tools are very unclear about what they do with your data. If students have to worry about data being misused by the very tool that is supposed to help them, that will cause stress that is detrimental to learning. So in order for something like this to work, openness and transparency about what is happening – as well as giving learners control over their data – will be key to building a system that is actually usable (trusted) by learners.

Social Presence, Immediacy, Virtual Reality, and Artificial Intelligence

While doing some research for my current work on AI and Chatbots, I was struck by how much some people are trying to use bots to fool people into thinking they are really humans. This seems to be a problematic road to go down, as we know that people are not necessarily against interacting with non-human agents (like those of us that prefer to get basic information like bank account balances over the phone from a machine rather than bother a human). At the core, I think these efforts are really aimed at humanizing those tools, which is not a bad aim. I just don’t think we should ever get away from openness about who or what we are having learners interact with.

I was reminded about Second Life (remember that?) and how we used to question how some people would build traditional structures like rooms and stairs in spaces where your avatars could fly. At the time it was the “cool, hip” way to mock the people that you didn’t think “understood” Second Life. However, I am wondering if maybe there was something to this approach that we missed?

Concepts like social presence and immediacy have fallen out of the limelight in education, but they still have immense value (and many people still promote them thankfully). We need something in our educational efforts, whether in classrooms or at a distance online, that connects us to other learners in ways that we can feel, sense, connect with, etc. What if one way of doing that is by creating human-based structures in our virtual/digital interactions?

I’m not saying to ditch anything experimental and just recreate traditional classroom simulations in virtual reality, or re-enact standard educational interactions with chat bots. But what if incorporating some of those elements could help bring about more of a human element?

To be honest, I am not sure where the right “balance” of these two concepts would be. If I enter a virtual reality space that is just like a building in real life, I will probably miss out on the affordances of exploration that virtual reality could bring to the table. But if I walk into some wild trippy learning space that looks like a foreign planet to me, I will have to spend more time figuring out the way things work than actually learning about the topic I am interested in. I would also feel a bit out of contact with humanity if there is little to tie me back to what I am used to in real life.

The same could be said about the interactions we are designing for AI and chatbots. On one hand, we don’t need to mimic the status quo in the physical world just because it is what we have always done. But we also don’t need to do things that are way out there just because we can, either. Somewhere there is probably a happy medium of humanizing these technologies enough for us to connect with them (without trying to trick people into thinking they are humans) while still not replicating everything we already know just because that is what we know. I know some Social Presence Theory people would balk at the idea of those ideas being applied to technology, but I am thinking more of how we can use those concepts to inform our designs – just in a more meta fashion. Something to mull over for now.

Modernizing Websites: html5, Indieweb, and More?

On and off for the past few weeks I have been looking into modernizing some of my websites with things like html5 and indieweb. The main goal of this experimentation was to improve the LINK Research Lab web presence by getting some WebMention magic going on our website. The bonus is that I get to experiment with some of these on my own website before moving them onto a real website for the LINK Lab. I had to make sure they didn’t blow things up, after all.

However, the problem with that is my website was running on a cobbled-together WordPress theme that was barely holding together and looking dated. I was looking for a nice theme to switch over to quickly, but not having much success. Then I remembered that Alan Levine had a sweet looking html5 theme (wp-dimension). One weekend I gave it a whirl, and I think we have a winner.

The great thing about Cog Dog’s theme is that it has a simple initial interface for those that want to take a quick look at my work, but also has the ability to allow people to dig deeper into any topic they want to. I had to download and delete all of the blog posts that were already on my website, as the theme turns blog posts into the quick look posts on the front page. Those old posts were just feedwordpress copies of every post I wrote on this blog – so no need to worry about that. Overall, it is a great, easy-to-use theme that I highly recommend for anyone wanting to create a professional website fast.

Much of my current desire to update websites came from reading Stephen Downes’ post on OwnYourGram – a service that lets you export your Instagram files to your own website. To be honest, the IndieWeb part on the OwnYourGram website was just not working for me, until I found the actual IndieWeb plugins for WordPress. When in doubt, look for the plugins that already exist. I added those, and it all finally worked great. I found that the posts it imported didn’t work that well with many WordPress themes (Instagram posts don’t have titles, but many WordPress themes ignore posts without titles – or render them strangely on the front page). So I still need to tinker with that.

The part I became most interested in was how IndieWeb features like WebMentions can help you connect with conversations about your content on other websites (and also social media). That will probably be the most interesting feature that I want to start using on this website and the LINK Lab website as well. So now that I have it figured out, time to get it set up before it all changes :) I’m just digging into this after being a fan from afar for a while, so let’s see what else is out there.

Are MOOCs Fatally Flawed Concepts That Need Saving by Bots?

MOOCs are a problematic concept, as are many things in education. Using bots to replicate various functions in MOOCs is also a problematic concept. Both MOOCs and bots seem to go in the opposite direction of what we know works in education (smaller class sizes and more interaction with humans). So, teaching with either or both concepts will open the doors for many different sets of problems.

However… there are also even bigger problems that our society is imposing on education (at least in some parts of the world): defunding of schools, lack of resources, and eroding public trust being just a few. I don’t like any of those, and I will continue to speak out against them. But I also can’t change them overnight.

So what do we do with the problems of fewer resources, fewer teachers, more students, and more information to teach as the world gets more complex? Some people like to just focus on fixing the systemic issues causing these problems. And we need those people. But even once they do start making headway… it will still be years before education improves from where it is. And how long until we even start making headway?

The current state of research into MOOCs and/or bots is really about dealing with the reality of where education is right now. Despite there being some larger, well-funded research projects into both, the reality is that most research consists of very low-budget (or no-budget) attempts to learn something about how to create some “thing” that can help a shrinking pool of teachers educate a growing mass of students. Imperfect solutions for an imperfect society. I don’t fully like it, but I can’t ignore it.

Unfortunately, many people are causing an unnecessary either/or conflict between “dealing with scale as it is now” and “fixing the system that caused the scale in the first place.” We can work at both – help education scale now, while pushing for policy and culture change to better support and fund education as a whole.

On top of all of that, MOOCs tend to be “massively” misunderstood (sorry, couldn’t resist that one). Despite what the hype claims, they weren’t created as a means to scale or democratize education. The first MOOCs were really about connectivism, autonomy, learner choices, and self-directed learning. The fact that they had thousands of learners in them was just a thing that happened due to the openness, not an intended feature.

Then the “second wave” of MOOCs came along, and that all changed. A lot of this was due to some unfortunate hype around MOOCs published in national publications that proclaimed some kind of “educational utopia” of the future, where MOOCs would “democratize” education and bring quality online learning to all people.

Most MOOC researchers just scoffed at that idea – and they still do. However, they also couldn’t ignore the fact that MOOCs do bring about scaled education in various ways, even if that was not the intention. So that is where we are at now: if you are going to research MOOCs, you have to realize that the context of that research will be about scale and autonomy in some way.

But it seems that the misunderstandings of MOOCs are hard-coded into the discourse now. Take the recent article “Not Even Teacher-Bots Will Save Massive Open Online Courses” by Derek Newton. Of course, open education and large courses existed long before they were coined “MOOCs”… so it is unclear what needs “saving” here, or what it needs to be saved from. But the article is a critique of a study out of the University of Edinburgh (I believe this is the study, even though Newton never links to it for you to read it for yourself) that sought “to increase engagement” by designing and deploying “a teacher-bot (botteacher) in at least one MOOC.” Newton then turns around and says “the idea that a pre-programmed bot could take over some teaching duties is troubling in Blade Runner kind of way.” Right there you have your first problematic switch-a-roo. “Increasing engagement” is not the same as “taking over some teaching duties.” That is like saying that lane departure warning lights on cars are the same as taking over some driving duties. You can’t conflate something that assists with something that takes over. Your car will crash if you think “lane departure warnings” are “self-driving cars.”

But the crux of Newton’s article is that because the “bot-assisted platform pulls in just 423 of 10,145, it’s fair to say there may be an engagement problem…. Botty probably deserves some credit for teaching us, once again, that MOOCs are fatally flawed and that questions about them are no longer serious or open.”  Of course, there are fatal flaws in all of our current systems – political, religious, educational, etc. – yet questions about all of those can still be serious or open. So you kind of have to toss out that last part as opinion and not logic.

The bigger issue is that calling 423 people an “engagement problem” is an unfortunate way to look at education. That is still a lot of people, considering most courses at any level can’t engage 30 students. But this misunderstanding comes from the fact that many people still misunderstand what MOOC enrollment means. 10,000 people signing up for a MOOC is not the same as 10,000 people signing up for a typical college course. Colleges advertise to millions of prospective students, who then have to go through a huge process of applications and trials to even get to register for a course. ALL of that is bypassed for a MOOC. You see a course and click to register. Done. If colleges did the same, they would also get 10,000+ signing up for a course. But they would probably only get 50-100 showing up for the first class – a lot less than any first week in most MOOCs.

Make no mistake: college courses would have just as bad of engagement rates if they removed the filters of application and enrollment to who could sign up. Additionally, the requirement of “physical re-location” for most would make those engagement rates even worse than MOOCs if the entire process were considered.

Look at it this way: 30 years ago, if someone said “I want to learn History beyond what a book at the library can tell me,” they would have to go through a long and expensive process of applying to various universities, finally (possibly) getting accepted at one, and then moving to where that University was physically located. Then, they would have to pay hundreds or thousands of dollars for that first course. How many tens of thousands of possible students get filtered out of the process because of all of that? With MOOCs, all of that is bypassed. Find a course on History, click to enroll, and you are done.

When we talk about “engagement” in courses, it is typically situated in a traditional context that filters out tens of thousands of people before the course even starts. To then transfer the same terminology to MOOCs is to utilize an inaccurate critique based on concepts rooted in a completely different filtering mechanism.

Unfortunately, these fundamentally flawed misunderstandings of MOOC research are not just one-off rarities. The same author also took a problematic look at a study I worked on with Aras Bozkurt and Whitney Kilgore. Just look at the title of Newton’s previous article: Are Teachers About To Be Replaced By Bots? Yeah, we didn’t even go that far, and intentionally made sure to stay as far away from saying that as possible.

Some of the critique of our work by Newton is just very weird, like where he says: “Setting aside existential questions such as whether lines of code can search, find, utter, reply or engage discussions.” Well, yes – they can do that. It’s not really an existential question at all. It’s a question of “come sit at a computer with me and I will show you that a bot is doing all of that.” Google has had bots doing this for a long, long time. We have pretty much proven that Russian bots are doing this all over the world.

Then Newton gets into pull quotes, where I think he misunderstands what we meant by the word “fulfill.” For example, it seems Newton misunderstood this quote from our article: “it appears that Botty mainly fulfils the facilitating discourse category of teaching presence.” If you read our quote in context, it is part of the Findings and Discussion section, where we are discussing what the bot actually did. But it is clear from the discussion that we don’t mean that Botty “fully fills” the discourse category, but that what it does “fully qualifies” as being in that category. Our point was in light of “self-directed and self-regulated learners in connectivist learning environments” – a context where learners probably would not engage with the instructor in the first place. In this context, yes it did seem that Botty was filling in for an important instructor role in a way that satisfies the parameters of that category. Not perfectly, and not in a way that replaces the teacher. It was in a context where the teacher wasn’t able to be present due to the realities of where education is currently in society – scaled and less supported.

Newton goes on to say: “What that really means is that these researchers believe that a bot can replace at least one of the three essential functions of teaching in a way that’s better than having a human teacher.”

Sorry, we didn’t say “replace” in an overall context, only “replace” in a specific context that is outside of the teacher’s reach. We also never said “better than having a human teacher.” That part is just a shameful attempt at putting words in our mouths that we never said. In fact, you can search the entire article and find we never said the word “better” about anything.

Then Newton goes on to misuse another quote of ours (“new technological advances would not replace teachers just because teachers are problematic or lacking in ability, but would be used to augment and assist teachers”). His response to this is to say that we think “new technology would not replace teachers just because they are bad but, presumably, for other reasons entirely.”

Sorry, Newton, but did you not read the sentence directly after the one you quoted? We said “The ultimate goal would not be to replace teachers with technology, but to create ways for non-human teachers to work in conjunction with human teachers in ways that remove all ontological hierarchies.”  Not replacing teachers…. working in conjunction. Huge difference.

Newton continues by injecting inaccurate ideas into the discussion, such as “Bots are also almost certain to be less expensive than actual teachers too.” Well, actually, they currently aren’t always less expensive in the long run. Then he tries to connect another quote from us about how lines between bots and teachers might get blurred as proof that we… think they will cost less? That part just doesn’t make sense.

Newton also did not take time to understand what we meant by “post-humanist,” as evidenced by this statement of his: “the analysis of Botty was done, by design, through a “post-humanist” lens through which human and computer are viewed as equal, simply an engagement from one thing to another without value assessment.” Contrast his statement with our actual statement on post-humanism: “Bayne posits that educators can essentially explore how to retain the value of teacher presence in ways that are not in opposition to some forms of automation.” Right there we clearly state that humans still maintain value in our study context.

Then Newton pulls his most shameful bait and switch of the whole article at the end: pulling one of our “problematic questions” (where we intentionally highlighted problematic questions for the sake of critique) and attributing it as our conclusion: “the role of the human becomes more and more diminished.” Newton then goes on to state: “By human, they mean teacher. And by diminished, they mean irrelevant.”

Sorry Newton, that is simply not true. Look at our question following soon after that one, where we start the question with “or” to negate what our list of problematic questions ask: “Or should AI developers maintain a strong post-humanist angle and create bot-teachers that enhance education while not becoming indistinguishable from humans?” Then, maybe read our conclusion after all of that and the points it makes, like “bot-teachers can possibly be viewed as a learning assistant on the side.”

The whole point of our article was to say: “Don’t replace human teachers with bot teachers. Research how people mistake bots for real people and fix that problem with the bots. Use bots to help in places where teachers can’t reach. But above all, keep the humans at the center of education.”

Anyways, after a long side-tangent about our article, back to the point of misunderstanding MOOCs, and how researchers of MOOCs view MOOCs. You can’t evaluate research about a topic – whether MOOCs or bots or post-humanism or any topic – through a lens that fundamentally misunderstands what the researchers were examining in the first place. All of these topics have flaws and concerns, and we need to critically think about them. But we have to do so through the correct lens and contextual understanding, or else we will cause more problems than we solve in the long run.

Can You Automate OER Evaluation With The RISE Framework?

The RISE Framework is a learning analytics methodology for identifying OER resources in a course that may need improvement. On one level, this is an interesting development, since so few learning analytics projects actually get into how to improve the education of learners. But on the other hand, I am not sure if this framework has a detailed enough understanding of instructional design, either. A few key points seem to be missing. It’s still early, so we will see.

The basic idea of the RISE Framework is that analytics will create a graph that plots page clicks in OER resources on the x-axis, and grades on assessments on the y-axis. This creates a grid that shows where there were higher than average grades with higher than average clicks, higher than average grades with lower than average clicks, lower than average grades with higher than average clicks, and lower than average grades with lower than average clicks. This is meant to identify the resources that teachers should consider examining for improvement (especially focusing on the ones that got a high number of clicks but lower grade scores). Note that this is not meant to definitively say “this is where there is a problem, so fix it now” but more “there may or may not be a problem here, so check it out.” Keep that in mind while I explore some of my doubts here, because I would be a lot harsher on this if it were presented as a tool to definitively point out exact problems rather than what it is: a way to start the search for problems.
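Mechanically, the quadrant sort itself is simple. Here is a rough sketch of the idea (my own, not the framework authors’ code), using invented per-resource numbers:

```python
# RISE-style quadrant sort: each resource gets an average use metric (clicks,
# pageviews, etc.) and an average grade on its aligned assessments, then is
# placed relative to the overall means. All numbers here are invented.
resources = {
    "chapter_1": {"use": 310, "grade": 88},
    "chapter_2": {"use": 95,  "grade": 91},
    "chapter_3": {"use": 420, "grade": 64},  # high use, low grades
    "chapter_4": {"use": 120, "grade": 62},
}

mean_use = sum(r["use"] for r in resources.values()) / len(resources)
mean_grade = sum(r["grade"] for r in resources.values()) / len(resources)

def quadrant(resource):
    use = "high use" if resource["use"] >= mean_use else "low use"
    grade = "high grades" if resource["grade"] >= mean_grade else "low grades"
    return f"{use}, {grade}"

for name, resource in resources.items():
    print(name, "->", quadrant(resource))
# chapter_3 lands in "high use, low grades": the quadrant the framework suggests
# examining first, which may or may not mean the content itself is the problem.
```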

Of course, any system of comparing grades with clicks itself is problematic on many fronts, and the creators of the RISE Framework do take this into consideration when spelling out what each of the four quadrants could mean. For example, in the quadrant that specifies high grades with low content usage, they not only identify “high content quality” as the cause of this, but also “high prior knowledge,” “poorly written assessment,” and so on. So this is good – many factors outside of grades and usage are taken into account. This is because, on the grade front, we know that scores are a reflection of a massive number of factors – the quality of the content being only one of those (and not always the biggest one). As noted, prior knowledge can affect grades (sometimes negatively – not always positively like the RISE framework appears to assume). Exhaustion or boredom or anxiety can impact grades. Again, I am glad that these are in the framework, but the effect these have on grades is assumed in one direction – rather than the complex directions they take in real life. For example, students that game the test or rubric can inflate scores without using the content much – even on well-designed assessments (I did that all of the time in college).

However, the bigger concern with the way grades are addressed in the RISE framework is that they are plotting assessment scores instead of individual item scores. Anyone that has analyzed assessment data can tell you that the final score on a test is actually an aggregate of many smaller items (test questions). That aggregate grade can mask many deficiencies at the micro level. That is why instructors prefer to analyze individual test questions or rubric lines rather than the aggregate scores of the entire test. Assessments could cover, say, 45 questions of content that was well covered in the resources, and then 5 questions on content that was poorly covered. But the high scores on the 45 questions, combined with the fact that many will get some questions right by random guessing on the other 5, could result in test scores that mask a massive problem with those 5 questions. But teachers can most likely figure that out quickly without the RISE framework, and I will get to that later.
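Here is a quick, invented illustration of that masking effect: the aggregate average looks healthy, while a per-item breakdown flags the weak questions immediately.

```python
# Invented item-level results: proportion of the class answering each question
# correctly. The aggregate looks fine even though five items are clearly weak.
item_scores = {f"q{i}": 0.9 for i in range(1, 46)}         # 45 well-covered items
item_scores.update({f"q{i}": 0.3 for i in range(46, 51)})  # 5 poorly covered items

aggregate = sum(item_scores.values()) / len(item_scores)
print(f"aggregate average: {aggregate:.0%}")    # 84%, which looks acceptable

flagged = [q for q, p in item_scores.items() if p < 0.5]
print("items worth a closer look:", flagged)    # ['q46', 'q47', 'q48', 'q49', 'q50']
```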

The other concern is with clicks on the OER. Well, they say that you can measure “pageviews, time spent, or content page ratings”… but those first two are still clicks, and the last one is a bit too dependent on the happiness of the raters (students) at any given moment to really be that quantitative. I wouldn’t outright discount it as a factor, but I will state that you are always going to find a close alignment with the test scores on that one for many reasons. In other words, it is a pre-biased factor – students that get a high score will probably rate the content as effective even if it wasn’t, and students that get a low score will probably blame the content quality whether it was really a factor or not.

Also, now that students know their clicks are being recorded, they are more and more often clicking around to make sure they get good numbers on those data points. I even do that when taking MOOCs, just in case: click through the content at a realistic pace even if I am really doing something else other than reading. People have learned to skim resources while checking their phone, clicking through at a pace that makes it seem like they are reading closely. Most researchers are very wary of using click data like pageviews or time spent to tell anything other than where students clicked, how long between clicks, and what was clicked on. Guessing what those mean beyond that? More and more, that is being discouraged in research (and for good reason).

Of course, I don’t have time to go into how relying on only content and assessment is a poor way to teach a course, but I think we all know that. A robust and helpful learning community in a class can answer learning questions and help learners overcome bad resources to get good grades. And I am not referring to cheating here – Q&A forums in courses can often really help some learners understand bad readings – while also possibly making them feel like they are the problem, not the content.

Still, all of that is somewhat or directly addressed in the framework, and because it is a guide rather than definitive answer, variations like those discussed above are to be expected. I covered them just to make sure I was covering all critical bases.

The biggest concern I have with the RISE framework really comes here: “The framework assumes that both OER content and assessment items have been explicitly aligned with learning outcomes, allowing designers or evaluators to connect OER to the specific assessments whose success they are designed to facilitate.”

Well, since that doesn’t happen in many courses due to time constraints, that eliminates large chunks of courses. I can also tell you as an instructional designer, many people think they have well-aligned outcomes…. but don’t.

But, let’s assume that you do have a course where “content and assessment items have been explicitly aligned with learning outcomes.” If you have explicitly aligned assessments, you don’t need the RISE framework. To explicitly align an assessment with content is not just a matter of making sure each question tests exactly what is in the content, but also of pointing to exactly where the aligned content is for each question. Not just the OER itself, but the chapter and page number. Most testing systems today will give you an item-by-item breakdown of each assessment (because teachers have been asking for it). Any low course score on any specific question indicates some problem. At that point, it is best (and quickest) to just ask your learners:

  1. Did the question make sense? Was it well written?
  2. Did it connect to the content?
  3. Did the content itself make sense?

Plus, most content hosting systems have ways to track page clicks, so you can easily make your own matrix using clicks if you need to. The matrix in the framework might give you a good way to organize the data to see where your problem lies… but to be honest, I think it would be quicker and more accurate to focus on individual assessment questions instead of the whole test, and ask the learners about those specific questions.
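
If you did want to build that kind of matrix yourself, here is a rough, hypothetical sketch of what it could look like at the question level, assuming you can export class-wide per-item scores and pageview counts for each question’s aligned page (all field names, cutoffs, and numbers are invented):

```python
# DIY item-level matrix sketch: join an item-by-item gradebook export with
# average pageview counts for each question's aligned source page.

item_scores = {            # question -> class-wide proportion correct (invented)
    "Q1": 0.91, "Q2": 0.88, "Q3": 0.42, "Q4": 0.39, "Q5": 0.86,
}
page_views = {             # question -> average views of its aligned page (invented)
    "Q1": 3.1, "Q2": 2.8, "Q3": 0.4, "Q4": 2.9, "Q5": 2.5,
}

SCORE_CUTOFF, VIEW_CUTOFF = 0.60, 1.0  # arbitrary flag lines

for q in item_scores:
    score, views = item_scores[q], page_views[q]
    if score >= SCORE_CUTOFF:
        verdict = "probably fine"
    elif views < VIEW_CUTOFF:
        verdict = "low score, low use: learners may not have read the page"
    else:
        verdict = "low score, high use: check the question/content alignment"
    print(f"{q}: {score:.0%} correct, {views:.1f} views -> {verdict}")
```

This is roughly the same quadrant logic the framework describes, just applied per question rather than to the whole test.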

Also, explicit alignment can itself hide problems with the content. Explicit alignment requires that you test what is in the content, even if the content is bad. This is one of the many things you learn as an ID: don’t test what students were never taught; write your test questions to match the content no matter what. A decently aligned assessment can still produce decent grades from a very bad content source. One of my ID professors once told me something along the lines of “a good instructional designer can help students pass even with bad textbooks; a bad instructional designer can help them fail with the best textbook.”

Look – instructional designers have been dealing with good and bad textbooks for decades now. The same goes for instructors who serve as their own IDs. We have many ways to work around the bad ones.

I may be getting the RISE framework wrong, but comparing overall scores on assessments to click-stream activity across an OER (sometimes an entire book) comes across as a shotgun approach: broad and imprecise. Especially when well-aligned test questions can pinpoint specific sources of problems at a fairly fine-grained level.

Now then, if you could actually compare the grades on individual assessment items with the amount of time spent on the page or area that each specific item came from, you might be on to something. Then, if you could group students into the four quadrants on each item, and compare the quadrant results for all items on the same assessment together, you could probably identify the questions that are most likely to have some kind of issue. Then, have the system send out a questionnaire about the test to each student – but have the questionnaire be custom-built depending on which quadrant the student was placed in. In other words, each learner gets questions about the same, say, 5 test questions that were identified as problematic, but the specific follow-up they get about each question is changed to match which quadrant they were placed in for that question (a rough sketch of this grouping follows further down). For example:

We see that you missed Question 4, but you did spend a good amount of time on page 25 of the book, where this question was taken from. Would you say that:

  • The text on page 25 was not well-written
  • Question 4 was not well-written
  • The text on page 25 doesn’t really match Question 4
  • I visited page 25, but did not spend the full time there reading the text

Of course, writing it out this way sounds creepy. You would have to make sure that learners opt in to this after fully understanding that this is what would happen, and then you would probably need to make sure that the responses go to someone who is not directly responsible for their grade, to be analyzed anonymously. Then report those results in a generic way: “survey results identified that there is probably not a good alignment between page 25 and question 4, so please review both to see if that is the case.”
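
Setting the privacy issues aside for a moment, the quadrant-and-prompt mechanics themselves are straightforward to sketch. A minimal, hypothetical example, assuming you have each learner’s correctness on a flagged item and their time on that item’s aligned page (the threshold, field names, and prompt wording are all invented):

```python
# Per-learner quadrant sketch: place a learner in one of four quadrants for a
# flagged question based on whether they answered correctly and whether they
# spent "enough" time on the aligned page, then pick a matching survey prompt.

TIME_CUTOFF_SECONDS = 120  # arbitrary "spent real time on the page" threshold

PROMPTS = {
    ("wrong", "high_time"): "You spent time on the aligned page but missed this "
                            "question. Was the text, the question, or the match "
                            "between them the problem?",
    ("wrong", "low_time"):  "You missed this question and barely visited the "
                            "aligned page. Did you study this topic elsewhere?",
    ("right", "low_time"):  "You got this right without much time on the aligned "
                            "page. Did you already know this material?",
    ("right", "high_time"): "You got this right after reading the aligned page. "
                            "Was the page clear and well matched to the question?",
}

def quadrant(correct: bool, seconds_on_page: float) -> tuple[str, str]:
    """Return the (correctness, time) quadrant for one learner on one item."""
    return ("right" if correct else "wrong",
            "high_time" if seconds_on_page >= TIME_CUTOFF_SECONDS else "low_time")

# Example: the learner above who missed Question 4 but did read page 25.
q = quadrant(correct=False, seconds_on_page=300)
print(q)           # ('wrong', 'high_time')
print(PROMPTS[q])  # the survey prompt tailored to that quadrant
```

The real work, as noted above, would be in the consent and anonymization around this, not in the grouping itself.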

In the end, though, I am not sure if you can get detailed enough to make this framework effective without diving deep into surveillance monitoring. Maybe put the learner in control of these tools and give them the option of sharing the results with their instructor if they feel comfortable?

But, to be honest, I am probably not in the target audience for this tool. My idea of a well-designed course involves self-determined learning, learner autonomy, and space for social interaction (for those who choose it). I would focus on competencies rather than outcomes, with learners able to tailor the competencies to their own needs. All of that makes assessment alignment very difficult.

“Creating Online Learning Experiences” Book is Now Available as an OER

Well, big news in the EduGeek Journal world. I have been heading up a team of people working on a new book that was released as an OER through PressBooks today:

Creating Online Learning Experiences: A Brief Guide to Online Courses, from Small and Private to Massive and Open

Book Description: The goal of this book is to provide an updated look at many of the issues that comprise the online learning experience creation process. As online learning evolves, the lines and distinctions between the various classifications of courses have blurred and often vanished. Classic elements of instructional design remain relevant at the same time that newer concepts of learning experience are growing in importance. However, problematic issues new and old still have to be addressed. This book aims to be a handbook that explores as many of these issues and concepts as possible for new and experienced designers alike, whether creating traditional online courses, open learning experiences, or anything in between.

We have been working on this book on and off for three or more years now, so I am glad to finally get it out to the world. In addition to me, there were several great contributing writers: Brett Benham, Justin Dellinger, Amber Patterson, Peggy Semingson, Catherine Spann, Brittany Usman, and Harriet Watkins.

Also, on top of that, we recruited a great group of reviewers that dug through various parts and gave all kinds of helpful suggestions and edits: Maha Al-Freih, Maha Bali, Autumm Caines, Justin Dellinger, Chris Gilliard, Rebecca Heiser, Rebecca Hogue, Whitney Kilgore, Michelle Reed, Katerina Riviou, Sarah Saraj, George Siemens, Brittany Usman, and Harriet Watkins.

Still skeptical? How about an outline of topics, most of which we did try to filter through a critical lens to some degree:

  1. Overview of Online Courses
  2. Basic Philosophies
  3. Institutional Courses
  4. Production Timelines and Processes
  5. Effective Practices
  6. Creating Effective Course Activities
  7. Creating Effective Course Content
  8. Open Educational Resources
  9. Assessment and Grading Issues
  10. Creating Quality Videos
  11. Utilizing Social Learning in Online Courses
  12. Mindfulness in Online Courses
  13. Advanced Course Design
  14. Marketing of an Online Course

So, please download and read the book here if you like: Creating Online Learning Experiences

There is also a blog post from UTA libraries about the release: Libraries Launch Authoring Platform, Publish First OER

And if you don’t like something you read, or find something that is wrong, or think of something that should have been added – let me know! I would love to see an expanded second edition with more reviewers and contributing authors. There were so many more people I wanted to ask to contribute, but I just ran out of time. I intentionally avoided the “one author/one chapter” structure so that you can add as much or as little as you like.