One of the side effects – good or bad – of our increasing utilization of Artificial Intelligence in education is that it brings to light all of the problems we have with knowing how a learner has “learned” something. This specific problem has been discussed and debated in Instructional Design courses for decades – some of my favorite class meetings in grad school revolved around digging into these problems. So it is good to see these issues being brought to a larger conversation about education, even if it is in the context of our inevitable extinction at the hands of our future robot overlords.

Dave Cormier wrote a very good post about the questions to ask about AI in learning. I will use that post to direct some responses, mostly back to the AI community as well as to those utilizing AI in education. Dave ends up questioning a scenario that is basically the popular “Netflix for Education” approach to educational AI: the AI perceives which learning resources learners favor through likes, view counts, etc., and then proposes new resources to specific learners to help them learn more, in the way Netflix recommends new shows to watch based on the popularity of other shows (which were themselves connected to each other by popularity metrics).
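To make that scenario concrete, here is a minimal sketch of what a popularity-driven recommender of that kind boils down to. The learner names, resource IDs, and interaction log are all hypothetical, purely for illustration, not any vendor’s actual system:

```python
from collections import Counter

# Hypothetical interaction log: one (learner, resource) pair per like or view.
interactions = [
    ("amy", "video-1"), ("amy", "article-3"),
    ("ben", "video-1"), ("ben", "quiz-2"),
    ("cho", "video-1"), ("cho", "article-3"),
]

def recommend_by_popularity(already_seen, interactions, k=3):
    """Suggest the k most-viewed resources the learner has not used yet."""
    counts = Counter(resource for _, resource in interactions)
    return [r for r, _ in counts.most_common() if r not in already_seen][:k]

print(recommend_by_popularity({"quiz-2"}, interactions))
# -> ['video-1', 'article-3'] : whatever is most popular, regardless of fit
```

Notice that nothing in this sketch knows anything about the learner except what everyone else clicked, which is exactly where the drift toward popularity comes from.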

This, of course, leads to the problem that Dave points out: “If they value knowledge that is popular, then knowledge slowly drifts towards knowledge that is popular.” Popular, as we all learn at some point, does not always equal good, helpful, or correct. However, people in the AI field will point out that they can build a system that relies on the judgment of experts and teachers in the field rather than likes, and I get that. Some have done that. But there is a bigger problem here.

Let’s back up to the part of Dave’s post about how AI accomplishes recommendations by simplifying learners down to a few choices, in much the same way Netflix simplifies viewing choices down to a small list of genres. This is often true. However, it is true not because programmers wanted it that way – this is the model they inherited from education itself. Sure, in an ideal learning environment the teacher talks with every learner and can make personal teaching choices for each one. But in reality, most classes design one pathway for all learners to take: read this book, listen to these lectures, take this test, answer this discussion question while responding to two peers, wash, rinse, repeat.

AI developers know this, and to their credit, they are offering personalized learning solutions that at least expand on this. Many examinations of the problems with AI skip over this part and just look at ideal classrooms where learners and instructors have time to dig into individual learner complexities. But in the real world? Everyone follows the one path. So adding 7 or 10 or more options to the one that now exists (for most)? It’s at least a step in the right direction, right?

Depends on who you ask. But that is another topic for another day.

This is kind of where a lot of what is now called “personalized education” is at. I compare this state to all of those personalized gift websites, where you can go buy a gift like a mouse pad and get a custom message or name printed on it. Sure, the mouse pad is “personalized” with my name… but what if I didn’t need a mouse pad in the first place? You might say “well, there were only a certain set of gifts available and that was the best one out of the choices that were there.”

Sure, it might be a better gift than some plain mouse pad from Walmart to the person that needed a mouse pad. But for everyone else – not so much.

As Dave and many others have pointed out – someone is choosing those options and limiting the number of them. But to someone going from the linear choice of local TV stations to Netflix, at first that choice seems awesome. However, you soon start noticing the limitations of only watching what is on Netflix. Then it starts getting weird. If I liked Stranger Things, I would probably like Tidying Up with Marie Kondo? Really?

The reality is, while people in the AI field will tell you that AI “perceives the learner and knowledge in a field,” it is more accurate to say that the AI “records choices that the learner makes about knowledge objects and then analyzes those choices to find patterns between learners and knowledge-object choices that are designed to be predictive for future learners.” If you just treat all of that as “perceiving,” then you will probably end up with the Netflix model and all the problems that brings. But if you take a more nuanced look at what happens (it’s not “perceiving” so much as “recording choices,” for example), and connect it with a better way of looking at the learning process, you will end up with better models and ideas.
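As a rough sketch of what that more modest description means in practice (hypothetical field names, not any real product’s schema), the record such a system keeps is a log of observable choices, not a window into the learner’s understanding:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ChoiceEvent:
    """One observable choice a learner made about a knowledge object.

    Nothing in this record captures what was understood; it only notes
    that a choice happened, when, and what the learner did with the object.
    """
    learner_id: str
    resource_id: str
    action: str          # e.g. "opened", "liked", "completed", "abandoned"
    timestamp: datetime

# What gets called "perceiving the learner" is pattern-finding over a pile
# of events like this one, projected onto future learners.
event = ChoiceEvent("amy", "article-3", "completed", datetime.now())
```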

So back to how we really don’t have that great of an idea of how learning actually happens in the brain. There are many good theories, and Stephen Downes regularly highlights the best emerging research on how the actual process of learning in the brain works. But since there is still so much we either a) don’t know, or b) don’t know how to quantify and measure externally from the brain – we can’t actually measure “learning” itself.

As a side note: this is, quite frankly, where most of the conversation on grading goes wrong. Grades are not a way to measure learning. We can’t stick a probe on people’s heads and measure a “learning” level in human brains. So we have to have some kind of external way to figure out if learning happened. As Dr. Scott Warren puts it: it’s like we are looking at a brick wall with a few random windows that really aren’t in the right spots, trying to figure out what is happening on the other side of the wall.

Some people are clinging to the outmoded idea that brains are like computers: input knowledge/skills, output learning. Our brains don’t work like that. But unfortunately, that is often the way many look at the educational process. Instructors design some type of input – lectures, books, training, videos, etc. – and then we measure the output with grades as a way to say whether “learning happened” or not.

The reality is, we technically just point learners towards something that they can use in their learning process (lectures, books, videos, games, discussions, etc), they “do” the learning, and then we have to figure out what they learned. Grades are a way to see how learners can apply what they learned to a novel artifact – a test, a paper, a project, a skill demonstration, etc. Grades in no way measure what students have learned, but rather how students can apply what they learned to some situation or context determined by someone else. That way – if they apply it incorrectly by, say, getting the question wrong – we assume they haven’t learned it well enough. Of course, an “F” on a test could mean the test was a poor way to apply the knowledge as much as it could say that the learner didn’t learn. Or that the learner got sidetracked while taking the test. Or, so on….

The learning that happens in between the choosing of the content/context/etc and the application of the knowledge gained on a test or paper or other external measurement is totally up to the learner.

So that is what AI is really analyzing in many designs – it is looking at what choices were made before the learning, and at what the learner was able to do afterwards based on some external application of knowledge/skills/etc. We have to look at AI as something that affects and/or measures the bookends of the actual learning.

Rather than the Netflix approach to recommendations, I would say a better model to look to is the Amazon model of “people also bought this.” Amazon looks at each thing it sells as an individual object that people will connect in various ways to other individual objects – some connections that make sense, others that don’t. Sometimes people look at one item and buy other similar items instead, sometimes people buy items that work together, and sometimes people “in the know” buy random things that seem disconnected to newbies. The Amazon system is not perfect, but it does allow for greater individuality in purchasing decisions, and doesn’t assume that “because you bought this phone, you might also want to buy this other phone as well, because it is a phone, too.”

In other words, the Amazon model can see the common connections as well as the uncommon connections (even across its predefined categories), and lets you, the consumer, decide which connections work for you. The Netflix model looks for the popular/common connections within its predefined categories.
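To sketch the difference (again with hypothetical data, not Amazon’s actual algorithm), an item-to-item view starts from co-occurrence between individual objects rather than from overall popularity, so rarer connections still surface instead of being averaged away:

```python
from collections import defaultdict

# Hypothetical "baskets": the set of resources each learner actually used.
baskets = [
    {"video-1", "article-3", "simulation-7"},
    {"video-1", "quiz-2"},
    {"article-3", "simulation-7", "podcast-5"},
]

def also_chosen(resource, baskets):
    """Item-to-item view: what else did people who used this resource use?"""
    co_counts = defaultdict(int)
    for basket in baskets:
        if resource in basket:
            for other in basket - {resource}:
                co_counts[other] += 1
    # Return every co-occurring item, common or rare, and leave it to the
    # learner (not a predefined category) to decide which links are useful.
    return sorted(co_counts.items(), key=lambda kv: -kv[1])

print(also_chosen("article-3", baskets))
# simulation-7 co-occurs twice, but the rarer connections still show up
```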

I would submit that learners need ways to learn that can look at common learning pathways as well as uncommon pathways – especially across any categories we would define for them.

Of course, Amazon can collect data in ways that would be illegal (for good reason) in education, and the fact that they have millions of transactions each day means that they get detailed data about even obscure products in ways that would be impossible at a smaller scale in education. In no way should this come across as me proposing something inappropriate like “Amazon for Education!” The point I am getting at here is that we need a better way to look at AI in education:

  • Individuals are complex, and all systems need to account for complexity instead of simplifying for the most popular groups based on analytics.
  • AI should not be seen as something that perceives the learner or their knowledge or learning, but as something that collects incomplete data on learners’ choices.
  • The goal of this collection should not just be to perceive learners and content, but to understand complex patterns made by complex people.
  • The categories and patterns selected by the creators of AI applications should not become limitations on the learners within that application.
  • While we have good models for how we learn, the actual act of “learning” should still be treated as a mysterious process (until that changes – if ever).
  • AI, like all education, does not measure learning, but rather how learning that occurred mysteriously in the learner was applied to an external context or artifact. This will be a flawed process, so the results of any AI application should be viewed in light of the bias and flaws created by that process.
  • The learner’s perception of what they learned and how well they were able to apply it to an external context/artifact is mostly ignored or discarded as irrelevant self-reported data, and that should stop.

2 thoughts on “Artificial Intelligence and Knowing What Learners Know Once They Have ‘Learned’”

  1. I like your idea of treating learning as a mystery. The best we can do is to seek evidence that learning has occurred. Competency models seem to be the best effort so far.

  2. I don’t really talk about cognition per se, but rather measuring, which is a basis for understanding the teacher–student interface, and this gets us a foundation for designing AI. I was taken by E. C. Lagemann’s statement (in her book An Elusive Science) that you can’t understand education today without realizing that Thorndike (Edward) won and Dewey (John) lost. I believe many of the difficulties we face (including AI) result from taking Thorndike well beyond what is reasonable. Furthermore, it is difficult to give Dewey a fair shake because you can’t understand Dewey’s approach to education while looking through the assumptions of Thorndike.
    I recently had a Medium conversation with David Ng which ended here: https://medium.com/@HowardJ_phd/first-yes-david-can-is-correct-thanks-for-the-correction-aa1acab7fd55
    And one of the best takes on assessment in this vein I found in the writing of Tom Sherrington here https://teacherhead.com/2017/07/16/towards-an-assessment-paradigm-shift/
    Tom doesn’t ground assessment in a theory of learning cognition so much as he grounds it in practice: understand cognition through the ongoing interaction between student and teacher. Or, said another way, don’t look for something hidden inside the brain, as it is questionable whether we can ever find it. Instead, look at what is right in front of us: teachers and students growing. The true value of AI will come when it can look at data from these interactions, not at some theorized construct.
