The Artificial Component of Artificial Intelligence and the C-3PO Rule

There have been many think pieces and Twitter threads examining how the “intelligence” component of “Artificial Intelligence” is not real intelligence, or at least not anything like human intelligence. I don’t really want to jump into the debate about what counts as real intelligence, but I think the point about AI not being like human intelligence should be obvious from the “artificial” component of the term. To most people, it probably is – when discussing the concept of AI in an overall sense, at least.

Nobody thinks you have to mow artificial grass. No one would take artificial sweetener and use it in all of the same cooking/baking applications that they would with sugar. By calling something “artificial,” we acknowledge that there are significant differences between the real thing and the artificial thing.

But like I said, most people would probably recognize that as true for AI. The problem usually comes when companies or researchers try to make it hard to tell whether their AI tool/process/etc. is human or artificial. Of course, some are researching whether people can tell the difference between a human and their specific AI application (one they created without any attempt to make it deceptively human), and that is a different process.

Which, of course, brings up some of the blurry lines in human/machine interface. Any time you have a process or tool or application that is designed for people to interface with, you want to make sure it is as user-friendly as possible. But where is the line between “user-friendly” and “tricking people into thinking they are working with a human”? Of course there is a facet of intent in that question, but beyond intent there are also unintended consequences of not thinking through these issues fully.

Take C-3PO from Star Wars. I am sure that the technology exists in the Star Wars universe to create robots that look like real humans (just look at Luke’s new hand in The Empire Strikes Back). But the makers of protocol droids like C-3PO still made them look like robots, even though they were protocol droids that needed to have near-perfect human traits for their interface. They made a choice to make their AI tool still look artificial. Yes, I know that ultimately these are movies, and the filmmakers made C-3PO look the way it did just because they thought it was cool and futuristic looking. But they also unintentionally created something I would call a “C-3PO Rule” that everyone working with AI should consider: make sure that your AI, no matter how smoothly it needs to interface with humans, has something about it that quickly and easily communicates to those that utilize it that it is artificial.

What Does It Take to Make an -agogy? Dronagogy, Botagogy, and Education in a Future Where Humans Are Not the Only Form of “Intelligence”

Several years ago I wrote a post that looked at every form of learning “-agogy” I could find. Every once in a while I think that I probably need to do a search to see if others have been added so I can do an updated post. I did find a new one today, but I will get to that in a second.

The basic concept of educational -agogy is that, because “agogy” means “lead” (often seen in the sense of education, but not always), you combine who is being led or the context for the leading with the suffix. Ped comes from the Greek word for “children,” andr from “men,” heut from “self,” and so on. It doesn’t always have to be Greek (peeragogy, for example) – but the focus is on who is being taught and not what topic or tool they are being taught.

I noticed a recent paper that looks to make dronagogy a term: A Framework of Drone-based Learning (Dronagogy) for Higher Education in the Fourth Industrial Revolution. The article most often mentions pedagogy as a component of dronagogy, so I am not completely sure of the structure they envision. But it does seem clear that drones are the topic and/or tool, and only in certain subjects. Therefore, dronology would have probably been a more appropriate term. They are essentially talking about the assembly and programming of drones, not teaching the actual drones.

But someday, something like dronagogy may actually be a thing (and “someday” as in pretty soon someday, not “a hundred years from now” someday). If someone hasn’t already, soon someone will argue that Artificial Intelligence has transcended “mere” programming and needs to be “led” or “taught” more than “programmed.” At what point will we see the rise of “botagogy” (you heard it here first!)? Or maybe “technitagogy” (from the Greek word for “artificial” – technitós)?

Currently, you only hear a few people like George Siemens talking about how humans are no longer the only form of “intelligence” on this planet. While there is some resistance to that idea (because AI is not as “intelligent” as many think it is), it probably won’t be much longer before there is wider acceptance that we actually are living in a future where humans are not the only form of “intelligence” around. Will we expand our view of leading/teaching to include forms of intelligence that may not be like humans… but that can learn in various ways?

Hard to say, but we will probably be finding out sooner than a lot of us think we will. So maybe I shouldn’t be so quick to question dronagogy? Will drone technology evolve into a form of intelligence someday? To be honest, that just sounds like a Black Mirror episode that we may not want to get into.


What if We Could Connect Interactive Content Like H5P to Artificial Intelligence?

You might have noticed some chatter recently about H5P, which can create interactive content (videos, questions, games, etc.) that works in a browser through HTML5. The concept seems fairly similar to the E-Learning Framework (ELF) from APUS and other projects started a few years ago based on HTML5 and/or jQuery – but those seem to have mostly disappeared or been kept secret. The fact that H5P is easily shareable and open is a good start.

Some of our newer work on self-mapped learning pathways is starting to focus on how to build tools that can help learners map their own learning pathway through multiple options. Something like H5P would be a great tool for that. I am hoping that the future of H5P will include ways to harness AI to mix and match content beyond what most groups currently do with HTML5.

To explain this, let me take a step back and look at where our current work with AI and chatbots sits, and point to where this could all go. Our goal right now is to build branching tree interfaces and AI-driven chatbots to help students get answers to FAQs about various courses. This is not incredibly ground-breaking at this point, but we hope to take it in some interesting directions.

So, the basic idea with our current chatbots is that you create answers first and then come up with a set of questions that serve as different ways to get to that answer. The AI uses Natural Language Processing and other algorithms to take what is entered into a chatbot interface and match the entered text with a set of questions:

Diagram 1: Basic AI structure of connecting various question options to one answer. I guess the resemblance to a snowflake shows I am starting to get into the Christmas spirit?

You put a lot of answers together into a chatbot, and the oversimplified way of explaining it is that the bot tries to match each question from the chatbot interface with the most likely answer/response:

Diagram 2: Basic chatbot structure of determining which question the text entered into the bot interface most closely matches, and then sending that response back to the interface.
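
To make that matching step a bit more concrete, here is a minimal sketch in Python of the kind of thing a bot like this does under the hood: each answer has several question variants, and whatever the user types gets matched to the closest variant. The FAQ entries, the similarity threshold, and the choice of TF-IDF with cosine similarity are all my own illustrative assumptions, not the actual engine behind our bots.

```python
# A minimal sketch of "match entered text to a known question" using
# TF-IDF + cosine similarity. The questions and answers are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each answer is written first, then several question variants point to it.
faq = {
    "The first live session starts January 15th.": [
        "When does the course start?",
        "What date do the live sessions begin?",
        "Is there a start date for the MOOC?",
    ],
    "Yes, all course materials are openly licensed.": [
        "Are the readings open access?",
        "Can I reuse the course materials?",
    ],
}

questions, answers = [], []
for answer, variants in faq.items():
    for question in variants:
        questions.append(question)
        answers.append(answer)

vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)

def respond(user_text: str) -> str:
    """Return the answer whose question variants best match the user's text."""
    user_vector = vectorizer.transform([user_text])
    scores = cosine_similarity(user_vector, question_vectors)[0]
    best = scores.argmax()
    # 0.2 is an arbitrary cutoff for "no good match" in this sketch.
    return answers[best] if scores[best] > 0.2 else "Sorry, I don't have an answer for that yet."

print(respond("what day does this course begin"))
```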

This is our current work – putting together a chatbot-fueled FAQ for the upcoming Learning Analytics MOOCs.

Now, we tend to think of these things in terms of “chatting” and/or answering questions, but what if we could turn that on its head? What if we started with some questions or activities, and the responses from those built content/activities/options in a more dynamic fashion using something like H5P or conversational user interfaces (except without the part that tries to fool you that a real person is chatting with you)? In other words, what if we replaced the answers with content and the questions with responses from learners in Diagram 1 above:

Diagram 3: Basic AI structure of connecting learners responses to content/activities/learning pathway options.

And then we replaced the chatbot with a more dynamic interactive interface in Diagram 2 above:

Diagram 4: Basic example of connecting learners with content/activity groupings based on learner responses to prompts embedded in content, activities, or videos.

The goal here would be to not just send back a response to a chat question, but to build content based on learner responses – using tools like H5P to make interactive videos, branching text options, etc. on the fly. Of course, most people see this and think of how it could be used to create different ways of looking at content in a specific topic. Creating greater diversity within a topic is a great place to start, but there could also be bigger ways of looking at this idea.
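
As a rough illustration of that flip, here is a sketch of matching a learner response to a content/activity pool and assembling a small branching step from it. The pool names, the keyword matching, and the output structure are all invented for the example – this is not the actual H5P content schema, just the general shape of what a generating tool could produce on the fly.

```python
# Learner responses get matched to content/activity pools instead of answers.
# In a real system the matching would use the same NLP shown earlier; here it
# is hard-coded to keep the sketch readable.
content_pools = {
    "primary-sources": ["Digitized letters archive", "Transcription activity"],
    "data-visualization": ["Census data explorer", "Build-a-timeline activity"],
    "discussion": ["Small-group debate prompt", "Annotated reading circle"],
}

def match_pool(learner_response: str) -> str:
    response = learner_response.lower()
    if "chart" in response or "numbers" in response:
        return "data-visualization"
    if "talk" in response or "discuss" in response:
        return "discussion"
    return "primary-sources"

def build_branch(prompt: str, learner_response: str) -> dict:
    """Assemble a branching-scenario-like step from a learner's response."""
    pool = match_pool(learner_response)
    return {
        "prompt": prompt,
        "matched_pool": pool,
        "activities": content_pools[pool],
        "next_prompt": "Which of these would you like to try first?",
    }

print(build_branch("How would you like to explore the 1920s?",
                   "I'd rather dig into the numbers behind it"))
```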

For example, you could take a cross-disciplinary approach to a course and use a system like this to come up with ways to bring in different areas of study. Instead of using the example in Diagram 4 above to add greater depth to a History course, what if it could be used to tap into a specific learner’s curiosities and, say, bring in some other related cross-disciplinary topics:

Diagram 5: Content/activity groupings based on matching learner responses with content and activities that connect with cross-disciplinary resources.

Of course, there could be many different ways to approach this. What if you could also utilize a sociocultural lens with this concept? What if you have learners from several different countries in a course and want to bring in content from their contexts? Or what if you teach in a field that is very U.S.-centric and needs a more global perspective?

Diagram 6: Content/activity groupings based on matching learner responses with content and activities that focus on different countries.

Or you could also look at dynamic content creation from an epistemological angle. What if you had a way to rate how instructivist or connectivist a learner is (something I started working on a bit in my dissertation work)? Or maybe even use something like a Self-Regulated Learning Index? What if you could connect learners with lessons and activities closer to what they prefer or need based on how self-regulated, connectivist, experienced, etc they are?

Diagram 7: The content/activity groupings above are based on a scale I created in my dissertation that puts “mostly instructivist” at 1.0 and “mostly connectivist” at 2.0.

You could even look at connecting an assignment bank to something like this to help learners get out-of-the-box ideas for how to prove what they have been learning:

Diagram 8: Content/activity groupings based on matching learner responses with specific activities they might want to try from an assignment bank.

Even beyond all of this, it would be great to build a system that allows for mixes of responses to each prompt rather than just one (or even systems that allow you to build on one response with the next one in specific ways). The red lines in the diagrams above represent what the AI sees as the “best match,” but what if they instead indicated what percentage of the content should come from each content pool? The cross-disciplinary image above (Diagram 5) could move from just picking “Art” as the best match to making a lesson that is 10% Health, 20% History, 50% Art, and so on. Or the first response would pull in some related content on “Art,” then another prompt would pull in a bit from “Health.”
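
Here is a quick sketch of what that percentage idea could look like: instead of keeping only the top-scoring pool, the raw similarity scores get turned into a weighting across all of the pools. The scores, the pool names, and the softmax-with-temperature approach are all illustrative assumptions, not a description of any existing tool.

```python
# Turn raw similarity scores into percentages across content pools, rather than
# picking the single "best match." The scores below are invented; in practice
# they would come from the matching step shown earlier.
import math

similarity_scores = {"Art": 0.62, "History": 0.41, "Health": 0.18, "Math": 0.05}

def blend_weights(scores: dict, temperature: float = 0.25) -> dict:
    """Softmax the raw scores into percentages that sum to 100."""
    exps = {pool: math.exp(score / temperature) for pool, score in scores.items()}
    total = sum(exps.values())
    return {pool: round(100 * value / total, 1) for pool, value in exps.items()}

print(blend_weights(similarity_scores))
# Roughly {'Art': 58.6, 'History': 25.3, 'Health': 10.1, 'Math': 6.0} -- a lesson
# that is mostly Art, with some History and a little Health mixed in.
```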

Then the even bigger question is: can these various methods be stacked on top of each other, so that you are not forced to choose sociocultural or epistemological separately, but the AI could look at both at once? Probably so, but would a tool to create such a lesson be too complex for most people to practically utilize?

Of course, something like this is ripe for bias, so that would have to be kept in mind at all stages. I am not sure exactly how to counteract that, but hopefully if we try to build more options into the system for the learner to choose from, this will start dealing with and exposing that. We would also have to be careful to not turn this into some kind of surveillance system to watch learners’ every move. Many AI tools are very unclear about what they do with your data. If students have to worry about data being misused by the very tool that is supposed to help them, that will cause stress that is detrimental to learning. So in order for something like this to work, openness and transparency about what is happening – as well as giving learners control over their data – will be key to building a system that is actually usable (trusted) by learners.

Social Presence, Immediacy, Virtual Reality, and Artificial Intelligence

While doing some research for my current work on AI and Chatbots, I was struck by how much some people are trying to use bots to fool people into thinking they are really humans. This seems to be a problematic road to go down, as we know that people are not necessarily against interacting with non-human agents (like those of us that prefer to get basic information like bank account balances over the phone from a machine rather than bother a human). At the core, I think these efforts are really aimed at humanizing those tools, which is not a bad aim. I just don’t think we should ever get away from openness about who or what we are having learners interact with.

I was reminded of Second Life (remember that?) and how we used to question why some people would build traditional structures like rooms and stairs in spaces where your avatar could fly. At the time, it was the “cool, hip” way to mock the people you thought didn’t really “understand” Second Life. However, I am wondering if maybe there was something to this approach that we missed?

Concepts like social presence and immediacy have fallen out of the limelight in education, but they still have immense value (and many people still promote them thankfully). We need something in our educational efforts, whether in classrooms or at a distance online, that connects us to other learners in ways that we can feel, sense, connect with, etc. What if one way of doing that is by creating human-based structures in our virtual/digital interactions?

I’m not saying to ditch anything experimental and just recreate traditional classroom simulations in virtual reality, or re-enact standard educational interactions with chat bots. But what if incorporating some of those elements could help bring about more of a human element?

To be honest, I am not sure where the right “balance” of these two concepts would be. If I enter a virtual reality space that is just like a building in real life, I will probably miss out on the affordances of exploration that virtual reality could bring to the table. But if I walk into some wild, trippy learning space that looks like a foreign planet to me, I will have to spend more time figuring out how things work than actually learning about the topic I am interested in. I would also feel a bit out of contact with humanity if there is little to tie me back to what I am used to in real life.

The same could be said about the interactions we are designing for AI and chatbots. On one hand, we don’t need to mimic the status quo in the physical world just because it is what we have always done. But we also don’t need to do things that are way out there just because we can, either. Somewhere there is probably a happy medium of humanizing these technologies enough for us to connect with them (without trying to trick people into thinking they are humans) while still not replicating everything we already know just because that is what we know. I know some Social Presence Theory people would balk at applying those concepts to technology, but I am thinking more of how we can use those concepts to inform our designs – just in a more meta fashion. Something to mull over for now.

Modernizing Websites: HTML5, IndieWeb, and More?

On and off for the past few weeks, I have been looking into modernizing some of my websites with things like HTML5 and IndieWeb. The main goal of this experimentation was to improve the LINK Research Lab web presence by getting some WebMention magic going on our website. The bonus is that I get to experiment with some of these on my own website before moving them onto a real website for the LINK Lab. I had to make sure they didn’t blow things up, after all.

However, the problem with that is that my website was running on a cobbled-together WordPress theme that was barely holding together and looking dated. I was looking for a nice theme to switch over to quickly, but not having much success. Then I remembered that Alan Levine had a sweet-looking HTML5 theme (wp-dimension). One weekend I gave it a whirl, and I think we have a winner.

The great thing about Cog Dog’s theme is that it has a simple initial interface for those that want to take a quick look at my work, but it also lets people dig deeper into any topic they want to. I had to download and delete all of the blog posts that were already on my website, as the theme turns blog posts into the quick-look posts on the front page. Those old posts were just FeedWordPress copies of every post I wrote on this blog – so no need to worry about that. Overall, it is a great, easy-to-use theme that I highly recommend for anyone wanting to create a professional website fast.

Much of my current desire to update websites came from reading Stephen Downes’ post on OwnYourGram – a service that lets you export your Instagram files to your own website. To be honest, the IndieWeb part of the OwnYourGram website was just not working for me until I found the actual IndieWeb plugins for WordPress. When in doubt, look for the plugins that already exist. I added those, and it all finally worked great. I found that the posts it imported didn’t work that well with many WordPress themes (Instagram posts don’t have titles, but many WordPress themes ignore posts without titles – or render them strangely on the front page). So I still need to tinker with that.

The part I became most interested in was how IndieWeb features like WebMentions can help you connect with conversations about your content on other websites (and also social media). That will probably be the most interesting feature that I want to start using on this website, and on the LINK Lab website as well. So now that I have it figured out, time to get it set up before it all changes :) I’m just digging into this after being a fan from afar for a while, so let’s see what else is out there.
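
For anyone curious what the plugins are doing behind the scenes, here is a minimal sketch of the Webmention flow as the W3C spec describes it: discover the target page’s webmention endpoint, then POST the source and target URLs to it. The URLs are placeholders, and the HTML parsing is deliberately simplistic – the WordPress plugins handle the edge cases for you.

```python
# A bare-bones Webmention sender: discover the endpoint, then notify it.
import re
from typing import Optional

import requests

def discover_endpoint(target_url: str) -> Optional[str]:
    """Find the target page's webmention endpoint via Link header or HTML."""
    response = requests.get(target_url, timeout=10)
    # 1. HTTP Link header, e.g. Link: <https://example.org/webmention>; rel="webmention"
    link = response.links.get("webmention")
    if link:
        return link["url"]
    # 2. Fall back to a <link> or <a> element with rel="webmention" in the HTML.
    #    (Naive: assumes rel comes before href, which real markup may not.)
    match = re.search(r'<(?:link|a)[^>]*rel="webmention"[^>]*href="([^"]+)"',
                      response.text)
    return match.group(1) if match else None

def send_webmention(source_url: str, target_url: str) -> int:
    """Notify the target page that source_url links to it."""
    endpoint = discover_endpoint(target_url)
    if endpoint is None:
        raise ValueError("No webmention endpoint found on the target page")
    reply = requests.post(endpoint, data={"source": source_url, "target": target_url})
    return reply.status_code  # 2xx generally means the mention was accepted

# Example (placeholder URLs):
# send_webmention("https://myblog.example/post-about-owning-your-data",
#                 "https://someoneelse.example/original-post")
```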

Are MOOCs Fatally Flawed Concepts That Need Saving by Bots?

MOOCs are a problematic concept, as are many things in education. Using bots to replicate various functions in MOOCs is also a problematic concept. Both MOOCs and bots seem to go in the opposite direction of what we know works in education (smaller class sizes and more interaction with humans). So, teaching with either or both concepts will open the door to many different sets of problems.

However… there are also even bigger problems that our society is imposing on education (at least in some parts of the world): defunding of schools, lack of resources, and eroding public trust being just a few. I don’t like any of those, and I will continue to speak out against them. But I also can’t change them overnight.

So what do we do with the problems of fewer resources, fewer teachers, more students, and more information to teach as the world gets more complex? Some people like to focus solely on fixing the systemic issues causing these problems. And we need those people. But even once they start making headway… it will still be years before education improves from where it is. And how long until we even start making headway?

The current state of research into MOOCs and/or bots is really about dealing with the reality of where education is right now. Despite there being some larger, well-funded research projects into both, the reality is that most research consists of very low (or no) budget attempts to learn something about how to create some “thing” that can help a shrinking pool of teachers educate a growing mass of students. Imperfect solutions for an imperfect society. I don’t fully like it, but I can’t ignore it.

Unfortunately, many people are causing an unnecessary either/or conflict between “dealing with scale as it is now” and “fixing the system that caused the scale in the first place.” We can work at both – help education scale now, while pushing for policy and culture change to better support and fund education as a whole.

On top of all of that, MOOCs tend to be “massively” misunderstood (sorry, couldn’t resist that one). Despite what the hype claims, they weren’t created as a means to scale or democratize education. The first MOOCs were really about connectivism, autonomy, learner choices, and self-directed learning. The fact that they had thousands of learners in them was just a thing that happened due to the openness, not an intended feature.

Then the “second wave” of MOOCs came along, and that all changed. A lot of this was due to some unfortunate hype around MOOCs published in national publications that proclaimed some kind of “educational utopia” of the future, where MOOCs would “democratize” education and bring quality online learning to all people.

Most MOOC researchers just scoffed at that idea – and they still do. However, they also couldn’t ignore the fact that MOOCs do bring about scaled education in various ways, even if that was not the intention. So that is where we are at now: if you are going to research MOOCs, you have to realize that the context of that research will be about scale and autonomy in some way.

But it seems that the misunderstandings of MOOCs are hard-coded into the discourse now. Take the recent article “Not Even Teacher-Bots Will Save Massive Open Online Courses” by Derek Newton. Of course, open education and large courses existed long before the term “MOOC” was coined… so it is unclear what needs “saving” here, or what it needs to be saved from. But the article is a critique of a study out of the University of Edinburgh (I believe this is the study, even though Newton never links to it for you to read for yourself) that sought “to increase engagement” by designing and deploying “a teacher-bot (botteacher) in at least one MOOC.” Newton then turns around and says “the idea that a pre-programmed bot could take over some teaching duties is troubling in Blade Runner kind of way.” Right there you have your first problematic switch-a-roo. “Increasing engagement” is not the same as “taking over some teaching duties.” That is like saying that lane departure warning lights on cars are the same as taking over some driving duties. You can’t conflate something that assists with something that takes over. Your car will crash if you think “lane departure warnings” are “self-driving cars.”

But the crux of Newton’s article is that because the “bot-assisted platform pulls in just 423 of 10,145, it’s fair to say there may be an engagement problem…. Botty probably deserves some credit for teaching us, once again, that MOOCs are fatally flawed and that questions about them are no longer serious or open.”  Of course, there are fatal flaws in all of our current systems – political, religious, educational, etc. – yet questions about all of those can still be serious or open. So you kind of have to toss out that last part as opinion and not logic.

The bigger issue is that calling 423 people an “engagement problem” is an unfortunate way to look at education. That is still a lot of people, considering most courses at any level can’t engage 30 students. But this misunderstanding comes from the fact that many people still misunderstand what MOOC enrollment means. 10,000 people signing up for a MOOC is not the same as 10,000 people signing up for a typical college course. Colleges advertise to millions of prospective students, who then have to go through a huge process of applications and trials to even get to register for a course. ALL of that is bypassed for a MOOC. You see a course and click to register. Done. If colleges did the same, they would also get 10,000+ signing up for a course. But they would probably only get 50-100 showing up for the first class – a lot less than any first week in most MOOCs.

Make no mistake: college courses would have engagement rates just as bad if they removed the filters of application and enrollment from who could sign up. Additionally, the requirement of “physical re-location” for most would make those engagement rates even worse than MOOCs’ if the entire process were considered.

Look at it this way: 30 years ago, if someone said “I want to learn History beyond what a book at the library can tell me,” they would have to go through a long and expensive process of applying to various universities, finally (possibly) getting accepted at one, and then moving to where that University was physically located. Then, they would have to pay hundreds or thousands of dollars for that first course. How many tens of thousands of possible students get filtered out of the process because of all of that? With MOOCs, all of that is bypassed. Find a course on History, click to enroll, and you are done.

When we talk about “engagement” in courses, it is typically situated in a traditional context that filters out tens of thousands of people before the course even starts. To then transfer the same terminology to MOOCs is to utilize an inaccurate critique based on concepts rooted in a completely different filtering mechanism.

Unfortunately, these fundamentally flawed misunderstandings of MOOC research are not just one-off rarities. The same author also took a problematic look at a study I worked on with Aras Bozkurt and Whitney Kilgore. Just look at the title of Newton’s previous article: Are Teachers About To Be Replaced By Bots? Yeah, we didn’t even go that far, and intentionally made sure to stay as far away from saying that as possible.

Some of the critique of our work by Newton is just very weird, like where he says: “Setting aside existential questions such as whether lines of code can search, find, utter, reply or engage discussions.” Well, yes – they can do that. It’s not really an existential question at all. It’s a question of “come sit at a computer with me and I will show you that a bot is doing all of that.” Google has had bots doing this for a long, long time. We have pretty much proven that Russian bots are doing this all over the world.

Then Newton gets into pull quotes, where I think he misunderstands what we meant by the word “fulfill.” For example, it seems Newton misunderstood this quote from our article: “it appears that Botty mainly fulfils the facilitating discourse category of teaching presence.” If you read our quote in context, it is part of the Findings and Discussion section, where we are discussing what the bot actually did. But it is clear from the discussion that we don’t mean that Botty “fully fills” the discourse category, but that what it does “fully qualifies” as being in that category. Our point was in light of “self-directed and self-regulated learners in connectivist learning environments” – a context where learners probably would not engage with the instructor in the first place. In this context, yes, it did seem that Botty was filling in for an important instructor role in a way that satisfies the parameters of that category. Not perfectly, and not in a way that replaces the teacher. It was in a context where the teacher wasn’t able to be present due to the realities of where education is currently in society – scaled and less supported.

Newton goes on to say: “What that really means is that these researchers believe that a bot can replace at least one of the three essential functions of teaching in a way that’s better than having a human teacher.”

Sorry, we didn’t say “replace” in an overall context, only “replace” in a specific context that is outside of the teacher’s reach. We also never said “better than having a human teacher.” That part is just a shameful attempt at putting words in our mouths. In fact, you can search the entire article and find we never said the word “better” about anything.

Then Newton goes on to misuse another quote of ours (“new technological advances would not replace teachers just because teachers are problematic or lacking in ability, but would be used to augment and assist teachers”). His response to this is to say that we think “new technology would not replace teachers just because they are bad but, presumably, for other reasons entirely.”

Sorry, Newton, but did you not read the sentence directly after the one you quoted? We said “The ultimate goal would not be to replace teachers with technology, but to create ways for non-human teachers to work in conjunction with human teachers in ways that remove all ontological hierarchies.”  Not replacing teachers…. working in conjunction. Huge difference.

Newton continues injecting inaccurate ideas into the discussion, such as “Bots are also almost certain to be less expensive than actual teachers too.” Well, actually, they aren’t always less expensive in the long run. Then he tries to connect another quote from us about how the lines between bots and teachers might get blurred as proof that we… think they will cost less? That part just doesn’t make sense.

Newton also did not take time to understand what we meant by “post-humanist,” as evidenced by this statement of his: “the analysis of Botty was done, by design, through a “post-humanist” lens through which human and computer are viewed as equal, simply an engagement from one thing to another without value assessment.” Contrast his statement with our actual statement on post-humanism: “Bayne posits that educators can essentially explore how to retain the value of teacher presence in ways that are not in opposition to some forms of automation.” Right there we clearly state that humans still maintain value in our study context.

Then Newton pulls his most shameful bait-and-switch of the whole article at the end: pulling one of our “problematic questions” (which we intentionally highlighted as problematic for the sake of critique) and attributing it as our conclusion: “the role of the human becomes more and more diminished.” Newton then goes on to state: “By human, they mean teacher. And by diminished, they mean irrelevant.”

Sorry, Newton, that is simply not true. Look at our question following soon after that one, where we start the question with “or” to negate what our list of problematic questions asks: “Or should AI developers maintain a strong post-humanist angle and create bot-teachers that enhance education while not becoming indistinguishable from humans?” Then, maybe read our conclusion after all of that and the points it makes, like “bot-teachers can possibly be viewed as a learning assistant on the side.”

The whole point of our article was to say: “Don’t replace human teachers with bot teachers. Research how people mistake bots for real people and fix that problem with the bots. Use bots to help in places where teachers can’t reach. But above all, keep the humans at the center of education.”

Anyways, after a long side-tangent about our article, back to the point of misunderstanding MOOCs, and how researchers of MOOCs view MOOCs. You can’t evaluate research about a topic – whether MOOCs or bots or post-humanism or any topic – through a lens that fundamentally misunderstands what the researchers were examining in the first place. All of these topics have flaws and concerns, and we need to critically think about them. But we have to do so through the correct lens and contextual understanding, or else we will cause more problems than we solve in the long run.

Can You Automate OER Evaluation With The RISE Framework?

The RISE Framework is a learning analytics methodology for identifying OER resources in a course that may need improvement. On one level, this is an interesting development, since so few learning analytics projects actually get into how to improve the education of learners. But on the other hand, I am not sure if this framework has a detailed enough understanding of instructional design, either. A few key points seem to be missing. It’s still early, so we will see.

The basic idea of the RISE Framework is that analytics will create a graph that plots page clicks on OER resources on the x-axis and grades on assessments on the y-axis. This creates a grid showing where there were higher than average grades with higher than average clicks, higher than average grades with lower than average clicks, lower than average grades with higher than average clicks, and lower than average grades with lower than average clicks. This is meant to identify the resources that teachers should consider examining for improvement (especially focusing on the ones that got a high number of clicks but lower grade scores). Note that this is not meant to definitively say “this is where the problem is, so fix it now” but more “there may or may not be a problem here, so check it out.” Keep that in mind while I explore some of my doubts here, because I would be a lot harsher on this if it were presented as a tool to definitively point out exact problems rather than what it is: a way to start the search for problems.
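
To make the grid a little more concrete, here is a bare-bones sketch of the quadrant sort using invented numbers – average pageviews and average aligned-assessment grades per resource, compared against the course-wide averages. Real implementations of the framework normalize these values more carefully; this is just the shape of the calculation.

```python
# Sort OER resources into RISE-style quadrants: usage vs. grades, each compared
# to the course-wide average. All numbers are fabricated for illustration.
resources = {
    # resource: (average pageviews, average grade on the aligned assessment)
    "Chapter 1": (120, 88),
    "Chapter 2": (45, 90),
    "Chapter 3": (150, 61),
    "Chapter 4": (30, 58),
}

mean_views = sum(views for views, _ in resources.values()) / len(resources)
mean_grade = sum(grade for _, grade in resources.values()) / len(resources)

def quadrant(views: float, grade: float) -> str:
    usage = "high use" if views >= mean_views else "low use"
    score = "high grades" if grade >= mean_grade else "low grades"
    return f"{usage} / {score}"

for name, (views, grade) in resources.items():
    label = quadrant(views, grade)
    flag = "  <-- check this one first" if label == "high use / low grades" else ""
    print(f"{name}: {label}{flag}")
```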

Of course, any system of comparing grades with clicks is itself problematic on many fronts, and the creators of the RISE Framework do take this into consideration when spelling out what each of the four quadrants could mean. For example, in the quadrant that specifies high grades with low content usage, they not only identify “high content quality” as a possible cause, but also “high prior knowledge,” “poorly written assessment,” and so on. So this is good – many factors outside of grades and usage are taken into account. This is because, on the grade front, we know that scores are a reflection of a massive number of factors – the quality of the content being only one of those (and not always the biggest one). As noted, prior knowledge can affect grades (sometimes negatively – not always positively, as the RISE framework appears to assume). Exhaustion or boredom or anxiety can impact grades. Again, I am glad that these are in the framework, but the effect they have on grades is assumed to run in one direction – rather than the complex directions it takes in real life. For example, students that game the test or rubric can inflate scores without using the content much – even on well-designed assessments (I did that all the time in college).

However, the bigger concern with the way grades are addressed in the RISE framework is that they are plotting assessment scores instead of individual item scores. Anyone that has analyzed assessment data can tell you that the final score on a test is actually an aggregate of many smaller items (test questions). That aggregate grade can mask many deficiencies at the micro level. That is why instructors prefer to analyze individual test questions or rubric lines rather than the aggregate scores of the entire test. An assessment could cover, say, 45 questions on content that is well covered in the resources, and then 5 questions on content that is poorly covered. But the high scores on the 45 questions, combined with the fact that many will get some questions right by random guessing on the other 5, could result in test scores that mask a massive problem with those 5 questions. But teachers can most likely figure that out quickly without the RISE framework, and I will get to that later.
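
A quick fabricated example of that masking effect: a healthy-looking average test score can sit on top of one item that almost everyone missed, which is exactly what item-level analysis surfaces and an aggregate plot hides.

```python
# Fabricated response matrix: 1 = correct, 0 = incorrect, 10 students x 6 items.
# Imagine five items on well-covered content and one item on poorly covered content.
students = [
    [1, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 0],
    [1, 1, 0, 1, 1, 1],   # a lucky guess on the last item
    [1, 1, 1, 1, 1, 0],
    [1, 0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 0],
    [1, 1, 1, 0, 1, 1],   # another lucky guess
    [1, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1, 0],
]

num_items = len(students[0])
average_score = sum(sum(row) for row in students) / (len(students) * num_items)
print(f"Average test score: {average_score:.0%}")   # 80% -- looks healthy overall

# Per-item difficulty (proportion correct) tells a different story.
for item in range(num_items):
    p_correct = sum(row[item] for row in students) / len(students)
    note = "  <-- likely content/alignment problem" if p_correct < 0.5 else ""
    print(f"Item {item + 1}: {p_correct:.0%} correct{note}")
```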

The other concern is with clicks on the OER. Well, they say that you can measure “pageviews, time spent, or content page ratings”… but those first two are still clicks, and the last one is a bit too dependent on the happiness of the raters (students) at any given moment to really be that quantitative. I wouldn’t outright discount it as a factor, but I will state that you are always going to find a close alignment with the test scores on that one for many reasons. In other words, it is a pre-biased factor – students that get a high score will probably rate the content as effective even if it wasn’t, and students that get a low score will probably blame the content quality whether it was really a factor or not.

Also, now that students know their clicks are being recorded, they are more and more often clicking around to make sure they get good numbers on those data points. I even do that when taking MOOCs, just in case: click through the content at a realistic pace even if I am really doing something else other than reading. People have learned to skim resources while checking their phone, clicking through at a pace that makes it seem like they are reading closely. Most researchers are very wary of using click data like pageviews or time spent to tell anything other than where students clicked, how long between clicks, and what was clicked on. Guessing what those mean beyond that? More and more, that is being discouraged in research (and for good reason).

Of course, I don’t have time to go into how relying on only content and assessment is a poor way to teach a course, but I think we all know that. A robust and helpful learning community in a class can answer learning questions and help learners overcome bad resources to get good grades. And I am not referring to cheating here – Q&A forums in courses can often really help some learners understand bad readings – while also possibly making them feel like they are the problem, not the content.

Still, all of that is somewhat or directly addressed in the framework, and because it is a guide rather than a definitive answer, variations like those discussed above are to be expected. I went through them just to make sure I was covering all the critical bases.

The biggest concern I have with the RISE framework really comes here: “The framework assumes that both OER content and assessment items have been explicitly aligned with learning outcomes, allowing designers or evaluators to connect OER to the specific assessments whose success they are designed to facilitate.”

Well, since that doesn’t happen in many courses due to time constraints, that eliminates large chunks of courses. I can also tell you as an instructional designer, many people think they have well-aligned outcomes…. but don’t.

But, let’s assume that you do have a course where “content and assessment items have been explicitly aligned with learning outcomes.” If you have explicitly aligned assessments, you don’t need the RISE framework. To explicitly align assessments with content is not just a matter of making sure each question tests exactly what is in the content, but also of pointing to exactly where the aligned content is for each question. Not just the OER itself, but the chapter and page number. Most testing systems today will give you an item-by-item breakdown of each assessment (because teachers have been asking for it). A low course-wide score on any specific question indicates some problem. At that point, it is best (and quickest) to just ask your learners:

  1. Did the question make sense? Was it well written?
  2. Did it connect to the content?
  3. Did the content itself make sense?

Plus, most content hosting systems have ways to track page clicks, so you can easily make your own matrix using clicks if you need to. The matrix in the framework might give you a good way to organize the data to see where your problem lies… but to be honest, I think it would be quicker and more accurate to focus on specific assessment questions instead of the whole test, and ask the learners about those questions.

Also, explicit alignment can itself hide problems with the content. An explicit alignment would require that you test what is in the content, even if the content is bad. This is one of the many things you learn as an ID: don’t test what the students were never taught; write your test questions to match the content no matter what. A decently-aligned assessment can still produce decent grades from a very bad content source. One of my ID professors once told me something along the lines of “a good instructional designer can help students pass even with bad textbooks; a bad instructional designer can help them fail with the best textbook.”

Look – instructional designers have been dealing with good and bad textbooks for decades now. Same goes for instructors that serve as their own IDs. We have many ways to work around those.

I may be getting the RISE framework wrong, but comparing overall scores on assessments to certain click-stream activity in OER (sometimes an entire book) comes across like shooting fish in a barrel with a shotgun approach. Especially when well-aligned test questions can pinpoint specific sources of problems at a fairly micro-fine level.

Now then, if you could actually compare the grades on individual assessment items with the amount of time spent on the page or area that each specific item came from, you might be on to something. Then, if you could group students into the four quadrants on each item, and then compare quadrant results on all items in the same assessment together, you could probably identify the questions that are most likely to have some kind of issue. Then, have the system send out a questionnaire about the test to each student – but have the questionnaire be custom-built depending on which quadrant the student was placed in. In other words, each learner gets questions about the same, say, 5 test questions that were identified as problematic, but the specific prompt they get about each question is changed to match the quadrant they were placed in for that question:

We see that you missed Question 4, but you did spend a good amount of time on page 25 of the book, where this question was taken from. Would you say that:

  • The text on page 25 was not well-written
  • Question 4 was not well-written
  • The text on page 25 doesn’t really match Question 4
  • I visited page 25, but did not spend the full time there reading the text

Of course, writing it out this way sounds creepy. You would have to make sure that learners opt in after fully understanding that this is what would happen, and then you would probably need to make sure that the responses go to someone who is not directly responsible for their grade, to be analyzed anonymously. Then report those results in a generic way: “survey results identified that there is probably not a good alignment between page 25 and question 4, so please review both to see if that is the case.”
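
For what it’s worth, here is a small sketch of what that per-item, per-quadrant survey selection could look like, using the page 25 / Question 4 example above. The quadrant labels, the time threshold, and the survey wording are all invented for illustration, not part of the RISE framework itself.

```python
# Pick a survey variant for one flagged test item, based on whether a learner
# got the item right and whether they spent meaningful time on the aligned page.
survey_variants = {
    ("missed", "spent time"):  "You spent time on page 25 but missed Question 4. "
                               "Was the text, the question, or the alignment the problem?",
    ("missed", "skimmed"):     "You moved quickly past page 25 and missed Question 4. "
                               "Was anything about the page hard to engage with?",
    ("correct", "spent time"): "You spent time on page 25 and got Question 4 right. "
                               "Did the page actually help, or did you already know this?",
    ("correct", "skimmed"):    "You barely used page 25 but got Question 4 right. "
                               "Where did you learn this material?",
}

def pick_variant(got_item_correct: bool, seconds_on_page: float,
                 time_threshold: float = 90.0) -> str:
    """Return the survey prompt matching this learner's quadrant for the item."""
    outcome = "correct" if got_item_correct else "missed"
    usage = "spent time" if seconds_on_page >= time_threshold else "skimmed"
    return survey_variants[(outcome, usage)]

print(pick_variant(got_item_correct=False, seconds_on_page=240))
```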

In the end, though, I am not sure if you can get detailed enough to make this framework effective without diving deep into surveillance monitoring. Maybe put the learner in control of these tools, and give them the option of sharing the results with their instructor if they feel comfortable?

But, to be honest, I am probably not in the target audience for this tool. My idea of a well-designed course involves self-determined learning, learner autonomy, and space for social interaction (for those that choose to do so). I would focus on competencies rather than outcomes, with learners being able to tailor the competencies to their own needs. All of that makes assessment alignment very difficult.

“Creating Online Learning Experiences” Book is Now Available as an OER

Well, big news in the EduGeek Journal world. I have been heading up a team of people working on a new book that was released as an OER through PressBooks today:

Creating Online Learning Experiences: A Brief Guide to Online Courses, from Small and Private to Massive and Open

Book Description: The goal of this book is to provide an updated look at many of the issues that comprise the online learning experience creation process. As online learning evolves, the lines and distinctions between the various classifications of courses have blurred and often vanished. Classic elements of instructional design remain relevant at the same time that newer concepts of learning experience are growing in importance. However, problematic issues new and old still have to be addressed. This book aims to be a handbook that explores as many of these issues and concepts as possible for new and experienced designers alike, whether creating traditional online courses, open learning experiences, or anything in between.

We have been working on this book on and off for three or more years now, so I am glad to finally get it out to the world. In addition to me, there were several great contributing writers: Brett Benham, Justin Dellinger, Amber Patterson, Peggy Semingson, Catherine Spann, Brittany Usman, and Harriet Watkins.

Also, on top of that, we recruited a great group of reviewers that dug through various parts and gave all kinds of helpful suggestions and edits: Maha Al-Freih, Maha Bali, Autumm Caines, Justin Dellinger, Chris Gilliard, Rebecca Heiser, Rebecca Hogue, Whitney Kilgore, Michelle Reed, Katerina Riviou, Sarah Saraj, George Siemens, Brittany Usman, and Harriet Watkins.

Still skeptical? How about an outline of topics, most of which we did try to filter through a critical lens to some degree:

  1. Overview of Online Courses
  2. Basic Philosophies
  3. Institutional Courses
  4. Production Timelines and Processes
  5. Effective Practices
  6. Creating Effective Course Activities
  7. Creating Effective Course Content
  8. Open Educational Resources
  9. Assessment and Grading Issues
  10. Creating Quality Videos
  11. Utilizing Social Learning in Online Courses
  12. Mindfulness in Online Courses
  13. Advanced Course Design
  14. Marketing of an Online Course

So, please download and read the book here if you like: Creating Online Learning Experiences

There is also a blog post from UTA libraries about the release: Libraries Launch Authoring Platform, Publish First OER

And if you don’t like something you read, or find something that is wrong, or think of something that should have been added – let me know! I would love to see an expanded second edition with more reviewers and contributing authors. There were so many more people I wanted to ask to contribute, but I just ran out of time. I intentionally avoided the “one author/one chapter” structure so that you can add as much or as little as you like.

Working With Resistant Faculty as an Instructional Designer

One of the questions I get most often from people new to instructional design is how to work with faculty that are resistant to making changes to their course ideas (or maybe even resistant to working with an instructional designer altogether). To be honest, once you have gotten to the point of resistance, you can generally find all kinds of advice for dealing with disagreements that will work. There really isn’t anything special or different about interacting with people you don’t see eye to eye with just because you are an instructional designer.

However, I have found that there are ways to start off the working relationship in an instructional design context that can set a tone for collaboration or disagreement on down the line. There are a few things that I try to do with the first contact with faculty to get off on the right foot. So my biggest piece of advice is always to set up the relationship the right way from the beginning, and then you should have a smoother relationship with faculty to begin with (and if disagreements arise, a good foundation to work towards agreement).

The first thing I tell people to do is to get your view of the faculty members into the right context in your mind. Of course, this includes setting aside any pre-conceived notions you might have gained about them from other people – but it is more than that. I try to keep in mind what the workflow of the faculty actually looks like, especially how busy they are. Not necessarily more or less busy than you are, but busy in an entirely different context. They are having to deal with hundreds of student emails, and then all kinds of research-related emails, and then all kinds of department-related issues, and so on. When you send them that initial contact email, you can probably guarantee that it will be filed away until there is a lull in their workflow – later that day, later that week, or even later that month. That filing system might be anything from a folder system in Outlook to a paper notebook next to their computer (I have seen it all). But the key thing is that they are likely to put it aside without a whole lot of thought at first.

This is an important factor to remember. Some faculty might respond right away, but others will file your email and get back to you once the dozen or so urgent requests in front of them are taken care of. At this point, while you are waiting for a response, don’t make things more complex by having other people contact them as well. Many instructional design groups will do this differently: the manager will contact the faculty to “introduce” the ID, then if there is no response from the faculty after a few days, the ID will email again… possibly introducing more team members as they do so. By the time the faculty member gets to that lull and responds, they have all these people contacting them, and they have to figure out if they are all working on the same project or if these are different people working on similar projects. Then they have to figure out who specifically to reply to, who was just adding extra information to the discussion, and so on.

And right there is a place where you can start to get off on the wrong foot with faculty. Instead of responding to one person, they have to take extra time to read through these emails from different people to figure out what is going on. Again, some will be fine with that, but others will feel that you and your department are “piling it on” to try and pressure them to respond faster.

So, for the sake of focus, make sure to have only one person contacting the faculty member or members until they respond. If you need to send multiple emails to follow up and nudge the faculty, reply to your last message so those that use threaded email systems will end up with one email thread rather than several. Since the goal of the initial contact is usually to set up a first meeting, you can make sure that the other people they need to meet are at that meeting. If at all possible, wait until then to bring those people into the conversation. If you really have to bring them in earlier, then at least wait until after the faculty member has responded to the initial emails.

Quite often, a manager or another person likes to send the first email to connect the ID and faculty, and then step out of the picture. If you can avoid that, I would. If the faculty doesn’t respond right away, then the manager will have to nudge. If the ID nudges instead, it introduces the complexity that I have found best to avoid at this stage. So if you are a manager, get used to letting your people make the initial contact. If you are an ID, get used to making the initial contact. It just saves time and avoids miscommunication down the line.

Remember: that first response from faculty is usually the signal that they have the open head space to deal with course design – or that they are at least ready to free up some head space for the design. So feel free to nudge them if needed, but don’t add anything else to that nudge beyond your initial “let’s meet” message.

Also, I should say more about this “let’s meet” message. Be careful how you phrase that request. So many people jump out of the gate with suggestions, like “we can meet once a month” or “once a week” or some other frequency based on what they think the course needs. And they are probably right about this suggestion. But remember that the faculty you are meeting with have possibly already thought about how many meetings they need with you as well. They may be flexible, but they also may have a specific need for meetings. If you come right out and suggest a specific schedule, you may stress them out by not suggesting enough meetings compared to what they want, or maybe by suggesting more meetings than they thought they needed.

Of course, you might get lucky and suggest the exact frequency they were thinking of, the heavens will open, collaboration glitter will float down, and everyone will rejoice.

But you might also set up a foundation of frustration if you get it wrong. My suggestion? I always like to say that I want to “discuss a method and frequency for consistent communication to keep the course design process moving forward,” or something to that effect. When you say something like this, whatever method or frequency they were thinking of will fit into that description, and they will feel like you are there to help their course, not impose deadlines.

Which, of course, you usually are… but you don’t want to default to that position from the beginning.

However, make sure you don’t jump out first with “how about meeting twice a week” or some other specific suggestion. From this point on in interacting with faculty, always lead with questions intended to draw out what the faculty member thinks. I have found that leading with questions is a good way to collaborate more than disagree. Don’t just say “well, what we need to do instead is….” But also, don’t beat around the bush, either. Just ask them directly: how often do you want to meet, and in what context?

Of course, there is a good chance they will suggest something that is more often or less often than you thought, or they will suggest face-to-face meetings when you thought email would work, and so on. When this happens, try to find out (by asking questions) why they want their suggested frequency instead of going into “correction” mode.

  • “That seems to be a high frequency of meetings, and you are pretty experienced in online course design. How are you feeling about working on this specific course?”
  • “Do you think you will be able to meet the deadlines for the course design? Would it maybe help to have more frequent check-ins with me to meet deadlines?”
  • “I know you are used to face-to-face meetings with our organization. How do you feel about email check-ins? We could possibly meet less frequently if you think it will work for you to email me questions as needed.”

A quick note: multiple meetings per week will probably send the wrong message to faculty. They usually have multiple weekly meetings only with students that are struggling the most in their class, or with colleagues that can’t stay on track when working on research projects. There is kind of a stigma against being asked to meet multiple times per week in many academic circles. Don’t be against it if they are the ones that say they need it, but don’t be the one to suggest it first. Not all faculty think this way, but I have learned the hard way not to be the one to bring it up with the ones that do have a preconceived notion about it.

So, really, from this point out, I would say if you stick to asking questions first rather than jumping into correction mode, and then follow other methods and guidelines for dealing with workplace conflict or disagreements, you will know how to deal with most situations. By taking into account how you start off the working relationship with faculty, you are getting started on a better foundation for future interactions. There is a lot more that I could cover, but this post is getting too long. If you have any suggestions for dealing with resistant faculty, let me know in the comments – there is still a lot I can learn in this area as well!

After the Cambridge Analytica Debacle, Is It Time to Ban Psychographics?

What are psychographics, you may ask? Well, you may not, but if so: the simple definition is that they are a companion to demographics, an attempt to figure out what those demographics tell us about the person behind the demographic factors. This article looks at some of the features that could go into psychographics, like figuring out if a person is “concerned with health and appearance” or “wants a healthy lifestyle, but doesn’t have much time,” or whatever the factor may be. The article was written in 2013, long before the Cambridge Analytica debacle with Facebook. That entire debacle should have people asking harder questions of Ed-Tech, such as the question Audrey Watters raised in a recent tweet.

Audrey Watters will surely be writing about her question and more soon (it’s a huge topic to write about already), and Autumm Caines has already weighed in on her experiences investigating Cambridge Analytica long before most of us were aware of them. Like many people, I had to dig up some refreshers about what psychographics are after Audrey Watters’ tweet to make sure I was thinking of the right concept. And now I want to question the whole concept of psychographics altogether. Maybe “ban” is too strong of a word, maybe not. You can be the judge.

Even in the fairly “innocent” tone of the 2013 article I linked above, there are still concerning aspects of psychographics shining through: interview your customers with the agenda of profiling them, and maybe consider telling them what you are doing if they are cool enough; you can’t trust what they say all the time, but you can trust their actions; and “people’s true motivations are revealed by the actions they take.”

But really, are they? Don’t we all do things that we know we shouldn’t sometimes, just like we sometimes say things we know we don’t believe? Isn’t the whole idea of self-regulation based on us being able to overcome our true motivations and do things we know we need to do, even if we aren’t truly motivated?

The whole basis of psychographics in this article is that you can trust the actions more than the words. I’m not so sure that is true, or really even a “problem” per se. We are all human. We are inconsistent. We change our minds. We don’t do what we say we should, or we do things that we say we shouldn’t at times. It is part of being alive – it makes life interesting and frustrating. It’s not a bug in the system to be fixed by trickery.

(Side note: anyone that really digs into psychographics will tell you that it is more complex than it was in 2013, but I don’t really have the stomach to go any more complex than that.)

So is it really fair and accurate to do this kind of profiling on people? At certain levels, I get that companies need to understand their customers. But they already have focus groups and test cases and even comment cards to gather this data from. If they don’t think they are getting accurate enough information from those sources, why would they think they could get even more accurate information from trickier methods? Either way, all words and actions come from the same human brain.

Look at the example of what to do with psychographics in marketing in the 2013 article. That whole section is all about tricking a person to buy a product, via some pretty emotionally manipulative methods. I mean, the article flat out tells readers to use a customer’s kids to sell things to them: “Did she love the one about the smiley-face veggie platters for an after-school snack? Give her more ways to help keep her kids eating well.”

Really?

What about just giving her the options of what you sell and what they are for, and let her decide what she needs?

And what if she starts showing some signs of self-destructive behavior? If the psychographics are run by AI… will it recognize that and back off? Or will it only see those behaviors as signs of what to sell, and start pushing items to that person that they don’t need? Do you think this hasn’t already happened?

Maybe I am way off base comparing psychographics to profiling and emotional manipulation. I don’t think I am. But if there is even a chance that I am not off base, what should our reaction be as a society? I don’t want to go overboard and even go so far as to get rid of customer surveys and feedback forms. If a company gives me a form designed in a way that lets me tell them what kind of ads I want to see, I wouldn’t mind that. Well, in the past I wouldn’t have minded. After Cambridge Analytica, I would want believable assurances that they would stick with what I put in the form and not try to extrapolate between the lines. I would want assurance they aren’t doing… well… anything that Cambridge Analytica did. [Insert long list of ethical violations here]

But would most companies self-regulate and stay within ethical limits with all that data dangling in front of them? Ummmmm… not so sure. We may need to consider legislating ethical limits on these activities, as well as outright banning others that prove too tempting. And then we need to figure out how to keep the government in line on these issues as well. Just because Cambridge Analytica and Facebook are in the hot seat this week, that doesn’t mean some government department or agency won’t be in that same seat tomorrow.