So You Want to Go Online: OPMs vs In-House Development

As the Great OPM Controversy continues to rage, a lot is being said about developing online courses “in-house” (by hiring people to do the work rather than paying a company to do so). This is actually an area that I have a lot of experience in at various levels, so I wanted to address the pros and cons of developing in-house capacity for offering online programs. I have been out of the direct instructional design business for a few years, so I will be a bit rusty here and there. Please feel free to comment if I miss anything or leave out something important. However, I still want to take a rough stab at a ballpark list of what needs consideration. First, I want to start with three given points:

  1. Everything I say here is assuming high-quality online courses, not just PowerPoints and Lecture Capture plopped online. But on the other hand, this is also assuming there won’t be any extra expenses like learning games or chat-bots or other expensive toys… errr… tools.
  2. In most OPM models, universities and colleges still have to supply the teachers, so that cost won’t be dealt with here, either. But make sure you are accounting for teacher pay (hopefully full time teachers more than adjuncts, and not just adding extra courses to faculty with already over-full loads).
  3. All of these issues I discuss are within the mindset of “scaling” the programs eventually to some degree or another, but I will get to the problems with scale later.

So the first thing to address is infrastructure, and I know there is a wide range of capacities here. Most universities and colleges have IT staff and support staff for things like email and campus computers. If you have that, you can hopefully build off of it. If you don’t… well, the OPM model might be the better route for you, as you are so far behind that you have to catch up with society, not just online learning. But I know most places are not in this boat. Some even already have technology and support in place for online courses – so you can just skip this part and talk directly with those people about their ability to support another program.

You also have to think about the support of technology, usually the LMS and possibly other software. If you have this in place, check to make sure the existing tools have capacity to take on more (they usually have some). If you have an IT department, talk with them about what it would take to add an LMS and any other tools (like data analysis tools) you would like to add. If you are only talking about one online program, you probably don’t need even one full-time position to support what you need initially. That means you can make this a win/win for IT by helping them get that extra position for the ____ they have been wanting for a while, if they can also share that position with online learning technology support part-time.

This is, of course, for a self-hosted LMS. All of the LMS providers out there will offer to host for you, and even provide support. It does cost, but shop around and realize there are vendors that will give you good service for a good price. But there are also some that won’t deal with you at all if you are not bringing a large number of courses online initially, so be careful there.

Then there is support for students and teachers. Again, this is something you can bundle from most LMS providers, or contract individually from various companies. If you already have student and faculty tech support of some kind on campus, talk with them to see what it would take to support __ number of new students in __ number of new online courses. They will have to increase staff, but since they often train and employ student workers to answer the calls/emails, this is also a win/win for your campus to get more money to more students. Assuming your campus fairly treats and pays its student workers, of course. If not, make sure to fix that ASAP. But keep in mind that this can be done for the cost of hiring a few more workers to handle increased capacity and then paying to train everyone in support to take online learning calls.

Then there will be the cost of the technology itself. Typically, this is the LMS cost plus other tools and plug-ins you might want to add in (data analytics, plagiarism detection, etc). Personally, I would say to avoid most of those bells and whistles at the beginning. Some of them – like plagiarism detection – are surveillance-minded and send the wrong message to learners. Hire some quality instructional designers (I’ll get to that in a minute) and you won’t even need to use these tools. Others like data analytics might be of use down the line, but you might also find some of the things they do underwhelming for the price. With the LMS itself, note that there are other options like Domain of One’s Own that can replace the LMS with a wider range of options for different teachers and students (and they work with single sign-on as well). There are also free open-source LMSs if you want to self-host. Then there are less expensive and more expensive LMS providers. Some will allow you to have a small contract for a small program with the option to scale; others want a huge commitment up front. Look around and remember: if it sounds like you are being asked to pay too much, you probably are.

So a lot of what I have discussed is going to vary in cost dramatically, depending on your needs and current capacity. However, if you remain focused on just what you need, and maybe sharing part of certain salaries with other departments to get part of those people’s time, and are also smart about scaling (more on that later), you are still looking at a cost that is in the tens of thousands range for what I have touched on so far. If you hit the $100k point, you are either a) over-paying for something, b) way behind the curve on some aspect, or c) deciding to go for some bells and whistles (which is fine if you need them or have people at your institution that want them – they usually cost extra with OPMs as well).

The next cost that almost anyone who wants to go online will need to pay, no matter what route you take, is course development. Many people think they can just get the instructors to do this – but just remember that the course will only be as good as their ability/experience in delivering online courses. You may find a few instructors that are great at it, but most at your school probably won’t fall into that category. I don’t say that as a bad thing in this context per se – most instructors don’t get trained in online course design, and even if they do, it is often specific to their field and not the general instructional design field. You will need people to make the courses, which is where OPMs usually come in – but in-house instructional designers can fill that role as well.

With an average of 6-8 months lead time with a productive instructor, a quality instructional designer can complete 2-3 quality 15-week online courses per semester. I know this for a fact, because as an instructional designer I typically completed 9 or so courses per year. And some IDs would consider that “slow.” More intense courses that are less ready to transition to online could take longer. But you can also break out of the 15-week course mindset when going online as well – just food for thought. If you are starting up a 10-course online program, you would probably want three instructional designers, with varying specialties. Why three IDs if just one could handle all ten courses in two years easily? Because there is a lot more to consider.

Once you start one online program, other programs will most likely follow suit fairly quickly. It almost always happens that way. So go ahead and get a couple more programs in the pipeline to get going once the IDs are ready. But you also need to build up and maintain infrastructure once you get those classes going. How do you fix design problems in the course? When do you revise general non-emergency issues? What about when you change instructors? And who trains all of these instructors on their specific course design? What about random one-off courses that want to go online outside of a program? Who handles course quality and accreditation? And so on. Quality, experienced instructional designers can handle all of these and more, even while designing courses. Especially if you get one that is a learning engineer or that at least specializes in learning engineering, because these infrastructural questions are part of their specialty.

The salary and benefits range of an instructional designer is between $50K and $100K a year, depending on experience and the cost of living where you are located. These are also positions that can work remotely if you are open to that – but you will want at least one on campus so they can talk to your students for feedback on the courses they are designing. But remote work is something to keep in mind, because you also have to consider the cost of finding an office and getting computers and equipment for each new person you want to hire (either as IDs or the other positions described). Also don’t forget about the cost of benefits like health care, which is pretty standard for full-time IDs.

Another aspect to keep in mind is accreditation – that will take time and people, but that will be the case even if you go with an OPM as well. You will need to pull in people from across the campus that have experience with this, of course – but you will also have to find people that can handle this aspect regardless of what model you choose. And it can be a doozy, just FYI.

Another aspect to consider is advertising. This is a factor that will always cost, unless you are focused solely on transitioning an existing on-campus program into an online one (and not planning on adding the online option to the on-campus one). But even then, if you want it to scale, you will need to advertise. Universities aren’t always the best at this. If yours is, then skip ahead. If not, you will need to find someone that can advertise your new program. Typically, this is where OPMs tend to shine. But it is also getting harder and harder to find those that will just let you pay for advertising separately from the entire OPM package.

I can’t really say what you need to spend here – but I will say to be realistic. Cap your initial courses at manageable amounts – not just for your instructors, but also for your support staff. I can’t emphasize enough that it is better to start off small and then scale up rather than open the floodgates from the beginning. Every course that I have seen that opens up the first offerings to massive numbers of students from the beginning has also experienced massive learner trauma. Don’t let companies or colleges gloss over those as “bumps in the road.” Those were actual people that were actually hurt by being that bump that got rolled over. Hurt that could have been avoided if you started small and scaled up at a manageable pace.

So while we are here, let’s talk scale. Scale is messy, no matter how you do it. Even going from one on-campus course to two on-campus courses has traditionally led to problems. All colleges have wanted to increase enrollments as much as possible since the beginning of academia, so it’s not like OPMs were the first to talk about or try scale. However, we need to be real with ourselves about scale and the issues it can cause.

First of all, not all programs can scale. Nursing programs scale immensely because the demand for nurses is still massive. Also, nurses work their tails off, so Nursing instructors often personally take care of many problems of scale that some business models cause. I’m still not sure if the OPMs involved in those programs have even realized that is true yet. But not all programs can scale like a Nursing program can. Not all fields have the demand that Nursing does. Not all fields have people with the mindset that Nurses have (no offense hopefully, but many of you know it’s true and it’s okay – I’m not sure if Nurses ever sleep).

All that to say – if you are not in Nursing, don’t expect to scale like Nursing can. It’s okay. Just be realistic about it. Also, be honest about any problems that are happening. Glossing over problems will only cause more problems in no time. Always have your foot on the brake, ready to stop the scaling before issues spiral out of hand.

Remember: education is a human endeavor, and people don’t react well to being herded like cattle. I feel like I have only touched the surface and left out so much, but I am as tired of typing as you probably are of reading. Hopefully this is giving some food for thought for the people that have been wondering about in-house program development.

So why go with in-house development rather than an OPM? Well, I have been making the case for the cost-saving benefits plus the capacity-building benefits as well. Recently I read about an OPM that wanted to charge $600,000 to build one 10-course program. All that I have outlined here, plus the stuff I left out, would easily come in at half of that for a high-quality program. And I am one of those people that usually advocates for how expensive online courses can be to do right. But even I am thinking “Whoa!” at $600K.

Look, if you are wanting to build a program in a field like Nursing that can realistically scale, and you want to deal with thousands of students being pushed through a program (along with all the massive problems that will bring), then you are probably one of five schools in the nation that fit that description and OPMs are probably the best bet for you. For the other 3000-4000+ institutions in the nation, here are some other factors to consider:

  • Hiring people usually means some or all of those people will live in your community, thus supporting local economies better.
  • Local people means people on your campus that can interact with your students and get their input and perspective.
  • Having your people do things also typically means more opportunities to hire students as GTAs, GRAs, assistants, etc – giving them real world skills and money for college.
  • When your academics and your GRAs are part of something, they usually research it and publish on it. The impact on the global knowledge arena could be massive, especially if you publish with OER models.
  • Despite what some say, HigherEd is constantly evolving. Not as fast as some would like, but it is happening. When the next shift happens, you will have the people on staff already to pivot to that change. If not, that will be another expensive contract to sign with the next OPM.

The last point I can’t emphasize enough. When the MOOC craze took off, my current campus turned to some of its experienced IDs – myself and my colleague Justin – to put a large number of MOOCs online. Now that analytics and AI are becoming more of a thing in education (again), they are turning to us and other IDs and people with Ed-Tech backgrounds on campus as well. For people that went the OPM route, these would all be more (usually expensive) contracts to buy. For our campus, it means turning to the people they are already paying. I don’t know what else to say to you if that doesn’t speak for itself.

Also, keep in mind that those who are not in academia don’t always understand the unique things that happen there. Recently I saw a group of people on Twitter upset about a college senior that couldn’t graduate because the one course they needed wasn’t offered that semester. The responses to this scenario are ones that many in academia are used to hearing: “bet there is a simple software fix for this!” “what a money making scam!” “if only they cared to treat the student like a customer, they wouldn’t make this happen!” The implication is that the problem was on the University’s side for not caring about course scheduling enough to make graduation possible.

Most people in academia are rolling their eyes at this – it is literally impossible for schools to get programs accredited if they don’t prove that they have created a pathway for learners to graduate on time. It makes good business sense that not all courses can be offered every semester, just like many businesses do not sell all products year round (especially restaurants). Plus, most teachers will tell you it is better to have 10 students in a course once a year than 2-3 students every semester – more interaction, more energy, etc. But schools literally have to map out a pathway for these variable offerings to work in order to just get the okay for the courses in the first place.

Those of us in academia know this, but it seems that, based on what I saw on Twitter recently, many in the OPM space do not. We also know that there is always that handful of students that ignore the course offering schedules posted online, the advice of their advisers, and the warnings of their instructors because they think they can get the world to bend to their desires. I remember in the 90s telling two classmates they wouldn’t graduate on time if they weren’t in a certain class with me. They scoffed, but it turns out they in fact did not graduate on time. So something to keep in mind – outside perspectives and criticism can be helpful, but they can also completely misunderstand where the problems actually lie.

And look, I get it – there will always be institutions that prefer to get a “program in a box” for one fee, no matter how large it is. If that is you, then more power to you. There are a few things I would ask if you go the OPM route: first of all, please find a way to be honest and open about the pros and cons of working with your OPM. They may not like it, but a lot of the backlash that OPMs are currently facing comes from people just not buying the “everything is awesome” line so many are pushing. The education world needs to know your successes as well as your failures. Failure is not a bad thing if you grow from it. Second, please keep in mind that while the “in-house” option looks expensive and complicated, going the OPM route will also be expensive and complicated. They can’t choose your options for you, so all the meetings I discuss here will also happen within an OPM model, just with different people at the table. So don’t get an inflated ego thinking you are saving time or money going that route. Building a company is much different from building a degree program, so don’t buy into the logic that they are saving you start-up funds. They had to pay for a lot of things as a for-profit company that HigherEd institutions never have to pay for.

Finally, though, I will point out that you can also still sign contracts with various vendors for various parts of your process while still developing in-house, like many institutions have for decades. This is not always an all-or-nothing, either/or situation (see the response from Matthew Rascoff here for a good perspective on that, as well as Jonathan D. Becker’s response at the same link as a good case for in-house development). There are many companies in the OPM space that offer quality a la carte services for a good price, like iDesign and Instructional Connections. Like I have said on Twitter, I would call those OPS (Online Program Support) more than OPM. It’s just that the term probably won’t catch on. I have also heard the term OPE for Online Program Enablers, which probably works better.

The Great OPM Controversy

So if you have been following OPMs for a while, you are probably asking yourself “which particular controversy are you referring to?” Good point. Over the past week, there has been some controversy over an article by Kevin Carey that takes a harsh look at the pricing and income from online courses, especially related to OPMs. I took issue with the way the article throws all OPMs into the same bucket – Carey mentions 2U and iDesign in the same sentence, but doesn’t cover the massive differences between the two companies. Personally, I have concerns over even labeling companies like iDesign as OPMs, because they don’t offer to take over the entire online program creation process. They serve more as a specialty service for contract, a type of company that has existed for a long time in HigherEd and that adds great value when priced right.

(also, full disclosure: I have worked for iDesign in the past as a part-time side gig, and still would if their current employment model allowed for work on nights and weekends).

Carey also falls for the assumption that online courses should be cheaper, something that Matt Reed effectively discusses in his own response (just ignore where he briefly falls into the “MOOC attrition rate” misunderstanding). Despite these two points of disagreement, Carey does raise some legitimate hard questions about OPMs that we as a field should discuss.

Of course, with all of this attention, 2U was bound to respond. Today their CEO Chip Paucek wrote an article for Inside HigherEd. While I am glad that Paucek wants to have a constructive dialogue, there were problems with his response as well. Paucek starts off (after selling his company some) by stating that any real conversation about cost or value in online education has to be “grounded” in four specific principles: quality, access, outcomes, and sustainability (personally, I would add ethics and privacy concerns as well). But those are four good ones, and Paucek states that Carey’s article did not focus on those.

Okay, the quality aspect – as related to costs – he did miss. But access, outcomes, and sustainability are all important aspects of the cost of online education – and by addressing cost, Carey is also focusing on those three aspects. I think it would be more accurate to say that Paucek felt that Carey did not focus on those aspects the way he wanted him to. They were still there, just not in a format that Paucek recognized, maybe? Hard to say. But I felt that point was too forced in Paucek’s response. You can’t separate any discussion of cost from access, outcomes, and sustainability.

Paucek goes on to point out that face-to-face returning students typically have to quit their job and lose income to get a degree while still paying for living expenses. Which is still the case in some places, but not as much as it used to be. For example, I earned my Ph.D. while still working full time because the traditional on-campus program I was a part of adjusted their courses to be on nights and weekends. But the point by Paucek is:

Most master’s and doctorate-level students are working adults who historically had to quit a job, and often move, in order to attend a top-tier university for graduate school…. the average actual cost and debt burden of attending a 2U-powered program is significantly less once you factor in ongoing income and the room and board savings, which in some cities can be as high as 25 to 40 percent of tuition.

Which is true – for all online programs. If the 2U partner schools had built their own online programs, this statement would still be true. It’s a bit disingenuous for an OPM to claim a historical benefit of all online / distance education as their own like this. It would be like a website designer claiming they personally are saving clients money by using WordPress, even though WordPress was free long before they started a web design company. Paucek also does this again by claiming that, on average, their partner programs’ students “are more diverse from both a race and gender perspective than students in comparable on-campus programs.” Again, that was typically true of many online programs long before OPMs came along.

Paucek also goes on the attack against schools that want to build in-house capability for online programs, because he sees this as being wasteful of institutional funds. This is partially true and partially not true. Paucek’s point is that

“…it’s also critical to discuss whether it’s reasonable, rational and appropriate for that investment and risk capital to be shouldered exclusively by schools or in collaboration with a strategic partner like 2U…. each one of our program partners would need to invest their own scarce capital and hire in-house talent to expertly deliver what we deliver.”

Yes, it is true that it takes a lot of money to build online programs in-house. But it also takes a lot of money to hire an OPM like 2U. However, here is the counterpoint: you can hire people that are already experts in online course design, online program management, accessibility, privacy, cybersecurity, etc. You don’t have to start from scratch even if you go the in-house route. I know this, because I am one of many, many experts out there that has the ability to do so. And we are not as expensive as one would think :)

And while Paucek tried to make it seem like it takes 10 years and nearly a billion dollars to develop a quality online program, the truth is that a lot of that went towards building a company – which is different from building an online program. Yes, it does take a lot of time and money to build a quality online program, but it takes a whole lot more to build a national / international company – and those are mostly costs that HigherEd programs will not have to shoulder. To be cliche, it is comparing apples to oranges to make this point. There is some financial overlap between building an OPM and building an online program at an existing institution, but there is a lot that is extra to build a company from scratch.

There are also many other important benefits to building programs in-house that few are talking about. Usually, these programs are built in-house by hiring local talent, which helps local economies. Then there are all of the schools that hire GRAs, GTAs, student assistants of all kinds to help build and administrate and even teach the courses. This helps to empower students by giving them valuable life and employment skills of all kinds. Then there are all of the research articles, blog posts, think pieces, etc that various instructors, staff, and students produce while participating in the process. When these are published through OER models, the additions to the global knowledge space of online learning are immense. Some OPMs participate in some of these benefits, but many keep the whole process behind closed doors to protect proprietary processes and products.

Of course, Paucek’s overarching points that creating quality online courses is expensive, and that we need to have open conversations about the process, are both important. However, I am of the opinion that OPMs should not be the ones hosting this conversation (as Paucek suggests), as the points outlined in this article make apparent. We as the education community have been hosting it, and all disagreements aside, we have been doing a pretty good job of doing so.

Updating Types of Interactions in Online Education to Reflect AI and Systemic Influence

One of the foundational concepts in instructional design and other parts of the field of education is the set of interaction types that occur in the educational process online. In 1989, Michael G. Moore first categorized three types of interaction in education: student-teacher, student-student, and student-content. Then, in 1994, Hillman, Willis, and Gunawardena expanded on this model, adding student-interface interactions. Four years later, Anderson & Garrison (1998) added three more interaction types to account for advances in technology: teacher-teacher, teacher-content, and content-content. Since social constructivist theory did not quite fit into these seven types of interaction, Dron decided to propose four more types of interaction in 2007: group-content, group-group, learner-group, and teacher-group. Some would argue that “student-student” and “student-content” still cover these newer additions, and to some degree that is true. But it also helps to look at the differences between these various terms as technology has advanced and changed interactions online – so I think the new terms are helpful. More recently, proponents of connectivism have proposed acknowledging patterns of “interactions with and learning from sets of people or objects [which] form yet another mode of interaction” (Wang, Chen, & Anderson, 2014, p. 125). I would call that networked with sets of people and/or objects.

The instructional designer within me likes to replace “student” with “learner” and “content” with “design” to more accurately describe the complexity of learners that are not students and learning designs that are not content. However, as we rely more and more on machine learning and algorithms, especially at the systemic level, we are creating new things that learners will increasingly be interacting with for the foreseeable future. I am wondering if it is time to expand this list of interactions to reflect that? Or is it long enough as it is?

So the existing ones I would keep, with “learner” exchanged for “student” and “design” exchanged for “content”:

  • learner-teacher (ex: instructivist lecture, learner teaching the teacher, or learner networking with teacher)
  • learner-learner (ex: learner mentorship, one-on-one study groups, or learner teaching another learner)
  • learner-design (ex: reading a textbook, watching a video, listening to audio, completing a project, or reading a website)
  • learner-interface (ex: web-browsing, connectivist online interactions, gaming, or computerized learning tools)
  • teacher-teacher (ex: collaborative teaching, cross-course alignment, or professional development)
  • teacher-design (ex: teacher-authored textbooks or websites, teacher blogs, or professional study)
  • group-design (ex: constructivist group work, connectivist resource sharing, or group readings)
  • group-group (ex: debate teams, group presentations, or academic group competitions)
  • learner-group (ex: individual work presented to group for debate, learner as the teacher exercises)
  • teacher-group (ex: teacher contribution to group work, group presentation to teacher)
  • networked with sets of people or objects (ex: Connectivism, Wikipedia, crowdsourced learning, or online collaborative note-taking)

The new ones I would consider adding include:

  • algorithm-learner (ex: learner data being sent to algorithms; algorithms sending communication back to learners as emails, chatbot messages, etc)
  • algorithm-teacher (ex: algorithms communicating aggregate or individual learner data on retention, plagiarism, etc)
  • algorithm-design (ex: algorithms that determine new or remedial content; machine learning/artificial intelligence)
  • algorithm-interface (ex: algorithms that reformat interfaces based on input from learners, responses sent to chatbots, etc)
  • algorithm-group (ex: algorithms that determine how learners are grouped in courses, programs, etc)
  • algorithm-system (ex: algorithms that report aggregate or individual learner data to upper level admin)
  • system-learner (ex: system-wide initiatives that attempt to “solve” retention, plagiarism, etc)
  • system-teacher (ex: cross-curricular implementation, standardized teaching approaches)
  • system-design (ex: degree programs, required standardized testing, and other systemic requirements)

Well… that gets too long. But I suspect that a lot of the new additions listed would fall under the job category of what many call “learning engineering” maybe? You might have noticed that it appears as if I removed “content-content” – but that was renamed “algorithm-design,” as that is mainly what I think of for “content-content.” But I could be wrong. I also left out “algorithm-algorithm,” as algorithms already interface with themselves and other algorithms by design. That is implied in “algorithm-design,” kind of in the same way I didn’t include learners interacting with themselves in self-reflection as that is implied in “learner-learner.” But I could be swayed by arguments for including those as well. I am also not sure how much “system-interface” interaction we have, as most systems interact with interfaces through other actors like learners, teachers, groups, etc. So I left that off. I also couldn’t think of anything for “system-group” that was different from anything else already listed as examples elsewhere. And I am not sure we have much real “system-system” interaction outside of a few random conversations at upper administrative levels that rarely trickle down into education without being vastly filtered through systemic norms first. Does it count as “system-system” interaction in a way that affects learning if the receiving system is going to mix it with their existing standards before approving and disseminating it first? I’m not sure.
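For what it is worth, here is a minimal sketch (my own hypothetical illustration, not anything from the cited literature) of how the combined taxonomy could be represented as a data structure – say, for tagging interaction events in an analytics pipeline. The actor names and the helper function are made up for illustration:

```python
from enum import Enum

class Actor(Enum):
    LEARNER = "learner"
    TEACHER = "teacher"
    DESIGN = "design"
    INTERFACE = "interface"
    GROUP = "group"
    NETWORK = "network"      # stand-in for "sets of people and/or objects"
    ALGORITHM = "algorithm"
    SYSTEM = "system"

# The 20 interaction types listed above, as (source, target) pairs.
INTERACTION_TYPES = {
    (Actor.LEARNER, Actor.TEACHER), (Actor.LEARNER, Actor.LEARNER),
    (Actor.LEARNER, Actor.DESIGN), (Actor.LEARNER, Actor.INTERFACE),
    (Actor.TEACHER, Actor.TEACHER), (Actor.TEACHER, Actor.DESIGN),
    (Actor.GROUP, Actor.DESIGN), (Actor.GROUP, Actor.GROUP),
    (Actor.LEARNER, Actor.GROUP), (Actor.TEACHER, Actor.GROUP),
    (Actor.NETWORK, Actor.NETWORK),   # networked with sets of people/objects
    (Actor.ALGORITHM, Actor.LEARNER), (Actor.ALGORITHM, Actor.TEACHER),
    (Actor.ALGORITHM, Actor.DESIGN), (Actor.ALGORITHM, Actor.INTERFACE),
    (Actor.ALGORITHM, Actor.GROUP), (Actor.ALGORITHM, Actor.SYSTEM),
    (Actor.SYSTEM, Actor.LEARNER), (Actor.SYSTEM, Actor.TEACHER),
    (Actor.SYSTEM, Actor.DESIGN),
}

def is_recognized(source: Actor, target: Actor) -> bool:
    """True if (source, target) is one of the 20 types above."""
    return (source, target) in INTERACTION_TYPES

assert is_recognized(Actor.ALGORITHM, Actor.LEARNER)
assert not is_recognized(Actor.SYSTEM, Actor.SYSTEM)  # deliberately left off
```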

While many people may not even see the need for the new ones covered here, please understand that these interactions are heavily utilized in surveillance-focused Ed-Tech. Of course, all education utilizes some form of surveillance, but to those Ed-Tech sectors that make it their business to promote and sell surveillance as a feature, these are interactions that we need to be aware of. I would even contend that these types of interaction are more important behind the scenes of all kinds of tech than many of us realize. So even if you disagree with this list, please understand that these interactions are a reality.

So – that is 20 types of interaction, with some more that maybe should have been included or not depending on your viewpoint (and I am still not sure we have advanced enough with “algorithm-interface” yet to give it its own category, but I think we will pretty soon). Someone may have done this already and I just couldn’t find it in a search – so I apologize if I missed others’ work. None of this is to say that any of these types of interactions are automatically good for learners just because I list them here – they just are the ones that are happening more and more as we automate more and more and/or take a systems approach to education. In fact, these new levels could be helpful in informing critical dialogue about our growing reliance on automation and surveillance in education as well.

Artificial Intelligence and Knowing What Learners Know Once They Have “Learned”

One of the side effects – good or bad – of our increasing utilization of Artificial Intelligence in education is that it brings to light all of the problems we have with knowing how a learner has “learned” something. This specific problem has been discussed and debated in Instructional Design courses for decades – some of my favorite class meetings in grad school revolved around digging into these problems. So it is good to see these issues being brought to a larger conversation about education, even if it is in the context of our inevitable extinction at the hands of our future robot overlords.

Dave Cormier wrote a very good post about the questions to ask about AI in learning. I will use that post to direct some responses mostly back to the AI community, as well as those utilizing AI in education. Dave ends up questioning a scenario that is basically the popular “Netflix for Education” approach to Educational AI: the AI perceives what the learners choose as their favorite learning resources by likes, view counts, etc, and then proposes new resources to specific learners to help them learn more, in the way Netflix recommends new shows to watch based on the popularity of other shows (which were connected to each other by popularity metrics as well).

This, of course, leads to the problem that Dave points out: “If they value knowledge that is popular, then knowledge slowly drifts towards knowledge that is popular.” Popular, as we all learn at some point, does not always equal good, helpful, correct, etc. However, people in the AI field will point out that they can build a system that relies on the expertise of experts and teachers in the field rather than likes, and I get that. Some have done that. But there is a bigger problem here.

Let’s back up to the part from Dave’s post about how AI accomplishes recommendations by simplifying the learners down to a few choices, much in the same way Netflix simplifies viewing choices down to a small list of genres. This is often true. However, this is true not because programmers wanted it that way – this is the model they inherited from education itself. Sure, it is true that in an ideal learning environment, the teacher talks to all learners and gets to make personal teaching choices for each one because of that. But in reality, most classes design one pathway for all learners to take: read this book, listen to these lectures, take this test, answer this discussion question while responding to two peers, wash, rinse, repeat.

AI developers know this, and to their credit, they are offering personalized learning solutions that at least expand on this. Many examinations of the problems with AI skip over this part and just look at ideal classrooms where learners and instructors have time to dig into individual learner complexities. But in the real world? Everyone follows the one path. So adding 7 or 10 or more options to the one that now exists (for most)? It’s at least a step in the right direction, right?

Depends on who you ask. But that is another topic for another day.

This is kind of where a lot of what is now called “personalized education” is at. I compare this state to all of those personalized gift websites, where you can go buy a gift like a mouse pad and get a custom message or name printed on it. Sure, the mouse pad is “personalized” with my name… but what if I didn’t need a mouse pad in the first place? You might say “well, there were only a certain set of gifts available and that was the best one out of the choices that were there.”

Sure, it might be a better gift than some plain mouse pad from Walmart to the person that needed a mouse pad. But for everyone else – not so much.

Like Dave and many have pointed out – someone is choosing those options and limiting the number of them. But to someone going from the linear choice of local TV stations to Netflix, at first that choice seems awesome. However, soon you start noticing the limitations of only watching something on Netflix. Then it starts getting weird. If I liked Stranger Things, I would probably like Tidying Up with Marie Kondo? Really?

The reality is, while people in the AI field will tell you that AI “perceives the learner and knowledge in a field,” it is more accurate to say that the AI “records choices that the learner makes about knowledge objects and then analyzes those choices to find patterns between the learner and knowledge object choices in ways that are designed to be predictive in some way for future learners.” If you just look at all that as “perceiving,” then you probably will end up with the Netflix model and all the problems that brings. But if you take a more nuanced look at what happens (it’s not “perceiving” as much as “recording choices” for example), and connect it with a better way of looking at the learner process, you will end up with better models and ideas.

So back to how we really don’t have that great of an idea of how learning actually happens in the brain. There are many good theories, and Stephen Downes usually highlights the best in emerging research in how we really understand the actual process of learning in the brain. But since there is still so much we either a) don’t know, or b) don’t know how to quantify and measure externally from the brain – then we can’t actually measure “learning” itself.

As a side note: this is, quite frankly, where most of the conversation on grading goes wrong. Grades are not a way to measure learning. We can’t stick a probe on people’s heads and measure a “learning” level in human brains. So we have to have some kind of external way to figure out if learning happens. As Dr. Scott Warren puts it: it’s like we are looking at a brick wall with a few random windows that really aren’t in the right spots, and we are trying to figure out what is happening on the other side of the wall.

Some people are clinging to the outmoded idea that brains are like computers: input knowledge/skills, output learning. Our brains don’t work like that. But unfortunately, that is often the way many look at the educational process. Instructors design some type of input – lectures, books, training, videos, etc – and then we measure the output with grades as way to say if “learning happened” or not.

The reality is, we technically just point learners towards something that they can use in their learning process (lectures, books, videos, games, discussions, etc), they “do” the learning, and then we have to figure out what they learned. Grades are a way to see how learners can apply what they learned to a novel artifact – a test, a paper, a project, a skill demonstration, etc. Grades in no way measure what students have learned, but rather how students can apply what they learned to some situation or context determined by someone else. That way – if they apply it incorrectly by, say, getting the question wrong – we assume they haven’t learned it well enough. Of course, an “F” on a test could mean the test was a poor way to apply the knowledge as much as it could say that the learner didn’t learn. Or that the learner got sidetracked while taking the test. Or, so on….

The learning that happens in between the choosing of the content/context/etc and the application of the knowledge gained on a test or paper or other external measurement is totally up to the learner.

So that is what AI is really analyzing in many designs – it is looking at what choices were made before the learning and what the learner was able to do with their learning on the other side, based on some external application of knowledge/skills/etc. We have to look at AI as something that affects and/or measures the bookends of the actual learning.

Rather than the Netflix approach to recommendations, I would say a better model to look to is the Amazon model of “people also bought this.” Amazon looks at each thing they sell as an individual object that people will connect in various ways to other individual objects – some connections that make sense, others that don’t. Sometimes people look at one item and buy other similar items instead, sometimes people buy items that work together, and sometimes people “in the know” buy random things that seem disconnected to newbies. The Amazon system is not perfect, but it does allow for greater individuality in purchasing decisions, and doesn’t assume that “because you bought this phone, you might also want to buy this phone as well because it is a phone, too.”

In other words, the Amazon model can see the common connections as well as the uncommon connections (even across their predefined categories), and let you the consumer decide which connections work for you or not. The Netflix model looks for the popular/common connections within their predefined categories.
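To make the distinction concrete, here is a minimal sketch of the co-occurrence idea behind “people also bought this” – a toy illustration with made-up resource names, not Amazon’s actual system:

```python
from collections import Counter, defaultdict

# Made-up learner histories: the resources each learner chose.
histories = [
    ["stats-intro", "spreadsheet-lab", "ethics-reading"],
    ["stats-intro", "spreadsheet-lab"],
    ["stats-intro", "art-of-data-viz"],   # an "uncommon" crossover choice
]

# Count how often two resources appear in the same learner's history,
# with no predefined categories limiting which links can surface.
co_counts = defaultdict(Counter)
for history in histories:
    for a in history:
        for b in history:
            if a != b:
                co_counts[a][b] += 1

def also_chose(item: str, n: int = 3):
    """'Learners who chose this also chose...' - common and uncommon links."""
    return co_counts[item].most_common(n)

print(also_chose("stats-intro"))
# -> [('spreadsheet-lab', 2), ('ethics-reading', 1), ('art-of-data-viz', 1)]
```

Note how the uncommon crossover still surfaces alongside the common links, leaving the final choice to the person browsing.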

I would submit that learners need ways to learn that can look at common learning pathways as well as uncommon pathways – especially across any categories we would define for them.

Of course, Amazon can collect data in ways that would be illegal (for good reason) in education, and the fact that they have millions of transactions each day means that they get detailed data about even obscure products in ways that would be impossible at a smaller scale in education. In no way should this come across as me proposing something inappropriate like “Amazon for Education!” The point I am getting at here is that we need a better way to look at AI in education:

  • Individuals are complex, and all systems need to account for complexity instead of simplifying for the most popular groups based on analytics.
  • AI should not be seen as something that perceives the learner or their knowledge or learning, but as something that collects incomplete data on learners’ choices.
  • The goal of this collection should not just be to perceive learners and content, but to understand complex patterns made by complex people.
  • The categories and patterns selected by the creators of AI applications should not become limitations on the learners within that application.
  • While we have good models for how we learn, the actual act of “learning” should still be treated as a mysterious process (until that changes – if ever).
  • AI, like all education, does not measure learning, but how learning that occurred mysteriously in the learner was applied to an external context or artifact. This will be a flawed process, so the results of any AI application should be viewed within the bias and flaws created by the process.
  • The learner’s perception of what they learned and how well they were able to apply it to the external context/artifact is mostly ignored or discarded as irrelevant self-reported data, and that should stop.

The Artificial Component of Artificial Intelligence and the C-3PO Rule

There have been many think pieces and twitter threads examining how the “intelligence” component of “Artificial Intelligence” is not real intelligence, or at least not anything like human intelligence. I don’t really want to jump into the debate about what counts as real intelligence, but I think the point about AI not being like human intelligence should be obvious in the “artificial” component of the term. To most people, it probably is – when discussing the concept of AI in an overall sense at least.

Nobody thinks you have to mow artificial grass. No one would take artificial sweetener and use it in all of the same cooking/baking applications that they would with sugar. By calling something “artificial,” we acknowledge that there are significant differences between the real thing and the artificial thing.

But like I said, most people would probably recognize that as true for AI. The problem usually comes when companies or researchers try to make it hard to tell if their AI tool/process/etc is human or artificial. Of course, some are researching if people can tell the difference between a human and their specific AI application (that they created without any attempt to specifically make it deceptively human), and that is a different process.

Which, of course, brings up some of the blurry lines in human/machine interface. Any time you have a process or tool or application that is designed for people to interface with, you want to make sure it is as user-friendly as possible. But where is the line between “user-friendly” and “tricking people into thinking they are working with a human”? Of course there is a facet of intent in that question, but beyond intent there are also unintended consequences of not thinking through these issues fully.

Take C-3PO from Star Wars. I am sure that the technology exists in the Star Wars universe to create robots that look like real humans (just look at Luke’s new hand in The Empire Strikes Back). But the makers of protocol droids like C-3PO still made them look like robots, even though they were protocol droids that needed to have near-perfect human traits for their interface. They made a choice to make their AI tool still look artificial. Yes, I know that ultimately these are movies and the filmmakers made C-3PO look the way he did just because they thought it was cool and futuristic looking. But they also unintentionally created something I would call the “C-3PO Rule” that everyone working with AI should consider: make sure that your AI, no matter how smoothly it needs to interface with humans, has something about it that quickly and easily communicates to those that utilize it that it is artificial.

What Does It Take to Make an -agogy? Dronagogy, Botagogy, and Education in a Future Where Humans Are Not the Only Form of “Intelligence”

Several years ago I wrote a post that looked at every form of learning “-agogy” I could find. Every once in a while I think that I probably need to do a search to see if others have been added so I can do an updated post. I did find a new one today, but I will get to that in a second.

The basic concept of educational -agogy is that, because “agogy” means “lead” (often seen in the sense of education, but not always), you combine who is being led or the context for the leading with the suffix. Ped comes from the Greek word for “children,” andr from “men,” heut from “self,” and so on. It doesn’t always have to be Greek (peeragogy, for example) – but the focus is on who is being taught and not what topic or tool they are being taught.

I noticed a recent paper that looks to make dronagogy a term: A Framework of Drone-based Learning (Dronagogy) for Higher Education in the Fourth Industrial Revolution. The article most often mentions pedagogy as a component of dronagogy, so I am not completely sure of the structure they envision. But it does seem clear that drones are the topic and/or tool, and only in certain subjects. Therefore, dronology would have probably been a more appropriate term. They are essentially talking about the assembly and programming of drones, not teaching the actual drones.

But someday, something like dronagogy may actually be a thing (and “someday” as in pretty soon someday, not “a hundred years from now” someday). If someone hasn’t already, soon someone will argue that Artificial Intelligence has transcended “mere” programming and needs to be “led” or “taught” more than “programmed.” At what point will we see the rise of “botagogy” (you heard it here first!)? Or maybe “technitagogy” (from the Greek word for “artificial” – technitós)?

Currently, you only hear a few people like George Siemens talking about how humans are no longer the only form of “intelligence” on this planet. While there is some resistance to that idea (because AI is not as “intelligent” as many think it is), it probably won’t be much longer before there is wider acceptance that we actually are living in a future where humans are not the only form of “intelligence” around. Will we expand our view of leading/teaching to include forms of intelligence that may not be like humans… but that can learn in various ways?

Hard to say, but we will probably be finding out sooner than a lot of us think we will. So maybe I shouldn’t be so quick to question dronagogy? Will drone technology evolve into a form of intelligence someday? To be honest, that just sounds like a Black Mirror episode that we may not want to get into.


What if We Could Connect Interactive Content Like H5P to Artificial Intelligence?

You might have noticed some chatter recently about H5P, which can create interactive content (videos, questions, games, etc) that works in a browser through html5. The concept seems to be fairly similar to the E-Learning Framework (ELF) from APUS and other projects started a few years ago based on html5 and/or jquery – but those seem to mostly be gone or kept secret. The fact that H5P is easily shareable and open is a good start.

Some of our newer work on self-mapped learning pathways is starting to focus on how to build tools that can help learners map their own learning pathway through multiple options. Something like H5P will be a great tool for that. I am hoping that the future of H5P will include ways to harness AI to mix and match content beyond what most groups currently do with html5.

To explain this, let me take a step back and look at where our current work with AI and chatbots sits, and point to where this could all go. Our goal right now is to build branching tree interfaces and AI-driven chatbots to help students get answers to FAQs about various courses. This is not incredibly ground-breaking at this point, but we hope to take this in some interesting directions.

So, the basic idea with our current chatbots is that you create answers first and then come up with a set of questions that serve as different ways to get to that answer. The AI uses Natural Language Processing and other algorithms to take what is entered into a chatbot interface and match the entered text with a set of questions:

Diagram 1: Basic AI structure of connecting various question options to one answer. I guess the resemblance to a snowflake shows I am starting to get into the Christmas spirit?

You put a lot of answers together into a chatbot, and the oversimplified way of explaining it is that the bot tries to match each question from the chatbot interface with the most likely answer/response:

Diagram 2: Basic chatbot structure of determining which question the text entered into the bot interface most closely matches, and then sending that response back to the interface.

This is our current work – putting together a chatbot fueled FAQ for the upcoming Learning Analytics MOOCs.
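To make that matching step a bit more concrete, here is a toy sketch of the idea using TF-IDF cosine similarity – a simplified stand-in for the NLP our actual bot uses, with made-up FAQ data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical FAQ data: one answer, many question phrasings (Diagram 1).
faq = {
    "The first MOOC opens for enrollment on January 14.": [
        "when does the course start",
        "what is the start date",
        "when can I begin the mooc",
    ],
    "Yes, the course is free; a paid certificate is optional.": [
        "how much does the course cost",
        "is the course free",
        "do I have to pay for a certificate",
    ],
}

# Flatten into parallel lists: many questions map to each answer.
questions, answers = [], []
for answer, variants in faq.items():
    for question in variants:
        questions.append(question)
        answers.append(answer)

vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)

def respond(user_text: str) -> str:
    """Return the answer whose question set best matches the text (Diagram 2)."""
    similarity = cosine_similarity(vectorizer.transform([user_text]), question_vectors)
    return answers[similarity.argmax()]

print(respond("hi there - when will the mooc begin?"))
# -> "The first MOOC opens for enrollment on January 14."
```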

Now, we tend to think of these things in terms of “chatting” and/or answering questions, but what if we could turn that on its head? What if we started with some questions or activities, and the responses from those built content/activities/options in a more dynamic fashion using something like H5P or conversational user interfaces (except without the part that tries to fool you that a real person is chatting with you)? In other words, what if we replaced the answers with content and the questions with responses from learners in Diagram 1 above:

Diagram 3: Basic AI structure of connecting learners’ responses to content/activities/learning pathway options.

And then we replaced the chatbot with a more dynamic interactive interface in Diagram 2 above:

Diagram 4: Basic example of connecting learners with content/activity groupings based on learner responses to prompts embedded in content, activities, or videos.

The goal here would be to not just send back a response to a chat question, but to build content based on learner responses – using tools like H5P to make interactive videos, branching text options, etc. on the fly. Of course, most people see this and think of how it could be used to create different ways of looking at content in a specific topic. Creating greater diversity within a topic is a great place to start, but there could also be bigger ways of looking at this idea.
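As a hypothetical sketch of that inversion, the same matching machinery can be pointed at content instead of answers: each content/activity grouping is described by the kinds of learner responses that should route to it (all pool names and descriptions below are made up):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical pools: responses replace questions, content replaces answers
# (the inverted Diagram 1, as in Diagrams 3 and 4).
pools = {
    "interactive-video-walkthrough": "show me how this works step by step",
    "primary-source-reading-pack": "I want to read the original documents myself",
    "branching-scenario-activity": "let me try it out and make my own choices",
}
labels = list(pools)

vectorizer = TfidfVectorizer()
pool_vectors = vectorizer.fit_transform(pools.values())

def next_content(learner_response: str) -> str:
    """Route a learner's response to the closest content/activity grouping."""
    similarity = cosine_similarity(vectorizer.transform([learner_response]), pool_vectors)
    return labels[similarity.argmax()]

print(next_content("I would rather read the original letters and documents"))
# -> "primary-source-reading-pack"
```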

For example, you could take a cross-disciplinary approach to a course and use a system like this to come up with ways to bring in different areas of study. Instead of using the example in Diagram 4 above to add greater depth to a History course, what if it could be used to tap into a specific learner’s curiosities to, say, bring in some other related cross-disciplinary topics:

Diagram 5: Content/activity groupings based on matching learner responses with content and activities that connect with cross disciplinary resources.

Of course, there could be many different ways to approach this. What if you could also utilize a sociocultural lens with this concept? What if you have learners from several different countries in a course and want to bring in content from their contexts? Or you teach in a field that is very U.S.-centric that needs to look at a more global perspective?

Diagram 6: Content/activity groupings based on matching learner responses with content and activities that focus on different countries.

Or you could also look at dynamic content creation from an epistemological angle. What if you had a way to rate how instructivist or connectivist a learner is (something I started working on a bit in my dissertation work)? Or maybe even use something like a Self-Regulated Learning Index? What if you could connect learners with lessons and activities closer to what they prefer or need based on how self-regulated, connectivist, experienced, etc they are?

Diagram 7: The content/activity groupings above are based on a scale I created in my dissertation that puts “mostly instructivist” at 1.0 and “mostly connectivist” at 2.0.

You could also even look at connecting an assignment bank to something like this to help learners get out-of-the-box ideas for how to prove what they have been learning:

Diagram 8: Content/activity groupings based on matching learner responses with specific activities they might want to try from an assignment bank.

Even beyond all of this, it would be great to build a system that allows for mixes of responses to each prompt rather than just one (or even systems that allow you to build on one response with the next one in specific ways). The red lines in the diagrams above represent what the AI sees as the “best match,” but what if they instead indicated what percentage of the content should come from which content pool? The cross-disciplinary image above (Diagram 5) could move from just picking “Art” as the best match to making a lesson that is 10% Health, 20% History, 50% Art, and so on. Or the first response would be some related content on “Art,” then another prompt would pull in a bit from “Health.”
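A quick sketch of what that percentage idea could look like (hypothetical numbers, building on the matching sketches above): instead of taking the single “best match,” normalize the similarity scores into mixing proportions for the generated lesson:

```python
import numpy as np

# Hypothetical similarity scores between one learner's responses and four
# content pools (e.g. cosine similarities from the sketches above).
pools = ["Health", "History", "Art", "Math"]
scores = np.array([0.10, 0.20, 0.55, 0.05])

# Instead of argmax (the single red-line "best match"), normalize the
# scores into the proportion of the lesson drawn from each pool.
weights = scores / scores.sum()
for pool, weight in zip(pools, weights):
    print(f"{pool}: {weight:.0%} of the lesson")
# Health: 11%, History: 22%, Art: 61%, Math: 6%
```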

Then the even bigger question is: can these various methods be stacked on top of each other, so that you are not forced to choose a sociocultural or an epistemological lens separately, but the AI could look at both at once? Probably so, but would a tool to create such a lesson be too complex for most people to practically utilize?
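
One low-complexity way to sketch that stacking – with all items and scoring rules invented for the example – would be to let each lens score every candidate item and then multiply the scores, so an item has to suit both lenses to rank highly:

```python
# Sketch: "stack" two lenses by multiplying their per-item scores.
# The items and the scoring rules are hypothetical placeholders.

items = ["Turkish art history unit", "Brazilian public health unit"]

def sociocultural(item: str) -> float:
    # e.g., how well the item matches this learner's cultural context
    return 0.8 if "Turkish" in item else 0.4

def epistemological(item: str) -> float:
    # e.g., how well the item fits this learner's 1.0-2.0 scale score
    return 0.5 if "art" in item else 0.9

ranked = sorted(items, key=lambda i: sociocultural(i) * epistemological(i), reverse=True)
print(ranked[0])  # the Turkish art unit wins once both lenses are weighed together
```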

Of course, something like this is ripe for bias, so that would have to be kept in mind at all stages. I am not sure exactly how to counteract that, but hopefully building more options into the system for learners to choose from would start to expose and deal with it. We would also have to be careful not to turn this into some kind of surveillance system that watches learners’ every move. Many AI tools are very unclear about what they do with your data. If students have to worry about their data being misused by the very tool that is supposed to help them, that will cause stress that is detrimental to learning. So in order for something like this to work, openness and transparency about what is happening – as well as giving learners control over their data – will be key to building a system that learners can actually use and trust.

Social Presence, Immediacy, Virtual Reality, and Artificial Intelligence

While doing some research for my current work on AI and chatbots, I was struck by how hard some people are trying to use bots to fool people into thinking they are really humans. This seems to be a problematic road to go down, especially since people are not necessarily against interacting with non-human agents (like those of us who prefer to get basic information, such as bank account balances, over the phone from a machine rather than bother a human). At their core, I think these efforts are really aimed at humanizing those tools, which is not a bad aim. I just don’t think we should ever get away from openness about who – or what – we are having learners interact with.

I was reminded of Second Life (remember that?) and how we used to question why some people would build traditional structures like rooms and stairs in spaces where avatars could fly. At the time, mocking those people as not really “understanding” Second Life was the “cool, hip” thing to do. However, I am wondering if maybe there was something to their approach that we missed?

Concepts like social presence and immediacy have fallen out of the limelight in education, but they still have immense value (and thankfully, many people still promote them). We need something in our educational efforts, whether in classrooms or online at a distance, that connects us to other learners in ways that we can feel, sense, and connect with. What if one way of doing that is by creating human-based structures in our virtual/digital interactions?

I’m not saying to ditch anything experimental and just recreate traditional classroom simulations in virtual reality, or re-enact standard educational interactions with chatbots. But what if incorporating some of those familiar elements could help bring about more of a human element?

To be honest, I am not sure where the right “balance” of these two concepts would be. If I enter a virtual reality space that is just like a building in real life, I will probably miss out on the affordances of exploration that virtual reality could bring to the table. But if I walk into some wild, trippy learning space that looks like a foreign planet to me, I will have to spend more time figuring out how things work than actually learning about the topic I am interested in. I would also feel a bit out of contact with humanity if there is little to tie me back to what I am used to in real life.

The same could be said about the interactions we are designing for AI and chatbots. On one hand, we don’t need to mimic the status quo in the physical world just because it is what we have always done. But we also don’t need to do things that are way out there just because we can, either. Somewhere there is probably a happy medium: humanizing these technologies enough for us to connect with them (without trying to trick people into thinking they are humans), while still not replicating everything we already know just because that is what we know. I know some Social Presence Theory people would balk at applying those concepts to technology, but I am thinking more of how we can use those concepts to inform our designs – just in a more meta fashion. Something to mull over for now.

Modernizing Websites: HTML5, IndieWeb, and More?

On and off for the past few weeks, I have been looking into modernizing some of my websites with things like HTML5 and IndieWeb. The main goal of this experimentation was to improve the LINK Research Lab web presence by getting some WebMention magic going on our website. The bonus is that I get to experiment with some of these on my own website before moving them onto a real website for the LINK Lab. I had to make sure they didn’t blow things up, after all.

However, the problem with that plan was that my website was running on a cobbled-together WordPress theme that was barely holding together and looking dated. I was looking for a nice theme to switch over to quickly, but not having much success. Then I remembered that Alan Levine had a sweet-looking HTML5 theme (wp-dimension). One weekend I gave it a whirl, and I think we have a winner.

The great thing about Cog Dog’s theme is that it has a simple initial interface for those that want a quick look at my work, while still letting people dig deeper into any topic they want. I had to download and delete all of the blog posts that were already on my website, as the theme turns blog posts into the quick-look posts on the front page. Those old posts were just FeedWordPress copies of every post I wrote on this blog – so no need to worry about losing anything. Overall, it is a great, easy-to-use theme that I highly recommend for anyone wanting to create a professional website fast.

Much of my current desire to update websites came from reading Stephen Downes’ post on OwnYourGram – a service that lets you export your Instagram files to your own website. To be honest, the IndieWeb part of the OwnYourGram website was just not working for me until I found the actual IndieWeb plugins for WordPress. When in doubt, look for the plugins that already exist. I added those, and it all finally worked great. I did find that the posts it imported don’t work that well with many WordPress themes (Instagram posts don’t have titles, but many WordPress themes ignore posts without titles – or render them strangely on the front page). So I still need to tinker with that.

The part I became most interested in was how IndieWeb features like WebMentions can help you connect with conversations about your content on other websites (and also social media). That will probably be the most interesting feature to start using on this website, and on the LINK Lab website as well. So now that I have it figured out, time to get it set up before it all changes :) I’m just digging into this after being a fan from afar for a while, so let’s see what else is out there.
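
For those curious what those plugins are doing under the hood, here is a rough Python sketch of sending a WebMention the way the W3C spec describes it: discover the endpoint the target page advertises, then POST your URL (source) and theirs (target) to it. The URLs are placeholders, and the endpoint discovery is deliberately simplified (a real implementation handles relative URLs and more HTML variations):

```python
import re
import requests

def find_endpoint(target: str) -> str | None:
    """Look for a rel="webmention" endpoint in the Link header or the HTML."""
    resp = requests.get(target, timeout=10)
    # HTTP Link header, e.g.: <https://example.com/endpoint>; rel="webmention"
    match = re.search(r'<([^>]+)>;\s*rel="?webmention"?', resp.headers.get("Link", ""))
    if not match:  # fall back to a <link>/<a> element in the page body
        match = re.search(r'rel="webmention"[^>]*href="([^"]+)"', resp.text)
    return match.group(1) if match else None

def send_webmention(source: str, target: str) -> int:
    """Notify the target page that the source page mentions it."""
    endpoint = find_endpoint(target)
    if endpoint is None:
        raise ValueError("target does not advertise a webmention endpoint")
    return requests.post(endpoint, data={"source": source, "target": target}).status_code

# send_webmention("https://mysite.example/my-reply", "https://othersite.example/their-post")
```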

Are MOOCs Fatally Flawed Concepts That Need Saving by Bots?

MOOCs are a problematic concept, as are many things in education. Using bots to replicate various functions in MOOCs is also a problematic concept. Both MOOCs and bots seem to go in the opposite direction of what we know works in education (smaller class sizes and more interaction with humans). So, teaching with either or both concepts will open the doors to many different sets of problems.

However… there are also even bigger problems that our society is imposing on education (at least in some parts of the world): defunding of schools, lack of resources, and eroding public trust being just a few. I don’t like any of those, and I will continue to speak out against them. But I also can’t change them overnight.

So what do we do with the problems of fewer resources, fewer teachers, more students, and more information to teach as the world gets more complex? Some people like to focus solely on fixing the systemic issues causing these problems. And we need those people. But even once they do start making headway… it will still be years before education improves from where it is. And how long until we even start making headway?

The current state of research into MOOCs and/or bots is really about dealing with the reality of where education is right now. Despite there being some larger, well-funded research projects into both, the reality is that most of this research consists of very low (or no) budget attempts to learn something about how to create some “thing” that can help a shrinking pool of teachers educate a growing mass of students. Imperfect solutions for an imperfect society. I don’t fully like it, but I can’t ignore it.

Unfortunately, many people are causing an unnecessary either/or conflict between “dealing with scale as it is now” and “fixing the system that caused the scale in the first place.” We can work at both – help education scale now, while pushing for policy and culture change to better support and fund education as a whole.

On top of all of that, MOOCs tend to be “massively” misunderstood (sorry, couldn’t resist that one). Despite what the hype claims, they weren’t created as a means to scale or democratize education. The first MOOCs were really about connectivism, autonomy, learner choices, and self-directed learning. The fact that they had thousands of learners in them was just a thing that happened due to the openness, not an intended feature.

Then the “second wave” of MOOCs came along, and that all changed. A lot of this was due to some unfortunate hype around MOOCs published in national publications that proclaimed some kind of “educational utopia” of the future, where MOOCs would “democratize” education and bring quality online learning to all people.

Most MOOC researchers just scoffed at that idea – and they still do. However, they also couldn’t ignore the fact that MOOCs do bring about scaled education in various ways, even if that was not the intention. So that is where we are now: if you are going to research MOOCs, you have to realize that the context of that research will involve scale and autonomy in some way.

But it seems that the misunderstandings of MOOCs are hard-coded into the discourse now. Take the recent article “Not Even Teacher-Bots Will Save Massive Open Online Courses” by Derek Newton. Of course, open education and large courses existed long before the term “MOOC” was coined… so it is unclear what needs “saving” here, or what it needs to be saved from. But the article is a critique of a study out of the University of Edinburgh (I believe this is the study, even though Newton never links to it for you to read it for yourself) that sought “to increase engagement” by designing and deploying “a teacher-bot (botteacher) in at least one MOOC.” Newton then turns around and says “the idea that a pre-programmed bot could take over some teaching duties is troubling in Blade Runner kind of way.” Right there you have your first problematic switcheroo: “increasing engagement” is not the same as “taking over some teaching duties.” That is like saying that lane departure warning lights on cars are the same as taking over some driving duties. You can’t conflate something that assists with something that takes over. Your car will crash if you think “lane departure warnings” are “self-driving cars.”

But the crux of Newton’s article is that because the “bot-assisted platform pulls in just 423 of 10,145, it’s fair to say there may be an engagement problem…. Botty probably deserves some credit for teaching us, once again, that MOOCs are fatally flawed and that questions about them are no longer serious or open.” Of course, there are fatal flaws in all of our current systems – political, religious, educational, etc. – yet questions about all of those can still be serious and open. So you kind of have to toss out that last part as opinion, not logic.

The bigger issue is that calling 423 people an “engagement problem” is an unfortunate way to look at education. That is still a lot of people, considering most courses at any level can’t engage 30 students. But this criticism comes from the fact that many people still misunderstand what MOOC enrollment means. 10,000 people signing up for a MOOC is not the same as 10,000 people signing up for a typical college course. Colleges advertise to millions of prospective students, who then have to go through a huge process of applications and trials to even get to register for a course. ALL of that is bypassed for a MOOC. You see a course and click to register. Done. If colleges did the same, they would also get 10,000+ signing up for a course. But they would probably only get 50-100 showing up for the first class – far fewer than the first week of most MOOCs.

Make no mistake: college courses would have engagement rates just as bad if they removed the application and enrollment filters on who could sign up. Additionally, the requirement that most students physically relocate would make those engagement rates even worse than MOOCs’ if the entire process were considered.
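
To put rough numbers on that funnel argument: the MOOC figures below are the ones from Newton’s piece, while the college-funnel figures are purely hypothetical, just to illustrate the comparison:

```python
# Back-of-envelope version of the funnel argument. The MOOC numbers come
# from the article discussed above; the college numbers are assumptions.

mooc_signups, mooc_engaged = 10_145, 423
print(f"MOOC: {mooc_engaged / mooc_signups:.1%} of everyone who clicked 'enroll'")  # ~4.2%

open_interest = 10_145  # people who would click an open "enroll" button
post_filters = 75       # assumed survivors of application/admission/relocation/tuition
print(f"College: {post_filters / open_interest:.1%} of the same open pool")  # ~0.7%
```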

Look at it this way: 30 years ago, if someone said “I want to learn History beyond what a book at the library can tell me,” they would have to go through a long and expensive process of applying to various universities, finally (possibly) getting accepted at one, and then moving to where that university was physically located. Then, they would have to pay hundreds or thousands of dollars for that first course. How many tens of thousands of possible students get filtered out of the process because of all of that? With MOOCs, all of that is bypassed. Find a course on History, click to enroll, and you are done.

When we talk about “engagement” in courses, it is typically situated in a traditional context that filters out tens of thousands of people before the course even starts. To then transfer the same terminology to MOOCs is to utilize an inaccurate critique based on concepts rooted in a completely different filtering mechanism.

Unfortunately, these fundamentally flawed misunderstandings of MOOC research are not just one-off rarities. The same author also took a problematic look at a study I helped Aras Bozkurt and Whitney Kilgore with. Just look at the title of Newton’s previous article: “Are Teachers About To Be Replaced By Bots?” Yeah, we didn’t even go that far, and intentionally made sure to stay as far away from saying that as possible.

Some of Newton’s critique of our work is just very weird, like where he says: “Setting aside existential questions such as whether lines of code can search, find, utter, reply or engage discussions.” Well, yes – they can do that. It’s not really an existential question at all. It’s a question of “come sit at a computer with me and I will show you that a bot is doing all of that.” Google has had bots doing this for a long, long time. We have pretty much proven that Russian bots are doing this all over the world.

Then Newton gets into pull quotes, where I think he misunderstands what we meant by the word “fulfill.” For example, it seems Newton misunderstood this quote from our article: “it appears that Botty mainly fulfils the facilitating discourse category of teaching presence.” If you read our quote in context, it is part of the Findings and Discussion section, where we are discussing what the bot actually did. But it is clear from the discussion that we don’t mean that Botty “fully fills” the discourse category, but that what it does qualifies as being in that category. Our point was made in light of “self-directed and self-regulated learners in connectivist learning environments” – a context where learners probably would not engage with the instructor in the first place. In this context, yes, it did seem that Botty was filling in for an important instructor role in a way that satisfies the parameters of that category. Not perfectly, and not in a way that replaces the teacher. It was a context where the teacher wasn’t able to be present due to the realities of where education currently is in society – scaled and less supported.

Newton goes on to say: “What that really means is that these researchers believe that a bot can replace at least one of the three essential functions of teaching in a way that’s better than having a human teacher.”

Sorry, we didn’t say “replace” in an overall context, only “replace” in a specific context that is outside of the teacher’s reach. We also never said “better than having a human teacher.” That part is just a shameful attempt at putting words in our mouths. In fact, you can search the entire article and find we never said the word “better” about anything.

Then Newton goes on to misuse another quote of ours (“new technological advances would not replace teachers just because teachers are problematic or lacking in ability, but would be used to augment and assist teachers”). His response is to say that we think “new technology would not replace teachers just because they are bad but, presumably, for other reasons entirely.”

Sorry, Newton, but did you not read the sentence directly after the one you quoted? We said “The ultimate goal would not be to replace teachers with technology, but to create ways for non-human teachers to work in conjunction with human teachers in ways that remove all ontological hierarchies.”  Not replacing teachers…. working in conjunction. Huge difference.

Newton continues injecting inaccurate ideas into the discussion, such as “Bots are also almost certain to be less expensive than actual teachers too.” Well, actually, they aren’t always less expensive in the long run. Then he tries to connect another quote of ours – about how the lines between bots and teachers might get blurred – as proof that we… think they will cost less? That part just doesn’t make sense.

Newton also did not take the time to understand what we meant by “post-humanist,” as evidenced by this statement of his: “the analysis of Botty was done, by design, through a ‘post-humanist’ lens through which human and computer are viewed as equal, simply an engagement from one thing to another without value assessment.” Contrast his statement with our actual statement on post-humanism: “Bayne posits that educators can essentially explore how to retain the value of teacher presence in ways that are not in opposition to some forms of automation.” Right there we clearly state that humans still maintain value in our study context.

Then Newton pulls his most shameful bait and switch of the whole article at the end: taking one of our “problematic questions” (from a section where we intentionally highlighted problematic questions for the sake of critique) and presenting it as our conclusion: “the role of the human becomes more and more diminished.” Newton then goes on to state: “By human, they mean teacher. And by diminished, they mean irrelevant.”

Sorry, Newton, that is simply not true. Look at the question following soon after that one, where we start with “or” to push back on what our list of problematic questions asks: “Or should AI developers maintain a strong post-humanist angle and create bot-teachers that enhance education while not becoming indistinguishable from humans?” Then, maybe read our conclusion after all of that and the points it makes, like “bot-teachers can possibly be viewed as a learning assistant on the side.”

The whole point of our article was to say: “Don’t replace human teachers with bot teachers. Research how people mistake bots for real people and fix that problem with the bots. Use bots to help in places where teachers can’t reach. But above all, keep the humans at the center of education.”

Anyways, after a long side-tangent about our article, back to the point about misunderstanding MOOCs, and how researchers of MOOCs view them. You can’t evaluate research about a topic – whether MOOCs or bots or post-humanism or anything else – through a lens that fundamentally misunderstands what the researchers were examining in the first place. All of these topics have flaws and concerns, and we need to think critically about them. But we have to do so through the correct lens and contextual understanding, or else we will cause more problems than we solve in the long run.